The limits of philosophy
To take up again the theme of my most recent post: the hallmark of one who has abandoned common sense–whether for probability or philosophy–is a denial of my ability to be certain of any particular truth. The philosopher notices that the senses are fallible, and concludes they cannot be trusted. The probabilist notices the same, and concludes that I should trust them only on the probabilities. John Henry Newman has a more complex take on the problem. My last post looked at the matter of certainty: why it matters. This one will look at its form: what does certainty look like?
In Newman’s metaphor, it looks like calculus:
I consider, then, that the principle of concrete reasoning is parallel to the method of proof which is the foundation of modern mathematical science, as contained in the celebrated lemma with which Newton opens his “Principia.” We know that a regular polygon, inscribed in a circle, its sides being continually diminished, tends to become that circle, as its limit; but it vanishes before it has coincided with the circle, so that its tendency to be the circle, though ever nearer fulfilment, never in fact gets beyond a tendency. In like manner, the conclusion in a real or concrete question is foreseen and predicted rather than actually attained; foreseen in the number and direction of accumulated premisses, which all converge to it, and as the result of their combination, approach it more nearly than any assignable difference, yet do not touch it logically (though only not touching it,) on account of the nature of its subject-matter, and the delicate and implicit character of at least part of the reasonings on which it depends. It is by the strength, variety, or multiplicity of premisses, which are only probable, not by invincible syllogisms,—by objections overcome, by adverse theories neutralized, by difficulties gradually clearing up, by exceptions proving the rule, by un-looked-for correlations found with received truths, by suspense and delay in the process issuing in triumphant reactions,—by all these ways, and many others, it is that the practised and experienced mind is able to make a sure divination that a conclusion is inevitable, of which his lines of reasoning do not actually put him in possession. This is what is meant by a proposition being “as good as proved,” a conclusion as undeniable “as if it were proved,” and by the reasons for it “amounting to a proof,” for a proof is the limit of converging probabilities.
–John Henry Newman, An Essay in Aid of a Grammar of Assent
At first this might sound just like probabilism dressed up. After all, once we formalize probabilities in a Bayesian manner, we can find the probability of an event by summing the weights of independent pieces of evidence: given four pieces of evidence, with evidential “decibel” values of -10, 25, -7, and 15, they sum to 23 “decibels” and give me a probability of about 99.5%–i.e. high enough for most practical purposes. Does Newman’s “as good as proved” mean anything more than this?
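For the curious, the decibel bookkeeping above is just log-odds arithmetic, and a few lines suffice to check it (the evidence values are the hypothetical ones from the paragraph, not anything from Newman):

```python
# The probabilist's "decibel" arithmetic: decibels = 10 * log10(odds),
# so independent pieces of evidence add in decibels.
evidence_db = [-10, 25, -7, 15]        # hypothetical evidential weights
total_db = sum(evidence_db)            # 23 decibels
odds = 10 ** (total_db / 10)           # roughly 200-to-1 odds
probability = odds / (1 + odds)        # about 0.995
print(total_db, probability)
```

The point is only that the formalism is mechanical: the summing, once the weights are assigned, involves no judgment at all.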
I’d say yes: not that it denies the usefulness of such formalization; rather, it addresses a different question entirely. This question has to do with what it is to think within space and time.
Consider how the calculus metaphor actually works. Newman, recall, is talking about limits: for example, “the limit of 1+1/2+1/4+1/8+…+1/2^n as n approaches infinity,” or “the limit of (1-x)/(1-x) as x approaches 1,” or “the limit of 1/x as x approaches 0.” The first has a value of 2, the second of 1, and the third does not exist; and these make a certain intuitive sense: each new term in the first cuts in half the remaining distance to 2; the second has a value of 1 at every input except 1, at which it is undefined; and the third, when graphed, shoots up and down to both infinities near the y-axis.
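The three behaviors can be seen numerically as well as graphically; this is an illustration, not a proof:

```python
# First: the partial sums 1 + 1/2 + ... + 1/2^n crowd up against 2.
partial = sum(1 / 2**k for k in range(0, 21))
print(partial)                  # just short of 2, never equal to it

# Second: (1 - x) / (1 - x) equals 1 at every input except x = 1 itself.
def f(x):
    return (1 - x) / (1 - x)
print(f(0.999999))              # 1.0; f(1) would divide by zero

# Third: 1/x near 0 flies off in both directions, so no limit exists.
def g(x):
    return 1 / x
print(g(1e-9), g(-1e-9))        # huge positive, huge negative
```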
We can define limits, in a completely abstract but mathematically accurate way, as follows: Say there is some action, whose results interest us, but which for whatever reason cannot actually be performed; however, we can perform actions that resemble it in structure, though not necessarily in their result, to an arbitrary degree of precision. If, when we perform a sequence of actions, each in structure resembling the impossible action more and more closely, the corresponding results resemble each other more and more closely, again to an arbitrary degree of precision; then we can talk about the limit of this sequence: the value which this sequence of results approaches, whether or not it ever in fact meets it.
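Newton’s own lemma, which Newman cites, is a perfect instance of this definition, and easy to sketch: the impossible “action” is measuring the circle directly, and the performable approximations are inscribed polygons with ever more sides. (The function name here is my own, not Newman’s or Newton’s.)

```python
import math

def polygon_perimeter(n, r=1.0):
    """Perimeter of a regular n-gon inscribed in a circle of radius r."""
    return n * 2 * r * math.sin(math.pi / n)

# Each polygon resembles the circle more closely in structure, and the
# perimeters crowd toward the circumference 2*pi*r without ever reaching it.
for n in (6, 12, 96, 10_000):
    print(n, polygon_perimeter(n))
```

The sequence of results approaches 2π to any precision we like, yet every polygon “vanishes before it has coincided with the circle,” exactly as Newman says.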
Newman’s calculus metaphor can be taken to invoke this definition literally, not just figuratively. The action in question is that of knowing all the relevant facts about a given issue and having infinite time to think about them; we want to know to what conclusion we would come, were we able to do that. So we consider a sequence of actions: knowing more and more about the issue, thinking longer and longer about it, and asking better and better questions. If, as we do this, our answers to those questions seem to converge on a particular value–if, at first, our conclusions fluctuated wildly, dominated by whatever we had last read, but as we knew and thought more, they settled down and began to approximate a particular conclusion–then we can conclude that the sequence does converge, and put the issue to rest.
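The “settling down” here has a crude computational analogue, if we indulge the fiction that conclusions can be scored as numbers: a Cauchy-style check that recent results stay close to one another. Everything in this sketch–the function, the window, the sample values–is my own invention, offered only to make the shape of the idea visible:

```python
def looks_settled(conclusions, window=5, tol=0.01):
    """True if the last `window` conclusions lie within `tol` of one another."""
    recent = conclusions[-window:]
    return len(recent) == window and max(recent) - min(recent) <= tol

early = [0.9, 0.2, 0.7, 0.1, 0.8]        # dominated by whatever was last read
later = early + [0.503, 0.499, 0.501, 0.500, 0.502]
print(looks_settled(early), looks_settled(later))   # False, then True
```

Note that the check can only ever report that the sequence *looks* convergent so far–which is precisely the gap between mathematics and epistemology that the next paragraph takes up.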
That we put the issue to rest does not guarantee that we are right in doing so. Newman admits that we cannot be certain that we are certain; we may have taken the limit wrongly. In epistemology, unlike in mathematics, we cannot prove that the limit converges, nor on what. It is always possible that, somewhere down the line, we will come to know or think of something that contradicts our conclusion and forces us to investigate further. But this does not mean that epistemic limits fail to converge. Newman takes it as an article of faith that they do: that the cosmos has a transcendent coherence, that the questions we put to it have answers, and that behind both questions and answers lies reality. But then, so does the probabilist. It is difficult to see how such an assumption can be avoided.
Unlike the probabilist’s probabilities, the questions Newman puts to the cosmos are not all yes or no. This is, to my mind, one of the great advantages of his theory: it describes how our thoughts can converge on an answer without assuming that we already had the answer in mind and merely wanted to know whether it was correct. It allows us to begin, not with a question, but with an issue, for which we find the appropriate question as we proceed. In brief, it makes room for a language that goes beyond formulae. The other great advantage is his account of when inquiry can cease: it explains what justifies the decision to leave off inquiry, without appealing to an arbitrary certainty-value (99%? 99.9%?) or to mere pragmatic necessity. We leave off inquiry when it looks to us as if we can tell on what view of things our questions and answers converge.
Not all functions, however, have limits; remember 1/x. Newman seems not to consider the possibility that some issues are illusory: that the questions we put to the cosmos about them may return answers with too little in common to approximate any conclusion–that the more we know, the more confused we may become. This danger becomes even more serious once we allow language to mean more than the calculations we perform with it. If our inquiries never issue in any conclusion, does that not mean that the words we use for them cannot be given coherent meanings?
This kind of worry is the bridge between Newman and the later Wittgenstein, who read the Grammar of Assent with great interest and complete revulsion.