
Analogy like syllepsis

July 21, 2014

Thomas Aquinas argues that everything true we can say about God, we can say only analogously. I’ve noticed that people have trouble telling the difference between this claim and a much stronger one, namely, that everything we can say about God is, ultimately, false, even if useful for devotional purposes. Such an apophatic attitude tends to raise the hackles of more down-to-earth philosophers: if there’s something we can’t talk about, why talk about it? And these philosophers extend their disapprobation to Thomas as well. This seems to me unfair, for his position is really quite different from a simple quietism.

But I can also see why people are confused. They doubt Thomas’ answer, not to a question about God, but to a question about talk: of what philosophical use is analogical language?

*

As we use the word nowadays, an analogy is a ratio, a relation of one relation to another: HAND : PALM :: FOOT : SOLE. We express the mapping with words like “like”: “Hands are to palms as feet are to soles”; or, “Palms are like soles the way hands are like feet.” Such comparisons often feel informative. But what do they really tell us?

If analogies tell us that two things are alike, we might think that they let us draw conclusions about one based on what we know about the other. For example, when we realize that hands have fingers as well as palms, we can ask: HAND : PALM : FINGER :: FOOT : SOLE : ___? And we fill the blank in with TOE. But what about HAND : PALM : FIST :: FOOT : SOLE : ___? Nothing can fill in this blank; the analogy breaks down. So the analogy does not, in fact, let us draw any conclusions. It merely tells us that two things are in a certain respect alike, and prompts us to ask whether or not they are alike in some other way.

These kinds of analogies are obviously useful for thinking; they help us come up with interesting hypotheses, for example, that the foot will have a feature corresponding to the hand’s finger or fist. But it seems as if they can’t do argumentative work. If someone denies, for example, that toes are like fingers, we might bring to his attention the analogy HAND : PALM :: FOOT : SOLE, and then ask if we cannot append … : FINGER :: … : TOE. Perhaps he will say yes. But he may say no, and pointing to the analogy is not enough to prove him wrong.

What would it even mean for him to be wrong? Suppose he does say no. If we ask why, he may reply: fingers are opposable, and toes are not. We of course agree that … : THUMB :: … : BIG TOE is not valid, or, at least, seems fishy. But we still want to say that fingers are like toes, and he still does not. What exactly are we disagreeing about? It seems that we’re disagreeing over whether or not to use words like “like” to connect them; it’s not clear that we disagree about anything else.

This kind of analogy is just another way of putting similes and metaphors. “Achilles was a lion on the battlefield” is the same as ACHILLES : BATTLEFIELD :: LION : HUNT. Metaphorical language is essentially evocative, not meaningful; it prompts thoughts, but does not communicate thoughts in a way that allows disagreement. We cannot appeal to a metaphor in argument, we can only call it to our interlocutor’s attention; and we cannot disagree with a metaphor, we can only call it unhelpful.

*

This is the view of analogy at which we arrive when we see it as essentially a ratio, as A : B :: C : D. But in doing so, we lose sight of the analogical use of a word in which Thomas is so interested.

Analogical use stands between univocal and equivocal. It’s easy to see what is meant by the latter two. Take the sentences “I’m a fan of the Texas Rangers” and “You’re a fan of the Chicago Cubs.” These uses of “fan” are univocal because they mean it in the same way. We could combine them as follows: “We’re both fans: me of the Texas Rangers, you of the Chicago Cubs.” Take, on the other hand, “I took to the ballpark a fan of the Boston Red Sox” and “I took to the ballpark a fan to cool off with.” These uses of “fan” are equivocal because they mean it in unrelated ways. There is no valid way to combine them; the sentence “I took to the ballpark two fans, one of the Boston Red Sox and one to cool off with” is clear nonsense.

Now consider the sentences “His body was healthy,” “His food was healthy,” and “His saliva was healthy.” These uses of “healthy” are analogical because they mean it in different, but related ways. A body is healthy when there’s nothing wrong with it; food is healthy when it makes its consumer healthy; and saliva is healthy when it indicates that its salivator is healthy. The first use is primary; the second relates to the first causally; the third relates to the first symptomatically. Relations of this kind constitute what is termed an analogy of attribution.

Other kinds of different, but related uses exist in which there are not some uses whose meanings refer to other uses, but rather a commonality between all of the various meanings; these are called analogies of proportionality. I can “give salt,” “give an idea,” and “give counsel,” and these are different kinds of giving, but they have something in common. We might say: to give something to someone is to make it now his; sometimes giving results in my no longer having, as with salt; sometimes giving results in us now both having, as with an idea; and sometimes I can give that which I never have myself, as with counsel. So we can draw a proportion between the things we give: LOSE : SALT :: RETAIN : AN IDEA :: NEVER HAD : COUNSEL.

What seems to me significant about analogies, when described this way, is that sentences like “His body, food, and saliva were all healthy” and “Give neither counsel nor salt till you are asked for it” sound strange, but still make sense. Both are examples of syllepsis, a figure by which a single word is made to connect to two or more other words in the sentence, applying to them in different senses. A few more:

  • She made no reply, up her mind, and a dash for the door.
  • Eggs and oaths are soon broken.
  • She went straight home, in a flood of tears and a sedan-chair.
  • God and his creation both exist, are true, are good, ….

A thesis: Analogical language makes sylleptic language possible.

A corollary: To reject analogical language is to deny that sylleptic language is grammatically permissible.

*

Why should we permit syllepsis in our grammar?

Syllepsis is a special form of zeugma, and zeugma is, essentially, an application of the distributive property of language: A*B+A*C=A*(B+C). If we reject zeugma entirely, we can never again say “I went to Texas and California,” or even “I went to Texas and went to California”; the only way to express this thought would be “I went to Texas and I went to California.” This would be far too restrictive; we must allow some distribution. But we should reject equivocal zeugmas, which are, essentially, attempts to distribute across words that are not in fact the same–like concluding A*B+E*C=A*(B+C) just because A and E sound alike.

To permit syllepsis, then, is to say that the two uses of, for example, “healthy,” in “healthy body” and “healthy food,” are the same word, not just similar words, even though the uses of the word are similar but not the same. This at first glance looks strange. Why would an ideal language not differentiate between these, for example, by using “healthy” for one and “healthful” for the other, such that we could recognize the similarity from their common stem, and yet recognize their dissimilarity from their differing suffixes?

To be sure, appending differing suffixes is often worthwhile, helpfully disambiguating the language. But we should not imagine that it would be possible in all cases. This would assume that we can easily differentiate between the various meanings of a word and assign a separate suffix to each; but to the contrary, nothing assures that these meanings are clearly separated, or finite in number, or organized along any particular axis. There is no reason to think that any system of generated-on-the-fly suffixes, or of modulated pitch, speed, volume, etc. (or size, angle, color, etc.), would suffice to communicate exhaustively the shades of meaning contained in our words. No notational system, however complex, can make it possible to mechanically extract the meaning of a sentence from its representation.

Syllepsis is not, I would say, an abuse of zeugma, any more than analogy is an abuse of the fact that we can use a word more than once. They yoke together disparate meanings of a word in a way that makes us uncomfortable, and give us the feeling that, perhaps, this one word ought, in fact, to be two different ones; but if we were to split words whenever we had such a feeling, our language would grow more complex without becoming any less ambiguous.

*

There remains, to be sure, a further question: why, in the particular case of God, would we not be better off eliminating the analogical language? We could say, for example, that “Creation exists, while God exists*.” This would, in the case of God, seem especially helpful: it would allow us to say “God exists* but does not exist,” rather than affirming “God does and does not exist,” an explosive contradiction–or at least it would explode, if it were permissible to mechanically draw inferences from a sentence’s representation without considering what it actually means.

There are, I think, arguments to be made in favor of retaining the same word. But more importantly, to even ask this question is to stop debating the philosophical question whether God does and does not exist, and to begin debating the practical question whether this sentence is the best way to express this fact. The stakes are certainly much lower.

The forms of late modernity

July 14, 2014

I’ve been in England for a seminar for the last week, so this week’s post will consist, not of any substantive writing of my own, but of a handful of links to recent articles I find interesting. The theme: the concept of personhood, and its relevance to the difficulties of what English professors like to call “Late Modernity,” and which I might prefer to call “what happens when we invent new ways of imposing form on matter, and must deal with how the facts of the matter sometimes resist the reformation.”

First up, The View from Hell, in “Why People Used to Have Children,” answers the question thus:

It’s hard for us even to imagine, but children used to be valuable – they used to be much more like slaves or farm animals, which are both very valuable. They were also treated much more like slaves, with patriarchs (at least) maintaining distance from children, as Caldwell notes. Consider the history of the study, compared to the lowly and shameful “man cave,” for a sense of the old style of family relations. A wife was not only a valuable RealDoll, but also a valuable slave factory. Making a new “person” – on which the state has claims, but you do not, and toward whom you have (class-dependent) obligations – is a much less economically attractive proposition than making a new slave.

The point is that “personhood,” while not “fictional,” is nevertheless an artificial category; and moving from a world of humans to a world of persons changes a number of things, some for the better–it’s surely an improvement that we no longer keep children as slaves–some not. But it doesn’t just complicate things on a practical level, it completely changes what it’s like to be a self at all; in Charles Taylor’s terms, it becomes buffered rather than porous. Alan Jacobs in “Fantasy and the Buffered Self” asks:

Might it not be possible to experience the benefits, while avoiding the costs, of both the porous and the buffered self? I want to argue here that it is precisely this desire that accounts for the rise to cultural prominence, in late modernity, of the artistic genre of fantasy. Fantasy — in books, films, television shows, and indeed in all imaginable media — is an instrument by which the late modern self strives to avail itself of the unpredictable excitements of the porous self while retaining its protective buffers. Fantasy, in most of its recent forms, may best be understood as a technologically enabled, and therefore safe, simulacrum of the pre-modern porous self.

There is, I think, a lot to this. And I would compare it to how our acceptance of artificial online territories–Facebook, Instagram, Twitter, etc.–is a response to the complete ungroundedness of much of our everyday lives. The natural world is full of dangers, and so we create systems to protect us from it; but our life within these systems is completely abstract, taking place in legalese and economese; and so we accept the creation of new, safe, worlds, which excite in us a kind of nostalgia, and in which we now conduct our lives. The problem is not that the new world is new. Rather, it is that we must cede ownership of the new world to the ones who create it. Hence jay in “Colonising the Clouds” argues that, if we treat information like territory, then Facebook, Google, Amazon, etc.–what he calls “stacks”–are more like states than like corporations:

So the idea that Facebook has territory, is like imagining that when I log in to Facebook, I cross some kind of psychogeographical border and am now on Facebook’s turf

There are, however, important dissimilarities between real and simulated territory. The real need only be defended; its simulacrum, on the other hand, will simply vanish if work is not put into its constant maintenance. The work comes in many forms: generating electricity, laying cables, calculating prime numbers, generating user feedback.

Consider, for example, this newly invented app. The goal: to algorithmically determine the most aesthetically pleasing route from A to B. But the very idea of an algorithmic, i.e. effortless, aesthetics contains a hidden contradiction. The app can only create a simulacrum of an aesthetic judgment, by means of a statistical survey; and it does so by constantly forcing on the user the question: Do I agree with the algorithm’s decision? If so, is it only because everyone else does? If not, is it only because my tastes are perverse? What is the difference between what is good and what the algorithm says? In other words, the forming of the algorithm–

To work out whether the routes chosen by the algorithm are really more beautiful, Quercia and co recruited 30 people who live in London and are familiar with the area, to assess the recommended paths. And indeed, they agreed that the routes chosen by the algorithm were more beautiful than the shortest routes.

–is not a prelude to the algorithm’s use, but rather a description of it.

“But,” you say, “that’s true of all aesthetic judgment; just replace ‘the algorithm’ with ‘other people.’” Yes: but unlike algorithms, people I can talk to. When I can’t talk to someone else about whose reasons are better, I cannot, as the algorithmic method imagines, give feedback unconsciously, letting the A/B testing make my life effortless. Rather, I’m forced to talk to myself, to scrutinize my own judgment to see whether it was authentic. But discussing reasons, not achieving “authenticity,” is what gives aesthetic experience its real value.

The algorithm replaces the giving of reasons with the gathering of evidences. From the algorithm’s point of view, we cease to be persons, and become slaves. Our task: to walk the various routes and report back on the gap between reality and what the algorithm says of it. Our pay: aesthetic “pleasure,” and its accompanying anxiety, without the happiness of engaging in the aesthetic agon. The algorithm’s goal: dominion.

Much sharper outlines than now

July 7, 2014

[For context, read this post.]

Jan Van Eyck – The Annunciation

Johan Huizinga (pronounced “Housing-ha”) published his Herfsttij der Middeleeuwen (Autumn (or, Waning; literally Autumn-tide) of the Middle Ages) in 1919, and it was thought behind the times: too impressionistic, insufficiently scientific, for serious historical scholarship. Popular among literary types, it was an inspiration for the emergence of the field of “cultural studies”; historians paid attention to it when they felt they had no choice, then forgot about it when it became in fact behind the times–that is, when historians stopped paying attention to anything written more than thirty years ago. I found a used copy in a bookstore last year and read it immediately; I soon decided it would fit into the history section of my list as well as just about anything else.

Now, the “modern history” section of the Fundamentals exam always gives people trouble. There are the usual Enlightenment suspects–Gibbon, Carlyle, Michelet, etc.–but their books are very long, and very much products of their time. In any case, by the time you get to the late 19th century, such grand unifying endeavors have for the most part been replaced by historical scholarship. We might define “scholarly” as the opposite of “fundamental”: a scholarly work is one which later scholarship can render irrelevant. I don’t know that such scholarly amnesia is necessarily a bad thing, but it does make locating a fundamental work of modern history difficult.

*

I think Huizinga’s Autumn makes the cut for three basic reasons: one historiographical, one psychological, and one historical.

First: While primarily a work of history focused on the culture of Burgundy (roughly north-eastern France) in the late Middle Ages, Autumn can also be read as an historiographical manifesto. At the time Huizinga wrote, historical research was dominated by study of legal charters and merchant records, in an attempt to uncover the hidden truth of how the society of the era “really” “worked”. Huizinga insisted on the importance of the chronicles and arts and literature of the time: that is, of its self-understanding and self-expression. Autumn thus argues against “objective” history and in favor of entering into an era’s Weltanschauung–not naively taking its inhabitants at their word, but rather seeking to understand what led them to value, believe, desire what and how they did. This is a timeless question about historiographic methodology, and Huizinga argues his position well. Such attempts to psychoanalyze the past have their dangers, but it seems right to me to think that, if we cannot describe what it was like for a 14th-century Burgundian to go on a pilgrimage, hear an itinerant preacher, attend a tournament, read the Roman de la Rose, then we really know very little about him.

*

Second: The approach Huizinga takes to such historical psychoanalysis (a phrase which need not invoke Freud) remains even today both compelling and problematic. The book opens with these words, from which the title of this post has been taken:

When the world was half a thousand years younger all events had much sharper outlines than now. The distance between sadness and joy, between good and bad fortune, seemed to be much greater than for us; every experience had that degree of directness and absoluteness that joy and sadness still have in the mind of a child. (p. 1)

And so it sounds, at first, as if this will be yet another narrative of mankind’s journey from childhood, Medieval superstition, through adolescence, Renaissance skepticism, to adulthood, Enlightenment secular humanism. But in fact, Huizinga says, the late Middle Ages are not young, but old, very old, and nearly dead; as the first chapter concludes:

It is an evil world. The fires of hatred and violence burn fiercely. Evil is powerful, the devil covers a darkened earth with his black wings. And soon the end of the world is expected. But mankind does not repent, the church struggles, and the preachers and poets warn and lament in vain. (p. 29)

The Middle Ages were what they were, not because they had yet to arrive at civilization, but because the forms of their civilization were old, worn-out, empty husks, letters without spirit. The possibilities of the forms of life had been exhausted, nothing new could be said, and all that was left was to repeat ad nauseam what others had said before, to mindlessly recite the prescribed formulae for times of sadness, times of joy, good fortune, or bad. The hallmarks of the era were repetition, exaggeration, multiplication of symbols, mechanical irony. There could be no nuance; it was impossible to mean what one said. The problem could only be solved by piecing together a new form of life–no less artificial, but at least younger, more flexible–out of the fragments of the old. (Huizinga’s account of the end of the Middle Ages is sarcastic and difficult to parse, but he seems suspicious of the idea that Renaissance humanism really changed much of anything; one wonders what he’d think of a narrative focused not on letter-writing circles, but on the printing press.)

The broadly-speaking-psychoanalytic innovation here is the concept of “form of life.” To be sure, it picks out something real; the conventions of social life are in many ways akin to the conventions of poetry or painting. And Autumn‘s account of the aging of the medieval mind is subtle and sympathetic. And yet, there’s a danger here, a trap Huizinga may not fall into, but which he leads us towards. Do we really want to say that the medieval mind simply could not speak with nuance? Some languages may be more difficult to learn than others, some more difficult to use well, but are there really linguistic contexts in which fully honest thoughts cannot be spoken? Or even contexts in which any thought whatsoever can fail to find expression?

Literary and “cultural” scholars, of course, love the idea that there are things we can think that they could not, and vice versa, though somehow we’re always able to say what it is that we can’t. I find this tendency troubling. To be sure, there are things they didn’t say, and thoughts they would have found prima facie implausible, and ways of feeling they found natural that we find bizarre. But none of this makes them artificial and us natural (it’s all second nature), nor does it make them unable to understand us as we understand them. Intelligibility might be difficult, but it’s either mutual, or absent entirely. And the thought that the past is unintelligible I find even less plausible than the thought that it could not have understood us.

*

Third: Despite my concerns regarding Huizinga’s psychology, I find the interpretation Autumn offers of Burgundian art and literature quite compelling. The book originated, Huizinga said, in an effort to understand what kind of a culture could have produced a painter like Jan Van Eyck. This explains both the book’s limited scope–it examines only the 14th & 15th centuries, only Burgundy, and only, for the most part, the aristocracy–and how it might be responsibly extended. Where both the social phenomena and the artistic production appear to be of a similar cast to those of medieval Burgundy, there analogous interpretive arguments to those made about van Eyck will likely obtain. And where literature and art do not resemble those of medieval Burgundy–as, in my estimation (and here I reveal an ulterior motive for my interest in Autumn: what does it say about Chaucer and Langland and the Pearl-poet?), those of medieval England do not–there a correspondingly different analysis of society will be required.

Huizinga’s analysis of Burgundian art is difficult to summarize; to give a taste of it, I’ll close with a few quotations from the ante- and penultimate chapters of Autumn, in which he offers close readings of several poems and paintings, including Van Eyck’s Annunciation, the painting featured above. I find particularly fascinating the account of the angel’s clothing.

The painting of the fifteenth century is located in the sphere where the extremes of the mystical and the crudely earthy easily touch one another. The faith that speaks here is so overt that no earthly depiction is too sensuous or too extreme for it. Van Eyck is capable of draping his angels and divine figures in the heavy ponderousness of stiff robes dripping with gold and precious stones; to point upwards he does not yet need the fluttering tips of garments and fidgety legs of the Baroque. (p. 317-8)

Literature and art of the fifteenth century possess both parts of that general characteristic that we have already spoken of as being essential for the medieval mind: the full elaboration of all details, the tendency not to leave any thought unexpressed, no matter what idea urges itself on the mind, so that eventually everything could be turned into images as distinctly visible and conceptualized as possible. (p. 333)

It seems as if Van Eyck [in the Annunciation] intended to demonstrate the complete virtuosity, shrinking away from nothing, of a master who can do anything, and dares everything. None of his works are simultaneously more primitive, more hieratic, and more contrived. The angel does not enter with his message into the intimacy of a dwelling chamber (the scene that the entire genre of domestic painting took as its point of departure), but, as was prescribed by the code of forms of the older art, into a church. Both figures lack in pose and facial expression the gentle sensitivity displayed in the depiction of the Annunciation on the outer side of the altar in Ghent. The angel greets Mary with a formal nod, not, as in Ghent, with a lily; he does not wear a small diadem, but is depicted with scepter and splendid crown; and he has a rigid Aegean-smile on his face. In the glowing splendor of the colors of his garments, the luster of his pearls, the gold and precious stones, he excels all the other angelic figures painted by Van Eyck. The dress is green and gold, the brocade coat dark red and gold, and his wings are decked with peacock feathers. Mary’s book, the pillow on the chair, everything is again detailed with the greatest of care. In the church buildings the details are fitted with anecdotal elaborations. The tiles show the signs of the zodiac, of which five are visible, and in addition three scenes from the story of Samson and one from the life of David. [...]

And again the miracle that in such an amassing of elaborate details [...] the unity of key and mood is not lost! [... T]he most mysterious darkness of the high vaulted church veils the entire scene in such a mist of sobriety and mysterium that it is difficult for the eye to detect all the anecdotal details. (p. 335-6)

Philosophical temperament

June 30, 2014

In this post I’d like to follow up once more on a theme I’ve been exploring recently, this time to extend John Henry Newman’s calculus metaphor from a single line of inquiry to the polyphony of language. But much of this post, while in the spirit of Newman, will go beyond what I imagine Newman himself would have agreed to.

*

Now, Newman goes quite far, I think, along the road to the major insights of 20th century philosophy of language–to the recognition, made by thinkers as diverse as Ludwig Wittgenstein and Mikhail Bakhtin, that we also have only indirect and limited access to what our words mean; that our words mean less precisely than we imagine, and depend for their meaning on the future as well as the past and present. Newman knows that the things we say about reality only approximate closer and closer to that reality, and that only time can tell how close any particular statement was. But he pays insufficient attention to how our words mean, not absolutely, but relative to the language in which they are spoken. There are moments in the Grammar where his reflections on language are quite insightful–I’m thinking, particularly, of his account of polysemy, and his excursus on the interrelated doctrines of the Trinity–but they’re not put together into any coherent theory.

More often, Newman looks, not at the difficulty of meaning precisely, but at the difficulty of converging on a given truth accurately. His account of language tends to rest at the simple division of thought into “real” and “notional”: either we have something concrete in mind, or we don’t, and it’s just a useful but potentially misleading abstraction. This approach gives us little help understanding how our abstractions work: how they fit together, either provisionally, or in such a way that (as Newman seems to think) we can pass back from abstraction to reality. If the modernist philosophy of language is right, this division is arbitrary, and so is the problem it raises. We need to see, not how we move back and forth between concrete precision and abstract vagueness, but how it is that we use language–how we give voice to many different thoughts–when we are incapable of being perfectly precise.

Put differently: we must understand, not just how we can reach a secure position on a single issue, but how we can hold together our positions on many issues when no issue can be definitely resolved and all of the issues interrelate with one another. We must look, not just at the (mathematical) limits of certain lines of inquiry, but at the ratios of lines of inquiry with one another. The relevant metaphor is no longer calculus, but harmonics. Our goal is no longer to step back from time to see where the lines converge; there are no longer lines to follow. Rather, our goal is to find how to move in time between the issues that concern us without stumbling into dissonance.

*

Naive philosophy of language, we might say, resembles naive Pythagorean harmonics, also called just temperament: each note, that is, each meaning, stands in a simple, definite proportion to every other. The tonic to the octave is 1:2; the tonic to the dominant, 2:3; the tonic to the fourth, 3:4. We know exactly what each statement means relative to each other statement, and can easily specify it.

Such harmonics offer a good account of what we do when we sing, but it’s a bad idea to just-temper a piano. Pick a note as the basis. Go up twelve perfect fifths, and you’ll be on the same note as if you ascended seven octaves. But the latter, naively, should have a pitch ratio to the basis of (1:2)^7, that is, 1:128; the former, (2:3)^12, that is, 4096:531441, or about 1:129.746. A small difference, but one big enough to cause real problems. If you decide to tune each note according to the simplest pitch ratio available, the piano will sound fine–if you play with the 1:1 note as the tonic. Play in any other key, and it’ll be out of tune.
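A minimal numerical sketch of that mismatch (my own illustration, in Python; the standard name for the leftover gap, the “Pythagorean comma,” is not used above):

    from fractions import Fraction

    # Twelve pure fifths (the 2:3 above, written here as the frequency multiplier 3/2)
    # stacked up, versus seven pure octaves (1:2, i.e. a multiplier of 2).
    twelve_fifths = Fraction(3, 2) ** 12   # 531441/4096
    seven_octaves = Fraction(2, 1) ** 7    # 128

    print(float(twelve_fifths))                   # 129.746..., the "about 1:129.746" above
    print(float(twelve_fifths / seven_octaves))   # 1.0136..., the leftover gap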

The usual solution, called equal temperament, is to tune the piano such that all semitones have the same ratio: since an octave has a ratio of 1:2, and there are twelve semitones in an octave, we give each semitone a ratio of 1:2^(1/12), that is, about 1:1.0595. But note this “about.” Equal-tempering is, in fact, impossible; it is in a mathematical sense irrational, and so can only be determined through an evaluation of a limit. We can approximate the desired ratio to an arbitrary degree of precision, but cannot actually reach it.
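And a sketch of that “about” (again my own illustration): bisection narrows in on the semitone ratio r with r^12 = 2 to whatever precision we ask, without ever reaching it exactly.

    # Approximate the equal-tempered semitone r, where r**12 == 2, by bisection.
    # The true value is irrational; each step only tightens the interval around it.
    lo, hi = 1.0, 2.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if mid ** 12 < 2:
            lo = mid
        else:
            hi = mid

    print(lo)  # about 1.059463, the "about 1:1.0595" above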

To return from metaphor-land: when we begin using the instrument of writing, it becomes necessary to keep our language in tune. Under just-tempering, only some of our concepts make sense together. There are some philosophical chords that must not be thought. Under equal-tempering, on the other hand, we can describe, but can never precisely specify, how our various concepts relate to one another. As time goes on we are ever progressing closer to perfect sense, without ever arriving.

*

Most modern philosophers, I think, imagine themselves to be questing after something like equal temperament. It’s an admirable goal, though an impossible one. But I’m not convinced that it should be ours. To take back up the metaphor, many options remain for how to tune an instrument; I’d like to explore two, each of which has its own particular application.

In the first, called well temperament (as in the “well-tempered clavier”), all ratios are rational, none are simple, and some approximate simplicity better than others. Well-tempering can be thought of as a compromise between just and equal: it is achievable, and sounds more right than equal-tempering in most keys, but sounds worse in others, and always worse than a perfect just-tempering. Well-tempering, however, preceded equal, and does not even imagine its possibility. It seeks, not perfect equality, but to open up options. It allows us to play multiple keys on a single instrument while giving each key its own “color.”

The second I would discuss is not a kind of temperament, but the recognition that we need only temper instruments that we tune in advance. With stringed instruments, like the violin, we need only tune the four strings according to 2:3 ratios. The rest is determined by the placement of the musician’s fingers. When he would play a perfect octave (1:2) above a string, he puts his finger 1/4th of the way down the string above it, for a ratio of (2:3)x(3:4)=1:2. When a perfect fourth (3:4) below a string, he puts his finger 1/9th of the way down the string below it, for a ratio of (3:2)x(8:9)=4:3. When he wants a minor third above that, he puts his finger 7/27ths of the way down the string, because (8:9)x(5:6)=20:27. That 1/4 and 7/27 are different numbers does not matter to him.
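The arithmetic of the preceding paragraph, spelled out (my own sketch; the fractions are just the string-length ratios given above):

    from fractions import Fraction

    def finger_position(sounding_length):
        # Fraction of the string between the nut and the finger,
        # given the fraction of the string left sounding.
        return 1 - sounding_length

    # Octave above the lower string, played on the string tuned a fifth (2:3) higher:
    print(finger_position(Fraction(3, 4)))                    # 1/4

    # Whole tone above the open string (a fourth below the string above it):
    print(finger_position(Fraction(8, 9)))                    # 1/9

    # Minor third (5:6) above that whole tone:
    print(finger_position(Fraction(8, 9) * Fraction(5, 6)))   # 7/27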

The work of well-tempering resembles the work of a dictionary. It gives each word its meaning, and the words it takes care to coordinate do work well together. But inevitably, there will be words that can be used together in a way that the dictionary cannot account for. The dictionary tries to capture the color of most different branches of discourse, especially the most important ones, but always leaves some out.

Violin tuning, on the other hand, resembles nothing so much as a recognition that meanings change over time; or, more accurately, that it is not words, nor sentences, but statements that have meaning. To mean one’s words like a violin, rather than a piano, is to write while giving up on the idea of fixing one’s language once and for all. It is to cease worrying that a word in one place might have a different tone than in another, so long as the progression of one’s meanings can be followed. It is to be open to philosophical vibrato.

The limits of philosophy

June 23, 2014

To take up again the theme of my most recent post: the hallmark of one who has abandoned common sense–whether for probability or philosophy–is a denial of my ability to be certain of any particular truth. The philosopher notices that the senses are fallible, and concludes they cannot be trusted. The probabilist notices the same, and concludes that I should trust them only on the probabilities. John Henry Newman has a more complex take on the problem. My last post looked at the matter of certainty, at why it matters. This one will look at its form: what does certainty look like?

In Newman’s metaphor, it looks like calculus:

I consider, then, that the principle of concrete reasoning is parallel to the method of proof which is the foundation of modern mathematical science, as contained in the celebrated lemma with which Newton opens his “Principia.” We know that a regular polygon, inscribed in a circle, its sides being continually diminished, tends to become that circle, as its limit; but it vanishes before it has coincided with the circle, so that its tendency to be the circle, though ever nearer fulfilment, never in fact gets beyond a tendency. In like manner, the conclusion in a real or concrete question is foreseen and predicted rather than actually attained; foreseen in the number and direction of accumulated premisses, which all converge to it, and as the result of their combination, approach it more nearly than any assignable difference, yet do not touch it logically (though only not touching it,) on account of the nature of its subject-matter, and the delicate and implicit character of at least part of the reasonings on which it depends. It is by the strength, variety, or multiplicity of premisses, which are only probable, not by invincible syllogisms,—by objections overcome, by adverse theories neutralized, by difficulties gradually clearing up, by exceptions proving the rule, by un-looked-for correlations found with received truths, by suspense and delay in the process issuing in triumphant reactions,—by all these ways, and many others, it is that the practised and experienced mind is able to make a sure divination that a conclusion is inevitable, of which his lines of reasoning do not actually put him in possession. This is what is meant by a proposition being “as good as proved,” a conclusion as undeniable “as if it were proved,” and by the reasons for it “amounting to a proof,” for a proof is the limit of converging probabilities.

–John Henry Newman, An Essay in Aid of a Grammar of Assent

*

At first this might sound just like probabilism dressed up. After all, once we formalize probabilities in a Bayesian manner, we can find the probability of an event by summing across independent evidences; given four pieces of evidence with evidential “decibel” values of -10, 25, -7, and 15, they sum to 23 “decibels” and give me a probability of about 99.5%–i.e. high enough for most practical purposes. Does Newman’s “as good as proved” mean anything more than this?
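For concreteness, the arithmetic behind that figure (my own sketch, using the standard convention that a “decibel” of evidence is ten times the base-10 logarithm of the odds):

    def probability_from_decibels(evidences):
        # Sum independent log-odds "decibel" evidences, then convert odds to a probability.
        total_db = sum(evidences)        # -10 + 25 - 7 + 15 = 23
        odds = 10 ** (total_db / 10)     # about 199.5 : 1
        return odds / (1 + odds)

    print(probability_from_decibels([-10, 25, -7, 15]))  # about 0.995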

I’d say yes: not that it denies the usefulness of such formalization; rather, it addresses a different question entirely. This question has to do with what it is to think within space and time.

Consider how the calculus metaphor actually works. Newman, recall, is talking about limits: for example, “the limit of 1+1/2+1/4+1/8…+1/2^n as n approaches infinity,” or “the limit of (1-x)/(1-x) as x approaches 1,” or “the limit of 1/x, as x approaches 0.” The first has a value of 2, the second of 1, and the third does not exist; and these make a certain intuitive sense, since each new term in the first cuts in half the remaining distance to 2; the second has a value of 1 at every input except 1, at which it is undefined; and the third, when graphed, shoots up and down to both infinities near the y-axis.
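A small numerical illustration of the first and third examples (mine, not Newman’s):

    # Partial sums of 1 + 1/2 + 1/4 + ...: each term halves the remaining distance to 2.
    total = 0.0
    for n in range(10):
        total += 1 / 2 ** n
        print(total)    # 1.0, 1.5, 1.75, ..., 1.998046875: ever nearer 2, never reaching it

    # 1/x as x approaches 0 has no limit: the values grow without bound.
    for x in (0.1, 0.01, 0.001):
        print(1 / x)    # 10.0, 100.0, 1000.0, ...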

We can define limits, in a completely abstract but mathematically accurate way, as follows: Say there is some action, whose results interest us, but which for whatever reason cannot actually be performed; however, we can perform actions that resemble it in structure, though not necessarily in their result, to an arbitrary degree of precision. If, when we perform a sequence of actions, each in structure resembling the impossible action more and more closely, the corresponding results resemble each other more and more closely, again to an arbitrary degree of precision; then we can talk about the limit of this sequence: the value which this sequence of results approaches, whether or not it ever in fact meets it.

*

Newman’s calculus metaphor can be taken to invoke this definition literally, not just figuratively. The action in question is that of knowing all relevant facts about a given issue, and having infinite time to think about them; we want to know to what conclusion we would come, were we able to do that. So we consider a sequence of actions: knowing more and more about a given issue, and thinking longer and longer about it, and asking better and better questions. If, as we do this, our answers to those questions seem to converge on a particular value; if, at first, our conclusions fluctuated wildly, dominated by whatever we had last read, but as we knew and thought more and longer, they settled down and began to approximate a particular conclusion; then we can conclude that the sequence does converge, and put the issue to rest.

That we put the issue to rest does not guarantee that we are right in doing so. Newman admits that we cannot be certain that we are certain; we may have taken the limit wrongly. In epistemology, unlike in mathematics, we cannot prove that, or on what, the limit converges. It is always possible that, somewhere down the line, we will come to know or think of something that contradicts our conclusion, and forces us to investigate further. But this does not mean that epistemic limits fail to converge. Newman takes it as an article of faith that they do: that the cosmos has a transcendent coherence, that the questions we put to it have answers, and that behind both questions and answers lies reality. But then, so does the probabilist. It is difficult to see how such an assumption can be avoided.

Unlike the probabilist’s probabilities, the questions Newman puts to the cosmos are not all yes or no. This is, to my mind, one of the great advantages of his theory: it describes how our thoughts can converge on an answer, without assuming that we already had the answer in mind, and just wanted to know whether or not it was correct. It allows us to begin, not with a question, but with an issue, for which we find the appropriate question as we proceed. In brief, it makes room for a language that goes beyond formulae. The other great advantage is his account of when inquiry can cease: it explains what justifies the decision to leave off inquiry, without appealing to an arbitrary certainty-value (99%? 99.9%?) or to mere pragmatic necessity. We leave off inquiry when it looks to us as if we can tell to what view of things our questions and answers converge.

*

Not all functions, however, have limits; remember 1/x. Newman seems not to consider the possibility that some issues are illusory: that the questions we put to the cosmos about it return answers with not enough in common to approximate a conclusion: that the more we know, the more confused we may become. This danger becomes even more serious when we allow language to mean more than the calculations we perform with it. If our inquiries do not issue in any conclusions, does that not mean that the words we use for them cannot be given coherent meanings?

This kind of worry is the bridge between Newman and the later Wittgenstein, who read the Grammar of Assent with great interest and complete revulsion.

Dead certainties

June 16, 2014

Last November I took an interest in the difference between philosophical truth and probabilistic certainty. Reading John Henry Newman’s Grammar of Assent has recalled me to this theme, and to the problem of how the common-sense man (and the Christian) can find a middle way between the probabilist and the philosopher.

*

Socrates, in the Phaedo, says that to practice philosophy is to prepare for death. This means, most obviously, that the philosopher seeks freedom from worldly attachments and entrance into eternity. But it means, also, that he seeks a truth that he would die for. Athens gives Socrates the option to live, and even to remain in Athens, if he will just stop stirring up trouble. He refuses because he would rather die than cease proclaiming “Thou must know thyself.”

The philosopher seeks a truth that must be spoken–a truth about which it would be better to die than to remain silent. Such a truth is needed because only such a truth can be held with complete sincerity. After all, I cannot lessen the strength of my assertion by speaking less loudly. If I am unwilling to shout it from the rooftop, how can I whisper it in my innermost being? As Newman insists, assent is assent: it does not come in degrees.

So take the most obvious truth imaginable: “This is a table,” said of the table before me. Would I insist on it with a gun to my head? Probably not: I know my senses are fallible, and if someone puts a gun to my head it may well be because I need to be snapped out of a delusion. Before the belief was tested, I had no doubts, but now I know that I do. How can I justify saying something I would not stake my life on?

*

To this the probabilist says: fine, we’re not certain, so we can’t say “This is a table.” But we still have our uncertain opinion: “It is probably the case that this is a table.” No longer have I said anything about the table that I would refuse to die for.

But–will I die for this attribution of probability? Surely not. If I think of probability as something “out in the world,” then I can be no more sure of this probability than of the table itself; and if I think of probability as something “in my mind,” then I cannot be at all sure of it–after all, I expect it to change as I learn more about the world. Probabilities offer no relief. Still we must speak, and our opinions are the last thing we should speak about: we know we cannot trust them.

Of course the probabilist never claimed to be doing philosophy in the Socratic sense. He sees words as tools: we do not hold to our words, but use them. We assign our beliefs probabilities because beliefs are a sophisticated way of playing the odds, of trying to get ahead in the game of life. If I want to use my belief that “This is a table” is 99% likely to be “true,” and the bookie thinks it 99.9% likely, then I should play the odds and bet on “false.” So, who is the bookie here? On what event am I wagering, and for what prize, and with whom? And what is my stake?

*

Say I’m betting on whether there’s gold in them there hills, for possession of the gold, against everyone else who might at some point come to possess it, if it exists, and who might get rich at my expense, if it does not and yet I finance a search for it. If the potential gain justifies the expenditure, then I stake the cost of the expedition; if not, then I stake the lost opportunity to do so. This is the game of life; Nature is my bookie.

Similarly, in my “This is a table” scenario, I’m betting on whether this is a table, for my life, against a madman with a gun. I must weigh the worth of my word against that of my life: either he shoots me, or I say “I’m not sure whether this is a table.” But who is the bookie here?

For the probabilist, the answer is the same as before: Nature. Nature always sets the house rules. This neither philosopher nor common-sense man can accept. Whether or not I believe something ought to have nothing to do with whether or not a gun is to my head. I can’t just use certain words as a tool to preserve my life. In Stanley Cavell’s phrase, I ought to mean what I say.

*

Yet all three–probabilist, philosopher, man of common-sense–would do as the madman commanded. And none would be lying.

The probabilist, because for him there is no such thing as honesty; words are tools, which he uses, in this case, to preserve his life. There are no words the probabilist will die for.

The philosopher, because–as this thought experiment has taught him!–“I’m not sure,” and “Know thyself,” and the like, are the only things he can say honestly. The philosopher will die for these words, and no others.

The common-sense man, because if someone really held a gun to his head, he would become uncertain. He would stake his life on “This is a table,” but only if no one took him up on the bet. He would do so, for example, if he needed a table and nothing but a table would do, and would do so without the slightest doubt that this was indeed a table. But if someone threatened to kill him over saying “I’m not sure,” wouldn’t this, in itself, be reason to doubt? Who would make such a threat without a reason? Until you know he’s mad, his demand gives reason to not be sure; once you know he’s mad, it doesn’t matter–no sounds you make will count as talking to him.

It’s hard to be a martyr for the external world. So what will the common-sense man die for? So far we know too little about him; only, really, that he is ordinary. An ordinary Roman will die for “Rome is great.” An ordinary Christian will die for “Christ is Lord.” Newman thinks any decent man will die for “My mother is not a liar.” What these have in common, I suppose, is that they’re ethics, not physics. They’re matters of fact, but not only matters of fact. They’re not probabilistic stakes, but they’re also quite different from the philosopher’s wager.

Never, never, never, never, never

June 9, 2014

[For context, read this post.]

Alleged portrait of William Shakespeare

Not everyone, apparently, puts Shakespeare on their Fundamentals list; some even call him over-rated. The possibility of leaving him out never crossed my mind. Not because he’s the most important poet to ever live (though they say he’s this too, unless it’s Homer or Dante): fundamentality is more than just popularity. But I can’t imagine caring about literature without caring about Shakespeare. It’s not that I loved Shakespeare from an early age; quite the opposite, it took me until college to come around. Rather, I cared about literature from an early age, but until I read Shakespeare didn’t understand why.

My pre-Shakespearean thoughts on the subject were drawn from the theorists of “speculative fiction,” and centered around how literature can present counterfactuals. Reading Shakespeare made that way of thinking impossible. He can do the fantastic and the alien, to be sure, but it’s always obvious that supposing what is not is not the point. When we read, say, Macbeth, we are not trying to answer the question: what would it be like if witches really could prophesy the future? Rather the witches, and the ghosts, and the many murders, and above all the language in which they are all realized, make tangible the personality of Macbeth: the play shows a way of being human. This makes the gap between realism and speculation beside the point. A Midsummer Night’s Dream, The Winter’s Tale, The Tempest, differ in degree, not kind, from Much Ado About Nothing, Othello, King Lear.

In other words: King Lear is just as fantastical as The Tempest. Both tell us, not about the world, but about our own conflicting desires; they reveal in their contesting characters facets of ourselves, and how those facets articulate with one another–without, like the medieval morality play, insisting that these facets of desire can be cleanly differentiated and delimited. So at all times Shakespeare deals in fantasies, in infinite desires. Moreover, I would say, at his best he shows how these desires, because infinite, must realize themselves in the fantastical stricto sensu. Whether or not the handkerchief in Othello or the storm in King Lear is “possible” or “probable” just does not matter. They are like witches or ghosts; the point is the handkerchief’s power to hypnotize, and the storm’s power to instantiate divine malevolence.

Such is my basic understanding of Shakespeare’s achievement. It also explains, I think, why Hamlet is not my favorite of his plays, and why I never contemplated putting Hamlet on my list. T.S. Eliot said that the outer plot of Hamlet fails to “objectively correlate” to Hamlet’s inner turmoil:

The artistic “inevitability” lies in this complete adequacy of the external to the emotion; and this is precisely what is deficient in Hamlet. Hamlet (the man) is dominated by an emotion which is inexpressible, because it is in excess of the facts as they appear.

Unlike Eliot, I’m not convinced that this makes the play into a “failure,” or a “problem,” but it does sound correct, and it means, I think, that unless we “already” know what it is to feel what Hamlet feels, we cannot fully understand the play. To be sure, like Hamlet, I often desire without knowing what I desire, but Hamlet’s desire-without-an-object is of a very particular kind, closely bound to madness and revenge; this feeling is foreign to me. Hamlet speaks to me best when, like James Joyce, I imagine that it is the Ghost, not the Prince, in whom Shakespeare sees himself. But this is too clearly a willful misreading.

So if not Hamlet, what? I could give reasons for picking any of the other plays mentioned above, but I always knew it would be King Lear. If Hamlet shows us a single mind, King Lear shows us the entire world. There seems to me little higher than the struggle to comprehend it.

Indeed, it may be that Hamlet bothers me, in part, due to its solipsism; the drama is all within. The solution to Hamlet’s problems is, after all, fairly simple, and we are apparently meant to be absorbed in the mere fact that he cannot bring himself to do it. King Lear is the opposite: Given Lear and Gloucester’s initial mistakes–and these mistakes, though egoistic, are not solipsistic; though self-absorbed, they are not self-obsessed–there may be no solution. History has been let loose upon the world, and cannot be put back into its box. Hamlet’s storm at sea is his solution, his theodicy; Lear’s theomachical storm on the heath is of his own creation, and yet it exceeds him. The latter seems to me more real.

Put differently, in a way somewhat borrowed from Stanley Cavell, both King Lear and Hamlet are about human evil, but in the latter evil seems external, in the past, over and done with; the question, disappointingly, is how to free ourselves from its taint, even if the answer is that we cannot. Only the former shows how evil is within, present, continual; only it asks how to live with this burden. When Hamlet dies at the end, we may perhaps feel a sense of satisfaction, of a journey well-ended; his corpse receives full military honors. Cordelia’s death makes this contentment impossible. She must die for no other reason than that Edmund orders her killed. We can blame Lear, or Edgar, but then the punishment seems wildly disproportionate to the crime. We can say that her death represents escape from the fallen world, or that she must die because France cannot conquer England, but then the abstract reason seems unrelated to the concrete human suffering. We cannot blame Lear, or Edgar, or God, or France, but only Edmund–but Edmund is us; we have laughed at his jokes, felt indignant at his mistreatment, made ourselves complicit in his crimes, recognized him to be a part of ourselves. And his repentance and death cannot purify us, for they do not save Cordelia’s life.

It may be that, with his dying breath, Lear finds redemption, or at least consolation; and it may be that, with Edgar taking charge, the kingdom has been left in a better place. I would, in fact, argue for both of these propositions. But the more important point of the play, I think, is that history never ends. Lear may find consolation, but it changes nothing; and Edgar may reign well, for a while, but he cannot marry Cordelia and inaugurate a golden age, as the 17th-century revision of the play would have it: Cordelia is dead, and Edgar may well become another Lear. The play offers no solution, no catharsis, but only “Never, never, never, never, never.”

Yet–and this will be my final point here–it does, I think, portray a change. King Lear takes place in a pre-Christian age that has already grown ancient and decrepit, grown into a theater of pomp obscuring an underlying brutality. At the end, Edgar announces a new reign of sincerity. But this is deeply ironic, given that the ever-loyal Fool and Kent, though entirely defined by their social roles, are nevertheless the most forthright characters in the play, and that both, while not quite dead, have nevertheless vanished: the Fool has been off-stage since act three, and Kent implies that, given Lear’s death, he now intends to commit suicide. An artificial age has passed–but the new age, one feels, will at best appear to be more sincere. Such is the 800 BC world of King Lear; but such also, I think, is Shakespeare’s newly-fragmented 17th century, and so too all of modernity. King Lear seems to me to be somehow about the end of Catholic Christendom, and about the inevitable failure of whatever comes next to be any more truly Christian.
