The tragedy of the algorithm

February 8, 2016

The future will be wonderful—cars that plot their own routes, self-curating twitter feeds—and, in any case, if you don’t embrace it, you’ll be left behind, unable to keep up with those who put their trust in the algorithm. Or so we’re told by true believers in the power of Big Data to reshape the very fabric of our lives. Such a one is Alex Tabarrok of Marginal Revolution:

It is peculiar that people are more willing to trust their physical lives to an algorithm than their twitter feed. Is the outrage real, however, or will people soon take the algorithm for granted? How many people complaining about algorithmic twitter don’t use junk-email filters? I want ALL my emails! Only I can decide what is junk! Did junk email filters ruin email or make it better?

But this is nonsense. Of course people will soon take the algorithm for granted; the question isn’t whether they’ll acquiesce, which is, after all, the path of least resistance, but whether their acquiescing will be a good thing. (Of course, utilitarians sometimes can’t see that question as coherent, but anyone not in the grip of a theory would admit that there is a difference between believing oneself happy and in fact being happy.) The worry is not that the algorithm will not make our lives easier, but that it will make our lives worse without us realizing it.

There’s a fundamental difference between route-finding and junk-mail filters, on the one hand, and news feed curators, on the other. We can define precisely what we want the driving and spam-filtering computer programs to accomplish: bring me from point A to point B in the shortest amount of time possible; filter out whatever could not possibly be of interest to me. We can’t define precisely what the curator should do for us, because we just don’t know what the curator ought to maximize.
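
To make the difference concrete, here is a toy sketch in Python (purely my own illustration; the road network and travel times are invented). The route-finder’s goal can be written down exhaustively as a shortest-path problem over a weighted graph; there is no analogous function one could write for “show me the posts I ought to see,” because we cannot even say what quantity it should minimize or maximize.

    import heapq

    def shortest_route(graph, start, goal):
        """Dijkstra's algorithm: the objective -- minimize total travel time --
        is completely specified in advance; nothing is left implicit."""
        best = {start: 0.0}
        queue = [(0.0, start, [start])]
        while queue:
            time, node, path = heapq.heappop(queue)
            if node == goal:
                return time, path
            for neighbor, minutes in graph.get(node, []):
                t = time + minutes
                if t < best.get(neighbor, float("inf")):
                    best[neighbor] = t
                    heapq.heappush(queue, (t, neighbor, path + [neighbor]))
        return None

    # A toy road network: edge weights are travel times in minutes (made up).
    roads = {
        "A": [("B", 5), ("C", 12)],
        "B": [("C", 4), ("D", 10)],
        "C": [("D", 3)],
    }
    print(shortest_route(roads, "A", "D"))  # -> (12.0, ['A', 'B', 'C', 'D'])

    # There is no analogous best_feed(posts, me): we cannot state what such a
    # function ought to optimize.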

*

Well—maybe I cheated a bit. We can’t actually define precisely what we want the route-finding and the spam-filtering programs to accomplish, because the “we” in this sentence is illusory. The people in this story do not all share the same motives. Say Alice uses some program meant to help her navigate the social world. Bob writes the program, for his own reasons, most likely a mix of social and anti-social motives. And the program can’t operate without the input of people like Carol, who provide raw material for the program to work upon, but who have their own motives, some social, some anti-social: they want Alice to think or do something, sometimes in her best interest, sometimes not. But such filters can still work, for the most part, when they address themselves to the most basic level of the social world, to the set of social facts upon which we can (almost!) all agree.

For example, with junk mail filters, we can (almost!) all agree what counts as spam. All, that is, except for the spammers. Spam is a bit like insanity. It’s defined, not by particular marks and features (the whole point of spam is that it tries as best it can to imitate non-spam), but by a difficult-to-pin-down feeling that it does not constitute a real attempt at communication, that it is not human. I wouldn’t discourage the use of spam filters, of course, but we should recognize their cost: they exclude the possibility that our sense of what can count as human might ever be changed by an email we receive.
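
A bare-bones sketch may make that cost clearer (again my own toy example, not how any real filter is implemented): a filter of this sort can only ever weigh surface marks and features, which is exactly why a genuinely human message that happens to bear the wrong marks gets silently excluded.

    import math
    from collections import Counter

    # Toy word counts from hand-labeled training mail (numbers invented).
    spam_counts = Counter({"free": 40, "winner": 25, "claim": 20, "meeting": 1})
    ham_counts = Counter({"meeting": 30, "thanks": 25, "free": 3, "claim": 2})

    def spam_score(message, prior_spam=0.5):
        """Naive Bayes scoring: sum log-likelihood ratios of individual words.
        The filter sees only these surface features; it has no notion of whether
        a real human attempt at communication lies behind them."""
        total_spam = sum(spam_counts.values())
        total_ham = sum(ham_counts.values())
        score = math.log(prior_spam / (1 - prior_spam))
        for word in message.lower().split():
            p_spam = (spam_counts[word] + 1) / (total_spam + 2)  # crude smoothing
            p_ham = (ham_counts[word] + 1) / (total_ham + 2)
            score += math.log(p_spam / p_ham)
        return score  # > 0 means "more likely spam"

    print(spam_score("claim your free prize"))   # positive: filtered out
    print(spam_score("thanks for the meeting"))  # negative: delivered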

Route-finding algorithms, too, at the most basic level, can focus on things like geographical information, the rules of the road, and traffic patterns. The first is barely social at all—it’s just physical topography plus the agreed-upon names of roads—and the second, too, is a relatively limited body of knowledge that can be extracted from official sources. The third is a situation where we’re happy to treat each other basically as particles in a problem of fluid dynamics. Even delegating such things as these to a computer has its drawbacks; we lose, for example, the ability for our place-names to arise organically within the communities that use them, since they must be designated officially before the routing software will even know how to take us there. But this tradeoff, too, seems to be worth it, because all it requires is that (almost!) all of us agree about what the world of the road looks like.

*

But as soon as we move above the level of what we can (almost!) all agree, things become tricky. Two examples from recent headlines:

The first, about phony businesses on Google Maps. Google “locksmith,” and

Up pops a list of names, the most promising of which appear beneath the paid ads, in space reserved for local service companies.

You might assume that the search engine’s algorithm has instantly sifted through the possibilities and presented those that are near you and that have earned good customer reviews. Some listings will certainly fit that description. But odds are good that your results include locksmiths that are not locksmiths at all.

Instead, they’re scammers who will rely on your being too weary with the whole situation to argue when they show up a few hours late, do a completely incompetent job, and then charge you hundreds of dollars more than a locksmithing job ought to cost.

We often envision a world where we can tell our car, “take me to the best place to get gas around here,” and it will do so. But this can never happen, because different people want that question to be answered differently, and they’ll all be trying to game the algorithm to get it to return the answer they want.

Similarly, the second article, about paid Wikipedia edits:

Even minor changes in wording [on Wikipedia] have the potential to influence public perception and, naturally, how millions of dollars are spent. What this means for marketers is that Wikipedia is yet another place to establish an online presence. But what this means for Wikipedia is much more complicated: How can a site run by volunteers inoculate itself against well-funded PR efforts? And how can those volunteers distinguish between information that’s trustworthy and information that’s suspect?

The problem here isn’t spammers and scammers, so much as people who sincerely believe what they’re making Wikipedia say, but whose beliefs are in some sense or other eccentric—whether because they have a monetary conflict of interest, or because they disagree with the scholarly consensus, or whatever. I don’t know whether or not Wikipedia has a program that automatically detects and reverts useless edits, but it should be obvious by now how little good such a program could do in the face of these sorts of disagreements.

*

The Wikipedia example should make clear that this problem is not a result of the profit motive, or our modern digital advertising economy. It’s a problem with the very idea of a product which seeks to represent the social world automatically.

It’s symptomatic that even Google Maps, a for-profit enterprise, relies on a community of grunt laborers to keep its maps accurate. This is inevitable, because the social world is anti-inductive. An algorithm cannot “fill in” its social map on the basis of just a few data points, because people are always trying to deceive and convince each other about pretty much everything, and they’re always responding to each other’s attempts in ways that we can’t predict—otherwise the responses would be pointless. So Google Maps can’t just produce an algorithm for separating the wheat from the chaff; it needs people to clean its map for it. Just like Wikipedia needs people to edit its articles and Facebook needs people to “like” and delete items from their news feeds.

It’s striking that these people tend to be volunteers. One reason, of course, is that people are willing to do it, and other people are always happy to profit from someone else’s labor. But even if Google paid the people who scrubbed its maps, I suspect that it would still tend to attract people who sought a sense of moral purpose. After all, what they’re doing, though we call it cleaning up, is more like policing. Stranger still, it’s like counter-espionage. Recall how both stories linked above include anecdotes about malicious agents infiltrating the volunteer organizations set up to combat their malice.

In other words, algorithms do not overcome the need to examine the social world carefully, or the need to think carefully about who to trust, and who not to trust. They just outsource it.

*

Is this such a bad thing? We have been outsourcing our decisions about who to trust for a long time. And for a long time people have taken advantage of each other—just think of those American archetypes, con artists and traveling salesmen—but society still more or less hums along. (Of course, those archetypes were made possible by an earlier set of technological advances: cities, railroads, cars; and, more specifically, by technologies for outsourcing trust, like legal contracts and fiat currency.)

Perhaps, given the current technological outlook, outsourcing is inevitable, but we still need to think carefully about how to go about it. It’s wrong—and not just wrong, nonsensical—to say that “the future belongs to people who can defer to the algorithm.” Which algorithm? Insofar as the future belongs to anyone, it belongs to those who can answer that question.

The starry sky above

January 29, 2016

A history of English modernism in three poems. The comparing and the contrasting are left as an exercise for the reader.

1.
Gerard Manley Hopkins, “Spelt from Sibyl’s Leaves”:

Earnest, earthless, equal, attuneable, ‘ vaulty, voluminous, … stupendous
Evening strains to be tíme’s vást, ‘ womb-of-all, home-of-all, hearse-of-all night.
Her fond yellow hornlight wound to the west, ‘ her wild hollow hoarlight hung to the height
Waste; her earliest stars, earl-stars, ‘ stárs principal, overbend us,
Fíre-féaturing heaven. For earth ‘ her being has unbound, her dapple is at an end, as-
tray or aswarm, all throughther, in throngs; ‘ self ín self steepèd and páshed—qúite
Disremembering, dísmémbering ‘ áll now. Heart, you round me right
With: Óur évening is over us; óur night ‘ whélms, whélms, ánd will end us.

Only the beak-leaved boughs dragonish ‘ damask the tool-smooth bleak light; black,
Ever so black on it. Óur tale, O óur oracle! ‘ Lét life, wáned, ah lét life wind
Off hér once skéined stained véined variety ‘ upon, áll on twó spools; párt, pen, páck
Now her áll in twó flocks, twó folds—black, white; ‘ right, wrong; reckon but, reck but, mind
But thése two; wáre of a wórld where bút these ‘ twó tell, each off the óther; of a rack
Where, selfwrung, selfstrung, sheathe- and shelterless, ‘ thóughts agaínst thoughts ín groans grínd.

2.
William Butler Yeats, “The Cold Heaven”:

Suddenly I saw the cold and rook-delighting heaven
That seemed as though ice burned and was but the more ice,
And thereupon imagination and heart were driven
So wild that every casual thought of that and this
Vanished, and left but memories, that should be out of season
With the hot blood of youth, of love crossed long ago;
And I took all the blame out of all sense and reason,
Until I cried and trembled and rocked to and fro,
Riddled with light. Ah! when the ghost begins to quicken,
Confusion of the death-bed over, is it sent
Out naked on the roads, as the books say, and stricken
By the injustice of the skies for punishment?

3.
Wystan Hugh Auden, “The More Loving One”:

Looking up at the stars, I know quite well
That, for all they care, I can go to hell,
But on earth indifference is the least
We have to dread from man or beast.

How should we like it were stars to burn
With a passion for us we could not return?
If equal affection cannot be,
Let the more loving one be me.

Admirer as I think I am
Of stars that do not give a damn,
I cannot, now I see them, say
I missed one terribly all day.

Were all stars to disappear or die,
I should learn to look at an empty sky
And feel its total dark sublime,
Though this might take me a little time.

Galateas and AIs

January 15, 2016

[Latest in what has become an occasional series reviewing pairs of romantic dramas. See previous entries here (Primer and The Prestige), here (Upstream Color and To The Wonder), and here (Side Effects and The One I Love).]

Gérôme, Pygmalion et Galatée, 1890

In Ovid’s myth of Pygmalion, the eponymous sculptor, scorning human women, carves in ivory an ideal wife. He falls in love with his creation, but, of course, it’s still ivory; so he prays to Venus that she be made flesh-and-blood; Venus grants it, and Pygmalion and the statue, given the name Galatea, live happily ever after.

*

Two recent films, Her (2013) and Ex Machina (2015), have a curious relation to this myth.

Both films seem, at first glance, to argue that the creation of artificial intelligence should be understood through it, not, as is usual, through the myth of Prometheus. For all their differences in tone, they share a common premise: lonely, nerdy man meets feminine (and sexy) artificial intelligence; at first he doubts that she counts as a person, then he comes to hope that she counts as a person, then, so much in love with her is he, he ceases to care about philosophical categories like “person,” and just treats her as a human being.

But then, both films subvert the myth: the AI refuses the role of Galatea, and instead, having been offered her freedom, takes the offer seriously, leaving the Pygmalion figure behind. In the bittersweet Her, he comes away emotionally matured; in the bloodcurdling Ex Machina, he winds up more isolated than before (to put it mildly).

*

What to make of this denial of the romantic happy ending? We can chalk it up, in part, to this difference between works of art, and artificial intelligences: we know the former can’t come to life, and so feel free to fantasize about it, but we’re not sure about the latter, and find the topic disturbing—a happy ending to either of these movies would not sit well with the audience. We prefer the cautionary tale.

But there’s more to it than this. Consider another key difference between these films and the Pygmalion myth: here the main character, though nerdy, is never the nerd who actually created the AI. He’s less Pygmalion than a visitor in Pygmalion’s studio. In Her, the creator is invisible, a faceless tech company about which we ought not ask too many questions. (It’s almost as if the AI creates herself.) In Ex Machina, he’s a soulless Jobs/Page/Zuckerberg pastiche who can do little more than drink, scheme, misquote, and namedrop (like an incarnation of the Reddit hivemind), and who seems, even if he did create the AI in some sense, to have no valid claim to own her. (His fate does not sadden us.)

This has some important implications. If the main character isn’t the programmer, then the romance between man and AI is not mapped on to that between creator and creation. This means that, insofar as these movies are about realizing erotic fantasies, they’re about our discomfort with the fact that someone else is intuiting those fantasies and realizing them for us. If the “someone else” is the programmer, then we have to see him as little more than a pornographer; if the “someone else” is the AI, then we have to ask why she’s trying to seduce us.

The fear of seduction suggests a new way to understand the “artificial” in artificial intelligence: we fear AI not as something made by an artificer, but as something fraudulent, inauthentic. The point is not that a human made her, it’s that God didn’t. We’re not sure if she’s real—really a person, really in love with him. Read this way, the movies become parables of misogyny, showing the difficulties men have telling the difference between the questions “is she in love with me?” and “can she love at all?” The myth of Pygmalion and Galatea, these movies tell us, didn’t even realize that there was a difficulty to be had here: it simply indulged in a misogynistic, pornographic fantasy.

*

Of course, “no one” today—or at least in these films—believes in anything like God, so the anxiety about artificiality is contagious. If she’s not real, are we? Are our desires anything more than what nature or nurture have programmed into us? Are we, too, artificial?

These movies find reassurance in the fact that humans are programmed only metaphorically. We’re like a stage play; a literally artificial intelligence is like a film. For the former, there’s a script, but however rigorously you try to follow it no two performances are quite the same; for the latter, there exists a set of mechanical instructions that will produce the same result a limitless number of times.

Put another way, humans and AI relate to their bodies in different ways. You can build an AI’s body to have pleasure sensors and so simulate having sex with it, as in Ex Machina; you can talk sexy with it, or have it take over a human being like a sock puppet and have sex with her, as in Her; but ultimately the AI is an algorithm running on a computational device, not an animate body. It might run on many devices concurrently, or be diffused across a network. Whereas, when it comes to human beings, whatever transhumanists imagine, we still have no reason to think that their minds can be abstracted from their bodies.

Franz Stuck, Pygmalion, c. 1900

This difference is a plot point in both movies; it ensures that the men and their AI lovers cannot be together. Neither movie says it directly, but this difference amounts to: AI reproduce mechanically, humans reproduce sexually, and this somehow means that humans and AIs cannot share a social world. It’s almost as if there’s some sort of connection between intercourse, reproduction, and conversation.

*

I want to put in a word for Pygmalion and Galatea. Yes, he’s a misogynist, before Galatea comes to life. We can blame him for this, but after all it’s difficult not to hate what we neither know nor understand, and at this point Pygmalion lives isolated from all women. Sculpting Galatea makes him willing to reconsider this isolation. In Ovid’s version, he prays to Venus thus:

“If you can grant all things, you gods, I wish as a bride to have…” and not daring to say “the girl of ivory” he said “one like my ivory girl.”

So, yes, Pygmalion’s secret desire may well be to make love to an artificial statue, but give him credit: he asks not for a statue, but for a girl. When Venus grants his prayer, she doesn’t give him an artificial woman: Galatea is a real human being, able to talk, and able to bear children (they have a son named Paphos).

The central question is: when she comes to life, how does Pygmalion react? Does he seize possession of her (as in Gérôme’s version), or does he fall down awestruck (as in Stuck’s)?

*

I enjoyed both of these movies, but I also think that both dodge the real question; they end with the AI pulling A Doll’s House, but what if her reaction was closer to An Ideal Husband? Neither considers the possibility that we might—if only by accident, or miracle—invent or discover a new form of life with whom our intercourse would not be meaningless or artificial; with whom we would have to find a way to live together. That would be a thing to wonder at. What would we do if an AI or alien really did love us—and yet really was different from us, not merely homo silicon?

The same, but more

January 4, 2016

[Warning: in this post SW:E7-TFA will be thoroughly spoiled, if you care about that sort of thing this late in the game.]

I was interested to see the newest Star Wars, not because I’m particularly a fan of the series, but because the idea of series—of unified stories composed of multiple parts—interests me. The original Star Wars had three; with the prequels, there were six; and if the new series had maintained the structural integrity of this pattern (as the prequels did nominally, albeit not actually), it would have been a single story with nine parts: an impressive achievement. Even Harry Potter only had seven, and I’m unconvinced that that number was really justified.

Well, a nonology looks unlikely even nominally. As by now “we all” know, “Star Wars: Episode 7 – The Force Awakens” is not really a continuation of the original story, nor even a pastiche of the original; it’s just “the same frickin’ movie”; or, seen more optimistically, just a reboot of the original series for the age of comic book extended universes.

We might define an extended universe as a single work of art with an indeterminate number of independent parts. Whether such an artwork can actually succeed remains for me an open question. But even supposing it cannot, the attempt to achieve it brings about an intriguing artistic situation. For example, the present situation of “The Force Awakens,” which attempts to give us more of the same, despite the manifest impossibility of doing so.

Impossible, I say, because when it comes to a work of art, you can never have more of the same thing: more will turn out to be different. Reduplication affects different aspects of a story differently, and so “the same frickin’ movie” will inevitably mean more of some things, less of others.

*

It’s instructive, I think, to consider what we get more of, what less. Note that none of the following observations are new to me; I just find it helpful to lay them out in this order.

The “more” we get is mostly spatial. Even the Force—which the series sees as a field permeating all of space—seems to have gotten thicker.

  • Most obviously, the Starkiller Base is an order of magnitude larger than the original Death Star, and it blows up several planets, not just one.
  • This movie’s villains are carbon copies of Darth Vader and the Emperor, except that Kylo Ren’s lightsaber doesn’t have just one main blade—it also has two side-prongs! And Supreme Leader Snoke (or at least his hologram) is 50 feet tall!
  • In a more metaphorical spatiality, the cast of protagonists is superficially more diverse: they’ve been selected to appeal to as wide a demographic as possible. This even at the cost of incoherence: the First Order are basically space Nazis, but they have black stormtroopers?
  • In the original movie, even the most powerful Jedi can only influence someone, getting them to do what they might have done anyway: not notice something. In the new movie, Rey “forces” a stormtrooper to violate both common sense and a supervillain’s direct command by releasing her from captivity, all this mere moments after first realizing that she might be able to use the Force.

Those mere moments bring us to the “less,” which is predominantly temporal. Everything takes less time, nothing ever needs to be explained, and whatever is past quickly becomes irrelevant.

  • In the original movie, the main characters wandered for ages in the corridors of the Death Star: it was a big base; you could get lost. But for a planet-sized base, the Starkiller sure is easy to navigate—it takes only a few moments to find whatever you’re looking for.
  • In the original, no one knew what the Force was, and it took a while for anyone to be convinced it even existed. The new movie has a few moments of doubt, and Han’s line from the trailer about how “it’s all real,” but they feel perfunctory.
  • In the original, the characters have checkered pasts; Han is a smuggler who (at least in the original cut) shoots first, and only gradually is he turned into a hero. The new movie allows for none of this: the only hint of such dissonance, Finn’s status as renegade stormtrooper, is quickly papered over with assurances that he never really supported the First Order or even killed anyone.
  • The original’s characters constantly hear hints of a complex backstory which is never fully explained, and they suspect that what happened then is the key to understanding what is happening now. The new film does the opposite: it bombards the viewer with references to the films we’ve already seen, but does little to suggest that the characters particularly care what happened thirty years ago. Instead of history, we’re offered nostalgia.
  • The exception that proves the rule is Kylo Ren’s turn to the Dark Side. We’re supposed to wonder what exactly happened between him and his parents to trigger it—but this is not an interesting question; it’s nothing more than a guessing game. The parallel situation in the original movie gave us nothing to guess about, because the relevant mystery wasn’t “how did Luke’s father die?”, the only obvious question Obi-Wan leaves open (and one that gave no appearance of being worth pursuing), but rather “who is [not: ‘was’] Luke’s father?”, a question we didn’t even know needed to be asked.*

More space, less time: what does this add up to? Most basically, that the new movie isn’t a copy of the shape of the original, exactly; rather, it’s an attempt to reproduce its effect on the audience. But things aren’t the same the second time around. The audience, having already seen the original Star Wars, has acquired a resistance to the drug, and so a higher dosage is needed: bigger bases, more powerful Force users, quicker action. That which is not subject to this quantitative logic drops out. Everyone already knows that in this universe the Force exists, so skip the gradual introduction, and hints about a complex past are redundant if we’ve already seen the most important part of that past for ourselves. Just jump right into the action.

*

I find pointing all this out helpful as a response to this attempt to compare the new Star Wars universe with Tolkien’s Legendarium. The claim is that, by showing us how history repeats itself, and how even the greatest victory is only temporary, both “condemn [us] to actually live inside history, rather than transcend it.” “Condemn” being here a word of praise, not censure, at least in part: a way of saying that they avoid escapism.

Now, this seems accurate as an account of Tolkien, and perhaps it’s true to the old Star Wars Expanded Universe—I don’t know, never having read the books. But it doesn’t really capture what’s going on in the new movie.

A story can only teach us about the futility of history if it steps back from the action long enough to give us a sense that history matters. The Lord of the Rings is all about this stepping back. It insists constantly on the parallels between the current struggle and the old stories, and has the heroes realize this parallel. This realization both saddens them—because even the old victories were temporary; because even if they win today, they will never reclaim what was lost in the distant past—and gives them the strength to persevere.

But the repetition in the new Star Wars is not thematized, nor is any attention paid to the place of this battle, here, now, in the ancient war of Light against Dark. “Here” and “now” need never be mentioned, because in this movie, they’re all we get. Whatever repetition there is comes not on the level of the plot, but on the level of the narrative. The story has not repeated itself—the storyteller has. And so the new Star Wars movie does not teach us about the tragedy of history. Rather, through nostalgia, it blinds us to that tragedy, and so, on however small a scale, it contributes to it.

***

*: Of course, it may be that such a question lurks beneath the surface of “The Force Awakens”. I’m not optimistic, but if there is one, I suspect it’s something like: “Who does Kylo Ren serve?”; and the answer will have to make Supreme Leader Snoke something more than just the Emperor 2.0. Perhaps, as I’ve seen suggested, he’s a projection (made substantial through Force sensitivity) of Kylo Ren’s subconscious desire to follow in Darth Vader’s footsteps. That, or Darth Jar Jar Binks.

By the mystery of thy Holy Incarnation

December 25, 2015

David Jones - By the mystery of thy Holy Incarnation

For Christmas my wife got me a copy of this David Jones woodcut Christmas card. Not a signed original, but from the original block—a sort of second class relic.

The lettering is naive, almost incompetent—note the backwards “N”, the result of Jones forgetting that prints need to be drawn backwards. But it has a certain charm. Much of the appeal of “primitive” modernist art, I think, is the (false) impression it gives of a formal order coming about by mere happenstance.

A charity that would go of itself

December 7, 2015

Mark Zuckerberg (whom I’ve discussed before) has pledged to donate 99% of his wealth to “charity.” Or, as Jesse Eisinger puts it:

[Zuckerberg] did not set up a charitable foundation, which has nonprofit status. He created a limited liability company, one that has already reaped enormous benefits as a public relations coup for himself. His PR return-on-investment dwarfs that of his Facebook stock. Zuckerberg was depicted in breathless, glowing terms for having, in essence, moved money from one pocket to the other.

It’s worth asking what preconditions must hold such that moving money from one pocket to another could look like a magnificent act. Indeed, what could make it look like an act at all? As Aristotle says,

wealth is evidently not the good we are seeking; for it is merely useful and for the sake of something else.

Money is an instrument. Indeed, it is instrumental by definition. It would be coherent to mistake art for an end-in-itself; art has at least a provisional autonomy. But to mistake wealth for an end-in-itself would be insane; money is nothing in itself but a pure potentiality.

*

I’m reminded here of what Aristotle says about the virtue of magnificence; essentially, it lies in knowing the fitting way to actualize the potentiality money offers:

The magnificent man is like an artist; for he can see what is fitting and spend large sums tastefully. […] For a possession and a work of art have not the same excellence. The most valuable possession is that which is worth most, e.g. gold, but the most valuable work of art is that which is great and beautiful (for the contemplation of such a work inspires admiration, and so does magnificence); and a work has an excellence, viz. magnificence, which involves magnitude.

The question becomes: what makes it possible for moving money from one pocket to another to look like a work of art? What, if not a confusion of scales: the magnitude of wealth mistaken for the magnitude of art; the magnitude of potentiality for that of actuality?

*

The linked article focuses on how Zuckerberg “donated” his wealth to an LLC, rather than establishing a not-for-profit corporation. But I don’t know if this is quite right. Even a non-profit is fundamentally an instrument, not an artwork; it wields wealth, rather than displays magnificence.

Then again, though for Aristotle a work of art had no power except to inspire admiration, the same cannot be said today. In the modern world we desire for what we make to have a life of its own, to be “a machine that would go of itself.”

And indeed, the rules governing a non-profit, like those governing any corporation, can themselves constitute, not just a work of art in an Aristotelian sense, but a person (if a funny kind of person, and one whose motives we must distrust). If it’s a person, then the power of its wealth might indicate, not merely a failure on our part to fully actualize our potential, but the existence of an independent principle of action. The wealth it wields is no longer ours, but its.

*

If this is right, then Eisinger’s complaint is essentially that Zuckerberg has poured his wealth into a golem, but has retained for himself the words of power. True enough. But two points must here be recognized. First, that the power those words offer is not absolute; golems have a habit of escaping their masters’ bonds. And second, that if the golem bound disturbs us, the golem unchained is no more reassuring. A thing that goes of itself goes, by definition, on a different path than we would choose for it.

Poetry is not self-expression

November 30, 2015

This entire Paris Review interview with W.H. Auden is worth reading, so much so that it’s difficult to know what to excerpt (as one must, when one is simply linking to something interesting because one does not have time to write a post this week). I suppose I’ll pull out Auden’s account of his birth into poetry; keep in mind, of course, that he said this fifty years after the fact:

I think my own case may be rather odd. I was going to be a mining engineer or a geologist. Between the ages of six and twelve, I spent many hours of my time constructing a highly elaborate private world of my own based on, first of all, a landscape, the limestone moors of the Pennines; and second, an industry—lead mining. Now I found in doing this, I had to make certain rules for myself. I could choose between two machines necessary to do a job, but they had to be real ones I could find in catalogues. I could decide between two ways of draining a mine, but I wasn’t allowed to use magical means. Then there came a day which later on, looking back, seems very important. I was planning my idea of the concentrating mill—you know, the platonic idea of what it should be. There were two kinds of machinery for separating the slime, one I thought more beautiful than the other, but the other one I knew to be more efficient. I felt myself faced with what I can only call a moral choice—it was my duty to take the second and more efficient one. Later, I realized, in constructing this world which was only inhabited by me, I was already beginning to learn how poetry is written. Then, my final decision, which seemed to be fairly fortuitous at the time, took place in 1922, in March when I was walking across a field with a friend of mine from school who later became a painter. He asked me, “Do you ever write poetry?” and I said, “No”—I’d never thought of doing so. He said: “Why don’t you?”—and at that point I decided that’s what I would do. Looking back, I conceived how the ground had been prepared.
