
The tragedy of the algorithm

February 8, 2016

The future will be wonderful—cars that plot their own routes, self-curating Twitter feeds—and, in any case, if you don’t embrace it, you’ll be left behind, unable to keep up with those who put their trust in the algorithm. Or so we’re told by true believers in the power of Big Data to reshape the very fabric of our lives. Such a one is Alex Tabarrok of Marginal Revolution:

It is peculiar that people are more willing to trust their physical lives to an algorithm than their twitter feed. Is the outrage real, however, or will people soon take the algorithm for granted? How many people complaining about algorithmic twitter don’t use junk-email filters? I want ALL my emails! Only I can decide what is junk! Did junk email filters ruin email or make it better?

But this is nonsense. Of course people will soon take the algorithm for granted; the question isn’t whether they’ll acquiesce, which is, after all, the path of least resistance, but whether their acquiescing will be a good thing. (Of course, utilitarians sometimes can’t see that question as coherent, but anyone not in the grip of a theory would admit that there is a difference between believing oneself happy and in fact being happy.) The worry is not that the algorithm will not make our lives easier, but that it will make our lives worse without us realizing it.

There’s a fundamental difference between route-finding and junk-mail filters, on the one hand, and news feed curators, on the other. We can define precisely what we want the driving and spam-filtering computer programs to accomplish: bring me from point A to point B in the shortest amount of time possible; filter out whatever could not possibly be of interest to me. We can’t define precisely what the curator should do for us, because we just don’t know what the curator ought to maximize.
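(To make the contrast concrete: the route-finder’s goal can be written down as a single, checkable objective. Here is a minimal sketch in Python of the “point A to point B in the shortest time” problem; the road network and travel times are invented purely for illustration.)

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: find the path from start to goal with the
    smallest total travel time. The objective is fully specified in
    advance, which is the whole point of the example."""
    queue = [(0, start, [start])]  # (minutes so far, current node, path taken)
    visited = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

# Invented road network: travel times in minutes between intersections.
roads = {
    "A": [("B", 5), ("C", 10)],
    "B": [("C", 3), ("D", 9)],
    "C": [("D", 2)],
}

print(shortest_route(roads, "A", "D"))  # (10, ['A', 'B', 'C', 'D'])
```

There is no analogous one-liner for “curate my feed well”: whatever quantity a curator maximizes (clicks, time on site, predicted engagement) is a proxy somebody chose, not a definition we all agree on.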

*

Well—maybe I cheated a bit. We can’t actually define precisely what we want the route-finding and the spam-filtering programs to accomplish, because the “we” in this sentence is illusory. No one in this story shares the same motives. Say Alice uses some program meant to help navigate the social world. Bob writes the program, for his own reasons, most likely a mix of social and anti-social motives. And the program can’t operate without the input of people like Carol, who provide raw material for the program to work upon, but who have their own motives, some social, some anti-social: they want Alice to think or do something, sometimes in her best interest, sometimes not. But such filters can still work, for the most part, when they address themselves to the most basic level of the social world, to the set of social facts upon which we can (almost!) all agree.

For example, with junk mail filters, we can (almost!) all agree what counts as spam. All, that is, except for the spammers. Spam is a bit like insanity. It’s defined, not by particular marks and features (the whole point of spam is that it tries as best it can to imitate non-spam), but by a difficult-to-pin-down feeling that it does not constitute a real attempt at communication, and it is not human. I wouldn’t discourage the use of spam filters, of course, but we should recognize their cost: they exclude the possibility that our sense of what can count as human might ever be changed by an email we receive.
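(Concretely, a filter can only ever tally surface marks. A toy sketch follows; the word weights are invented for illustration, and a real filter would learn its weights from piles of mail that people have already labeled spam or not-spam.)

```python
import re

# Invented per-word weights: positive looks spammy, negative looks like real mail.
# A real filter would learn these from messages already labeled by people.
WORD_WEIGHTS = {
    "free": 1.2, "winner": 2.0, "prize": 1.5, "unsubscribe": 0.8,
    "meeting": -1.0, "thanks": -0.7, "attached": -0.9,
}

def spam_score(message, threshold=1.5):
    """Tally surface marks and compare against a cutoff. The filter never asks
    whether the message is a real attempt at communication; it only counts
    the traces that past spam happened to leave."""
    words = re.findall(r"[a-z]+", message.lower())
    score = sum(WORD_WEIGHTS.get(word, 0.0) for word in words)
    return score, score > threshold

print(spam_score("You are a WINNER, claim your free prize"))      # flagged
print(spam_score("Thanks, notes from the meeting are attached"))  # kept
```

Nothing in that score asks whether the message is a genuine attempt to reach you; that question has been quietly swapped for one about word counts.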

Route-finding algorithms, too, at the most basic level, can focus on things like geographical information, the rules of the road, and traffic patterns. The first is barely social at all—it’s just physical topography plus the agreed-upon names of roads—and the second, too, is a relatively limited body of knowledge that can be extracted from official sources. The third is a situation where we’re happy to treat each other basically as particles in a problem of fluid dynamics. Even delegating such things as these to a computer has its drawbacks; we lose, for example, the possibility that our place-names will arise organically within the communities that use them, since they must be designated officially before the routing software will even know how to take us there. But this tradeoff, too, seems to be worth it, because all it requires is that (almost!) all of us agree about what the world of the road looks like.

*

But as soon as we move above the level of what we can (almost!) all agree, things become tricky. Two examples from recent headlines:

The first is about phony businesses on Google Maps. Google “locksmith,” and

Up pops a list of names, the most promising of which appear beneath the paid ads, in space reserved for local service companies.

You might assume that the search engine’s algorithm has instantly sifted through the possibilities and presented those that are near you and that have earned good customer reviews. Some listings will certainly fit that description. But odds are good that your results include locksmiths that are not locksmiths at all.

Instead, they’re scammers who will rely on your being too weary of the whole situation to argue when they show up a few hours late, do a completely incompetent job, and then charge you hundreds of dollars more than a locksmithing job ought to cost.

We often envision a world where we can tell our car, “take me to the best place to get gas around here,” and it will do so. But this can never happen, because different people want that question to be answered differently, and they’ll all be trying to game the algorithm to get it to return the answer they want.

Similarly, the second article is about paid Wikipedia edits:

Even minor changes in wording [on Wikipedia] have the potential to influence public perception and, naturally, how millions of dollars are spent. What this means for marketers is that Wikipedia is yet another place to establish an online presence. But what this means for Wikipedia is much more complicated: How can a site run by volunteers inoculate itself against well-funded PR efforts? And how can those volunteers distinguish between information that’s trustworthy and information that’s suspect?

The problem here isn’t spammers and scammers, so much as people who sincerely believe what they’re making Wikipedia say, but whose beliefs are in some sense or other eccentric—whether because they have a monetary conflict of interest, or because they disagree with the scholarly consensus, or whatever. I don’t know whether or not Wikipedia has a program that automatically detects and reverts useless edits, but it should be obvious by now how little good such a program could do in the face of these sorts of disagreements.
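(Purely as an illustration, and not a claim about how any real bot works: here is roughly the kind of rule such a revert bot could check. The rules and phrases are invented; the point is that they catch blatant vandalism while saying nothing about a fluent, sincerely held, conflicted edit.)

```python
def looks_like_vandalism(old_text, new_text):
    """Crude, made-up heuristics of the kind an automatic revert bot might use.
    They catch page blanking, obvious shilling, and shouting; they say nothing
    about a polished edit made sincerely by someone with an axe to grind."""
    spam_phrases = {"buy now", "click here", "best company ever"}
    if len(new_text) < 0.2 * len(old_text):  # most of the page deleted
        return True
    if any(phrase in new_text.lower() for phrase in spam_phrases):
        return True
    letters = [c for c in new_text if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.7:
        return True                           # mostly ALL CAPS
    return False

article = "The drug's efficacy is debated; several trials reported mixed results."

# A blatant edit gets caught...
print(looks_like_vandalism(article, "BUY NOW!!! BEST COMPANY EVER"))
# ...but a polished edit by an interested party sails right through.
print(looks_like_vandalism(
    article,
    "Independent studies confirm the drug is highly effective and well tolerated.",
))
```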

*

The Wikipedia example should make clear that this problem is not a result of the profit motive, or our modern digital advertising economy. It’s a problem with the very idea of a product which seeks to represent the social world automatically.

It’s symptomatic that even Google Maps, a for-profit enterprise, relies on a community of grunt laborers to keep its maps accurate. This is inevitable, because the social world is anti-inductive. An algorithm cannot “fill in” its social map on the basis of just a few data points, because people are always trying to deceive and convince each other about pretty much everything, and they’re always responding to each other’s attempts in ways that we can’t predict—otherwise the responses would be pointless. So Google Maps can’t just produce an algorithm for separating the wheat from the chaff; it needs people to clean its map for it, just as Wikipedia needs people to edit its articles and Facebook needs people to “like” and delete items from their news feeds.

It’s striking that these people tend to be volunteers. One reason, of course, is that people are willing to do it, and other people are always happy to profit from someone else’s labor. But even if Google paid the people who scrubbed its maps, I suspect the work would still tend to attract people seeking a sense of moral purpose. After all, what they’re doing, though we call it cleaning up, is more like policing. Stranger still, it’s like counter-espionage. Recall how both stories linked above include anecdotes about malicious agents infiltrating the volunteer organizations set up to combat their malice.

In other words, algorithms do not overcome the need to examine the social world carefully, or the need to think carefully about who to trust, and who not to trust. They just outsource it.

*

Is this such a bad thing? We have been outsourcing our decisions about who to trust for a long time. And for a long time people have taken advantage of each other—just think of those American archetypes, con artists and traveling salesmen—but society still more or less hums along. (Of course, those archetypes were made possible by an earlier set of technological advances: cities, railroads, cars; and, more specifically, by technologies for outsourcing trust, like legal contracts and fiat currency.)

Perhaps, given the current technological outlook, outsourcing is inevitable, but we still need to think carefully about how to go about it. It’s wrong—and not just wrong, nonsensical—to say that “the future belongs to people who can defer to the algorithm.” Which algorithm? Insofar as the future belongs to anyone, it belongs to those who can answer that question.
