
Person, place, and thing

September 17, 2013

Over-use of SWAT teams, over-zealous zero-tolerance policies, over-broad NSA surveillance: what do they have in common? Bruce Schneier, one of my favorite technology commentators, has been saying for years that these are what happens when we over-react to small risks, trying to protect ourselves in ways that actually make us worse off. He’s recently written a wonderful summary of the argument, including explanations of the various reasons why this happens. Namely:

  • With fewer dangers out there, we over-estimate the danger posed by the ones that remain. Think: how we put more and more effort into eliminating the fewer and fewer diseases that are still life-threatening.
  • We over-estimate the frequency of exotic dangers and under-estimate the frequency of normal ones. Think: how shark attacks sound scarier than dog attacks.
  • We blame people for failing to protect against something bad that happens, but don’t blame them for wasting time and energy trying to protect us against bad things that never happen. This one’s a bit less obvious, but it’s the same basic principle as the first two. Schneier explains:

Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn’t approve the SWAT team for the one subpoena delivery that resulted in an officer being shot. No one wants to be the school principal who didn’t discipline — no matter how benign the infraction — the one student who became a shooter. No one wants to be the president who rolled back counterterrorism measures, just in time to have a plot succeed. Those in charge will be naturally risk averse, since they personally shoulder so much of the burden.

  • Finally, we mistake threats against our security for threats against our safety; that is, we think the threat comes from a natural cause, and so is directly predictable, when really it comes from other people acting rationally, and so is only indirectly predictable. Two examples from opposite sides of the political spectrum: gun control and airport security. This is a fundamentally different kind of failure from the first three. Here’s Schneier at more length:

There’s a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they’re arrayed against. Most of the risks we face in life are against nature: disease, accident, weather, random chance. As our science has improved — medicine is the big one, but other sciences as well — we become better at mitigating and recovering from those sorts of risks.

Security measures combat a very different sort of risk: a risk stemming from another person. People are intelligent, and they can adapt to new security measures in ways nature cannot. An earthquake isn’t able to figure out how to topple structures constructed under some new and safer building code, and an automobile won’t invent a new form of accident that undermines medical advances that have made existing accidents more survivable. But a terrorist will change his tactics and targets in response to new security measures. An otherwise innocent person will change his behavior in response to a police force that compels compliance at the threat of a Taser. We will all change, living in a surveillance state.

When you implement measures to mitigate the effects of the random risks of the world, you’re safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you’d expect — and you also get more side effects, because we all adapt.
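To make that game-theoretic point concrete, here is a toy sketch (my own, not Schneier’s; the targets and every number in it are invented purely for illustration). It compares spending a fixed defense budget against an unthinking risk, which strikes at fixed rates, versus an adaptive adversary, which always goes after the softest remaining target:

```python
# Toy numbers, invented for illustration: the probability an attack on
# each target succeeds, and how often "nature" (random chance) strikes it.
vulnerability = {"airport": 0.8, "train": 0.7, "stadium": 0.6}
attack_rate   = {"airport": 0.5, "train": 0.3, "stadium": 0.2}

def loss_vs_nature(vuln):
    """Nature strikes targets at fixed rates, whether or not we defend."""
    return sum(attack_rate[t] * vuln[t] for t in vuln)

def loss_vs_adversary(vuln):
    """A rational attacker always picks the softest remaining target."""
    return max(vuln.values())

# Spend the entire defense budget hardening the airport.
hardened = dict(vulnerability, airport=0.1)

for label, loss in [("nature", loss_vs_nature), ("adversary", loss_vs_adversary)]:
    print(f"vs {label}: expected loss {loss(vulnerability):.2f} -> {loss(hardened):.2f}")
```

Against nature, hardening the airport cuts the expected loss nearly in half (0.73 to 0.38); against the adversary, it buys almost nothing (0.80 to 0.70), because the attack simply migrates to the trains.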

Schneier treats these all as cognitive errors: forgetting “First, do no harm,” in the case of the first three, and ignoring game theory, in the final instance. What I find fascinating is how they can also be understood as violations of ethical maxims. The first three relate to “The ends do not justify the means,” the last to “Treat persons as ends, not means.” And these aren’t just any ethical maxims; they are, to my mind, among the most important, and the most often forgotten.

Perhaps that’s why I like Schneier’s work as much as I do. Even if, like most techies, he’s a pragmatist at heart, his work has quasi-Kantian affinities. It’s almost metaphysical: he wants to help us recognize that we inhabit a world with different kinds of entities that must be treated differently. Learn game theory; that is, learn the distinction between persons and things. And, first, do no harm; that is, learn the distinction between things and places. This last point is almost Heideggerian. Pay attention not only to the goal of an action, but to any other consequences it might have, and notice that while the goal is always of finite complexity, the complexity of the consequences can be infinite. No matter how many discrete objects we identify and learn to manipulate, we’re lost if we fail to attend to the background environment. And since we can only see things, the place in which they exist is by its nature invisible to us, recognizable only after conscious reflection as an entity complex beyond imagination.
