I’m a bit late to the party, but here are two articles about the Alan Turing centenary that I found particularly interesting. First, in case you’re not up to speed on who Alan Turing was, here’s Daniel Dennett, on how Turing is like Darwin:
Turing’s idea was a similar — in fact remarkably similar — strange inversion of reasoning. The Pre-Turing world was one in which computers were people, who had to understand mathematics in order to do their jobs. Turing realized that this was just not necessary: you could take the tasks they performed and squeeze out the last tiny smidgens of understanding, leaving nothing but brute, mechanical actions. In order to be a perfect and beautiful computing machine, it is not requisite to know what arithmetic is.
What Darwin and Turing had both discovered, in their different ways, was the existence of competence without comprehension. This inverted the deeply plausible assumption that comprehension is in fact the source of all advanced competence. Why, after all, do we insist on sending our children to school, and why do we frown on the old-fashioned methods of rote learning? We expect our children’s growing competence to flow from their growing comprehension. The motto of modern education might be: “Comprehend in order to be competent.” For us members of H. sapiens, this is almost always the right way to look at, and strive for, competence. I suspect that this much-loved principle of education is one of the primary motivators of skepticism about both evolution and its cousin in Turing’s world, artificial intelligence. The very idea that mindless mechanicity can generate human-level — or divine level! — competence strikes many as philistine, repugnant, an insult to our minds, and the mind of God.
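Dennett’s “competence without comprehension” is easy to illustrate concretely. Here is a minimal sketch (my own toy example, not anything from Dennett’s article) of a Turing machine that adds two numbers written in unary — it is nothing but a lookup table of rote symbol-shuffling rules, and no part of it “knows” what arithmetic is:

```python
# A tiny Turing machine that adds two unary numbers, e.g. "111+11" -> "11111".
# It works by blind table lookup: turn the "+" into a "1", then erase one
# surplus "1" from the end. Competence without comprehension.

BLANK = "_"

# (state, symbol) -> (new_state, symbol_to_write, head_move)
RULES = {
    ("scan", "1"):   ("scan", "1", +1),      # walk right over the digits
    ("scan", "+"):   ("scan", "1", +1),      # turn the separator into a 1
    ("scan", BLANK): ("erase", BLANK, -1),   # past the end: back up one cell
    ("erase", "1"):  ("halt", BLANK, -1),    # erase one surplus 1 and stop
}

def run(tape_str):
    tape = list(tape_str)
    state, head = "scan", 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else BLANK
        state, write, move = RULES[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = write
        head += move
    return "".join(tape).strip(BLANK)

print(run("111+11"))  # -> 11111  (3 + 2 = 5)
```

The machine never represents “three” or “plus”; it only matches symbols against a four-row table. That is exactly the squeezing-out of understanding Dennett describes.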
I find Dennett charming, in a way, in his simplicity; he’s so sure that the parallel doesn’t run the other way — that just as computers require a human lawgiver, the world requires a divine lawgiver. The part he bolded is in a way his downfall; everyone knows, after all, that for a perfect and beautiful computing machine to exist, it is necessary for the machinist to know what arithmetic is; otherwise the machine could never get started.
Of course the whole argument Dennett is orchestrating only overthrows a straw-man clockwork deity, but it’s good every once in a while to read such things to remind us why that deity is not the one we believe in. And Dennett does point out a worthwhile connection between AI and evolution I hadn’t noticed before. On the whole worth reading.
And Dennett does raise an important issue, even if he doesn’t realize it: what exactly is the connection between competence and comprehension? Rational behavior and reason? Dennett wants to be reductionist about it, and say reason doesn’t “actually” exist, only rational behavior. That’s clearly not satisfying. What is, then?
One answer involves an appeal to aesthetics as what distinguishes man from machine. Thus, second, here’s a talk from the always interesting Bruce Sterling, saying things one can only say if one’s a novelist, and generally complicating the naive scientistic worldview. It’s long, but well worth reading, especially if you care about artificial intelligence, mind-body dualism, or what it means to be human. It includes, among other things:
A thought experiment regarding Turing’s legacy had he been exactly the same person, only German. Would he be a sinister figure, a scary fringe character in WWII historical novels? What does that say about our mythologizing of historical figures?
Speculation regarding the composition of the original Turing test, which had the computer trying to imitate a woman. It wasn’t for weird biographical reasons involving Turing’s homosexuality, but because Turing wanted to imply that the question itself (is this machine intelligent?) was flawed, and we might as well ask, “is this machine a woman?” Perhaps the Turing test itself is a form of torture. Wouldn’t we feel tortured if we were locked in a room and forced to prove our intelligence before being allowed basic human rights?
An insistence on a distinction Dennett (see above) absolutely refuses to acknowledge, between computation and cognition. In a move I completely did not expect (though perhaps I should have), he insists that cognition cannot be understood outside the context of human embodiedness and sexuality.
Some thoughts on what suicide means for the possibility of artificial intelligence. How’s this for pithy? “If we’re willing to learn from Alan Turing’s life experience, we should be devoting some thought to a suicidal Artificial Intelligence. Nobody does this, because they somehow imagine that code can’t simulate abject despair.” Or this, the climax of the Turing section of the talk:
If computation can really mimic cognition, there’s no reason for it to attain the cognitional level of one single human brain. It ought not to “mimic” a brain, but to instantly explore vast states of conscious being that have never existed before. Just roar right past us into a Vingean Singularity. Machines that can think like gods.… But could these gods talk Alan Turing out of eating that apple? I rather doubt that, frankly. Our record with divine proscriptions about apple-eating rather speaks for itself.
He ends with some thoughts on the nature of creativity in the digital age. These are more diffuse, and in a way less interesting, but they too have an important point: we just don’t yet understand what it means to be, as he calls us, “engaged, organic-intellectuals in a computational world,” and we won’t be able to do it well until we arrive at a “new aesthetics” with a stronger metaphysical foundation, one that can tell us how to place artificial intelligence. So aesthetics isn’t the solution to AI, just another side of the same problem. Anyway, he says it better than I can; just go read the talk.