The Turing Test Is Not What You Think It Is
Whether or not you caught wind of last week's excited announcement that "Eugene Goostman," a computer program (a "chatbot") devised by Vladimir Veselov, Eugene Demchenko and Sergey Ulasen, had passed the Turing Test, there's a good chance you've noticed the widespread public denunciations of the claim.
Even so noted a champion of artificial intelligence (AI) as Ray Kurzweil has dismissed the assertion (and will not, presumably, pay out the money he has promised to the first team to actually pass the test). Gary Marcus, writing in The New Yorker, is similarly critical.
I find these responses a bit odd. It's a bit like when the inferior team wins a soccer match in a penalty shootout: they won, but you have the feeling they shouldn't have won, that they didn't deserve to win, that they didn't really win. Similarly, people feel, I think, that the threshold for passing was set too low, or that the judges were too simple-minded; they didn't ask the right kinds of questions to stump the robot. The program won, yes, but obviously the victory doesn't mark the emergence of true AI. (For a thoughtful discussion, see this piece by MIT computer scientist Scott Aaronson.)
Good points, all, but they miss the point. It was never Turing's aim to devise an empirically robust way of telling whether someone or something is really thinking. Can a machine think? For Turing that question was, as he wrote, "too meaningless to deserve discussion." What is "thinking" anyway? We can hardly hope to make that notion precise.
Turing's aim, rather, was to provide as a substitute for the meaningless question a perfectly meaningful one that has the virtue of being straightforward and decidable.
The Turing Test is an imitation game. The judge has the task of deciding, on the basis of remote, text-based communication alone, whether he or she is conversing with a person or a person-imitation (a programmed computer). The machine passes the test if the judge is unable to identify it as a machine after a suitable interval.
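The protocol just described is simple enough to sketch in a few lines of code. This is only an illustrative skeleton, not anyone's actual contest software; the function names (`ask`, `verdict`, and the two reply functions) are placeholders for the judge and the hidden interlocutors:

```python
import random

def imitation_game(ask, verdict, human_reply, machine_reply, n_rounds=5):
    """One session of the imitation game, reduced to its skeleton.

    `ask` produces the judge's next question, `verdict` is the judge's
    final guess (True = "that was a program"), and the two *_reply
    functions stand in for the hidden interlocutors.
    """
    # Secretly seat either the human or the machine at the far terminal.
    is_machine = random.choice([True, False])
    respond = machine_reply if is_machine else human_reply

    transcript = []
    for i in range(n_rounds):
        question = ask(i, transcript)
        transcript.append((question, respond(question)))

    judged_machine = verdict(transcript)
    fooled = is_machine and not judged_machine  # undetected imitation
    return is_machine, judged_machine, fooled
```

Note that everything the judge ever sees is the transcript: the test deliberately screens off voice, appearance and everything else, which is exactly why it is a test of conversation and nothing more.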
The test makes a lot of sense. On the assumption that people are real thinkers, there is something to the idea that the machine is thinking if you are unable to tell, on the basis of a conversation, that you are not, in fact, conversing with a person. Machines that pass the Turing Test, you might think, are, at least with regard to written communication, as close to real thinkers as you could want.
Here is Turing's own prediction, from his 1950 paper "Computing Machinery and Intelligence":

I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
And the remarkable thing is that he seems to have been right. AI hasn't proved the existence of artificial thinkers; but it has changed the way we think about thinking, and we now find it quite natural to say such things as that Watson, an IBM computer, is the Jeopardy! champion, or that Deep Blue really beat Garry Kasparov in a chess match. And the fact of the matter is, more than 30 percent of the judges in the recent test did not suss out that Eugene Goostman was an imitator, not a child.
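The pass criterion implicit in Turing's prediction, fooling the interrogator more than 30 percent of the time in five-minute exchanges, can be stated as a one-line rule. The function name and the tallies below are illustrative, not taken from the actual event:

```python
def passes_turing_threshold(fooled_judges, total_judges):
    """Turing's 1950 benchmark, read as a pass/fail rule: the machine
    passes if judges misidentify it more than 30 percent of the time."""
    return fooled_judges / total_judges > 0.30

# Illustrative tallies only (not the event's reported figures):
print(passes_turing_threshold(10, 30))  # True: 10/30 is about 33%, above 30%
print(passes_turing_threshold(9, 30))   # False: exactly 30% is not "more than"
```

The strictness of the comparison matters: Turing's wording sets a strict inequality, so a machine that fools exactly 30 percent of its judges falls just short.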
Crucially, it was not Turing's view that his test gives you an operational procedure for deciding whether or not Eugene Goostman — or anyone else — is a real thinker.
Recall, Turing's own chief contribution in mathematics — in the theory of computability — was mathematizing the intuitive idea that some questions, but not all, can be answered by using recipes (or algorithms). It was not his claim that he had proved that all and only the algorithmically computable functions can be computed by what has come to be known as a Turing Machine. At best this is just a conjecture. And anyway, it isn't a mathematical conjecture.
You can't prove, mathematically at least, that you have successfully mathematized a notion in a way that conforms to pre-theoretical understanding. This conjecture — which is now known as the Church-Turing Thesis — is a bit of philosophy, not a bit of mathematics. (Which takes nothing away from the mathematics!)
The Imitation Game is proposed as an instructive test in something like the same spirit.
Now, as a matter of fact, I think it is probably safe to say that work on chatbots designed to pass the Turing Test is not at the cutting edge of work in AI. As reaction to the Eugene Goostman test suggests, it is just too easy for a mindless bag of tricks to imitate thinking within the confines of the game. But that fact itself is instructive and brings us a step forward.
Congratulations Eugene Goostman!
P.S. You can talk to Eugene Goostman yourself here. What do you think? (You may have trouble getting through; the site has been having lots of problems and was down the last time I checked.)