If Turing could see machines imitating humans

“I propose to consider the question, ‘Can machines think?’” So begins Alan Turing’s seminal 1950 paper Computing Machinery and Intelligence, widely considered the starting point of artificial intelligence research.

The Imitation Game

Turing suggested the so-called Imitation Game as a method of deciding whether a machine can be mistaken for a human. The game requires a human interrogator who communicates with a machine and another human. Natural-language messages are typed on a keyboard, and the replies are displayed on a screen. If the interrogator cannot reliably tell the machine from the human, the machine is considered intelligent.
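The protocol can be sketched as a short simulation (a minimal illustration only; the function names and session structure here are hypothetical, not taken from Turing's paper):

```python
import random

def imitation_game(ask, human_reply, machine_reply, guess_machine, n_questions=5):
    """One session of the Imitation Game.

    `ask` produces the interrogator's next question; `human_reply` and
    `machine_reply` answer it; `guess_machine` inspects the anonymised
    transcript and returns the label it believes hides the machine.
    Returns True when the interrogator fails to identify the machine.
    """
    # Randomly hide the two respondents behind the labels "A" and "B".
    pair = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(pair)
    respondents = dict(zip("AB", pair))

    # The interrogator only ever sees the anonymised transcript.
    transcript = {label: [] for label in respondents}
    for _ in range(n_questions):
        question = ask()
        for label, (_, reply) in respondents.items():
            transcript[label].append(reply(question))

    guess = guess_machine(transcript)          # "A" or "B"
    return respondents[guess][0] != "machine"  # wrong guess => machine wins
```

If the machine's replies are indistinguishable from the human's, the interrogator's guesses drop to chance level, which is exactly the condition Turing proposed.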

The Imitation Game, also known as the Turing test, is a way of assessing machine intelligence. Intelligence has no exact definition; it is rather a set of human capabilities, such as pattern recognition, heuristic problem solving, inference, planning and learning. Machine intelligence is based on the idea that computers should have similar capabilities.

Interestingly, the Turing test measures human-likeness rather than machine intelligence as such, a set of capabilities that is in some sense broader, yet in other ways more limited, than intelligence itself.

Unfortunately, human behaviour is not always intelligent, and Turing’s own treatment is a salient example of this. His achievements are countless. He was the mastermind behind the top-secret British project at Bletchley Park during World War II which cracked the Enigma code used to encrypt German U-boat messages.

The success of the Enigma project significantly shortened World War II, which is estimated to have saved hundreds of thousands of lives. Turing’s published scientific results were also highly influential in the fields of algorithm theory, cryptanalysis and computer science in general. The ACM (Association for Computing Machinery) established the annual Turing Award in 1966 for contributions “of lasting and major technical importance to the computer field”. The Turing Award is widely recognised as the “Nobel Prize of computer science”. Despite his achievements, Turing was arrested for homosexuality in 1952, found guilty and sentenced to chemical castration. He was so deeply affected by this that he committed suicide in 1954.

Furthermore, Turing tests can only assess systems with a chatbot interface, which offer natural-language communication through keyboards and displays. Although this limitation may seem too restrictive, chatbots in fact deserve our attention, since they reflect a significant aspect of machine intelligence history from its beginning to the state of the art.


The first chatbot, ELIZA, was built by Joseph Weizenbaum in 1965 at MIT. ELIZA was able to hold dialogues in English on any topic in the style of a Rogerian therapist, who uses non-directive questions related to the patient’s statements to encourage them to open up. ELIZA uses a relatively simple algorithm based on pattern matching, keyword extraction from user statements, question templates and a limited set of concepts and rules.
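A stripped-down version of this mechanism fits in a few lines of Python (an illustrative sketch, not Weizenbaum's original implementation; the rules and pronoun reflections shown are invented for the example):

```python
import re
import random

# A tiny ELIZA-style rule base: each pattern captures part of the user's
# statement, and a template reflects it back as a non-directive question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "I see. Can you elaborate?"]

def reflect(fragment):
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    """Match the statement against the rules; fall back to a stock prompt."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)
```

For example, `respond("I feel sad about my job")` yields a reflected question such as "Why do you feel sad about your job?". The fallback prompts are what keep the conversation going when no rule matches, which is much of why ELIZA felt attentive despite understanding nothing.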

Although Weizenbaum intended ELIZA to be a parody of a seemingly empathetic and omniscient therapist, he was surprised that people took ELIZA seriously and opened their hearts to her, even after knowing that ELIZA was a machine. It shows that people were able to accept chatbots from the very beginning.

Nowadays we talk to chatbots regularly. They are commonly found on web pages, where they ask for our opinion, offer their help and serve as entry-level customer service operators, solving simple problems and redirecting customers to human operators for more complex issues. Some of them can even communicate in spoken natural language. A lot of them are painfully silly, while a few are surprisingly brilliant.

But do any of them pass the Turing test? Let us investigate.

The Loebner Prize

In 1991 Hugh Gene Loebner introduced an annual competition and prize for the most human-like chatbots, whose capabilities are rated by Turing tests. Although the limited prizes ($1,000-$3,000) for the best competitors have been awarded year after year, there are two one-time prizes, the so-called silver and gold prizes, which have never been awarded.

The silver prize is $25,000 for a chatbot that can convince the judges in a text-based Turing test that it is human. The gold prize is $100,000 and a solid gold medal for the first chatbot that passes an extended Turing test using textual, verbal and visual communication channels, where the latter means successful interpretation of diverse kinds of images.

The Loebner Prize has induced a lot of controversy.

First of all, the extended Turing test required for the Loebner gold prize is not a Turing test at all. Verbal communication and image interpretation are not directly related to thinking; they are sensory input processing activities, which Turing himself declared should be omitted from the Imitation Game. We may conjecture that the rules for the gold prize were intentionally constructed so that the $100,000 award would not have to be paid out for several decades.

The Loebner Prize competition rules have consequently been changed over the years so that ever-improving chatbots would still not be able to pass the test. Among other changes, the length of individual test sessions was increased from 5 to 25 minutes between 2003 and 2008, and each system is now tested in multiple sessions.

Prominent AI scientists, such as Marvin Minsky, often called the father of AI, think that the Loebner competitions have not aided AI development significantly, serving rather to increase the fame of Loebner himself and of the competition’s sponsors.

The fact that the silver and gold Loebner prizes have not yet been awarded does not mean that no current chatbot could pass the Turing test.

Watson and Eugene

Turing predicted in his 1950 paper that by the year 2000 there would be systems that could pass the Turing test. Although he was seemingly wrong, his estimate was in fact quite accurate.

Today there are several systems that could arguably pass the Turing test, even if debates about this question are still going on.

The first reasonable candidate is IBM’s Watson, which beat two human champions on the Jeopardy! quiz show on public TV in 2011. Although it was not a classic Turing test, no one would have thought that Watson was a machine had it not been clearly declared.

Perhaps even more interesting are the later applications of Watson, which have successfully silenced outcries about its receiving the questions in plain text rather than as spoken words. Watson is now used in over a dozen different industries to sort through large amounts of data.

Watson, however, did not explicitly pass the Turing test in the above-mentioned cases.

On the other hand, Eugene Goostman, a chatbot simulating a 13-year-old Ukrainian boy, did. Eugene Goostman reportedly passed a formal Turing test in 2014, where 33% of the panel judges could not identify Eugene as a machine during the course of a five-minute chat conversation. Eugene Goostman thus fulfilled Turing’s criterion of being indiscernible from a human to more than 30% of the judges.
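The pass criterion itself is simple arithmetic. A minimal sketch, with illustrative judge counts chosen to match the reported 33% figure (the function name and panel size here are assumptions for the example, not from the event's published rules):

```python
def passes_turing_criterion(judges_deceived, judges_total, threshold=0.30):
    """Turing's benchmark: the machine passes if more than `threshold`
    of the judges misidentify it as human after a short conversation."""
    return judges_deceived / judges_total > threshold

# 10 deceived judges out of a panel of 30 gives 33%, clearing the 30% bar;
# 9 out of 30 gives exactly 30%, which does not.
passes_turing_criterion(10, 30)
```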

Eugene Goostman suggests that scepticism about the feasibility and existence of machine intelligence is no longer reasonable. Machine intelligence is clearly possible, and it is just a question of time until chatbots are created that are totally indistinguishable from human beings in conversation.

Turing would definitely be satisfied if he could see it for himself.

Written by

Károly Tilly

Senior Architect

