How Alan Turing and His Test Became AI Legend


The Turing Test is legendary in the field of artificial intelligence. First proposed by the visionary British mathematician Alan Turing in a landmark 1950 paper, the test provides a practical (and pretty fun) way to determine if a computer has achieved human levels of intelligence. Turing called it “the imitation game.” If a computer — through a text-only chat — can convince a human that it’s a real person, then it passes the test. Simple in theory, but nearly impossible in practice.

Turing came up with the imitation game in response to colleagues and critics in the late 1940s who insisted that a machine could never be truly intelligent. But Turing had more faith in these primitive new machines he called “digital computers.” That’s because Turing was the very first to envision something that we take for granted today — a single machine that can be programmed to do almost anything. Odds are you’re reading this article on just such a machine.

Turing’s ‘Universal Machine’

Alan Turing was the eccentric British mathematician who came up with the idea of modern computing and whose code breaking played a major role in the Allied victory over the Nazis in World War II. He was prosecuted in 1952 for having a homosexual affair (homosexual acts being illegal in Britain until 1967) and accepted a form of chemical castration as a condition of probation in order to avoid jail time. His security clearance was revoked, ending his work for the British government. He was found dead of cyanide poisoning in 1954 and was posthumously pardoned of his conviction in 2013 by Queen Elizabeth II.

Turing was writing about computers well before any such thing existed. Back in 1936, he introduced the concept of the “universal computing machine” in a dense mathematical paper called “On Computable Numbers, With an Application to the Entscheidungsproblem.”

“According to my definition, a number is computable if its decimal can be written down by a machine,” wrote Turing, a decade before the first electronic computer was built. “It is possible to invent a single machine which can be used to compute any computable sequence.”

Alan Turing
Passport photo of Alan Turing, age 16. (Flickr)

Turing’s definition of “computability” — something that a computer can do — is what’s known today as an algorithm. Turing was the first to lay out the design framework of a machine that could be programmed to run a series of discrete algorithms in order to achieve a desired task. Other mathematicians and engineers had toyed with calculating machines — most famously Charles Babbage’s 19th-century analytical engine — but Turing envisioned a device that wasn’t limited to solving one kind of problem.


“Anything you can describe as an algorithm can be done by one machine,” says Andrew Hodges, a mathematics professor at Oxford University and author of “Alan Turing: The Enigma,” the inspiration for the Oscar-winning 2014 film “The Imitation Game.”

“The universal machine is essentially what we mean by a computer now, something on which you can store the instructions and it carries them out,” says Hodges. “No one else had formalized that idea.”

A Machine With ‘States of Mind’

From the start, Turing’s universal machine was conceived as a very simplified form of artificial intelligence, even though that term wouldn’t be coined until 1956. Hodges says that the design of the universal machine was meant to imitate the inner workings of the human mind, a subject that fascinated Turing almost as much as mathematics.

In fact, when describing how his universal machine would work, Turing used the term “state of mind” to label the different “read” and “write” functions of the machine. In Turing’s conceptual machine, a length of tape is run through a read/write scanner. The tape is inscribed with bits of information represented by symbols. The scanner head can either read the symbols or write new ones according to its “state of mind.”

“The operation actually performed is determined… by the state of mind of the computer and the observed symbols,” wrote Turing in his 1936 paper. “In particular, they determine the state of mind of the computer after the operation is carried out.”
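Turing's tape-and-scanner description maps almost directly onto code. The sketch below is our own minimal Python rendering of the idea, not anything from the 1936 paper: a rule table plays the role of the machine's "states of mind," mapping each (state, observed symbol) pair to a symbol to write, a direction to move, and a next state.

```python
def run_turing_machine(tape, rules, state="start", head=0, halt="halt"):
    """Run a rule table over a tape until the halting state is reached.

    The rule table maps (state, read_symbol) -> (write_symbol, move, next_state),
    mirroring how the "state of mind" and the observed symbol together
    determine the operation performed.
    """
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    while state != halt:
        symbol = tape.get(head, "_")      # "_" marks a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Sample program (ours, for illustration): flip every bit, halt at the blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", flip))   # -> 0100
```

The point of the exercise is Turing's: nothing in `run_turing_machine` knows about bit-flipping. Swap in a different rule table and the same machine computes something else entirely.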

A decade later, when Turing was leading the stalled British effort to build one of the first electronic computers in 1946, he also studied neurology and human physiology on the side. The result was an internal paper published for the National Physical Laboratory that modeled how a computer could be programmed to “learn” on its own. Hodges sees it as one of the earliest proposals of what are now called “neural networks,” a type of deep machine learning that’s at the bleeding edge of artificial intelligence.


The Imitation Game

Turing wasn’t the only person intrigued by the similarities between human and machine intelligence. A surge of new technologies developed during World War II, including early computers, space satellites and nuclear power, had captured the intellectual and public imagination.

“As soon as computers are mentioned at all, people are talking about electronic brains and the possibility of the computer rivaling the brain,” says Hodges.

The 1948 book “Cybernetics” coined the prefix “cyber” and wondered whether it would be possible to “construct a chess-playing machine, and whether this sort of ability represents an essential difference between the potentialities of the machine and the mind.” The author, Norbert Wiener, concluded that such a machine “might very well be as good a player as the vast majority of the human race.”

It was during this era of excitement and nervous speculation about super-intelligent machines that Turing wrote “Computing Machinery and Intelligence,” what Hodges calls one of the most cited papers in philosophical literature.

“I propose to consider the question, ‘Can machines think?'” begins Turing. Since the definitions of “machine” and “think” are ambiguous, Turing narrows the scope of the question. For his purposes, the machine must be a “digital computer,” and the test of whether it can “think” would be answered by the imitation game. It goes something like this: three terminals are physically separated from one another. Two are operated by humans (one of them the questioner); the third is operated by a computer. The questioner poses questions via text to both the other human and the computer and, from the answers received, must determine which respondent is “real” and which is the computer. The computer passes the test (that is, it counts as artificially intelligent) if the questioner cannot tell the difference between “man” and “machine.”
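Turing describes the protocol only in prose, but its shape is easy to sketch. The toy Python below is purely our own illustration (the scripted respondents are obviously nowhere near passing): the two respondents are hidden behind anonymous labels, and the questioner, who sees only text transcripts, must name which label hides the machine.

```python
import random

def scripted(replies, fallback):
    # A respondent that answers from a fixed script; a stand-in only.
    return lambda q: replies.get(q, fallback)

machine = scripted({"Do you dream?": "I am idle between inputs."},
                   "Please rephrase.")
human = scripted({"Do you dream?": "Yes, most nights."},
                 "Hmm, not sure what you mean.")

def imitation_game(questioner, questions, seed=0):
    # Hide the two respondents behind the anonymous labels A and B.
    players = [("machine", machine), ("human", human)]
    random.Random(seed).shuffle(players)
    hidden = dict(zip("AB", players))
    # The questioner sees only text transcripts, never the players themselves.
    transcripts = {label: [(q, fn(q)) for q in questions]
                   for label, (_, fn) in hidden.items()}
    guess = questioner(transcripts)        # questioner names "A" or "B"
    return hidden[guess][0]                # reveal who was accused

# A naive judge who always accuses respondent "A":
accused = imitation_game(lambda transcripts: "A", ["Do you dream?"])
print(accused)  # "machine" or "human", depending on the hidden shuffle
```

In this framing, a machine "passes" when, over many rounds, the questioner's accusations are no better than a coin flip.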

The game, now known as the Turing Test, is only mentioned briefly in the paper and Hodges says that Turing didn’t take the details of the test too seriously, publishing different versions in other papers. But Turing did like the playful simplicity of it.


“In a way, he was making a drama out of it,” says Hodges. “It presented this idea [of the possibility of advanced artificial intelligence] in a way that engages people, and that ordinary people would make the decision, like a jury in a trial.”

enigma machine
An enigma machine is displayed outside the Alan Turing Institute entrance inside the British Library, London.WILLIAM BARTON/GETTY IMAGES

Will a Computer Ever Pass the Turing Test?

When the Turing Test was first published in 1950, Turing himself was confident that “intelligent machinery” (as he called it) would be able to win the imitation game within 50 to 100 years; in the paper, he predicted that in about 50 years' time a computer could fool an average questioner at least 30 percent of the time after five minutes of questioning. Will his predictions come true?

We already have computers capable of outwitting the smartest human players at other kinds of games. In 1997, IBM’s Deep Blue defeated reigning world chess champion Garry Kasparov, and in 2011, IBM’s Watson beat record-breaking “Jeopardy!” champion Ken Jennings.

But the imitation game raises the bar high on artificial intelligence and no computer has come close to convincing ordinary humans that it’s one of them. At least not yet. An annual contest called the Loebner Prize conducts its own Turing Tests on the top chatbots to see if the latest AI software can convince a panel of judges that it’s more human than its human competitors.

None of the chatbots have succeeded. The best performer, a conversational chatbot called Mitsuku, has achieved a rating of only “33 percent human.” But when I went online to chat with her, I was impressed by her natural-language responses and deep knowledge (albeit probably too deep for a typically dopey human).

And when I asked her if a chatbot will ever pass the Turing Test, she had the perfect answer:

“You be the judge of that.”

Editor's Note: The Loebner Prize has been defunct since 2020, but as of 2022, the Turing Test has still not been passed.

In addition to being a wildly original thinker in mathematics, philosophy and engineering, Turing was a world-class cross-country runner who may have qualified for the 1948 Olympics if not for an injury.


