AI Program Texts Its Way to Turing Test Award

Researchers at the University of Reading in England awarded the 18th annual Loebner Prize Sunday. The competition tests the conversational capabilities of artificial intelligence using text messages, an examination known as the “Turing Test.”

The winner of the bronze medal — the highest award given to date — was Fred Roberts of Hamburg, Germany, whose Elbot program was pegged as the best of the five participants by a panel of 12 judges. Elbot managed to convince three of the judges that they could be texting with a human, not a machine.

“This has been a very exciting day with two of the machines getting very close to passing the Turing Test for the first time. In hosting the competition here, we wanted to raise the bar in artificial intelligence, and although the machines aren’t yet good enough to fool all of the people all of the time, they are certainly at the stage of fooling some of the people some of the time,” said Kevin Warwick, a professor at the University of Reading’s School of Systems Engineering and the competition’s organizer.

Making Chit Chat

The Turing Test is the brainchild of Alan Turing. It measures a machine's ability to hold a text conversation with a human judge and, in doing so, to convince the judge that the responses could be coming from another human rather than from a computer.

Can a computer think? That’s a question Turing attempted to answer with an article published in 1950. He posited that a computer could be said to think if it was indistinguishable from a human based on its conversational responses.

In Sunday’s test, the five entrants from programmers around the world engaged in five-minute conversations that covered a range of topics from the weather to the global economic crisis to jokes.

To pass the Turing Test, a computer must convince 30 percent of the judges that it is human. Elbot came very close to passing the test, convincing a quarter of the panel it could be human.
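The pass criterion comes down to simple arithmetic. A minimal sketch, assuming a straight fraction-of-judges rule (the function name and scoring code are illustrative, not the competition's actual software):

```python
def passes_turing_test(judges_fooled, total_judges, threshold=0.30):
    """Return True if the entrant fooled at least `threshold` of the judges."""
    return judges_fooled / total_judges >= threshold

# Elbot convinced 3 of the 12 judges -- 25 percent, just short of the bar.
print(passes_turing_test(3, 12))   # False
# One more fooled judge (4 of 12, about 33 percent) would have cleared it.
print(passes_turing_test(4, 12))   # True
```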

No computer has taken the competition’s silver or gold medals. The silver would go to a machine able to fool half the judges, and the gold would be awarded to one that could process both audio and visual data in addition to text.

“In 10 to 15 years, you’re going to start getting interactions with online agents that are able to integrate across much of the conversation sources of knowledge from different computer systems that people will say ‘Goodness, that’s an intelligent system I’m dealing with,’” Jackie Fenn, a Gartner research analyst, told TechNewsWorld.

Artificial Smarts

The University of Reading’s examination is less a measure of the overall state of artificial intelligence than a test of one specific area: general conversational intelligence and flexibility, according to Fenn.

“There have been many instances in the several decades people have been studying AI problems of very specific tasks that computers have been able to do as well and eventually better than people, like playing chess. That was once viewed as the ultimate challenge of AI, and now computers regularly beat humans. The Turing Test is a particular area of AI that is focused on general conversational ability that, as it turns out, is one of the most challenging to program because it’s not targeted at a specific task,” she said.
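The difficulty Fenn describes is easiest to see in the earliest chat programs, such as ELIZA, which matched keywords and echoed fragments of the user's input back as questions. A toy sketch in that spirit shows why scripted approaches break down in open-ended conversation (the rules below are invented for illustration; real entrants like Elbot are far more elaborate):

```python
import re

# A few hand-written rules in the style of 1960s ELIZA: match a keyword,
# echo part of the input back as a question or stock remark.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bweather\b", re.IGNORECASE), "Yes, the weather is a safe topic, isn't it?"),
    (re.compile(r"\beconom", re.IGNORECASE), "The economic crisis worries everyone these days."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    # No rule fired: deflect -- the classic giveaway of a scripted bot.
    return "Tell me more."

print(respond("I am curious about AI"))
print(respond("What do you think of the weather?"))
print(respond("Do you like jazz?"))
```

Any topic outside the rule list falls through to the same canned deflection, which is exactly the kind of response an attentive judge learns to probe for.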

1 Comment

  • I’m afraid the story says more about those 3-out-of-12 judges than about Elbot. I gave it a try myself and knew after about 3 questions. Nice improvement over Eliza but no cigar.
