AI Program Thinks Like a 4-Year-Old

Researchers at Rensselaer Polytechnic Institute have created a virtual 4-year-old child that can reason.

The “child,” named “Eddie,” can reason about his own beliefs to draw conclusions in a manner that matches human children of that age.

To test Eddie’s reasoning powers, the group created a demo in Second Life based on the classic false-belief test from developmental psychology: Eddie watched one person place an object in one location and then leave the virtual room, after which a second person moved the object to another location in the room. Eddie was then asked where the first person would look for the object when he got back.

Eddie’s response was the second location, where the object actually was. The answer is incorrect, since the first person never saw the object being moved, but it is typical of a 4-year-old child in the real world.

Like real children, Eddie can learn from his mistakes and, if the test is run again, will give the correct answer.
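The article does not describe how Eddie’s reasoning is implemented, but the test itself is easy to pin down in code. The following Python sketch is purely illustrative, with all names and structures invented here: it contrasts a naive agent that answers with the true state of the world, as Eddie initially does, against one that tracks each observer’s beliefs and therefore gives the answer an older child would.

```python
# Hypothetical sketch of the false-belief test described above.
# This is NOT the Rensselaer/Eddie code; names and structure are illustrative only.

class Observer:
    """Tracks what one character believes about the object's location."""
    def __init__(self, name):
        self.name = name
        self.believed_location = None
        self.present = True  # whether this character can see events in the room

    def observe_move(self, location):
        # A character only updates its belief if it actually witnesses the move.
        if self.present:
            self.believed_location = location


def run_false_belief_test():
    person_a = Observer("Person A")
    person_b = Observer("Person B")

    # Person A places the object in location 1 while both are in the room.
    actual_location = "location 1"
    for obs in (person_a, person_b):
        obs.observe_move(actual_location)

    # Person A leaves the room, then Person B moves the object.
    person_a.present = False
    actual_location = "location 2"
    for obs in (person_a, person_b):
        obs.observe_move(actual_location)

    # A naive agent answers with the true state of the world (Eddie's first try).
    naive_answer = actual_location
    # A theory-of-mind agent answers with Person A's belief (the correct answer).
    tom_answer = person_a.believed_location

    print("Naive answer:", naive_answer)          # location 2, wrong
    print("Theory-of-mind answer:", tom_answer)   # location 1, right


if __name__ == "__main__":
    run_false_belief_test()
```

The only difference between the two answers is whether the agent keeps a separate record of what each observer has seen, which is exactly the distinction the test is designed to probe.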

Not Your Everyday Avatar

Second Life is a virtual world launched in 2003 in which its users, called “residents,” can interact with each other through virtual representations of themselves, called “avatars.”

That isn’t what Eddie is about. “By definition, creatures like Eddie are not avatars, which are being directly controlled by real humans in the real world,” Selmer Bringsjord, head of the department of cognitive science at Rensselaer and the leader of the project, told TechNewsWorld.

“The idea is, you might have a doppelganger, a counterpart in the virtual world, but that’s only because the synthetic character in the virtual world has your memories, your background and your capacities.”

Bringsjord’s project aims to create virtual people who reason, hold beliefs and emotions, and even have religion. The ultimate goal is something like the Starship Enterprise’s holodeck, where humans could interact with holographic representations of other people.

The virtual characters will be powered by theorems because “this approach to artificial intelligence is based on mathematical logic so everything the character thinks, believes, says is a theorem,” Bringsjord said.
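Bringsjord does not spell out the logic engine here, so the sketch below is only a rough, hypothetical illustration of the “everything is a theorem” idea: a character’s prediction is derived from a handful of explicit belief axioms by naive forward chaining over simple rules. The axioms, the rule format and the function names are assumptions made for this example, not the project’s actual machinery.

```python
# Illustrative only: a toy forward-chaining prover over propositional facts,
# sketching the idea that a character's conclusions are theorems derived
# from explicit axioms. Not the project's actual logic engine.

def forward_chain(facts, rules):
    """Repeatedly apply rules (premises -> conclusion) until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived


# Hypothetical belief axioms for a character like Eddie.
facts = {
    "object_placed_in_location1_by_A",
    "A_left_room_before_move",
    "object_moved_to_location2_by_B",
}
rules = [
    ({"object_placed_in_location1_by_A"}, "A_believes_object_in_location1"),
    ({"A_believes_object_in_location1", "A_left_room_before_move"},
     "A_still_believes_object_in_location1"),
    ({"A_still_believes_object_in_location1"}, "A_will_search_location1"),
]

theorems = forward_chain(facts, rules)
print("A_will_search_location1" in theorems)  # True: the prediction is a theorem
```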

Powering the Math

The theorems that replicate the “very rich set of beliefs and knowledge” behind the human behavioral repertoire are so complex that supercomputing power is needed to work with them.

Two of Rensselaer’s state-of-the-art research facilities — the Computational Center for Nanotechnology Innovations (CCNI) and the Experimental Media and Performing Arts Center (EMPAC) — will be used to power the project.

CCNI, the most powerful university-based supercomputing system in the world, consists of massively parallel IBM supercomputers in the Blue Gene family; IBM Power-based Linux clusters; and AMD Opteron processor-based clusters. These provide more than 100 teraflops of computing power.

EMPAC, opening in October 2008, will feature powerful visualization, audification, immersive environments, sensor applications, communication technology and physical modeling capabilities.

Strong AI or Weak AI?

Whether or not you need all that computing power is open to debate.

Rensselaer is taking what is known as the “Strong AI” approach, which demands enormous computing power because the goal is to duplicate what the brain does.

However, advocates of the “Weak AI” approach hold that it is more important to reproduce the results of the brain’s operation — human behavior — and this requires far less computing power.

Ai Research, which has a research center near Tel Aviv, takes the latter approach. It is building a child machine: a computer program designed to converse with humans in natural language and to learn from spoken interactions with human caretakers in the same way, and at the same rate, as a human infant. The company expects the program to grow from infancy to adulthood and to pass the Turing Test, in which a machine cannot be distinguished from a human in conversation, within 10 years.

“My choice to pursue Weak AI was motivated by my belief in the contingency of the hardware, the essentiality of the logic making up the mind, and by my personal training — computer science, linguistics and the philosophy of language,” Yaki “Jack” Dunietz, Ai’s president and project leader, told TechNewsWorld.

It’s Just a Kid

“I’m always suspicious of this kind of thing where they’re dealing with children,” anthropologist and sociologist Paul Jorion told TechNewsWorld. “I always have the feeling that there are some major issues they haven’t been able to solve yet.”

Back in 1989, Jorion developed ANELLA, the Associative Network with Emerging Logical and Learning Abilities, for the artificial intelligence unit of British Telecom; its intelligence was guided by the dynamics of affect, or feeling.

Most of the approaches toward AI “have taken an over-sophisticated view of the problem,” Jorion said. His, on the other hand, was “very simple — I’ve got a universe of words, and you just find a way to connect them that makes sense.”
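ANELLA’s actual design is not described in the article, so the sketch below should be read only as a toy illustration of the idea Jorion states: take a universe of words, connect them, and follow the connections. The class, the association scheme and the deterministic walk are all invented for this example.

```python
# Toy illustration of an associative word network in the spirit Jorion describes.
# This is not ANELLA's design, just a sketch of connecting words and following links.

from collections import defaultdict

class WordNetwork:
    def __init__(self):
        self.links = defaultdict(set)

    def associate(self, word_a, word_b):
        # Associations are symmetric: each word points at the other.
        self.links[word_a].add(word_b)
        self.links[word_b].add(word_a)

    def chain(self, start, steps):
        """Walk the network from a starting word, one association per step."""
        current, path = start, [start]
        for _ in range(steps):
            neighbors = self.links[current] - set(path)
            if not neighbors:
                break
            current = sorted(neighbors)[0]  # deterministic pick, for the sketch
            path.append(current)
        return path


net = WordNetwork()
net.associate("child", "toy")
net.associate("toy", "box")
net.associate("box", "room")
print(net.chain("child", 3))  # ['child', 'toy', 'box', 'room']
```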

Saving the Country

Virtual synthetic characters with the thoughts, feelings and beliefs of real humans can also contribute to homeland security, Bringsjord said.

One area is training: simulations in which troops interact with synthetic characters that are “very robust and have these actual religious and other mindsets” common to people in other societies would prove useful, he explained.

Hand-to-hand combat is another area of training.

Yet another possibility is detective work, where investigators could “create scenarios in the past that are very detailed and run in real time,” Bringsjord said.

Finally, there are “less appetizing” possibilities such as providing synthetic characters with control over weapons, but then “you have to make sure the synthetic characters have a strong code of ethics,” Bringsjord said.

The project is ambitious, and “I’m not claiming that these synthetic characters now, or in the future, will be genuinely conscious or self-conscious,” Bringsjord added.

