A team of university researchers has been busy building Robo Brain, a large-scale computational system that collates information from the Internet, as well as data from computer simulations and real-life robot trials, and learns from it.
That knowledge base will be used to drive prototypes for robotics research, household robots and self-driving cars.
The researchers — from Cornell, Stanford and Brown universities and the University of California at Berkeley — have begun downloading 1 billion images, 120,000 YouTube videos, and 100 million manuals and how-to documents to the database.
Robo Brain will process images to pick out the objects in them, and connect images and video with text to learn to recognize objects, how they are used, human language and behavior.
Deep learning techniques will be used with this data to help Robo Brain learn the relationships between humans and everyday objects.
However, accessing the Web for source material also means running into 404 URLs, unsupported video types, invalid file paths, and irrelevant video compilations.
“If the information fed to the robot is accurate and complete, it will find the best solution based on that information, but all information has some inaccuracies in it, and those … will likely lead to mistakes,” Rob Enderle, principal analyst at the Enderle Group, told TechNewsWorld.
How Robo Brain Will Help Robots Learn
The Robo Brain is best pictured as a large graph with multiple branches, rather like a chart of relationships between Facebook friends, remarked Aditya Jami, a visiting researcher at Cornell who designed the database.
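A graph of that kind can be sketched as a set of labeled relations between concept nodes. The sketch below is purely illustrative; the node names, relation labels, and class design are assumptions for demonstration, not Robo Brain's actual schema.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy knowledge graph: nodes are concepts (objects, actions),
    edges are labeled relations between them."""

    def __init__(self):
        # node -> list of (relation, neighbor) pairs
        self.edges = defaultdict(list)

    def add_relation(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def related(self, subject, relation):
        """Return all nodes linked to `subject` by `relation`."""
        return [o for r, o in self.edges[subject] if r == relation]

graph = KnowledgeGraph()
graph.add_relation("mug", "is_a", "container")
graph.add_relation("mug", "grasped_by", "handle")
graph.add_relation("keyboard", "found_near", "monitor")

print(graph.related("mug", "is_a"))  # ['container']
```

Querying such a graph lets a robot traverse from an object it recognizes to related facts about how that object is held or where it is typically found.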
A robot’s computer brain stores what it has learned in a Markov model, which consists of a list of the possible states of a system, the possible transition paths between those states, and the probabilities governing those transitions.
Each state, or node, could represent an object, an action, or a part of an image; each is assigned a probability. The robot’s brain will link these in a pattern, with each state depending on the probabilistic outcome of the preceding one.
Think of this as successive snapshots of a chair tilting. Each snapshot is a node. Up to a point, there are two transition paths for each node: The chair either will tilt further and eventually tip over, or it will tilt back and eventually be stable again.
The robot’s brain will look for a chain in the knowledge base that matches those probability limits.
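The tilting-chair example can be sketched as a small Markov chain. The states and transition probabilities below are invented for illustration; the source does not give actual numbers.

```python
import random

# Each state maps to a list of (next_state, probability) pairs.
# "tipped_over" is absorbing: once reached, the chain stays there.
transitions = {
    "upright":     [("tilt_small", 1.0)],
    "tilt_small":  [("upright", 0.7), ("tilt_large", 0.3)],
    "tilt_large":  [("tilt_small", 0.4), ("tipped_over", 0.6)],
    "tipped_over": [("tipped_over", 1.0)],
}

def step(state, rng):
    """Sample the next state according to the transition probabilities."""
    next_states, probs = zip(*transitions[state])
    return rng.choices(next_states, weights=probs)[0]

def simulate(start="tilt_small", max_steps=20, seed=0):
    """Follow the chain from `start` until it tips over or steps run out."""
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(max_steps):
        state = step(state, rng)
        path.append(state)
        if state == "tipped_over":
            break
    return path

print(simulate())
```

Each snapshot of the chair is a node, and at each step the chain either recovers toward stability or tilts further, exactly as described above; matching an observed sequence against such a chain is what lets the robot predict whether the chair will tip.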
If a robot comes across a new situation, it can query Robo Brain, which essentially will be a database in the cloud.
Tapping the, Ahem, Wisdom of the Crowd
The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections. At publication time, the site had a “click to upvote” triangular button and a comments section.
Visitors have to log in to comment, but it’s not clear whether there are any other processes to control input.
“The more data you feed into an AI algorithm, the better the intelligence,” Jim McGregor, founder and principal analyst at Tirias Research, told TechNewsWorld. “However, you must remember the term ‘garbage in, garbage out.’”
Trouble in the Wind?
YouTube videos and other Internet sources may not always show the safest or most appropriate way to perform tasks, McGregor cautioned. The robot “could learn something undesirable.”
Also, the quality of the information entered by visitors might be questionable, and there is the potential for disruption of the information and, possibly, the algorithms, McGregor suggested.
“Programming intelligence is incredibly hard and very complex,” Enderle said. It’s possible to introduce a glitch into an algorithm, which might result in an unfavorable outcome.