IBM, Academics Seek to Create a Computer That's More Like Us
Computers can calculate at speeds and scales that far outstrip what an ordinary person can manage, but they still aren't anywhere near as complex as a human brain. IBM and five major universities plan to change that through a DARPA-funded initiative designed to build a computer that can mimic the way the mind works.
IBM and five universities are receiving funding from a government agency to build a supercomputer -- but not just any supercomputer. They've been tasked with building hardware and software that mimics the human brain.
"There are no computers today that can even remotely approach the robust and versatile functionality of the brain," said Dharmendra Modha, manager of cognitive computing at IBM Research.
"The mind is a collection of mental processes dealing with sensation, perception, action, cognition, emotion and interaction," he told TechNewsWorld. "It can integrate senses such as sight, hearing, touch, taste and smell. And it can act in a context-dependent way in real-world complex environments in the presence of ambiguity, while requiring very low power consumption and being very compact."
Cognitive computing, explained Modha, is the quest to engineer mind-like intelligent business machines by reverse engineering the computational function of the brain and packaging it in a small, low-power chip.
IBM and top researchers from Stanford University, University of Wisconsin-Madison, Cornell University, Columbia University Medical Center and University of California-Merced have received US$4.9 million in funding from the Defense Advanced Research Projects Agency for the first phase of DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE, initiative.
During the first nine months, researchers will focus on developing nanoscale, low-power, synapse-like devices, and on uncovering the functional microcircuits of the brain.
The research will build on the IBM cognitive computing team's recent work with IBM's Blue Gene supercomputer: a near-real-time simulation of a brain at the scale of a small mammal's, using cognitive computing algorithms to develop mathematical hypotheses of brain function and structure.
Besides Modha, other members of the team include Stanford University's Kwabena Boahen, H.-S. Philip Wong and Brian Wandell; University of Wisconsin-Madison's Giulio Tononi; Rajit Manohar of Cornell; Columbia's Stefano Fusi; and Christopher Kello of the University of California-Merced. IBM researchers include Stuart Parkin, Chung Lam, Bulent Kurdi, J. Campbell Scott, Paul Maglio, Simone Raoux, Rajagopal Ananthanarayanan, Raghav Singh, and Bipin Rajendran.
Artificial Intelligence vs. Cognitive Computing
The goal of cognitive computing is to engineer holistic intelligent machines that can integrate and act on huge amounts of sensory data.
"The underlying issue driving this is that as computers become used for increasingly complex and large problems, you run into some serious challenges with how to approach those problems in a traditional, linear computational fashion," Charles King, principal with Pund-IT, told TechNewsWorld.
"Artificial intelligence starts with a problem -- not a question -- and then seeks to develop an algorithm to solve that problem. Cognitive computing approaches it backwards; the idea is to create a mechanism that is capable of acting like a brain, assembling the pieces of complex puzzles and then speeding decision making."
Real-world applications might include a computer that can assemble and digest the massive volumes of information from the global financial system -- and then make decisions based on that input, King said. "It is virtually impossible for a human to make that kind of calculation."
Another possibility might be an application that can identify areas of the world that will be affected by climate change to a much higher degree of accuracy, suggested King. Sensors can now be deployed by the millions to measure changes in ocean levels -- but there is no way to effectively monitor and then analyze all of that data.
On the consumer level, Modha said, it is conceivable that a small device -- an "iBrain," let's call it -- could be developed to alert the user when something untoward happens, based on the sensory information it receives. For instance, a portable device could monitor an unoccupied home and alert the homeowner when a system or situation requires attention.
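Such a device is, of course, hypothetical, but the basic pattern -- watch a sensor stream, learn what "normal" looks like, and alert on departures from it -- can be sketched in a few lines. The class, window size and threshold below are illustrative assumptions, not part of IBM's design:

```python
# Toy home-monitoring sketch: flag a sensor reading that drifts far
# outside the running norm of recent readings. Illustrative only.

from statistics import mean, stdev

class SensorMonitor:
    """Tracks one sensor stream and flags readings far from the recent norm."""

    def __init__(self, window=20, tolerance=3.0):
        self.window = window        # how many recent readings define "normal"
        self.tolerance = tolerance  # alert beyond this many standard deviations
        self.readings = []

    def observe(self, value):
        """Record a reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.readings) >= self.window:
            mu, sigma = mean(self.readings), stdev(self.readings)
            anomalous = sigma > 0 and abs(value - mu) > self.tolerance * sigma
        # Keep only the most recent `window` readings as the baseline.
        self.readings = (self.readings + [value])[-self.window:]
        return anomalous

monitor = SensorMonitor()
normal_temps = [20.0 + 0.1 * (i % 5) for i in range(30)]  # steady readings
alerts = [monitor.observe(t) for t in normal_temps]       # all quiet
spike_alert = monitor.observe(45.0)                       # a sudden spike
```

A real system in this vein would fuse many such streams and learn richer notions of "normal" than a running mean, which is precisely where brain-like integration is meant to help.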
Applications such as these are at least 10 years away, though, and the team must solve a few practical issues first, said David Orenstein, spokesperson for the Stanford School of Engineering.
"Fundamentally, the issue is that this is a very different way of designing a computer from the current structure of binary 0s and 1s," he told TechNewsWorld. "The brain's structure allows it to form new connections among its switches on the fly, with each element connected to many others, as opposed to a step-by-step linear progression."
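One textbook way to picture connections forming on the fly is Hebbian plasticity -- the rule that units which fire together wire together. The toy sketch below is illustrative only (it is a standard neural-modeling exercise, not the SyNAPSE hardware design): correlated activity carves out strong links where none existed.

```python
# Hebbian plasticity sketch: synapses strengthen whenever the two units
# they join are active at the same time. Illustrative model only.

import numpy as np

rng = np.random.default_rng(0)
n_units = 8
weights = np.zeros((n_units, n_units))  # synaptic strengths, initially absent
learning_rate = 0.1

for _ in range(100):
    activity = np.zeros(n_units)
    activity[:4] = rng.random(4) > 0.2  # group A: usually active together
    activity[4:] = rng.random(4) > 0.8  # group B: rarely, independently
    # Hebb's rule: dw_ij = lr * x_i * x_j (co-activation strengthens the link)
    weights += learning_rate * np.outer(activity, activity)

np.fill_diagonal(weights, 0.0)          # no self-connections

within_group = weights[:4, :4].mean()   # links among the co-active units
between_group = weights[:4, 4:].mean()  # links across the two groups
```

After training, the connections within the co-active group are markedly stronger than those between groups -- the network has grown its own wiring from experience, rather than following a fixed program.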
In short, new computing designs and materials will be needed. A researcher at Stanford has been working on this problem, using standard transistors "in creative arrangements," Orenstein said.
"There is some thinking that we might want to explore other ways, as well as trying to scale up what we are already doing," he added.