'Mind-Reading' Tech May Give Speechless a New Voice
Brain-reading technology tested in a UC Berkeley experiment might eventually help people who have speech defects or who are unable to speak because of illness or injury, but "that is far in the future, and we need to understand more about how the brain processes speech imagery," said team leader Brian Pasley. The main obstacle right now is the question of whether speech imagery is similar to speech perception.
Someday, people whose ability to speak has been damaged by illness or injury may be able to vocalize anyway with the help of technology. Researchers at the University of California, Berkeley, have made strides toward translating the words a person thinks into real speech.
The researchers used 15 patients undergoing neurosurgery as subjects.
They placed electrodes on the subjects' brains, then recorded the activity detected as the subjects listened to a conversation. Computational algorithms were then used to reconstruct the sounds from that recorded brain activity and play them back.
The subjects were exposed to both English words and nonsensical words, and the system worked equally well for both.
That means once the technology becomes practicable, it could be used with any spoken language.
"The approach was based on auditory features, so it's not specific to English," research team leader Brian Pasley, a post-doctoral student at UC Berkeley's Helen Wills Neuroscience Institute, told TechNewsWorld.
Details of the Research
Arrays of up to 256 surface electrodes were placed on the surfaces of the superior and middle temporal gyri of the participants' brains.
The superior temporal gyrus includes the primary auditory cortex, which is responsible for the sensation of sound, and Wernicke's area, which is involved in processing speech so it can be understood as language.
It's not clear exactly how the middle temporal gyrus functions, but it's been connected with various processes, including contemplating distance, recognizing faces, and accessing the meaning of words while reading.
The participants listened to words for five to 10 minutes while neural signals were recorded from the electrode arrays. Those signals were then used to reconstruct the sounds the participants had heard.
Pasley used two different computational models to match the spoken sounds to the pattern of activity in the electrodes. The better of the two reproduced a sound close enough to the original word that the researchers could correctly guess the word.
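The paper's general technique is stimulus reconstruction: fitting a model that maps recorded neural activity back to the sound's spectrogram. A minimal sketch of that idea, using simulated data and plain ridge regression (the actual features, model forms, and fitting details in the study are not specified here and are assumed for illustration):

```python
# Hypothetical sketch of a linear stimulus-reconstruction model: predict a
# spectrogram from multi-electrode neural activity via ridge regression.
# All sizes and data are simulated; this is not the study's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: T time bins, E electrodes, F spectrogram frequency bands.
T, E, F = 2000, 64, 32
neural = rng.normal(size=(T, E))                    # recorded brain activity
weights_true = rng.normal(size=(E, F))              # unknown "ground truth" map
spectrogram = neural @ weights_true + 0.1 * rng.normal(size=(T, F))

# Fit reconstruction weights on a training split with ridge regularization.
split, lam = 1500, 1.0
X, Y = neural[:split], spectrogram[:split]
W = np.linalg.solve(X.T @ X + lam * np.eye(E), X.T @ Y)

# Reconstruct the held-out spectrogram and score it by per-band correlation
# between the reconstruction and the actual sound representation.
Y_hat = neural[split:] @ W
Y_test = spectrogram[split:]
r = np.array([np.corrcoef(Y_hat[:, f], Y_test[:, f])[0, 1] for f in range(F)])
print(f"mean reconstruction correlation: {r.mean():.2f}")
```

In this framing, "guessing the word" amounts to comparing the reconstructed spectrogram against candidate words' spectrograms and picking the closest match.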
Funding and Other Details
The results of the research were published in the January 2012 issue of PLoS Biology.
The research was funded by the National Institute of Neurological Disorders and Stroke, which is part of the United States National Institutes of Health, and the Alexander von Humboldt Foundation.
It builds on previous work by other research teams. For example, Pasley's coauthors at the University of Maryland had earlier been able to guess the words that scientists read to ferrets by interpreting recordings from the animals' brains, even though the ferrets couldn't understand the words.
In 2009, IBM scientists Craig Becker and Leugim Bustelo were awarded United States Patent US7574357 for a method of communicating using synthesized speech based on subvocal speech signals.
This technology might be complementary to technology that lets people control wheelchairs or prosthetic arms with their thoughts. All these technologies fall under the general category of brain-computer interfaces.
In 2005, for example, Matt Nagle, who became a quadriplegic after being stabbed in an attack, participated in a clinical trial in which a neural interface system was implanted on the surface of his brain. This let him move a computer mouse cursor to check email, control a TV set, and send commands to an external prosthetic hand, among other things, using only his thoughts.
Uses for the Technology
The technology tested in the Berkeley experiment might eventually help people who have speech defects or who are unable to speak because of illness or injury, but "that is far in the future, and we need to understand more about how the brain processes speech imagery," Pasley said.
The main obstacle right now is the question of whether speech imagery is similar to speech perception, he noted.
Other problems exist.
"Context still needs to be determined, and mapping language by words would still require some kind of a logic engine to capture the true meaning of the phrase," Rob Enderle, principal analyst at the Enderle Group, told TechNewsWorld.
It might be possible to make the technology bidirectional, bypassing the vocal cords altogether and using a form of encryption that would make it more private than normal speech, Enderle speculated.
"Think of a reporter being able to report on an event without having to actually say anything," he suggested. "And this capability could be impressive for any kind of spying."