Researchers led by speech neuroscientist Edward Chang at the University of California San Francisco (UCSF) on Tuesday reported their success at decoding speech attempts in real time by reading the activity in the speech centers of test subjects’ brains.
Three people capable of normal speech, who were being treated for epilepsy at the UCSF Medical Center, participated in the study.
They allowed the researchers to use the tiny recording electrodes previously placed on the surface of their brains to map the origins of their seizures in preparation for neurosurgery.
The technique, called “electrocorticography,” or ECoG, provides richer and more detailed data about brain activity than technologies like EEG or fMRI.
Because ECoG electrodes do not penetrate the brain tissue, they could be a better option for long-term brain-computer interfaces (BCIs) than electrodes physically inserted into the brain.
Chang is a member of the Weill Institute for Neurosciences at UCSF.
The Mechanics of the Experiment
Chang’s team asked the participants nine simple predefined questions and gave them a choice of 24 possible answers.
The team developed machine learning algorithms to decode specific speech sounds from the test participants’ brain activity.
After some training, the algorithms were able to detect when participants were hearing a new question or beginning to respond, and to identify which of the 24 standard responses participants gave, with up to 61 percent accuracy.
The algorithms’ speed and accuracy improved when they used the test subjects’ brain activity first to identify which of the predefined questions they had heard, in order to provide context. That method yielded up to 75 percent accuracy.
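The context-aware approach can be sketched as a two-stage probabilistic decode: first infer which question was heard, then use that inference as a prior over the plausible answers before combining it with the answer classifier's output. The sketch below is purely illustrative — the weight matrices, feature sizes, and context-prior table are random stand-ins, not the UCSF team's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

N_QUESTIONS, N_ANSWERS, N_FEATURES = 9, 24, 128

# Hypothetical stand-ins for trained linear decoders: each maps a window
# of neural features to scores over its classes.
W_q = rng.normal(size=(N_FEATURES, N_QUESTIONS))  # question decoder
W_a = rng.normal(size=(N_FEATURES, N_ANSWERS))    # answer decoder

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical context prior: which of the 24 answers are plausible
# responses to each of the 9 questions (a small subset per question).
context_prior = np.full((N_QUESTIONS, N_ANSWERS), 1e-3)
for q in range(N_QUESTIONS):
    plausible = rng.choice(N_ANSWERS, size=4, replace=False)
    context_prior[q, plausible] = 1.0
context_prior /= context_prior.sum(axis=1, keepdims=True)

def decode_answer(question_features, answer_features):
    """Two-stage decode: infer the question, then use it as context."""
    p_question = softmax(question_features @ W_q)     # stage 1
    p_answer_neural = softmax(answer_features @ W_a)  # stage 2, neural only
    # Marginalize the context prior over the decoded question
    # distribution, then combine it with the neural likelihood.
    prior = p_question @ context_prior
    p_answer = p_answer_neural * prior
    return p_answer / p_answer.sum()

q_feat = rng.normal(size=N_FEATURES)
a_feat = rng.normal(size=N_FEATURES)
probs = decode_answer(q_feat, a_feat)
print(int(np.argmax(probs)))  # index of the most likely of the 24 answers
```

The gain reported in the study follows the same intuition: restricting the answer search to responses that make sense for the decoded question sharpens the final classification.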
The technology could help people “who have lost the ability to speak but still have cognitive capability that would allow understanding and the ability to form speech-related thoughts,” noted Rob Enderle, principal analyst at the Enderle Group.
However, training the algorithms “could be very resource-intensive,” he told TechNewsWorld. “Generally that training would be part of getting the system linked to the individual’s brain.”
Patients probably would have to get updated training every six to 12 months, Enderle said. “Accuracy would need to exceed 90 percent to be practical for the rest of us, and 99 percent for general adoption.”
Funding From Facebook
Facebook Reality Labs (FRL) funded the study as part of Project Steno, its research collaboration with UCSF, a Facebook spokesperson said in a statement provided to TechNewsWorld by company rep Eloise Quintanilla.
Facebook researchers have provided input and engineering support to Chang’s lab, but UCSF oversees the research program and works directly with volunteer subjects. The Facebook researchers have limited access to de-identified data, which remain on site at UCSF and under the university’s control.
Chang’s lab has launched another study, BRAVO (BCI Restoration of Arm and Voice), with Karunesh Ganguly, an associate professor of neurology at UCSF. That research aims to determine whether ECoG neural interface implants can be used to restore movement and communication abilities to patients paralyzed due to brain or nerve issues.
FRL is funding part of the BRAVO study “that is focused exclusively on demonstrating the ability to allow the participant to generate text on a computer screen using their brain activity,” the spokesperson told TechNewsWorld. “Our understanding is that UCSF’s BRAVO study is broader in scope.”
Facebook wants to use the technology for augmented reality glasses.
Privacy and Other Ethical Issues
Facebook is not known for its adherence to user privacy principles. The FTC recently fined the company US$5 billion for breaching users’ privacy, a penalty widely criticized as inadequate.
Further, Facebook is part of the Data Transfer Project with Apple, Google and Twitter, an effort to develop interoperable systems to transfer data between services.
Facebook tracks people ubiquitously on the Web through its “Like” button, which is used by more than 8 million websites, and its Facebook software development kits, which are embedded in more than 60 percent of the top apps on both iOS and Android.
That raises the question of whether data gleaned from brain-computer interface research are safe, and what ethical issues arise with related research and product development.
The U.S. National Institutes of Health has developed a Neuroethics Roadmap for the NIH BRAIN Initiative.
The roadmap lists several neuroethics challenges:
- Developing ethical standards of biological material and data collection and evaluating how local standards compare to those of global collaborators;
- Protecting the privacy of human brain data (e.g., images and neural recordings) and of data in immediate or legacy use beyond the experiment;
- Understanding the moral significance of neural systems under development in neuroscience research laboratories;
- Determining the requisite or minimum features of engineered neural circuitry that trigger concerns about moral significance; and
- Evaluating whether ethical standards for research are adequate and appropriate for evolving methodologies and brain models.
“Right now there is a debate on whether the data aggregated and derived from research will be privately owned by an individual, collectively by society, or come under corporate ownership,” noted Ray Wang, principal analyst at Constellation Research.
Data derived from such research potentially could “create a biological signature that is no longer unique and can be hacked,” he told TechNewsWorld. “Other ethical questions are raised by what data has been captured and whether that data can adversely impact privacy.”
Facebook or UCSF would own the data from the research projects under way in Chang’s lab, but “the issue still needs to be resolved,” Wang said.
Personal data should be made a property right, he suggested. Once that is done, “we can transact, trade, gift, or lease that information for monetary or nonmonetary reasons.”
Regulators “should be surrounding this technology with laws that prevent abuse,” Enderle said. However, “it might destroy much of the value to Facebook.”