Intel Explores New Modes of Communication for Stephen Hawking

World-renowned physicist Stephen Hawking’s physical condition is further deteriorating, and Intel wants to help the famous scientist continue to share his ideas with the world. Hawking, who at 21 was diagnosed with a motor neurone disease related to amyotrophic lateral sclerosis, known as “Lou Gehrig’s disease,” has been confined to a wheelchair for much of his adult life.

However, he has been able to communicate with others — and produce intellectual feats such as breakthrough theories about time and gravity, as well as bestsellers including A Brief History of Time, Black Holes and Baby Universes, and The Grand Design — through adaptive speech and computing technologies.

Hawking’s degenerative disorder has progressed to the point where his current assistive technology may soon be rendered useless.

“Stephen’s current system works by hanging an optical sensor from his glasses that detects a twitch in his cheek muscle,” Intel CTO Justin Rattner explained to TechNewsWorld.

“This is used to stop a cursor that is constantly scrolling through the letters of the alphabet,” he explained. “This is a slow process and relies on being able to detect the twitch of the cheek muscle. As his condition deteriorates, this may no longer be viable.”
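
For readers curious how such single-switch scanning works in practice, here is a minimal Python sketch of the idea: a cursor cycles through the alphabet at a fixed interval, and a single sensor event commits the highlighted letter. The timing value and the `twitch_detected` input are illustrative assumptions, not details of Hawking’s actual system.

```python
import time

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
SCAN_INTERVAL = 0.5  # seconds each letter stays highlighted (assumed value)


def scan_and_select(twitch_detected, max_letters=20):
    """Linearly scan the alphabet, committing the highlighted letter
    whenever the single-switch input fires.

    twitch_detected is a hypothetical callable that returns True when the
    sensor (for example, a cheek-muscle switch) registers a twitch.
    """
    message = []
    while len(message) < max_letters:
        for letter in ALPHABET:
            print(f"highlighting: {letter}", end="\r")  # show the scan position
            time.sleep(SCAN_INTERVAL)
            if twitch_detected():
                message.append(letter)  # commit the highlighted letter
                print(f"\nselected so far: {''.join(message)}")
                break  # restart the scan for the next letter
    return "".join(message)
```

A production system would typically layer word prediction on top of a scan like this to cut down the number of selections per sentence.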

Rattner was set to introduce Hawking at the physicist’s 70th birthday celebration, but the scientist was too ill to attend.

Intel is planning to gather data for further study on developing a new speech system for Hawking, Rattner said.

A New Concept

Intel has developed technology for Hawking before. The scientist publicly praised the notebook computer the chipmaker specially designed for him in 1997.

Hawking’s condition now requires new directions in thinking and experimentation, and Intel is considering a range of options to explore, Rattner said.

“We can look at expression detection. Those who know him say they can detect expressions, and even if we only detect two, we have Morse code for example,” he suggested.
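
To make the Morse code idea concrete, here is a minimal Python sketch that decodes a stream built from just two distinguishable signals mapped to dot and dash. The expression-to-symbol mapping and the gap markers are assumptions chosen for illustration, not part of any Intel design.

```python
# Minimal Morse decoder driven by two distinguishable signals.
# Mapping "expression A" -> dot and "expression B" -> dash is an assumption;
# any two reliably detectable expressions would do.

MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}


def decode(symbols):
    """Decode a list of tokens, where each token is '.', '-', or a pause
    marker 'LETTER_GAP' / 'WORD_GAP' produced by the expression detector."""
    words, letters, current = [], [], []
    for token in symbols:
        if token in (".", "-"):
            current.append(token)
        elif token == "LETTER_GAP" and current:
            letters.append(MORSE_TO_CHAR.get("".join(current), "?"))
            current = []
        elif token == "WORD_GAP":
            if current:
                letters.append(MORSE_TO_CHAR.get("".join(current), "?"))
                current = []
            words.append("".join(letters))
            letters = []
    if current:
        letters.append(MORSE_TO_CHAR.get("".join(current), "?"))
    if letters:
        words.append("".join(letters))
    return " ".join(words)


# Example: the dot/dash stream for "HI" followed by a word gap.
print(decode([".", ".", ".", ".", "LETTER_GAP", ".", ".", "WORD_GAP"]))  # -> "HI"
```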

Another option might be a system of cameras combined with software algorithms to build up a bigger vocabulary for Hawking, Rattner said. Eye movement detection is another option, but may be limited due to problems with accuracy.

“Brain waves have also been mentioned, and while it is an option, it is perhaps not the first avenue we are exploring because it requires the wearing of a headset,” continued Rattner. “Whatever techniques we end up using, they have to be instantly usable, requiring no learning curve, and throughout all of this, we have to take great care to be sensitive to Stephen and his condition.”

Eye-Tracking Possibilities

All of the possibilities Rattner mentioned are under development in some form, either in university settings or commercially.

Students at Georgia Tech are combining eye-tracking technology with video game engines to study how people look to find certain points of interest, Ellen Yi-Luen Do, associate professor at the College of Architecture & College of Computing, told TechNewsWorld.

“Imagine being able to use eye tracking to control the movement of a mouse, to click and select different words and objects, and to process interaction,” she said. “This could be useful for people with disabilities, but could also be useful for people driving, or performing other operations.”
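
A minimal sketch of the dwell-based selection Do describes: gaze coordinates drive a pointer, and holding the gaze within a small radius for a fixed time counts as a click. The gaze source, dwell radius and dwell time below are illustrative assumptions, not details of the Georgia Tech work.

```python
import math

DWELL_RADIUS_PX = 30   # assumed: gaze must stay within this radius
DWELL_TIME_S = 1.0     # assumed: hold time that counts as a "click"


def dwell_click(gaze_samples):
    """Yield (x, y) click events from a stream of timestamped gaze points.

    gaze_samples is an iterable of (timestamp, x, y) tuples from a
    hypothetical eye tracker.
    """
    anchor = None  # (t, x, y) where the current dwell started
    for t, x, y in gaze_samples:
        if anchor is None:
            anchor = (t, x, y)
            continue
        t0, x0, y0 = anchor
        if math.hypot(x - x0, y - y0) > DWELL_RADIUS_PX:
            anchor = (t, x, y)      # gaze moved away; restart the dwell
        elif t - t0 >= DWELL_TIME_S:
            yield (x0, y0)          # dwell completed: emit a click
            anchor = None           # wait for the gaze to settle again


# Example: a synthetic gaze stream that fixates near (100, 200) for about 1.2 s.
stream = [(i * 0.1, 100 + (i % 3), 200) for i in range(15)]
print(list(dwell_click(stream)))  # -> [(100, 200)]
```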

Elements exist for a brain-computer interface, Do continued. “There are already commercial headsets that can detect brain waves for simple commands to play video games, such as Emotiv and the NeuroSky Mind Set. So instead of tracking eye movement, we could detect brain waves and use that to communicate.”

One of Do’s students is working on something called “biofeedback art therapy,” in which a NeuroSky Mind Set detects a person’s relaxation-related alpha waves and uses them to control imagery.
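
As a sketch of how such biofeedback might drive imagery, the snippet below maps a 0 to 100 relaxation score, the kind of meter consumer EEG headsets commonly report, onto a simple brightness value. The score source and the brightness mapping are assumptions for illustration, not details of the student project.

```python
def relaxation_to_brightness(relaxation_score):
    """Map a 0-100 relaxation score (a typical meditation-style meter from
    consumer EEG headsets) to a 0.0-1.0 image brightness."""
    clamped = max(0, min(100, relaxation_score))
    return clamped / 100.0


def render_frame(relaxation_score):
    # Stand-in for updating on-screen artwork: deeper relaxation -> brighter image.
    brightness = relaxation_to_brightness(relaxation_score)
    bar = "#" * int(brightness * 20)
    print(f"relaxation={relaxation_score:3d}  brightness={brightness:.2f}  {bar}")


# Example: simulated readings as the user gradually relaxes.
for score in (10, 35, 60, 85, 100):
    render_frame(score)
```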

Other experiments include using the headset to detect brain waves to control opening and closing of blinds, and adapting facial recognition technology so that a camera can recognize a hand gesture meant to signal that, say, the volume of a stereo should be turned down, she said.

Focus on Language

As these experiments move closer to real-life adaptations, it is important that scientists remember the language-processing piece, Carrie Bruce, a speech pathologist and research scientist at Georgia Tech’s Center for Assistive Technology and Environmental Access, told TechNewsWorld.

“Many of the innovative technologies — such as gesture control, eye tracking and brain control — being designed for controlling an interface can be used for connecting a person to a communication system, since most communication technologies require an interface of sorts,” she said.

“However, it is less about the control interface and more about an efficient and effective way for people to use language,” noted Bruce. “There is a science to representing a person’s whole system of language through software, and often HCIs (human-computer interfaces) focus more on the interface and less on language.”
