Skydivers, mountain bikers and rappellers donned Google Glasses on Wednesday and captured video of their stunts at the Google I/O conference, giving the world a glimpse of the capabilities of this new wearable technology.
Google is selling the glasses for US$1,500 to U.S.-based developers who were in attendance at the conference. In true Google fashion, developers will be playing around with the glasses, fine-tuning them, and coming up with apps.
“It’s standard for Google to do this,” said Lynette Young, a social technology specialist and owner of Purple Stripe Productions.
“A lot of their standards and development are open. Google puts things out there and lets the community work on them,” she told TechNewsWorld. “They run counter to the way Apple is. Apple’s products are perfect even when they reach the developers. Google wants you to play with things.”
The glasses will likely be integrated with other Google products and apps, according to Young.
“A lot of it’s visual,” she said. “There’s a strong possibility of mixing Project Glass with other Google projects, such as Goggles. I might travel to another city, take a picture of a building with the Glasses, and then run it through Goggles and ask what building it is.”
Extending the Body
The glasses essentially offer a new way of interacting with, recording and documenting the world.
“I think that Google Glass is an innovative and unique step into a new area of wearable computers,” Will Powell, mobile technology innovation lead with Keytree, told TechNewsWorld.
“It is introducing a new field of peripheral vision technology as an all-in-one package. Whereas a lot of previous glasses-based technologies are aiming at full-vision displays or a connection to another device, Google Glass is really about staying out of the way and providing a platform for developers to work with,” he explained.
The glasses can be used to extend the capabilities of the human body, according to Young, since they’re hands-free.
“The glasses let you forget about technology,” she said. “Project Glass will be an extension of the person. It’s not intrusive. It can be there without being there. The glasses are going to be an extension of a person’s vision and hearing.”
They might, for instance, be used by everyone from emergency service workers to people in wheelchairs.
“When I saw this, I saw it as a lot of fringe applications,” said Young. “I see it working well in emergency services and the disabled community. I have a friend who had a stroke and is in a wheelchair full time. For her, the glasses would mean she could interact without the need to have two hands to type. She would be able to do things in a much more fluid manner. It would let her interact online and offline with limited mobility.”
The glasses also have plenty of augmented reality possibilities, since they can overlay maps, images, and information on top of reality — and without the need to hold up a mobile device to do it. They’re already right there, on the eyes.
“I would love to see the glasses do augmented reality,” said Young. “For people that have the need to put other layers of communication on top of what they know, adding this would be amazing.”
Another term for what the glasses offer is “mixed reality,” according to Blair MacIntyre, associate professor of interactive computing at Georgia Tech. He prefers this term over “augmented reality” to describe the overlay of 2D and 3D information on a screen or, in the case of Google Glass, off to the side of the field of vision.
“I call it mixed reality,” MacIntyre told TechNewsWorld. “There’s a whole bunch of ways to mix the virtual world with the world around you.”
Potential mixed reality applications include work-related uses for people who need access to information while keeping their hands free to complete tasks, he said.
“What they can do is display contextually relevant information to you continuously,” MacIntyre noted. “It’ll be useful for people in work contexts. A UPS guy can have information on his glasses when he’s delivering packages. Tour guides or teachers could have access to information while they’re doing their work.”
So far, the glasses are available only to developers, and at a steep $1,500 price. Still, developers will likely jump at the chance to get them and start working on them.
“It’s difficult for [solo] developers, but [for] people who are in shops that are developing, it’s not a problem at all,” said Young. “Android developers usually have half a dozen phones. This is just a tool of the trade. If you’re going to be a developer, $1,500 is not a lot.”
What developers do with them will be key to the success of the Google Glass project, said Avi Greengart, research director for consumer devices with Current Analysis.
“One of the intentions here is to give it over to people who aren’t Google engineers to see what they do with it, in a very controlled way,” said Greengart. “By giving it to developers, you can get a larger pool of feedback. There are going to be a lot of people who are going to try to Glass-enable their apps. There are endless possibilities.”