
Monday, September 23, 2013

Google Glass for the Blind?

As reported by the Boston Business Journal, GPS navigation systems are increasingly being used to help blind people get around on the street, but what happens when blind individuals move into areas where GPS doesn't work?

This is the problem that Cambridge-based Draper Laboratory and Alabama-based Auburn University are working to address in a project funded by the Federal Highway Administration.
The collaborators are building a prototype that can work indoors, and can also alert users to the presence of objects not found on maps, such as crowds and cars. The model will include technology that Draper Laboratory developed for soldiers and unmanned vehicles.
Outdoors, the device will track the movements of the wearer while integrating data from GPS satellites; indoors, it will rely on visual information from cameras and wireless information from pedestrian signals to enhance safety and mobility. It is designed to work in challenging, unstructured environments such as MBTA stations, construction sites, and event arenas.
Auburn and Draper are working with the National Federation of the Blind to ensure that the needs of visually impaired wearers are addressed in the design. A prototype is expected to be ready in 2015 and is likely to take the form of an ankle bracelet with movement sensors plus a small camera mounted in a pair of glasses. Tactile vibrators will likely provide directional guidance to users.
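As a rough illustration of how tactile directional guidance might work, the sketch below maps the angle between the wearer's heading and the planned route to one of a few vibration cues. The function name, thresholds, and cue labels are purely hypothetical; the article does not describe the actual control logic.

```python
def choose_vibration_cue(heading_error_deg):
    """Map heading error (-180..180 degrees, positive = route lies to
    the wearer's right) to a tactile cue. Thresholds are illustrative."""
    if abs(heading_error_deg) <= 15:
        return "steady"   # roughly on course: gentle confirmation pulse
    if heading_error_deg > 0:
        return "right"    # vibrate on the right side: turn right
    return "left"         # vibrate on the left side: turn left

if __name__ == "__main__":
    for err in (-90, 5, 40):
        print(err, "->", choose_vibration_cue(err))
```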

Similarly, Google Glass is being used to help the visually impaired through Dapper Vision's OpenGlass Project. Harnessing the power of Google Glass's built-in camera, the cloud, and the "hive mind," visually impaired users will be able to know what's in front of them. The system consists of two components: Question-Answer, which uploads pictures taken by the user to Amazon's Mechanical Turk and Twitter for the public to help identify, and Memento, which lets sighted people record descriptions or commentary about a scene, then takes video from Glass and uses image matching to identify objects from the resulting database.

Information about what the Glass wearer “sees” is read aloud to the user via bone conduction speakers.
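The Question-Answer flow described above can be sketched as a simple pipeline: a photo goes out to one or more crowd backends, and the first answer that comes back is spoken to the wearer. This is only a conceptual sketch; the function names and the stand-in backend are invented for illustration and are not Dapper Vision's actual API.

```python
def crowd_identify(photo_bytes, backends):
    """Send the photo to each crowd backend (e.g. Mechanical Turk or
    Twitter in the real project) and return the first non-empty answer."""
    for backend in backends:
        answer = backend(photo_bytes)
        if answer:
            return answer
    return "no answer yet"

def fake_turk_backend(photo_bytes):
    # Stand-in for posting a Mechanical Turk HIT and polling the result.
    return "a red bicycle leaning against a wall"

if __name__ == "__main__":
    description = crowd_identify(b"...jpeg data...", [fake_turk_backend])
    print(description)  # would then be read aloud via bone conduction
```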