Figure from the paper illustrating the AR application in use (green spheres are areas tagged as text by the AR app).

Fresh research from Dartmouth’s Visual Computing Lab demonstrates how augmented reality could be used to help people with reduced vision read signs in their environment. The team not only designed an application that detects and displays enhanced text via Microsoft’s HoloLens, but also conducted a behavioral experiment to validate the system’s effectiveness.

Reaching across the Green and the Pond, the study was a collaborative effort between Dartmouth’s Visual Computing Lab, its Department of Psychological and Brain Sciences, and Cardiff University. A research paper detailing the new application was recently published in the open-access journal PLOS ONE.

Full details of the study (and code) can be found here.

Authors of the study, including the VCL’s Jonathan Huang and Director Wojciech Jarosz, noted that a myriad of tools – namely, canes and GPS – are currently available to help those with impaired vision navigate their surroundings, but these are useful mainly for obstacle detection and large-scale outdoor environments. Recent research has begun to investigate navigation tools better suited to indoor environments, many of which leverage increasingly fast and accurate computer vision APIs.

The team chose the HoloLens for their experiment because it allows people with reduced vision to see and interact with their environment as they usually would, while also giving them the option of viewing an enhanced version of nearby text on the lens.

The user would initiate the system by identifying a potential sign in their environment and asking the system to process it. The HoloLens would snap a picture, send the picture to Google’s Cloud Vision API for processing, and receive the decoded information three to four seconds later. It would then project a large, bright version of the enhanced text over the sign in the environment.
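For readers curious about the recognition step, below is a minimal sketch of the kind of text-detection request at the heart of that pipeline. The paper’s application runs on the HoloLens itself (its actual code is linked above); this Python snippet, the read_sign_text helper, and the sign.jpg file name are illustrative assumptions, showing only the Google Cloud Vision call rather than the authors’ implementation.

```python
# Minimal sketch of the OCR step, assuming an image of a sign has already
# been captured and saved locally as "sign.jpg" (hypothetical file name).
# The paper's app runs on the HoloLens; this only illustrates the
# equivalent Google Cloud Vision text-detection request.
from google.cloud import vision


def read_sign_text(image_path: str) -> str:
    """Send an image to Cloud Vision and return the detected text."""
    client = vision.ImageAnnotatorClient()

    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    response = client.text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)

    annotations = response.text_annotations
    # The first annotation holds the full block of detected text;
    # later entries are individual words with bounding boxes.
    return annotations[0].description if annotations else ""


if __name__ == "__main__":
    print(read_sign_text("sign.jpg"))
```

The per-word bounding boxes returned alongside the text are presumably what let an app like this anchor the enlarged, brightened version of the text over the physical sign in the user’s view.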

What is a college psychology study without some undergraduates eager to propel science? In a behavioral experiment, Dartmouth students donned swim goggles obstructed by blurry plastic to simulate reduced vision. The students were then asked to find the office of a specific professor, given only the professor’s name, in a small hallway with several doors and signs. An experimental group used the AR system for this task, whereas a control group did not.

The students who used the AR system subjectively found the task easier, felt more comfortable and confident in their search, and walked more direct paths to the professor’s door, but took longer to complete the task. The authors speculate this was likely due to the time spent waiting for the computer vision system to process the text.

This research presents an exciting new direction in the realm of human-centric AR. 

The full roster for the paper includes Jonathan Huang (Dartmouth College), Max Kinateder (Dartmouth College), Matt J. Dunn (Cardiff University), Wojciech Jarosz (Dartmouth College), Xing-Dong Yang (Dartmouth College), and Emily A. Cooper (Dartmouth College).