
VizLens V2

A project log for VizLens: A Screen Reader for the Real World

VizLens uses crowdsourcing and computer vision to robustly and interactively help blind people use inaccessible interfaces in the real world

Anhong Guo 09/29/2016 at 04:43

Based on participant feedback from our user evaluation, we developed VizLens v2. Specifically, we focused on providing better camera-aiming feedback and on helping users learn interfaces.

For VizLens to work properly, it is important to inform and help users aim the camera centrally at the interface. Without this feature, we found users could "get lost": they were unaware that the interface was out of view and kept trying to use the system. Our improved design helps users better aim the camera in these situations: once the interface is found, VizLens automatically detects whether the center of the interface is inside the camera frame; if it is not, it provides feedback such as "Move phone to upper right" to help the user adjust the camera angle.
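As a rough illustration of how this check might work, the sketch below projects the reference image's center into the camera frame through the homography estimated during matching, then maps the result to a directional hint. The function names, the 10% edge margin, and the exact phrasing are assumptions for illustration, not the actual VizLens implementation.

```python
import numpy as np
import cv2

def interface_center_in_frame(H, ref_w, ref_h):
    """Project the reference image's center through the homography H
    (reference image -> camera frame) estimated during matching."""
    center = np.array([[[ref_w / 2.0, ref_h / 2.0]]], dtype=np.float32)
    return cv2.perspectiveTransform(center, H)[0, 0]  # (x, y) in frame pixels

def aiming_feedback(cx, cy, frame_w, frame_h, margin=0.1):
    """Return a hint such as 'Move phone to upper right', or None if the
    interface center (cx, cy) is comfortably inside the camera frame."""
    # Treat a band near each frame edge as "out of view" so the hint
    # fires before the interface actually leaves the frame.
    vert = "up" if cy < margin * frame_h else \
           "down" if cy > (1 - margin) * frame_h else None
    horiz = "left" if cx < margin * frame_w else \
            "right" if cx > (1 - margin) * frame_w else None
    if vert and horiz:
        direction = {"up": "upper", "down": "lower"}[vert] + " " + horiz
    elif vert or horiz:
        direction = vert or horiz
    else:
        return None  # well aimed; no feedback needed
    return f"Move phone to {direction}"
```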

To help users familiarize themselves with an interface, we implemented a simulated version with its visual elements laid out on the touchscreen for the user to explore and make selections. Because the interface image and each element's dimensions, location, and label are stored in normalized coordinates, we can simulate buttons on the screen that react to the user's touch, helping them build a spatial sense of where the elements are located.
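A minimal sketch of how this touch exploration could be driven by the stored layout data follows; the Element structure, the [0, 1] normalization convention, and the announce step are illustrative assumptions rather than the app's actual code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Element:
    label: str
    x: float   # left edge, normalized to interface width
    y: float   # top edge, normalized to interface height
    w: float   # normalized width
    h: float   # normalized height

def element_at(elements: List[Element], tx: float, ty: float,
               screen_w: float, screen_h: float) -> Optional[Element]:
    """Map a touch at (tx, ty) screen pixels onto the simulated layout,
    assuming the interface is rendered full-screen."""
    nx, ny = tx / screen_w, ty / screen_h   # normalize the touch point
    for el in elements:
        if el.x <= nx <= el.x + el.w and el.y <= ny <= el.y + el.h:
            return el
    return None

# Example: exploring a hypothetical microwave panel on a 1080x1920 screen.
panel = [Element("Start", 0.70, 0.85, 0.25, 0.10),
         Element("Stop/Clear", 0.05, 0.85, 0.25, 0.10)]
hit = element_at(panel, tx=820, ty=1780, screen_w=1080, screen_h=1920)
if hit:
    print(f"Announce: {hit.label}")   # e.g., speak the label via TTS
```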

We also made minor functional and accessibility improvements, such as vibrating the phone when the finger reaches the target in guidance mode, making the earcons more distinctive, supporting standard gestures for going back, and using the volume buttons to take photos when adding a new interface.
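For instance, the target-reached vibration in guidance mode could reduce to a simple distance check between the tracked fingertip and the target's center. The sketch below assumes both are available in the same coordinate space; trigger_vibration() is a hypothetical stand-in for the platform haptics call.

```python
import math

def trigger_vibration():
    # Placeholder for the platform haptics API; on iOS this might wrap
    # UIImpactFeedbackGenerator, on Android Vibrator.vibrate().
    print("bzzt")

def check_target_reached(finger_xy, target_xy, radius=20.0):
    """Vibrate once when the fingertip comes within `radius` pixels
    of the target button's center."""
    if math.dist(finger_xy, target_xy) <= radius:
        trigger_vibration()
        return True
    return False
```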
