Those of us who have never struggled with a vision impairment beyond the need for corrective lenses may take for granted the ability to see obstacles, gauge oncoming vehicles before crossing a street, or find the entrance to a favorite coffee shop. For the visually impaired, these everyday activities can be challenging and intimidating. Guide dogs are one option for providing additional guidance, but access, cost, maintenance, and allergies can make ownership impractical. Guiding Eyes for the Blind estimates that "only about 2 percent of all people who are blind and visually impaired work with guide dogs" ("Mobility," https://nfb.org/blindness-statistics).
Assistive canes have their own limitations. While useful for detecting low-level obstacles and walls, a cane cannot detect obstacles at chest or head height (e.g., tree branches). Nor can it locate entryways without direct contact, identify objects, or detect traffic conditions.
There are wearable devices on or near the market that address some of these issues, but they typically cost around $2,000.
How Visioneer works
The Visioneer, built into a pair of sunglasses, will perform vehicle detection using two cameras and a combination of OpenCV and a local neural network to recognize objects in the user's path. It will provide feedback through a bone-conduction transducer, which does not interfere with the user's ability to hear normally.
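Once the neural network has produced detections, the device still has to decide which object to announce first. A minimal sketch of that prioritization logic is below; the function name, the detection tuple layout, and the scoring rule are our assumptions for illustration, not a final API:

```python
# Hypothetical sketch: choosing which detected object to announce first.
# Each detection is a (label, confidence, x_center) tuple, with x_center
# normalized to [0, 1] across the camera frame. Thresholds are assumed.

def pick_obstacle(detections, min_conf=0.5):
    """Return the most relevant detection: one that is both confident
    and near the center of the frame (i.e., in the user's path)."""
    candidates = [d for d in detections if d[1] >= min_conf]
    if not candidates:
        return None
    # Score favors high confidence and proximity to the frame center.
    def score(d):
        label, conf, x_center = d
        return conf - abs(x_center - 0.5)
    return max(candidates, key=score)

detections = [("tree branch", 0.9, 0.48), ("car", 0.7, 0.05)]
print(pick_obstacle(detections)[0])  # -> tree branch (centered and confident)
```

In a real pipeline the tuples would come from the OpenCV detection stage, and the winning label would be routed to the bone-conduction output.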
Usage Flow Diagram
To illustrate how Visioneer works, we drew the flowchart shown below. The key is to first determine whether the user is walking or stationary, since this changes how the user interacts with their surroundings and makes decisions. When the user is walking, Visioneer's obstacle-avoidance capability comes into play. When the user is stationary, Visioneer infers that the user is either trying to identify something at close range or waiting to cross the street. The simplest way to determine the user's intent would be speech recognition, but given its unreliability and potential social awkwardness, we chose a combination of software and hardware components instead.
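The branching in the flowchart can be sketched as a small decision function. The mode names, the lidar input, and the 1.0 m "close object" cutoff are illustrative assumptions, not tuned values:

```python
# Hypothetical sketch of the mode decision from the usage flow diagram.
NEAR_THRESHOLD_M = 1.0  # assumed cutoff for "identify something close"

def select_mode(is_walking, lidar_distance_m):
    """Map sensor state to one of Visioneer's operating modes."""
    if is_walking:
        # Walking: warn about objects in the user's path.
        return "obstacle_avoidance"
    if lidar_distance_m is not None and lidar_distance_m < NEAR_THRESHOLD_M:
        # Stationary and close to something: describe that object.
        return "identify_nearby_object"
    # Stationary with nothing nearby: e.g., watching traffic at a crossing.
    return "scene_identification"

print(select_mode(True, 3.0))   # -> obstacle_avoidance
print(select_mode(False, 0.4))  # -> identify_nearby_object
print(select_mode(False, 5.0))  # -> scene_identification
```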
Schematic (First Draft)
Based on the usage flow diagram, we decided to use an accelerometer to determine whether the user is walking or stationary. We use OpenCV to perform obstacle avoidance. To determine whether the user wants to identify something at close range, we use a lidar sensor. If the user is stationary and not close to any object, OpenCV and a local neural network identify the surroundings to determine whether the user is looking at traffic or at other objects. Everything runs on a Raspberry Pi Zero.
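One simple way to turn raw accelerometer readings into the walking/stationary flag is to look at how much the acceleration magnitude varies over a short window: walking produces large periodic swings, standing still does not. The sketch below assumes magnitudes in g; the 0.3 g threshold and window handling are illustrative, not tuned values:

```python
# Hypothetical sketch: classifying walking vs. stationary from a window
# of accelerometer magnitude samples (in g).
import statistics

def is_walking(magnitudes_g, threshold=0.3):
    """Walking produces periodic swings in acceleration magnitude, so a
    high standard deviation over the window suggests motion."""
    if len(magnitudes_g) < 2:
        return False  # not enough samples to judge
    return statistics.stdev(magnitudes_g) > threshold

stationary = [1.00, 1.01, 0.99, 1.00, 1.01]  # ~1 g, little variation
walking = [0.6, 1.4, 0.7, 1.5, 0.8, 1.3]     # large swings per step
print(is_walking(stationary), is_walking(walking))  # -> False True
```

A production version would likely also low-pass filter the signal and debounce the decision so brief pauses mid-stride do not flip the mode.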