Countless devices have been created to help the 4.2% of Americans who are legally blind become more independent in their daily lives. Urban environments are notoriously difficult to traverse, and with about 80% of the visually impaired living in them, it is no wonder that many struggle to navigate. Several solutions to this problem exist, such as the probing cane, service animals, and camera-based obstacle avoidance systems. Though these can prevent collisions with nearby objects, they provide no feedback in the way of walking directions. The devices that do perform this task are usually audio based and read directions into the ear; they do not provide very accurate turn information, but instead offer only general guidance such as "left" and "right."
The device is based around an ankle band on which all the components are mounted. During operation, the device receives input from three time-of-flight (ToF) laser distance sensors, which detect when the user comes closer than 30 cm to an obstacle. An Arduino Pro Mini microcontroller processes the sensor data in conjunction with inertial measurement unit (IMU) data, both to compensate for the orientation of the foot when reading ToF distances and to provide more accurate heading data to the Android application. The Arduino is also connected to a Bluetooth module that communicates with the phone to receive navigation information. Using these inputs, the Arduino controls an array of eight vibration motors distributed evenly around the user's ankle.
The Arduino microcontroller performs all the computation that happens on the device itself. It has two major functions: it uses data from the three ToF sensors and the IMU to determine when to notify the user of an obstruction, and it sends compass data to the phone for more accurate heading information. The ToF sensor data is monitored continuously, and when the user comes within 30 cm of an object, the device notifies them. To prevent false readings, however, the device only notifies the user when the foot is relatively flat against the ground.
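The obstacle-notification logic above can be sketched as a small gating function. The 30 cm threshold comes from the design; the 10° "flat foot" tolerance and the function names are illustrative assumptions, not the project's actual code.

```cpp
#include <cmath>

// Threshold from the design: alert when an obstacle is within 30 cm.
constexpr float ALERT_DISTANCE_CM = 30.0f;
// Assumed flatness tolerance: treat the foot as "flat" when pitch and
// roll from the IMU are both within 10 degrees of level (illustrative).
constexpr float FLAT_TOLERANCE_DEG = 10.0f;

// Decide whether to fire the haptic obstacle alert. Gating on a flat
// foot suppresses false readings taken while the foot is tilted
// mid-stride and the sensors point at the ground.
bool shouldAlert(float distanceCm, float pitchDeg, float rollDeg) {
    bool footFlat = std::fabs(pitchDeg) < FLAT_TOLERANCE_DEG &&
                    std::fabs(rollDeg)  < FLAT_TOLERANCE_DEG;
    return footFlat && distanceCm < ALERT_DISTANCE_CM;
}
```

In a real sketch this check would run once per loop iteration for each of the three ToF sensors, driving the corresponding vibration motor when it returns true.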
The modern smartphone provides a relatively vast array of sensors and combines them with internet connectivity in a user-friendly package, so it made sense to use such a platform to provide location data, generate walking directions, and determine the direction in which the anklet should direct the user to proceed. We selected Android for its open development environment and the low cost of its devices compared to competitors. The application performs three main functions. Most importantly, it uses GPS and other location sources to determine the user's current location and heading. It also uses the Google Maps API to obtain walking directions that guide the user to their destination. Finally, it communicates with the anklet over Bluetooth, sending the direction in which the user should turn and receiving heading data from the IMU and magnetic compass sensor unit mounted on the anklet.
After extensive testing, we found that although some aspects could be improved, the device as a whole works rather well. We tested the device by wearing the unit and attempting to follow various test routes while blindfolded, using only its feedback. The app produced navigation routes and obtained location data using the Google Maps API. An algorithm processed this data to determine the heading the user needed to take, which in turn determined which motor the anklet should vibrate. The user generally had little difficulty determining the intended direction of travel, and was able to navigate effectively around obstacles using the ToF distance sensors and haptic feedback.
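The heading-to-motor step of that algorithm can be sketched as follows. The conventions here are illustrative assumptions, not the project's actual code: motor 0 is taken to face forward, indices increase clockwise, and the eight motors sit at 45° intervals.

```cpp
#include <cmath>

// Map the user's current compass heading and the desired heading onto
// one of eight vibration motors spaced 45 degrees apart around the
// ankle. Motor 0 is assumed to face forward; indices increase
// clockwise (both conventions are assumptions for illustration).
int motorForHeading(float currentHeadingDeg, float targetHeadingDeg) {
    // Relative bearing the user must turn, wrapped into [0, 360).
    float rel = std::fmod(targetHeadingDeg - currentHeadingDeg, 360.0f);
    if (rel < 0.0f) rel += 360.0f;
    // Round to the nearest 45-degree motor position.
    return static_cast<int>(std::lround(rel / 45.0f)) % 8;
}
```

For example, a user facing north who must head east would feel the motor two positions clockwise of forward; a user already roughly on course (say, heading 350° with a target of 10°) would feel the forward motor.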
Although this model works as a proof-of-concept, there are still many improvements that could be made. They are outlined below by subsection of the project:
One major set of improvements to the Android application would be better accessibility options. Since our primary users are visually impaired, the device cannot be fully effective without them. These would include, but not be limited to:
- Screen reading
- Selection of destination through speech recognition
- Greater automation of pairing and setup process
Due to time constraints, the development of the app had to follow a design process that prioritized basic functionality over usability. Making these improvements would result in a simpler, easier experience for the end user.
Another improvement would be a redesign of the application back-end, both to prevent it from crashing at random times and to improve efficiency. Relatedly, the code cleanliness and design flow could generally be improved. Such changes would be relatively simple to implement given time.
After testing the device, we identified several areas of the design where improvements could be made. The haptic feedback motors produce a relatively broad area of stimulus and cannot convey fine-grained angles. This could be improved with more advanced control algorithms for the haptic feedback motors, giving the user a tapping sensation in a much more localized area. Adding a fourth ToF distance sensor to the anklet would also improve the user experience by providing a full 360° of obstacle avoidance. In addition, making the device easier for the visually impaired to use would drastically improve its usability; braille labels, for example, could aid the user in setting the device up, putting it on, and charging it.
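One form such a control algorithm could take is phantom-sensation interpolation: splitting drive intensity between the two motors adjacent to the desired angle, so the perceived stimulus falls between them rather than at a single fixed motor. The sketch below is an illustration of this idea, not the project's implementation; the angle conventions and linear weighting are assumptions.

```cpp
#include <cmath>

// Drive levels for the two motors bracketing a desired angle.
// dutyA/dutyB are fractions in [0, 1] that a real device would map
// to PWM duty cycles.
struct MotorDrive {
    int motorA;
    float dutyA;
    int motorB;
    float dutyB;
};

// Split intensity between the two motors (spaced 45 degrees apart,
// eight around the ankle) adjacent to the desired relative bearing.
// A bearing exactly on a motor drives that motor alone; a bearing
// halfway between two motors drives both at half intensity.
MotorDrive interpolateMotors(float relBearingDeg) {
    float wrapped = std::fmod(relBearingDeg, 360.0f);
    if (wrapped < 0.0f) wrapped += 360.0f;
    int lower = static_cast<int>(wrapped / 45.0f) % 8;  // motor at/below angle
    int upper = (lower + 1) % 8;                        // next motor clockwise
    float frac = (wrapped - lower * 45.0f) / 45.0f;     // 0 = all lower, 1 = all upper
    return { lower, 1.0f - frac, upper, frac };
}
```

Compared with lighting a single motor, this scheme lets the anklet signal intermediate angles with the same eight motors, at the cost of a slightly more complex driver.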