
Visioneer

AI glasses that provide traffic information and obstacle avoidance for the visually impaired.

Worn as a pair of sunglasses, the ‘Visioneer’ will provide timely traffic information and obstacle detection to the visually impaired. A trained neural net will provide a level of speed and accuracy necessary for real-time recognition and response. Our design will strive to maximize ease-of-use, comfort and reliability to supplement the user’s existing navigation options and “feel for the world”.

GitHub:  https://github.com/MakerVisioneer/prototype

Google Drive: https://drive.google.com/drive/folders/0B3wmJbFj6cTCTGMzemFlTmdGbnc?usp=sharing

The problem

Those of us who have never struggled with a vision impairment beyond the need for corrective lenses may take for granted the ability to see obstacles, gauge oncoming vehicles before crossing a street, or find the entrance to our favorite coffee shop. These everyday activities can be challenging and intimidating for the visually impaired. Guide dogs are one option for additional guidance; however, access, cost, maintenance, and allergies may make ownership impractical. Guiding Eyes for the Blind estimates that "only about 2 percent of all people who are blind and visually impaired work with guide dogs." ("Mobility," https://nfb.org/blindness-statistics)

Assistive canes have their own limitations. While useful for detecting low-level obstacles and walls, a cane cannot detect obstacles at head-to-chest level (e.g., tree branches). Nor can it locate entryways without direct contact, identify objects, or detect traffic conditions.

There are wearable devices on or near the market that address some of these issues, but they cost on the order of $2,000.

How Visioneer works

The Visioneer, which looks like a pair of sunglasses, will perform vehicle detection using two cameras, with a combination of OpenCV and a local neural net recognizing objects in the user's path. It will provide feedback through a bone conduction transducer, without interfering with the user's normal hearing.

Usage Flow Diagram 

To illustrate how Visioneer works, we drew the flowchart shown below. The key is to first determine whether the user is walking or stationary, since this changes how the user interacts with their surroundings and what decisions they need to make. When the user is walking, Visioneer's obstacle avoidance comes into play. When the user is stationary, Visioneer assumes the user is either trying to identify something at close range or waiting to cross the street. The easiest way to determine the user's intent would be speech recognition, but given its unreliability and potential social awkwardness, we opted instead for a combination of software and hardware cues.

Schematic (First Draft)

Based on the usage flow diagram, we use an accelerometer to determine whether the user is walking or stationary, and OpenCV to perform obstacle avoidance. To determine whether the user wants to identify something at close range, we use lidar. If the user is stationary and isn't close to any object, OpenCV and a local neural net identify the surroundings to determine whether the user is looking at traffic or at other objects. Everything runs on a Raspberry Pi Zero. A minimal sketch of this decision logic appears below.
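In this sketch, the helper functions are hypothetical stand-ins for the real sensor and vision code, and the threshold is a placeholder tuning value rather than a measured one:

```python
import time

NEAR_OBJECT_MM = 1500  # lidar range below which we assume "identify" mode

def is_walking() -> bool:
    """Stub: compare recent accelerometer variance against a threshold."""
    return False

def lidar_distance_mm() -> int:
    """Stub: return the latest lidar range reading in millimetres."""
    return 4000

def run_obstacle_avoidance():
    """Stub: OpenCV scan of the walking path for obstacles."""

def identify_nearby_object():
    """Stub: classify whatever the user is examining up close."""

def run_traffic_detection():
    """Stub: OpenCV + local neural net check of the traffic scene."""

while True:
    if is_walking():
        run_obstacle_avoidance()
    elif lidar_distance_mm() < NEAR_OBJECT_MM:
        identify_nearby_object()
    else:
        run_traffic_detection()
    time.sleep(0.1)  # ~10 Hz is plenty for mode switching
```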

Files:

  • PDF - 69.27 kB - 10/17/2017
  • JPEG image - 534.98 kB - 10/17/2017
  • STL - 862.78 kB - 10/15/2017
  • Visioneer3DPrintSet.stl - 3D model of Visioneer's modular parts - STL, 644.42 kB - 09/04/2017
  • Visioneer Assembly Diagram.pdf - assembly parts list and diagram - PDF, 114.11 kB - 09/01/2017

View all 8 files

  • 1 × Raspberry Pi Zero W - $10
  • 1 × Memsic 2125 9DOF sensor - $7.99
  • 2 × Coin flat vibrating motor - $0.35
  • 1 × Arduino Nano - $3.50
  • 1 × MaxSonar sensor - $35

View all 15 components

  • Schematic for Visioneer V2.0

    MakerVisioneer · 3 days ago · 0 comments

    In Visioneer V2.0, we've added vibration motors to better alert deaf-blind users during both obstacle avoidance and traffic detection. An Arduino Nano now offloads the Raspberry Pi Zero by handling the signals from the two sensors, and a 3.3V/5V logic level converter steps between the Arduino's 5V logic and the Pi Zero's 3.3V logic so the two boards can communicate.
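    As a rough illustration of the Pi side of this link, the sketch below reads sensor values the Nano might stream over UART. The "distance,moving" message format and the 9600 baud rate are assumptions for illustration, not our actual protocol.

```python
import serial  # pyserial

with serial.Serial("/dev/serial0", 9600, timeout=1) as link:
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            distance_mm, moving = (int(v) for v in line.split(","))
        except ValueError:
            continue  # skip malformed frames
        if moving and distance_mm < 1500:
            # here the real code would pulse the coin vibration motors
            print("obstacle ahead at", distance_mm, "mm")
```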

  • 3D design of Visioneer V2.0

    MakerVisioneer · 5 days ago · 0 comments

    Here are 3D images of Visioneer V2.0 from different angles, plus an image showing the labeled components inside Visioneer's housings.

  • Updated functionality flowchart

    MakerVisioneer · 6 days ago · 0 comments

    In this updated flowchart, Visioneer's functionality focuses on obstacle avoidance and traffic detection. We've added object recognition for pedestrian buttons and walk lights to improve traffic detection, helping the user determine when to cross the street.

  • Pedestrian hand button recognition

    andrew.craton · 10/08/2017 at 01:14 · 0 comments

    Today, Visioneer took its first baby steps toward classifying crosswalk objects (the pedestrian hand button)!

    As the image gets closer (zooming the USB webcam), the neural net eventually decides it's not just random traffic and is more likely a pedestrian button. Zooming back out, you can see it decides the picture is more like traffic overall. This reflects how a user will need to be near the button (within the crosswalk area), or the neural net will only detect random traffic.

    It is far from perfect (only 275 images of one button style and 275 of random traffic). I will try to collect a total of 3,000 images of the most common button styles in the U.S., along with 3,000 of random traffic areas.

    My next steps are:
    1) Add a real-time bounding box to locate where the button is in the frame, for guidance.
    2) Add a walk signal dataset (images of the walking-person symbol, not the word WALK), also with a real-time box locator.
    3) Deploy both button and walk detection to the Pi Zero and test FPS in a live scenario.
    4) Improve overall accuracy while keeping the Pi Zero FPS high.
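
    To make the classification step above concrete, here is a minimal sketch of a two-class check using OpenCV's dnn module, assuming a small Caffe model exported with classes ["traffic", "button"]; the model file names and the 227x227 input size are placeholders:

```python
import cv2

# Placeholder model files: a SqueezeNet-style net with two output classes.
net = cv2.dnn.readNetFromCaffe("button_net.prototxt", "button_net.caffemodel")
LABELS = ["traffic", "button"]

cap = cv2.VideoCapture(0)  # USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 227x227 matches SqueezeNet-style input; mean values are illustrative
    blob = cv2.dnn.blobFromImage(frame, 1.0, (227, 227), (104, 117, 123))
    net.setInput(blob)
    scores = net.forward().flatten()
    idx = int(scores.argmax())
    print(LABELS[idx], float(scores[idx]))
```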

  • Gesture detection data

    MakerVisioneer · 09/27/2017 at 04:26 · 0 comments

    On the test prototype, an MPU6050 accelerometer/gyroscope detects the wearer's gestures to activate either obstacle avoidance mode or traffic detection mode. The test data below was collected as the wearer started stationary and then began to walk (figure 1), and as the wearer turned their head (figure 2). Notice the zeros in the x, y, z accelerations in the walking pattern. The next step is to average these values in the algorithm.
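
    As a sketch of that averaging step, the snippet below reads raw MPU6050 acceleration over I2C and keeps a short moving average of its magnitude; the register addresses follow the MPU6050 datasheet, while the walking threshold is a placeholder to be tuned against data like the figures above.

```python
from collections import deque
import time
import smbus  # python-smbus

MPU_ADDR, PWR_MGMT_1, ACCEL_XOUT_H = 0x68, 0x6B, 0x3B
bus = smbus.SMBus(1)
bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)  # wake the device

def read_accel_g():
    """Return (ax, ay, az) in g at the default +/-2g scale (16384 LSB/g)."""
    raw = bus.read_i2c_block_data(MPU_ADDR, ACCEL_XOUT_H, 6)
    def to_int16(hi, lo):
        v = (hi << 8) | lo
        return v - 65536 if v > 32767 else v
    return [to_int16(raw[i], raw[i + 1]) / 16384.0 for i in (0, 2, 4)]

window = deque(maxlen=20)  # about one second of samples at 20 Hz
while True:
    ax, ay, az = read_accel_g()
    window.append((ax * ax + ay * ay + az * az) ** 0.5)
    spread = max(window) - min(window)  # walking shakes the magnitude
    print("walking" if spread > 0.3 else "stationary",
          round(sum(window) / len(window), 2))
    time.sleep(0.05)
```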


  • Experiment on OpenCV, sensors, and bone conduction transducer

    MakerVisioneer · 09/18/2017 at 13:09 · 0 comments

    OpenCV experiment on color and circle detection using a picture of a traffic light. These techniques will be used to detect traffic lights for traffic detection in Visioneer, alongside other OpenCV techniques.
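
    A rough sketch of that experiment, assuming red-light detection: mask red pixels in HSV, then run a Hough circle transform on the masked image (the threshold values here are illustrative, not our tuned numbers).

```python
import cv2
import numpy as np

img = cv2.imread("traffic_light.jpg")  # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# red wraps around the hue axis, so combine two ranges
mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))

blur = cv2.GaussianBlur(mask, (9, 9), 2)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                           param1=100, param2=20, minRadius=5, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)  # mark the detected light
cv2.imwrite("detected.jpg", img)
```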

    The traffic light arrow image is detected by recognizing pentagon and rectangle shapes with OpenCV contour approximation.
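
    A minimal version of that idea: threshold the image, then count the vertices cv2.approxPolyDP returns for each contour (5 for a pentagon, 4 for a rectangle). The file name and area cutoff are placeholders.

```python
import cv2

img = cv2.imread("arrow_sign.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# [-2] keeps this working across OpenCV 3.x and 4.x return signatures
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]

for c in contours:
    if cv2.contourArea(c) < 100:
        continue  # ignore noise
    approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
    if len(approx) == 5:
        print("pentagon (arrow head?) at", cv2.boundingRect(c))
    elif len(approx) == 4:
        print("rectangle (arrow shaft?) at", cv2.boundingRect(c))
```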

    Sensor experiment

    Here are two videos of the experiments with LIDAR and the MPU6050. The first video tests two VL53L0X time-of-flight LIDARs. The conclusion: stable readings but a narrow detection range, so we've decided to experiment with the MaxBotix sonar sensor for obstacle avoidance.
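
    For reference, reading the VL53L0X from Python can be as simple as the sketch below, assuming Adafruit's CircuitPython driver (adafruit-circuitpython-vl53l0x); our test setup may have used a different library.

```python
import time
import board
import busio
import adafruit_vl53l0x

i2c = busio.I2C(board.SCL, board.SDA)
sensor = adafruit_vl53l0x.VL53L0X(i2c)

while True:
    # readings are stable, but the field of view is narrow
    print("range:", sensor.range, "mm")
    time.sleep(0.1)
```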

    The second video uses the MPU6050 to detect the user's movement: a red LED lights when the user is stationary, and a blue LED lights when the user is walking or moving. This is one way to switch between Visioneer's two modes.

    Testing audio output on bone conduction transducer

  • Deep Learning Experiment 1

    andrew.craton · 09/16/2017 at 16:10 · 0 comments

    The first video shows my first experiment with live recognition using a Pi 3, a USB webcam, Movidius/Caffe, and OpenCV on a pre-trained neural net called SqueezeNet.

    The second video shows the same Pi 3 setup classifying a single cat picture in 307 ms. Hopefully you can see the number 307523 in the video, which is microseconds, i.e. about 307 ms.
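
    For anyone reproducing the timing, here is a hedged sketch using the NCSDK 1.x Python API; the compiled "graph" file name and the 227x227 preprocessing are assumptions based on SqueezeNet's input size.

```python
import time
import cv2
import numpy as np
from mvnc import mvncapi as mvnc

device = mvnc.Device(mvnc.EnumerateDevices()[0])
device.OpenDevice()
with open("graph", "rb") as f:  # graph compiled from the Caffe model
    graph = device.AllocateGraph(f.read())

img = cv2.resize(cv2.imread("cat.jpg"), (227, 227)).astype(np.float16)

t0 = time.perf_counter()
graph.LoadTensor(img, "cat")   # push the tensor to the stick
output, _ = graph.GetResult()  # block until inference finishes
print("top class %d in %.0f ms"
      % (output.argmax(), (time.perf_counter() - t0) * 1000))

graph.DeallocateGraph()
device.CloseDevice()
```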

    Now that we have a successful benchmark with Movidius, we will turn our efforts to benchmarking without it, using YOLO/Darknet instead of Caffe/SqueezeNet.

    After that, we will train and compile a custom neural net on traffic-related images, which should achieve faster recognition on either platform. The goal is under 40 ms on a small custom set of objects.

  • (5) Research: Accessible Pedestrian Signals

    MakerVisioneer · 09/04/2017 at 04:53 · 0 comments

    The National Cooperative Highway Research Program Project 3-62, Guidelines for Accessible Pedestrian Signals highlights the procedure used by the visually impaired to cross an intersection.  

    http://www.apsguide.org/appendix_d_understanding.cfm

    We've outlined the procedure and how Visioneer could help.  Visioneer's phase two design will be implemented based on the outputs of the flowchart below.

  • (4) Testing Visioneer's phase I design with software implementation

    MakerVisioneer · 09/04/2017 at 00:19 · 0 comments

    Testing Visioneer's phase I design implemented with OpenCV in cross traffic.

    https://www.youtube.com/watch?v=74L4lV5V-yQ&feature=youtu.be

    Visioneer's phase II design will focus on improving traffic detection and accessibility with neural nets, machine learning, deep learning, or other open-source AI frameworks.

    The Pi Zero doesn't have the hardware to support sophisticated neural nets, so we are designing custom datasets that are minimal in size and complexity. We are also optimizing OpenCV to maximize camera FPS; one common technique is sketched below. TinyYOLO/Darknet is our current choice of open-source local neural net, but we also have a Movidius (Myriad 2) USB stick ($79), which can process Caffe networks at around 100 GFLOPS while drawing only 1 W of power.
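
    One common OpenCV trick for raising effective camera FPS on a Pi (a general technique, not necessarily our exact optimization) is to grab frames on a background thread so processing never blocks on the camera:

```python
import threading
import cv2

class ThreadedCapture:
    """Keep only the freshest frame; processing never waits on the camera."""
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.ok, self.frame = self.cap.read()
        self.lock = threading.Lock()
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while self.ok:
            ok, frame = self.cap.read()
            with self.lock:
                self.ok, self.frame = ok, frame

    def read(self):
        with self.lock:
            return self.ok, None if self.frame is None else self.frame.copy()

stream = ThreadedCapture(0)
while True:
    ok, frame = stream.read()
    if not ok:
        break
    # ...run detection on the freshest frame here...
```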

  • (3) Assembly of 3D printed parts

    MakerVisioneer · 08/30/2017 at 00:22 · 0 comments

    Unfortunately, we found the glasses-and-headband design was not comfortable to wear, so we moved to a design of 3D-printed modular parts that snap onto existing sunglasses. This design is much more comfortable, and we think it will be more appealing to potential users.

    Here is an assembly diagram with the 3D printed parts.

View all 12 project logs

  • 1
    Assembly of Visioneer frame to pre-wired components

    The video walks the user through assembling a set of components onto one side of the prototype frame. NOTE: we have transitioned away from the sonar, so you will see only the LIDAR in this video. Also, the power carrier now sits behind the main carrier (see the current gallery) rather than next to it, as shown in the video.

  • 2
    Quick OpenCV installation

    Claude Pageau's instructions: https://github.com/pageauc/opencv3-setup

View all instructions
