
Stereo Photonics

Using stereo cameras and haptic feedback to help people with visual impairments navigate unfamiliar environments

217 million people worldwide have moderate to severe visual impairment. The Stereo Photonics project will investigate whether it is possible to create a low-cost device that can provide an element of vision by converting depth information from stereo cameras into a vibration-based human-machine interface. All hardware and software will be open source.

The Goal

Build an assistive device that gives a user with a visual impairment a sense of depth perception via a haptic interface, providing better situational awareness than is available from existing assistive devices.

To be deemed a success the final system must:

  • Provide better situational awareness to the user than existing assistive devices, either when used on its own or in combination with existing aids (like a white cane).
  • Operate stand-alone without tethered power or external processing.
  • Operate both indoors and outdoors.
  • Last at least 20 hours without recharging or switching batteries.
  • Weigh less than 500 g.
  • Cost less than £150 per device to manufacture in small quantities.

The Plan

Sensor

Initially the idea was to use a low-cost radar module, such as the K-LC5_V3 from RF Beam Microwave, to detect depth. However, the basic non-pulsed operation of these radar modules would have made the signal processing for this application excessively complex, since there would be multiple static and moving objects within the field of view of the device.

Two further options would be a laser range finder or an ultrasonic distance sensor. However, both of these sensors have a narrow field of view and can typically only resolve the distance to one object at a time. This would require the user to constantly move their head to gain situational awareness, and smaller obstructions could be missed.

Lidar would be ideally suited to this application, but despite self-driving cars steadily driving down the cost, it is still too expensive and cumbersome.

Using two cameras offset slightly from each other (stereo cameras) allows distance information to be resolved in much the same way our eyes do it. The signal processing required can be complex, but it is a well-researched area with plenty of information available online, both in published papers and in open source libraries like OpenCV. Additionally, the resolution and accuracy requirements for this project are likely to be much lower than in typical stereo camera applications, because the human-machine interface will be the limiting factor in how much information can be presented to the user. This means simpler, lower-performance algorithms can be used.

Cameras can be purchased for under £5, and the depth calculations can be accelerated using a processor aimed at DSP, or potentially an FPGA. One further benefit of a camera-based system is that it leaves open the possibility of performing image recognition to detect features of interest to the user, for example a pedestrian road crossing.
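To give a feel for how little code a first disparity estimate needs, below is a minimal sketch using OpenCV's StereoBM block matcher in Python. The file names and parameter values are placeholder assumptions, and the inputs are assumed to be rectified greyscale frames; this is an illustration, not the project's final pipeline.

    import cv2

    # Load a rectified greyscale stereo pair (placeholder file names).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # StereoBM is the simplest (and fastest) matcher in OpenCV.
    # numDisparities must be a multiple of 16 and blockSize must be odd.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

    # compute() returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left, right).astype("float32") / 16.0

    # Larger disparity means a closer object. With a calibrated rig,
    # depth = focal_length_px * baseline / disparity.
    cv2.imshow("disparity", disparity / disparity.max())
    cv2.waitKey(0)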

Human-computer interface

While an obvious choice would be an audio-based interface, this would likely become annoying with extended use. Additionally, people with visual impairments can gain significant situational awareness by listening to ambient sounds. Disrupting this would likely lead to worse overall situational awareness even if the depth sensing system worked perfectly.

A better option is to use haptic feedback, in the form of vibration, to alert the user to the presence of obstructions. To be useful, the system must convey both the direction of an obstruction and its approximate distance. Additionally, the feedback should adapt to the scenario; for example, if the user is standing still facing a wall, the device should not continuously vibrate to warn them that the wall is there.
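As one concrete illustration (not a finalised design), the hypothetical sketch below splits a disparity map into vertical strips, one per vibration motor, and maps the nearest object in each strip to a 0..1 vibration intensity. The motor count, the maximum disparity, the 95th-percentile noise rejection, and the helper name disparity_to_intensities are all assumptions for the sake of the example.

    import numpy as np

    NUM_MOTORS = 5        # assumed: five motors spread left to right on the device
    MAX_DISPARITY = 64.0  # must match the stereo matcher configuration

    def disparity_to_intensities(disparity_map: np.ndarray) -> list[float]:
        """Return one 0..1 vibration intensity per motor (closer = stronger)."""
        h, w = disparity_map.shape
        intensities = []
        for i in range(NUM_MOTORS):
            # Take the vertical strip of the image that this motor represents.
            strip = disparity_map[:, i * w // NUM_MOTORS:(i + 1) * w // NUM_MOTORS]
            valid = strip[strip > 0]
            # Use a high percentile rather than the maximum to ignore speckle noise.
            nearest = np.percentile(valid, 95) if valid.size else 0.0
            intensities.append(min(nearest / MAX_DISPARITY, 1.0))
        return intensities

On the real device these raw intensities would still need to be gated by motion and scene context, as described above, so that a stationary user facing a wall is not vibrated at continuously.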

We anticipate that the majority of the engineering effort will go into designing the human-computer interface.

Stage 1 - Proof of concept

The first stage is all about proving the concept works, getting the team up to speed with the required technologies, and researching the best way to go about the project.

There are a large number of...


  • Building a super cheap stereo camera

    James Gibbard, 04/22/2018 at 20:27

    While there are plenty of commercial stereo cameras available, they all tend to be a bit expensive. We needed a stereo camera just to start experimenting with the algorithms needed for depth estimation, so we built a really cheap one using two old webcams.

    [Photo: the final result]

    Parts needed

    • 2x Logitech E3500 USB Webcam
    • 1x Hammond 1599B project box (or similar)
    • 1x 1/4-20 UNC nut (for connecting to a tripod)
    • Super glue

    Tools needed

    • Small Phillips screwdriver
    • Drill with 12 mm, 6 mm, and 4 mm bits
    • Round file

    Instructions

    Step 1

    Dismantle the two webcams by unscrewing the Phillips screw on one side of each webcam and prying apart the case. Unplug the microphone and the small button from the camera PCB.

    The camera has a detachable lens assembly. We will use this to our advantage by attaching the two lens assemblies to a project box and then using the existing screws to reattach the lens assembly to the camera PCB.

    Step 2

    Position the cameras within the project box and mark the position of the lenses on the lid of the project box.

    Step 3

    For each lens, drill a 12 mm hole plus two 4 mm holes for the alignment tabs on the camera lens assembly.

    You may have to expand the holes slightly if they are not perfectly aligned. 

    Step 4

    With the actual lenses removed, glue the two lens casings to the project box, being careful not to get glue on the lens threads.

    Step 5

    Screw the camera PCBs back to the lens casings.

    Step 6

    Drill a hole in the bottom of the project box to allow a tripod to be connected. Use super glue to attach a 1/4-20 UNC nut over the hole. This will allow most tripods to be attached to the camera.

    Step 7

    Use a file or a Dremel to create small notches in the project box and lid so that the existing USB cable strain relief slots into them. Screw the project box back together.

    Step 8

    Place the lenses back in their casings. The tighter the lenses are screwed in, the further away the focus point will be; they need to be quite tight. As a side note, these cameras are able to focus really close to an object.

    Step 9

    Plug both cameras into a computer. VLC can be used to view the two webcams. Open VLC and go to Media -> Open Capture Device. Select DirectShow, and then under "Video Device Name" select one of the webcams. Press "Play" to start showing the video. Open a second copy of VLC and repeat with the other camera. Adjust both lenses so that they are focused at roughly the same distance.
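    As an alternative to running two copies of VLC, a short OpenCV script can show both feeds side by side while the focus is being adjusted. This is just a convenience sketch; the camera indices (0 and 1) are assumptions and may differ on your machine.

        import cv2
        import numpy as np

        # Open both webcams (indices are an assumption; adjust as needed).
        cap_left = cv2.VideoCapture(0)
        cap_right = cv2.VideoCapture(1)

        while True:
            ok_l, frame_l = cap_left.read()
            ok_r, frame_r = cap_right.read()
            if not (ok_l and ok_r):
                break
            # Show the two frames next to each other so the focus can be compared.
            cv2.imshow("stereo pair", np.hstack((frame_l, frame_r)))
            if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
                break

        cap_left.release()
        cap_right.release()
        cv2.destroyAllWindows()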

    Next Steps

    The next steps are to calibrate the camera using OpenCV and to take our first stereo photo. This will be covered in the next build log.
