
Hardware Demo 1

A project log for SNAP: Augmented Echolocation

Sightless Navigation And Perception (SNAP) translates surroundings into sound, providing continuous binaural feedback about the environment.

Dan Schneider 09/03/2017 at 16:41

Demonstrating some basic functionality of the hardware prototype. I am not well practiced at using the system yet, and we have yet to optimize the feedback parameters, but the binaural cues are so natural that the system can already be used.

Morgan and I can also play a sort of "hide and seek" where she tries to sneak past me, but we will need another person to film it. After this demo, Morgan donned the headset and took a turn finding me in the room, with much success.

On a more technical note, the experiment setting here is important. We are located indoors, meaning there are walls and several pieces of furniture surrounding me. These objects come through as sound sources, and it is important that we are able to distinguish them from one another. Identifying Morgan's hand may seem trivial, but it is significant that I am able to detect her hand apart from the nearby wall. 

This first-generation software produces audio feedback that sweeps left-right-left. That meant I had to wait for the sweep to pass back and forth before I could tell which hand Morgan was raising. The inherent delay was somewhat disorienting and slowed my reaction times, but I was nevertheless able to identify the correct hand each time she raised it.

We will be adjusting the feedback to sweep from center outward, and again to remove the sweep altogether and give the user a full field of sound all at once. 
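To make the sweep idea concrete, here is a minimal sketch of how a left-to-right binaural sweep over a row of depth readings might be rendered. Everything here is an assumption for illustration, not SNAP's actual code: the function names (`pan_gains`, `render_sweep`), the constant-power panning law, the fixed tone frequency, and the amplitude-falls-with-distance mapping are all hypothetical choices.

```python
import math

def pan_gains(pan):
    """Constant-power stereo gains for pan in [-1 (full left), +1 (full right)]."""
    theta = (pan + 1.0) * math.pi / 4.0  # map pan to [0, pi/2]
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

def render_sweep(distances, sample_rate=44100, dwell=0.05, freq=440.0):
    """Sweep left-to-right across depth columns, one tone burst per column.

    distances: per-column distances (e.g. meters) from a depth sensor row.
    Nearer objects sound louder (amplitude ~ 1/distance), and each column
    is panned to its horizontal position, producing the binaural cue.
    """
    n = len(distances)
    left, right = [], []
    for i, d in enumerate(distances):
        pan = -1.0 + 2.0 * i / (n - 1) if n > 1 else 0.0
        gl, gr = pan_gains(pan)
        amp = 1.0 / max(d, 0.5)  # clamp so very near objects don't clip
        for s in range(int(dwell * sample_rate)):
            v = amp * math.sin(2.0 * math.pi * freq * s / sample_rate)
            left.append(gl * v)
            right.append(gr * v)
    return left, right
```

A center-outward sweep would just reorder the columns (center first, then alternating outward), and the "full field at once" mode would mix all columns' tones into the buffer simultaneously instead of one burst at a time, trading the sweep delay for a denser soundscape.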

While we definitely have more work to do in developing better feedback parameters, these simple experiments make it clear that the idea is completely feasible and we are on the right track. 
