
Project BASICS

A concerted effort to improve the basic standard of living in underdeveloped and developing countries.

A minimalistic approach to solving pressing problems faced by people, problems that should have been solved by now but haven't because, well, the people affected can't afford the solutions.
The overall goal is to improve the standard of living of people in developing and underdeveloped countries.
This was a finalist at the Intel IRIS Sci Fair (the Indian version of ISEF). Since this stuff is open source, all the proceedings have been published in international, open-access journals. (Links to everything are in my ResearchGate profile.)

Project “AWAAZ” essentially uses a set of strategically placed momentary tactile switches (placement based on cryptographic frequency analysis of words) on a wearable system that sends characters or phrases to an Arduino board, which processes the respective hand movement to form letters and words. The Arduino board is programmed so that it can emulate an HID keyboard on the device it is attached to. The letters are then sent to my app, which converts the text to speech using the standard Android text-to-speech system that doesn't require any internet connection; a third-party word-prediction engine reduces typing time by 32%. The system is designed to draw charge directly from the phone battery, so it doesn't need any external power source, has a 100% accuracy rate, and has no connectivity limitations. It also allows easy hand mobility with low finger stress, has easily replaceable parts, and was developed within a budget of 450 Rupees.
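To give a flavour of the HID-emulation idea, here is a minimal sketch (not the actual AWAAZ firmware; the pin and the letter are illustrative assumptions) that turns one tactile switch into a keystroke on a board with native USB, such as an Arduino Micro or Leonardo:

    // Minimal sketch: one tactile switch emulating an HID keyboard key.
    // Assumes an ATmega32u4 board (Micro/Leonardo) with a momentary switch
    // between pin 2 and GND; the pin and the letter 'a' are illustrative.
    #include <Keyboard.h>

    const int SWITCH_PIN = 2;
    bool wasPressed = false;

    void setup() {
      pinMode(SWITCH_PIN, INPUT_PULLUP);  // pressed switch pulls the pin LOW
      Keyboard.begin();                   // start acting as a USB keyboard
    }

    void loop() {
      bool pressed = (digitalRead(SWITCH_PIN) == LOW);
      if (pressed && !wasPressed) {
        Keyboard.write('a');  // one keystroke to the attached phone or PC
      }
      wasPressed = pressed;
    }

Over USB OTG, Android treats such a board as an ordinary external keyboard, which is what lets the text flow into any app, including a text-to-speech one.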

Project BATEYE fundamentally uses an ultrasonic sensor, mounted onto a wearable pair of glasses, that measures the distance to the nearest object and relays it to an Arduino board. The Arduino processes the measurement and plays a tone (150-15000 Hz) for the respective distance (2 cm to 4 m) until the next ultrasonic reading comes in, and then the same process repeats; this cycle runs roughly every 5 milliseconds. The wearer hears a sound that changes according to the distance to the nearest object. The head provides a 195-degree swivel angle, and the ultrasonic sensor detects anything within a 15-degree cone. Using a systematic, cognitive and computational approach from neuroscience, with the hypothesis that the occipital lobe of blind people is repurposed for processing other sensory feedback, and treating the brain as the computational unit, the machine relies on the brain mapping the tone produced every 14 ms to its corresponding distance, producing a soundscape from the tones, and the body navigating by it. During experimentation, the test subject could detect obstacles as far away as 2-3 m; with horizontal or vertical movements of the head, the blindfolded test subject could understand the basic shape of objects without touching them, as well as the basic nature of the obstacles.
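To make the underlying distance calculation concrete: sound travels at roughly 343 m/s in air, so an echo that returns 1.2 ms after the pulse was sent corresponds to (343 m/s × 0.0012 s) / 2 ≈ 0.21 m, i.e. an obstacle about 21 cm away; the halving accounts for the round trip. In Arduino terms, dividing the HC-SR04 echo time in microseconds by 58 gives the distance in centimetres.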

Maze compressed.mp4

MPEG-4 Video - 20.43 MB - 05/23/2017 at 00:41


Bateye Run 2 E 1.mp4

MPEG-4 Video - 42.71 MB - 05/23/2017 at 00:14


Published paper.pdf

Published research paper ( Project Bateye)

629.46 kB - 05/21/2017 at 12:37


Bateye- Research paper.pdf

Updated research paper - after publication

1.44 MB - 05/21/2017 at 12:37


IEEE Project AWAAZ.pdf

Research paper documenting research done on Project AWAAZ

693.19 kB - 05/21/2017 at 12:36


  • 250 × Jumper wires
  • 1 × Arduino Nano
  • 1 × Arduino Micro
  • 1 × Arduino Uno
  • 1 × Arduino Leonardo


  • bat-eye stereo version

    Frank Buss • 06/10/2017 at 23:18 • 0 comments

    I wrote a new Arduino script, based on my improved mono version, for two sensors and two speakers:

    https://github.com/DebarghaG/Project_Basics/blob/master/bateye-stereo.ino

    There was no problem with the two sensors; they don't influence each other. The echo signal is only evaluated by the sensor that sent the trigger signal last.

    Other changes compared to the mono version: The frequency gets higher now when the distance gets lower. I think this is better, because higher frequencies are associated with danger, and if you are about to crash into something, this is dangerous :-)

    I also added a 10 cm offset: If it is below 10 cm, then the frequency is clamped to the max frequency. The idea behind this is that it is not useful to navigate that near, and with the offset you get higher resolution for distances greater than 10 cm.

    And the frequency is now scaled logarithmically. The reason for this is that the human ear is logarithmic as well: for lower frequencies the frequency steps are smaller than for higher frequencies, but to the ear it sounds linear. The ranges and exponent have to be determined experimentally; see the fscale function call for this. For the old behaviour, only the 1000.0f and 150.0f values have to be swapped, for lower frequencies when nearer, and the exponent 7.0f has to be set to 0.0f for linear mapping.
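    A hedged sketch of what such a logarithmic mapping can look like (the constants and the interpolation below are illustrative, not the actual fscale parameters):

        // Illustrative logarithmic distance-to-frequency mapping.
        // Constants are examples only; see the fscale() call in the real script.
        float logScale(float distanceMm) {
          float t = constrain(distanceMm, 100.0f, 4000.0f);  // clamp below 10 cm
          t = (t - 100.0f) / (4000.0f - 100.0f);             // normalise to 0..1
          // Equal frequency *ratios* per step (not equal differences), which
          // matches the ear: near (t = 0) -> 1000 Hz, far (t = 1) -> 150 Hz.
          return 1000.0f * pow(150.0f / 1000.0f, t);
        }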

    Some notes about how the sound generator works: because the Arduino tone function doesn't work for two pins, I had to write my own tone generator. The idea is to use an accumulator and an increment value: the accumulator is 32 bit and the increment is calculated from the target frequency. This is an old idea; the general approach is described, for example, here. Because the accumulator increments are done in an interrupt at high frequency (10 kHz sample rate), I used fixed-point arithmetic for it, and by testing the most significant bit of the 32-bit accumulator, I can create a square wave with high frequency resolution. Outside of the interrupt the timing is not critical and I can use float calculation.
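    In minimal form, the accumulator idea looks something like this (a sketch only; it assumes a timer interrupt calling sample() at 10 kHz, which is board-specific to set up and omitted here, and the speaker pin is illustrative):

        // Minimal phase-accumulator square wave.
        // Assumes a timer interrupt calls sample() at SAMPLE_RATE Hz.
        const uint32_t SAMPLE_RATE = 10000;  // 10 kHz sample rate
        const int SPEAKER_PIN = 9;           // illustrative pin
        volatile uint32_t accumulator = 0;
        volatile uint32_t increment = 0;

        void setFrequency(float hz) {
          // One output period = one full 32-bit overflow of the accumulator.
          increment = (uint32_t)(hz * (4294967296.0f / SAMPLE_RATE));
        }

        void sample() {
          accumulator += increment;
          // The most significant bit is high for half of each period, so
          // testing it yields a square wave with fine frequency resolution.
          digitalWrite(SPEAKER_PIN, (accumulator & 0x80000000UL) ? HIGH : LOW);
        }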

    The old SID of the C64 used this phase-accumulator concept too; see Jeri Ellsworth's video about the SID for an explanation.

  • Bugs, oh so many bugs! I need some bug spray.

    Debargha Ganguly • 05/27/2017 at 03:15 • 0 comments

    A shout out to @Frank Buss for optimizing the code. I had been struggling with some of the bugs there for quite some time.

    Here is his description of what he did:

    "

    Hi @Debargha Ganguly. As promised, I wrote a better Arduino script. This is your original code from your paper:

    https://gist.github.com/FrankBuss/d78911fae66d83bbf234b28f27b97460

    It has several problems. I rewrote it completely with interrupts. The improvements:

    - the measurement interval is fixed at 16 Hz, so that it has a 60 ms measurement cycle, as suggested by the datasheet of the HC-SR04

    - the output tone is continuous and with fewer distortions than your version (there are still some minor distortions when the tone value is changed, but this would require more work to make it perfect)

    - on timeout, the speaker is disabled

    - the main loop is free to do other things; all measurements and the speaker output are done in the interrupt, which might be useful for your speech output, to put the distance data out at a slower rate than what you hear in the speaker

    - high end of the frequency is changed to 5 kHz for 4 m, instead of 15 kHz, for a wider range

    - the resolution for the distance is changed from cm to mm

    "

    The GitHub repository can be found here:

    https://github.com/DebarghaG/Project_Basics

  • Lim x → ∞, where x is possibilities

    Debargha Ganguly • 05/23/2017 at 16:31 • 0 comments

    The possibilities are endless when you add to the budget.

    I'll try to jot down my thoughts, and I'll modify this when I come up with new ones.

    1.) Get a fancy LIDAR-based system, supplemented by a camera, to do the same thing, and also change the pitch and timbre with colour and shape. (Done by the team at Berkeley.) But the challenge is getting it to commercial scale.

    2.) Ultrasound and fabrics don't mix, so Project Bateye is pretty much useless around fabrics right now. (The fabric absorbs the waves, so no reflection = no sensor data.) If you're using it in rural India or somewhere in Africa, you don't find curtains or drapes around very often, so solving this problem wasn't the big priority in the first prototype. A possible fix is adding a second sensor (IR) for when the ultrasonic sensor goes out of range, but that brings up another problem: black objects outdoors won't get detected. So black fabrics will pretty much be the Achilles' heel of this device.

    I will try to add as much as I can as I remember more...

  • Algorithm for Project BAT-eye

    Debargha Ganguly • 05/21/2017 at 12:56 • 0 comments

    {Repeat these steps every 5 milliseconds}

    Step 1 - Send a pulse to the ultrasonic sensor

    Step 2 - Measure the time required for the sound wave to return

    Step 3 - Calculate the distance to the obstacle based on the time required for the wave to return and the speed of sound

    Step 4 - Print the distance to the serial monitor

    Step 5 - Generate a frequency corresponding to the distance from the obstacle

    Step 6 - Play a tone on the speaker
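    A minimal sketch of these six steps for an HC-SR04 (pin numbers and the linear 150-15000 Hz mapping are assumptions for illustration; the published code differs in detail):

        // Illustrative implementation of the six steps above for an HC-SR04.
        // Pin choices and the linear mapping are assumptions for this sketch.
        const int TRIG_PIN = 7;
        const int ECHO_PIN = 8;
        const int SPEAKER_PIN = 9;

        void setup() {
          pinMode(TRIG_PIN, OUTPUT);
          pinMode(ECHO_PIN, INPUT);
          Serial.begin(9600);
        }

        void loop() {
          // Step 1: send a 10 us trigger pulse to the ultrasonic sensor.
          digitalWrite(TRIG_PIN, LOW);
          delayMicroseconds(2);
          digitalWrite(TRIG_PIN, HIGH);
          delayMicroseconds(10);
          digitalWrite(TRIG_PIN, LOW);

          // Step 2: measure the round-trip time of the echo (timeout ~4 m).
          unsigned long duration = pulseIn(ECHO_PIN, HIGH, 25000UL);

          if (duration > 0) {
            // Step 3: at ~343 m/s, distance in cm = echo time in us / 58.
            long distanceCm = duration / 58;

            // Step 4: print the distance to the serial monitor.
            Serial.println(distanceCm);

            // Steps 5 and 6: map the distance to a frequency and play it.
            long freq = map(constrain(distanceCm, 2, 400), 2, 400, 150, 15000);
            tone(SPEAKER_PIN, freq);
          } else {
            noTone(SPEAKER_PIN);  // nothing in range
          }
        }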

  • Algorithm for Project AWAAZ

    Debargha Ganguly • 05/21/2017 at 12:55 • 0 comments

    Step 1.) Identify the key that has been pressed.

    Step 2.) Identify whether the control or alt key has been pressed, accordingly selecting the correct alphabet or letter to be sent.

    Step 3.) The data goes to the phone through the USB OTG cable, and the phone thinks that an external keyboard is sending it.

    Step 4.) The data passes through the SwiftKey autocorrect engine for physical keyboards; if space is pressed, the closest prediction is written into the app that converts the text to speech.

    Step 5.) A simple offline Android text-to-speech engine converts the text to speech and speaks it out through the phone speaker.
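    A hedged sketch of steps 1 and 2, with a modifier switch selecting between two character banks (the layout and letter assignment here are invented for illustration; the real AWAAZ mapping differs):

        // Illustrative step 1/2 logic: a modifier switch selects between two
        // character banks for the same physical keys. Layout is invented here.
        #include <Keyboard.h>

        const int NUM_KEYS = 4;
        const int keyPins[NUM_KEYS] = {2, 3, 4, 5};  // tactile switches
        const int MODIFIER_PIN = 6;                  // the "alt" bank switch
        const char bankA[NUM_KEYS] = {'a', 'e', 't', 'n'};
        const char bankB[NUM_KEYS] = {'s', 'r', 'o', 'i'};
        bool wasDown[NUM_KEYS];

        void setup() {
          for (int i = 0; i < NUM_KEYS; i++) pinMode(keyPins[i], INPUT_PULLUP);
          pinMode(MODIFIER_PIN, INPUT_PULLUP);
          Keyboard.begin();
        }

        void loop() {
          bool bankBActive = (digitalRead(MODIFIER_PIN) == LOW);   // step 2
          for (int i = 0; i < NUM_KEYS; i++) {
            bool down = (digitalRead(keyPins[i]) == LOW);          // step 1
            if (down && !wasDown[i])
              Keyboard.write(bankBActive ? bankB[i] : bankA[i]);   // step 3
            wasDown[i] = down;
          }
        }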

  • Experimentation with Project Bateye

    Debargha Ganguly • 05/21/2017 at 12:54 • 0 comments

    Approximately 90% of visually impaired people live in developing countries, according to WHO projections. Since low cost is therefore an essential criterion, the device must be made as economical as possible. The initial research started off with analysing echolocation. Echolocation is the same as active sonar, using sounds made by the animal itself. Ranging is done by measuring the time delay between the animal's own sound emission and any echoes that return from the environment. The relative intensity of sound received at each ear, as well as the time delay between arrival at the two ears, provides information about the horizontal angle (azimuth) from which the reflected sound waves arrive. Echolocation has also been mastered by various humans, who use clicks to find their way around, strengthening my hypothesis that soundscape-based navigation is possible.

    The selection of the correct sensor for distance measurement was extremely important because it needed to be cost effective, have a wide beam, and at the same time be able to detect a variety of objects. In terms of accuracy, the infrared sensor was an obvious choice, but since it could only detect objects that weren't black, it wasn't used. I used an ultrasonic sensor (HC-SR04), which has a wider beam and works on all rigid bodies. Various mounting positions were tested for the sensor: 1.) when mounted on the chest, it could only detect objects directly in front; 2.) when mounted on the palm of the hand, the direction the palm was pointing proved too difficult for the blindfolded test subject to judge; 3.) when mounted on the head, it gives the greatest swivel angle, and hence was used. The original idea was to have two units mounted on opposite sides of the head to sense the curvature of any object in front; however, when conducting experiments, the pulses from the two sensors were interfering, causing bogus values to be returned. Hence the idea of having two sensors working simultaneously was abandoned.

    RESULTS:

    1.) Swivel angle covered by the system: 195°

    2.) Values returned by the sensor: graphed in the published paper.

    Additional tests with obstacles: the blindfolded test subject was introduced to an environment with obstacles, a normal car-parking area. He was able to detect obstacles as far away as 3 metres (in an unknown environment). Initial disorientation was observed. Experimental testing exposed flaws in the system, such as occasional inaccurate values returned by the sensor, problems with detecting soft objects, and sometimes bursts of incomprehensible noise. The noise was primarily produced by pointing the sensor towards objects that were rapidly shifting in position, or towards many objects at a faraway distance (caused by the beam angle). The tests also produced some unexpected results, like being able to detect guide rods.

  • Experimentation with Project AWAAZ

    Debargha Ganguly • 05/21/2017 at 12:50 • 0 comments

    The feasibility of using gesture mapping was checked first. Sign languages use a lot of different parameters while communicating, which would create a lot of parameters and sensor data to process (30+ locations would have to be mapped at once). The amount of data was becoming so large that low-cost microprocessors couldn't process it seamlessly, and it was significantly increasing the cost of the device. I initially wanted to use flex sensors, because they are easier to integrate into wearable electronics; however, since accurate character reproduction is required, plotting and tracking movements on the X-Y-Z axes requires powerful processors that further increase the cost of the project. A system with flex sensors and powerful enough processors would cost over 5,000 Rupees, well above what the target demographic has access to or would be willing to spend. Flex sensors also generate a lot of sensor noise, which would have reduced the accuracy of the device. Tactile momentary switches are much better because they are much cheaper, easily replaceable, and extremely easy to work with, producing predictable results (debounce time of 20 ms).

    While coding, using the serial functions would seem the easier option, but it would display the results in a separate terminal window, which is highly inconvenient for any text-to-speech engine to read. The Keyboard library functions were used instead because they provide HID device functionality, solving several problems at once. Arduinos can also be fitted with shields that add text-to-speech capabilities, but this isn't feasible because it requires preprogrammed text strings to be sent, and it would be impractical too, as the shield costs almost double the price of most lower-end smartphones.

    The device is able to log 60-120 keystrokes per minute. Although this is fast for an AAC device, it still isn't as fast as normal speech. If required, the user can use third-party text-prediction services like SwiftKey for auto-correction, vastly increasing the speed, including being able to type words with a single keystroke. The Arduino code is written in such a way that the software detects it as a physical keyboard, and hence it is compatible with most auto-correct engines. The system was also checked with a third-party keyboard app that detected it as a hardware keyboard (SwiftKey), and it was found that typing with it took 67.82% of the time needed without it, a substantial saving for daily usage. A Bluetooth or Wi-Fi based data-transfer system wasn't used because it would require an additional power source, whereas a USB OTG cable allows the charge to be drawn directly from the phone battery. This also removes problems like range restrictions on the connection.
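    The 20 ms debounce mentioned above, as a minimal sketch (the pin number is illustrative):

        // Illustrative 20 ms debounce for one tactile switch.
        const int KEY_PIN = 2;
        const unsigned long DEBOUNCE_MS = 20;
        int lastReading = HIGH;
        int stableState = HIGH;
        unsigned long lastChangeMs = 0;

        void setup() {
          pinMode(KEY_PIN, INPUT_PULLUP);
          Serial.begin(9600);
        }

        void loop() {
          int reading = digitalRead(KEY_PIN);
          if (reading != lastReading) lastChangeMs = millis();  // still bouncing
          if (millis() - lastChangeMs > DEBOUNCE_MS && reading != stableState) {
            stableState = reading;                              // settled for 20 ms
            if (stableState == LOW) Serial.println("key pressed");
          }
          lastReading = reading;
        }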

  • Problem statement

    Debargha Ganguly • 05/21/2017 at 12:44 • 0 comments

    The WHO projects that 285 million people are estimated to be visually impaired worldwide: 39 million are blind and 246 million have low vision. About 90% of the world's visually impaired belong to the low-income group. Scientists are trying to make eyes for people who are blind; some have even tried cathode-ray implants inside the brain, but these are extremely expensive, provide very little vision, and the procedures are invasive. But what if we use another, unconventional sense for sight? Bats can do it, dolphins can, so why can't we? Echolocating animals emit calls out to the environment and listen to the echoes of those calls that return from various objects near them. They use these echoes to locate and identify objects and obstacles. Since about 90% of the world's visually impaired live in low-income settings, they can't afford anything but a walking stick, which can't detect objects outside a 0.5 metre range or anything above waist height until they collide with it. Basic spatial awareness is extremely important for any person, and this device tries to solve exactly that problem. When the brain is deprived of input from one sensory organ, it can change in such a way that it augments other senses, a phenomenon called cross-modal neuroplasticity.

    1.4 percent of the population suffers from some sort of speech disorder, and they experience a lower life expectancy in part due to the lack of means of expression. The lives of millions of people can be improved by giving them a means of communicating. Most literate people now have smartphones, which are essentially processing powerhouses. A huge percentage of people from the middle and lower classes of developing countries are unable to buy standard speech-production devices because they retail at thousands of dollars. Most low-cost prototypes either have high standalone costs (for just the system) or restrict motion and aren't accurate enough for day-to-day usage. The lowest-priced prototype right now costs INR 12,000+ and the lowest-priced commercial alternative costs INR 30,000+.



Discussions

Les Hall wrote 05/27/2017 at 02:04

I'm a real fan of @Debargha Ganguly's work here.  I am looking forward to helping with his projects in November when I will be available.   


Frank Buss wrote 05/26/2017 at 21:22

Hi @Debargha Ganguly. As promised, I wrote a better Arduino script. This is your original code from your paper: 

https://gist.github.com/FrankBuss/d78911fae66d83bbf234b28f27b97460

It has several problems. I rewrote it completely with interrupts. The improvements:

- the measurement interval is fixed at 16 Hz, so that it has a 60 ms measurement cycle, as suggested by the datasheet of the HC-SR04

- the output tone is continuous and with fewer distortions than your version (there are still some minor distortions when the tone value is changed, but this would require more work to make it perfect)

- on timeout, the speaker is disabled

- the main loop is free to do other things; all measurements and the speaker output are done in the interrupt, which might be useful for your speech output, to put the distance data out at a slower rate than what you hear in the speaker

- high end of the frequency is changed to 5 kHz for 4 m, instead of 15 kHz, for a wider range

- the resolution for the distance is changed from cm to mm

The new version:

https://gist.github.com/FrankBuss/9c33a78764c518cf0c8713ea7d71c646

I have a few of these distance sensors, let me know if you want a stereo version of it. I tested my version on an Arduino Nano, which you can get for cheap from eBay.

PS: It might be better to use an exponential curve instead of the simple Arduino map function for the tone output, and maybe the closer it gets, the higher the tone should get. And you should create a GitHub repository, if not already done, for source code and version management.

