
Eye-to-Speech Module

We want to help people who are unable to communicate verbally and cannot use sign language by providing a wearable “eye-to-speech” solution.


1. Introduction

As part of our student project Speak4Me, we want to help people who are unable to communicate verbally and suffer from severe physical disabilities by providing a wearable “eye-to-speech” solution that enables basic communication. Using our device, users can express themselves through customizable phrases organized in situational and self-defined profiles, giving them up to 128 phrases to communicate with their environment.

Through simple eye movements in defined directions and within defined time frames, Speak4Me outputs audible predefined phrases and thereby enables basic communication. A simple web interface allows the creation of profiles without any limitation on phrase complexity or length.
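
To illustrate the principle, here is a minimal Arduino-style sketch of how a gesture detected inside a time window could select a phrase and hand it to a serial text-to-speech module such as the Emic 2. Everything in it is an illustrative assumption rather than the actual Speak4Me firmware: the window length, the phrase table, the readGazeDirection() stub, and the use of Serial1 (which requires a board with a second hardware serial port; on an Uno you would use SoftwareSerial instead).

```cpp
// Illustrative sketch only - not the actual Speak4Me firmware.
// An eye gesture detected within a fixed time window selects a phrase,
// which is then spoken by a serial text-to-speech module (e.g., Emic 2).

const unsigned long GESTURE_WINDOW_MS = 1500;  // assumed time frame per gesture

// Gaze directions as plain indices into the phrase table.
const int CENTER = 0, LEFT = 1, RIGHT = 2, UP = 3;

// One small example profile; the real firmware organizes 128 phrases
// across several profiles.
const char* PHRASES[] = {
  "",                      // CENTER: no output
  "I am thirsty",          // LEFT
  "I am in pain",          // RIGHT
  "Please call the nurse"  // UP
};

int readGazeDirection() {
  // Placeholder: the real device derives this from the reflectance
  // sensors aimed at the eye.
  return CENTER;
}

void speak(const char* text) {
  Serial1.print('S');   // Emic 2 "say" command
  Serial1.print(text);
  Serial1.print('\n');  // command terminator
}

void setup() {
  Serial1.begin(9600);  // Emic 2 default baud rate
}

void loop() {
  unsigned long start = millis();
  int gesture = CENTER;
  // Accept the first non-center gaze seen inside the time window.
  while (millis() - start < GESTURE_WINDOW_MS) {
    int g = readGazeDirection();
    if (g != CENTER) { gesture = g; break; }
  }
  if (gesture != CENTER) {
    speak(PHRASES[gesture]);
    delay(500);  // short pause before listening for the next gesture
  }
}
```

In a scheme like this, changing what the device can say only means editing the phrase table, which is what our no-code customization tool is meant to automate.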

Following a simple step-by-step video tutorial, Speak4Me can be built for around $100, far below the cost of existing solutions. All product-related information, including the required materials and construction manuals, is provided via a central homepage.

Project GitHub page including step-by-step video tutorials and the no-code customization tool: https://speak4me.github.io/speak4me/

1.1 Motivation

The UN has defined 17 Sustainable Development Goals (SDGs), which all UN member states adopted in 2015. The main focus of the SDGs is to create a shared blueprint for more prosperity for people and the environment by 2030 (cf. UN, 2015). Speak4Me embraces SDG 10, which aims to reduce inequalities within and among countries, especially in poorer regions. Within this SDG, our solution can bring value to people with speech-related disabilities by giving them a way to communicate. Worldwide, over 30 million people are mute and therefore unable to communicate using their voice (cf. ASHA, 2018).

Our main focus is patients with serious mental and/or physical disabilities who are not only verbally mute but also unable to communicate through gestures and other means. This includes patients with ALS, apraxia, and other degenerative diseases that lead to a gradual loss of control over body functions, as well as patients with spinal damage that makes communication via body language impossible. Birth defects, damage to the vocal cords, accidents that injure the relevant organs, and many other conditions can also lead to muteness.

Physical muteness is rarely an isolated condition. Most commonly it is the result of other underlying conditions such as deaf-muteness, which is the most frequent reason for people being unable to communicate verbally (cf. Destatis, 2019).

Speak4Me could also help patients with temporary conditions. In Germany alone, over 250,000 people suffer a stroke every year (cf. Destatis, 2019). During recovery, our solution can help severely affected patients communicate with their environment, which might otherwise be impossible. Other nursing cases, such as patients with locked-in syndrome, could also benefit, as only control of the eyes is required.

1.2 Objectives

Speak4Me aims to provide an affordable, customizable device that is easy to build and use, enabling handicapped people to communicate with their environment. Language synthesizers and speech computers exist, but they are very expensive, which can make them unaffordable depending on socio-economic background, including the country of residence. Our target is to deliver a solution below $100 in total cost, to reach as many affected people in the world as possible. By providing blueprints and the code base of the entire solution, we want to encourage others to build upon our work and improve or adapt it.

1.3 Background

Our entire solution is based on the open Arduino platform, which is built around standardized hardware and a simple programming language. Previous projects based on the Arduino platform have shown promising results in eye tracking (cf. Arduino, 2018). Using infrared sensors attached to an ordinary pair of glasses, a project team was able to visualize eye movement on LEDs arranged...
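
As background on the sensing technique: a QTR-1RC-style reflectance sensor is read by charging its small capacitor and timing how quickly it discharges through the phototransistor; the more infrared light is reflected back, the faster the decay. The sketch below shows this standard read procedure in minimal form; the pin number and timeout are assumptions, not values from our build.

```cpp
// Reading a single QTR-1RC reflectance sensor by timing its RC decay.
// A shorter decay time means more reflected infrared light.
// Pin number and timeout are illustrative assumptions.

const int SENSOR_PIN = 2;
const unsigned long TIMEOUT_US = 3000;  // treat anything slower as "dark"

unsigned long readReflectance(int pin) {
  pinMode(pin, OUTPUT);
  digitalWrite(pin, HIGH);   // charge the sensor's capacitor
  delayMicroseconds(10);
  pinMode(pin, INPUT);       // capacitor discharges through the phototransistor
  unsigned long start = micros();
  while (digitalRead(pin) == HIGH) {
    if (micros() - start > TIMEOUT_US) break;
  }
  return micros() - start;   // decay time in microseconds
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(readReflectance(SENSOR_PIN));
  delay(100);
}
```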


3d-printing-stl-files.zip

All required 3D printing files you need to print or order components, to better adjust the sensors on your glasses and to get a custom-fit box for the Arduino and the circuit board. For more information, check our DIY video tutorial: https://speak4me.github.io/speak4me/step-1-what-you-need.html

Zip Archive - 111.27 kB - 08/07/2021 at 21:06


Gerber.zip

All required Gerber files, if you would like a professionally designed circuit board instead of messy cables. For more information, check our DIY video tutorial: https://speak4me.github.io/speak4me/step-1-what-you-need.html

Zip Archive - 34.76 kB - 08/07/2021 at 21:06


architectural_overview_EUeKgRyTmt.png

Architecture diagram of our solution

Portable Network Graphics (PNG) - 457.22 kB - 08/07/2021 at 21:04


arduino_code.ino

The code for the Arduino. To customize your expressions, check out the customization tool on our project page; you do not need any coding skills to define your own phrases. #noCode (A sketch of one possible phrase-table layout follows this file listing.)

ino - 26.31 kB - 08/07/2021 at 21:04

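To give an impression of how 128 phrases can fit on a small microcontroller, here is one possible layout as a string table in flash memory. This is an illustrative sketch only: the actual table is defined in arduino_code.ino and generated by the customization tool, and the 8 × 16 profile/slot split is an assumption.

```cpp
// Illustrative layout only - the real table lives in arduino_code.ino and
// is generated by the no-code customization tool. Assuming 8 profiles with
// 16 phrase slots each (8 x 16 = 128), stored in flash (PROGMEM) to spare
// the Arduino's limited RAM.

#include <avr/pgmspace.h>

const int NUM_SLOTS = 16;  // assumed phrases per profile

const char p0s0[] PROGMEM = "Yes";
const char p0s1[] PROGMEM = "No";
const char p0s2[] PROGMEM = "I am thirsty";
// ... one string constant per slot ...

const char* const PHRASE_TABLE[] PROGMEM = { p0s0, p0s1, p0s2 /* ... */ };

// Copy a phrase out of flash into RAM before use.
void getPhrase(int profile, int slot, char* buf, size_t len) {
  const char* addr =
      (const char*)pgm_read_ptr(&PHRASE_TABLE[profile * NUM_SLOTS + slot]);
  strncpy_P(buf, addr, len - 1);
  buf[len - 1] = '\0';
}

void setup() {
  Serial.begin(9600);
  char buf[64];
  getPhrase(0, 2, buf, sizeof(buf));  // profile 0, slot 2 -> "I am thirsty"
  Serial.println(buf);
}

void loop() {}
```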

  • 3 × QTR-1RC Reflectance Sensor
  • 1 × Parallax Emic 2 Text-to-Speech Module
  • 1 × 3.5mm angle/straight jack cable (AUX, 1.5 m)
  • 1 × Insulated braided copper wire (3 m)
  • 1 × USB 2.0 cable (EASY male A > male mini-B, 0.5 m)

View all 14 components



Discussions

nick vuono wrote 08/31/2021 at 21:21

Thank you for submitting this. I'd tried looking into similar technology a couple of years ago but didn't have a lot of luck.

One thing to consider as a point of extension is using the eye movement as input to a “switch control” device that could drive existing AAC speech apps on iOS or Android devices. Most people don't know how easy it is to go to iOS Settings > Accessibility > Switch Control and start using switch-control inputs.

The simplest implementation of a Bluetooth “switch control” device is just a Bluetooth keyboard input. You would simply map eye-movement commands to keypresses: 7 for “move to previous item”, 8 for “move to next item”, and 9 for “tap”.
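
A minimal sketch of that idea, assuming an ESP32 together with the third-party ESP32-BLE-Keyboard library (the gesture input is a placeholder):

```cpp
// Sketch of the switch-control idea above: appear as a Bluetooth keyboard
// and send '7' / '8' / '9' so iOS Switch Control can use eye gestures as
// switches. Assumes an ESP32 and the third-party ESP32-BLE-Keyboard
// library (https://github.com/T-vK/ESP32-BLE-Keyboard).

#include <BleKeyboard.h>

BleKeyboard bleKeyboard("Speak4Me Switch");

const int CENTER = 0, LEFT = 1, RIGHT = 2, UP = 3;

int readGazeDirection() {
  return CENTER;  // placeholder: would come from the eye-tracking sensors
}

void setup() {
  bleKeyboard.begin();
}

void loop() {
  if (bleKeyboard.isConnected()) {
    int g = readGazeDirection();
    if (g == LEFT)       bleKeyboard.write('7');  // move to previous item
    else if (g == RIGHT) bleKeyboard.write('8');  // move to next item
    else if (g == UP)    bleKeyboard.write('9');  // tap
  }
  delay(300);  // crude debounce between gestures
}
```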

