As a team, we considered many high-level design choices before building our current system. Building a sign language interpreter is an immensely complex task that could not be perfected in the time we had. Operating under such tight time constraints, we realized early on that we would have to carefully choose which aspects of the project to complete and which to leave as future work. We prioritized functionality that would serve as a strong foundation for further development, so that we can return to this project later and easily improve upon our existing work.
A robust sign language interpreter robot would have two key features:
- Two hands and arms, with full control of all degrees of freedom to closely emulate human arms.
- A robust voice detection system that continuously listens for and detects spoken audio.
It is worth noting that non-verbal communication is just as important in sign language as it is in spoken languages, if not more so. Facial expressions and body language are difficult to emulate accurately in robotics, and would have required far more time, research, and resources than were available to our team.
As the foundation of our project, we chose to design and implement scaled-down versions of both those features:
- One hand, with two degrees of freedom in each digit and one degree of freedom in the wrist.
- A voice detection system capable of capturing three seconds of audio.
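To make the three-second capture concrete: the buffer size works out to sample rate × duration frames. Below is a minimal sketch, assuming a 16 kHz, 16-bit mono stream; the capture function returns a silent placeholder buffer, since real input would come from the microphone driver or a capture library rather than anything shown here.

```python
import wave

SAMPLE_RATE = 16000   # assumed capture rate (16 kHz mono)
DURATION_S = 3        # fixed three-second listening window
SAMPLE_WIDTH = 2      # 16-bit PCM samples

def capture_window():
    """Return a fixed three-second audio buffer.

    Placeholder: returns silence. A real implementation would read
    SAMPLE_RATE * DURATION_S frames from the microphone instead.
    """
    n_frames = SAMPLE_RATE * DURATION_S
    return bytes(n_frames * SAMPLE_WIDTH)  # zeroed PCM = silence

def save_window(frames, path="window.wav"):
    """Write the captured buffer to a WAV file for later recognition."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(SAMPLE_WIDTH)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(frames)

frames = capture_window()
save_window(frames)
```

A fixed-length window like this keeps the downstream recognition step simple: every capture produces exactly 48,000 frames, so no variable-length handling is needed.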
When deciding to implement those two features, we discussed many alternatives:
- Visually displaying detected words on a screen.
- Displaying a simulated hand or hands on a screen, which would sign the detected audio.
- Using a web server or the command line for the user to type in words or letters to be signed by the robotic hand.
- Using a web server with a list of available signs for the user to select words or letters to be signed by the robotic hand.
Each alternative design captures only one of the two key features of a sign language interpreter described above. For this reason, we split our project into two parts: the robotic hand and the audio detection system.
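With the hand and audio detection separated, a natural interface between the two parts is a table of joint poses keyed by detected letters. A minimal sketch, assuming the 11 degrees of freedom described above (two per digit plus the wrist); the joint names and angle values here are purely illustrative, not calibrated to any hardware:

```python
# Hypothetical pose table; names and angles are illustrative only.
# The hand has 11 degrees of freedom: two per digit, one in the wrist.
DIGITS = ["thumb", "index", "middle", "ring", "pinky"]

def make_pose(curl_angles, wrist_angle):
    """Build a pose dict mapping each joint to a servo angle in degrees.

    curl_angles: one (base_deg, tip_deg) pair per digit, in DIGITS order.
    """
    pose = {}
    for digit, (base, tip) in zip(DIGITS, curl_angles):
        pose[f"{digit}_base"] = base
        pose[f"{digit}_tip"] = tip
    pose["wrist"] = wrist_angle
    return pose

# Example letter poses (angle values are made up for illustration):
SIGNS = {
    # "A": four fingers curled, thumb alongside
    "A": make_pose([(10, 20), (90, 90), (90, 90), (90, 90), (90, 90)], 0),
    # "B": four fingers straight, thumb folded across the palm
    "B": make_pose([(80, 45), (0, 0), (0, 0), (0, 0), (0, 0)], 0),
}

def sign_word(word):
    """Yield the sequence of poses needed to fingerspell a word."""
    for letter in word.upper():
        if letter in SIGNS:
            yield SIGNS[letter]
```

Keeping the pose table as plain data means the audio side only has to emit letters; the hand side can be tuned or recalibrated independently.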