Initial System Design

A project log for Onero - Sign Language Translation Device

The focus of this project is to build a low-cost, minimalist sensor glove that can be used to translate Sign Language for the Deaf community.

04/24/2016 at 21:17


The hardware used in this project will grow and evolve as the design improves. It will be developed in three main stages: proof of concept, prototype, and finally (if time and budget allow) custom boards.

Proof of concept

Unfortunately due to budget constraints and the weak South African Rand, I am limited to creating a single glove at this point in time. The proof of concept will be made using off the shelf components and will most likely be held together with duct-tape. I am hoping to have access to a 3D printer to create housings for the components.

The basic components that I will use for the proof of concept are as follows:

Microcontroller: Teensy LC

The Teensy will do some basic processing, handle communications (both Bluetooth and collecting data from the sensors), and manage power. The Teensy is great because it is powerful, low cost, and can be programmed using the Arduino IDE, which will help to speed up prototyping.

Inertial Measurement Unit: 6DoF IMU board (LSM6DS33)

Six IMUs will be used in this device: one for each finger and one at the back of the hand. Ideally 9DoF devices would be used to help reduce drift in the gyro readings, but at this time they are out of the budget. The LSM6DS33 was chosen for its very low power consumption and “Always On” feature, which will be very useful in this application.
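To give an idea of how gyro drift could be tamed without the magnetometer of a 9DoF part, here is a quick host-side Python simulation of a complementary filter, a common fusion trick for 6DoF IMUs: the gyro tracks fast motion but drifts, while the accelerometer's gravity angle is noisy but drift-free. The sample rate, bias value and blend factor below are assumptions for the simulation, not measured values:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend gyro integration (fast, drifts) with the accelerometer angle (noisy, drift-free)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Simulate a stationary finger: true angle 30 deg, gyro reads a constant 0.5 deg/s bias.
dt = 0.01            # 100 Hz sample rate (assumed)
true_angle = 30.0
gyro_bias = 0.5

fused = 30.0
integrated = 30.0    # pure gyro integration, for comparison
for _ in range(1000):                    # 10 seconds of samples
    gyro_rate = 0.0 + gyro_bias          # stationary, so the gyro reads only its bias
    accel_angle = true_angle             # accelerometer sees gravity correctly
    integrated += gyro_rate * dt
    fused = complementary_filter(fused, gyro_rate, accel_angle, dt)

print(integrated)  # drifts about 5 degrees over the 10 seconds
print(fused)       # held near 30 by the accelerometer term
```

The pure integration walks away from the true angle while the filtered estimate stays pinned near it, which is the behaviour the glove needs for slow, held finger positions.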

Communication: Bluefruit EZ-Link Bluetooth Module

Bluetooth works great because it is low power and can communicate easily with both PCs and smartphones. This is important as most of the processing is going to be done off the device in order to save power and reduce cost.
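The EZ-Link presents itself as a plain serial stream, so the glove will need some framing so the receiver can find sample boundaries and spot corruption. Here is a rough Python sketch of one possible frame: a start byte, a sensor id, six 16-bit readings (accel and gyro axes) and a checksum. The layout is a placeholder I made up for illustration, not a final protocol:

```python
import struct

START = 0xAA  # hypothetical start-of-frame marker

def pack_sample(sensor_id, ax, ay, az, gx, gy, gz):
    """Frame one IMU sample: start byte, sensor id, six int16 readings, checksum."""
    payload = struct.pack('<B6h', sensor_id, ax, ay, az, gx, gy, gz)
    checksum = sum(payload) & 0xFF
    return bytes([START]) + payload + bytes([checksum])

def unpack_sample(frame):
    """Return (sensor_id, readings) or None if the frame is corrupt."""
    if len(frame) != 15 or frame[0] != START:
        return None
    payload, checksum = frame[1:-1], frame[-1]
    if sum(payload) & 0xFF != checksum:
        return None
    fields = struct.unpack('<B6h', payload)
    return fields[0], fields[1:]

frame = pack_sample(2, 100, -50, 16384, 3, -7, 12)
print(unpack_sample(frame))  # → (2, (100, -50, 16384, 3, -7, 12))
```

At 15 bytes per sample, six sensors at 100 Hz would need roughly 9 KB/s, comfortably inside what the serial-over-Bluetooth link can carry.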

The total came to R2305, which is about $162. This is the bare minimum for one of the gloves. As mentioned, this is for the proof of concept, so the device will be tethered to a desktop power supply. Once the design has proven successful, the necessary components will be added to cut the tether and a fully fledged prototype will be tested.


The prototype will be an extension of the proof of concept and will need to meet the criteria laid out in the previous section. It is likely that there will be several iterations of the prototype by the end of this project. As those iterations are created I will be sure to update this section.


The device will be programmed using the Arduino IDE, and I will use available open-source libraries instead of trying to reinvent the wheel. The goal is to get up and running as soon as possible, with modular code that will be easy to maintain and change as the project evolves.

The repo will be made available as soon as the basics are up and running. Any comments or advice are always welcome! I am happy to hear constructive criticism because I am eager to learn something new.

As I see it, the firmware will be broken up into three main functions: calibrating the device, capturing the user data, and transferring the data to another device.
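As a taste of what the calibration step could look like, here is a minimal Python sketch of gyro bias calibration: average readings taken while the glove is held still, then subtract that bias from live readings. The sample values are made up for illustration:

```python
def calibrate_gyro(samples):
    """Average per-axis readings taken while the glove is held still to estimate bias."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

def apply_calibration(reading, bias):
    """Subtract the estimated bias from a live (x, y, z) reading."""
    return tuple(r - b for r, b in zip(reading, bias))

# Two stationary readings (deg/s) — a real run would average hundreds.
samples = [(0.4, -0.1, 0.2), (0.6, -0.3, 0.0)]
bias = calibrate_gyro(samples)               # ≈ (0.5, -0.2, 0.1)
corrected = apply_calibration((1.5, -0.2, 0.1), bias)  # ≈ (1.0, 0.0, 0.0)
print(bias, corrected)
```

The same averaging idea extends to the accelerometers (holding the hand flat against gravity), and the offsets can be stored so calibration only needs repeating occasionally.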

There will also need to be software that can receive and store data from the Onero device. Once this is done, the software needs to use machine learning in order to translate the data into words. A GUI would be useful to help visualise the data as well as to display it as a digital hand.

Gesture Recognition Approach

Raw Data: The data needs to be collected in a precise and organised manner. It will require several fluent signers as well as a few learner signers to ensure that the models can generalise well. Each signer will have to do several sets of tests. These tests will start with simple static gestures (like the alphabet) and then move on to dynamic gestures. The dynamic gestures will also start with single words and then move on to full sentences and conversations. The collection of data will be the most time-consuming part of this project and will need to be very clearly thought out in order to make sure that the project is a success.
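To keep that collection organised, each recording could carry its signer id, fluency level and gesture label, making gaps in the dataset easy to spot. A minimal Python sketch, where the field names and id scheme are my own placeholders:

```python
from dataclasses import dataclass, field

@dataclass
class Recording:
    signer: str          # anonymised signer id, e.g. "S01" (placeholder scheme)
    fluency: str         # "fluent" or "learner"
    gesture: str         # label being performed, e.g. "A" for an alphabet sign
    samples: list = field(default_factory=list)  # one frame of IMU readings per time step

def summarise(recordings):
    """Count recordings per (gesture, fluency) pair so gaps in the dataset stand out."""
    counts = {}
    for r in recordings:
        key = (r.gesture, r.fluency)
        counts[key] = counts.get(key, 0) + 1
    return counts

recs = [
    Recording("S01", "fluent", "A"),
    Recording("S01", "fluent", "A"),
    Recording("S02", "learner", "A"),
]
print(summarise(recs))
```

A summary like this makes it obvious when, say, learner data for a gesture is missing before training ever starts.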

Phonemes: A phoneme is a single "unit" of sound that has meaning in a language. There are 44 phonemes in the English language, and by using different combinations of these phonemes you can recreate every word that exists. This is the approach I want to take in order to translate Sign Language. The idea is that every gesture can be broken down into basic units, and if you can recognise the different basic components, you can group them together to recognise any word in Sign Language. For example, there is a limited number of finger positions (e.g. thumbs up) and only so many ways you can move your hand (back and forth, short arc, long arc, spiral, etc.).
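To make the idea concrete, here is a toy Python sketch of the lookup step: once classifiers can label the basic units (a handshape plus a movement), a word is just a sequence of those units looked up in a lexicon. The unit names and lexicon entries below are invented placeholders, not real signs:

```python
# Hypothetical inventories of basic units — the "phonemes" of a sign:
HANDSHAPES = {"fist", "flat", "thumb_up", "index_point"}
MOVEMENTS = {"hold", "short_arc", "long_arc", "spiral", "back_forth"}

# A sign is a sequence of (handshape, movement) units; the lexicon maps
# those sequences to words. These entries are placeholders for illustration.
LEXICON = {
    (("thumb_up", "hold"),): "good",
    (("flat", "short_arc"), ("fist", "hold")): "thank_you",
}

def recognise(units):
    """Look up a sequence of recognised basic units in the lexicon."""
    for handshape, movement in units:
        if handshape not in HANDSHAPES or movement not in MOVEMENTS:
            return None   # a classifier produced an unknown unit
    return LEXICON.get(tuple(units))

print(recognise([("thumb_up", "hold")]))  # → good
```

The payoff is the same as with spoken phonemes: the classifiers only ever need to learn the small unit inventories, and new vocabulary becomes a lexicon entry rather than a retraining job.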

Translation: This is going to be done using machine learning. A lot has changed in machine learning since I did my original project; the field has grown considerably, so I am going to have to do a lot more research into the best approach. I will start by using my experience with Neural Networks and build from there. I have seen that Hidden Markov Models have been very successful in the past, and I will try them out for this project.
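As a small taste of the HMM approach, here is a standard Viterbi decoder in Python with a toy two-state model: a hand that is either holding still or moving, observed through a noisy speed measurement. All the probabilities are made up for illustration:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an observation sequence."""
    # V[t][s] = (best probability of reaching state s at time t, previous state)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p) for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return path[::-1]

states = ("hold", "move")
start_p = {"hold": 0.6, "move": 0.4}
trans_p = {"hold": {"hold": 0.7, "move": 0.3}, "move": {"hold": 0.3, "move": 0.7}}
emit_p = {"hold": {"low": 0.9, "high": 0.1}, "move": {"low": 0.2, "high": 0.8}}

path = viterbi(["low", "low", "high"], states, start_p, trans_p, emit_p)
print(path)  # → ['hold', 'hold', 'move']
```

In the real system the hidden states would be the basic gesture units and the observations would be features extracted from the IMU stream, but the decoding machinery is exactly this.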


This is a very far-off consideration but will be kept in mind while developing the device. As I design the device I will make sure to update this section with any production-based decisions I have made.