My daughter, Lorelei, was diagnosed with Acute Flaccid Myelitis, a rare polio-like syndrome, and lost the use of her arm. Traditional treatments would have given her about a 5% chance of recovery, and state-of-the-art treatments were either closed to us or too costly. Without any experience, we decided to tackle this challenge by setting up an open project and reaching out to experts for help.
[Update] Lorelei is now able to move her arm again! It has regained about 50% of its strength; however, her shoulder remains paralyzed.
We are looking for help developing open-source code for this project.
We aim to develop an open-source platform for detecting muscular activity signals, recorded non-invasively on the skin's surface, and using them for exoskeleton control.
The muscle signals will be recorded using the Myo Armband, and processing/exoskeleton control will take place on a Raspberry Pi. The goal is to detect 2 to 5 patterns from the Myo signals using machine learning / pattern recognition methods.
Many open-source libraries are available and can be used in the project subject to minor modifications. Guidance and support will be provided.
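As a rough illustration of what that detection could look like, here is a minimal sketch (not the project's actual code) that classifies windowed EMG data using standard time-domain features and scikit-learn. The data here is synthetic and the gesture definitions are hypothetical; a real version would feed in windows from the Myo Armband's 8-channel, 200 Hz EMG stream instead.

```python
import numpy as np
from sklearn.svm import SVC

FS = 200        # the Myo Armband streams 8-channel EMG at 200 Hz
N_CH = 8

def features(window):
    """Standard time-domain EMG features per channel:
    mean absolute value (MAV) and waveform length (WL)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])            # 16 features per window

# Synthetic stand-in data: two "gestures" with different channel activity.
rng = np.random.default_rng(0)
def fake_window(gesture):
    scale = np.ones(N_CH)
    if gesture == 1:
        scale[:4] = 3.0                         # gesture 1 activates channels 0-3
    return rng.normal(0, scale, (FS, N_CH))     # one second of samples

X = np.array([features(fake_window(i % 2)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = SVC(kernel="rbf")
clf.fit(X[:150], y[:150])
print(f"held-out accuracy: {clf.score(X[150:], y[150:]):.2f}")
```

The same pipeline extends to more classes by recording labeled windows for each of the 2 to 5 target patterns.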
We printed out our first attempt at the arm brace. We decided to use PLA plastic, as it would allow us to print the brace flat and then mold it around Lorelei's arm after heating it. To make that easier, and to make sure I didn't burn Lorelei, we made casts of her upper arm and forearm and molded the brace around those instead. It worked really well!
We have been testing the arm brace design by printing it out on paper. Now that we have a design that should work well, we have sent it off to the 3D printer and will hopefully get it by the weekend.
Super excited today! Lorelei and I figured out how to reliably get a signal from her arm to control the robotic prosthetic! (Disclaimer: the video quality and lighting are poor, but I was just excited to quickly show you all.)
Background: Up until now we have been struggling to get an accurate signal from her arm to control the robotics. We are looking for the signal that is sent from Lorelei's brain to her arm, but because of the motor neuron damage, the signal is really weak. The sensors we used until now essentially filtered and normalized the raw signal to isolate spikes within it. When a spike reached a given amplitude (threshold), we would trigger the robotic actuator to move the prosthetic arm, thereby helping Lorelei move it. However, given the weakness of the signal in my daughter's muscles, we had to set the threshold really low, which in turn caused the robotic actuator to be triggered by signals from her heart or other muscles in her arm or body. This traditional approach became a significant roadblock for our project.
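The threshold approach described above can be sketched roughly like this (a toy illustration on synthetic data, not the sensor's actual firmware): rectify the raw signal, smooth it into an envelope, normalize, and trigger whenever the envelope crosses a fixed threshold.

```python
import numpy as np

def detect_spikes(raw, fs=200, window_ms=100, threshold=0.3):
    """Threshold-trigger sketch: rectify the raw EMG signal, smooth it
    into an envelope, and trigger wherever the (normalized) envelope
    exceeds a fixed threshold."""
    rectified = np.abs(raw)                        # full-wave rectification
    win = max(1, int(fs * window_ms / 1000))       # smoothing window in samples
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    envelope = envelope / (envelope.max() + 1e-9)  # normalize to strongest activity
    return envelope > threshold                    # boolean trigger per sample

# Toy signal: quiet background noise with one strong burst in the middle.
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.05, 600)
signal[250:350] += rng.normal(0, 1.0, 100)         # simulated muscle burst

triggers = detect_spikes(signal)
```

The failure mode we hit is visible in the `threshold` parameter: lowering it far enough to catch a very weak burst also lets heartbeat and crosstalk from other muscles cross the line.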
I was thinking that there must be a better way... Consider the approaches used in image and voice recognition: we can train algorithms to recognize objects in pictures (this is how Google can tell you what is in your photos) or to recognize speech (like Siri and Google Now). In both examples, the algorithms are trained on vast amounts of data; the data is not filtered or normalized away. What if we did the same with Lorelei's signals? Could we not use a similar machine learning technique to look at the unfiltered, full signal that Lorelei's arm produces and have an algorithm learn to recognize the hidden signals within that noise?
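A toy sketch of that idea, using synthetic data in place of real recordings: a weak but repeatable "intent" pattern (a hypothetical stand-in for the nerve signal) is buried in noise larger than the pattern itself, and a classifier trained on the raw, unfiltered windows learns to detect it where a simple threshold could not.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A weak, repeatable "intent" pattern buried in noise larger than itself.
rng = np.random.default_rng(1)
pattern = 0.8 * np.sin(np.linspace(0, 6 * np.pi, 200))

def make_window(active):
    noise = rng.normal(0, 1.0, 200)      # heart, other muscles, sensor noise
    return noise + pattern if active else noise

X = np.array([make_window(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

# Train on the raw, unfiltered windows -- no thresholding, no normalization.
clf = LogisticRegression(max_iter=1000)
clf.fit(X[:300], y[:300])
print(f"held-out accuracy: {clf.score(X[300:], y[300:]):.2f}")
```

The classifier succeeds because the pattern is consistent across windows even though any single sample is dominated by noise, which is exactly the property a per-sample amplitude threshold cannot exploit.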
It turns out there is a company, CoApt, that is doing something really similar. After a few emails and a call, they kindly sent us an evaluation unit. After placing 17 sensors on her arm and a bit of calibration, we got it to work: we were able to control the virtual arm you can see in the video. This is so amazing! Really amazing stuff, CoApt. Thank you!