Interpreting EMG signals was harder than we initially anticipated

A project log for uEMG - small 4-channel EMG wearable device

A 4-channel EMG wearable (with a bracelet!) to control stuff with it.

Olya Gry • 10/24/2021 at 21:06

Simple methods, from thresholding on individual channels, to a neural network processing raw input data, to PCA calculated over the upper part of the spectrum, failed miserably, providing around a 60% recognition rate on a humble set of 4 gestures. It became clear that something more sophisticated was necessary, but we desperately wanted to keep the required calculations to a minimum (so that in the worst case one core of a weak laptop could handle it, and ideally simple enough to run on-board).

After some trials, we ended up with an unusual method of visualizing the data. We have 4 channels, and on each channel most of the EMG information is located in the 2 upper bins of an 8-point FFT calculated on-board. So we have 8 numbers, two for each channel. Representing each channel as a point on the XY plane was obvious, and for a more intuitive representation we added offsets to the points, so that when the signal is zero they form a perfect square.
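The mapping above can be sketched in a few lines. This is a minimal illustration, not the device's actual code: the bin-to-axis mapping, the offset layout, and the `SCALE` gain are all assumptions.

```python
import numpy as np

# Per-channel offsets forming a unit square when the signal is zero
# (corner order is an assumption for illustration).
OFFSETS = np.array([[-1.0, -1.0],
                    [ 1.0, -1.0],
                    [ 1.0,  1.0],
                    [-1.0,  1.0]])

SCALE = 0.05  # assumed gain from FFT magnitude to plane units


def channels_to_points(fft_bins):
    """fft_bins: shape (4, 2) -- the two upper FFT magnitudes per channel,
    used as the point's X and Y displacement from its channel offset.
    Returns shape (4, 2): one XY point per channel."""
    fft_bins = np.asarray(fft_bins, dtype=float)
    return OFFSETS + SCALE * fft_bins


# With zero signal, the four points sit exactly on the reference square:
points = channels_to_points(np.zeros((4, 2)))
```

Muscle activity then pushes each corner away from its rest position, so the square shifts and deforms with the signal.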

At this point it became interesting. With the right scaling, different combinations of muscle activity both shifted this square as a whole and deformed it, in quite distinct ways. Looking at those shapes, it became immediately obvious which gestures could be recognized and which looked the same, so that no matter how complex the machine learning applied to them, the results still wouldn't be great.
But this was only the first step.

After some more experiments, we decided to apply k-means clustering to these data. Creating a proper distance function took some time (Euclidean distance led to unsatisfactory results, since our goal was to separate clusters by shape, not by signal amplitude), but a combination of angle differences and center distance produced really good clusters.
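A shape-sensitive distance along these lines might look as follows. This is a hedged sketch of the idea, not the project's actual function: the exact angle comparison and the `w_angle`/`w_center` weights are assumptions.

```python
import numpy as np


def corner_angles(shape):
    """Angles of the 4 corner points around the shape's own center."""
    shape = np.asarray(shape, dtype=float)
    d = shape - shape.mean(axis=0)
    return np.arctan2(d[:, 1], d[:, 0])


def shape_distance(a, b, w_angle=1.0, w_center=0.5):
    """Distance between two 4-point shapes (arrays of shape (4, 2)):
    sum of wrapped per-corner angle differences plus the distance
    between shape centers. Weights are illustrative assumptions."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    # Wrap angle differences into [-pi, pi] via the complex exponential.
    d_ang = np.abs(np.angle(np.exp(1j * (corner_angles(a) - corner_angles(b)))))
    d_center = np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))
    return w_angle * d_ang.sum() + w_center * d_center
```

Note that scaling a shape about its center changes neither its corner angles nor its center, so a purely louder version of the same gesture gets distance zero, which is exactly the amplitude invariance that Euclidean distance lacks. A caveat: scikit-learn's `KMeans` does not accept custom metrics, so a distance like this needs a hand-rolled assignment/update loop (or a library such as `pyclustering` that supports user metrics).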

This was a big step forward, but on top of it we added signal renormalization using the mean and standard deviation calculated over all currently detected clusters. That approach extracts a lot of information and gives an immediate visual representation of it, which you can see in this video (9 clusters are shown on the right, and the current signal on the left in purple, with a green square drawn as a visual reference).
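The renormalization step can be sketched like this: standardize each incoming feature vector using statistics computed over the current cluster centers, so the representation adapts as new clusters appear. This is an assumed reading of the approach; the names and the per-dimension standardization are illustrative.

```python
import numpy as np


def renormalize(sample, cluster_centers, eps=1e-8):
    """Standardize a feature vector using the mean and standard deviation
    computed over all currently detected cluster centers.

    sample: (d,) feature vector; cluster_centers: (k, d) array.
    eps guards against division by zero when a dimension has no spread."""
    centers = np.asarray(cluster_centers, dtype=float)
    mu = centers.mean(axis=0)
    sigma = centers.std(axis=0)
    return (np.asarray(sample, dtype=float) - mu) / (sigma + eps)
```

Because the statistics come from the clusters rather than from a fixed calibration, the normalization tracks the user's actual repertoire of detected signal shapes.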

Adding an MLP on top of that processing gave much, much better results - but more on that later!