Motion Capture system that you can build yourself

An open hardware-software framework based on inertial sensors that anyone can use to build a human motion capture system.

A couple of years ago I wanted to make a digital performance with a dancer on stage, and wanted to use a mocap suit to capture his movements. There was none available at an affordable price, so I started developing this one.

In the meantime, several cheaper options came out, but those remain out of reach for most users and, more importantly, they work under proprietary licenses. You can’t modify the way they work, or use parts of them in another software project.

As an alternative, this is a motion capture system that can be easily assembled by anyone, so they can start capturing as soon as they have built it. Additionally, it is an open hardware-software framework that can be freely tweaked, enhanced, or used as part of another project.

This project consists of three parts:


Hardware (Sensing unit):

Motion capture is about obtaining the orientation of every body limb or part in real time, as accurately as possible. A simple MEMS IMU device* and freely available sensor fusion algorithms are enough to get a decent result. The problem starts when you want to get data from several devices: most of these devices come with an i2c interface, but their address is fixed in hardware. So one of the building blocks of Chordata is a sensing unit capable of coexisting with several “siblings” on the same bus. At the moment I have developed the “IMU-Proto”, which allowed me to develop the rest of the project. It consists of an LSM9DS0 IMU and a PCA9544A i2c multiplexer. The focus of the whole project is to reduce costs, so all the passive components on the board are through-hole, shifting most of the assembly work from the industrial manufacturer to the final user while saving money in the process.
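
To give an idea of how the hub talks to one node through the multiplexer, here is a minimal sketch in Python (not the project's actual hub code, which is C++): it builds the PCA9544A control byte that selects a downstream channel before a reading. The bus object is simulated so the logic can run anywhere; on the real hub the write would go through Linux's i2c-dev (e.g. via smbus).

```python
MUX_ADDR = 0x70          # PCA9544A base address (assuming A0-A2 tied low)

def mux_control_byte(channel):
    """PCA9544A control register: bit 2 enables the mux,
    bits 1-0 select one of the 4 downstream channels."""
    if not 0 <= channel <= 3:
        raise ValueError("PCA9544A has only 4 channels")
    return 0x04 | channel

class FakeBus:
    """Stands in for smbus.SMBus during testing."""
    def __init__(self):
        self.writes = []
    def write_byte(self, addr, value):
        self.writes.append((addr, value))

def select_sensor(bus, channel):
    """Point the multiplexer at one sensor before reading it."""
    bus.write_byte(MUX_ADDR, mux_control_byte(channel))

bus = FakeBus()
select_sensor(bus, 2)
print(bus.writes)   # [(112, 6)] -> control byte 0x06 selects channel 2
```

Since the control byte must be rewritten before every reading of a different sensor, each switch costs one extra bus transaction, which is part of the bandwidth overhead discussed later.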

Software (Hub):

Getting data from many sensors in real time, processing it, and sending it in an easy-to-read format to a client is not a simple job, so I’m developing software from scratch to deal with it.

It is responsible for:

  • Building a digital model of the physical hierarchy of sensors, initializing the i2c communication on the Hub, and running the configuration routine on each of the sensors.
  • Performing a reading on each of the sensors at the specified refresh rate.
  • Correcting each sensor reading with the deviation obtained in a previous calibration process.
  • Performing sensor fusion on the corrected sensor readings, obtaining absolute orientation information in the form of a quaternion.
  • Sending the orientation data, together with the sensor_id and a timestamp, to the client using an open protocol (such as OSC).
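
As a rough illustration of the last step, this is what one orientation packet could look like, encoded by hand as a raw OSC message. The address pattern and argument layout here are my assumption, not a wire format the project has published:

```python
import struct

def osc_pad(b):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def encode_orientation(sensor_id, timestamp_ms, quat):
    """Pack one reading as an OSC message: address pattern,
    type-tag string, then big-endian arguments."""
    address = osc_pad(b"/chordata/q")     # hypothetical address pattern
    typetags = osc_pad(b",iiffff")        # int32 id, int32 time, 4 floats
    args = struct.pack(">ii4f", sensor_id, timestamp_ms, *quat)
    return address + typetags + args

msg = encode_orientation(3, 120450, (1.0, 0.0, 0.0, 0.0))
print(len(msg))   # 12 + 8 + 24 = 44 bytes
```

In practice a library such as liblo (C++) or python-osc would do this encoding, but the byte layout above is what travels over the wire.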

After several tests I found that an ARM computer running Linux was the best choice to host such a program, so all the development of this part of the software has been done in C++, using a Raspberry Pi 3 as the hub. Some of the advantages of this type of hub, compared with simpler microcontrollers, are:

  • It’s not an expensive component.
  • Programming and debugging is enormously simplified.
  • Some of them, like the Raspberry Pi 3, come out of the box with all the communication peripherals needed for a comfortable capture, notably the WiFi adapter.

The choice of performing the sensor fusion inside the hub is based on:

  • The higher cost of IMU units capable of performing sensor fusion on-chip.
  • The higher accuracy of sensor fusion performed after the raw data has been corrected with a previously obtained calibration.
  • Since the bandwidth of the i2c bus creates a bottleneck in the sensors’ data acquisition, performing the sensor fusion inside the hub doesn’t add significant overhead.
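
For a sense of the per-reading math involved, here is a minimal gyroscope-only quaternion integration step. A real fusion filter (e.g. Madgwick or Mahony) also blends accelerometer and magnetometer data to cancel drift; this sketch is only an illustration, not the hub's actual algorithm:

```python
import math

def integrate_gyro(q, gyro, dt):
    """Advance orientation quaternion q = (w, x, y, z) by angular
    rate gyro = (gx, gy, gz) in rad/s over dt seconds."""
    w, x, y, z = q
    gx, gy, gz = gyro
    # Quaternion derivative: q_dot = 0.5 * q * (0, gx, gy, gz)
    qd = (0.5 * (-x * gx - y * gy - z * gz),
          0.5 * ( w * gx + y * gz - z * gy),
          0.5 * ( w * gy - x * gz + z * gx),
          0.5 * ( w * gz + x * gy - y * gx))
    q = tuple(a + b * dt for a, b in zip(q, qd))
    n = math.sqrt(sum(c * c for c in q))   # re-normalize to unit length
    return tuple(c / n for c in q)

# Rotating at pi/2 rad/s around Z for 1 s (100 small steps) should end
# close to a 90-degree yaw: q ~ (0.707, 0, 0, 0.707)
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(100):
    q = integrate_gyro(q, (0.0, 0.0, math.pi / 2), 0.01)
print(q)
```

Running a few such updates per sensor per frame is cheap for an ARM core, which supports the point above: the i2c reads, not the math, are the limiting factor.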

Software (Client):

This is the least developed part at the moment. It consists of a Python script running in Blender that grabs the quaternion data from OSC and rotates the bones of a 3D armature.

Further development is planned, and the basic client software should be a Blender add-on responsible for:

  • Establishing a handshake with the hub, checking compatibility and status.
  • Communicating the status to the user.
  • Acting as a GUI to run the on-pose calibration procedures and start the capture.
  • Displaying a preview of the capture in real time, and allowing the user to record part of it.
  • Allowing a user with basic Blender experience to create a custom distribution of the sensors on a virtual model of the human body, export it in a structured data format (such as XML), and send it to the hub.
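
The last point could be sketched with Python's standard ElementTree. The element and attribute names below are made up for illustration, since the real schema is not defined yet:

```python
import xml.etree.ElementTree as ET

def layout_to_xml(root_name, children):
    """Serialize a sensor hierarchy given as {parent: [children...]}
    into nested XML elements."""
    def build(name, parent_elem):
        elem = ET.SubElement(parent_elem, "sensor", name=name)
        for child in children.get(name, []):
            build(child, elem)
    root = ET.Element("chordata_layout")
    build(root_name, root)
    return ET.tostring(root, encoding="unicode")

# Hypothetical sensor placement: hips as root, limbs hanging off it
hierarchy = {"hips": ["spine", "l_leg", "r_leg"], "spine": ["head"]}
print(layout_to_xml("hips", hierarchy))
```

The nesting mirrors the physical tree of sensing units, so the hub can walk the same structure when initializing the bus.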

*For the sake of simplicity I refer to an IMU device here, but to be precise I should say IMU (gyroscope and accelerometer) + magnetometer.

IMU Proto - EAGLE and Gerber files

Source and gerber files for the IMU-Proto board based on a LSM9DS0 sensor and a PCA9544A i2c multiplexer

Zip Archive - 148.65 kB - 09/29/2017 at 08:55


  • 12 × LSM9DS0 9-DOF IMU (accelerometer + gyroscope + magnetometer)
  • 12 × PCA9544A i2c multiplexer
  • 1 × Raspberry Pi 3

  • Chordata @ Maker Faire Rome 17

    Bruno Laurencich · 4 days ago · 0 comments

    It's been a while since I last posted an update here.

    Last month we were very busy making a better-looking prototype to show at this year's edition of the Maker Faire, here in Rome. This new prototype really shows the potential of the system, even though several adjustments and calibration procedures still need to be implemented.

    The public's response was really positive, and several people showed interest in access to a cheap and hackable mocap system. This response convinced us to put more effort into its development.

    We also received some exciting collaboration proposals, so perhaps we'll have some interesting announcements to make in the coming weeks.

    Stay tuned.. 

  • Hands on the second physical prototype

    Bruno Laurencich · 11/09/2017 at 18:22 · 0 comments

    I finally have all the parts to start building a second and more complete version of the system. The PCBs, stencil, and components arrived some time ago, but the solder paste kept me waiting for a long time; if you want to hear the whole story, please refer to The solder paste odyssey.

    If everything goes right I should be able to build at least twelve of the sensor nodes, and arrange my first whole-body capturing suit.

    The problem: apart from the general refactoring that I’m performing on the code, I will have to implement reading the LSM9DS1 instead of the LSM9DS0 from the previous board. Fortunately SparkFun offers an Arduino library for it, which can be easily adapted to this system.

    The real problem: home-soldering 12 of these units, with their tiny LGA packages, represents 12 one-shot opportunities... of getting it wrong.

    The article linked below also shows some of the cutting-edge equipment with which I’ll shoot these twelve shots.

  • Refactoring the PCB

    Bruno Laurencich · 10/19/2017 at 19:14 · 0 comments

    As I said, it was time to make a second version of the PCB in order to be able to build a complete body suit. I’ve called it “K-Ceptor” (Kinetic perCEPTOR).

    The changes are detailed on the previous log entry, and listed here:

    • Replaced the LSM9DS0 with the LSM9DS1
    • Added an address translator and removed the multiplexer
    • RJ-12 connector for both input and output (or optionally solder a regular 2.54mm header)
    • An EEPROM memory

    One thing that I hadn’t planned for (it came up while I was making this new PCB) was to place some of the components on a separate board: the “id_module”.

    This module is a tiny, one-layer PCB containing the EEPROM and some resistors that set the translation value of the LTC4316 (i2c address translator).

    This separation allows for greater flexibility and reuse of hardware resources. For example, suppose a user has a complete suit and, at some point, uses it for two different activities taking place in different environments: a capture for an animation performed outdoors, and rehearsals for a live performance in a theater. Since the electromagnetic interference at each location is completely different, ideally a calibration* would be performed on each sensor at least once per place. Having a duplicate set of cheap id_modules would allow the user to easily apply the corresponding calibration before each use.

    (*) again: I'm talking about the sensor calibration, not to be confused with the pose calibration which should be performed before every capture.
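
A sketch of why a per-node translation value lets fixed-address sensors coexist on one bus: the LTC4316 effectively XORs a configurable value, set by the id_module's resistors, into the 7-bit address seen upstream. The XOR values below are illustrative only, not the project's actual assignments:

```python
LSM9DS1_AG = 0x6B        # fixed accel/gyro i2c address (SDO pulled high)

def translated_upstream_addr(downstream_addr, xor_value):
    """Address the hub must use on the shared bus so the translator
    maps it back to the sensor's fixed downstream address."""
    return downstream_addr ^ xor_value

# Each id_module programs a different XOR value, so the same sensor
# model appears at a different address on the shared bus:
for node, xor_value in [("node0", 0x00), ("node1", 0x40), ("node2", 0x02)]:
    print(node, hex(translated_upstream_addr(LSM9DS1_AG, xor_value)))
```

Because XOR is its own inverse, the translator needs no state: the same operation maps hub-side addresses down to the sensor and sensor responses back up.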

    A render of the id_module stacked in position, on top of the K-Ceptor.

  • Current situation and ongoing work

    Bruno Laurencich · 10/03/2017 at 09:04 · 0 comments

    Here's a video showing the current state of the capture. This 3-sensor prototype is what I've been working with over the last months; even if it's not as spectacular as a whole-body capture suit, it allowed me to easily test the features as they were implemented.

    The focus of this part of the development was on:

    • General stability of the program.
    • Capability of reading sensors arranged in any arbitrary disposition (or hierarchy).
    • Obtaining readings from each of the sensors at a regular interval, no matter where in the hierarchy it sits.
    • Capability of reading a single sensor in the hierarchy, processing its raw data, and generating calibration information, then dumping this information to a file.
    • Implementing a correction step for each sensor, using data obtained in a previously performed calibration, before the sensor fusion.
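
The correction step in the last two points can be sketched like this. The offset-plus-scale model and the toy numbers are my assumption; the real calibration data comes from the dump-to-file procedure described above:

```python
def correct(raw, bias, scale):
    """Apply calibration to one reading: subtract the per-axis bias,
    then rescale each axis."""
    return tuple((r - b) * s for r, b, s in zip(raw, bias, scale))

# Hypothetical magnetometer calibration for one sensor:
bias  = (12.0, -3.5, 40.0)    # offset per axis (e.g. hard-iron error)
scale = (1.0, 0.5, 2.0)       # per-axis gain correction (toy values)

print(correct((112.0, 46.5, -10.0), bias, scale))  # -> (100.0, 25.0, -100.0)
```

Only after this per-axis correction does the reading enter the sensor fusion, which is why the calibration data has to travel with the sensor (or its hierarchy position) to stay valid.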

    The IMU-Proto sensor unit, the physical base of the prototype, is a simple PCB featuring an IMU sensor and an i2c multiplexer. The idea was that these units should be easily interconnected, allowing the creation of tree-shaped hierarchies. So it had a 4-pin input and output carrying power and the i2c bus. It also exposed pins for the secondary gates of the multiplexer.

    This arrangement was great for testing, but now I'm working on a more user-friendly version of the sensing unit, which will have the following features:

    • No multiplexer; an address translator instead (the multiplexer moves to a separate unit).

    The multiplexer works fine, but it wasn't really used on all nodes; on the other hand, it added unnecessary overhead to the bus, since it had to be switched for each sensor before reading. Having it on a separate unit will instead allow more flexible creation of arbitrary trees.

    • An easy pluggable connector.

    Of course, having to solder 4 wires in order to create the suit wasn't flexible at all. This connector should allow the performer to move freely while keeping the connection stable; it should also be cheap, common, and not excessively bulky. For the moment I'll go with the RJ-12 connector (the one regular telephones use).

    • An on-board memory.

    The main function of this memory will be storing the sensor calibration data. This calibration only needs to be performed once in a while*, and until now the generated data was stored in a file on the Hub, which prevented a particular sensor from changing Hub, or position in the hierarchy.

    (*) I'm talking about the sensor calibration, not to be confused with the pose calibration which should be performed before every capture.
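
A sketch of how such a calibration record could be laid out for a small EEPROM. The magic byte, version field, and sizes are hypothetical, since the project hasn't published a format:

```python
import struct

# 1-byte magic + 1-byte version + 3 floats bias + 3 floats scale,
# little-endian, no padding: 2 + 12 + 12 = 26 bytes
RECORD_FMT = "<BB3f3f"

def pack_calibration(bias, scale, version=1):
    """Serialize one sensor's calibration into EEPROM-ready bytes."""
    return struct.pack(RECORD_FMT, 0xC0, version, *bias, *scale)

def unpack_calibration(blob):
    """Parse the record back; the magic byte guards against reading
    an unprogrammed or corrupted EEPROM."""
    magic, version, *vals = struct.unpack(RECORD_FMT, blob)
    assert magic == 0xC0, "not a calibration record"
    return tuple(vals[:3]), tuple(vals[3:])

blob = pack_calibration((12.0, -3.5, 40.0), (1.0, 0.5, 2.0))
print(len(blob))                  # 26
print(unpack_calibration(blob))   # ((12.0, -3.5, 40.0), (1.0, 0.5, 2.0))
```

At 26 bytes per record, even the smallest common i2c EEPROMs leave plenty of room for several environment-specific calibrations per id_module.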


  • 0) Wait...

    The project is under heavy development, so nothing apart from some EAGLE files and sparse scripts has been released.

    My idea is to create an easily replicable, yet expandable system. Making this real requires a lot of work, so as soon as I manage to put out something stable, well arranged, and documented, I'll publish it here.

    If you have any questions, please write them below, or contact me in private.

