Turning an RPI into a MoCap Camera.
Functional *and* Delicious...

After years of thinking 'I could do that better' about commercial manufacturers' MoCap systems and software offerings, I'm going to have a crack at it myself. I mean, how hard could it be? It's a huge problem space, but if you chunk each section down, eventually it's going to be something a person could tackle.

You're welcome to join in the fun...

To build a mocap system, you first need some mocap cameras. A commercial camera will have a strobe light, a high-throughput IR sensor, and some DSP behind it to read the imaging and find the 'blobs' - which are the strobe's light reflected from markers the subject is wearing.

I think I can make this out of a Raspberry Pi 3, plus the v2 NoIR camera, plus an 'off the shelf' IR strobe unit.

I will need to do quite a lot to make it work though.

  1. Develop 'Command & Control' protocol and state machine to manage camera
  2. Develop 'packet scheduling' technique to return mocap data (Centroids) and Images in a sensible way.
  3. Develop Blob detection Algorithm
  4. Develop Test application to control camera(s), and view Imaging / Blobs
  5. Build Control gear to 'flash' the strobe
  6. Build hard-wired sync system so all cameras strobe at same moment
  7. Develop interface between Pi Camera & my Blob detection code
  8. Develop method to read from camera (still or video frame) in step with sync
  9. Build box for camera/pi
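Item 2 is concrete enough to sketch already. A minimal centroid wire format might look like this - the field names and layout below are my own guesses, not a settled spec - using Python's struct module:

```python
import struct

# Hypothetical centroid packet: a small header followed by one record
# per blob. Layout and field names are placeholders, not a fixed spec.
HEADER = struct.Struct( "<IHH" )  # frame_no (u32), cam_id (u16), num_blobs (u16)
BLOB   = struct.Struct( "<fff" )  # x, y, radius as float32

def pack_centroids( frame_no, cam_id, blobs ):
    data = HEADER.pack( frame_no, cam_id, len( blobs ) )
    for x, y, r in blobs:
        data += BLOB.pack( x, y, r )
    return data

def unpack_centroids( data ):
    frame_no, cam_id, n = HEADER.unpack_from( data, 0 )
    blobs = [ BLOB.unpack_from( data, HEADER.size + i * BLOB.size )
              for i in range( n ) ]
    return frame_no, cam_id, blobs
```

At 12 bytes per blob plus an 8-byte header, a 100-marker frame is about 1.2 kB, so centroids are cheap to schedule between image packets.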

  • Back from the dead...

    Bit Meddler, 08/08/2020 at 11:10

    OK, this has been asleep for too long. Need to get this running again so I can recruit some better devs.

    So I reformatted my Pi, and ran the following:

    sudo apt-get install htop tightvncserver python3-dev libboost1.62-all-dev
    pip3 install numpy pyzmq
    sudo mkdir /code
    sudo chmod 0777 /code
    cd /code
    git clone
    git clone

    Good, now what?

    I'll see if the camSim works :)

    C Implementation of the latest Connected Components algo from "rpiCap" would be good.

    Need to get a numpy ndarray into the C lib, and a Vector of ROI data out, I guess.
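    One route is ctypes with a C struct per ROI; numpy can hand its buffer straight over. This is only a sketch of the boundary - the library name and the detect_regions() signature below are invented, not an existing interface:

```python
import ctypes

# One possible shape for the C boundary. The ROI struct mirrors the
# bounding-box fields ( x, y = upper left; n, m = lower right ) used
# in the Python Region class.
class ROI( ctypes.Structure ):
    _fields_ = [ ( "x", ctypes.c_uint16 ), ( "y", ctypes.c_uint16 ),
                 ( "n", ctypes.c_uint16 ), ( "m", ctypes.c_uint16 ) ]

# Hypothetical usage -- "libccl.so" and detect_regions() don't exist yet:
#
#   import numpy as np
#   lib = ctypes.CDLL( "./libccl.so" )
#   lib.detect_regions.argtypes = [
#       np.ctypeslib.ndpointer( dtype=np.uint8, ndim=2, flags="C_CONTIGUOUS" ),
#       ctypes.c_int, ctypes.c_int,          # rows, cols
#       ctypes.c_ubyte,                      # threshold
#       ctypes.POINTER( ROI ), ctypes.c_int  # out buffer, capacity
#   ]
#   lib.detect_regions.restype = ctypes.c_int  # number of ROIs found
#
#   img = np.zeros( (720, 1280), dtype=np.uint8 )
#   out = ( ROI * 64 )()
#   found = lib.detect_regions( img, *img.shape, 128, out, len( out ) )
```

    Preallocating the out buffer keeps the hot path free of mallocs, which matters on a small ARM core.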

  • boost::python and Connected Components

    Bit Meddler, 07/07/2016 at 11:16

    Boost Python update

    This looks interesting, I'll have to take a deeper look at S.P.E.A.D. - a fast protocol for throwing NumPy arrays about. Berkeley, so you know they're serious. Could be an interesting reference for what I'm doing. And certainly, I want to throw NumPy arrays about!

    So I've compiled the 'Hello World' example natively on the Pi...

    #include <boost/python.hpp>

    namespace bp = boost::python;

    int test() {
        return 42;
    }

    BOOST_PYTHON_MODULE(smoke_test) {
        bp::def( "test", test );
    }

    real    0m27.47s

    Ouch, this is a journey back in time. But I can build on it.

    Connected Components Update

    Python implementation is done and predictably slow, ~0.55 sec for a basic image with 75 blobs on an i7 ~2.6GHz. I'm not sure what the slow step could be, as it's not doing much. Visiting every pixel of an image *is* going to be slow though.
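    Rather than guess at the slow step, a quick harness round cProfile (my addition, not part of the capture code) will name it:

```python
import cProfile
import io
import pstats

def profile( fn, *args ):
    # Run fn under cProfile and print the five heaviest calls by
    # cumulative time, returning fn's result unchanged.
    pr = cProfile.Profile()
    pr.enable()
    result = fn( *args )
    pr.disable()
    buf = io.StringIO()
    pstats.Stats( pr, stream=buf ).sort_stats( "cumulative" ).print_stats( 5 )
    print( buf.getvalue() )
    return result

# e.g. profile( connectedComponents, img, 128 )
```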

    The core implementation is as follows:

    class Region( object ):
        def __init__( self ):
            self.reset() # Reset does the job
        def __str__( self ):
            return "x{} n{} r{} a{} ".format( self.last_x, self.last_n, self.last_row, self.area )
        def __repr__( self ):
            return self.__str__()
        def reset( self ):
            # zero out settings
            # BB
            self.bb_x = self.bb_y = 1e6
            self.bb_n = self.bb_m = 0
            # Line scanning / merging
            self.last_row = self.first_row = 0
            self.last_x = self.last_n = 0
            # Statistics about region
            self.area = 0
        def updateStats( self, area ):
            self.area += area
        def combineFrom( self, other ):
            self.updateBB( other.bb_x, other.bb_y, other.bb_n, other.bb_m )
            self.updateStats( other.area )
            # set the linedata explicitly
        def setLineData( self, x, n, r ):
            self.last_x   = x
            self.last_n   = n
            self.last_row = r
        def updateBB( self, x, y, n, m ):
            self.bb_x = min( self.bb_x, x )
            self.bb_y = min( self.bb_y, y )
            self.bb_n = max( self.bb_n, n )
            self.bb_m = max( self.bb_m, m )
    def tidyList( regions, range_idx, row ):
        end = len( regions ) - 1
        scan_idx = range_idx
        while( scan_idx < end ):
            if (regions[range_idx].last_row < row ):
                range_idx += 1
            scan_idx += 1
        region_idx = end
        while( region_idx > range_idx ):
            if( regions[region_idx].last_row < row ):
                elem = regions[region_idx]
                regions.remove( elem )
                regions.insert( range_idx, elem )
                range_idx += 1
            region_idx -= 1
        # end
        return range_idx
    def connectedComponents( img, threshold ):
        rows, cols = len(img), len(img[0]) # assumes row-major, 1 channel (8-Bit Grey)
        # vars used in algo
        current_region = 0
        region_scan_start = 0
        in_line = False
        # TODO Replace with 'temp' region
        line_start = line_end = area = 0
        cidx = -1
        region_list = []
        for ridx in range( rows ):
            # do row
            current_region = region_scan_start
            cidx = 0
            in_line = False
            line_start = area = 0
            line_end = -1
            while( cidx < cols ):
                px = img[ridx][cidx]
                if( px > threshold ):
                    in_line = True
                    line_start = cidx
                    area += 1
                else:
                    cidx += 1
                while( in_line ):
                    cidx += 1
                    if( cidx < cols ):
                        if( img[ridx][cidx] < threshold ):
                            in_line = False
                            line_end = cidx
                        else:
                            area += 1
                    else:
                        # run out of data
                        in_line = False
                        line_end = cidx
                # if line found
                if( line_end > line_start ):
                    # scan regions for merge or insert
                    regionscanning = True
                    merge_mode = 0
                    while( regionscanning ):
                        if( current_region >= len(region_list) ):
                            # end of list.
                            regionscanning = False
                            if( merge_mode == 0 ):
                                # the line hasn't merged, so append
                                newReg = Region()
                                newReg.setLineData( line_start, line_end, ridx )
                                newReg.updateBB( line_start, ridx, line_end, ridx )
                                newReg.updateStats( area )
                                # insert into list
                                region_list.append( newReg )
                                current_region += 1
                        elif( line_start > region_list[current_region].last_n ):
                            # skip towards a region
                            current_region += 1
                        elif( line_end < region_list[current_region].last_x ):
                            regionscanning = False
                            if( merge_mode == 0 ):
                                # insert new region at current_region, before next
                                newReg = Region()
                                newReg.updateBB( line_start, ridx, line_end, ridx )
                                newReg.updateStats( area )
                                newReg.setLineData( line_start, line_end, ridx )
                                # insert into list
    Read more »

  • Python / C development on RPI

    Bit Meddler, 07/04/2016 at 11:59

    So, a fresh install of Raspbian later, and I need some bits and bobs to get working

    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get purge wolfram-engine
    sudo apt-get install htop
    sudo rpi-update
    sudo apt-get install libboost1.55-all

    Sorry Wolfram, maybe next time. I imagine I'm going to have to strip away the GUI and a number of other services usually considered 'normal' to try and keep the CPUs free of extra load.

    Anyway, my point is - I need to write my 'connected components' in C, probably parallel, probably with some hand-tuned inline ASM... I'm asking a lot of a small processor. But I'm doing most of my experimentation in python. So I think it would be good to learn how to wrap a C method or library and expose it to Python.

    Google suggests using Boost is by far the easiest route, so I've installed the latest version for ARM (trusting apt-get to have done the right thing :) ), and now need to find some resources online to figure out what on earth I'm doing...


  • Connected Components - Python

    Bit Meddler, 07/04/2016 at 11:24

    Motion Capture is generally just a series of compression problems. By using the strobes and the reflective markers, we are compressing the 'description' of the subject's body pose from all the pixels their light may fall onto, down to just the pixels seeing the markers, and then down to just those markers' positions (from this angle).

    The image that hits the sensor of a mocap camera will be 95+% black, or below a certain threshold; the only bits we're interested in are the regions of light on the image. Once we can home in on the bright regions (the blobs), we're then only interested in their center, which can be computed from the bounding-box corners as ( (x+n)/2, (y+m)/2 ). (NOTE: x,y is the upper left of a bounding box, n,m is the bottom right. In screen space (0,0) is the upper left of the screen and (1920,1280) would be the bottom right.)
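    The centre computation is worth nailing down as a helper early (the helper is mine, not project code) - it's just the midpoint of the two corners:

```python
def centroid( x, y, n, m ):
    # (x, y) = upper-left corner, (n, m) = lower-right corner, in
    # screen space with (0, 0) top-left. Centre is the corner midpoint.
    return ( ( x + n ) / 2.0, ( y + m ) / 2.0 )

# A blob spanning (10, 20) to (30, 40) has its centre at (20.0, 30.0)
```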

    Anyway, how do we ID the regions of interest in the image? 'Connected components' (see wiki) offers a method to scan an image and tag connected regions. We're not interested in tagging the regions, just maintaining a bounding box of them, and maybe collecting other data. So the Algo presents itself:

    regions = []
    first_possible_region = 0
    current_region = 0
    for each row:
      current_region = first_possible_region
      while pixels left in row:
        traverse dark pixels till a line of bright pixels found
        if line found:
          merges = 0
          while current_region < len(regions):
            if line touches current_region:
              if merges == 0:
                merge line into region
                merges = 1
              else:
                // line bridges two regions
                merge current & last touched regions
                drop current region
                current_region -= 1
            else if line ends before current_region:
              break
            current_region += 1
          if merges == 0:
            make new region out of this line
            insert it into regions before current_region
      // after scanning all pixels in the row, tidy up the region list
      move any regions not touching current row down the list
      preserve order of list (regions will be sorted from x to n)
      move first_possible_region index so impossible regions are not tested
    // whole image scanned now
    for every region:
      compute center
    return centers
    Which I'll try to bang out in python and test with my bench-marking images / ground-truth data
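    The 'traverse dark pixels till a line of bright pixels found' step is the easy bit to get right first. A standalone sketch of it (the helper name is mine):

```python
def find_lines( row, threshold ):
    # Return (start, end) pairs of bright runs in one image row,
    # with end exclusive, matching the pseudocode's line scan.
    lines = []
    start = None
    for i, px in enumerate( row ):
        if px > threshold and start is None:
            start = i                      # line begins
        elif px <= threshold and start is not None:
            lines.append( ( start, i ) )   # line ends
            start = None
    if start is not None:                  # line ran to the image edge
        lines.append( ( start, len( row ) ) )
    return lines
```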

  • Step Zero... Simulating MoCap data

    Bit Meddler, 07/04/2016 at 10:44

    Ok, so before I can write the code to do the work, I need a couple of tests to validate / benchmark against.

    Three problems:

    1. Detecting Blobs, and the resulting X,Y position + other data being true or close enough
    2. Bench-marking efficiency - using the same data and timing the processing steps will highlight areas to improve, flag diminishing returns on over optimized parts
    3. Experimenting with 'tracking'. In a sequence of blobs, being able to assign an arbitrary ID to each blob and maintain these IDs sensibly as they move
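    For problem 3, the simplest baseline to measure against is greedy nearest-neighbour matching between frames. This is a naive sketch of my own - real trackers predict motion and handle occlusion - but it gives the ground-truth data something to score:

```python
import math

def assign_ids( prev, curr, max_dist=50.0 ):
    # Greedy nearest-neighbour ID assignment between frames.
    # prev: { uid: (x, y) } from the last frame; curr: list of (x, y).
    # Returns { uid: (x, y) } for curr, minting new ids for new blobs.
    assigned = {}
    unmatched = list( enumerate( curr ) )
    next_id = max( prev.keys(), default=-1 ) + 1
    for uid, ( px, py ) in prev.items():
        best, best_d = None, max_dist
        for i, ( cx, cy ) in unmatched:
            d = math.hypot( cx - px, cy - py )
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            assigned[ uid ] = curr[ best ]
            unmatched = [ ( i, b ) for i, b in unmatched if i != best ]
    for i, b in unmatched:       # anything left is a newly seen blob
        assigned[ next_id ] = b
        next_id += 1
    return assigned
```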

    In order to generate this bench-marking data, I will simulate some 'dots' moving around, render them to an image, drop out the image, and save the dots 'actual' X,Y location and UID.

    At this stage, I am pretty much defining the file format the whole MoCap system will record to.
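    A sketch of that simulator - the motion paths, dot sizes and the CSV ground-truth columns here are arbitrary choices of mine, not the final file format:

```python
import csv
import math

def simulate( n_dots=3, n_frames=5, w=640, h=480, r=3 ):
    # Yield (image, truth) per frame: image is a row-major list of lists
    # of 8-bit grey values, truth is a list of (frame, uid, x, y) rows.
    # Dots orbit the image centre; no bounds check, so keep orbits small.
    for f in range( n_frames ):
        img = [ [ 0 ] * w for _ in range( h ) ]
        truth = []
        for uid in range( n_dots ):
            a = f * 0.1 + uid * 2.0 * math.pi / n_dots
            x = w / 2.0 + 100.0 * math.cos( a )
            y = h / 2.0 + 100.0 * math.sin( a )
            truth.append( ( f, uid, x, y ) )
            for dy in range( -r, r + 1 ):      # rasterise a filled disc
                for dx in range( -r, r + 1 ):
                    if dx * dx + dy * dy <= r * r:
                        img[ int( y ) + dy ][ int( x ) + dx ] = 255
        yield img, truth

def save_truth( path, rows ):
    # Ground-truth CSV: one row per dot per frame.
    with open( path, "w", newline="" ) as fh:
        writer = csv.writer( fh )
        writer.writerow( ( "frame", "uid", "x", "y" ) )
        writer.writerows( rows )
```

    Feeding the rendered frames back through the blob detector and diffing against the CSV gives both the accuracy check (problem 1) and a stable timing benchmark (problem 2).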

