CIRTS - Configurable Infra-Red Tomography systems

Using cheap IR LEDs and phototransistors to understand tomography. Using machine learning to make inferences based on sensor data. Etc :)

This project involves playing around with different ways of doing tomography with IR light. I use simulation to figure out what might work, test ideas on a rotating platform and then build rings of sensors and LEDs that I can use to image things directly or to gather data that is then fed into machine learning models to do things like identify objects or estimate their position.

It will take a little while to fill in the backlog of stuff that I've already done, but I'm hoping to upload the material in a way that makes sense. This work was part of my final-year engineering project at UCT; however, due to the relatively small scope of that project, I didn't write up some parts.

Report: https://github.com/johnowhitaker/CIRTS
Video overview: https://youtu.be/fFSetleTe_o

This is an overview of the project - each component will eventually get its own project log with more details. While I work on this, the best place to get more information is my project report (available in the linked GitHub repository).

Introduction

This project arose because I wanted to do some experiments with Computed Tomography, but I didn't know enough to see what would work and what wouldn't. How many sensors does one need to achieve a particular goal for resolution or performance? What geometries work best? And what if we can't keep everything nice and regular? 

I built some tools that let me simulate these kinds of arrangements, and did some early experiments on image reconstruction and on the use of machine learning (specifically neural networks) to make sense of readings. Even with a weird arrangement like the one on the right, I could make some sense of the data. I'm not going to write up the simulation stuff here, but there is code and info in the GitHub repository. It will get tidied up and updated once the marking process ends.
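
The real simulation code is in the repo, but the core idea is simple enough to sketch: treat each LED/PT pair as a straight ray and approximate the line integral of a 2D "phantom" along it. Here's a minimal numpy/scipy version - the phantom, ring geometry and sample counts are just placeholders, not the arrangement I actually used:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def line_reading(phantom, emitter, detector, n_samples=200):
    """Approximate the attenuation line integral along the straight ray
    between an emitter and a detector by sampling the phantom."""
    (r0, c0), (r1, c1) = emitter, detector
    rows = np.linspace(r0, r1, n_samples)
    cols = np.linspace(c0, c1, n_samples)
    samples = map_coordinates(phantom, [rows, cols], order=1)  # bilinear sampling
    step = np.hypot(r1 - r0, c1 - c0) / n_samples
    return samples.sum() * step

# Toy example: 14 LEDs and 14 PTs spaced around a ring, square 'object' inside
phantom = np.zeros((128, 128))
phantom[50:70, 60:80] = 1.0
angles = np.linspace(0, 2 * np.pi, 14, endpoint=False)
centre, radius = 64, 60
leds = [(centre + radius * np.sin(a), centre + radius * np.cos(a)) for a in angles]
pts = [(centre - radius * np.sin(a), centre - radius * np.cos(a)) for a in angles]

readings = np.array([[line_reading(phantom, led, pt) for pt in pts] for led in leds])
print(readings.shape)  # (14, 14): one simulated reading per LED/PT pair
```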

I tested out these arrangements in the real world by building some fixed arrangements, and by using a 3D printed scanner to position an LED and a phototransistor (PT from now on) in different locations to slowly simulate having many detectors and emitters. 

The Rotating Scanner

This was fun to build - it was pretty much my first project using my 3D printer, and man was it easy to get something working fast! There are two components to the top surface, each of which is controlled by a stepper motor. The drive circuitry would be familiar to anyone who has played with 3D printers or other CNC machines. A Teensy microcontroller board drives the steppers in response to commands from the computer and reads the light intensity at the PT with the built-in ADC.
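
For a rough idea of how the host side fits together, here's a sketch of a scan loop in Python. The single-line serial command protocol shown here is hypothetical (the real firmware and host scripts are in the repo and may differ); the point is just the structure - step the outer ring, read the PT, rotate the platform, repeat:

```python
import numpy as np
import serial  # pyserial

# Hypothetical command protocol for the Teensy; adjust to match the firmware.
teensy = serial.Serial('/dev/ttyACM0', 115200, timeout=1)

def step(motor, steps):
    teensy.write(f's {motor} {steps}\n'.encode())  # move a stepper

def read_pt():
    teensy.write(b'r\n')                           # request one ADC reading
    return int(teensy.readline())

N_DETECTOR_POSITIONS = 64  # steps of the outer ring per pass
N_ROTATIONS = 64           # rotations of the object platform

sinogram = np.zeros((N_ROTATIONS, N_DETECTOR_POSITIONS))
for rot in range(N_ROTATIONS):
    for det in range(N_DETECTOR_POSITIONS):
        step('outer', 1)                  # advance the phototransistor
        sinogram[rot, det] = read_pt()
    step('platform', 1)                   # rotate the object for the next pass

np.save('scan.npy', sinogram)
```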

The above image shows a scan in progress. The outer ring moves the phototransistor around, taking readings. Then the object being scanned is rotated and the process is repeated. Here are some example scans:

These scans were pretty rough, but with tweaking it could image 0.1in pin headers:

A view of the underside/internals

I'll post more on this if I get it out and do more scans, but that covers the basics. I can take a high-res scan and store it. Then later, if I'm looking at building a ring of sensors with, say, 16 elements, I can see what the output would be by taking a subset of the higher-res scan and working with that. So it lets me try out new arrangements quite quickly, in the real world. A useful tool!
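
As a sketch of what that subsetting looks like in practice (the file name and scan dimensions below are made up), it's really just index slicing on the stored scan:

```python
import numpy as np

# Pull out what a coarser ring 'would have seen' from a dense stored scan.
# Mapping indices onto a real ring geometry takes more care than this.
dense = np.load('scan.npy')                  # e.g. 64 rotations x 64 detector positions
n_elements = 16
angle_idx = np.linspace(0, dense.shape[0], n_elements, endpoint=False).astype(int)
det_idx = np.linspace(0, dense.shape[1], n_elements, endpoint=False).astype(int)
coarse = dense[np.ix_(angle_idx, det_idx)]   # a 16 x 16 'virtual ring' dataset
```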

Fixed arrays of sensors

This was the end goal - building some sensor rings to take readings instantly rather than waiting for the rotating platform to build up a scan. 

The first few I built had 8 LEDs and 8 PTs arranged in a ring:

Image reconstruction on these was poor, as expected. But with some machine learning magic I could estimate the position of an object within the ring to within ~3mm, and differentiate between different objects (e.g. a pen and a finger) with high accuracy. Pretty fun! I set up one ring as a game controller - a finger placed in the ring could be moved to steer a ship and dodge enemies.
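
The object-identification side is standard scikit-learn territory. Something along these lines works as a starting point - the saved dataset files are hypothetical stand-ins for however the readings and labels were logged, and a Random Forest classifier is just one reasonable choice:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical saved dataset: one row of raw PT readings per sample
# (8 LEDs x 8 PTs = 64 values) and an object label for each row.
X = np.load('r8_readings.npy')
y = np.load('r8_labels.npy')   # e.g. 0 = pen, 1 = finger

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print('held-out accuracy:', clf.score(X_test, y_test))
```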

Raw image reconstruction of a finger moving:

I'll have to do a separate post about the ML work, or add info here later... 

More recently, I've been building better rings and trying different arrangements - see the project logs below for the update on the 14-element ring.

  • Quantifying performance for the bigger 14-LED ring

    johnowhitaker • 11/30/2018 at 20:11

    Seeing images reconstructed from the new, larger ring was pretty cool. But did it offer improvements besides making pretty pictures? In this test, I try to quantify that.

    First, some background. With the first ring of sensors I built (8 LEDs, 8 PTs) I used my 3D printer to position an object in 500 random, known locations and take readings for each. I split this data into two sets - one for testing (125 samples) and one for training (375 samples). I train a regression model to predict the location based on the readings. For the 8-sensor ring (r8 from now on), the best model I can make has a mean absolute error (MAE) of 2.3mm, with a root mean squared error (RMSE) of 2.8mm. Shifting to the offset r14 arrangement described in the previous project log, the MAE drops to 1.2mm and the RMSE to 1.6mm. A marked improvement. To put it in plain language, you could place the pen within the ring and have the model tell its location to within 2mm most of the time. There are a couple of outliers, but for the most part the model is surprisingly accurate.

    I wish I had time for more details, but for now here's the data collection process. Move to location, record data, repeat:

    Salient points:

    • I used the same 500 positions for the two arrangements
    • Moving from 8 to 14 sensors reduced the RMSE of predicted locations by ~40%, not bad!
    • For the interested, the machine learning model is a Random Forest regressor from scikit-learn (see the sketch after this list)
    • Getting the training data is time consuming, even when assisted by a motion platform like a 3D printer. More on this point to follow.
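
    For reference, here's a minimal sketch of the regression setup in scikit-learn. The file names are placeholders for however the readings and printer positions were logged; the real data collection and model tuning are described in the report:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    # Hypothetical files: one row of readings per sample, plus the (x, y)
    # position in mm reported by the 3D printer for that sample.
    X = np.load('r14_readings.npy')
    y = np.load('r14_positions.npy')

    # Same split as above: 375 samples for training, 125 for testing.
    X_train, X_test = X[:375], X[375:]
    y_train, y_test = y[:375], y[375:]

    model = RandomForestRegressor(n_estimators=200).fit(X_train, y_train)
    pred = model.predict(X_test)
    mae = mean_absolute_error(y_test, pred)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f'MAE: {mae:.1f} mm, RMSE: {rmse:.1f} mm')
    ```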

    Doing image reconstruction on one set of readings:

    Plotting predicted X location vs actual location for training (blue) and test (orange) data. Not bad!

    With this, I can now take a set of readings and predict the location of an object within the ring. Useful? Maybe. Fun? Yes.

  • Break time is over - a bigger, better sensor ring

    johnowhitaker • 11/16/2018 at 13:11

    I took a bit of a break once the project report was submitted, but I was itching to carry on testing some ideas I'd had which were a little beyond the scope of a one-semester project. This weekend, I finally got back in the workshop and built a better sensor ring. I had previously tried to build a ring with 16 sensors, but had used the wrong phototransistors and made some other mistakes.

    I didn’t have quite enough of the more sensitive phototransistors left to make another ring of 16, so I decided to try something different: A ring with 14 LEDs and 14 PTs in a non-symmetric arrangement. The arrangement is shown on the left - note the difference between this and the one on the right (regular spacing). That hole in the centre has been bugging me.

    The ring is built better than the previous ones. The LEDs are connected to 5V (not the 3.3V of the GPIOs) and pulled low when activated. This means the signal received by PTs on the other side of the circle now results in ADC readings of ~200-900 (out of 1024), giving a much better range than before. The beam spread is still an issue - only about 5-7 PTs opposite the LED are illuminated. But this is still enough to do some image reconstruction!
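
    Before reconstructing, the raw ADC values need to be turned into something like attenuation. A hypothetical preprocessing step, assuming a calibration frame captured with the ring empty (the report may handle this differently):

    ```python
    import numpy as np

    # Convert raw ADC readings (~200-900 out of 1024) into attenuation values,
    # Beer-Lambert style, using an empty-ring calibration frame as I0.
    raw = np.load('frame.npy')          # shape (14, 14): one reading per LED/PT pair
    i0 = np.load('empty_frame.npy')     # same shape, nothing in the ring
    attenuation = -np.log(np.clip(raw, 1, None) / np.clip(i0, 1, None))
    ```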

    I am busy with the maths (see the next section), but by interpolating to approximate a fan-flat geometry I can already get some images from this thing. Figure 3 shows one example - the two separate blobs are two of my fingers, inserted into the ring. Reconstructions get worse away from the centre, as less light is available (beam angle) and so the signal is smaller. But there is a nice central area where it works well.

    One cool aspect: this ring can capture scans at >100Hz, although my reconstruction code can’t keep up with that. I can store data and reconstruct later to get 100fps ‘video’ of objects moving around. For example, I stuck some putty onto a bolt and chucked it into a drill. Here it is as a gif:

    I've only just started playing with this, but should have a bit more time to try things out in the coming week (here's hoping).

    The maths

    At the moment, I use interpolation to get a new set of readings that match what would be seen by a fan beam geometry with a flat line of sensors. This means I can use existing CT reconstruction algorithms like Filtered Backprojection (FBP) but by doing this I sacrifice some detail. I'm hoping to get some time to work out the maths properly and do a better job reconstructing the image. This will also let me make some fairer comparisons between the two geometries I'm testing (offset and geometric). It will take time, but I'll get there!
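
    To make the interpolate-then-FBP shortcut concrete, here's a rough sketch using numpy for the resampling and scikit-image's parallel-beam FBP (iradon). It glosses over the true fan geometry - each LED's fan is just resampled onto a flat line of detector bins and the result treated as a parallel-beam sinogram - so take it as an illustration of the shortcut rather than the proper maths:

    ```python
    import numpy as np
    from skimage.transform import iradon

    # Assumes the readings have already been converted to attenuation
    # (see the earlier snippet). File name is hypothetical.
    att = np.load('attenuation_frames.npy')[0]    # shape (14 LEDs, 14 PTs)

    n_det = 64
    sino = np.zeros((n_det, att.shape[0]))        # iradon wants (detectors, angles)
    u_old = np.linspace(-1.0, 1.0, att.shape[1])  # PT positions across the fan
    u_new = np.linspace(-1.0, 1.0, n_det)
    for i, fan in enumerate(att):
        sino[:, i] = np.interp(u_new, u_old, fan)

    angles = np.linspace(0.0, 360.0, att.shape[0], endpoint=False)
    recon = iradon(sino, theta=angles, circle=True)  # ramp-filtered backprojection
    ```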

    For now, enjoy das blinkenlights 



Discussions

Asier Marzo wrote 11/22/2018 at 23:16 point

Neat project. We used the fact that IR passes through human flesh to detect gestures: https://www.youtube.com/watch?v=hCahI7ZRbOA

I will certainly follow your progress.


johnowhitaker wrote 11/23/2018 at 06:55 point

Very cool! I had seen something similar done with Electrical Impedance Tomography around the wrist. I think IR is better - no need for good electrical contact. 93% accuracy is great - how many training samples did you guys need to reach that point? And is there a training database you would be willing to share? I found some good ways of improving model accuracy with this kind of data and would love to mess around with gesture sensing. 

