
More Frames?

A project log for Bullet Movies

Using red, green, and blue LEDs to capture short movies of very fast objects

Ted Yapo • 11/18/2016 at 15:05 • 7 Comments

1 million frames/second is cool (I still have a way to go before I get there), but 3 total frames is limiting. How can we take more frames?

More LEDs?

The first thought is more LEDs, but without more color sensors, there's a problem. To reconstruct the individual frames, we use the inverse of the matrix M:

$$\mathbf{f} = M^{-1}\mathbf{c}$$

where $\mathbf{c}$ is the (R, G, B) vector measured at each pixel and $\mathbf{f}$ holds the recovered frame intensities. But if M isn't square, there isn't an inverse. With more LEDs than color sensors in the camera, we end up with an underdetermined system and no unique solution. To get more frames with more LEDs, we need more than just the red, green, and blue sensors in the camera.
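To make that concrete, here's the per-pixel reconstruction in numpy - a minimal sketch, with invented matrix values standing in for a real calibrated M:

```python
import numpy as np

# M[i][j]: response of camera channel i (R, G, B) to the LED fired for
# frame j. These values are invented for illustration; the real M comes
# from calibration.
M = np.array([[0.90, 0.05, 0.02],
              [0.08, 0.85, 0.10],
              [0.02, 0.10, 0.88]])

rgb = np.array([0.40, 0.55, 0.30])  # measured (R, G, B) at one pixel
frames = np.linalg.inv(M) @ rgb     # recovered per-frame intensities
print(frames)

# With a fourth LED, M would be 3x4: np.linalg.inv() fails outright
# (non-square matrices have no inverse), and three equations can't
# pin down four unknowns.
```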

CYGM Sensors

Before the modern DSLR era, camera manufacturers flirted with 4-color sensors in some point-and-shoot consumer cameras. About a dozen different models were produced in the late 1990s and early 2000s. The Wikipedia article discusses these cameras and their cyan, yellow, green, and magenta sensitive pixels. If I could find one of these cameras on eBay, plus appropriate LEDs, I might be able to coax four frames out of it. It seems like a lot of work for one more frame (now 33% more frames!), but it's a possibility.

Multiple Cameras

Synchronizing a mechanical shutter with events on the microsecond scale is not likely to be possible, but it would be possible to capture the same three instants from multiple viewpoints using multiple cameras. That could be interesting - multiple cameras at different viewpoints is how the original bullet-time special effects in The Matrix were done - but it doesn't give more frames.

It might be possible to simulate having more color sensors by using narrow-band color filters on a number of cameras. Since the responses of the red, green, and blue pixels on the camera sensor are pretty wide, several narrow-band filters could be used (on different cameras) to capture light from several different colored LEDs. At most, you might manage {royal blue, blue, cyan, green, yellow, red, deep red} for seven frames, probably requiring seven cameras. Again, this sounds like a lot of work (and expense) for a mediocre gain.
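For what it's worth, here's a numerical sketch of the seven-filter idea. The mixing matrix is made up - a diagonally dominant stand-in for "each narrow-band filter mostly sees its own LED":

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented 7x7 mixing matrix: each filtered camera responds mostly to
# "its" LED, with a little leakage into the others.
M7 = np.eye(7) + 0.05 * rng.random((7, 7))

truth = rng.random(7)                   # the 7 frame intensities we want back
measured = M7 @ truth                   # simulated 7-channel measurement
frames = np.linalg.solve(M7, measured)  # one equation per filtered camera

print(np.allclose(frames, truth))  # True
print(np.linalg.cond(M7))          # conditioning worsens as passbands overlap
```

The condition number is the thing to watch: the more the filter passbands overlap, the more the recovered frames amplify sensor noise.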

Discussions

Eric Hertz wrote 11/18/2016 at 18:35 point

What about combinations of LEDs? E.G. one each of Red, then Green, then Blue, but then Red and Green simultaneously, and so-forth...?


Ted Yapo wrote 11/18/2016 at 20:27 point

It boils down to an algebra problem.  For each pixel in the image, the RGB components in the raw data give you three equations.  The individual frames you'd like to recover are variables, so given the three equations, you can only solve (uniquely) for three frames.

In your specific case, you wouldn't  be able to distinguish the contribution of the Red-only image from the Red in the Red-Green one, for instance.  I banged my head against this for a while :-)
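You can check this numerically. The idealized 3x4 matrix below is a sketch - it assumes each LED excites only its own color channel - and it has rank 3, so the fourth frame isn't separable:

```python
import numpy as np

# Columns: pure R, pure G, pure B, then R and G fired together.
M = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])

print(np.linalg.matrix_rank(M))  # 3, not 4 -> no unique solution
```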


Eric Hertz wrote 11/18/2016 at 22:45 point

My brain's definitely long-since muddified my image-processing/matrix-algebra abilities... but intuitively it seems like it'd be plausible to me. If everything were ideal (where a red LED *only* triggers the red-sensors, etc. *and* where the scene photographed is purely grayscale)... 

Your argument for three-equations and three-unknowns seems quite clear. Hmmm. OTOH, since you spent time thinking about it, as well, then maybe there's something in there... an equation forgotten...?

For some reason I keep thinking of e.g. FM-stereo, and/or how stereo is stored on records, but the latter I know to have introduced a second-axis, the former I'm not so sure about (maybe phase?).

Another consideration is that no flash of an LED will *take away* brightness from a pixel... could that be used, somehow, in the math?


Ted Yapo wrote 11/18/2016 at 23:22 point

You can also look at it this way.  At each pixel, you have measured one point in a 3-dimensional space (R, G, B).  Now, if you want to reconstruct four distinct monochrome images, you're asking to choose a point in four dimensions (I1, I2, I3, I4).  Just based on a counting argument, there isn't enough information to specify which 4D point you want.

FM stereo uses the frequencies between 23 and 53 kHz to send the L-R signal, which gets mixed with the mono-compatible L+R sent on the lower frequencies - essentially a "second axis".

https://en.wikipedia.org/wiki/FM_broadcasting#Stereo_FM
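If it helps, here's a toy numpy sketch of the multiplexing (simplified - real broadcast adds pre-emphasis and specific level scaling):

```python
import numpy as np

fs = 192_000                # sample rate high enough for the 53 kHz baseband
t = np.arange(fs) / fs      # one second of samples

left = np.sin(2 * np.pi * 440 * t)   # placeholder program audio
right = np.sin(2 * np.pi * 660 * t)

pilot = 0.1 * np.sin(2 * np.pi * 19_000 * t)  # 19 kHz pilot tone
subcarrier = np.sin(2 * np.pi * 38_000 * t)   # 2x pilot, suppressed carrier

baseband = (left + right) + pilot + (left - right) * subcarrier
# 'baseband' is what frequency-modulates the RF carrier: mono receivers
# just low-pass to get L+R; stereo receivers demodulate the 23-53 kHz
# region with a regenerated 38 kHz carrier to recover L-R.
```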

That I could wrap my head around.  The one that always got me was that I and Q components could be used to send completely independent signals.  If you follow the math, it works out, of course, but it always struck me as odd that there were these two orthogonal signals hidden somewhere in the real-valued voltage signal.
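A quick numerical demo of that orthogonality (made-up carrier frequency and values):

```python
import numpy as np

fs, fc = 1_000_000, 100_000
t = np.arange(2000) / fs            # a whole number of carrier periods
I, Q = 0.7, -0.3                    # two independent baseband values

# One real-valued voltage carrying both:
passband = I * np.cos(2 * np.pi * fc * t) + Q * np.sin(2 * np.pi * fc * t)

# Mix back down with each carrier phase and average (a crude low-pass):
i_rec = 2 * np.mean(passband * np.cos(2 * np.pi * fc * t))
q_rec = 2 * np.mean(passband * np.sin(2 * np.pi * fc * t))
print(i_rec, q_rec)  # ~0.7, ~-0.3: the cross terms average to zero
```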

I've come up against the non-negativity of light before in my travels; it's always been a problem (like in this paper http://www.cs.rpi.edu/research/groups/graphics/eg2010/). It might be interesting to try to use it to advantage... hmmm

I'll keep thinking about the more frames issue.  Like you say, if you can find more equations, you can solve for more frames.  Maybe some constraints on the images?  Maybe using some neighborhood information?  So far, everything has been done on a pixel-by-pixel basis, but there's lots of information in neighborhoods.


Eric Hertz wrote 11/19/2016 at 03:44 point

I've been thinking about it a bit, and I think I've got it... And I think you're spot-on about only being able to get three equations from the three colors.

But, there *is* another [set of] equation[s], that'd be pretty easy to throw in the mix... take the same photos *before* the "event"...

Again, I'm not so great at matrix-math these days, but I can see it algorithmically... assuming everything is ideal, as described earlier (no idea what the effect would be with real-world, but you've already shown your skill at removing ghosting, etc...)

I think it'd give you six images, maybe seven ;)

Thanks for those links. Yeah, I-Q... still haven't wrapped my head around that one. And, the non-negativity of light thing, hasn't quite sunk-in yet, either... as to whether it can be used for this purpose.


Eric Hertz wrote 11/19/2016 at 05:32 point

bah... that's what I get for pen-drawing my thoughts... no grayscale. Yeah, my idea would work *great* if your background is black, and no two frames snapped in the same color-channel contain object-overlap ;) 

...back to your regularly-scheduled program!


Ted Yapo wrote 11/19/2016 at 12:55 point

I probably do need to take some "dark frames" to get the best images.  Especially at high ISO settings, there is a lot of background noise on the sensor (stuck pixels, pixel leakage, and even IR glow from the amplifier circuits on one side of the sensor).  It's mostly repeatable, though, so you can take a frame with the lens cap on, then subtract that from the real frame.

You have to do this for astrophotography, especially for long exposures where the sensor background can be close in brightness to the objects you're trying to capture.  I haven't tried it here, but it's probably going to improve quality a little.
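The subtraction itself is simple - a minimal sketch using the imageio library, with placeholder file names:

```python
import numpy as np
import imageio.v3 as iio

# Placeholder file names: a normal exposure and a lens-cap-on frame
# taken at the same ISO and exposure time.
light = iio.imread("exposure.png").astype(np.float32)
dark = iio.imread("dark.png").astype(np.float32)

# Subtract the repeatable sensor background, clamping to valid 8-bit range.
clean = np.clip(light - dark, 0, 255)
iio.imwrite("clean.png", clean.astype(np.uint8))
```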
