
Super-Resolution is a super solution!

A project log for Holoscope - Superresolution Holographic Microscope

Subpixel imaging using the Raspberry Pi and an Android smartphone. The light source is an LCD.

beniroquai 05/21/2016 at 05:33

An example of subpixel super-resolution

So, what does super-resolution mean? Simply that it is possible to extend the support of the information coming from the sensor by shifting the object's image. In the first section of this lens-less project I mentioned that one captures the interference pattern of the object wave multiplied with the reference wave, plus the reference wave itself: a so-called hologram. Waves are continuous, but the detectors used are not, so there is always a sampling step, which in this case depends on the size of the pixels.

The smaller the pixels, the higher the resolution, as long as there are no optics in between. The image below shows the process of discretization. Just imagine scaling an image from 200×200 down to 100×100 px: the pixel size is doubled, and a lot of information is irrecoverably lost.
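
To make this discretization step concrete, here is a minimal NumPy sketch of the 2×2 binning described above (the function name and the 200×200 example are just illustrative):

```python
import numpy as np

def bin2x2(img):
    """Average 2x2 blocks: a 200x200 frame becomes 100x100,
    i.e. the effective pixel size doubles."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

hires = np.random.rand(200, 200)   # stand-in for a finely sampled image
lores = bin2x2(hires)              # 100x100: sub-pixel detail is gone
```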

Now shift this downsampled image over a higher-frequency sampling grid, here represented by a mid-resolution image sensor. Each shift gives a different combination of the pixels. Knowing the exact position of each shift, one can simply calculate the sub-pixels whose intensities make up the sub-image.

Finally you get an image with, in the best case, n times more pixels, where n is the number of frames used in the capturing process (i.e. roughly sqrt(n) times finer sampling per axis).
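
As a rough sketch of this shift-and-interleave idea (not the project's actual reconstruction code), assuming the sub-pixel shifts are already known:

```python
import numpy as np

def interleave(frames, shifts, factor):
    """Naive pixel super-resolution: place each low-res frame onto a
    grid `factor` times finer, at its known sub-pixel offset.
    frames: list of (h, w) arrays; shifts: (dy, dx) in pixel fractions."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * factor)) % factor   # sub-pixel row offset
        ox = int(round(dx * factor)) % factor   # sub-pixel column offset
        acc[oy::factor, ox::factor] += img
        cnt[oy::factor, ox::factor] += 1
    cnt[cnt == 0] = 1                           # leave unvisited cells at 0
    return acc / cnt
```

With factor = 2 and four frames shifted by half a pixel in each direction, every sub-pixel cell gets filled exactly once.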

Many approaches in the past have simply shifted the object or the sensor mechanically. Mechanics in optics are always a problem, because in most cases a lot of things change that should ideally stay the same. Another approach was investigated by Ozcan et al.: they virtually shifted the light source. The following graphic shows that the effect is more or less the same; it results in a shift of the hologram on the sensor. As long as the distance between the LEDs and the object is large compared to the one between the object and the detector, the shift is linear.

By switching the LEDs 1..5 on and off sequentially, one gets 5 shifted images, which ideally results in sqrt(5)-times smaller effective pixels.
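
A quick back-of-the-envelope check of that geometry; all distances below are assumed placeholder values, not measurements from this setup:

```python
# A source at distance z1 above the object and a sensor at z2 below it:
# moving the source by ds shifts the hologram by roughly ds * z2 / z1
# (small-angle approximation), so the shift stays linear while z1 >> z2.
z1 = 60e-3   # source-to-object distance, assumed: 60 mm
z2 = 1e-3    # object-to-sensor distance, assumed: 1 mm
ds = 3e-3    # LED pitch, assumed: 3 mm
print(f"hologram shift: {ds * z2 / z1 * 1e6:.0f} um")  # -> 50 um
```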

But as I've mentioned earlier: doing this LED thing was time-consuming and the efficiency is not great. The light distribution is also not great if you're not working precisely... and I'm not. Definitely! So using an old DMD projector/chip does everything at once: it shifts the pupil, has high efficiency, is affordable as an off-the-shelf part, and you can even control the degree of coherence. All of this is possible by simply writing different patterns to the DMD chip.

The shift can be adjusted, even though it's always a discrete shift. The pattern (seen below) was displayed via an Android MHL connection, an HDMI standard for mobile devices. Google allows Android devices to use secondary screens in their apps. Great stuff, but a hassle to debug. Anyway, I'll publish the code on GitHub in the future!
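
The actual display code is an Android app, but the pattern generation itself is trivial. A hypothetical Python/OpenCV sketch of the shifted point-source frames (resolution, dot radius, and step size are assumed values):

```python
import numpy as np
import cv2

W, H = 1280, 720             # assumed display resolution
STEP, RADIUS, N = 4, 10, 3   # dot step (px), dot radius (px), grid per axis

for iy in range(N):
    for ix in range(N):
        frame = np.zeros((H, W), np.uint8)       # black background
        cx = W // 2 + ix * STEP                  # shifted "point source"
        cy = H // 2 + iy * STEP
        cv2.circle(frame, (cx, cy), RADIUS, 255, -1)
        cv2.imwrite(f"pattern_{iy}_{ix}.png", frame)
```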

The shift can be calibrated post-experimentally with an OpenCV implementation of an optical flow algorithm.

Optical flow? This is a class of algorithms that calculates the shift between two images. Just imagine a car moving from left to right (frame 1 -> frame 2). The difference is then the "flow" of the car. This is often used to detect motion, for example.

It detects anchor points in the initial frame and tries to find them again in the next frame. The movement of each anchor produces a flow map, as seen below. One could simply calculate the mean of all shifts and shift the second frame back to the original position (see the sketch below), BUT:
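
A minimal OpenCV sketch of that naive mean-shift alignment using sparse (Lucas-Kanade) flow; file names are placeholders, and the next paragraph explains why this is not enough here:

```python
import numpy as np
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Find anchor points in frame 1 and track them into frame 2.
p0 = cv2.goodFeaturesToTrack(img1, maxCorners=200,
                             qualityLevel=0.01, minDistance=7)
p1, st, err = cv2.calcOpticalFlowPyrLK(img1, img2, p0, None)

# Mean shift over all successfully tracked anchors.
ok = st.flatten() == 1
dx, dy = (p1[ok] - p0[ok]).reshape(-1, 2).mean(axis=0)

# Shift frame 2 back by the mean flow (one global shift for the image).
M = np.float32([[1, 0, -dx], [0, 1, -dy]])
aligned = cv2.warpAffine(img2, M, (img2.shape[1], img2.shape[0]))
```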

As I've mentioned earlier, the shift of the images is only linear as long as the distance condition is fulfilled. In the experiments, that was obviously not the case. The different patterns gave a weird magnification/distortion effect (like defocus) due to the point-source geometry: the spherical wave coming from the point source produced a fish-eye-like distortion, with higher magnification at the borders.

Therefore the "shift" map can be generated depending on the position in the image. The two images below show the corresponding shift in the X/Y direction.
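
A sketch of how such a dense, position-dependent shift map can be obtained and applied with OpenCV's Farneback flow (parameter values are the common defaults, not tuned for this setup):

```python
import numpy as np
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: one (dx, dy) vector per pixel -> the X/Y shift maps.
flow = cv2.calcOpticalFlowFarneback(img1, img2, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Warp frame 2 back onto frame 1's grid using the per-pixel shift map.
h, w = img1.shape
gx, gy = np.meshgrid(np.arange(w), np.arange(h))
map_x = (gx + flow[..., 0]).astype(np.float32)
map_y = (gy + flow[..., 1]).astype(np.float32)
aligned = cv2.remap(img2, map_x, map_y, cv2.INTER_LINEAR)
```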

First result of the "linear" shift:

The reconstructed super-resolution image has motion blur due to imprecise shift parameters and non-linear shift effects.

Final result: an 80-megapixel image (from 8 frames, background subtracted)

The result using the "Dense Optical Flow" with a shift map:

Image 1:

Image 2:

Difference Image 1/Image 2:

Map in X-direction:

Map in Y-direction:

Shifted images: "super-resolution" with a non-linear shift:

No motion blur with 2 images; further improvements are needed with more images:
