Holoscope – Lensless Microscope – PART 1: Optical Setup

A project log for Holoscope - Superresolution Holographic Microscope

Subpixel imaging using a Raspberry Pi and an Android smartphone. The light source is an LCD.

beniroquai 05/20/2016 at 16:49



Many health applications in third-world countries need proper equipment at a low price. Many research groups have started to develop cheap diagnostic modules such as the Foldscope (the one-dollar microscope), the lensless on-chip microscope from MIT, and the CellScope (Waller) from Berkeley. TH Cologne definitely needs one too!

Germany was lacking such a system too. The goals were:

Lensless Microscope – WHAT??

Starting from the figure above (well, still in German): there is a light source which is almost monochromatic, i.e. of small bandwidth. This coherent light source makes it possible to capture the fringes coming from diffraction at the object (you can see these rings around the flower in the grayscale image).

These fringes come from the interaction of the spherical waves emitted by the LED source with the transmission function of the object. For this, the coherence condition has to be fulfilled (temporal and spatial coherence). The LED is great because it meets this requirement and doesn't produce any speckle pattern (spectral bandwidth ~20 nm, high efficiency, small chip ≈ good spatial coherence).
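As a quick sanity check (my own sketch, not from the original log — the 530 nm wavelength is an assumed value for a green LED), the temporal coherence length follows from l_c ≈ λ²/Δλ:

```python
# Temporal coherence length of an LED: l_c ≈ λ² / Δλ
# λ = 530 nm is an assumed green-LED wavelength; Δλ = 20 nm is the
# spectral bandwidth mentioned above.
wavelength = 530e-9   # m
bandwidth = 20e-9     # m
coherence_length = wavelength ** 2 / bandwidth
print(coherence_length * 1e6)  # ~14 µm
```

A coherence length of roughly 14 µm is short, but sufficient for the near-field fringes captured right behind the object.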

Interference is possible when the waves have a more or less fixed phase relationship to each other. Destructive and constructive interference of the object wave and the undiffracted spherical reference wave transmitted through the object produces a so-called in-line hologram (look up the Gabor hologram on Wikipedia if you like).

The interference pattern exists only close behind the object, so it has to be captured with a sensor placed right there – hence the „on-chip" imaging methodology. Shrinking the source size further enlarges the coherence distance. This is possible by placing a pinhole right after the LED, by coupling the LED into a waveguide, or by using a DMD/DLP projector…

Backpropagation of the field – magic field

So the captured image is the interference, recorded on the sensor at a distance Z behind the object plane, of two components:

  1. The Object wave
  2. The Reference wave

After doing some magic with the numerical Fresnel integral, we can simply back-propagate the electric field,

  1. which is more or less the square root of the intensity measurement on the sensor, by
  2. Fourier-transforming the object amplitude,
  3. multiplying it with the Fresnel kernel for the distance Z (which corresponds to a convolution in the spatial domain), and
  4. inverse Fourier-transforming it.
  5. After taking (abs(A))^2, everything looks good – provided Z is the same as in the experiment.
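The five steps above can be sketched in a few lines of NumPy (a minimal sketch under assumptions: square sensor grid, paraxial Fresnel transfer function, and illustrative pixel size, wavelength and Z – the function and variable names are mine, not the project's):

```python
import numpy as np

def fresnel_propagate(field, z, wavelength, dx):
    """Propagate a complex field over distance z using the Fresnel
    transfer function: FFT, multiply with the kernel, inverse FFT."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies of the grid
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative values: 2 µm pixels, green LED, Z = 1 mm
n, dx, wl, z = 256, 2e-6, 530e-9, 1e-3
intensity = np.random.rand(n, n)          # stand-in for the sensor image
amplitude = np.sqrt(intensity)            # step 1: sqrt of the measurement
recon = fresnel_propagate(amplitude, -z, wl, dx)  # steps 2–4: back-propagate
image = np.abs(recon) ** 2                # step 5
```

Because H(z)·H(−z) = 1, propagating forward and then backward returns the original field exactly – a handy unit test for the kernel.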

The goal: do it on a phone. Android + OpenCV will do the job. Most phones already have quad-core CPUs, so why not use them?

Due to the short coherence length, each patch of the image can be back-propagated independently.
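That independence maps nicely onto multiple cores. Here is a sketch (my own illustration, not the project's code – it assumes the hologram dimensions are divisible by the tile size and ignores artefacts at tile borders, which in practice call for overlapping tiles):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backpropagate_tile(amp_tile, z, wavelength, dx):
    # Fresnel back-propagation of one square tile (transfer-function method)
    n = amp_tile.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))  # kernel for -z
    return np.fft.ifft2(np.fft.fft2(amp_tile) * H)

def reconstruct_tiled(hologram, z, wavelength, dx, tile=128):
    """Back-propagate each tile independently (here with a thread pool,
    standing in for the phone's CPU cores) and stitch the results."""
    amp = np.sqrt(hologram)
    rows, cols = amp.shape[0] // tile, amp.shape[1] // tile
    tiles = [amp[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
             for r in range(rows) for c in range(cols)]
    with ThreadPoolExecutor() as pool:
        recon_tiles = list(pool.map(
            lambda t: backpropagate_tile(t, z, wavelength, dx), tiles))
    out = np.empty(amp.shape, dtype=complex)
    for idx, t in enumerate(recon_tiles):
        r, c = divmod(idx, cols)
        out[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile] = t
    return out
```

On a phone the same pattern would use Java/Kotlin threads over OpenCV Mats; the physics per tile is unchanged.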


Following from the explanation above, one needs a more or less coherent source – an LED. It needs to be placed at a large distance from the object. One can take advantage of „scaling" the system by placing a pinhole aperture right after the LED. This decreases the light efficiency, but shrinks the system! Good parameters would be:
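To get a feeling for the geometry (my own numbers for illustration – the log's actual parameters are not listed here): with a point-like source, the hologram on the sensor is magnified by M = (z1 + z2) / z1, which is why a large source distance z1 together with a tiny object–sensor gap z2 keeps the system close to unit magnification over the full sensor area:

```python
# Point-source in-line holography: fringe magnification M = (z1 + z2) / z1
# The distances below are assumptions for illustration only.
z1 = 50e-3    # 50 mm pinhole-to-object distance
z2 = 1e-3     # 1 mm object-to-sensor distance ("on-chip")
M = (z1 + z2) / z1
print(M)      # ≈ 1.02, i.e. nearly unit magnification
```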

Optional: when trying to recover the object's phase (usually an object consists of amplitude information, e.g. dust, and phase information, e.g. cell structure), it is helpful to take several images at different Z positions. The Z resolution should be around 10–15 µm.
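A common way to use those extra Z planes is Gerchberg–Saxton-style multi-height phase retrieval: cycle through the measurement planes, keep the current phase estimate and replace the amplitude by the measured one. The sketch below is my own illustration of that generic scheme (not the project's implementation); all names and parameters are made up:

```python
import numpy as np

def propagate(field, z, wavelength, dx):
    # Fresnel transfer-function propagation (same as in the back-propagation sketch)
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multiheight_phase_retrieval(intensities, zs, wavelength, dx, iters=20):
    """Iterate over the measurement planes: propagate the current
    estimate to each plane, enforce the measured amplitude there and
    keep the phase. Finally back-propagate to the object plane."""
    amps = [np.sqrt(I) for I in intensities]
    field = amps[0].astype(complex)       # start with zero phase
    z_cur = zs[0]
    for _ in range(iters):
        for amp, z in zip(amps, zs):
            field = propagate(field, z - z_cur, wavelength, dx)
            z_cur = z
            field = amp * np.exp(1j * np.angle(field))
    return propagate(field, -z_cur, wavelength, dx)
```

With the ~10–15 µm Z steps mentioned above, the plane list would be something like `zs = [1.00e-3, 1.01e-3, 1.02e-3]`.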