Several ideas in computer vision rely on structured illumination, including compressed sensing, non-line-of-sight imaging, and range finding. Here I'll report on building demos for these concepts. Many of these ideas are easy to implement from a hardware perspective, but with some understanding of the reconstruction concepts, the results can be really impressive and surprising.
The structured illumination can be produced using an LED matrix or a projector, while the detection can be either a single photodetector or a camera.
The basic idea is to measure the light transport between a projector and a camera by mapping each illumination pixel to the camera pixels it affects. With this, amazing things are possible:
1) Reconstructions from the perspective of the projector
2) Synthetic illumination - reconstructing images so they appear to have been lit by any illumination pattern
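As a rough numpy sketch of what the transport matrix buys you (the dimensions and the random stand-in for T are hypothetical; in a real setup each column of T is measured by lighting one projector pixel at a time, or via the coded patterns described below):

```python
import numpy as np

# Hypothetical dimensions: a tiny 8x8 projector and 16x16 camera.
n_proj = 8 * 8      # projector (illumination) pixels
n_cam = 16 * 16     # camera pixels

rng = np.random.default_rng(0)
# Stand-in transport matrix T: column j is the camera image seen
# when only projector pixel j is lit. In practice T is measured.
T = rng.random((n_cam, n_proj))

# Synthetic illumination: relight the scene with any new pattern p
# without re-photographing it -- the camera image is just T @ p.
p = rng.random(n_proj)
relit = T @ p                      # shape (n_cam,)

# Dual photograph: by Helmholtz reciprocity the projector and camera
# swap roles, so the dual setup's transport matrix is T transposed.
# An image "taken" from the projector's point of view, floodlit from
# the camera side, is T.T applied to an all-ones camera pattern:
dual_image = T.T @ np.ones(n_cam)  # shape (n_proj,)
```

The dual image has one value per projector pixel, which is exactly why the reconstruction appears to be taken from the projector's perspective.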
One of the keys to implementing this with a camera is binary-encoded structured illumination. The video explains how this works, and it is the key to data acquisition. Instead of illuminating the scene one illumination pixel at a time, each illumination pixel is assigned a unique binary code. Each digit in the code corresponds to a different illumination pattern: 1 means the illumination pixel is on during that pattern, and 0 means it is off.
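A minimal sketch of how those patterns can be generated (the 16-pixel size is just an example; pattern k lights pixel j exactly when bit k of j's index is 1):

```python
import numpy as np

n_pixels = 16                                 # e.g. a 4x4 LED matrix
n_patterns = int(np.ceil(np.log2(n_pixels)))  # 4 patterns instead of 16

# Pattern k turns pixel j on iff bit k of j's binary code is 1.
codes = np.arange(n_pixels)
patterns = np.array([(codes >> k) & 1 for k in range(n_patterns)])
# patterns has shape (n_patterns, n_pixels); each COLUMN is the
# unique binary code of one illumination pixel.
```

Reading a camera pixel's bright/dark sequence across the patterns then spells out the binary code of the illumination pixel that lights it, so log2(N) exposures identify all N correspondences instead of N exposures.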
I also did some dual photos of my face, and they are eerie.
Dual photography paper: Pradeep Sen, Billy Chen, Gaurav Garg, Stephen R. Marschner, Mark Horowitz, Marc Levoy, and Hendrik P. A. Lensch. 2005. Dual photography. In ACM SIGGRAPH 2005 Papers (SIGGRAPH '05). Association for Computing Machinery. https://doi.org/10.1145/1186822.1073257
Thanks to Michael @TeachingTech - 3D scanning system review: • Automated and easy 3D scanning with OpenSc...

Generative AI references:
Babaei, R.; Cheng, S.; Duan, R.; Zhao, S. Generative Artificial Intelligence and the Evolving Challenge of Deepfake Detection: A Systematic Analysis. J. Sens. Actuator Netw. 2025, 14, 17. https://doi.org/10.3390/jsan14010017
Karras, Tero, et al. "Progressive Growing of GANs for Improved Quality, Stability, and Variation." arXiv preprint arXiv:1710.10196 (2017). https://arxiv.org/abs/1710.10196
Karras, Tero, Samuli Laine, and Timo Aila. "A Style-Based Generator Architecture for Generative Adversarial Networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. https://arxiv.org/abs/1812.04948

Keyence 3D scanner with structured illumination - https://www.keyence.com/products/3d-m...
What if there was a way to collect an image with a single detector, a single pixel? It doesn’t seem possible - images consist of 2D information - how could all the information be captured with a single point measurement?
One way to do this is by scanning that point over the field of view, one point at a time, like building a 3D lidar map - a mirror scanner sweeps the photodetector's view across the scene.
But there’s actually another way to solve this problem. And it amazingly doesn’t require any moving parts. I’ve always been fascinated by this idea, so I decided to build it - a one-pixel camera that has no moving parts. Just an LED matrix and a single photoresistor.
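Here is a minimal sketch of the no-moving-parts idea: show a sequence of structured patterns on the LED matrix, record one photoresistor reading per pattern, and invert the pattern matrix to recover the image. The 16-pixel scene and the use of Hadamard patterns are illustrative assumptions, not necessarily what the actual build uses.

```python
import numpy as np

# Tiny unknown "scene" as seen through the optics: 4x4 = 16 intensities.
rng = np.random.default_rng(1)
scene = rng.random(16)

# Build a 16x16 Hadamard matrix (Sylvester construction). Each row is
# one illumination pattern shown on the LED matrix.
H = np.array([[1]])
while H.shape[0] < 16:
    H = np.block([[H, H], [H, -H]])

# One photoresistor reading per pattern: total light = pattern . scene.
# (A real LED can't emit -1; in practice you show the 0/1 pattern and
# its complement and subtract the two readings.)
y = H @ scene

# Reconstruct: Hadamard matrices satisfy H @ H.T = n * I, so the
# inverse is just a rescaled transpose.
recovered = (H.T @ y) / 16

print(np.allclose(recovered, scene))  # True
```

With 16 patterns for 16 pixels the system is exactly determined; compressed sensing is what lets you get away with far fewer patterns than pixels.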