The scanner performs an instant capture of a 3D surface and, after some (fully automated) processing, outputs a 3D model that can be viewed in CAD packages for measurement, re-printing and so on.
The key components are:
a) 4 cameras (to capture the object from different angles).
b) A Multi-view Stereo (MVS) algorithm which turns the 4 images into a 3D model.
c) A projector which projects a random pattern onto the object to help with step (b); a sketch of generating such a pattern follows this list.
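The projected pattern can be as simple as a random field of dots. As an illustration only (not the exact pattern used here), a minimal Python sketch that generates one, assuming NumPy and Pillow and a made-up projector resolution:

import numpy as np
from PIL import Image

# Assumed projector resolution and dot density (not this project's real values).
WIDTH, HEIGHT = 1280, 720
DOT_PROBABILITY = 0.02

rng = np.random.default_rng()
dots = (rng.random((HEIGHT, WIDTH)) < DOT_PROBABILITY).astype(np.uint8) * 255
Image.fromarray(dots, mode="L").save("dot_pattern.png")   # display full-screen on the projector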
Four Raspberry Pi Zeros perform a synchronised capture and upload the images to the cloud, ready for processing. A more powerful laptop then takes over to create a nicely smoothed, textured mesh.
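One way this could be wired up (a rough sketch of the idea, not the project's actual capture scripts): assuming the picamera library, a UDP broadcast as the shared trigger, and a hypothetical upload endpoint, each Pi Zero could run something like:

import socket
import requests
from picamera import PiCamera

TRIGGER_PORT = 5005                        # assumed port for the "go" broadcast
UPLOAD_URL = "https://example.com/upload"  # hypothetical cloud endpoint
CAMERA_ID = "pi-zero-1"                    # unique name per Pi

camera = PiCamera(resolution=(2592, 1944))
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", TRIGGER_PORT))

while True:
    sock.recv(64)                          # block until the laptop broadcasts a trigger
    filename = CAMERA_ID + ".jpg"
    camera.capture(filename)               # all four Pis fire at (nearly) the same moment
    with open(filename, "rb") as f:
        requests.post(UPLOAD_URL, files={"image": f})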
Essentially, MVS finds matching features in each of the images so that it can triangulate distance. Projecting extra features onto the object results in a more accurate 3D mesh.
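To make the triangulation step concrete, here is a rough two-view sketch using OpenCV. This illustrates the principle rather than VisualSFM's internals, and the projection matrices below are placeholders that would normally come from camera calibration:

import cv2
import numpy as np

img1 = cv2.imread("cam1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("cam2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match features; the projected dots give far more matches on plain surfaces.
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T   # 2xN pixel coordinates
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T

# Placeholder 3x4 projection matrices; real ones come from calibration / SfM.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

points_4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
points_3d = (points_4d[:3] / points_4d[3]).T                 # Nx3 sparse point cloud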
It was only designed to capture 3D surfaces at about 6 inches (it ignores the rear of the object, and so on), but for this task it scores highly on speed, simplicity and accuracy.
I've made a "Mark 2" of this scanner which is far more accurate, and it has a 2-axis turntable so you can scan objects from all sides easily. I'm really proud of it; it's on a new project page...
A good source of a scattered random dot pattern is a laser bounced off a dull white, slightly curved surface. We used an aspirin for a similar project in the '90s. Very efficient and cheap.
Great work. I always wondered why structured-light scanning could not be done with a rudimentary laser and a projection pattern. The pattern could be a rotating disk with cutouts or something like that. I can't wait to see the scripts published.
The question I have is: is that mesh straight from the scanner, or was it post-processed? I had similar results with one camera and Agisoft, but with A LOT of pictures and some post-processing.
I'm not too familiar with Agisoft, but in my experience the pattern projection helps more than the number of images. In the example image the only post-processing was to crop the 3D model to show the central region where the hand is.
If you projected 3 patterns (say random red, green and blue patterns) from different projectors at different angles, would that help with the creation of the 3D image?
By 'projectors' I'm thinking something simple like:
You're thinking of structured-light scanning; this is photogrammetry.
Also, I have my doubts about the claim that this is fully automated. VisualSFM outputs point clouds, and surface reconstruction in MeshLab usually requires a lot of manual cleaning before a decent mesh can be obtained.
No, the thinking behind it was that with a single projector you're only going to be able to add features to a single "face". With multiple projectors (so long as they don't wash out areas that are already lit, hence the three colours) you would be able to give features to surfaces which multiple cameras could see but a single projector could not reach.
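For what it's worth, generating colour-separated patterns like that is straightforward. A sketch of the idea (purely hypothetical, nothing like this was built here), assuming NumPy and Pillow and a made-up projector resolution:

import numpy as np
from PIL import Image

WIDTH, HEIGHT = 1280, 720        # assumed projector resolution
rng = np.random.default_rng()

# One random pattern per projector, each confined to its own colour channel
# so overlapping projections don't wash each other out.
for channel, colour in enumerate(["red", "green", "blue"]):
    pattern = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
    dots = rng.random((HEIGHT, WIDTH)) < 0.02
    pattern[..., channel][dots] = 255
    Image.fromarray(pattern).save("pattern_" + colour + ".png")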
Regarding "fully automated" what I meant by this is that from a click of the button a 3D mesh arrives on the laptop screen. In the example image I did crop the scan to make it a nice rectangle but there was no manual processing or tidying of the 3D hand.
I don’t suppose there is a similar way to digitize buildings, without needing the random dot patterns?
Also, why does the data from the Pi Zeros have to go to the “cloud”? Surely any recent PC can handle the work.