3 days ago •
I finally have all my settings saved in ~/.config/JoshCrawleySoft/ and a user-changeable directory for all the camera calibration data!
Code was pushed to github.com/jwcrawley on 9/27. I'm now working on the Hough line and circle transforms to auto-detect the laser plane intersection (Hough line transform) as well as detection of the circular platter (Hough circle transform). I may, at a later time, implement color calibration using a flatbed scanner and paint chips from the local hardware store. The goal here is lens-calibrated, color-calibrated, auto-detected 3D scanning. The simpler, the better.
Work is going slower on this project, mainly due to 13 credit hours, a new job at Indiana University, and my wedding on October 11! However, coding is still taking place for the magic of a cheap and ubiquitous 3D scanner!
2 months ago •
I messed with Java to display webcam images, aiming to do the full stack with OpenCV. Unfortunately, messing with Java is all that I got accomplished, considering there is no unified, simple way of opening webcams in Java. I tried JMF, FMJ, Marvin, CIVIL, webcam-capture, and other libraries. The worst crashed my Java VM. The best displayed any webcam source at 640x480 (argh!).
So, I decided to see how hard displaying cam data would be with Qt and C++... It took me 20 minutes to display at an arbitrary resolution! So I decided to implement the GUI and controls in Qt and OpenCV, and eventually PCL.
On August 4th (yesterday), I made my first major check-in on GitHub with my new Qt program. It has the config screen and video viewing screens. Now, my next big adventure is to work on calibration routines. Then, and only when they are done, will I get to scanning.
My calibration routines are simple in concept.
1. First, I calibrate the cameras for lens aberration. That is easily solved by calling the camera calibration routine. I do this for each camera.
2. I determine the center coordinate of the platter. That point (in each picture) is what I consider the origin. That point shouldn't change, so it's the same for each image. I use a Hough circle transform, with inspiration from here:
3. I determine the slope of the laser line on the platter. This angle is the combined angular displacement from horizontal and vertical alignment. I use a Hough line transform to get the position of this line, and then calculate its angle versus vertical.
As long as I calculate each camera's own calibration data, I can combine those point clouds easily. I will also need a calibration function for the second laser line and its position. I'm still thinking about how I want to implement it, as my goal here is simplicity of use.
2 months ago •
I now have the 2 line lasers I ordered! So, my total inventory is 2 line lasers and a point laser (with a glass rod that makes a 3rd, dirty line laser).
I also have a gifted Logitech HD Pro Webcam C920 (lsusb: 046d:082d Logitech, Inc. HD Pro Webcam C920)! Yes, this is a $70 webcam that captures at 1920x1080!
I'm also revising my decision to go with Java. I have fought and fought with the multitude of libraries that claim to properly open up webcams in Linux (OpenCV, webcam-capture, FMJ, JMF, Marvin...). Alas, I either end up with driver failures when opening the libraries, or they are capped at 640x480, which is no better than my PS2 EyeToy. Settling for that loses me 6.75x the resolution data.
Below is the output of a Qt 5 program I wrote using OpenCV and C++ at the max resolution my camera supports. It took only 20 minutes, and that includes learning C++ grammar (I already knew C):
And this is a snapshot of my wife's and my artistry. We both painted it together (we hopped from side to side). And it was my first time painting :)