10/30/2014 at 02:19 •
I finally have the bulk of my calibration code written, with only the line selection / saving as the last step. Everything that's touched is calibrated and measured!
Mainly, what's next is the line selection / saving, writing the 3D scanning portion, and then updating the Arduino code to be more G-code based.
From there, I can easily generate a pointcloud file with X, Y, Z, as well as RGB attached to each coordinate. Once I am at this point, I will look into how hard it is to calculate surfaces using PCL.
I also have to fix errors like my Makefile. I'm unsure how to do that, given this is my first real undertaking of programming. However, documentation online is very good. I'll get it done soon.
09/28/2014 at 14:52 •
I finally have all my settings saved in ~/.config/JoshCrawleySoft/ and a user-changeable directory for all the camera calibration data!
Code was pushed to github.com/jwcrawley on 9/27. I'm now working on the Hough line and circle transforms to auto-detect the laser plane intersection (Hough line transform) as well as the circular platter (Hough circle transform). I may, at a later time, implement color calibration using a flatbed scanner and paint chips from the local hardware store. The goal here is lens-calibrated, color-calibrated, auto-detected 3D scanning. The simpler, the better.
Work is going slower on this project, mainly due to 13 credit hours, a new job at Indiana University, and my wedding on October 11! However, coding is still taking place for the magic of a cheap and ubiquitous 3D scanner!
08/05/2014 at 10:51 •
I messed with Java for displaying webcam images, to do the full stack with OpenCV. Unfortunately, messing with Java is all I got accomplished, since there is no unified, simple way of opening webcams in Java. I tried JMF, FMJ, Marvin, CIVIL, webcam-capture, and other libraries. The worst crashed my Java interpreter. The best displayed any webcam source at 640x480 (argh!).
So, I decided to see how hard displaying cam data would be with Qt and C++... It took me 20 minutes to display at an arbitrary resolution! So I decided to implement the GUI and controls in Qt and OpenCV, and eventually PCL.
On August 4th (yesterday), I made my first major check-in on GitHub with my new Qt program. It has the config screen and the video viewing screens. Now, my next big adventure is to work on calibration routines. Then, and only when they are done, will I get to scanning.
My calibration routines are simple in idea.
1. First, calibrate the cameras for lens aberration. That is easily solved by calling the camera calibration routine. I do this for each camera.
2. I determine the center coordinate of the platter. That point (in each picture) is what I consider the Origin. That point shouldn't change, so it's the same for each image. I use a Hough circle transform, with inspiration from here:
3. I determine the slope of the laser line on the platter. This angle is the combined angular displacement between horizontal and vertical alignment. I use a Hough line transform to get the position of this line, and then calculate its angle versus vertical.
As long as I calculate each camera's own calibration data, I can combine those point clouds easily. I will need a calibration function for the second laser line and its position. I'm still thinking about how I want to implement it, as my goal here is simplicity of use.