Success!

A project log for VR Camera: FPGA Stereoscopic 3D 360 Camera

Building a camera for 360-degree, stereoscopic 3D photos.

Colin Pate • 10/16/2017 at 08:08 • 1 Comment

TL;DR: I've successfully captured, warped and stitched a 200-degree stereoscopic photo! Check it out in the photo gallery. Full 360 will come soon, once 4 more cameras show up.

After the last log, the project went dormant for a couple of weeks while I waited on new lenses. They arrived about a week ago, and I was very happy to see that they have a much wider field of view. In fact, they are so wide that it seems unlikely their focal length is actually 1.44mm. I'm not complaining, though. Below is a photo from one of the new lenses.

The field of view on these lenses is pretty dang close to 180 degrees, and as you can see, the image circle is actually smaller than the image sensor. They also introduce a lot more distortion than the older lenses did, which caused some serious issues with calibration. I'll cover those below.

Camera Calibration

In order to create a spherical 360 photo, you need to warp your images with a spherical projection. I found the equations for spherical projection on this page, and they're pretty straightforward. However, this projection requires an equirectangular source image, and the images from the new lenses are pretty dang far from equirectangular. So far, in fact, that the standard OpenCV camera calibration example completely failed to calibrate when I took a few images with the same checkerboard that I used to calibrate the old lenses.
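For anyone curious what that warp looks like in code, here's a minimal sketch of feeding the inverse spherical mapping (theta = x'/f, phi = y'/f for each output pixel) into cv::remap. The function name and the treatment of the focal length f (in pixels) are my own placeholders, not notation from that page:

#include <opencv2/opencv.hpp>
#include <cmath>

// Minimal sketch of spherical warping with cv::remap. For each destination
// pixel we invert the spherical projection to find which source pixel to
// sample. 'f' is the focal length in pixels (an assumed placeholder here).
cv::Mat sphericalWarp(const cv::Mat &src, double f)
{
    cv::Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);
    const double cx = src.cols / 2.0, cy = src.rows / 2.0;
    for (int y = 0; y < src.rows; y++) {
        for (int x = 0; x < src.cols; x++) {
            double theta = (x - cx) / f; // longitude of this output pixel
            double phi   = (y - cy) / f; // latitude of this output pixel
            // Inverse spherical projection back to the flat source image
            double srcX = f * std::tan(theta) + cx;
            double srcY = f * std::tan(phi) / std::cos(theta) + cy;
            mapX.at<float>(y, x) = static_cast<float>(srcX);
            mapY.at<float>(y, x) = static_cast<float>(srcY);
        }
    }
    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR,
              cv::BORDER_CONSTANT); // out-of-range samples come out black
    return dst;
}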

This required some hacking on my part. OpenCV provides a fisheye-model calibration library, but I couldn't find any example of its usage. Luckily, most of its functions map one-to-one onto the regular camera calibration functions, and I only had to change a few lines in the camera calibration example! I will upload my modified version to the files section.
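For reference, here's roughly what the swap looks like. The surrounding setup (collecting objectPoints and imagePoints from checkerboard detections) is assumed to be the same as in the standard calibration sample; only the calibrate and undistort calls change:

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of the fisheye-model calibration call, as a drop-in replacement for
// cv::calibrateCamera. D holds the fisheye model's 4 distortion coefficients
// instead of the usual 5+.
double calibrateFisheye(
        const std::vector<std::vector<cv::Point3f>> &objectPoints,
        const std::vector<std::vector<cv::Point2f>> &imagePoints,
        cv::Size imageSize, cv::Mat &K, cv::Mat &D)
{
    std::vector<cv::Mat> rvecs, tvecs;
    int flags = cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC |
                cv::fisheye::CALIB_FIX_SKEW;
    return cv::fisheye::calibrate(objectPoints, imagePoints, imageSize,
                                  K, D, rvecs, tvecs, flags);
}

// Undistortion is the same one-to-one swap: cv::fisheye::undistortImage
// takes the place of cv::undistort.
void undistortFisheye(const cv::Mat &src, cv::Mat &dst,
                      const cv::Mat &K, const cv::Mat &D)
{
    cv::fisheye::undistortImage(src, dst, K, D, K);
}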

Cropping and Warping

Once I got calibration data from the new lenses, I decided to just crop the images from each camera so the image circle was in the same place, and then use the calibration data from one camera for all four. This ensured that the images would look consistent between cameras and saved me a lot of time holding a checkerboard, since a good calibration needs 10-20 images per camera.
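The cropping itself is just a region-of-interest per camera. Here's a sketch of the idea; the circle center and crop size below are made-up example values, not measurements from my cameras:

#include <opencv2/opencv.hpp>

// Sketch: crop each camera's frame so its image circle lands at the same
// pixel location, so that one set of calibration data works for all four
// cameras. circleCenter would be found once per camera (e.g. by eye).
cv::Mat cropToCircle(const cv::Mat &frame, cv::Point circleCenter,
                     cv::Size cropSize = cv::Size(900, 900))
{
    cv::Rect roi(circleCenter.x - cropSize.width / 2,
                 circleCenter.y - cropSize.height / 2,
                 cropSize.width, cropSize.height);
    roi &= cv::Rect(0, 0, frame.cols, frame.rows); // clamp to the frame
    return frame(roi).clone();
}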

Stitching

OpenCV provides a very impressive built-in stitching library, but I decided to make things hard for myself and write my own stitching code. My main motivation was that the images need to wrap around circularly for a full 360-degree photo, and they need to be lined up carefully to keep the stereoscopic 3D correct. I wasn't able to figure out how to get the same positioning across different images, or a fully wrapped panorama, out of the built-in stitcher.

The code I wrote ended up working surprisingly well, but there's definitely room for improvement. I'll upload it to the files section in case anyone wants to check it out.

The algorithm works as follows:

1. The image from each camera is split vertically down the middle to get the left-eye and right-eye views.

2. After undistortion and spherical warping, the image is placed on the 360 photo canvas. It can currently be moved around with WASD, or an X and Y offset can be entered manually.

3. Once the image has been placed correctly and lined up with the adjacent image, the overlapping area of each image is cut out into its own Mat.

4. The absolute pixel intensity difference between these two Mats is calculated and then blurred (a box blur in the current code). A for() loop then iterates through each row of the difference, as shown in the code snippet below:

Mat overlap_diff;
absdiff(thisoverlap_mat, lastoverlap_mat, overlap_diff);
Mat overlap_blurred;
Size blursize;
blursize.width = 60;
blursize.height = 60;
blur(overlap_diff, overlap_blurred, blursize); // box blur to smooth the difference image

// Declarations added here for context; cut[] stores the cut column for each row
int min_index = 0;
int oldMin_index = 0;
int min_intensity;
int intensity;
Vec3b color;
std::vector<int> cut(overlap_blurred.rows);

for (int y = 0; y < overlap_blurred.rows; y++) {
    min_intensity = 10000;
    for (int x = 0; x < overlap_blurred.cols; x++) {
        color = overlap_blurred.at<Vec3b>(Point(x, y));
        // Base score: how different the two images are at this pixel
        // (the * 1 factors are tunable weights for each term)
        intensity = (color[0] + color[1] + color[2]) * 1;
        // Penalty for jumping away from the previous row's cut position
        if (y > 0) intensity += (abs(x - oldMin_index) * 1);
        // Penalty for straying from the center of the overlap region
        intensity += (abs(x - (overlap_blurred.cols / 2)) * 1);
        if (intensity < min_intensity) {
            min_index = x;
            min_intensity = intensity;
        }
    }
    // Highlight the pixel where the cut was made, for visualization purposes
    overlap_blurred.at<Vec3b>(Point(min_index, y))[0] = 255;
    oldMin_index = min_index;
    cut[y] = min_index;
}

Each pixel in each row is given a score, and the pixel with the lowest score in that row is chosen as the spot where the cut between the two images is placed. The blur ensures that the cut line isn't jagged and rough, which would cause strange artifacts in the output.

I also implemented very simple feathering along the cut line. The actual code is too ugly to post here, but there's a rough sketch of the idea below. This full stitching process is repeated for the left and right halves of each image, and the two stitched panoramas are stacked vertically to create a 360 3D photo! It's not a full 360 degrees yet because I only have 4 cameras, but I just need to order 4 more, hook them up, and I should have some glorious 360-degree 3D photos. If you have any sort of VR headset, check out the attached photo to see my current progress.
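For the curious, here's a cleaned-up sketch of what that feathering amounts to (my reconstruction, not the actual code): within a small band around the cut column in each row, the two images are blended with a linear ramp instead of switching abruptly. The function name and feather width are placeholders.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Sketch of simple feathering along the cut line. 'cut' is the per-row cut
// column found by the scoring loop above; 'feather' is the half-width of the
// blend band in pixels.
void featherAlongCut(const cv::Mat &left, const cv::Mat &right, cv::Mat &out,
                     const std::vector<int> &cut, int feather = 20)
{
    out.create(left.size(), left.type());
    for (int y = 0; y < out.rows; y++) {
        for (int x = 0; x < out.cols; x++) {
            // Weight ramps from 1 (all left image) down to 0 (all right image)
            float w = 0.5f + (cut[y] - x) / (2.0f * feather);
            w = std::max(0.0f, std::min(1.0f, w));
            const cv::Vec3b &l = left.at<cv::Vec3b>(y, x);
            const cv::Vec3b &r = right.at<cv::Vec3b>(y, x);
            for (int c = 0; c < 3; c++)
                out.at<cv::Vec3b>(y, x)[c] =
                    cv::saturate_cast<uchar>(w * l[c] + (1.0f - w) * r[c]);
        }
    }
}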

Next Steps

Obviously, the most important step is to get 4 more cameras and bring it up to a full 360 degrees. However, there are plenty of other things I'll be working on in addition.

Discussions

brijesh.kmar wrote 11/05/2017 at 10:03

Excellent work done.

Could you please share GitHub details?
