
Quamera Gen 2

Stereoscopic machine vision with integrated depth information

This project explores inspirations drawn from insect and mammalian vision, specifically the visual capabilities of mantises, bees, and dragonflies, and mammalian microsaccades. The system uses a field of overlapping cameras to create undistorted, high-fidelity omnidirectional RGBD images based on the fixed visual-field overlap between cameras. Information from the mobile hardware is processed through cloud-based systems, which apply a range of machine learning tools to identify elements within the image, generate high-quality scale estimates for those elements, and convert all of this into 3D models of the complete visual environment. This would go a long way toward blurring the boundary between real and virtual space.

26 cameras, 4 BeagleBones, 1 NVIDIA Jetson, device trees, C++, OpenCV, Python, TensorFlow, Keras, 2 Arduinos, 2 high-torque servos

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

The system uses cameras supplied by Arducam and a custom 3D-printed enclosure. The full rig has 26 cameras arranged in a rhombicuboctahedron, with 24 of them grouped into 3 banks of 8, each bank controlled by a BeagleBone Black. These rings provide 45-degree look-down, look-up, and equatorial views. The last two cameras are controlled by a 4th BeagleBone, which also orchestrates data flow into the Jetson. The Linux device tree has been modified to support dual I2C/SPI busses, and a C++ program handles both camera interactions and provisioning a basic REST interface for sending commands and getting data. An NVIDIA Jetson fuses all of the camera feeds together into a data model based on a hypersphere, which is then delivered to AWS for cloud processing.
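To make that concrete, here's a rough sketch of what driving one ring node over its REST interface might look like from the client side. The hostname, port, endpoint paths, and payloads are illustrative placeholders, not the actual API the C++ service exposes:

```python
# Hypothetical client for one camera bank's REST interface. The node
# address and endpoint paths below are placeholders for illustration;
# the real C++ service defines its own routes.
import cv2
import numpy as np
import requests

NODE = "http://beaglebone-equatorial.local:8080"  # placeholder address

def trigger_capture(camera_id):
    """Ask the node to latch a frame on one of its 8 cameras."""
    requests.post(f"{NODE}/camera/{camera_id}/capture").raise_for_status()

def fetch_frame(camera_id):
    """Pull the most recent frame back as an OpenCV image."""
    r = requests.get(f"{NODE}/camera/{camera_id}/frame")
    r.raise_for_status()
    return cv2.imdecode(np.frombuffer(r.content, np.uint8), cv2.IMREAD_COLOR)

# Snap the whole bank, then collect the frames.
for cam in range(8):
    trigger_capture(cam)
frames = [fetch_frame(cam) for cam in range(8)]
```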

Current status: the equatorial ring hardware (V1) is complete and operational, the ring driver software running on the BeagleBones is operational, and the image fusion system is operational to the point that it can process all linear integration solutions to establish optimal linear overlap. Current work covers the final non-Euclidean math that handles the non-linear parts of the image fusion, and construction of the hypersphere data model, which lets the system track both the overlapping fields of view and the fact that, while views overlap, the incidence angles within each overlap are all unique.
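As a simplified illustration of that linear overlap step (an approximation, not the actual solver), the fixed horizontal overlap between two neighboring cameras can be estimated by matching features between their views and taking a robust consensus on the shift:

```python
# Sketch: estimate the linear (horizontal) overlap between two adjacent
# cameras that share a fixed field-of-view overlap. Illustrative only.
import cv2
import numpy as np

def estimate_overlap_shift(img_left, img_right, max_matches=200):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # Horizontal displacement of each matched feature pair.
    shifts = [kp1[m.queryIdx].pt[0] - kp2[m.trainIdx].pt[0]
              for m in matches[:max_matches]]
    # The median is a robust estimate of the overlap shift in pixels.
    return float(np.median(shifts))
```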

AR0134_RR_D.PDF

AR0134 Register Map

Adobe Portable Document Format - 723.88 kB - 06/13/2017 at 10:57


AR0134_DG_C.PDF

AR0134 Developer Guide

Adobe Portable Document Format - 2.33 MB - 06/13/2017 at 10:56


AR0134_DS.pdf

Global-shutter image sensor available with an Arducam interface. An excellent global-shutter solution; bandwidth issues for this application are still being worked out.

Adobe Portable Document Format - 1.31 MB - 06/13/2017 at 10:47


AnglePlate.scad

Angled plate for upper and lower angled rings

scad - 6.66 kB - 05/29/2017 at 20:45


leveltshifter.sch

Level shifter schematic

sch - 145.54 kB - 05/16/2017 at 22:29



  • 2 × Cameras - your pick of interface; I primarily use Arducam, but also use Raspberry Pi cameras
  • 4 × Raspberry Pi Zeros - Realize the Deino, Enyo, and Pemphredo nodes
  • 1 × NVIDIA Jetson TX2 - Realizes the fabar node
  • 1 × Hardkernel Octa - Master Control Program ;-) Presents the unified high-level view and access to internals
  • 1 × Raspberry Pi 3 - Hosts the 4 Raspberry Pi Zeros via a ClusterHAT


  • It's been a long strange trip

    Mark Mullin • 12/14/2018 at 00:35 • 0 comments

    However, V3 is now operational. Stereo vision, LIDAR, integrated PID control of yaw and pitch, yadda, yadda, yadda. I have decided to open source the entire system, in spite of my own reservations and frantic advice to the contrary. After all, if you want to pry open Pandora's box, you might as well not pussyfoot around.


    Here's a teaser - this is a vertical LIDAR/imaging scan cycle of the full camera head. Images, GitHub links, etc., will follow over the holiday season.

  • OK, we've got results, and they were good!

    Mark Mullin • 05/28/2018 at 18:26 • 0 comments

    So of course the first two write-ups went to the old Google Tango community for old friends, and to LinkedIn, because this is business, at least to some degree.

    Over the coming weeks I'll be updating the project information here and providing both general architectural details of the system as a whole and more specific details for those who wish to take advantage of the fundamental camera management system that supports image rectification. The former depends on some work I'm not disclosing publicly; the latter is completely backed by the open-source Dantalion and Procell projects and should be directly replicable, or I will have a very stern talk with them.

    LinkedIn

    Tango Board  (vaguely more technical, and much less formal)

    Final Output Vis

  • At the end, you're always trying to pound the back side of an elephant, through a knothole, in 5 minutes.....

    Mark Mullin • 05/03/2018 at 23:10 • 0 comments

    Generation 2 is much smarter than Gen 1. Also meaner and more stubborn. For the moment, here's a pic of the working system architecture.

  • A *lot* has been going on behind the scenes

    Mark Mullin • 03/20/2018 at 00:14 • 0 comments

    So, we're going to do machine vision with low-cost imagers, and by machine vision we mean integrated stereo depth. Hmmmm. Houston, there's a lot of noise in this data :-( Hence the long hiatus, whilst a genetic algorithm was used to solve the problem of exactly which images might be good calibration images (a sketch of the idea follows below). Soon, a description of the new stereo camera assembly will be arriving, and that one should (hopefully) spin up quicker. And it's a hell of a lot smarter than G1.

    Here's a poster showing the results of a successful stereo fusion to depth integration

    Here's a movie showing the genetic algorithm solving image selection for calibration
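    In spirit, the algorithm looks like the toy sketch below: a genome is a subset of candidate checkerboard images, and fitness is OpenCV's RMS reprojection error when calibrating on that subset. Population size, operators, and rates here are illustrative stand-ins, not the values actually used:

```python
# Toy genetic algorithm for picking good calibration images. obj_pts and
# img_pts are the per-image checkerboard correspondences you'd normally
# feed to cv2.calibrateCamera; everything else here is illustrative.
import random
import cv2

def fitness(subset, obj_pts, img_pts, image_size):
    """Lower RMS reprojection error means a better image subset."""
    rms, *_ = cv2.calibrateCamera([obj_pts[i] for i in subset],
                                  [img_pts[i] for i in subset],
                                  image_size, None, None)
    return rms

def evolve(n_images, obj_pts, img_pts, image_size,
           pop_size=20, subset_size=15, generations=30):
    pop = [random.sample(range(n_images), subset_size) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, obj_pts, img_pts, image_size))
        survivors = pop[:pop_size // 2]          # keep the better half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            pool = list(set(a) | set(b))         # crossover: merge two parents
            child = random.sample(pool, min(subset_size, len(pool)))
            if random.random() < 0.3:            # mutation: swap in a fresh image
                outside = [i for i in range(n_images) if i not in child]
                if outside:
                    child[random.randrange(len(child))] = random.choice(outside)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: fitness(s, obj_pts, img_pts, image_size))
```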

  • Stereo rectification trying to exit development

    Mark Mullin • 01/28/2018 at 01:28 • 0 comments

    I may have been quiet here, but much has been going on on the GitHub side of the house. Refining my previous claims, the second generation will consist of an as-yet-undetermined number of stereo camera pairs on elephant trunks. Before worrying about grander things, one needs a working stereo camera.

    The patient is still on the table, wires dangle out everywhere, and organs keep getting swapped in and out. That said, here's a zoomed picture of the system's current ability to rectify a stereo image pair in order to do successful depth measurement.

    This shows a high-delta visual region AFTER (mostly) successful stereo rectification, i.e. sliding the images around so everything aligns just right. Once the images have been aligned vertically, this is the sweet spot where the system has its strongest depth measurement ability: it measures distance via the horizontal offset between equivalent points in the two images.

    In short, with the depth map that can be derived from these pictures, this is how you keep your robot from bumping into things.  :-)
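    To put a number on "measuring the horizontal offset": for a rectified pair, depth at a pixel is Z = f * B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity in pixels. A minimal OpenCV sketch of that step, with placeholder matcher settings rather than tuned ones:

```python
# Disparity-to-depth on an already-rectified grayscale pair. The SGBM
# parameters are placeholders; real values depend on the rig.
import cv2
import numpy as np

def depth_map(rect_left, rect_right, focal_px, baseline_m):
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5)
    # OpenCV returns fixed-point disparity scaled by 16.
    disp = sgbm.compute(rect_left, rect_right).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan                 # mask invalid / unmatched pixels
    return focal_px * baseline_m / disp      # metric depth, Z = f * B / d
```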

    All the unique code to do this is on, or will soon be on, GitHub. I'm working on documenting the plethora of library and external system dependencies, as well as making an OS image available in case I missed anything. None of this is a secret, and much that is good here comes from others, so if I can help light the way, it's in my interests too.

  • Baseline G2 Beaglebone Control Library

    Mark Mullin • 11/18/2017 at 14:54 • 0 comments

    OK, Generation 2 has officially commenced. A number of the prior libraries on GitHub have been archived, as they are now superseded by Dantalion, the integrated first-level executive that runs on the BeagleBones and directly controls the cameras. The most significant change is that each BeagleBone now controls a single pair of cameras, and OpenCV is now part of the operating environment at this level. You can find Dantalion here.
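    For flavor, the per-node job amounts to something like the loop below. Dantalion itself is C++; this Python sketch, with placeholder device indices, just shows the shape of it:

```python
# Grab both cameras of a pair as close together in time as possible,
# then do first-level OpenCV work locally. Device indices 0 and 1 are
# placeholders for the node's actual camera pair.
import cv2

cap_l, cap_r = cv2.VideoCapture(0), cv2.VideoCapture(1)
while True:
    cap_l.grab(); cap_r.grab()               # latch both sensors back to back
    ok_l, frame_l = cap_l.retrieve()
    ok_r, frame_r = cap_r.retrieve()
    if not (ok_l and ok_r):
        break
    gray_l = cv2.cvtColor(frame_l, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(frame_r, cv2.COLOR_BGR2GRAY)
    # ... hand the pair off to rectification / the upstream node ...
```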

  • Noise, noise, and damn noise

    Mark Mullin • 10/14/2017 at 13:56 • 0 comments

    After a month and a half of solid misery, I'm having to change the design. The original design had 4 cameras operating on a shared SPI bus, meaning each BeagleBone controlled 8 cameras over two busses. Unfortunately, this led to an analog nightmare of noisy circuits and corrupted data. I'm a software guy; I did what I could, and decided a strategy change was better than falling down the rabbit hole of analog EMI and other such mental anguish. Right now, I am firmly of the opinion that Kirchhoff can go straight to hell. :-)

    So, the design is being reworked to drive a single camera off each SPI bus, i.e. there will be 12 BeagleBones driving 24 cameras. This is actually a good thing: indications from the downstream processors were that they'd be a lot happier if more and better work were done upstream on the images, and we've now got enough spare cycles to dump OpenCV onto the bones.
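    As one example of what 'more and better work upstream' can mean, each bone can undistort and lightly denoise its frames before they ever leave the node, so the downstream fusion sees cleaner input. The intrinsics and distortion values below are placeholders, not real calibration data:

```python
# Per-bone preprocessing sketch: undistort with (placeholder) calibration
# values, then knock down sensor noise before shipping the frame upstream.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 800.0],             # placeholder intrinsics
              [0.0, 800.0, 600.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.12, 0.0, 0.0, 0.0])  # placeholder distortion

def preprocess(frame):
    frame = cv2.undistort(frame, K, dist)
    return cv2.medianBlur(frame, 3)            # cheap noise suppression
```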

    So stay tuned, next-generation rings are being ordered and built out. Indications are that real-time video will be available at lower resolutions, and that the max resolution of 1600x1200 still comes in at a solid 3 frames/sec.

    Oh, and I would have tried with Raspberry Pi Zeros, but apparently you can only order one at a time. I elected to completely remove all Raspberry Pis from every aspect of the project, because I think the vendors are playing games. I can get bones, I trust bones, and that leads to a nice uniform environment.

  • I'll be damned -- it's running

    Mark Mullin • 08/24/2017 at 19:19 • 0 comments

    Meet B.O.B., the robotic computer eyeball.

  • A bit of history....

    Mark Mullin • 08/24/2017 at 16:15 • 0 comments

    In getting ready for the exhibition at the Dover Mini Maker Faire, I rounded up the various suspicious ancestors in the Quamera project, along with Bad Old Bob (B.O.B.). Here's a short video of how an afternoon's fun turned into a complete monster.

  • We're up, sorta, barely...... fingers crossed

    Mark Mullin • 08/23/2017 at 19:37 • 0 comments

    OK, we have 3 problem-child cameras, and focusing everything is quite the pain, but we've got a monitoring grid up. Of course, if that's all I wanted to do, I'd just buy a spherical lens.

