3D vision robot

An autonomous robot with 3D vision based on two webcams and a Raspberry Pi.

I had the idea of building a real 3D vision system with two webcams. The robot uses OpenCV and a matching algorithm to measure the distance to objects. I use a Raspberry Pi 3 to control the motors through a motor driver module, which is connected over the Pi's serial port.
The robot is powered by a 7.4 V LiPo battery. The chassis is the DAGU DG012-RV tank rescue platform.

The vision system can measure up to 500 points in the captured frames. The distance range goes from 50 mm to 250 mm; everything farther away is reported as "-1" and filtered out.

The robot can be controlled by speech commands (via Jasper speech recognition) such as "go", "left", or "stop". You can also search Wikipedia and the IMDb for information.

To control the robot I use my Nano VM together with some libraries written in C. The main control program is written in my own language N.

The motor driver is connected to the serial port of the Raspberry Pi.
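The Pololu Qik 2s9v1 speaks a simple byte protocol over that serial link. As a rough sketch (not the author's Nano VM library), the "compact protocol" command bytes below are taken from Pololu's user's guide for the Qik 2s9v1; the helper names and the port device are illustrative, so double-check against the manual before use:

```python
# Sketch: driving a Pololu Qik 2s9v1 over the Pi's serial port.
# Command bytes are the compact-protocol values from the Qik 2s9v1
# user's guide; function names here are illustrative only.

QIK_M0_FORWARD = 0x88
QIK_M0_REVERSE = 0x8A
QIK_M1_FORWARD = 0x8C
QIK_M1_REVERSE = 0x8E

def motor_frame(command, speed):
    """Build a two-byte compact-protocol frame; speed is 0..127."""
    if not 0 <= speed <= 127:
        raise ValueError("speed must be 0..127")
    return bytes([command, speed])

def drive_forward(port, speed):
    """Send both motors forward (port is an open pyserial port)."""
    port.write(motor_frame(QIK_M0_FORWARD, speed))
    port.write(motor_frame(QIK_M1_FORWARD, speed))
```

With pyserial this would be used as `drive_forward(serial.Serial("/dev/ttyAMA0", 9600), 64)`; which UART device the Pi exposes depends on the Raspbian configuration.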

With OpenCV, the pictures from the two webcams are scanned for parts that appear in both images. An object in front of the robot shows up at a slightly different position in each picture, and the control program uses this difference (the disparity) to calculate the object's distance. The goal is to avoid driving into obstacles.
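For a parallel, calibrated stereo pair this disparity-to-distance step is plain triangulation: Z = f · B / d, with focal length f in pixels, baseline B, and disparity d. A minimal sketch, with made-up camera values and mirroring the project's 50 mm to 250 mm window:

```python
def distance_mm(disparity_px, focal_px, baseline_mm):
    """Distance to a point from its pixel disparity between the two images.

    Assumes parallel, calibrated cameras: Z = f * B / d.
    Returns -1 for out-of-range points, like the project's
    50 mm .. 250 mm window.
    """
    if disparity_px <= 0:
        return -1
    z = focal_px * baseline_mm / disparity_px
    return z if 50 <= z <= 250 else -1

# Example values (assumed, not measured on this robot):
# f = 700 px, baseline = 60 mm, disparity = 210 px -> 200.0 mm
```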

  • 1 × Raspberry Pi 3
  • 2 × Webcam
  • 1 × DAGU multi chassis
  • 1 × Pololu Qik 2s9v1 dual serial motor controller
  • 1 × DC converter to 5V for Raspberry Pi

View all 7 components

  • New control program

jay-t • 01/12/2018 at 12:49

Currently I am writing a new control program for the robot in my new language bra(et (bracket). The program runs on my new virtual machine L1VM. L1VM is a lot faster than Nano VM, which is the reason for the rewrite. The OpenCV object detection library is already hooked into the code, but I still need to write a basic program and have done no real drive tests so far.

    The L1VM virtual machine is on GitHub: L1VM - fast virtual machine

Programs running on L1VM are 6-7 times faster than on the Nano VM! The L1VM binary is really small: only about 28 KB in the x86_64 Linux version. The VM has 60 opcodes and two interrupts (with up to 255 possible operations). It can run modules for SDL graphics, math operations, file I/O and more.

The modules are very basic; I extend them whenever I need something new.

The bra(et language also allows inline assembly code, which is sometimes needed for things that are not yet possible in bra(et itself.

    The full source code for L1VM is available on GitHub. Any feedback is welcome.

  • Speech recognition with Jasper

jay-t • 08/30/2017 at 03:19

I finally got speech recognition with Jasper working. I can say commands like "go", "left" or "stop" and the robot will drive or turn accordingly. With the Jasper modules I can also start a search on Wikipedia or the IMDb movie database.

I have also begun writing a little chat module for Jasper, so someone can ask the robot something and get an answer.

I use the webcam's microphone for speech input and had to set the recording level very low.

You have to turn it down until Jasper reports "no disturbance detected" during silence. Then no "false" words are detected when it is quiet.
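Jasper commands like these live in small modules that follow its WORDS / isValid / handle convention. As a hedged sketch (the regex, phrases, and motor hook are my own illustration, not the robot's actual module):

```python
# Sketch of a Jasper-style command module (WORDS / isValid / handle).
# The motor-driver call is a placeholder comment only.

import re

WORDS = ["GO", "LEFT", "RIGHT", "STOP"]

def isValid(text):
    """True if the transcribed text contains one of our commands."""
    return bool(re.search(r"\b(go|left|right|stop)\b", text, re.IGNORECASE))

def handle(text, mic):
    """Map the recognized word to a drive action and confirm it aloud."""
    word = re.search(r"\b(go|left|right|stop)\b", text, re.IGNORECASE).group(1).lower()
    mic.say("Command %s" % word)
    # here the real program would send the matching motor-driver frames
```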

  • Learning the object threshold

jay-t • 08/14/2017 at 09:45

The vision system now generates a disparity map from the two webcam pictures (a 3D picture).

The robot adds the near parts of this 3D picture to my 3D point list and can therefore see "more" objects.
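Extracting those "near parts" boils down to keeping the pixels whose disparity is above a threshold, since a large disparity means a close point. A minimal sketch with illustrative values (plain lists instead of OpenCV matrices):

```python
def near_points(disparity_map, min_disparity):
    """Collect (row, col, disparity) for pixels closer than the threshold.

    disparity_map is a list of rows of disparity values; a large
    disparity means a nearby point, values <= 0 are invalid.
    """
    points = []
    for y, row in enumerate(disparity_map):
        for x, d in enumerate(row):
            if d >= min_disparity:
                points.append((y, x, d))
    return points
```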

If the number of objects within a set distance reaches a threshold, the robot detects an obstacle. But how high should this threshold be? I wrote some functions that learn it. I set the starting threshold very low, so at first the robot drives backwards to avoid objects. A "did drive backwards" counter notes this; when the counter reaches a set value, the robot increases the object threshold and resets the counter to zero.

After some time of "learning", the robot should drive forward and avoid running into objects. If the robot gets stuck somewhere, the object threshold is decreased. The accelerometer is used to detect whether an object was touched.
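The learning rule described above can be sketched in a few lines; the starting values are illustrative, not the robot's tuned numbers:

```python
class ThresholdLearner:
    """Adapt the obstacle threshold as described in the log:
    start low, count backward drives, raise the threshold after
    too many of them; lower it again when the robot gets stuck."""

    def __init__(self, threshold=2, backup_limit=5, step=1):
        self.threshold = threshold      # object count that means "obstacle"
        self.backup_limit = backup_limit
        self.step = step
        self.backups = 0                # the "did drive backwards" counter

    def is_obstacle(self, object_count):
        return object_count >= self.threshold

    def drove_backwards(self):
        self.backups += 1
        if self.backups >= self.backup_limit:
            self.threshold += self.step  # too cautious: raise threshold
            self.backups = 0             # and reset the counter

    def got_stuck(self):
        if self.threshold > 1:
            self.threshold -= self.step  # too bold: lower threshold
```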

  • Accelerometer read

jay-t • 07/21/2017 at 02:21

I changed my control program to log the accelerometer data before and during a move. This way I can see the difference between the values while standing still and while moving.

Later I will add code that checks whether the robot actually moves after it sends commands to the motor driver.

That way the robot can recognize when it is stuck somewhere and can't move forward.
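One simple way to turn that logged difference into a moving/stuck decision is to compare the accelerometer's variance against the standing-still baseline, since driving vibration raises it sharply. A sketch with an assumed tuning factor:

```python
def variance(samples):
    """Plain population variance of a list of readings."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def is_moving(accel_samples, still_variance, factor=4.0):
    """Guess whether the robot really moves: vibration while driving
    pushes the accelerometer variance well above the standing-still
    baseline. factor is an illustrative tuning value."""
    return variance(accel_samples) > factor * still_variance
```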

  • New Installation

jay-t • 06/17/2017 at 07:17

I installed OpenCV on a fresh Raspbian. My programs need the "extra modules", so I had to compile those too. I also reinstalled my VM, and the control program runs again.

Next I have to install Jasper for voice recognition. This time I made a backup of the SD card, so if anything goes wrong I am on the safe side.

I added a heatsink to the big chip on the Raspberry Pi 3. It got hot while compiling OpenCV; with the heatsink installed it is better now. My control programs can use up to 80% CPU time, so I need the cooling.

  • SD card filesystem crashed

jay-t • 06/16/2017 at 06:05

While testing some GPIO library code for my VM, the SD card filesystem crashed!

It was corrupted beyond repair. I used photorec to rescue some important files. Now I have to reinstall everything from scratch. That is not funny :(.

Unfortunately my last backup image was made a long time ago. Lesson learned: make backup images of the SD card more often!

I can't work on this project until everything is set up again, sorry!

  • MPU 6050 via I2C

jay-t • 06/13/2017 at 21:21

I wrote an I2C library for Nano. With a simple test program I can now read out the raw data of an MPU-6050 accelerometer. Next I have to add some code to the robot's main control program.
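The register-level reads behind this are small: wake the chip out of sleep and read six bytes of big-endian accelerometer data. A sketch using the register addresses from the MPU-6050 register map (the `smbus`-style `bus` object is assumed, not the author's Nano library):

```python
# Sketch: reading raw MPU-6050 accelerometer data over I2C.
# Register addresses are from the MPU-6050 register map; the bus
# object is expected to behave like smbus.SMBus.

MPU_ADDR     = 0x68  # default I2C address
PWR_MGMT_1   = 0x6B  # power management register; write 0 to wake
ACCEL_XOUT_H = 0x3B  # first of six accel data bytes (big-endian)

def to_signed16(high, low):
    """Combine two register bytes into a signed 16-bit value."""
    value = (high << 8) | low
    return value - 0x10000 if value & 0x8000 else value

def read_accel(bus):
    """Return the raw (x, y, z) accelerometer readings."""
    bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)         # wake from sleep
    data = bus.read_i2c_block_data(MPU_ADDR, ACCEL_XOUT_H, 6)
    return (to_signed16(data[0], data[1]),
            to_signed16(data[2], data[3]),
            to_signed16(data[4], data[5]))
```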


jay-t wrote 12/09/2017 at 20:31 point

Yes, I use the OpenCV library. An algorithm finds keypoints in both pictures; the function that computes the distances is my own code. I used a "calibration" method to calculate distances from 50 mm up to 250 mm. Everything farther away is filtered out. Maybe I will extend the calibration later.


Rud Merriam wrote 12/09/2017 at 20:07 point

Would you please provide more information on using the 2 web cams for 3d vision?  Are you using a library from OpenCV? 

