
StereoPi - DIY stereoscopic camera with Raspberry

For computer vision learners, drone and robot builders, AR/VR and 360 panorama experimenters. Open source hardware.

StereoPi has been successfully crowdfunded!
https://www.crowdsupply.com/virt2real/stereopi
Key features:
- Supports Raspberry Pi Compute Module 1 and 3 / 3 Lite
- Raspbian support out of the box
- Supports two cameras
- Small size
- Open source

StereoPi is designed to be a friendly tool for experiments and quick prototyping of all kinds of video-related projects. It will help you enjoy:
- Making 3D photos or recording stereoscopic video
- Experimenting with 3D video livestreams to the Internet or to 3D headsets like the Oculus Go
- Building computer vision systems and working with OpenCV
- Making robots with ROS onboard
- Prototyping 360-degree photo and video solutions
- Creating AR/VR projects
- Livestreaming from your drone or robot in stereo mode, or from two independent cameras

Some of our experiments:

Front view:

Top view:
Legend:
A: Boot mode jumper
B: 1st camera CSI connector
C: MicroUSB (for firmware upload)
D: Power connector (5V DC)
E: Power switch
F: MicroSD
G: Ethernet RJ45
H: 2 x USB connectors
I: HDMI out
J: 3rd USB connector pins
K: 2nd camera CSI connector
L: GPIO header
M: SO-DIMM connector for Pi Compute Module

Specifications:

Dimensions: 90 x 40 mm
Supported Pi: CM3, CM3 Lite, CM1
Camera: 2 x CSI (15-pin cable)
GPIO: 40-pin classic Raspberry Pi GPIO
USB: 2 x USB Type-A, 1 x USB on header pins
Ethernet: RJ45
Storage: MicroSD (for CM3 Lite)
Monitor: HDMI out
Power: 5V DC

  • A robot on StereoPi, part 1: fisheye cameras

    Eugene • 08/04/2019 at 20:03 • 0 comments

    The goal of this series of articles is to create a compact indoor robot that can navigate using stereo vision. As a platform, we'll use a small tracked Dagu RP5 chassis that we have. Here's what it looks like next to the StereoPi.

    A detailed TL;DR on fisheye camera calibration can be found in our blog.

  • 1.44 TFT Raspberry Pi HAT screen test

    Eugene • 06/21/2019 at 13:07 • 0 comments

    I've got one of these tiny funny screens.
    So, following this manual with this fix, I got this result:

    It just works! :-)

  • 3 more DIY guides for the StereoPi

    Eugene • 06/20/2019 at 13:58 • 0 comments

    We have 3 more guides now:

    1. The Art Of Stereoscopic Photo, part 1 (basics)

    2. The Art Of Stereoscopic Photo, part 2 (assembling a camera)

    3. Hacking Skybox on Oculus Go for StereoPi live streaming (just a hack)

  • OpenCV and Depth Map on StereoPi tutorial

    Eugene • 04/09/2019 at 08:33 • 0 comments

    Today we’re pleased to share with you a series of Python examples for OpenCV development. This code works with either the StereoPi or the Raspberry Pi Development Board, as both support using two cameras simultaneously. Our ready-to-use code (and also Raspbian image) will help you every step of the way, from the first image capture to the Depth Map created via real-time video capture.

    Introduction

    We would like to emphasize that all of these examples are for those new to OpenCV and are not intended for production use. If you are an advanced OpenCV user and have worked with the Raspberry Pi before, you’ll know it’s better to use C/C++ (instead of Python) and to utilize the GPU for better performance. At the end of this article we’ll provide some notes regarding the various bottlenecks we experienced using Python.

    Hardware setup

    Here is our hardware setup:

    We used the StereoPi board with a Raspberry Pi Compute Module 3+, and connected two Raspberry Pi V1 cameras (based on the OV5647 sensor).

    Software used:

    The software installation process is beyond the scope of this article, but we have prepared a Raspbian image with all the software installed. Here is a link to our GitHub stereopi-tutorial repository.

    Notice

    All scripts support keystroke processing, and you can press the 'Q' key to stop them. If you use Ctrl+C to stop a script, it may break Python's interaction with the cameras. In this case, you will need to reboot the StereoPi.

    Step 1: Image Capture


    We use the 1_test.py script for this purpose. Open the console and go to our examples folder:

    cd stereopi-tutorial

    Console Command: 

    python 1_test.py

    After starting the script, you will see a preview window with the stereoscopic video. Pressing 'Q' will stop the process and save the last captured image, which the next scripts use for tuning the depth map parameters.

    This script allows you to check if your hardware is operational and helps you obtain your first stereoscopic picture.

    The following video shows how the first script works:

    Step 2: Collecting Images for Calibration


    In an ideal world, a perfect depth map requires two identical cameras with their optical, vertical and horizontal axes all parallel. In the real world, however, cameras differ and it's impossible to align them perfectly, so a software calibration method is used: you take multiple photos of a known object with the two cameras (in our case, a printed chessboard), and a special algorithm then analyzes these photos to find correction parameters.

    This script begins that process by capturing a series of chessboard photos for calibration. Before each photo, the script starts a five (5) second countdown, which is generally enough time to reposition the chessboard. Make sure it can be seen by both cameras, and hold it steady to avoid blurred photos. The default number of photos captured per series is 30.


    Console Command:

    python 2_chess_cycle.py

    The process:

    At the end, we have 30 stereoscopic photos saved in the /scenes folder.

    Step 3: Image Separation


    The third script, 3_pairs_cut.py, separates the captured photos into "left" and "right" images and saves them in the /pairs folder. This separation could be done on the fly, without saving, but the saved pairs are helpful for later experiments: you can keep image pairs from different capture series, process these images with your own code, or put images from another stereoscopic camera in this folder.
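Since the Pi delivers the stereo pair as a single combined frame, the separation itself boils down to array slicing. A minimal numpy sketch (the function name is hypothetical, and it assumes a side-by-side layout):

```python
import numpy as np

def split_stereo_pair(frame):
    """Split a side-by-side stereo frame into (left, right) halves."""
    half = frame.shape[1] // 2
    return frame[:, :half], frame[:, half:]

# Dummy 1280x480 side-by-side frame standing in for a captured photo:
pair = np.zeros((480, 1280, 3), dtype=np.uint8)
left, right = split_stereo_pair(pair)
# left.shape == right.shape == (480, 640, 3)
```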

    This script shows you every stereo pair before it's separated (and waits for a key press). This lets you find bad photos and remove them before the next...

    Read more »

  • Skype 3D and SLP Raspbian update

    Eugene • 03/04/2019 at 11:31 • 0 comments

    As you know, all our Starter and Deluxe kits will include a microSD card with a ready-to-use Raspbian image so you can repeat all of our livestream experiments right out of the box. We’ve been busy polishing existing features and adding new ones to this image. In this update, we’ll share with you some new features and say a few words about our experiments with Skype and 3D video.

    Latest StereoPi Livestream Playground (SLP) Image

    • Image size reduced from 5 GB to 860 MB
    • Video livestream to browser (2D and 3D)
    • Livestream to Android over USB cable (Android accessory support)
    • Bash console over web admin panel
    • File editor over admin panel
    • Access to video records over web admin panel
    • RTSP livestream support
    • MPEG-TS livestream support
    • Linux partition now takes 2 GB instead of 4 GB
    • FAT32 partition now created automatically on first boot
    • RPi 3B+ and CM3+ support (updated kernel)
    • Most settings are now in the /boot/stereopi.config file

    You can download the image file from one of these three mirrors:

    Full descriptions of all features will be added to the SLP section of our wiki in the coming days.

    Skype and 3D Video

    One of our new features is the ability to livestream MPEG-TS. We used this feature to livestream video from the StereoPi to OBS (Open Broadcaster Software) with the OBS-VirtualCam plugin installed. OBS creates a virtual webcam accessible to Skype. Here's a proof-of-concept demo, recorded by Sergey:

    And here is a screen capture of my iPhone and our first 3D Skype call:

    As the iOS screen recorder does not record sound, I added music to it.

    To get all these things to work, we used two tricks. First, we used the mic from a Logitech webcam connected to the same computer to provide the audio to Skype: OBS-VirtualCam cannot emulate a sound device, so sound from the StereoPi's microphone isn't available to Skype.

    Second, we avoided a one-second delay between audio and video (due to an internal OBS video buffer) by first streaming the video from the StereoPi to gstreamer on Windows and then pointing OBS at the gstreamer window as its video source, which resulted in a delay of only about 100 milliseconds.

    This test shows it is possible to use a stereoscopic livestream with a lot of common software, like Skype and other video-related programs, all without modification, since they already work with a traditional camera.

    If you want to discuss more features, please join this thread on the Raspberry Pi forum.

  • Wanna play with our Raspbian image?

    Eugene • 02/07/2019 at 11:38 • 0 comments

    If you have a classic Raspberry Pi with a camera, you can repeat all of our video livestream experiments: livestreaming to YouTube, Android and Oculus Go. You can also repeat our behind-the-scenes experiments with video livestreams to a Windows desktop, a Mac, or any RTMP server.

    Today we want to share our Raspbian image with you. We call it SLP (StereoPi Livestream Playground). It supports single-camera mode and also two-camera mode for the StereoPi.
    You can find the image, the Android application and a brief manual in our Wiki.

    Admin panel screenshot:

    Android application screenshot:

  • Our crowdfunding is now live!

    Eugene • 01/30/2019 at 20:04 • 0 comments

    We are pleased to announce the launch of the StereoPi campaign! :-)

    https://www.crowdsupply.com/virt2real/stereopi

  • Factory prototypes passed all tests

    Eugene • 12/25/2018 at 10:13 • 0 comments

    As we mentioned in our previous update, 3 weeks ago we started the first step of preparing for production at our chosen factory. Now we are glad to report that this first step is successfully complete!

    Here is what happened during these 3 weeks:

    • During the first week, the factory started PCB manufacturing and began buying components.
    • During the second week, the components were mounted on the equipment that will be used for batch production. At this step all components were mounted except some connectors. We received some photos at this stage:
    Read more »

  • You from 3rd person view: StereoPi + Oculus Go

    Eugene • 12/25/2018 at 10:08 • 0 comments

    A friend of mine hosts a VR club and asked me if it's possible to create a 3rd person view in real life. So we decided to conduct another experiment using our StereoPi (a stereoscopic camera with a Raspberry Pi inside).

    Read more »

  • ROS: a simple depth map using StereoPi

    Eugene • 12/25/2018 at 09:55 • 0 comments

    If you use ROS when creating robots, then you probably know that it supports the use of stereo cameras. For example, you can create a depth map of the visible field of view, or make a point cloud. I began to wonder how easy it would be to use our StereoPi, a stereo camera with a Raspberry Pi inside, in ROS. Earlier, I'd tested and confirmed that a depth map is easily built using OpenCV; but I had never tried ROS, so I decided to conduct this new experiment and document my process of looking for a solution.

    Read more »
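For orientation, the whole depth-from-stereo idea (whether in ROS or in plain OpenCV) rests on one relation between disparity and distance:

```latex
Z = \frac{f \cdot B}{d}
```

where Z is the distance to a point, f the focal length in pixels, B the baseline between the two cameras (about 65 mm on the StereoPi), and d the disparity in pixels. Closer objects produce larger disparities, which is exactly what the depth map encodes.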

View all 11 project logs


Discussions

karla wrote 07/25/2019 at 18:33 point

Hello again,

I was reading the DIY ninjas page, and I read the section about USB client mode. I read that this feature allows you to turn on an RNDIS LAN, and I was wondering if you know whether this has been done. I am trying to successfully connect the StereoPi to my computer (e.g. over SSH) without needing an ethernet cable or a WiFi dongle.


Eugene wrote 07/26/2019 at 16:55 point

Hi karla! We have tested this mode, but your question gives me the idea that we need a step-by-step guide for this. I've put it on my to-do list and hope to get to it in a couple of weeks.

Have you tried simple methods like this one http://www.circuitbasics.com/raspberry-pi-zero-ethernet-gadget/ ?


karla wrote 07/28/2019 at 17:41 point

Thanks Eugene!

I will try this soon! And thanks for the upcoming step by step guide, I am sure that will be helpful for other folks in the future.


jamie-torres wrote 08/21/2019 at 19:07 point

Hello Eugene,

Any progress on this? I have been unable to get it working. I would appreciate a link once you write a guide!


[deleted]

[this comment has been deleted]

Eugene wrote 06/30/2019 at 21:40 point

Hello Karla,

You can find both pinouts in our Wiki here: https://wiki.stereopi.com/index.php?title=StereoPi_Specifications#USB_pins.2C_power_pins


karla wrote 06/30/2019 at 23:17 point

Hello Eugene, thanks for the quick reply. I can see the pinout for the USB pins and the power connector, but I was actually referring to the power switch (part E).


Eugene wrote 07/01/2019 at 13:35 point

Karla, maybe this image will clarify the power switch pinout: https://wiki.stereopi.com/images/1/1c/R15-stereopi-bottom.jpg
In the top-left corner you can see the power switch pins and PCB traces. There are 5 pins, and if we number them 1 to 5 (left to right) we have:

1 and 5 (leftmost and rightmost): switch shield

2 and 3: the two ends of the power line; power is on when they are connected

4: not connected


roger wrote 05/12/2019 at 16:03 point

Is there any support for PoE on Ethernet?


Eugene wrote 05/13/2019 at 10:27 point

We did not add PoE support in this revision, as it increases the price and is not required by most StereoPi users. If this feature is requested by a lot of users, we can add it in a special StereoPi edition.


roger wrote 05/12/2019 at 15:59 point

Is there a way to break out and use the WiFi present on the CM3+ module?


Eugene wrote 05/13/2019 at 10:26 point

Unfortunately, the CM3+ has no WiFi onboard...


Swan wrote 04/24/2019 at 07:05 point

First of all, great project. The depth results look promising.

On to questions: Is there any reason why one would use a V1 vs a V2 raspberry pi camera for depth mapping with stereo pi? Have you looked at using a global shutter on each camera instead of rolling shutter?


Eugene wrote 04/29/2019 at 13:26 point

Swan, Raspbian and the Raspberry Pi support only 2 kinds of sensors out of the box (the OV5647 as V1 and the Sony IMX219 as V2). There are some experiments with support for other sensors (including global shutter) on the Pi forum, but this is a game for hardware ninjas only. :-)


Swan wrote 04/29/2019 at 14:16 point

I see, for sure. Have you noticed better depth results using one of the two sensors over the other or are they largely the same?


Eugene wrote 04/30/2019 at 08:54 point

@Swan Hackaday's comment logic does not allow me to post a direct reply to your next question, so I edited my previous answer.

========

I see, for sure. Have you noticed better depth results using one of the two sensors over the other or are they largely the same?

========

No, there is no difference if you use pinhole optics. The main point of using V1 cameras is that modules with wide-angle optics are available on the market.


wahib mir wrote 03/24/2019 at 19:13 point

Hey Eugene,

I love the product and see a great future for it. I'm a student of Electrical Engineering and I'm working on a project in which I have to do image processing. I'd like to know the processing speed of the StereoPi. As the regular RPi B+ is 1.4 GHz, I need something faster so that the RPi doesn't lag during image recognition and processing. I need your help. Thanks.


Eugene wrote 03/25/2019 at 13:47 point

Hello Wahib,

StereoPi uses a Raspberry Pi Compute Module inside. If you use the latest Compute Module 3+, you have the same CPU as on the RPi 3B+, so you will have about the same speed (maybe a bit lower, as the peak frequencies are not the same). The only way to obtain more performance is to optimize your code, and also to use GPU acceleration (at the MMAL level) for image processing.


MORE PCB wrote 02/07/2019 at 08:34 point

Good boards 


EngineerAllen wrote 01/30/2019 at 02:35 point

I swear this project's been on my to-do list for half my life


Eugene wrote 01/30/2019 at 05:19 point

So now you can use the other half to do projects with it! :-)


Travis Collins wrote 01/20/2019 at 01:56 point

Love this project!

Can you describe the picture resolutions and frame rates you're using in the "3rd person view of yourself" demo? The Sony IMX219 is an ~8 MP camera, but I assume this Raspberry Pi platform has a capture resolution limit much lower than 2x that.

Can you share the Oculus Go real-time streaming source code?


Eugene wrote 01/23/2019 at 18:34 point

For the 3rd person view we used 1280x720 at 42 frames per second.

Yes, the IMX219 supports higher resolutions, but we found this a good balance between resolution and framerate.

As for sharing the code: we are preparing the ready-to-use Raspbian image we used for all our livestream-based projects. I think next week we will open it to the public and invite all enthusiasts to test it. Then you will be able to see all the code and play with it. By the way, this image can be used on a classic Raspberry Pi with a single camera (and stereoscopic mode disabled).

Next week will be busy for us, as we take our last steps toward the crowdfunding campaign launch.


michaelturner681 wrote 01/08/2019 at 11:39 point

Can you give us a clue as to when these boards will be available to buy? I'm currently doing a computer vision project for my final year at university in the UK, and this board is exactly the kind of thing I could use, as I wish to use stereo vision. My biggest issue is getting capable cameras to connect via USB, as I haven't found any adapter boards to go from the ribbon cable to the USB ports of my single board computer.


Eugene wrote 01/11/2019 at 14:28 point

At the moment we are making final preparations for the crowdfunding campaign launch. In the best case it will start next week; in the worst case, a bit later. I estimate StereoPi availability as February-March. Right now we have some prototypes from the first test batch (20 pcs), but they are several times more expensive because of the small batch quantity.


MatYay wrote 01/07/2019 at 19:23 point

Hi, can you tell me whether you used a Gigabit Ethernet chip or just a 100 Mbit one?


Eugene wrote 01/11/2019 at 14:24 point

We use 100 Mbit. 

The USB/LAN chip on our board is connected to the CM3's USB (on the Broadcom SoC), which is USB 2.0 and has a theoretical bandwidth limit of 480 Mbit/s. This bandwidth is shared between the LAN and all connected USB devices. So installing Gigabit looks strange, as we would be able to use only 50% of its speed (and in the real world it is ~25%, not 50%). Installing a USB 3.0 Gigabit chip here would be good marketing, but bad engineering. And we are engineers, not marketers, and prefer balanced solutions :-)


benoitperron wrote 01/07/2019 at 12:36 point

Is it possible to use the StereoPi as a hub to access the two cameras individually (i.e. streaming each camera on a separate port) and bypass all the stereo stuff you have worked so hard to create? :-)


Eugene wrote 01/07/2019 at 13:00 point

Yes, you can. We use this mode for streaming front-view and rear-view video from a drone: two individual cameras, two independent livestreams. Just take into account that there is a single h264 encoder onboard, so you will not be able to encode 2 Full HD h264 streams (but you can do 2 x HD 1280x720 h264 streams with this encoder). You also have an independent MJPEG encoder and other goodies :-)

UPD: Let me answer a question from DIYdrones here: you can use one Pi camera and one HDMI->CSI adapter to connect, for example, a GoPro camera. Then you have two independent streams from two optical devices.


David Sykes wrote 01/04/2019 at 20:15 point

How are the cameras synchronised ?

What is the average mis-sync in msec?

Thanks.


Eugene wrote 01/07/2019 at 12:55 point

StereoPi uses the sync implemented in the Raspbian kernel. It is software sync, not genlock. In our experiments we were unable to catch any sync issues in 3D video perception or in depth map building.

Here are some answers to synchronisation questions from user 6by9 (it was he who implemented the stereoscopic mode in the Raspbian kernel): https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=85012#p612743


David Sykes wrote 01/07/2019 at 18:40 point

Thanks.

So, in effect, does the software 'press' start on one camera and then on the second camera?

Unlike most cameras, you would think simple board cameras have virtually no further preparatory work to do and start capturing frames immediately.

Do you have a link to that part of the code ?

The only way to really measure sync is by using a CRT:

https://www.3dtv.at/Knowhow/Synctest_en.aspx


With Canon point-and-shoot/bridge cameras (including EOS M3 and M10) I have been able to achieve a sync error of a fraction of a msec.

If a CRT test is not possible, it would be useful to see short video clips of fountains, waterfalls or flowing water with lots of water droplets flying around.

It would be very useful if the VSYNC signal was accessible on a test point so that they could be compared on a 'scope.

Is a diagram of the 15-way CSI connector available?

I have previously powered-up two board cameras at the same time and added a small capacitor across the power-up capacitor of the 'faster' camera to initially bring the VSYNC signals into sync.

The 65mm camera separation is sensible, but strictly only applies when the effective focal length is about 40 to 50mm.

I would be very interested in a configuration where the cameras are as close as possible for macro photography.

Is that possible ?


Bruno Verachten wrote 01/01/2019 at 18:01 point

Any idea of the price tag? Do you think this would work with the HDMI2CSI converter?
https://auvidea.com/b102-hdmi-to-csi-2-bridge-22-pin-fpc/


Eugene wrote 01/01/2019 at 22:01 point

The crowdfunding price is about $70 for the board itself and about $120 for a set with a Compute Module and two V1 cameras.

Yes, the board supports the Auvidea HDMI2CSI converter, but not in dual mode; that is, you cannot use two such converters. It is not a StereoPi fault, but an absence of support in the Pi kernel. I asked user 6by9 on the Pi forum; here's his answer: https://www.raspberrypi.org/forums/viewtopic.php?f=38&t=120702&p=1108531#p1108448


Bruno Verachten wrote 01/02/2019 at 07:07 point

Thanks a lot. I am not interested in having two HDMI2CSI converters, just one, with a V2 camera on the other CSI port. It looks like it would work.

http://www.arducam.com/raspberry-pi-camera-rev-c-improves-optical-performance/


Guido wrote 01/04/2019 at 08:57 point

When is the crowdfunding going to launch?

Awesome project!

