Luxonis DepthAI

Spatial AI Meets Embedded Systems

Machine learning (ML) based computer vision (CV) is incredibly powerful... but when you go to use it to interact with the physical world, it can be incredibly frustrating and limiting.

This is because, so far, ML-based computer vision has been stuck in 2D, particularly when it comes to Edge or Embedded AI. DepthAI brings Spatial AI - unleashing the power of machine learning into 3 dimensions.

Real-time 3D results of what an object is, and where it is in x, y, and z relative to the camera - on your embedded platform (SPI interface), your Raspberry Pi (USB), or your Linux/Mac/Windows computer. With convenient (and free!) Google Colab notebooks for training on your objects of interest.

With open-source hardware and software, integration and modification for your prototype and product is painless and low risk: https://github.com/luxonis

We have launched our KickStarter Campaign!

Back our campaign today!

https://www.kickstarter.com/projects/opencv/opencv-ai-kit?ref=card


The Why

  • There’s an epidemic in the US of injuries and deaths of people who ride bikes
  • The majority of cases involve distracted driving caused by smartphones (social media, texting, e-mailing, etc.)
  • We set out to try to make people safer on bicycles in the US
    • We’re technologists
    • Focused on AI/ML/Embedded
    • So we’re seeing if we can make a technology solution

Commute Guardian

(If you'd like to read more about CommuteGuardian, see here)

DepthAI Platform

  • In prototyping the Commute Guardian, we realized how powerful the combination of Depth and AI is.
  • And we realized that no such embedded platform existed
  • So we built it.  And we're releasing it to the world through a Crowd Supply campaign, here

We want this power to be easily embeddable into products (including our own) in a variety of form-factors (yours and ours).  So we made a System on Module which exposes all the key interfaces through an easy-to-integrate 100-pin connector.  

Unlike existing USB or PCIE Myriad X modules, our DepthAI module exposes 3x MIPI camera connections (1x 4-lane, 2x 2-lane), which allows the Myriad X to receive data directly from the camera modules - unburdening the host.

The direct MIPI connections to the Myriad X remove the video data path from the host entirely.  In fact, this means the Myriad X can operate without a host at all.  Or it can operate with a host, leaving the host CPU completely unburdened, with all the vision and AI work done entirely on the DepthAI module/Myriad X.

This results in huge efficiency increases (and power reduction) while also reducing latency, increasing overall frame-rate, and allowing hardware blocks which were previously unusable to be leveraged.
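
To make the division of labor concrete, here is a minimal, hedged sketch of the host-side code when the whole capture + inference pipeline runs on the DepthAI module.  It uses the depthai Python API in its current (Gen2-style) form, which post-dates this writeup; the model blob path is a placeholder:

    import depthai as dai

    # Define the whole pipeline up front; it runs entirely on the Myriad X.
    pipeline = dai.Pipeline()

    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)  # matches the NN input size
    cam.setInterleaved(False)

    nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
    nn.setBlobPath("mobilenet-ssd.blob")  # placeholder: a compiled model blob
    nn.setConfidenceThreshold(0.5)
    cam.preview.link(nn.input)  # camera feeds the NN directly, on-device

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("det")
    nn.out.link(xout.input)  # only small result packets cross the USB link

    with dai.Device(pipeline) as device:
        q = device.getOutputQueue("det", maxSize=4, blocking=False)
        while True:
            for d in q.get().detections:  # the host just consumes metadata
                print(d.label, round(d.confidence, 2))

The host never touches a video frame unless it asks for one, which is where the efficiency gains below come from.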

Take real-time object detection on the Myriad X interfaced with the Raspberry Pi 3B+ as an example:

Because of the data-path efficiencies of DepthAI vs. using an NCS2, the frame rate increases from 8 FPS to 25 FPS.

And most importantly, using this data path allows utilization of the following Myriad X hardware blocks, which are unusable with previous solutions:

This means that DepthAI is a full visual perception module - including 3D perception - and no longer just a neural processor, enabling real-time object localization in physical space, like below, but at 25 FPS instead of 3 FPS:
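
A hedged sketch of how that spatial localization looks in code: extend the pipeline above with the stereo pair and swap in the spatial flavor of the detection network, and each detection carries XYZ coordinates (node names per today's depthai Python API; the blob path remains a placeholder):

    import depthai as dai

    pipeline = dai.Pipeline()

    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)
    cam.setInterleaved(False)

    # The two global-shutter cameras produce disparity depth on-device
    left = pipeline.create(dai.node.MonoCamera)
    left.setBoardSocket(dai.CameraBoardSocket.LEFT)
    right = pipeline.create(dai.node.MonoCamera)
    right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

    stereo = pipeline.create(dai.node.StereoDepth)
    stereo.setDepthAlign(dai.CameraBoardSocket.RGB)  # align depth to the color camera
    left.out.link(stereo.left)
    right.out.link(stereo.right)

    # Spatial detection network fuses depth with the object detector
    nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
    nn.setBlobPath("mobilenet-ssd.blob")  # placeholder
    nn.setConfidenceThreshold(0.5)
    cam.preview.link(nn.input)
    stereo.depth.link(nn.inputDepth)

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("det")
    nn.out.link(xout.input)

    with dai.Device(pipeline) as device:
        q = device.getOutputQueue("det", maxSize=4, blocking=False)
        while True:
            for d in q.get().detections:
                c = d.spatialCoordinates  # millimeters, relative to the camera
                print(d.label, f"x={c.x:.0f} y={c.y:.0f} z={c.z:.0f}")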


And to allow you to use this power right away, with your favorite OS/platform, we made 3x editions of DepthAI, which serve both as reference designs for integrating the DepthAI module into your own custom hardware and as ready-to-use platforms that can be used as-is to solve your computer vision problems:
  1. Raspberry Pi Compute Module Edition - with 3x integrated cameras
  2. Raspberry Pi HAT Edition - with 3x modular cameras
  3. USB3 Edition - compatible with Yocto, Debian, Ubuntu, Mac OS X and Windows 10

All of the above reference designs will be released should our CrowdSupply campaign be successfully funded.  So if you'd like to leverage these designs for your own products, or if you'd like to use this hardware directly, please support our CrowdSupply campaign:

https://www.crowdsupply.com/luxonis/depthai

Development Steps

The above is the result of a lot of background work to get familiar with the Myriad X and to architect and iterate the System on Module definition, form-factor, interfaces, and manufacturability.  Below are some of the steps involved in that process.

The first thing...


  • 1 × Intel Movidius Myriad X Vision/AI Processor
  • 1 × CM3B+ Raspberry Pi Compute Module 3B+
  • 2 × OV9282 Global Shutter camera modules optimized for disparity depth
  • 1 × IMX378 Nice high-resolution 12MP camera module that supports 12MP stills

  • Announcing OpenCV AI Kit (OAK)

    Brandon • 07/14/2020 at 13:48 • 0 comments

    Today, our team is excited to release to you the OpenCV AI Kit, OAK: a modular, open-source ecosystem composed of MIT-licensed hardware, software, and AI training that allows you to embed Spatial AI and CV super-powers into your product.

    And best of all, you can buy this complete solution today and integrate it into your product tomorrow.

    Back our campaign today!

    https://www.kickstarter.com/projects/opencv/opencv-ai-kit?ref=card

  • megaAI CrowdSupply Campaign Production Batch Complete

    Brandon • 07/09/2020 at 19:58 • 0 comments

    Our production run of the megaAI CrowdSupply campaign is complete and now shipping to us:

    We had 97% yield on the first round of testing and 99% yield after rework and retest of the 3% that had issues in the first testing.

  • Pure Embedded Variant of DepthAI

    Brandon • 07/02/2020 at 15:34 • 0 comments

    Hi DepthAI Backers and Fans,

    So we've built a proof of concept of an SPI-only interface for DepthAI, and it's working well (the proof of concept was done with an MSP430 and a Raspberry Pi over SPI).

    So to make it easier for engineers to leverage this power (and also for us internally to develop it), we're making a complete hardware and software/AI reference design for the ESP32, with the primary interface between DepthAI and the ESP32 being SPI.

    The design will still have USB3C for DepthAI, which will allow you to see live high-bandwidth results/etc. on a computer while integrating/debugging communication to your ESP32 code (both running in parallel, which will be nice for debugging).  Similarly, the ESP32 will have an onboard UART-USB converter and micro-USB connector for programming/interfacing w/ the ESP32 for easy development/debug.
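
    As a rough sketch of the DepthAI side of that SPI link: today's depthai Python API includes an SPIOut node; the blob path and bus ID below are placeholder assumptions, not the final reference design:

        import depthai as dai

        pipeline = dai.Pipeline()

        cam = pipeline.create(dai.node.ColorCamera)
        cam.setPreviewSize(300, 300)
        cam.setInterleaved(False)

        nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
        nn.setBlobPath("mobilenet-ssd.blob")  # placeholder
        cam.preview.link(nn.input)

        # Instead of (or in addition to) XLink/USB, stream results out over
        # SPI, where a microcontroller such as the ESP32 can consume them.
        spi = pipeline.create(dai.node.SPIOut)
        spi.setStreamName("spimetaout")
        spi.setBusId(0)  # assumed bus ID
        nn.out.link(spi.input)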

    For details and progress on the hardware effort, see [here]; to check out the SPI-support enhancement to the DepthAI API, see [here].

    In short here's the concept:

    And here's a first cut at the placement:

    And please let us know if you have any thoughts/comments/questions on this design!

    Best,

    Brandon & The Luxonis Team

  • New Spatial AI Capabilities & Multi-Stage Inference

    Brandon • 06/17/2020 at 19:53 • 0 comments

    We have a super-interesting feature-set coming to DepthAI:

    • 3D feature localization (e.g. finding facial features) in physical space
    • Parallel-inference-based 3D object localization
    • Two-stage neural inference support

    And all of these are initially working (in this PR: https://github.com/luxonis/depthai/pull/94#issuecomment-645416719).

    So, on to the details of how this works:

    We are actually implementing a feature that allows you to run neural inference on either or both of the grayscale cameras. 

    This sort of flow is ideal for finding the 3D location of small objects, shiny objects, or objects for which disparity depth might struggle to resolve the distance (the z-dimension), which is needed to get the 3D position (XYZ). So this now means DepthAI can be used in two modalities:

    1. As it's used now: the disparity depth results within the region of a detected object are used to re-project the XYZ location of the center of the object.
    2. Run the neural network in parallel on both left/right grayscale cameras, and the results are used to triangulate the location of features.

    An example where the second modality is extremely useful is finding the XYZ positions of facial landmarks, such as the eyes, nose, and corners of the mouth.

    Why is this useful for facial features?  For small features like these, the risk of disparity depth having a hole at that location goes up.  Even worse, for faces with glasses, the reflection off the glasses may throw the disparity depth calculation off (in fact, it might 'properly' give the depth result for the reflected object).

    When running the neural network in parallel, none of these issues exist: the network finds the eyes, nose, and mouth corners per image, then the disparity in pixels between the right and left results for each feature gives the z-dimension (depth = focal length × baseline / disparity, so depth is proportional to 1/disparity), and this is reprojected through the optics of the camera to get the full XYZ position of all of these features.
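
    To make that math concrete, here is a hedged numeric sketch of the triangulation, assuming rectified cameras and illustrative (made-up) camera parameters:

        FOCAL_PX = 860.0       # focal length in pixels (illustrative)
        BASELINE_M = 0.075     # 7.5 cm stereo baseline (illustrative)
        CX, CY = 640.0, 400.0  # principal point (illustrative)

        def landmark_xyz(u_left, v_left, u_right):
            """Pixel coords of one landmark in both rectified images -> XYZ in meters."""
            disparity = u_left - u_right           # pixels; same row when rectified
            z = FOCAL_PX * BASELINE_M / disparity  # depth = f * baseline / disparity
            x = (u_left - CX) * z / FOCAL_PX       # reproject through the camera optics
            y = (v_left - CY) * z / FOCAL_PX
            return x, y, z

        # e.g. an eye at u=700 in the left image and u=652 in the right image:
        print(landmark_xyz(700.0, 410.0, 652.0))  # -> about (0.094, 0.016, 1.344) m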

    And as you can see below, it works fine even w/ my quite-reflective anti-glare glasses:


    Thoughts?

    Cheers, 

    Brandon and the Luxonis Team

  • Raspberry Pi HQ Camera Works With DepthAI!

    Brandon • 06/10/2020 at 04:07 • 0 comments

    Hello everyone!

    So we have exciting news!  Over the weekend we wrote a driver for the IMX477 used in the Raspberry Pi HQ Camera.

    So now you can use the awesome new Raspberry Pi HQ camera with DepthAI FFC (here).  Below are some videos of it working right after we wrote the driver this weekend.


    Notice that it even worked w/ an extra long FFC cable!  ^

    More details on how to use it are here.  And remember DepthAI is open source, so you can even make your own adapter (or other DepthAI boards) from our Github here.

    And you can buy the adapter here: https://shop.luxonis.com/products/rpi-hq-camera-imx477-adapter-kit

    Cheers,

    Brandon & the Luxonis team

  • IR-only DepthAI

    Brandon • 06/03/2020 at 20:53 • 0 comments

    Hi DepthAI (and megaAI) fans!

    So we have a couple of customers who are interested in IR-only variants of the global-shutter cameras used for depth, so we made a quick variant of DepthAI with these.

    We actually just made adapter boards which plug directly into the BW1097 (here) by unplugging the existing onboard cameras.  We tested with this IR flashlight here.

    It's a bit hard to see, but you can tell the room is relatively dark in visible light, and the IR cameras pick up the IR light quite well.

    Cheers,

    The Luxonis Team

  • Fighting COVID-19!

    Brandon • 05/26/2020 at 19:35 • 0 comments

    Hello DepthAI Fans!

    We're super excited to share that Luxonis DepthAI and megaAI are being used to help fight COVID-19!

    A doctor cleaning medical equipment

    How? 

    To know where people are in relation to Violet, Akara's UV-cleaning robot. This allows Violet to know when people are present, and how far away they are, in real time - disabling its UV light when people are present.
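
    As a hedged illustration (not Akara's actual code), the host-side interlock can reduce to a few lines given spatial detections like those DepthAI produces; the distance threshold and the 'person' class index of the stock PASCAL-VOC mobilenet-ssd are assumptions:

        SAFE_DISTANCE_MM = 2000  # assumed safety threshold
        PERSON_LABEL = 15        # 'person' in the 20-class PASCAL VOC mobilenet-ssd

        def uv_light_allowed(detections):
            """Allow the UV lamp only when no person is detected within range."""
            return not any(
                d.label == PERSON_LABEL and d.spatialCoordinates.z < SAFE_DISTANCE_MM
                for d in detections
            )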

    Check out this article from Intel for more details about Violet, the UV-cleaning robot.

    We're excited to continue developing this effort. Specifically, DepthAI can be used to map which surfaces were cleaned and how well (i.e. how much UV energy was deposited on each surface).

    This would allow a full 3D map of what was cleaned, how clean it is, and which surfaces were missed.

    So in cases where objects in the room are blocking other surfaces, DepthAI would allow a map of the room showing which surfaces were blocked and therefore not able to be cleaned.

    Thanks,  
    The Luxonis Team  

  • megaAI is Live

    Brandon • 05/22/2020 at 16:23 • 0 comments

    Greetings Potential Backer,

    It is with great pleasure that we announce to you the immediate availability (for backing) of the brand-new Luxonis megaAI camera/AI board.


    It’s super-tiny but super-strong, capable of crushing 4K H.265 video with up to 4 Trillion Operations Per Second of AI/CV power. This single board powerhouse will come with a nice USB3 cable at all pledge levels that receive hardware. If you’re just interested in the project, or a good friend, you can pledge at the $20 level to get campaign updates and some other but currently secret cool stuff.

    We hope you’ll join us over the course of the campaign. If you want to be the first kid on your block to receive a megaAI unit, back at the Roadrunner pledge level - you’ll get your unit shipped before anyone else.

    https://www.crowdsupply.com/luxonis/megaai

    Thank you!

    -  The Luxonis Team

  • DepthAI Power Over Ethernet (PoE)

    Brandon • 04/24/2020 at 22:52 • 2 comments

    Our first cut at the Power over Ethernet Carrier Board for the new PCIE-capable DepthAI System on Module (SoM) just came in.  

    We haven't tested it yet, and the system on module (the BW2099) which powers these will arrive in ~3 weeks (it's HDI, so it's slower to fabricate and assemble).  We'll be testing this board standalone soon, while anxiously awaiting the new system on module.

    So the new system on module has additional features which enable PoE (Gigabit Ethernet) applications and other niceties:

     - PoE / Gigabit Ethernet

     - On-board 16GB eMMC for h.264/h.265 video storage and JPEG/etc.

     - uSD Slot

  • Open Source Spatial AI Hardware

    Brandon • 04/22/2020 at 20:46 • 0 comments

    Now you can integrate the power of DepthAI into your custom prototypes and products at the board level because...

    We open sourced the carrier boards for the DepthAI System on Module (SoM).  So now you can easily take this SoM and integrate it directly into your designs.

    Check out our Github here:

    https://github.com/luxonis/depthai-hardware

    So this covers all the hardware below:

Discussions

rand3289 wrote 06/05/2020 at 03:00

Awesome project!  However, 3D is not enough.  Use TIME - the fourth dimension.  I describe why and what that means in this short paper:  https://github.com/rand3289/PerceptionTime 


Brandon wrote 06/05/2020 at 03:13

Hi @rand3289 ,

Thanks!

Yes, time is very important in a lot of perception, whether it's integrating LSTM w/ YOLO, or using time to better estimate depth.

On the depth part, see here: https://www.reddit.com/r/MachineLearning/comments/gc2wo9/r_consistent_video_depth_estimation_siggraph_2020/

Which builds off of work from Google, which took advantage of time in the training data, but not in the network itself (which is the above improvement):

https://ai.googleblog.com/2019/05/moving-camera-moving-people-deep.html

On YOLO with an LSTM (so to take advantage of time), see here:
https://github.com/opencv/opencv/issues/15033

I think there's newer/better than that.  I know AlexeyAB and he shared that w/ me back in July 2019... so that's from a LONG time ago at the pace computer vision/AI moves.

We haven't investigated any of these on the platform... there's so much core platform work to be done, but many of them are probably runnable on the platform with some work/optimization.

Thoughts?

Thanks,

Brandon


Andrey V wrote 08/30/2019 at 17:53

I think it's too hard to get it to work IRL. If you pull it off, your LVL = 80 :)


Brandon wrote 08/30/2019 at 17:59

Heh.  Thanks.  It's definitely a hard project - but we're hoping it will provide a lot of value!


Andrey V wrote 08/30/2019 at 18:02

Good luck!!!


Brandon wrote 06/05/2020 at 03:05

Hi @Andrey V - so you can buy it IRL now: https://shop.luxonis.com/.  And it's open source with a whole ton of useful stuff.  To pick a random repo, check out some of our recent experiments with the platform here: https://github.com/luxonis/depthai-experiments


Brandon wrote 10/05/2020 at 16:02

Hi @Andrey V ,

So as an update, you can also now buy this as the OpenCV AI Kit: https://www.kickstarter.com/projects/opencv/opencv-ai-kit
It comes with neat aluminum enclosures with Gorilla-Glass Anti-Reflective Coatings.  

Cheers,

Brandon


Alan wrote 08/04/2019 at 00:40

Does your SOM break out the PCIe lanes on the Movidius? I was looking at the UP AI Core but they convert PCIe to USB on board.


Brandon wrote 08/04/2019 at 01:03

Great question, thanks.

It does not.  The SoM uses USB3 (or 2) to connect to the host.  Less power, fewer parts, and also works with hosts that don’t have PCIE or don’t want to tie up a PCIE slot for this.

On the DepthAI for Raspberry Pi version, there’s a USB hub chip on the carrier board, which is what allows the Pi Compute Module to talk to the Myriad X and also the outside world.

And yes the PCIE boards do convert from PCIE to USB on-board.

If you don’t mind me asking, why do you ask?  Do you need direct PCIE connection?

Thoughts?

Thanks again,

Brandon


Alan wrote 08/05/2019 at 05:47

I can't think of any reason right away, but it would be great to have as much of the IO exposed as possible.  I think at least the SPI and Ethernet should be exposed, because those would both be useful for someone who wanted to use the SOM as a standalone device.

P.S. What did you guys have to do in order to acquire the chips? I can't seem to find anywhere to order them online.


Brandon wrote 08/06/2019 at 22:35

Hey Alan,

For some reason I can't reply to your comment directly, so I'm replying here instead.  We don't know how to make Ethernet work yet from a firmware standpoint (and don't have a clear path to figuring it out), so we left it off of this SoM.

That said, we are making a SoM w/ an i.MX 8M alongside the Myriad X, so that would provide Ethernet and a whole slew of interfaces like USB3, WiFi, etc.

Thoughts?

Thanks,

Brandon


psykhon wrote 03/07/2019 at 12:19

Hi Brandon, awesome project! 

How hard was it to get the Myriad X chips? Can you share some info on how you did it? Price?


Tegwyn☠Twmffat wrote 02/01/2019 at 14:53

I checked out that link above - I wonder how their larger kit compares to the Jetson TX2 in terms of performance?

I realise performance is not everything, and the Intel model zoo is pretty useful. The Nvidia software seems to be a bit behind in that they only have bvlc_googlenet as an 'out of the box' solution for detection.

What do you think your price point will be for a single Myriad X carrier board? I'm presuming about $100?


Brandon wrote 02/01/2019 at 15:02

Great question!  So we've actually done a decent amount of stuff on the Tx2 as well.  The Myriad X, in terms of straight neural inference performance (e.g. object detection, semantic segmentation, etc.), is about the same as the Tx2.  The Myriad X neural engine is 1 TOPS, and the Tx2 peaks in ideal conditions at 2 TOPS, but from the thread below, it seems like in most conditions it's effectively 1 TOPS:

https://devtalk.nvidia.com/default/topic/1024825/cuda-programming-and-performance/jetson-tx2-performance/

But!  If your application is depth vision + neural inference, the Myriad X is equivalent to about 2 or 3 Jetson Tx2s, mainly because of the 16 SHAVE cores in the Myriad X, which together can do 6 cameras in 3 pairs of depth streams.

The neural inference part of the Myriad X is only 1 TOPS of the total 4 TOPS the device can do.  The remaining TOPS are for image-processing functions like depth vision.

So this board won't really even tax the Myriad X, as there will just be one depth stream.  That said, we can use the extra Myriad X 'head room' to run a fancier/more-processing-intensive depth calculation on just these 2 cameras - to produce a better set of depth information.


Tegwyn☠Twmffat wrote 01/31/2019 at 22:58

Hello Brandon! Does the Myriad X chip get put on the carrier board or does it stay in the USB stick?

If it goes on the board, how many of them?


Brandon wrote 02/01/2019 at 12:37

The Myriad X would be directly on the carrier board.  We could make versions with multiple Myriad X, for sure.  Is that of interest?  

These guys did that for their PCIE version:

https://www.crowdsupply.com/up/ai-core-x

I have 2 of those on order, by the way.  They're useful as well, for sure - just a different application, and not applicable for the Pi community (which is what this board should serve).

