Luxonis DepthAI

Spatial AI Meets Embedded Systems

Machine learning (ML) based computer vision (CV) is incredibly powerful... but when you go to use it to interact with the physical world, it can be incredibly frustrating and limiting.

This is because, so far, ML-based computer vision has been stuck in 2D, particularly when it comes to Edge or Embedded AI. DepthAI brings Spatial AI - unleashing the power of machine learning into 3 dimensions.

Real-time 3D results of what objects are, and where they are in x, y, and z relative to the camera - on your embedded platform (SPI interface), your Raspberry Pi (USB), or your Linux/Mac/Windows computer. With convenient (and free!) Google Colab notebooks for training on your objects of interest.

With open-source hardware and software, integration and modification for your prototype and product is painless and low-risk:

Our Crowd Supply Campaign is Live!

Back us by ordering DepthAI here:

The Why

  • There’s an epidemic in the US of injuries and deaths of people who ride bikes
  • The majority of cases involve distracted driving caused by smartphones (social media, texting, e-mailing, etc.)
  • We set out to try to make people safer on bicycles in the US
    • We’re technologists
    • Focused on AI/ML/Embedded
    • So we’re seeing if we can make a technology solution

Commute Guardian

(If you'd like to read more about CommuteGuardian, see here)

DepthAI Platform

  • In prototyping the Commute Guardian, we realized how powerful the combination of Depth and AI is.
  • And we realized that no such embedded platform existed
  • So we built it.  And we're releasing it to the world through a Crowd Supply campaign, here

We want this power to be easily embeddable into products (including our own) in a variety of form-factors (yours and ours).  So we made a System on Module which exposes all the key interfaces through an easy-to-integrate 100-pin connector.  

Unlike existing USB or PCIe Myriad X modules, our DepthAI module exposes 3x MIPI camera connections (1x 4-lane, 2x 2-lane), which allows the Myriad X to receive data directly from the camera modules - unburdening the host.

The direct MIPI connections to the Myriad X remove the video data path from the host entirely.  In fact, this means the Myriad X can operate without a host at all.  Or it can operate with a host, leaving the host CPU completely unburdened, with all the vision and AI work done entirely on the DepthAI module/Myriad X.

This results in huge efficiency increases (and power reduction) while also reducing latency, increasing overall frame-rate, and allowing hardware blocks which were previously unusable to be leveraged.

Take real-time object detection on the Myriad X, interfaced with the Raspberry Pi 3B+, as an example:

Because of the data-path efficiencies of DepthAI vs. using an NCS2, the frame rate increases from 8 FPS to 25 FPS.
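As a sanity check on that claim, the per-frame time budget and throughput gain work out as follows (plain Python; the two frame rates are the ones quoted above):

```python
# Per-frame time budget at the two frame rates quoted above.
ncs2_fps = 8      # Raspberry Pi 3B+ with an NCS2: host sits in the video data path
depthai_fps = 25  # DepthAI: cameras wired directly to the Myriad X over MIPI

ncs2_frame_ms = 1000 / ncs2_fps        # milliseconds available per frame
depthai_frame_ms = 1000 / depthai_fps
speedup = depthai_fps / ncs2_fps       # throughput gain from the shorter data path

print(f"{ncs2_frame_ms:.1f} ms -> {depthai_frame_ms:.1f} ms per frame "
      f"({speedup:.2f}x throughput)")
```

In other words, removing the host from the data path shrinks the per-frame budget from 125 ms to 40 ms, a bit over 3x.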

And most importantly, using this data path allows utilization of the following Myriad X hardware blocks, which are unusable with previous solutions:

This means that DepthAI is a full visual perception module - including 3D perception - and no longer just a neural processor, enabling real-time object localization in physical space, like below, but at 25 FPS instead of 3 FPS:
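The x, y, z localization described above comes from standard stereo geometry: depth from disparity, then a pinhole-model projection. A minimal sketch in Python - note that the baseline, field of view, and resolution below are illustrative assumptions for a global-shutter stereo pair, not calibration values for any particular DepthAI unit:

```python
import math

# Illustrative stereo-depth math behind the x/y/z localization described above.
# Baseline and HFOV are assumed values for a 1280x800 stereo pair, not the
# calibrated parameters of any real DepthAI board.
BASELINE_M = 0.075        # assumed distance between the two mono cameras
HFOV_DEG = 71.9           # assumed horizontal field of view
WIDTH_PX, HEIGHT_PX = 1280, 800

# Focal length in pixels, from the pinhole camera model.
FOCAL_PX = WIDTH_PX / (2 * math.tan(math.radians(HFOV_DEG) / 2))

def depth_m(disparity_px: float) -> float:
    """Depth from disparity: z = f * B / d."""
    return FOCAL_PX * BASELINE_M / disparity_px

def locate(u: float, v: float, disparity_px: float):
    """Project a pixel (u, v) with a given disparity into camera-space x, y, z."""
    z = depth_m(disparity_px)
    x = (u - WIDTH_PX / 2) * z / FOCAL_PX
    y = (v - HEIGHT_PX / 2) * z / FOCAL_PX
    return x, y, z

# A detection centered slightly right of image center, with 30 px of disparity:
x, y, z = locate(u=740, v=400, disparity_px=30.0)
print(f"x={x:.2f} m  y={y:.2f} m  z={z:.2f} m")
```

With these assumed numbers, 30 px of disparity puts the object a bit over 2 m away; the hardware does this same math per pixel, on-module.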

And to allow you to use this power right away, with your favorite OS/platform, we made 3x editions of DepthAI, which serve both as reference designs for integrating the DepthAI module into your own custom hardware and as ready-to-use platforms that can solve your computer vision problems as-is.
  1. Raspberry Pi Compute Module Edition - with 3x integrated cameras
  2. Raspberry Pi HAT Edition - with 3x modular cameras
  3. USB3 Edition - compatible with Yocto, Debian, Ubuntu, Mac OS X and Windows 10

All of the above reference designs will be released should our Crowd Supply campaign be successfully funded.  So if you'd like to leverage these designs for your own, or if you'd like to use this hardware directly, please support our Crowd Supply campaign:

Development Steps

The above is the result of a lot of background work to get familiar with the Myriad X, and architect and iterate the System on Module definition, form-factor, interfaces, and manufacturability.  Below are some of the steps involved in that process.

The first thing we made was a dev board for ourselves.  The Myriad X is a complicated chip, with a ton of useful functionality... so we wanted a board where we could explore this easily, try out different image sensors, etc.  Here's what that looks like:


We made the board with modular camera boards so we could easily test out new image sensors w/out the complexity...


  • 1 × Intel Movidius Myriad X Vision/AI Processor
  • 1 × CM3B+ Raspberry Pi Compute Module 3B+
  • 2 × OV9282 Global Shutter camera modules optimized for disparity depth
  • 1 × IMX378 High-resolution camera module that supports 12MP stills

  • Fighting COVID-19!

    Brandon · 5 days ago · 0 comments

    Hello DepthAI Fans!

    We're super excited to share that Luxonis DepthAI and megaAI are being used to help fight COVID-19!

    A doctor cleaning medical equipment


    DepthAI tells Violet, Akara's UV-cleaning robot, where people are in relation to it. This allows Violet to know when people are present, and how far away they are, in real time - disabling its UV light when people are present.

    Check out this article from Intel for more details about Violet, the UV-cleaning robot.

    We're excited to continue developing this effort. Specifically, DepthAI can be used to map which surfaces were cleaned and how well (i.e. how much UV energy was deposited on each surface).

    This would allow a full 3D map of what was cleaned, how clean it is, and what surfaces were missed.

    So in cases where objects in the room are blocking other surfaces, DepthAI would allow a map of the room showing which surfaces were blocked and therefore not able to be cleaned.
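As a rough illustration of that dose-mapping idea: treating the lamp as a point source with inverse-square falloff (a simplifying assumption, as are the lamp power and exposure time below), the UV dose a surface receives at a depth-measured distance works out as:

```python
import math

# Back-of-the-envelope UV dose model for the surface-mapping idea above.
# The point-source inverse-square falloff and the lamp power are simplifying
# assumptions; a real UV-C tube is an extended source, with reflections.
LAMP_UVC_WATTS = 30.0  # assumed UV-C output of the lamp

def irradiance_w_per_m2(distance_m: float) -> float:
    """Irradiance at a surface: E = P / (4 * pi * d^2)."""
    return LAMP_UVC_WATTS / (4 * math.pi * distance_m ** 2)

def dose_j_per_m2(distance_m: float, exposure_s: float) -> float:
    """Accumulated dose: irradiance integrated over the exposure time."""
    return irradiance_w_per_m2(distance_m) * exposure_s

# With depth telling us a surface is 1.5 m away, a 60 s pass deposits:
dose = dose_j_per_m2(distance_m=1.5, exposure_s=60.0)
print(f"{dose:.1f} J/m^2")
```

The depth map supplies the distance term per surface, which is exactly what makes the "how much UV energy was deposited where" map possible.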

    The Luxonis Team  

  • megaAI is Live

    Brandon · 05/22/2020 at 16:23 · 0 comments

    Greetings Potential Backer,

    It is with great pleasure that we announce to you the immediate availability (for backing) of the brand-new Luxonis megaAI camera/AI board.

    It’s super-tiny but super-strong, capable of crushing 4K H.265 video with up to 4 Trillion Operations Per Second of AI/CV power. This single-board powerhouse will come with a nice USB3 cable at all pledge levels that receive hardware. If you’re just interested in the project, or are a good friend, you can pledge at the $20 level to get campaign updates and some other (currently secret) cool stuff.

    We hope you’ll join us over the course of the campaign. If you want to be the first kid on your block to receive a megaAI unit, back at the Roadrunner pledge level - your unit will ship before anyone else’s.

    Thank you!

    -  The Luxonis Team

  • DepthAI Power Over Ethernet (PoE)

    Brandon · 04/24/2020 at 22:52 · 2 comments

    Our first cut at the Power over Ethernet Carrier Board for the new PCIE-capable DepthAI System on Module (SoM) just came in.  

    We haven't tested it yet, and the system on module (the BW2099) which powers these will arrive in ~3 weeks (because it's HDI, so it's slower to fabricate and assemble).  We'll be testing this board standalone soon, while anxiously awaiting the new system on module.

    So the new system on module has additional features which enable PoE (Gigabit Ethernet) applications and other niceties:

     - PoE / Gigabit Ethernet

     - On-board 16GB eMMC for h.264/h.265 video storage and JPEG/etc.

     - uSD Slot
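As a rough guide to what that 16GB of eMMC buys you, here is some hedged arithmetic - the bitrates below are typical illustrative figures, not measured DepthAI encoder settings:

```python
# How long can the SoM's 16 GB eMMC hold encoded video? The bitrates below
# are illustrative assumptions, not measured DepthAI encoder settings.
EMMC_BYTES = 16 * 10**9  # 16 GB (decimal, as flash capacity is usually rated)

BITRATES_MBPS = {
    "h.264 1080p30": 8.0,   # assumed
    "h.265 1080p30": 4.0,   # assumed: HEVC at roughly half the H.264 bitrate
    "h.265 4K30": 16.0,     # assumed
}

def hours_of_video(bitrate_mbps: float, capacity_bytes: int = EMMC_BYTES) -> float:
    """Recording time in hours for a given constant bitrate."""
    bytes_per_s = bitrate_mbps * 1e6 / 8
    return capacity_bytes / bytes_per_s / 3600

for name, mbps in BITRATES_MBPS.items():
    print(f"{name}: ~{hours_of_video(mbps):.1f} h")
```

So under these assumptions, the on-board storage holds a couple of hours of 4K H.265, or most of a workday at 1080p - after which the uSD slot or Gigabit Ethernet takes over.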

  • Open Source Spatial AI Hardware

    Brandon · 04/22/2020 at 20:46 · 0 comments

    Now you can integrate the power of DepthAI into your custom prototypes and products at the board level because...

    We open sourced the carrier boards for the DepthAI System on Module (SoM).  So now you can easily take this SoM and integrate it directly into your designs.

    Check out our Github here:

    So this covers all the hardware below:

  • Mask (Good) and No-Mask DepthAI Training

    Brandon · 04/21/2020 at 06:22 · 0 comments

    We did a quick training of DepthAI over the weekend (on Mask (Good) and No Mask (Bad)). Seems pretty decent as long as it's not at too far a range:
    Running on (uncalibrated) DepthAI real-time:

  • MobileNet SSD v2 Training for DepthAI and uAI

    Brandon · 04/13/2020 at 20:36 · 0 comments

    Got training for MobileNetSSDv2 working:

    You can label using this tool:

  • Update On Training Custom Neural Models for DepthAI

    Brandon · 04/08/2020 at 03:58 · 0 comments

    Meant to share this a while ago. So we have our initial online custom training for DepthAI now live on Colab.

    So there are two notable limitations currently:

    1. DepthAI currently supports OpenVINO 2019 R3, which itself requires older versions of TensorFlow and so on. So this flow has all those old versions, which causes a lot of additional steps in Colab... a lot of uninstalling current versions of things and installing old versions. We are currently in the process of upgrading our DepthAI codebase to support OpenVINO 2020.1, see here. The updated training flow will follow when that's done. (EDIT: we got that done fast, see the bottom of this post for the 2020.1 training flow)
    2. The final conversion for DepthAI (to .blob) for some reason will not run on Google Colab. So it requires a local machine to do it. We're planning on just making our own server for this purpose that Google Colab can talk to to do the conversion.

    To test the custom training we took some images of apples and oranges and did a terrible job labeling them and then trained and converted the network and ran it on DepthAI. It's easy to get WAY better accuracies and detection rates by using something like to generate a larger dataset.

    To use the latest of everything (as of this writing), including OpenVINO (R2020.1), etc. use the following:


    Running on DepthAI:


    Brandon & the Luxonis Team

  • Enclosures and Mounts

    Brandon · 03/25/2020 at 21:40 · 0 comments
  • Crowd Supply Shipments Received and Community Slack

    Brandon · 03/02/2020 at 18:59 · 0 comments

    Hi everyone,

    Just wanted to share that the Crowd Supply shipments have been received, and our fans are up and running with the hardware, with much exciting activity and discussion in our Community Slack channel.

    To join, go to our docs page: and scroll down to the bottom and click on 'Slack Community' to join.

    We also have a couple new hardware models for sale, and a PoE variant of DepthAI in progress.

    First, the two new models since the Crowd Supply campaign (these will be on the campaign soon):

    1. BW1098OBC - USB3, Onboard cameras:

    2. BW1093 Single camera variant, USB3C.  We're calling this uAI (pronounced 'micro AI').

    You can buy initial versions of these here:

    Note that the BW1093 has a temporary heatsink on these units, as the Coronavirus is preventing the CM from being able to make the final version.  So it's just a stick-on one for now, similar to what the Raspberry Pi stick-on heatsinks look like:

    And for the new board we're working on:

    We got requests to be able to deploy DepthAI quite far from any host computer.  And to be able to deploy as many as people want.  Boom, enter PoE.  Now you can deploy DepthAI at up to 328.1 feet (100 meters, for those who have 1m = 3.281 feet memorized, like I do, strangely).

    Below are the renderings of this Myriad X power over Ethernet Design.  This is more or less our initial test design.  Other variants will come after this is validated and proves out all the EE pieces.  

    Our plan is to make a ROS module so that, when plugged into the network, ROS will auto-discover it and it can be used right away.



    Brandon and the Luxonis team!

  • MVP Complete, Crowd Supply Shipping

    Brandon · 02/14/2020 at 16:40 · 0 comments

    Hi all,

    So 1/2 of the Crowd Supply orders shipped to customers yesterday.  And the other half are shipping out today.  

    So guess what?  Our planned ship date was... drum roll please, February 14th (today)!

    And we have our MVP running:

    So what is this showing?  The dawn of 'spatial AI'.  Where now your embedded system can know what objects are, and where they are in physical space, in real time.  

    It's a capability that straight-up didn't exist at all prior to 2016, even on mainframes.  And even recently, thinking you could do this on an embedded system would be considered science fiction.

    Now you can have this capability, for your product, with this little module:

    And big thanks to the Crowd Supply team for getting these all shipped out on time (and early)!


    Brandon & the Luxonis team.

View all 53 project logs




Andrey V wrote 08/30/2019 at 17:53

I think it's too hard to get it to work IRL. If you do it, your LVL=80 :)

Brandon wrote 08/30/2019 at 17:59

Heh.  Thanks.  It's definitely a hard project - but we're hoping it will provide a lot of value!

Andrey V wrote 08/30/2019 at 18:02

Good luck!!!

Alan wrote 08/04/2019 at 00:40

Does your SoM break out the PCIe lanes on the Movidius? I was looking at the UP AI Core, but they convert PCIe to USB on board.

Brandon wrote 08/04/2019 at 01:03

Great question, thanks.

It does not.  The SoM uses USB3 (or 2) to connect to the host.  Less power, fewer parts, and it also works with hosts that don’t have PCIe or don’t want to tie up a PCIe slot for this.

On the DepthAI for Raspberry Pi version, there’s a USB hub chip on the carrier board, which is what allows the Pi compute module to talk to the Myriad X and also the outside world.

And yes, the PCIe boards do convert from PCIe to USB on-board.

If you don’t mind me asking, why do you ask?  Do you need a direct PCIe connection?

Thanks again,

Alan wrote 08/05/2019 at 05:47

I can't think of any reason right away, but it would be great to have as much of the I/O exposed as possible.  I think at least the SPI and Ethernet should be exposed, because those would both be useful for someone who wanted to use the SoM as a standalone device.

P.S. What did you guys have to do in order to acquire the chips? I can't seem to find anywhere to order them online.

Brandon wrote 08/06/2019 at 22:35

Hey Alan,

For some reason I can't reply to your comment, so I'm replying here instead.  We don't know how to make Ethernet work yet from a firmware standpoint (and don't have a clear path to figuring it out), so we left it off of this SoM.

That said, we are making a SoM with an i.MX 8M alongside the Myriad X, so that would provide Ethernet and a whole slew of interfaces like USB3, WiFi, etc.

psykhon wrote 03/07/2019 at 12:19

Hi Brandon, awesome project!

How hard was it to get the Myriad X chips? Can you share some info on how you did it? Price?

Tegwyn☠Twmffat wrote 02/01/2019 at 14:53

I checked out that link above - I wonder how their larger kit compares to the Jetson TX2 in terms of performance?

I realise performance is not everything, and the Intel model zoo is pretty useful. The Nvidia software seems to be a bit behind, in that they only have bvlc_googlenet as an 'out of the box' solution for detection.

What do you think your price point will be for a single Myriad X carrier board? I'm presuming about $100?

Brandon wrote 02/01/2019 at 15:02

Great question!  So we've actually done a decent amount of stuff on the Tx2 as well.  The Myriad X, in terms of straight neural inference performance (e.g. object detection, semantic segmentation, etc.), is about the same as the Tx2.  The Myriad X neural engine is 1 TOPS, and the Tx2 peaks in ideal conditions at 2 TOPS, but from the results below, it seems like in most conditions it's effectively 1 TOPS:

But!  If your application is depth vision + neural inference, the Myriad X is equivalent to about 2 or 3 Jetson Tx2s, mainly because of the 16 SHAVE cores in the Myriad X, which together can handle 6 cameras as 3 pairs of depth streams.

The neural inference part of the Myriad X is only 1 TOPS of the total 4 TOPS the device can do.  The remaining TOPS are for image processing functions like depth vision.

So this board won't really even tax the Myriad X, as there will just be one depth stream.  That said, we can use the extra Myriad X 'head room' to run fancier/more-processing-intensive depth calculations on just these 2 cameras - to produce a better set of depth information.

Tegwyn☠Twmffat wrote 01/31/2019 at 22:58

Hello Brandon! Does the Myriad X chip get put on the carrier board, or does it stay in the USB stick?

If it goes on the board, how many of them?

Brandon wrote 02/01/2019 at 12:37

The Myriad X would be directly on the carrier board.  We could make versions with multiple Myriad X, for sure.  Is that of interest?

These guys did that for their PCIe version:

I have 2 of those on order, by the way.  They're useful as well, for sure - just a different application, and not applicable for the Pi community (which is what this board should serve).
