Start With the Why
- There’s an epidemic in the US of injuries and deaths among people who ride bikes
- The majority of cases involve distracted driving caused by smartphones (social media, texting, emailing, etc.)
- We set out to try to make people safer on bicycles in the US
- We’re technologists
- Focused on AI/ML/Embedded
- So we’re seeing if we can make a technology solution
(If you'd like to read more about CommuteGuardian, see here)
- In prototyping CommuteGuardian, we realized how powerful the combination of depth and AI is.
- And we realized that no such embedded platform existed
- So our milestone on the path to CommuteGuardian is to build this platform – and sell it as a standard product.
- We’re building it for the Raspberry Pi (Compute Module)
- Human-level perception on the world’s most popular platform
- Adrian’s PyImageSearch Raspberry Pi Computer Vision Kickstarter sold out in 10 seconds – validating demand for Computer Vision on the Pi (that, and validating that Adrian is AWESOME!)
The first thing we made was a dev board for ourselves. The Myriad X is a complicated chip, with a ton of useful functionality... so we wanted a board where we could explore this easily, try out different image sensors, etc. Here's what that looks like:
We made the board with modular camera boards so we could easily test out new image sensors w/out the complexity of spinning a new board. So we'll continue to use this as we try out new image sensors and camera modules.
While waiting on our development boards to be fabricated, populated, etc., we brainstormed how to keep costs down (working w/ fine-pitch BGAs that necessitate laser vias means prototypes are EXPENSIVE) while still allowing easy experimentation w/ various form factors, on/off-board cameras, etc. We landed on making ourselves a Myriad X System on Module (SoM), which is the board w/ all the crazy laser vias, stacked vias, and overall High-Density-Interconnect (HDI) stuff that makes these boards expensive. This way, we figure, we can use it as the core of any Myriad X designs we do, without having to constantly prototype w/ expensive boards.
We exposed all that we needed for our end-goal of 3D object detection (i.e. MobileNet-SSD object detection + 3D reprojection off of stereo depth data). So that meant exposing a single 4-lane MIPI for handling high-res (e.g. 12MP) color camera sensors and 2x 2-lane MIPI for cameras such as ~1MP global-shutter image sensors for depth.
And we threw a couple of other interfaces, boot methods, etc. on there for good measure, which are depopulated by default to save cost when not needed, and can be populated if needed.
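To give a feel for the math behind that end goal, here's a minimal sketch of turning a stereo disparity into depth and reprojecting a detection's pixel location into 3D camera-frame coordinates. The intrinsics (fx, fy, cx, cy), baseline, and disparity values below are hypothetical placeholders, not real calibration data for any of our boards:

```python
# Sketch: stereo disparity -> depth, then pixel + depth -> 3D point,
# using a standard pinhole camera model. All numeric values are
# illustrative assumptions, not calibrated values.

def disparity_to_depth(disparity_px, fx, baseline_m):
    """Depth Z (meters) from disparity (pixels), focal length fx
    (pixels), and stereo baseline (meters): Z = fx * B / d."""
    return fx * baseline_m / disparity_px

def reproject_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Map pixel (u, v) with depth Z (meters) to camera-frame (X, Y, Z)."""
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Hypothetical intrinsics for a 1280x720 sensor and a 75 mm baseline:
fx = fy = 700.0
cx, cy = 640.0, 360.0
baseline_m = 0.075

# A detection centered at the principal point with 21 px disparity:
z = disparity_to_depth(21.0, fx, baseline_m)     # 700 * 0.075 / 21 = 2.5 m
print(reproject_to_3d(640.0, 360.0, z, fx, fy, cx, cy))  # (0.0, 0.0, 2.5)
```

In practice the Myriad X produces the depth map in hardware; this sketch only shows the reprojection step that turns a 2D detection plus depth into a 3D position.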
So of course in making a module, you also need to make a board on which to test the module. So in parallel to making the SoM, we started attacking a basic breakout carrier board:
It's basic, but pulls out all the important interfaces, and works with the same modular camera-board system as our development board. So it's to some degree our 'development board lite'.
And once we got both of these ordered, we turned our attention to what we set out to build, for you, the DepthAI for Raspberry Pi system. And here it is, in all its Altium-rendered glory: