Using the Raspberry Pi as a host for the Intel NCS2 (Myriad X) is becoming an increasingly popular way to run neural inference at the 'edge'.
However, the data path when using an NCS2 is inefficient: the camera data has to flow through the Pi first to reach the NCS2. This results in a ~5x reduction in performance relative to what the NCS2 (the Myriad X) is capable of.
So what we're making is a carrier board for the Raspberry Pi Compute Module 3B+, which exposes dual-camera connections directly to the Myriad X. The Myriad X then connects to the Raspberry Pi in largely the same manner as it does in the NCS2 (so much code can be reused).
This allows a couple things:
1. The video data path now skips the Pi, eliminating that CPU use (which is a LOT).
2. The hardware stereo-image depth capability of the Myriad X can now be used.
3. An estimated ~5x improvement on MobileNet-SSD object detection, as a result of the Raspberry Pi CPU no longer limiting the Myriad X.
So the effort that prompted us to make this Raspberry Pi ("AiPi") solution is actually Commute Guardian (check it out at commuteguardian.com). So we have a lot of that effort (prototyping, results, etc.) mixed in here.
We have to do all (and more) of the AiPi work for that end goal (which is itself to save cyclists' lives). So we figured: why not share the general underpinnings of Commute Guardian here with Hackaday, so others can benefit from the core work we're doing?
For more information on what we're thinking on the AiPi, stay tuned here, and/or check out discuss.aipi.io to give feedback/feature-requests/etc. as we progress along making the device.
So we got hardware depth and video tracking working. It's not calibrated depth yet (which is why it doesn't look so great; it's using an identity 3x3 homography matrix). But it's working! (Caveat: it's still buggy and crashes on startup 9/10 times, but the 1/10 is so satisfying!)
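For the curious: a "unity" homography is just the 3x3 identity matrix, so the rectification warp is a no-op and the left/right image rows aren't epipolar-aligned, which is why the disparity looks rough. Here's a minimal numpy sketch of what that warp does to pixel coordinates (our own illustration, not the Myriad X firmware):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an Nx2 array of (x, y) pixel coords."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    out = (H @ pts_h.T).T
    return out[:, :2] / out[:, 2:3]                    # back to Cartesian

# With the identity ("unity") homography, rectification is a no-op:
pts = np.array([[100.0, 50.0], [320.0, 240.0]])
print(warp_points(np.eye(3), pts))   # points come back unchanged
```

Once calibration is in, H gets replaced by the real rectifying homography for each camera, and the depth should clean up considerably.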
But to reiterate, all the computation shown in the video is being done on the Myriad X (depth calculation and feature tracking). The host is doing nothing other than displaying the data the Myriad X is streaming, which is optional.
The nice part is the Myriad X doesn't even get warm doing this. And that's with zero heatsink. Just the chip exposed to ambient air.
And for more info as to our end goals, check out: aipi.io - a Raspberry Pi depth vision + AI carrier board, which is itself a product we thought would be useful to the world, and is an internal stepping stone to: commuteguardian.com - the AI bike light to save lives
We're excited to share that we just finished component placement and initial routing of our first version of the board. This one is for initial development, debugging, etc. - and actually doesn't even have a Raspberry Pi slot yet. It'll primarily be programmed, prodded, and debugged over JTAG.
Anyways, here's a 3D view of it:
It is, however, the same size as a Raspberry Pi 3. For the later versions, we'll remove a TON of extra stuff that's on this one - so there'll be more room for the Raspberry Pi CM3B+ module.
This demo is with an Intel RealSense D435 + Raspberry Pi 3B + NCS1.
It's doing MobileNet-SSD Object Detection and depth-data projection to give XYZ position of every pixel. And we're printing the XYZ of the middle-pixel of bounding boxes in the bounding box label (hence with the chair, it changes when I walk behind it, because the center-pixel is actually the wall behind the chair in its initial orientation). All other pixels' XYZ are available per frame, so you can use the ones most pertinent, average over an area, etc. And in the case of the Commute Guardian, the XYZ location of the edge of the vehicle is used for impact prediction.
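The depth-to-XYZ projection described above is standard pinhole deprojection. A quick sketch of the idea (the fx/fy/cx/cy intrinsics below are made-up example values; in practice they come from the camera's calibration, e.g. the D435's factory intrinsics):

```python
import numpy as np

# Example intrinsics: made up for illustration; use your camera's calibration.
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def deproject(u, v, depth_m):
    """Pinhole deprojection: pixel (u, v) plus depth in meters -> camera-frame XYZ."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def bbox_center_xyz(bbox, depth_map):
    """XYZ of the center pixel of a detection bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = bbox
    u, v = (x1 + x2) // 2, (y1 + y2) // 2
    return deproject(u, v, depth_map[v, u])
```

Any other pixel inside the box works the same way, so averaging over a region (or picking the vehicle's edge, as Commute Guardian does) is just a different choice of (u, v).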
We're working to make a board which leverages the Myriad X to do the depth calculation (and de-warp/etc.) directly while also doing the neural network side (the object detection). This should take the whole system from ~3FPS to ~30FPS, while reducing cost.
And if you want to give input on what the design should be, or other designs you'd want instead (or generally just to find out more options for embedded machine learning), head over to: https://discuss.aipi.io/
And if you want to know the background/why of us making this stuff, the end goal is to save bike commuters' lives: https://commuteguardian.com/
We're simply releasing our work before the final bike product is out, because we realized that the board itself (particularly with the Raspberry Pi as the brain) would be super useful to a bunch of engineers across a variety of project types.
Quick background: AiPi is us sharing a useful product we're developing on our path to make the commuteguardian.com product. Here's some background on that product:
Wanted to share the idea of how the rider is warned, before the horn goes off. The background here is the horn should never have to go off. Only the MOST distracted drivers won’t notice the ultra-bright strobes, which come on well before the horn activates.
However, the horn WILL go off should the driver not respond to the strobes. It will activate early-enough such that the driver still has enough time/distance to respond and not hit you.
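To make that timing concrete, here's a toy time-to-collision sketch of the escalation logic. The threshold values are invented for illustration; the real Commute Guardian tuning isn't published here:

```python
# Hypothetical thresholds, for illustration only.
STROBE_TTC_S = 6.0   # strobes fire early, while the driver has plenty of margin
HORN_TTC_S = 2.5     # horn fires late, but with time/distance left to react

def time_to_collision(distance_m, closing_speed_mps):
    """Naive constant-velocity TTC; returns inf if the gap is opening."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

def alert_state(distance_m, closing_speed_mps):
    ttc = time_to_collision(distance_m, closing_speed_mps)
    if ttc <= HORN_TTC_S:
        return "DANGER"   # horn + strobes
    if ttc <= STROBE_TTC_S:
        return "WARNING"  # strobes + rider alert
    return "NORMAL"
```

So a fast vehicle far away lands in WARNING (strobes only), and only stays on an impact trajectory long enough to cross the second threshold does the horn sound.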
To warn the rider, there are two separate systems. The first one is an optional user interface via smartphone, which we’ll discuss first as it paints the picture a little easier:
So this gives you the states. In normal operation, it’s recording and showing you the map view.
When there’s a warning state, the ultra-bright strobes come on, and there’s an overlay to make you aware of the elevated danger.
An example of this is a vehicle that’s still far away but closing on your trajectory at a high rate of speed.
And if that vehicle gets closer and the strobes didn’t deter it from an impact trajectory, the horn will sound, and you’ll be visually warned.
So the -second- system of warning the rider doesn’t rely on this optional (although cool) app.
It’s simply an audible alert that the biker will hear (but the car likely won’t), sounded in the WARNING state to alert the biker of the danger and hopefully let them avoid the DANGER state entirely (moving over, changing course, etc.).