-
A Whole Bunch of New DepthAI Capabilities
12/18/2020 at 04:34 • 0 comments
Hi DepthAI Backers!
Thanks again for all the continued support and interest in the platform.
So we've been hard at work adding a TON of DepthAI functionality. You can track a lot of the progress in the following Github projects:
- Gen1 Feature-Complete: https://github.com/orgs/luxonis/projects/3
- Gen2 December-Delivery: https://github.com/orgs/luxonis/projects/2
- Gen2 2021 Efforts (some already in progress): https://github.com/orgs/luxonis/projects/4
As you can see, there are a TON of features we have released since the last update. Let's highlight a few below:
RGB-Depth Alignment
We have the calibration stage working now, and DepthAI units built after this writing will have RGB-right calibration performed. An example with semantic segmentation is shown below:
The `right` grayscale camera is shown on the right and the RGB is shown on the left. You can see the cameras have slightly different aspect ratios and fields of view, but the semantic segmentation is still properly applied. For more details on this feature, and to track its progress, see our Github issue here: https://github.com/luxonis/depthai/issues/284
Subpixel Capability
DepthAI now supports subpixel disparity. To try it out yourself, use the example [here](https://github.com/luxonis/depthai-experiments#gen2-subpixel-and-lr-check-disparity-depth-here). There's a sketch of enabling it below, followed by a quick test of it at my desk:
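Here's what enabling it looks like with the Gen2 Python API as of this writing (a minimal sketch; node and method names may shift between depthai releases, so treat the linked example as authoritative):

```python
import depthai as dai

# Minimal sketch: a stereo pipeline with subpixel disparity enabled.
pipeline = dai.Pipeline()

mono_left = pipeline.createMonoCamera()
mono_right = pipeline.createMonoCamera()
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.createStereoDepth()
stereo.setSubpixel(True)        # fractional-bit (subpixel) disparity
stereo.setLeftRightCheck(True)  # LR-check, as in the linked example
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

xout = pipeline.createXLinkOut()
xout.setStreamName("disparity")
stereo.disparity.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("disparity", maxSize=4, blocking=False)
    frame = q.get().getFrame()  # disparity map, with fractional bits in subpixel mode
```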
Host Side Depth Capability
We also now allow performing depth estimation on images sent from the host. This is very convenient for test and validation, since stored images can be used. Along with this, we now support outputting the rectified-left and rectified-right streams, so they can be stored and later used with DepthAI's depth engine in various CV pipelines.
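Conceptually, the flow swaps the two mono cameras for `XLinkIn` nodes and pushes frames into the StereoDepth engine from the host. Here's a minimal sketch against the Gen2 API (stream and file names are illustrative, and the linked demo below also handles calibration/rectification details omitted here):

```python
import cv2
import depthai as dai

# Minimal sketch: feed rectified left/right images from the host into
# the on-device StereoDepth engine and read back a depth map.
pipeline = dai.Pipeline()

in_left = pipeline.createXLinkIn()
in_right = pipeline.createXLinkIn()
in_left.setStreamName("in_left")
in_right.setStreamName("in_right")

stereo = pipeline.createStereoDepth()
stereo.setInputResolution(1280, 720)  # must match the images you send
in_left.out.link(stereo.left)
in_right.out.link(stereo.right)

xout = pipeline.createXLinkOut()
xout.setStreamName("depth")
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    for name, stream in (("rectified_left.png", "in_left"),
                         ("rectified_right.png", "in_right")):
        img = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
        frame = dai.ImgFrame()
        frame.setType(dai.ImgFrame.Type.RAW8)
        frame.setWidth(img.shape[1])
        frame.setHeight(img.shape[0])
        frame.setData(img.flatten())
        device.getInputQueue(stream).send(frame)

    depth = device.getOutputQueue("depth").get().getFrame()  # uint16, millimeters
```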
See [here](https://github.com/luxonis/depthai-experiments/tree/master/gen2-camera-demo#depth-from-rectified-host-images) for how to do this with your DepthAI. And see some examples below from the Middlebury stereo dataset:
The bad-looking areas are caused by objects being too close to the camera for the given baseline, exceeding the 96-pixel maximum disparity search range (a StereoDepth engine constraint):
These areas will improve with `extended = True`; however, Extended Disparity and Subpixel cannot both operate at the same time.
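For a rough sense of where that limit kicks in: depth = focal_length_px × baseline / disparity_px, so the 96-pixel cap sets a minimum measurable distance. A back-of-envelope calculation (the baseline and focal length below are assumed, approximate values for a DepthAI board; check your unit's calibration for the real numbers):

```python
# Assumed values: ~7.5 cm stereo baseline, ~880 px focal length at 1280x800.
baseline_m = 0.075
focal_px = 880
max_disparity_px = 96  # StereoDepth engine constraint

# The largest matchable disparity corresponds to the closest measurable object:
min_depth_m = focal_px * baseline_m / max_disparity_px
print(f"min depth ~ {min_depth_m:.2f} m")  # ~0.69 m

# Extended disparity roughly doubles the search range, halving the minimum:
print(f"with extended disparity ~ {focal_px * baseline_m / (2 * max_disparity_px):.2f} m")
```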
RGB Focus, Exposure, and Sensitivity Control
We also added manual focus, exposure, and sensitivity controls, along with examples of how to use them. See [here](https://github.com/luxonis/depthai/pull/279) for how to use these controls. Here is an example of increasing the exposure time:
And here is setting it quite low:
It's actually fairly remarkable how well the neural network still detects me as a person even when the image is this dark.
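If you want to drive these controls from your own script rather than the demo's keyboard shortcuts, here's a minimal sketch of sending manual exposure and focus commands with the Gen2 `CameraControl` message (the linked PR shows the equivalent Gen1 controls; exact method names may vary by release):

```python
import depthai as dai

# Minimal sketch: route a control stream into the color camera and send
# manual exposure/focus commands from the host.
pipeline = dai.Pipeline()

cam = pipeline.createColorCamera()
control_in = pipeline.createXLinkIn()
control_in.setStreamName("control")
control_in.out.link(cam.inputControl)

xout = pipeline.createXLinkOut()
xout.setStreamName("rgb")
cam.video.link(xout.input)

with dai.Device(pipeline) as device:
    ctrl = dai.CameraControl()
    ctrl.setManualExposure(20000, 400)  # exposure time in microseconds, ISO sensitivity
    ctrl.setManualFocus(130)            # lens position, 0..255
    device.getInputQueue("control").send(ctrl)
```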
Pure Embedded DepthAI
In our last update ([here](https://www.crowdsupply.com/luxonis/depthai/updates/pure-embedded-depthai-under-development)), we mentioned that we were making a pure-embedded DepthAI.
We made it. Here's the initial concept:
And here it is working!
And here it is on a wrist to give a reference of its size:
And [eProsima](https://www.eprosima.com/) even got microROS running on this with DepthAI, exporting results over WiFi back to RViz:
RPi Compute Module 4
We're quite excited about this one. We're fairly close to ordering it. Some initial views in Altium below:
There's a bunch more, but we'll leave you with our recent interview with Chris Gammell at The Amp Hour!
https://theamphour.com/517-depth-and-ai-with-brandon-gilles-and-brian-weinstein/
Cheers,
Brandon & The Luxonis Team
-
Announcing OpenCV AI Kit (OAK)
07/14/2020 at 13:48 • 0 comments
Today, our team is excited to release to you the OpenCV AI Kit (OAK): a modular, open-source ecosystem composed of MIT-licensed hardware, software, and AI training that allows you to embed Spatial AI and CV super-powers into your product.
And best of all, you can buy this complete solution today and integrate it into your product tomorrow.
Back our campaign today!
https://www.kickstarter.com/projects/opencv/opencv-ai-kit?ref=card
-
megaAI CrowdSupply Campaign Production Batch Complete
07/09/2020 at 19:58 • 0 comments
Our production run for the megaAI CrowdSupply campaign is complete, and units are now shipping to us:
We had 97% yield on the first round of testing, and 99% yield after rework and retest of the 3% that had issues in the first round.
-
Pure Embedded Variant of DepthAI
07/02/2020 at 15:34 • 0 comments
Hi DepthAI Backers and Fans,
So we've built a proof of concept of an SPI-only interface for DepthAI, and it's working well (the proof of concept was done with an MSP430 and a Raspberry Pi over SPI).
So to make it easier for engineers to leverage this power (and also for us internally to develop it), we're making a complete hardware and software/AI reference design for the ESP32, with the primary interface between DepthAI and the ESP32 being SPI.
The design will still have USB3C for DepthAI, which will allow you to see live high-bandwidth results/etc. on a computer while integrating/debugging communication to your ESP32 code (both running in parallel, which will be nice for debugging). Similarly, the ESP32 will have an onboard UART-USB converter and micro-USB connector for programming/interfacing w/ the ESP32 for easy development/debug.
For details and progress on the hardware effort, see [here], and to check out the SPI support enhancement to the DepthAI API, see [here].
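On the pipeline side, this is exposed as an `SPIOut` node in the in-development Gen2 API. A rough sketch of routing neural network results out over SPI (the blob path and bus ID here are illustrative):

```python
import depthai as dai

# Minimal sketch: camera -> neural network -> SPI output to a microcontroller.
pipeline = dai.Pipeline()

cam = pipeline.createColorCamera()
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

nn = pipeline.createNeuralNetwork()
nn.setBlobPath("mobilenet-ssd.blob")  # illustrative model path
cam.preview.link(nn.input)

spi = pipeline.createSPIOut()
spi.setStreamName("spi_nn")  # stream name the ESP32 side subscribes to
spi.setBusId(0)              # SPI bus wired to the MCU
nn.out.link(spi.input)
```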
In short here's the concept:
And here's a first cut at the placement:
And please let us know if you have any thoughts/comments/questions on this design!
Best,
Brandon & The Luxonis Team
-
New Spatial AI Capabilities & Multi-Stage Inference
06/17/2020 at 19:53 • 0 comments
We have a super-interesting feature-set coming to DepthAI:
- 3D feature localization (e.g. finding facial features) in physical space
- Parallel-inference-based 3D object localization
- Two-stage neural inference support
And all of these are initially working (in this PR, [here](https://github.com/luxonis/depthai/pull/94#issuecomment-645416719)).
So, on to the details of how this works:
We are actually implementing a feature that allows you to run neural inference on either or both of the grayscale cameras.
This sort of flow is ideal for finding the 3D location of small objects, shiny objects, or objects for which disparity depth might struggle to resolve the distance (the z-dimension needed to get the 3D position, XYZ). So DepthAI can now be used in two modalities:
- As it's used now: the disparity depth results within the object detector's bounding box are used to re-project the XYZ location of the object's center.
- New: the neural network is run in parallel on both the left and right grayscale cameras, and the results are used to triangulate the location of features.
An example where the second modality is extremely useful is finding the XYZ positions of facial landmarks, such as the eyes, nose, and corners of the mouth.
Why is this useful for facial features? For small features, the risk of disparity depth having a hole at that location goes up, and even worse, for faces with glasses, the reflection off the glasses may throw the disparity depth calculation off (in fact it might 'properly' give the depth of the reflected object instead).
When running the neural network in parallel, none of these issues exist: the network finds the eyes, nose, and mouth corners in each image independently, the disparity (in pixels) between the left and right results gives the depth (depth = focal length × baseline / disparity), and that depth is re-projected through the optics of the camera to get the full XYZ position of each feature.
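To make that concrete, here's a toy version of the triangulation step. The focal length, principal point, and baseline below are assumed values for illustration; real code would read them from the unit's calibration:

```python
# Toy triangulation of a landmark found in both rectified grayscale images.
def landmark_xyz(u_left, v_left, u_right,
                 focal_px=880.0, cx=640.0, cy=400.0, baseline_m=0.075):
    """Triangulate one landmark from its pixel position in the rectified
    left and right images (after rectification it lies on the same row v)."""
    disparity = u_left - u_right           # pixels; depth is inversely proportional to this
    z = focal_px * baseline_m / disparity  # depth in meters
    # Re-project through the (pinhole) camera optics to get X and Y:
    x = (u_left - cx) * z / focal_px
    y = (v_left - cy) * z / focal_px
    return x, y, z

# e.g. an eye detected at (700, 380) in the left image and (660, 380) in the right:
print(landmark_xyz(700, 380, 660))  # ~1.65 m away
```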
And as you can see below, it works fine even w/ my quite-reflective anti-glare glasses:
Thoughts?
Cheers,
Brandon and the Luxonis Team
-
Raspberry Pi HQ Camera Works With DepthAI!
06/10/2020 at 04:07 • 0 comments
Hello everyone!
So we have exciting news! Over the weekend we wrote a driver for the IMX477 used in the Raspberry Pi HQ Camera.
So now you can use the awesome new Raspberry Pi HQ camera with DepthAI FFC (here). Below are some videos of it working right after we wrote the driver this weekend.
Notice that it even worked w/ an extra-long FFC cable!
More details on how to use it are here. And remember DepthAI is open source, so you can even make your own adapter (or other DepthAI boards) from our Github here.
And you can buy the adapter here: https://shop.luxonis.com/products/rpi-hq-camera-imx477-adapter-kit
Cheers,
Brandon & the Luxonis team
-
IR-only DepthAI
06/03/2020 at 20:53 • 0 comments
Hi DepthAI (and megaAI) fans!
We have a couple of customers who are interested in IR-only variants of the global-shutter cameras used for depth, so we made a quick variant of DepthAI with these.
We actually just made adapter boards which plug directly into the BW1097 (here) by unplugging the existing onboard cameras. We tested with this IR flashlight here.
It's a bit hard to see, but you can tell the room is relatively dark to visible light and the IR cameras pick up the IR light quite well.
Cheers,
The Luxonis Team
-
Fighting COVID-19!
05/26/2020 at 19:35 • 0 comments
Hello DepthAI Fans!
We're super excited to share that Luxonis DepthAI and megaAI are being used to help fight COVID-19!
How?
To know where people are in relation to Violet, Akara's UV-cleaning robot. This allows Violet to know, in real time, when people are present and how far away they are, so it can disable its UV light whenever someone is nearby.
Check out this article from Intel for more details about Violet, the UV-cleaning robot.
We're excited to continue developing this effort. Specifically, DepthAI can be used to map which surfaces were cleaned and how well (i.e. how much UV energy was deposited on each surface).
This would allow a full 3D map of what was cleaned, how clean it is, and what surfaces were missed.
So in cases where objects in the room are blocking other surfaces, DepthAI would allow a map of the room showing which surfaces were blocked and therefore not able to be cleaned.
Thanks,
The Luxonis Team
-
megaAI is Live
05/22/2020 at 16:23 • 0 comments
Greetings Potential Backer,
It is with great pleasure that we announce to you the immediate availability (for backing) of the brand-new Luxonis megaAI camera/AI board.
It’s super-tiny but super-strong, capable of crushing 4K H.265 video with up to 4 Trillion Operations Per Second of AI/CV power. This single-board powerhouse will come with a nice USB3 cable at all pledge levels that receive hardware. If you’re just interested in the project, or are a good friend, you can pledge at the $20 level to get campaign updates and some other (currently secret) cool stuff.
We hope you’ll join us during the campaign. If you want to be the first kid on your block to receive a megaAI unit, back at the Roadrunner pledge level and you’ll be shipped before anyone else.
https://www.crowdsupply.com/luxonis/megaai
Thank you!
- The Luxonis Team
-
DepthAI Power Over Ethernet (PoE)
04/24/2020 at 22:52 • 2 comments
Our first cut at the Power over Ethernet carrier board for the new PCIe-capable DepthAI System on Module (SoM) just came in.
We haven't tested it yet, and the system on module (the BW2099) which powers these will arrive in ~3 weeks (it's HDI, so it's slower to fabricate and assemble). We'll be testing this board standalone soon, while anxiously awaiting the new system on module.
So the new system on module has additional features which enable PoE (Gigabit Ethernet) applications and other niceties:
- PoE / Gigabit Ethernet
- On-board 16GB eMMC for storing H.264/H.265 video, JPEGs, etc.
- uSD Slot