So, due to the pandemic I can't actually build the damn thing right now, but everything else is coming along nicely.
The matrix is controlled by an ESP32. This was my first ESP32 project and my first foray into ESP-IDF, so I cut some corners: instead of using the LCD mode, I bit-banged the interface. With one core fully dedicated to that and one dedicated to IO, I can draw about 3300 monochrome frames per second. Enough for me.
Normally you redraw the same line several times (essentially PWM) to achieve color depth. By drawing each line only once, I trade color depth for full-screen refresh rate.
So, it's really not easy to use two Kinect v2 sensors simultaneously on one PC due to the ridiculous bandwidth they need. But Kinect v2 + Kinect v1 works fine. Bonus points: they use different depth-sensing methods (time-of-flight vs. structured light), so there's no chance of interference. I only use the K1 for "closing" the point cloud from behind anyway.
The depth textures are fed into an HLSL compute shader, which is a great fit for this use case because the work is so easy to parallelize: every depth pixel can be processed independently.
I know the point cloud in the first images doesn't look much different from (if anything, worse than) the cheapest of cheap Kinect example projects. But mine is generated from the real data sent to the matrix and is displayed the same way: a plane virtually rotating through the point cloud, switching points on or off.
So I hope this is what it will actually look like! :)