• Pose2Art: low cost video pose capture for interactive spaces

    10/19/2022 at 00:00

    Pose2Art Project

    Jerry Isdale, MIOAT.com

    Maui Institute of Art and Technology

    Notes started Oct 12, 2022

    This page is now superseded by the Project.

    Basic idea

    Create a low-cost, (mostly) open source 'smart' camera system to capture human pose and use it to drive interactive immersive art installations.  Yes, it's kind of like the Microsoft Kinect/Azure 'product', but DIY and open to upgrading.

    1. use one or more smart cameras to capture human pose from a video stream
    2. stream that data (multicast?) – pose points (OSC), raw frames, skeleton overlay (video), outline, etc. (a minimal sender sketch follows this list)
    3. receive the stream to drive a CGI rendering engine using the skeleton data, etc.
    4. project that stream on a wall (or use all of the above streams as input to a video switcher/overlay)
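
    As a concrete illustration of steps 1 and 2 of this list, here is a minimal Python sketch of the edge-side capture/extract/send loop. It assumes MediaPipe Pose for the model and python-osc for the transport (the actual Pi build described below uses a TensorFlow Lite C++ demo plus libosc++); the IP, port, and /pose/... addresses are illustrative only.

        # Minimal edge-side sketch: capture frames, extract pose, send keypoints via OSC.
        # Assumes: pip install opencv-python mediapipe python-osc
        # The OSC address scheme, IP, and port are illustrative, not a standard.
        import cv2
        import mediapipe as mp
        from pythonosc.udp_client import SimpleUDPClient

        PC_IP, PC_PORT = "192.168.1.100", 7000              # rendering-engine PC (example values)
        client = SimpleUDPClient(PC_IP, PC_PORT)

        cap = cv2.VideoCapture(0)                           # USB or CSI camera
        pose = mp.solutions.pose.Pose(model_complexity=0)   # lightweight model for edge devices

        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV delivers BGR
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                for i, lm in enumerate(results.pose_landmarks.landmark):
                    # normalized (0..1) image coordinates plus a visibility score
                    client.send_message(f"/pose/p1/{i}", [lm.x, lm.y, lm.visibility])

        cap.release()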

    Hardware:

    • Edge Computing: Raspberry Pi 4 and Nvidia Jetson Nano are the target platforms I have. Google Coral may be a better low-cost alternative to the Raspberry Pi 4.
    • Camera: does not need to be high resolution; a USB webcam or a CSI-interface camera (ribbon cable, rPi camera, Arducam, etc.) will do.
    • Network: wired Ethernet is preferred over WiFi for installations to avoid interference. A single cable can connect the edge device to the PC, though the network configuration is a bit tricky.
    • Rendering Engine: a decently powerful computer with a capable graphics card running TouchDesigner, Unity, Unreal, or similar visual software.
    • Display: either a video wall or a projection setup.

    Options:

    • Multiple cameras can be used for 3D pose tracking (see the triangulation sketch after this list).
    • Stream video from the edge cameras to the rendering engine; a usable protocol is still to be determined.
    • Track multiple people, including people in contact with each other (dancing, acro-yoga, etc.).
    • Depth cameras: cameras that provide point cloud depth data could be used.
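
    As a rough illustration of the multi-camera option above, the sketch below triangulates a single joint seen by two calibrated cameras. It assumes OpenCV and 3x4 projection matrices (P1, P2) obtained from a separate calibration step; the matrices and pixel coordinates here are placeholders.

        # Sketch: triangulate one joint from two calibrated camera views with OpenCV.
        # P1 and P2 are placeholder 3x4 projection matrices; real ones come from calibration.
        import numpy as np
        import cv2

        P1 = np.eye(3, 4)                                               # camera 1 at the origin
        P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])   # camera 2, shifted 0.5 units

        # Pixel coordinates of the same joint (e.g. the right wrist) in each camera image
        pt1 = np.array([[320.0], [240.0]])
        pt2 = np.array([[300.0], [238.0]])

        homog = cv2.triangulatePoints(P1, P2, pt1, pt2)   # 4x1 homogeneous point
        xyz = (homog[:3] / homog[3]).ravel()              # 3D position in calibration units
        print("joint 3D position:", xyz)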

    STATUS:

    This is very much a work in progress (with uneven progress).
    18 Nov: I have gotten the camera/pose capture working and feeding points over the network to the PC via OSC, which feeds the data into TouchDesigner.

    Currently I'm taking notes on both the Pi and the PC (with multiple boot SD cards for different OSes on the Pi).

    Example Art Installations

    (insert links to still/video of pose tracking in interactive environments)

    https://vimeo.com/satmetalab

    https://user-images.githubusercontent.com/15977946/124654387-0fd3c500-ded1-11eb-84f6-24eeddbf4d91.mp4

    ----

    Oct 14

    10 steps in Pose2Art process

    (make a graphic of this flow)

    1. image capture
    2. pose extract
    3. pose render (optional)
    4. stream send: pose data (and optionally video)
    5. physical send (transport)
    6. physical rcv
    7. stream receive (see the listener sketch after this list)
    8. stream process
    9. render/overlay
    10. project/display
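
    On the PC side, steps 7–8 will normally be handled inside TouchDesigner (e.g. its OSC In CHOP), but a small stand-alone listener is handy for verifying that pose messages actually arrive. A minimal sketch, assuming python-osc and the illustrative port from the earlier sender sketch:

        # Sketch: PC-side OSC listener for debugging the pose stream (steps 7-8).
        # Assumes: pip install python-osc; port matches whatever the sender uses.
        from pythonosc.dispatcher import Dispatcher
        from pythonosc.osc_server import BlockingOSCUDPServer

        def on_message(address, *args):
            # e.g. address = "/pose/p1/16", args = (x, y, visibility)
            print(address, args)

        dispatcher = Dispatcher()
        dispatcher.set_default_handler(on_message)   # print every incoming OSC message

        server = BlockingOSCUDPServer(("0.0.0.0", 7000), dispatcher)
        print("listening for OSC on port 7000...")
        server.serve_forever()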

    Project Plan and this Page

    • This doc surveys tech options for each of the 10 stages listed
    • Survey existing solutions, focusing on newer ones with multi-person options
    • Find one that runs on my rPi4
    • Build out a demo using the rPi4 and TouchDesigner for rendering.

    Nov 20 status:

    The QEngineering Raspberry Pi image comes with TensorFlow Lite properly installed, along with a C++ demo of pose capture.  Adding libosc++ got it emitting OSC data. A fair bit of mucking around with static IPs, routes, and firewalls was required, but I finally got it working with the PC.  Found at least one TouchDesigner example of reading OSC pose data and got it working.  Looking into other demos, like a Kinect driving a TD theremin simulator.

    OSC (Open Sound Control) is currently chosen as the data transport. Its messages are VERY much user defined, and I have yet to see any 'standards' for how to name the pose data. Kinect tracked-point names might be useful.
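
    In the absence of a standard, one hypothetical naming scheme is to map the model's keypoint indices to joint names (Kinect-style, or the common 17-point COCO set used by many pose models) and build one OSC address per person per joint. The /p<N>/<joint> layout below is just an illustration, not an established convention:

        # Hypothetical OSC address scheme for pose data: one option, not a standard.
        # Keypoint order follows the common 17-point COCO convention used by many pose models.
        COCO_KEYPOINTS = [
            "nose", "left_eye", "right_eye", "left_ear", "right_ear",
            "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
            "left_wrist", "right_wrist", "left_hip", "right_hip",
            "left_knee", "right_knee", "left_ankle", "right_ankle",
        ]

        def osc_address(person_id: int, keypoint_index: int) -> str:
            """Build an address like /p1/left_wrist for person 1's left wrist."""
            return f"/p{person_id}/{COCO_KEYPOINTS[keypoint_index]}"

        # Example: send x, y, confidence for one joint with python-osc
        # client.send_message(osc_address(1, 9), [0.42, 0.73, 0.98])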

    Survey of System Demos

    Web searching turned up a LOT of links on pose estimation using machine learning. Some include source code repositories and documentation; others are academic papers or otherwise non-replicable demos.  This section is a summary of some of them.  Hopefully one will be found that actually works.

    30 Oct 2022: links below this update

    Attempting to run the demos has been interesting, with lots of classic dependency issues. Some Python pose examples were made to work, but alas very slowly.  The QEngineering rPi example...
