• 2023 Hackaday Supercon Badge Hack - Vector Video

    11/08/2023 at 20:39

    This year for SuperCon, I took the awesome vectorscope badge that the team had created and got it to play some short video clips from the Raspberry Pi Pico's 2MB of flash space on its 1.8" round LCD screen with a GC9A01 driver chip.

    This required trying out some new techniques, and I'm happy to publish details on how it works and how you can apply this to your own Pico projects too.

    First, my GitHub repo for the hack is published at https://github.com/unwiredben/vector-video/.  This is a PlatformIO-based project, so it's written in C++ and builds to a UF2 file you can flash onto the badge.  It's likely this will work on non-badge hardware like these Waveshare development boards I've ordered, but the code will need to be modified to account for the lack of a "user" button.

    While doing development, I ran into a number of issues, but also some non-issues.  It wasn't hard to get the video files embedded into the UF2 image.  The RP2040 has a nice mechanism for treating its attached SPI flash as a flat address space, with the hardware managing a memory cache to page in sections of the flash as needed.  From a program point of view, you just include the video as a read-only array of bytes.  There is an option in the PlatformIO tools to partition the flash into part program memory and part filesystem.  That may help reduce the time to flash a new image, since you wouldn't need to reload the movie data each time, but it would add complexity to the code that feeds the video decoder.
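    The embedding technique above can be sketched in a few lines.  This is illustrative only -- the array name and its 4-byte placeholder contents are my own, not from the repo:

```cpp
#include <cstdint>
#include <cstddef>

// On the RP2040, const data lives in the SPI flash and is read through the
// XIP (execute-in-place) cache, so a movie can be linked into the firmware
// as an ordinary array (e.g. generated with `xxd -i movie.mpg`).
const uint8_t movie_data[] = { 0x00, 0x00, 0x01, 0xBA };  // MPEG pack start code
const std::size_t movie_data_len = sizeof(movie_data);

// The decoder can then read straight out of flash with no copy into RAM:
const uint8_t *movie_cursor = movie_data;
uint8_t next_byte() { return *movie_cursor++; }
```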

    Decoding Video

    The video decoder I used was a modified version of pl_mpeg, an amazing single-file MPEG-1 decoder developed by Dominic Szablewski, aka phoboslab.  He originally published it in 2020, but I didn't notice it until earlier this year, when he published an image format, Quite OK Image (QOI), that got some attention on Hacker News.  The code is quite easy to read and adapt, and I first tried it out to make a movie player for the Badger 2040, an eInk-based badge by Pimoroni.

    My original intention was to turn the player code into a MicroPython add-on, but I quickly realized that the memory usage wasn't going to make that viable.  When I first tried to play a short video clip, the badge hung.  To identify the source of the allocations, I patched pl_mpeg's macros for overriding malloc/realloc, and found that the code allocating three frames of reference data for use during decoding was failing: there's just not enough of the RP2040's 264K of RAM left to handle that.  Since the round screen has an effective resolution of 240x240, you can verify this with some math:

    • Y plane: 240x240 = 57,600 bytes
    • Cr and Cb planes: 120x120 = 14,400 bytes each
    • Three reference frames: 3 x (57,600 + 2 x 14,400) = 259,200 bytes, which leaves almost nothing for the rest of the program
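    The arithmetic can be double-checked with a couple of compile-time constants (the constant names here are mine, purely for illustration):

```cpp
#include <cstddef>

// Frame-buffer arithmetic for a 240x240 MPEG-1 stream.  Chroma is
// subsampled 2:1 in each dimension, so the Cr/Cb planes are 120x120.
constexpr std::size_t kLumaBytes   = 240 * 240;   // 57,600 per frame
constexpr std::size_t kChromaBytes = 120 * 120;   // 14,400 per chroma plane
constexpr std::size_t kFullFrames  = 3 * (kLumaBytes + 2 * kChromaBytes);
constexpr std::size_t kLumaFrames  = 3 * kLumaBytes;  // luma-only alternative
```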

    However, storing just the luma (Y) reference frames fits into 172,800 bytes, which does fit into the available memory.  So, I modified the pl_mpeg code to have a luma_only mode.  This ended up being fairly easy -- there were three parts of the code where it was picking which plane to access.  In both plm_video_copy_macroblock and plm_video_interpolate_macroblock, the code runs the macroblock's instructions over all three planes, so omitting the writes to the non-existent color planes was easy.  The tricky one was plm_video_decode_block; I'd originally just aborted processing with an early return when the code got to selecting the color planes, but that left me with random macroblocks colored all white in the output.  After some debugging, I realized that I needed to skip only the code that modified the Cr/Cb blocks, but still do the other processing, because it had intentional side effects on the decoder state.  Once I fixed that, I got crystal-clear decoding.
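    The shape of that fix can be sketched like this.  It's a simplified stand-in, not the actual pl_mpeg source -- decode_block, Planes, and the state variable are all hypothetical, standing in for the real per-block decoding work:

```cpp
#include <cstdint>
#include <vector>

// In luma_only mode, the chroma vectors are simply never allocated.
struct Planes {
    std::vector<uint8_t> y;   // always allocated
    std::vector<uint8_t> cr;  // empty in luma_only mode
    std::vector<uint8_t> cb;  // empty in luma_only mode
};

// The key lesson: skip only the stores into Cr/Cb, but still run the rest
// of the block processing, because it has side effects on decoder state.
// An early return here produced the all-white macroblocks described above.
int decode_block(Planes &out, int plane_index, bool luma_only, int &state) {
    int coefficient = plane_index + 1;  // pretend this came from the bitstream
    state += coefficient;               // side effect that must always happen
    bool is_chroma = plane_index > 0;
    if (!(luma_only && is_chroma)) {
        // only write to planes that actually exist
        std::vector<uint8_t> &dst =
            plane_index == 0 ? out.y : (plane_index == 1 ? out.cr : out.cb);
        dst.push_back(static_cast<uint8_t>(coefficient));
    }
    return coefficient;
}
```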

    Displaying the Video Frames

    The pl_mpeg code was running in a mode where it just returned a frame that's ready for output after each call to plm_decode_video...
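    Since only the luma plane exists in luma_only mode, each decoded frame can be pushed to the panel as grayscale.  Here is a minimal sketch of the pixel conversion, assuming the GC9A01 is configured for 16-bit RGB565 color (luma_to_rgb565 and convert_row are hypothetical helpers, not names from the repo):

```cpp
#include <cstdint>
#include <cstddef>

// Expand one 8-bit luma value into a grayscale RGB565 pixel:
// 5-bit red, 6-bit green, and 5-bit blue all taken from the same luma.
static inline uint16_t luma_to_rgb565(uint8_t y) {
    return (uint16_t)(((y >> 3) << 11) | ((y >> 2) << 5) | (y >> 3));
}

// Fill one output scanline from a row of the decoded luma plane.
void convert_row(const uint8_t *luma_row, uint16_t *out, size_t width) {
    for (size_t i = 0; i < width; ++i) {
        out[i] = luma_to_rgb565(luma_row[i]);
    }
}
```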
