I started with the Adafruit PyBadge, a 120 MHz ATSAMD51-based board that can be easily reflashed using a variety of UF2 images. I put on the latest stable CircuitPython, 7.0, and updated the .mpy files in the filesystem to the latest compatible versions. I'd previously used the board to make a personal badge for our last big in-person meeting at my employer, so I archived away all the files from that project to make a clean slate. I'd also made a case for it from a Thingiverse project earlier in 2021 after getting my 3D printer, and that stayed in place.
The first thing I needed to get working was the MP3 playback. The original MP3 file for the song was about 7MB, far larger than the 2MB flash storage on the device. However, it also had way too much data for the little speaker on the PyBadge to render, so I loaded it into Audacity, mixed it down to mono, then output it as a low-bitrate VBR MP3 file. This reduced the size to 1.2MB. Later, I would take it further to a 32 Kbps CBR MP3 (880K) as part of optimization.
To play back the audio, CircuitPython has an audiomp3 module that hooks into its playback system. Upon trying it, I found the volume extremely loud, so I had to set up an audiomixer instance and route the audio through it for level control. The MP3 decoder has a nice rms_level property that reports the volume level of a recently decoded sample, and seeing that led to using the front NeoPixels as a VU meter.
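The VU meter boils down to mapping a volume reading onto a handful of pixels. Here's a minimal sketch of that mapping; the full_scale calibration constant is hypothetical (the real value depends on the track and the mixer level), and the hardware-facing NeoPixel code is omitted:

```python
def vu_pixel_count(rms_level, num_pixels=5, full_scale=1000):
    """Map a decoder rms_level reading to how many NeoPixels to light.

    full_scale is a hypothetical calibration constant; tune it to taste
    for the actual track and mixer settings.
    """
    level = min(rms_level / full_scale, 1.0)  # clamp to 0.0..1.0
    return round(level * num_pixels)
```

In the main loop, you'd read the decoder's rms_level each pass and light that many of the PyBadge's five front NeoPixels.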
Getting the actual word clock working took a lot of manual effort. I found the lyrics, separated all the words onto their own lines, converted them to lower case, sorted them, ran a unique filter to remove the duplicates, then went through and looked for any weirdness. In the end, the song has only 19 unique words, two of which are variations that can share a spot in the clock. I tried for an 8×8 layout, but there wasn't enough room, so with some reorganization I settled on a 12×7 matrix.
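The lowercase/sort/unique pipeline above is a one-liner in Python; here's a small sketch of it (the sample lyric text is illustrative, not the actual processed file):

```python
def unique_words(lyrics):
    """Lowercase the lyrics, split into words, deduplicate, and sort."""
    return sorted(set(lyrics.lower().split()))

# Illustrative input only
sample = "Work it harder Make it better Do it faster Makes us stronger"
print(unique_words(sample))
```

This is essentially the same thing as the classic shell pipeline `tr | sort | uniq`, just done in one place.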
To figure out the cue timing, I went back to Audacity. The tool has a nice feature called a label track, where you can quickly add a label with text at any timestamp in the song. After a couple of hours of scrubbing and tagging, I had a fully marked-up song, and the label export gave me a text file that I was able to massage into a Python data structure. Of course, this being Daft Punk, there are parts where the lyrics are so manipulated that I just had to put my hands in the air and give up until something became intelligible again.
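Audacity's label export is a simple tab-separated file (start time, end time, label text per line), so turning it into a cue list takes only a few lines. A sketch of that massaging step, assuming point labels whose text is the words to highlight:

```python
def parse_labels(text):
    """Parse an Audacity label export (start<TAB>end<TAB>text per line)
    into a list of (seconds, words) cues."""
    cues = []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        start, _end, label = line.split("\t")
        cues.append((float(start), label.split()))
    return cues

# Illustrative export snippet, not the real label file
export = "1.500000\t1.500000\twork it\n3.250000\t3.250000\tharder\n"
cues = parse_labels(export)
```

From here the cue list can be pasted into the source as a literal, avoiding any parsing on the device itself.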
In the Python code, I made a WordCloud class that embedded all the cues, the position of each word in the grid, and the grid data itself. The grid is implemented with displayio and adafruit_display_text, using the built-in font, which happens to be 6×12 monospaced. I originally scaled it all 2×, which filled the screen nicely, but it also caused performance problems, so I ended up leaving it unscaled and centered.
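The grid bookkeeping can be sketched separately from the displayio rendering. The class and names below are hypothetical stand-ins for the original WordCloud class, just to show the word-to-cell mapping:

```python
class WordGrid:
    """Minimal sketch of the grid bookkeeping; display code omitted.

    positions maps each word to its (row, col) cell in the 12x7 matrix.
    """

    def __init__(self, positions):
        self.positions = positions

    def cells_for(self, words):
        """Return the grid cells to highlight for the given words."""
        return [self.positions[w] for w in words if w in self.positions]

# Illustrative positions only
grid = WordGrid({"work": (0, 0), "harder": (0, 1)})
```

On the device, each cell would correspond to an adafruit_display_text label whose color is toggled between dim and bright.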
The timing is driven by a list of cues, each with a timestamp and a list of words to highlight. An empty word list means nothing is highlighted, so I had to add such cues for the sections of the song with no lyrics. The code uses the first cue as a trigger to turn off the VU meter and show the word clock.
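Finding the active cue for a given playback time is a lookup into that sorted list; a binary search keeps it cheap even with hundreds of cues. A sketch under the assumption that cues are (timestamp, words) tuples sorted by time:

```python
import bisect

def active_cue(cues, t):
    """Return the word list of the most recent cue at or before time t.

    cues is a list of (seconds, words) tuples sorted by timestamp;
    an empty word list clears all highlights.
    """
    times = [ts for ts, _words in cues]
    i = bisect.bisect_right(times, t) - 1
    return cues[i][1] if i >= 0 else []

cues = [(1.5, ["work", "it"]), (3.0, []), (4.2, ["harder"])]
```

In practice the main loop can also just walk the list forward with an index, since playback time only moves in one direction.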
Once I got everything together, I hit some real issues that keep this from going to the next level. My demo video cuts off at a certain point on purpose: after that, the text highlights drift further and further out of sync with the music. Increasing the buffer size in the mixer helped a little, but I suspect that the MP3 playback system is running a little slow. I've seen similar issues matching closed captions to video content in my day job. It would be really nice if the MP3 decoder could provide a position property indicating how far into the file playback has progressed. With that, I could keep in sync even if playback were a little slow.
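To make the idea concrete: with such a position property, cue lookup would be driven by what the decoder has actually played rather than by the wall clock. The sketch below assumes a hypothetical decoder position in seconds; no such property existed in CircuitPython's MP3Decoder at the time of writing:

```python
def playback_drift(elapsed_s, decoder_position_s):
    """How far playback lags behind the wall clock, in seconds.

    decoder_position_s is a hypothetical value the MP3 decoder
    would report; positive drift means playback is running slow.
    """
    return elapsed_s - decoder_position_s

def cue_time(elapsed_s, decoder_position_s):
    """Pick the time to use for cue lookup: prefer the decoder's
    actual position, falling back to the wall clock if unavailable."""
    return decoder_position_s if decoder_position_s is not None else elapsed_s
```

Even a coarse position reading would bound the error, since the cue lookup would be re-anchored on every loop pass instead of accumulating drift.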
Good news! I made a patch to CircuitPython, and now audio sync works great.