Project status: working!
All three hard/soft components have been implemented and tested:
- two hardware dongles (SAMD21 based)
- post-production analysis desktop software (proof-of-concept version)
Here are the results for two test files:
Side note: Someone more competent than me should have tackled this! I have NO formal compsci education except two programming introduction courses four decades ago! (Pascal and Fortran...) But here we go:
I've implemented an audio sync track using the 1 PPS signal from a GPS receiver (those pulses are typically accurate to within 10 nanoseconds). Between consecutive pulses, the UTC time of day is encoded into the track; in post-production it will be parsed by syncNmerge (a program yet to be written) that will merge sound clips with their corresponding video tracks (thanks to FFmpeg!), all before you import anything into your NLE.
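To make the syncNmerge idea concrete, here is a minimal sketch of what it could do once a timestamp has been decoded from the sync track: compute the offset between the camera's and the recorder's start times, then hand the actual merge to FFmpeg. The function name, file names, and timestamps are all made up for illustration; only the FFmpeg flags (`-itsoffset`, `-map`, `-c:v copy`) are real.

```python
from datetime import datetime

def merge_command(video_path, audio_path, video_start, audio_start):
    """Build an ffmpeg command that shifts the external audio so it
    lines up with the video, then muxes them into one file.

    A positive offset means the recorder started AFTER the camera,
    so the audio must be delayed by that many seconds."""
    offset = (audio_start - video_start).total_seconds()
    return [
        "ffmpeg",
        "-i", video_path,
        "-itsoffset", f"{offset:.6f}",   # applies to the NEXT input (the audio)
        "-i", audio_path,
        "-map", "0:v", "-map", "1:a",    # video from input 0, audio from input 1
        "-c:v", "copy",                  # don't re-encode the video
        "merged.mkv",
    ]

# Hypothetical decoded start times: recorder began 9 s after the camera.
cmd = merge_command(
    "clip.mp4", "track.wav",
    datetime(2024, 1, 1, 13, 41, 58),
    datetime(2024, 1, 1, 13, 42, 7),
)
```

The real program would of course loop over many clips and decode the timestamps itself; this only shows the alignment step.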
The UTC time of day is BFSK modulated. Initially I thought I would use ASK rather than FSK, so that (just maybe) NLE software that syncs by waveform analysis could make use of it (PluralEyes, Syncaila, FCP, Kdenlive, Resolve, any other?). But for now it's BFSK only: I don't want to fight with the various waveform-analysis algorithms of all those NLEs.
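For readers unfamiliar with BFSK: the idea is simply to represent a 1 bit with one audio tone and a 0 bit with another. The sketch below shows the principle; the tone frequencies, baud rate, and "HHMMSS" framing are assumptions for illustration, not the actual parameters used on the dongles.

```python
import math

# Hypothetical parameters -- the real tones, baud rate, and framing differ.
SAMPLE_RATE = 48000
F_MARK, F_SPACE = 1200, 2200   # Hz: tone for a 1 bit / a 0 bit
BAUD = 1200

def bfsk_modulate(bits):
    """Return a list of float samples encoding the bit string as BFSK.
    Phase is kept continuous across bit boundaries to avoid clicks."""
    samples = []
    phase = 0.0
    spb = SAMPLE_RATE // BAUD           # samples per bit
    for b in bits:
        f = F_MARK if b == "1" else F_SPACE
        step = 2 * math.pi * f / SAMPLE_RATE
        for _ in range(spb):
            samples.append(math.sin(phase))
            phase += step
    return samples

def time_to_bits(hh, mm, ss):
    """Encode the time of day as ASCII 'HHMMSS', 8 bits per character."""
    text = f"{hh:02d}{mm:02d}{ss:02d}"
    return "".join(f"{ord(c):08b}" for c in text)

# 48 bits at 1200 baud fit easily in the gap between two 1 PPS pulses.
wave = bfsk_modulate(time_to_bits(13, 42, 7))
```

The decoder side does the reverse: measure which of the two tones dominates in each bit period (e.g. with a Goertzel filter) and reassemble the bytes.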