Atomic Synchronator

Beyond '67-vintage SMPTE timecode: an affordable, GPS-based AUX sync-track format for dual-system sound or multicam setups: YaLTC

This is a cheap hardware dongle to help automatically sync the audio and video you're shooting. Yes indeed, yet another LTC... hence the name: YaLTC.

It should interest scientists who need to timestamp data recordings with sub-millisecond precision, as well as multicam/dual-system sound videographers:
• who don't want to fork out 200 CAD for a syncing program, nor 270 CAD for each recording device to sync (cams, audio recorders)
• who have run into the limitations of their editing software's waveform-analysis syncing.

Maybe I'll eventually distribute pre-flashed kits on Tindie (essentially off-the-shelf components). A GUI application is being considered for non-dev users (a lot of videographers DON'T have a working Python installation). Wrapping synced material under OpenTimelineIO is being explored.

Project status: working!

All three hard/soft components have been implemented and tested:

  • two hardware dongles (SAMD21 based)
  • firmware
  • post-production analysis desktop software (proof-of-concept version)

Here are the results for two test files:

Side note: someone more competent than me should have tackled this! I have NO formal compsci education except two introductory programming courses four decades ago (Pascal and Fortran...). But here we go:

I've implemented an audio sync track using the 1 PPS output of a GPS receiver (those pulses are typically accurate to 10 nanoseconds). Between pulses, the time of day is encoded into the track; in post-production it will be parsed by syncNmerge (a program to be written) that will merge sound clips with their corresponding video tracks (thanks to FFmpeg!), all before importing into your NLE.
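As a sketch of what syncNmerge could eventually do once the offsets are decoded (the program doesn't exist yet; the file names, the `merge_cmd` helper and the fixed offset below are hypothetical), muxing an external audio clip onto a video with FFmpeg might look like this:

```python
import subprocess

def merge_cmd(video, audio, offset_s, out):
    """Build an ffmpeg command that muxes an external audio file onto a
    video, skipping the first offset_s seconds of audio (the amount by
    which the recorder was started before the camera)."""
    return [
        "ffmpeg", "-y",
        "-i", video,                   # input 0: camera file (video kept)
        "-ss", f"{offset_s:.6f}",      # trim the head of input 1...
        "-i", audio,                   # ...the external audio recording
        "-map", "0:v", "-map", "1:a",  # video from cam, audio from recorder
        "-c:v", "copy",                # don't re-encode the video
        out,
    ]

cmd = merge_cmd("cam.mov", "zoom.wav", 3.185040, "synced.mov")
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```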

The UTC time of day is BFSK modulated. Initially I thought I would use ASK instead of FSK so that (just maybe) NLEs doing waveform-analysis syncing could use it directly (PluralEyes, Syncaila, FCP, Kdenlive, Resolve, any other?). But for now it's BFSK only: I don't want to fight with the various waveform-analysis algorithms of all those NLEs.
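For the curious, here's roughly what BFSK encoding of a time word looks like. This is a minimal sketch, not the actual firmware: the 48 kHz rate, the 10 ms bit duration and the 17-bit seconds-of-day framing are my assumptions (a later log mentions a 23-bit word); only the two tone frequencies, 1000 and 1800 Hz, come from the project:

```python
import numpy as np

FS = 48_000          # audio sample rate (Hz), assumed
F0, F1 = 1000, 1800  # space / mark tone frequencies (Hz), from the log
BIT_S = 0.010        # bit duration (s), hypothetical

def bfsk_encode(bits):
    """Return a float32 waveform encoding the bit string, phase-continuous
    across bit boundaries to avoid clicks."""
    phase = 0.0
    out = []
    n = int(FS * BIT_S)
    t = np.arange(n) / FS
    for b in bits:
        f = F1 if b == "1" else F0
        out.append(np.sin(phase + 2 * np.pi * f * t))
        phase = (phase + 2 * np.pi * f * n / FS) % (2 * np.pi)
    return np.concatenate(out).astype(np.float32)

# encode 13:37:05 UTC as seconds-since-midnight, 17 bits (framing is a guess)
word = format(13 * 3600 + 37 * 60 + 5, "017b")
signal = bfsk_encode(word)
```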

  • 1 × SAMD21 board, e.g. Adafruit Feather M0 Proto, 20 USD
  • 1 × GPS module with 1 PPS output, e.g. Adafruit Feather Ultimate GPS, 40 USD
  • 1 × single-supply op-amp for the mid-level reference (aka virtual ground); I'm using an LM358
  • 1 × Arduino IDE, Python and FFmpeg installed

  • Syncing in the kitchen

    Raymond Lutz, 2 days ago

    For now, enjoy this dual-system sound demo. Stay tuned for a multicam demo with Olive and OpenTimelineIO.

    camera file:
    Below is the Zoom WAV file. Visuals done with FFmpeg (the right channel, in green, is the YaLTC track; the left channel, in red, is fingers snapping):
    Postprocessing with

    automatically synced!

  • Synced!

    Raymond Lutz, 09/11/2021 at 14:17

    As a test to assess system performance, the same pilot tone (a continuous 220 Hz sine wave) was recorded simultaneously by two devices, a ZOOM H4n Pro and a FUJIFILM X-E1, each alongside its Atomic Synchronator dongle (the recordings were started one after the other, so they were out of sync by approx. 3 seconds).

    Each device recorded two channels: the common 220 Hz sine and the YaLTC signal from its respective synchronator (an Adafruit Feather based one for the H4n and a Trinket M0 one for the Fuji). The camera was used as a stereo sound recorder, the movie itself being discarded.

    The two tone tracks have been synchronized using the timestamps deduced from their YaLTC tracks and mixed into a single stereo file (the earlier recording is simply trimmed by the exact number of samples so that both start at the same time). Below is the decoding program's output; we see the H4n (ZM-YaLTC.wav) was started 39.247354 − 36.062314 seconds earlier. Take note: 7 out of 8 of those yummy digits are significant!
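The trimming step is conceptually simple; here's a sketch (the `align` helper is hypothetical, and it assumes both recordings share the same sample rate):

```python
import numpy as np

def align(a, b, t_a, t_b, fs):
    """Trim the head of whichever recording started first, so both arrays
    begin at the same UTC instant. t_a, t_b are the decoded YaLTC
    timestamps (in seconds) of each recording's first sample."""
    offset = int(round((t_b - t_a) * fs))  # samples by which a leads b
    if offset >= 0:
        return a[offset:], b
    return a, b[-offset:]

# with the numbers from the log, illustrated at 44.1 kHz:
fs = 44_100
a, b = np.zeros(10 * fs), np.zeros(10 * fs)
a2, b2 = align(a, b, 36.062314, 39.247354, fs)  # a (the H4n) leads by ~3.185 s
```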

    The precision of the sync process is then given by how much the left and right 220 Hz recordings are out of phase:

    They're out of sync by a mere 4 samples: an 83 μs offset! (For reference, typical video frames are spaced 33,000 μs apart.) Cool.
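That residual can be measured by cross-correlating the two 220 Hz channels. A self-contained sketch with simulated signals (48 kHz and the window/lag choices are assumptions):

```python
import numpy as np

FS = 48_000
t = np.arange(FS) / FS
left = np.sin(2 * np.pi * 220 * t)
right = np.roll(left, 4)          # simulate a 4-sample residual offset

# correlate a short window of `left` against shifted windows of `right`
win, start = FS // 10, 1000
lags = range(-20, 21)
xc = [np.dot(left[start:start + win], right[start + k:start + k + win])
      for k in lags]
best = list(lags)[int(np.argmax(xc))]  # residual offset in samples
offset_us = best / FS * 1e6            # same, in microseconds
```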

  • But it's only parts per million!

    Raymond Lutz, 08/05/2021 at 01:04

    The Atomic Synchronator encoding is precise enough to evaluate each device's *effective* sample rate: my Nikon D300S audio is spec'd at 44100 Hz but really runs at 44099.688 Hz (all digits significant), a discrepancy more simply expressed in ppm (here 7 ppm). My Fuji X-E1 shows an error of 33 ppm. Why is this consequential? Because when syncing sound and video from two distinct devices at the clip start, the tracks will slowly drift apart, and for long enough takes (interviews, live shows) it becomes noticeable.

    How long, you ask? Take the maximum acceptable offset between video and sound to be 15 milliseconds (cf. link); this small interval would be 33 ppm of the whole recording, hence T = (1e6/33)·15e-3 = 454 seconds! After about 8 minutes the drift is out of spec... The solution? Split the tracks and realign them each time the offset exceeds 15 ms. My software suite will have a dedrift program for correcting takes from high-drift devices. Ffmpeg-based, for sure, but how? Will I have to interpolate frames? Ouch.
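The back-of-the-envelope above, as a quick check (a sketch; the 15 ms budget is the figure cited in the log, and `drift_limit_s` is a hypothetical helper name):

```python
def drift_limit_s(clock_error_ppm, max_offset_s=15e-3):
    """Seconds of recording before a clock-rate error of clock_error_ppm
    accumulates max_offset_s of audio/video drift."""
    return max_offset_s / (clock_error_ppm * 1e-6)

t_fuji = drift_limit_s(33)   # X-E1 at 33 ppm: ~454 s, under 8 minutes
t_nikon = drift_limit_s(7)   # D300S at 7 ppm: ~2143 s, about 36 minutes
```

So the 7 ppm Nikon buys roughly five times as much take length as the 33 ppm Fuji before a realign is needed.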

  • Important milestone: 2FSK demodulation working!

    Raymond Lutz, 07/10/2021 at 04:01

    Pop the champagne!

    With the Canon, acoustic coupling:

    Next step: whipping up a program 'syncNmerge' with ffmpeg as engine.

  • Almost a Kickstarter sales pitch

    Raymond Lutz, 06/22/2021 at 01:09


  • Back to a square

    Raymond Lutz, 06/15/2021 at 23:16

    I didn't tell you I changed the synchronization pulse yet again, did I? It's a square! Other shapes are too distorted when acoustic coupling is used, making robust detection impossible. See and hear for yourself the result (filmed on a Nikon D300S with mic input):

  • Working on doc

    Raymond Lutz, 06/15/2021 at 01:15



  • Found the right sync pulse shape

    Raymond Lutz, 06/02/2021 at 16:45

    But still, I'm stuck with strange artifacts (nothing too serious)... Note the exponential decay in the camera recording (a Canon PowerShot ELPH 330). Gonna do the same measurements with my Nikon D300S.

  • Celebrated too early

    Raymond Lutz, 06/01/2021 at 21:00

    Well, I rejoiced too soon. A much larger sample (a 10 min take) revealed that pulse localization is much less robust than I thought... back to the drawing board.

    Restating project goals

    For those who just jumped in, here's a recap: I want to design and implement an audio time code (and accompanying hardware) for automated video synchronization that is cheaper than existing solutions, works on cameras without a sound input, and uses off-the-shelf components.

    Acoustic coupling challenges

    I want to localize the pulses with the highest possible precision in the recorded audio. Typical cameras sample at 48 kHz, so I'm aiming at a maximum variation of plus or minus 4–5 sampling periods, i.e. ±100 μs. For now I'm still exploring which pulse shape best avoids transients in the recordings.
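Whatever the final pulse shape, localizing it to one sample can be done with a matched filter (cross-correlation against the known template). A toy sketch — the square template, noise level and pulse position are all made up for illustration:

```python
import numpy as np

FS = 48_000

# hypothetical template: one cycle of a 1 ms square pulse
template = np.concatenate([np.ones(24), -np.ones(24)])

# synthetic recording: noise, with the pulse buried at sample 10000
rng = np.random.default_rng(0)
signal = 0.05 * rng.standard_normal(FS)
signal[10_000:10_000 + template.size] += template

# matched filter: the correlation peak locates the pulse to one sample
corr = np.correlate(signal, template, mode="valid")
loc = int(np.argmax(corr))
```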

    Fighting the camera's automatic gain control

    I generate this symmetrical shape from the SAMD21 DAC output and feed it to a small headphone speaker:

    And here’s the resulting sound recording, note the asymmetry now present:

    I guess I should slow down the attack to avoid triggering the AGC too hard, and I should generate a signal that is already odd-symmetric (odd vs. even symmetry).

    Something like a tri-level sync pulse (surprise!), aiming at the zero-crossing point as the timing anchor.

    Time-of-day (UTC) encoding is done; it's part of the 23-bit BFSK word following the aforementioned sync pulse, whose location I'm still optimizing:

    Note: I don't modulate the amplitude; it's an artifact of some non-flatness in the response curves (mic, speaker, air gap?). The two frequencies used are 1000 and 1800 Hz.
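Demodulation in post can compare per-bit energy at those two tones; the Goertzel algorithm is one classic way to do it (a sketch, not the actual decoder: the 48 kHz rate and 10 ms bit window are assumptions, only the 1000/1800 Hz pair is from the log):

```python
import numpy as np

FS = 48_000
F0, F1 = 1000, 1800   # the two tones named in the log
BIT_S = 0.010         # assumed bit duration

def goertzel_power(x, f, fs):
    """Single-bin DFT power at frequency f (Goertzel algorithm)."""
    k = 2 * np.cos(2 * np.pi * f / fs)
    s1 = s2 = 0.0
    for v in x:
        s0 = v + k * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - k * s1 * s2

def bfsk_decode(sig):
    """Decide each bit by comparing tone energies over its window."""
    n = int(FS * BIT_S)
    bits = ""
    for i in range(0, len(sig) - n + 1, n):
        chunk = sig[i:i + n]
        bits += "1" if goertzel_power(chunk, F1, FS) > goertzel_power(chunk, F0, FS) else "0"
    return bits

# round-trip check against a synthesized BFSK signal
t = np.arange(int(FS * BIT_S)) / FS
tone = {b: np.sin(2 * np.pi * (F1 if b == "1" else F0) * t) for b in "01"}
sig = np.concatenate([tone[b] for b in "10110001"])
```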

  • Acoustical problems

    Raymond Lutz, 05/25/2021 at 00:59

    Aaaahh... A direct recording of the sync pulse is OK. See the second track below: exactly 20 cycles. "Direct" means an electrical connection between the board and my computer's audio line-in.

    But acoustic coupling shows transients that will make postprocessing harder...

    Acoustical coupling for board → speaker → mike → computer ADC

    Instead, I'll try shaping the pulse with a Gaussian on the SAMD21 to avoid those.

View all 16 project logs

  • 1. Lowering costs

    For now, I'm building synchronators with off-the-shelf boards; here an Adafruit Ultimate GPS FeatherWing (25 USD) and a SAMD21 Adafruit Trinket M0 (9 USD):

    Missing from the cabling is the Vcc/2 signal-reference circuit: for the audio output, I didn't want to use a high-pass filter to remove the DC offset (the sinusoidal DAC signal spans 0 to 3.3 V, hence oscillates around 1.65 V).

    For this, I'm using an LM358 to buffer a Vcc/2 reference, sinking or sourcing 10 mA max (the same limit as the SAMD21 DAC output).

    A motivated maker could lower the GPS cost by using a custom breakout board and a raw Quectel L80 module. Total project cost: 18 USD + a USB power bank.

    How's that for a sub-frame (20 μs) zero-drift, no-need-to-jam-sync solution?



Sean McVeigh wrote 01/03/2021 at 22:25 point

Considering this is an indoor device, why not use a chip-scale atomic clock?
EDIT: never mind, I thought the price had come down; although you could sell it as a pro product :D


Raymond Lutz wrote 01/03/2021 at 22:36 point

Ha ha! You got me on that one! I thought you were joking until I looked it up: those devices DO exist! But for now they're prohibitively expensive...

Anyway, you'd still have the hassle of jam-syncing them before use... Also, this GPS chipset works OK indoors, even in a modern building with steel framing (though in a basement, reception is spotty).


Sean McVeigh wrote 01/03/2021 at 22:47 point

Yeah I worked on one in grad school (modded as magnetometer):


Sean McVeigh wrote 01/03/2021 at 22:51 point

Wait, I assumed you only need one clock source? Use Ethernet to sync the capture devices:

This is the technique used for multi-AP clock sync for indoor-location triangulation, often with GPS as the clock source. I didn't look too carefully at whether the recent updates meet your timing requirements.

