Somebody asked how the pseudocode worked out what timeslot it should use.
Each tracker device is assigned an ID, which in turn we use to calculate the device's timeslot for transmission.
We have an access slot window of 333ms (packet on-air duration is ~280ms)
We require transmissions once every minute, which gives 180 devices per radio channel.
We have a precise 1 pulse-per-second (PPS) reference signal from the GPS
Therefore each device is assigned a 'seconds past the minute' timeslot, AND a 'milliseconds past the second' timeslot:
- Device 1 transmits on 'second timeslot' = 00 and 'millisecond timeslot' = 000
- Device 2 transmits on second 00 and millisecond 333
- Device 9 transmits on second 02 and millisecond 666
On each PPS we set an interrupt flag.
In the main LOOP() program, we only perform other actions (get GPS position, get battery voltage, etc.) if the PPS interrupt flag is NOT set. We also keep these other actions really short, so that if a PPS interrupt arrives whilst we are performing an action, we can exit quickly and service the interrupt.
In the main LOOP() we are continually fetching and updating the current time from the GPS (hh:mm:ss) - and we store the SECONDS (ss) in a local variable each time it is updated.
When our PPS interrupt is flagged, we ADD ONE SECOND to the current time, and if this second value matches our device's 'second timeslot', we then DELAY() by our 'millisecond timeslot' value (possibly zero), then transmit our packet.
Why do we add one second to the current time?
Because at the exact moment the PPS interrupt is triggered, our system has not yet had a chance to fetch the new timestamp from the GPS - but we know time has advanced by one second, as we have just received the PPS pulse from the GPS! The value of the stored timestamp is exactly 1 second out of date.
The worst that can happen is that the PPS interrupt flag is set just after we have initiated a serial read from the GPS to update the time. We only receive one byte from the serial port before we check for a PPS flag, so there is no danger of the stored timestamp being updated before we act on it - system time is only updated once a complete NMEA sentence has been received and parsed by the microcontroller, and certainly not after just one byte.
Keeping a fast LOOP() is crucial: every action must complete quickly (so we iterate around the loop and back to servicing the PPS routine quickly), and must be skippable if a PPS flag comes along.
What is the minimum window time we can use?
Our main loop() code completes in 7.8 microseconds (worst case scenario).
Jitter - the difference in start time between subsequent transmissions in the best and worst case scenarios - varies between 0 and 7.8 microseconds
If our PPS flag happens to be set when we are at the point in the loop() code that checks for the PPS flag, then we have the minimum delay before it is acted upon.
If we have just finished checking for a PPS flag and are now busy reading a byte from the GPS over serial, we have to wait for this to finish, then skip over all other checks before the loop() starts again and we service the interrupt flag. This is measured at 7.8 microseconds.
We can then theoretically have timeslots with, say, 10 microsecond guard period, if we have a precise delay(microseconds) function which itself does not have any jitter, and our activity (transmitting) within each window itself is not subject to jitter.
In our scenario, if our LoRa transmission takes 280ms ±1ms, we could easily define TDMA access slots 282ms apart, i.e. with a 1ms guard period either side of each transmission window, which easily accounts for the jitter in our loop() timing.
Moving from a TDMA slot of 333ms to 282ms gives us 212 slots per minute - an extra 32 slots/devices we can squeeze onto our 1 minute cycle.
Clearly the limiting factor for the number of devices per radio channel is the time-on-air of each device - and any limitation in our receiver (perhaps the receiver AGC recovery time would not allow us to make such rapid successive receptions - especially if a distant, weak tracker followed a nearby strong signal which made the receiver 'deaf' for a few milliseconds).
What is the limit of the hardware?
We can no doubt be more elegant with this solution - for example, only read serial from the GPS for the 900ms AFTER a PPS has taken place, then for the following 100ms sit patiently in a very tight loop() doing nothing but waiting to service the PPS interrupt. Written in assembly, our jitter can be significantly reduced, down to a few clock cycles: loop iteration (~6 cycles), write variable to memory (1-2 cycles), compare two variables in memory (1-2 cycles), interrupt latency (16 cycles) - roughly 30 cycles, or ~0.6 microseconds at 48MHz.