
Interrupts Are Stupid

A project log for Random Tricks and Musings

Electronics, Physics, Math... Tricks/musings I haven't seen elsewhere.

Eric Hertz 02/16/2023 at 09:13

Imagine you've designed a clock...

Each time the seconds go from 59 to 0, it should update the minutes.

Imagine it takes three seconds to calculate the next minute from the previous.

So, if you have an interrupt at 59 seconds to update the minute-hand, the seconds-hand will have fallen three seconds behind by the time the minute-hand is updated.

...

Now, if you knew it takes three seconds to update the minute-hand, you could set up an interrupt at 57 seconds, instead of 59.

Then the minute-hand would land on the new minute exactly as the previous minute ends.

BUT the seconds-hand would freeze at 57 seconds, because updating the minute consumes the entire CPU for those three seconds. If you're clever, the seconds-hand would at least jump straight to 0 along with the minute's increment.

....

Now... doesn't this seem crazy?

...

OK, let's say we know it takes 3 seconds of CPU-time to update the minute-hand. But, in the meantime, we also want to keep updating the seconds-hand. So we start updating the minute-hand *six* seconds early... at 54 seconds. That leaves six half-seconds for updating the seconds-hand, in realtime, once per second, and three split-up seconds for updating the minute-hand once.

But, of course, there's overhead in switching tasks, so say this all starts at 50 seconds.

...

At what point do we say, "hey, interrupts are stupid" and instead ask "what if we divided-up the minutes-update task into 60 steps, each occurring alongside the seconds-update?"

What would be the overhead in doing-so?

...

So, sure, it may be that doing it once per minute requires 3 seconds, but it may turn out that the interrupt-overhead is 0.5 times that, due to pushes and pops, and loading variables from SRAM into registers, etc.

And it may well turn out that dividing that 3 seconds across six will require twice as much processing time due to, essentially, loading a byte from SRAM into a register, doing some small calculation, then storing it back to SRAM, only to perform the same load-process-store procedure again a second later...

But, if divided-up right, one can *both* update the seconds-hand and calculate/update the minutes-hand every second; no lag on the seconds-hand caused by the minutes' calculation. More total CPU time, maybe, but never concentrated into one big lump that stalls everything else.

No lag caused by a slew of push/pops.
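Something like this, perhaps. This is only a minimal sketch in C of the shape of the idea; the function names here are made-up stand-ins for the "fast" seconds-hand work and the "slow" minutes-hand calculation, not anyone's actual clock code:

```c
#include <stdint.h>

static uint8_t seconds     = 0;
static uint8_t minute_step = 0;        /* which of the 60 slices we're on    */

/* Hypothetical routines, standing in for the quick and slow work: */
void draw_seconds_hand(uint8_t s);     /* quick: redraw the seconds-hand     */
void draw_minute_hand(void);           /* quick: show the precomputed result */
void minute_calc_slice(uint8_t step);  /* 1/60th of the 3-second calculation */

void one_second_tick(void)
{
    draw_seconds_hand(seconds);        /* realtime part first: never lags      */

    minute_calc_slice(minute_step++);  /* then one small bite of the slow part */

    if (++seconds >= 60) {
        seconds     = 0;
        minute_step = 0;
        draw_minute_hand();            /* all 60 slices done: ready instantly  */
    }
}
```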

...

And if done with just a tiny bit more foresight, no lag caused by the hours-hand, either.

...

Now, somewhere in here is the concept I've been dealing with off-n-on for roughly a decade. REMOVE the interrupts. Use SMALL-stepping State-Machines, with polling. Bump that main-loop up to as-fast-as-possible.
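In rough strokes, the structure looks something like the following. Again, just a sketch with invented names, not the original firmware: every "task" is a tiny state machine whose step function does one small, bounded piece of work and returns quickly, and the main loop simply polls them all, round-robin, with no interrupts anywhere.

```c
/* Hypothetical step functions; each does one small piece of work and returns. */
void hardware_init(void);   /* one-time setup                                  */
void adc_sm_step(void);     /* grab an ADC sample if one is ready              */
void sdcard_sm_step(void);  /* feed a few more bytes of the block being written */
void kbd_sm_step(void);     /* sample the bitbanged keyboard line once         */
void lcd_sm_step(void);     /* push the next byte/command to the SPI LCD       */
void eeprom_sm_step(void);  /* advance any pending EEPROM write                */

int main(void)
{
    hardware_init();

    for (;;) {              /* 10,000+ passes per second, every pass bounded   */
        adc_sm_step();
        sdcard_sm_step();
        kbd_sm_step();
        lcd_sm_step();
        eeprom_sm_step();
    }
}
```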

With an 8-bit AVR I was once able to sample audio at roughly its max of 10KS/s, store it to an SD-Card, sample a *bitbanged* 9600-baud keyboard, write to an SPI-attached LCD, write to EEPROM, and more... with a guaranteed 10,000 loops per second, averaging 14,000. ALL of those operations handled *without* interrupts.

Why? Again, because if, say, I'd used a UART-RX interrupt for the keyboard, it'd've taken far more than 1/10,000th of a second to process it, between all the necessary push/pops and the processing routine itself (looking up scancodes, etc.), which would've interfered with the ADC's 10KS/s sampling, which would've interfered with that sample's being written to the SD-Card.

Instead, for example: I knew the keyboard *couldn't* transmit more than 960 bytes/sec, so I could divide-up its processing over 1/960th of a second. Similar with the ADC's 10KS/s, and similar with the SD-Card, etc. And, again, in doing-so I managed to divide it all up into small pieces that could be handled, altogether, in about 1/10,000th of a second. Even though, again, handling any one of those in its entirety in, say, an interrupt would've taken far more than 1/10,000th of a second, throwing everything off, just like the seconds-hand not updating between 57 and 0 seconds in the analogy.
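Conceptually, the keyboard's split looked something like the little state machine below. This is a from-memory sketch with invented helper names, not the original code: receiving the bits, looking up the scancode, and buffering the result each happen on separate passes through the main loop, so no single pass ever comes close to the 1/10,000th-second budget.

```c
#include <stdint.h>
#include <stdbool.h>

/* All hypothetical helpers, just to illustrate the shape: */
bool    kbd_line_is_low(void);            /* poll the bitbanged RX line      */
bool    kbd_shift_in_bit(uint8_t *byte);  /* returns true once a byte is in  */
uint8_t scancode_to_ascii(uint8_t code);  /* the "expensive" lookup step     */
void    kbd_buffer_push(uint8_t c);       /* hand the result onward          */

enum kbd_state { KBD_IDLE, KBD_RX_BITS, KBD_LOOKUP, KBD_ENQUEUE };
static enum kbd_state kbd_state = KBD_IDLE;
static uint8_t        kbd_byte;

void kbd_sm_step(void)
{
    switch (kbd_state) {
    case KBD_IDLE:                        /* watch for a start bit            */
        if (kbd_line_is_low())
            kbd_state = KBD_RX_BITS;
        break;
    case KBD_RX_BITS:                     /* clock bits in, a little per pass */
        if (kbd_shift_in_bit(&kbd_byte))
            kbd_state = KBD_LOOKUP;
        break;
    case KBD_LOOKUP:                      /* scancode lookup, on a later pass */
        kbd_byte  = scancode_to_ascii(kbd_byte);
        kbd_state = KBD_ENQUEUE;
        break;
    case KBD_ENQUEUE:                     /* buffer it, on yet another pass   */
        kbd_buffer_push(kbd_byte);
        kbd_state = KBD_IDLE;
        break;
    }
}
```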
