
Running C on the TOM-1 (take 1)

A project log for TTL Operation Module (TOM-1)

A 16-bit TTL CPU and stack machine built out of 7400-series chips.

Tim Ryan • 08/12/2020 at 03:23 • 2 Comments

I described in a previous post how you can write "assembly" for the TOM-1. This is currently how you'd write a program that subtracts two numbers on the data stack and loops if the result is 0:

[start] -1 nand 1 add add branch0[start]

After the first label [start], every token in this program represents a single opcode that executes every two clock cycles. Most of the opcodes manage the stack: dup, drop, load, store, and literal numbers like -1 all move data between RAM or ROM and the stack, or shuffle values around the stack itself. But there are only two ALU operations: add and nand.
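To unpack the example: -1 nand inverts the value on top of the stack, 1 add turns that inverted value into its two's complement (its negative), and the final add performs the subtraction. Here's a quick Python sketch of that arithmetic, purely as an illustration on 16-bit words (the helper names are mine, not part of the project):

MASK = 0xFFFF  # TOM-1 words are 16 bits wide

def nand(x, y):
    # Bitwise NAND, masked back to 16 bits.
    return ~(x & y) & MASK

def add(x, y):
    # 16-bit addition; the carry out is discarded.
    return (x + y) & MASK

def subtract(a, b):
    # Mirrors the opcode sequence: -1 nand, 1 add, add.
    not_b = nand(b, MASK)     # b nand -1  ==  ~b
    neg_b = add(not_b, 1)     # ~b + 1     ==  -b  (two's complement)
    return add(a, neg_b)      # a + (-b)   ==  a - b

assert subtract(10, 3) == 7
assert subtract(5, 5) == 0    # the case branch0 watches for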

Writing loops using just two math functions is tricky. And while I'd love to build a high-level stack-based language just for this CPU, I wondered if there was a quick and dirty hack that I could procrastinate on first. Maybe that's getting the CPU to run C?

There is precedent for many homebrew CPUs implementing their own C-like compilers, common enough that CPU designs like the #Kobold K2 - RISC TTL Computer have pitched JSON standards for C parsing and lexing that would make adapting a C compiler to a new architecture much simpler. Can TOM-1 run a higher-level language like C? Because TOM-1 is arithmetically underpowered, we really need to lean on an optimizing compiler or the generated code will be painfully slow. Luckily, there are optimizing compilers for very simple CPUs, and there's precedent for directly emulating another CPU's instruction set, as the #Gigatron TTL microcomputer does with its v6502 core and the #Novasaur Retrocomputer does with the Intel 8080.

In the next few updates I'll evaluate Arduino (AVR) machine code, then 6502 machine code, and finally settle on a tiny processor I'd never heard of before(!).

I've uploaded to GitHub the Digital circuit I've been using to simulate the TOM-1 for most of this project log. It implements the whole CPU and simulates the 7400-series chips a real build would use. In later posts I'll switch to a Python simulator written just for these experiments.
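As a rough idea of what that looks like, here is a minimal sketch of a token-by-token interpreter for the opcodes described above (an illustration only, not the actual simulator; it skips load and store and treats any unrecognized token as a literal):

MASK = 0xFFFF

def run(tokens, stack=None):
    # Interpret a list of opcode tokens like the program above.
    # Labels look like "[start]"; branch0 pops the top of the stack and
    # jumps to its label when the popped value is zero.
    stack = stack or []
    labels = {t: i for i, t in enumerate(tokens) if t.startswith('[')}
    pc = 0
    while pc < len(tokens):
        t = tokens[pc]
        pc += 1
        if t.startswith('['):
            continue                                  # labels do nothing
        elif t.startswith('branch0'):
            if stack.pop() == 0:
                pc = labels[t[len('branch0'):]]
        elif t == 'add':
            stack.append((stack.pop() + stack.pop()) & MASK)
        elif t == 'nand':
            stack.append(~(stack.pop() & stack.pop()) & MASK)
        elif t == 'dup':
            stack.append(stack[-1])
        elif t == 'drop':
            stack.pop()
        else:
            stack.append(int(t) & MASK)               # literal number
    return stack

print(run('10 3 -1 nand 1 add add'.split()))          # prints [7]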

Discussions

roelh wrote 08/12/2020 at 14:04

Hi Tim, 

For running C on a processor, you need instructions that access a variable sitting in a stack frame. So you need an (SP+d) addressing mode to access position SP+d.

In the Kobold processor this is similar, but there the SP is called WP, and the WP is always aligned so that adding a displacement will not generate a carry (needed because the displacement adder is only 4 bits wide).

If you don't have such an addressing mode, your compiler will have to generate long, clumsy instruction sequences, or you will be forced to run FORTH.

Your ALU is similar to Kobold's, because it has only ADD and a logic function. You have NAND where Kobold has NOR. NOR has the nice property that when one of the operands is zero (easily provided), the other operand appears inverted at the output.


Tim Ryan wrote 08/20/2020 at 04:49

Hi roelh, I appreciate this comment so much!

I hadn't put much thought into the design of the ALU, because I spent a lot of time before this project designing a Forth language around this instruction set. But luckily I'm deviating a lot from what I initially planned to work on. I'd been wrestling for some time with how to do more arithmetic without increasing the chip count dramatically; supporting both NOR and a native invert instruction would simplify a lot of common use cases!
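As a quick sanity check of the inversion property you describe, here's a Python sketch on 16-bit words (the nor and nand helpers are just for illustration):

MASK = 0xFFFF

def nor(x, y):
    # Bitwise NOR on 16-bit words.
    return ~(x | y) & MASK

def nand(x, y):
    # Bitwise NAND on 16-bit words.
    return ~(x & y) & MASK

x = 0x1234
assert nor(x, 0) == (~x & MASK)      # NOR with zero inverts x directly
assert nand(x, MASK) == (~x & MASK)  # NAND needs an all-ones (-1) operand to invert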

I went back and reviewed your post about the inner loop of the Sieve of Eratosthenes, and I'm very impressed by the compiler for the Kobold. For comparison, I calculated what cc65 generates for the 6502, and it came to somewhere around 70 CPU cycles versus the (still slow) hand-tuned 49-opcode procedure you quoted. The 6502 mainly seems helpful as an easy emulation target. This inner-loop comparison is really useful, and I think it supports your conclusion that a stack-relative addressing mode is essential for fast C code.

I expect my experiments will be pretty slow for now, but I am contemplating how an indirect mode might work on TOM-1; if I can spare another control line, it might be doable! I'm looking forward to experimenting with this.
