I started this project wanting to better understand how computers work at the logic level. I originally bit off more than I could chew, attempting to implement what I called RISC, which was really more CISC, and a pipeline (the pipeline was the biggest oops). That being said, I am building a simpler design that is still quite functional.
The processor operates on 16-bit words; every scalar value is represented in 16 bits.
My instruction word is 16 bits: the top 4 are the opcode, and the next 2 flag whether each operand is an immediate value. The remaining 9 bits are divided into two or three fields. For a unary operation, the upper 6 of those 9 bits hold a cache address or an immediate value (depending on the opcode), and the lower 3 a register address. For a binary operation, the 9 bits are split into three 3-bit fields: operand 1, operand 2, and the register address for the result.