It's been a while since I made an update, but I am making progress in fits and starts. I ran into some roadblocks with the pipelining when introducing exceptions and some of the other vagaries of a real design, and so went back and thought through some of my assumptions. It turns out that I had a major error in how I understood the Wishbone bus specification.
In short, I had struggled with how to deal with latency in a pipelined operation when coupled with multiple masters and arbitration. If you allow bus preemption, it seems like you can lose data or have to replay requests, which doesn't make sense.
During the holiday break, I was able to make a significant amount of progress on the pipeline logic. At this point, I have everything working with the exception of subroutines and... exceptions. Subroutines (push old PC to memory stack, update the PC) shouldn't cause too much trouble, and while I'm not sure if there are going to be surprises in the exception handling, I'm expecting it will be similar to the existing branch code.
I'm returning to this project and made a few interesting improvements recently. The first is that I cleaned up the Verilog for the CPU core so that it could be built in Verilator, a pretty slick tool that compiles a Verilog module into a C++ model. You can then attach it to a test harness of your choosing to validate your work, check for regressions, etc. Until now, I've been relying on tests on the FPGA systems themselves, leaning heavily on the logic analyzer functions that Quartus provides to debug. That works and is very powerful, but it's also quite slow and has limited flexibility. This change, coupled with a new initialized RAM module, allows me to compile and run arbitrary code pretty easily.
The main reason I went down this road is that I was planning a redesign of the CPU to support pipelining. I've made some progress here as well, building a 5-stage pipeline that at least seems to move the proper data and signals around.
My challenge with pipelining in general is that most of the textbooks I've seen hand-wave past one of the most fundamental structural hazards: what to do when the instruction and data memory are on a common bus. I decided to "solve" this problem by building the CPU core with two logical buses (data and instruction) and marrying them to a dual-port RAM module. Since the instruction bus will never do a write, this works well and will be sufficient to test out the pipelining.
I don't know how other designers solve this problem in the real world, but my plan is to link the CPU to an L1 cache, and have the cache layer deal with the vagaries of the "outside" bus. This should also reduce the number of clock cycles required in each pipeline phase. Right now my bus access logic requires two clock cycles minimum, but I think I could reduce this to one without too much effort. I'm kind of working on the pipeline stuff one issue at a time, since I don't really have a good reference to crib from. If anyone has any suggestions on something that's not crazy complex and would help give me some direction, leave them in the comments.
Here's the next video, which goes into more detail about the CPU design as well as walking through the state transitions for a simple add operation:
I'm trying to get more documentation in place, in the form of some YouTube videos. This one will give you a sense of the overall system architecture, and how the CPU interacts with other devices. Let me know if you have any questions or comments.
I've been working on fleshing out a supervisor mode with the goal of being able to do multiprocessing in the Unix way. The basic work is complete (protected opcodes, hardware and software interrupts that execute in supervisor mode, etc.), but I'm working on the nuance now. In particular, I'm testing different ways to pass information from user space into kernel space. Since my current method of parameter passing is solely via the stack, and the stack pointer is swapped out as part of the move to supervisor mode (for the supervisor stack pointer), this is mostly an exercise in C semantics now. My exception entry logic pushes the original stack pointer onto the supervisor stack before jumping to the exception handler, so now I'm just working through the most sane way to reference that element (which isn't an argument to the interrupt handler!), and then use it as an index to pull out the other info on the user stack I care about.
My first cut at an ISA was focused on getting the functions right, and leaving room to add more options later. Now that I've got most of the functionality I want, I can go back and look at ways to reduce the complexity, with a goal of improving performance.
As I mentioned earlier, I've been looking at pushing to the next round of project improvements, and that meant a better testing process. I tried using a "control" CPU whose output would be compared to that of the CPU under test; however, that assumes the number of clock cycles required for each operation won't change. While useful in a few cases, a lot of the changes I'm interested in involve timing, and so that wouldn't work.
I decided instead to make two ROM modules. The default one runs the monitor code, which allows for basic memory interaction as well as parsing of ELF binaries on the microSD card to bootstrap other programs. The new ROM module is a set of POST routines written to progressively test the CPU as well as the IO functions to check for functional regressions. This method has already paid for itself, since I found a small bug in a couple of the floating-point opcodes.
The method of testing is fairly simple. I have to assume that some basic operations work, otherwise the ROM won't even run the POST: immediate load of a register, immediate add, integer compare, and branch if not equal. The first tests evaluate register operations, the ALU, and the FPU. Then come stack operations, branch tests, and all of the load and store operations. For the math and branch operations, we can compute the expected results ahead of time, store them in the code, and generate an error when a result isn't as expected.
In addition to the basic CPU tests, I'm also implementing a set of memory tests. This will allow me to better test the cache module, which I'll describe in the Doom project update.
So far in these projects, I've been able to build iteratively and not run into too many nasty bugs. There are many layers of abstraction though (libraries, compiler, assembler, machine, CPU), and so when a bug does crop up, it can be really challenging to find.
Most recently, I found that I had misunderstood some subtleties of transferring data between registers. The fix was simple: an opcode that zero-fills the upper bits when you make a copy of an object smaller than the register size. But the way this manifested itself was that printf() sometimes printed the wrong character when printing a number. Eventually, I was able to isolate this to 33 % 10 resulting in 9 (not 3), which meant I didn't have to debug libc. After further narrowing things down to a very small test case, I was able to see why the CPU was generating the incorrect value. That probably took me 4 days to debug.
As I plan on making some radical changes that could break things, I need to consider how best to avoid introducing more of these kinds of issues, and if it happens, how to quickly determine the issue.
The ISA for the CPU is pretty low density. With a word size of 32 bits, there's a fair amount of room to do everything... except for absolute addresses and some large constants. As I've experimented with the ISA, I've left gaps, extra bits, etc., and it's a bit messy. I'm starting to clean up and make things a little more orthogonal now, with the idea that this will also allow the CPU core to become more efficient.
The ISA is defined in my Opcode worksheet, and I try to make sure this is up to date as I make changes to the core and the assembler.
Clone the three source repos.
mkdir bexkat1
cd bexkat1
mkdir gcc binutils newlib

cd binutils
$(BINUTILSREPOPATH)/configure --target=bexkat1-elf
make
sudo make install