Improbable AVR -> 8088 substitution for PC/XT

Probability this can work: 98%. Probability of it working well: 50%. A LOT of work, and utterly ridiculous.

Previously called "Improbable Secret Project"

The idea is to replace a PC/XT clone's 8088 CPU with an AVR. Not an emulation, per se... more a test-bed for AVR development, with access to the XT's peripherals, ISA cards, CGA, and more.
Maybe a full-on AVR-based "Personal Computer"; though x86 emulation is definitely being considered.

(Interesting note: The 8086 allegedly ran at ~300KIPS, that's 0.3MIPS!)

(random note to self: 18868086881)

This project has been an adventure of the most ridiculous sort...

At the time I started this project-page, I planned on emulating an 8088 in an AVR, so the AVR could be a drop-in replacement for the original CPU in an IBM PC/XT clone. The discovery that the original 8086 ran at something like 0.3MIPS, combined with the knowledge that AVRs can run at 20MIPS, gave me the idea that it might be possible to emulate an (even slower) 8088 in an AVR at usable speeds, maybe even faster than the original.

I spent quite a bit of the lead-up time learning about the "black-box" that the x86 architecture had been to me for decades. This was aided by the fact that the original technical references for the 8088/8086 go into quite a bit of detail explaining *why* they made the decisions they did in this (then-new) design, in comparison to Intel's earlier processors, like the 8085, which are much more similar to the Atmel AVRs I'm so familiar with. E.g., why do they use "segments" in the x86 architecture? And what the heck does 33:4567 mean, as an address, anyhow? (One of the many things I couldn't wrap my head around in previous endeavors trying to understand the x86 architecture.)

From there, I went on to [re]assemble my PC/XT clone from parts that'd been poorly-stored for years in various boxes with other scrap PCBs and no anti-static bags. That endeavor, alone, was utterly ridiculous; leading me to sleep upright on a tiny spot on my couch for days, if not weeks; home filled with open boxes, cat sleeping angrily under the TV. And some completely unexpected, and amazing, results in the process. Including meeting someone who emailed me exactly the (very rare) information I needed within 3 hours of my contacting an email address found in a forum post written 17 years earlier. (Whoa!) Also, the surprising discovery that a PNP's emitter can be connected to the ground-rail and make for an [only slightly-differently]-functional circuit than the NPN I'd intended. Also, some "fun" with computers that seemingly-consciously refused to be of assistance (one, quite literally, glaring back at me in contempt).

As it stands, I've finally got the PC/XT clone assembled and running in an ATX case, and, with the help of a fellow HaDer, even installed my very first x86-Assembly program as a BIOS-Extension ROM, that it boots straight into!


Still, at this point in the story, I'd been planning to emulate the 8088/86 instruction-set within my AVR... And I'd done quite a bit of learning about the instruction-set...

"That's a pretty big lookup-table, nevermind all the code. But I have *some* ideas about how to shrink it... maybe at the cost of execution-speed."

Also, at this point in the story, I started considering other ideas, such as emulating *hardware* within the AVR... thanks to some other projects 'round HaD. E.g., why use the 8250 UART, on an ISA card, when the AVR has a USART built-in? I did the math, and discovered that writing a byte to the AVR's USART transmit-register would take ~3 AVR instruction-cycles, whereas doing the same with the 8250 could take more than ten times as long! Similarly, if the 8088's 8KB BIOS ROM were stored within the AVR, rather than on the 8088's "bus-interface", execution of the BIOS (the Basic Input/Output System, used for *way* more than just POST!) could be *significantly* faster.


This is the point in the story where I take a step back and start thinking about what led me here in the first place... not only in terms of what I'd discovered *during* the project, but what led me to start this project...? And I realized, the goal from the beginning, from *years* before starting this "project," was to use an otherwise now-outdated (some might say "useless") system's motherboard and peripherals with the processor-architecture I'm already familiar with (AVRs).

So, emulation of...



IBM PC/XT schematics as layers in a GIMP file. marked-up a bit to help figure things out. Probably outdated.

x-xcf - 9.77 MB - 02/01/2017 at 23:02



  • Warm-Fuzzy

    esot.eric • 4 days ago • 0 comments

    I'm getting a warm-fuzzy feeling reading the DOS-based multi-color 80x25-character help-screen in a program copyright late in the first decade of the twenty-first century, explicitly mentioning its compatibility with (and workarounds for) PC/XT's running at 4.77MHz, to 486's running "pure DOS", to systems running Windows 2000 through XP, in backing-up and creating 5.25in floppy disks for systems like KayPros and Apple II's.

    Seriously, you have no idea how warm-fuzzy this is.


    This is a far tangent from this project... But actually not-so-much. In fact, I dug this guy out *to work on* this project. Though it seems I won't need it, it's a pretty amazing system that would've helped if I'd've known anything about it early-on. Suffice to say, many years ago I acquired an apparently VERY CUSTOM KayPro. I've been doing quite a bit of searching online, and it would seem there's basically nothing more than a few archived magazine-snippets, and *one* article, regarding this specific unit. Yeahp, there's a whole community (if not several), online, regarding KayPros and CP/M... People going to the effort to backup and restore images of Wordstar and things we have in *much* better functionality these days... Even people creating emulators (both CPU *and* circuitry/hardware) to use those things. And yet it would seem in the entirety of the interwebs, and amongst the entirety of those die-hards, not one has encountered this particular system, let-alone gone to the trouble to document anything about it.


    Forgive my being side-tracked... But the warm-fuzzies: those die-hards--wanting to back up and document things like Wordstar, which most people would consider irrelevant in this era--might just make it possible for someone with my utter-zero knowledge of CP/M, Z80's, or KayPros to document something so apparently unique... (and, yet... still usable even by today's standards!)... I dunno if I'm worthy of this experience. But I'll do my best.


    As it stands, I'm staring at a well-written help-screen written for DOS in 2008, compatible with PC/XT's from 1988 through Pentium 4's of the 2000's, explaining how to work with diskettes from an incompatible system from the much-earlier 1980's. It's almost like... well, I won't go into those details. Let's just say it's a nice feeling.


    Meanwhile, someone left a huge pile of TNG episodes on VHS in the building's "free-section", so I've been having quite a throw-back these past several days. (Would you believe there's a couple episodes I *don't* remember having seen previously? The first encounter with Ferengis was quite hilarious.).

  • Question For Experts - Disassembly and Data-Bytes?

    esot.eric • 7 days ago • 2 comments

    How does a disassembler handle data-bytes intermixed in machine-instructions? Anyone?

    A recurring question in my endeavors-that-aren't-endeavors to implement the 8088/86 instruction-set.

    (again, I'm *not* planning to implement an emulator! But it may go that route, it's certainly been running around the ol' back of the 'ol noggin' since the start of this project.)

    I can understand how data-bytes intermixed in machine-instructions could be handled on architectures where instructions are always a fixed number of bytes... just disassemble those data-bytes as though they're instructions...

    (they just won't be *executed*... I've seen this in MIPS disassembly; MIPS does, in fact, have fixed-length 32-bit instructions).

    I can also understand how they're not *executed*... just jump around them.

    But on an architecture like the x86, where instructions may vary anywhere from 1 to 6 bytes in length... I don't get how a disassembler (vs. an executer) could possibly recognize the difference between a data-byte stored in "program-memory" vs., e.g., the first instruction-byte in a multi-byte instruction. And, once it's mistaken one for the other, how could the disassembler possibly be properly-aligned for later *actual* instructions?

    Anyone? (I dunno how to search-fu this one!)


    esot.eric • 02/12/2017 at 04:29 • 4 comments

    Seriously, look at the number of experiments...

    The signs were all there:

     #warning "Have we considered the AVR input-latch delay...?!"

    I like that one the best...

    Yahknow, it only reminded me to look into the actual cause of the problem *every time I compiled* since the very beginning. (It's been years, but I still haven't gotten used to the new gcc warning-output... it's too friggin' verbose. Does it really need to show the column of the line that starts with #warning, then show the #warning *output* as well?)


    So, here's what it boils down to: early-on (when I wrote that warning) I figured there was basically no way in he** this could possibly be less than at least a few instructions... Read the port-input-register, AND it, invert it, set up a loop, etc...

    while(!(READY_PIN & READY_MASK)) {};

    In reality it compiles to TWO instructions!

    sbis  0x16, 3   ; skip the rjmp when the READY input (I/O 0x16, bit 3) reads high
    rjmp  .-4       ; otherwise branch back to the sbis
    And, technically, the read-input instruction grabs the value stored in its latch during a *previous* clock-cycle.

    I had a vague idea it might need to be looked into, but I was *pretty sure* I had it right... but not *certain*.

    Then the friggin' ORDEAL with h-retrace seemingly confirming various theories... And boy there were many... Friggin' rabbit-hole.

    So, then... this last go-round I had it down...

    After the rabbit-hole revealed its true nature, I found a place of utmost-oddity...

    Inserting an 'if(awaitRetrace)' before retraceWait() somehow magically fixed the problem, in its entirety. I mean, the rabbit-hole got me close. Visually I was getting no errors, but it was still taking about 500 retries (out of 2000 bytes) to get it there. But somehow inserting that if-statement before the already-running retraceWait() fixed it 100%.

    So, at that point I'd inlined bus88_write() which didn't have any significant effect... maybe the error-rate dropped, slightly, that first-try, but whatever benefit was easily wiped-out by the uncanny results of various seemingly irrelevant changes, such as adding a nop between write and read-back.

    But I left it inline, anyhow, thinking maybe it'd reduce the number of instructions between detecting the h-retrace and actually-writing. (In case, maybe, all those pushes/pops, jumps, returns might've taken longer than the actual h-retrace).

    So, when I added the if(awaitRetrace), what I'd *planned* on doing was reusing the cga_writeByte() function for two purposes, I'd have two tests: one like I'd been doing--drawing while the screen is refreshing (waiting for retrace), and a new one, where I'd shut down the video-output, and only *then* write the data. Then compare the error-rates of the two. (Thus, in the second case, I'd have to disable retrace-waiting, since there would be no retracing).

    That was the idea.

    All I did was add an argument for awaitRetrace to cga_writeByte, and add that if-statement. Then called it with my normal code, (I hadn't written-up the stop-output portion yet) expecting it to function *exactly* the same (but figuring that'd be a lot to ask, since merely inserting a nop in random places was enough to cause dramatic changes).

    Instead, 100% accurate. For the first time in this entire endeavor.

    How the HECK could an if-statement whose argument is 1--wrapping a function which was previously called anyhow--fix the problem?!

    So I looked into the assembly-listings side-by-side.

    And, sure-enough, the optimizer did its thing. I'll come back to what it did.

    Usually I'd write things like bus88_write in assembly to make *certain* the optimizer wouldn't interfere with timing. An easy example is

    uint8_t byteB = byteA&0xAA;
    PORTA = byteA;
    PORTB = byteB;

    Which the optimizer might-well compile as, essentially:

    PORTA = byteA;
    byteA &= 0xAA;
    PORTB = byteA;

    That way it doesn't have to use an additional register. But, the problem is, now PORTB's write occurs *two* cycles after PORTA's, rather...

  • dumb-luck wins - Color Text is now reliable

    esot.eric • 02/11/2017 at 17:17 • 2 comments

    UPDATE2: MWAHAHAHAHAHAHAHA. See the bottom...



    Long story short: wait for horizontal-retrace between *every* character/attribute read/write. (Or don't... this is all wrong. See the updates at the bottom and the next log)


    So, it would seem...

    (I tried to write this next paragraph as a single sentence... you can imagine how that went):

    The last log basically covers the fact that I made a mistake in my implementing a test for the hypothesis that this clone CGA card ignores bus-read/writes while it reads VRAM to draw pixels. Despite the mistake (and it was a big one), the result was *exactly* what I was expecting, seemingly confirming my hypothesis. That "confirming" result was, in fact, nothing but dumb-luck. And, in fact, it seems completely strange that it appeared *any* different from the earlier experiments for earlier hypotheses, let alone *confirmational* of the latest one.

    (This is why I *hated* chemistry labs. "1-2 hours" usually took me two 8-hour days, or more).

    So, yes, the results appear to have led to the *right* confirmation, but the way those results were acquired was no more confirmational than any other form of dumb-luck.


    So the end-result is that *every* read or write should be prefaced with a wait-for-horizontal-retrace. Once that's done, the write/read/verify/(repeat) process dropped from (at one point) 500% errors to less than 10% repeats, and no on-screen errors.

    Still can't explain the need for verify/repeats, but it works.

    Also can't explain why *errors* were coming-through on-screen despite the fact I had a write/read/verify/(repeat) loop. The only thing I can think is that maybe a bus-read that occurs at the same time as an on-card pixel-read might result in the PC's reading the byte requested by the *pixel-read* rather than the byte the PC requested. (and, since the memory was *mostly* full of the same data, a read of another location might return the value we're expecting... hmmm).

    (Note I refer to "pixel-reads", but that's obviously not correct in text-mode, since the VRAM contains *character*/*attribute* bytes, not bytes of pixel-data. So, by "pixel-read" I mean the CGA card is reading the VRAM in order to generate the corresponding pixels.)


    I should be excited about the fact it's working, and as-expected, no less.

    Means my AVR-8088 bus-interface is working!


    In reality, I figured waiting-for-retrace was probably a good long-run idea, but I chose not to implement it, yet. I've read in numerous places that writing VRAM while the card is accessing it for pixel-data causes "snow." I didn't care about snow at this early stage. And, yahknow, the more code you put in in the beginning, the more places there are for human-error. This seemed like a reasonably-"educated" trade-off choice.

    Also, I didn't just *avoid* it... I did, in fact, look into examples elsewhere... The BIOS assembly-listing shows its use in some places, but *not* in others. (Turns out, in many of those examples they shut down the video-output altogether... man, Assembly is dense!)

    Though, upon implementing it, it hadn't occurred to me *just how often* it would be required... h-retrace-write-read is too much!

    If I didn't take this path, I wouldn't've discovered that some cards don't respond the same as described ("snow" vs. ignored-writes/reads)... wooot!


    UPDATE: Dagnabbit! Dumb-Luck again!

    New function:

    void cga_writeByte(uint16_t vaddr, uint8_t byte)
    {
       uint8_t readData;
       do {
          retraceWait();
          bus88_write(S20_WRITE_MEM, 
                      cga_vramAddress+vaddr, byte);
          retraceWait();
          readData = bus88_read(S20_READ_MEM, 
                      cga_vramAddress+vaddr);
       } while(readData != byte); //verify; repeat on mismatch
    }
    The contents of this function were, previously, copy-pasted where needed...

    And it worked...


  • weirdness revisited

    esot.eric • 02/11/2017 at 05:09 • 0 comments

    UPDATE: Significant-ish rewriting...


    The last time I worked on it... a few days ago, now...

    I was trying to determine what was the cause for odd-data. As you may recall, all I was doing was writing the letter 'A' to every position on the screen, along with a color-attribute, then repeating that process, cycling through the color-attributes, incrementing it every second or so.

    The result was odd-data. Sometimes the new values would be placed as expected, other times it seems data was not being written to a location. The result was a screen with somewhat random data... Mostly 'A' everywhere, but with various attributes, apparently from previous "fill"-attempts.

    In the log before last I wrote a lot on my attempts to explain *why* this was happening.

    The first thing was the thought that maybe this cheap-knockoff CGA card was expecting data to *only* be written during the horizontal/vertical retraces... (since that seems to be how the BIOS handles it). The theory being that the card might not have the more-sophisticated RAM-arbitration circuitry of the original IBM CGA card, which would allow writes during pixel-reads (showing "snow" on those occasions). Instead, maybe, this cheaper card's pixel-reads *block* write-attempts from the bus.

    Thus, I added a wait for horizontal-resync to the beginning of my process, and suddenly the data-errors aligned in vertical columns. Kinda makes sense... In fact, makes perfect sense... In fact, exactly what I was expecting. Say the data-errors were caused by a card whose circuitry wouldn't allow write-access at the same time it was *reading* (to draw the next pixel in a line). Then there would be several writes which go through, then a read of a pixel (and a failed write), then several more writes, then a read, etc. "beating". Makes some amount of sense.

    Makes sense, as well, if you imagine that the dot-clock is faster than the bus-clock used for writing data... you'll get several read-pixels, but every once in a while [periodically] a write will come through. If that write happens at the time a pixel's being read, it would be ignored, otherwise they might be slightly misaligned and both would go through. AND, if you believe that to be the issue, that could very-well explain the newest problem, which I called "even weirder" (which, upon revising this log-entry, I never really got around to explaining).

    And now the "errors" would be aligned in vertical columns because the write-procedure waits until a horizontal retrace... so all writes in each row would be aligned to the left of the screen... right? So I continued my experiments on this theory.

    BUT: There are SEVERAL problems with this theory...

    Problem One: I didn't write only a *row* of data after the horizontal-retrace signal. Nor did I verify we were still in the horizontal-retrace before writing each byte. In fact, I wrote the entire screen's worth of data. So looking back there should be *NO* inherent guarantee of vertical-alignment of the errors due to the addition of retrace-waiting. In fact, it really shouldn't've changed *anything* regarding error-alignment, except through luck.

    320 pixels are drawn in each row and there's some horizontal-porch time, as well, before the next line is drawn. But 40*25*2=2000 data-locations are written after that first horizontal-retrace, to fill the screen's character-memory. Assuming the bus-clock and the pixel-clock were the same, we'd also have to consider that each bus-transaction is a minimum of 4 bus-clocks, so now we're at 8000 pixel-times' worth of data. (And, I think the pixel-clock runs faster than the bus-clock). We're talking each "fill"-process is at least an *order-of-magnitude* longer than a single row's being drawn. Probably more like *numerous* rows' being drawn, maybe even numerous frames. And, that entire fill occurs in one continuous...


  • If you thought it was weird before...

    esot.eric • 02/08/2017 at 18:18 • 8 comments

    ...It's gotten significantly weirder.

  • CGA clock and AVR->8088 Bus Interface

    esot.eric • 02/07/2017 at 07:09 • 0 comments

    The most-recently-logged experiments with CGA resulted in some interesting patterns. This is the most-boring image, but most-explanatory:

    In the image, I attempted to write the letter 'A' to every location. Instead what I get appears a bit like "beating" in the physics/music-sense. It's almost as if the timing of the CGA card's clock and the timing of the CPU (an AVR, in this case) clock are slightly out-of-sync. So, most of the time, the two are aligned well-enough for the 'A' to be stored properly in the CGA's memory, but sometimes the timing is misaligned so badly that the 'A' comes through as some other value.

    Frankly, I kinda dig the patterns it creates... it's a lot more visually-interesting than ones I can think to program...

    (And check out previous logs for other interesting examples)

    Those, again, are the result of nothing more than writing the letter 'A' to every location, and choosing different background/foreground colors. The above is with a foreground color of red and a background color of white. Where it differs, it appears that the background/foreground "attribute" byte wasn't written properly.

    (Also, interestingly, the font appears to be messed-up with different color-choices. That's an effect I think due to the CGA card's age/wear, as it's visible as well in DOS, via the actual 8088 chip, and described in previous logs, as well.)

    But, I suppose in the interest of science and progress, the trouble--of data not being written properly--needs to be shot.


    A brief review, gone into a bit of detail in past logs:

    The CGA card has a space for an onboard clock-generator crystal and space for jumpers to select it, but that circuitry was not populated, and the jumper was hard-wired to use the ISA-slot's clock-signal. In early issues setting up the system and trying to debug the weird-color-font problem, I wound-up adding an onboard clock-generator, and those jumpers, and have swapped those jumpers numerous times since.


    So, here's a result when the jumpers are set to use the ISA slot's clock:

    This differs from the one at the top in several ways...

    First, the white lines... My explanation for that is that those are "characters" that carried-over from a previous 'mode'. What I'm doing (now) is writing all the 'A's AND the attributes. The entire screen, currently, was supposedly filled with attribute=0x01 (=='mode 0x01'), which should be the letter 'A' in blue (if I recall correctly). But, in a previous mode it wrote 'A's with attribute=0x7f, which is white-on-white. So, again, it would appear these lines are the result of characters that weren't rewritten in this 'mode' (nor, apparently in mode 0x00).

    OK. And that, too, might explain the black lines, as being carried-over from mode 0x00, which would be black-on-black.

    Although, previously, I thought all bytes were being written, just incorrectly (e.g. the white/red 'brick' example, above). Maybe it's not that they're being written incorrectly, but that they're just not being written at all. Plausible, but questionable... So, the black characters in the 'brick' example are those that started either with attribute 0, or with a white-space character... either the character and/or the attribute were not written to those slots. Then there's the 'A' which comes through on a black-background. Where only the attribute wasn't written. OK. But then... this *is* the memory-initialization scheme (in that image), so those bytes likely weren't set to attribute 7, rather than just random-data... I dunno. Let's pretend the power-up default is an initialized memory suited for text-display, white-on-black. OK-then.

    The problem with the theory that it's only *not* writing bytes (as opposed to writing invalid ones)...? Maybe...


  • Check out this ridiculous finding...

    esot.eric • 02/06/2017 at 15:20 • 0 comments

    If you've been following the recent saga of assembling the AVR->8088 adapter, you mighta caught that I did most of my calculations based on a 74S04 to delay (and invert) the clock input to the AVR so the timing would align properly...

    Then, when assembling, I not only discovered that I have a *very* small supply of 74x04s, but no 74S04s. (I find both scenarios utterly surprising... Yah'd think I'd be pretty friggin' familiar with supplies I've had and used for 20+ years... But I guess that's the way things go these days).

    So tonight, looking in "crap boxes" (or, in reverse and cropped: "AR-boxes", for those of a sensitive disposition, or those wondering why I've got so many boxes labelled "AR") for an old project, I came across this:

    74S04, the lone chip in the center of a huge slab of antistatic foam sitting right at the top of all that other un-foamed "crap."

    I didn't see it before looking at the photo, but there seems to be a friggin' arrow pointing at it, too. Weee!

    Well, I settled on the lone 74F04 I found in the sorted-7400s box, long before this discovery... and it seems to be doing the job, so I won't be changing it unless deemed necessary.

  • Scenes from an AVR-interfaced CGA card - In the Key Of A

    esot.eric • 02/04/2017 at 20:26 • 2 comments

    The above are the results of the following code:

    void cga_clear(uint8_t attributes)
    {
       uint32_t i;
       for(i=0; i < CGA_VRAM_SIZE; i+=2)
       {
          //Apparently it's Character, Attribute 
          //(two bytes)
          bus88_write(S20_WRITE_MEM, 
                      cga_vramAddress + i+1, 
                      attributes);
          //Thought adding a buncha nops 
          // might've allowed the
          // bus to stabilize, 
          // but they seem to have no effect
          bus88_write(S20_WRITE_MEM, 
                      cga_vramAddress + i, 
                      'A'); //0);
          //Here too...
       }
    }
    combined with my bus88_write() function, which attempts to interface an AVR physically socketed in place of an 8088 on a PC/XT clone motherboard.

    And called with:

        mode &= 0x7f; //don't use blinking
    from main()

    (Here's bus88_write(), but it kinda relies on the physical interface, as well)

    void bus88_write(uint8_t s20, 
                     uint32_t address, 
                     uint8_t data)
    {
       ADDR1916_PORT   = (uint8_t)(address>>16);
       ADDR158_PORT    = (uint8_t)(address>>8);
       ADDRDATA70_PORT = (uint8_t)(address);
       S20_PORT        = s20;
       ADDRDATA70_PORT = data;
       while(!(READY_PIN & READY_MASK))  {};
       S20_PORT = S20_BUS_IDLE;
    }
    The AVR is inserted in the 8088's socket, with an inverter (or a few, for delay-purposes) between the 8088-clock and the AVR's clock-input.

    The CGA card is... in a questionable state. And its connection via composite to an LCD-TV probably accentuates that a bit.


    If everything worked within-specs, what I *should* get is the letter 'A' filling the screen, with changing foreground and background colors.

    What I get is much more interesting!

    I've got some good names for some of these... e.g. "Zelda, in the Key of A", or "Goodnight Princess, in the Key of A", or "Zebra, in the Key of A", or... ok, the key of A is wearing out. We've got "Tetris Level 9," "Donkey Kong", and more!


    esot.eric • 02/04/2017 at 18:03 • 11 comments

    Update: Random thoughts on potential explanations to be explored. (at the bottom).


    WE HAVE CGA! -ish...

    So, technically, it's supposed to be all A's... but what canya do...? Looks kinda cool, a bit like a video game town. Or a wall of bricks, maybe legos... maybe I'll try red on gray.

    There seems to be a somewhat regular pattern, maybe having to do with my bus-timing not being 100%. (almost speaks to me of 'beating' in the physics sense, which would make more sense if my AVR clock and the Bus clock weren't synchronized). Though, the DRAM verifies are still giving only one error in 640KB... So, maybe the ISA CGA card has slightly different bus-timing requirements.

    Over all, this is pretty exciting... My old-school Atmega8515 8-bit AVR sitting in an 8088 CPU socket, driving an ISA CGA card...


    Haha! All I changed was the background/foreground color attributes...

    This looks real cool.

    Herein we've got numerous factors at play...

    My CGA card is known for being a bit flakey... (see previous logs) Combine that with an LCD, rather than a CRT, and there's some other interesting effects...

    Now we've added flakey bus-interface...

    But how can this pattern possibly be explained? What I programmed, again, is a full screen of nothing but the letter A in red on a background of white.

    Perty durn cool, though.


    Hah, fewer factors at-play than I thought...

    Though, the LCD definitely seems to be syncing differently on different colors, whereas the CRT has pretty sharp vertical edges.


    UPDATE: Thoughts On Why:

    Frankly, I'm a bit excited by the weirdness of this. Realistically, it's coming up with much more interesting visuals than I would. (See the next log.)

    I copied most of my CGA-initialization from the PC/XT BIOS Assembly-Listing (converting it to C). (I'll throw that file up in the files section.) But there are definitely some x86 instructions I don't understand, and parsing it was a bit of a mind-bender for me; for some of it I just threw up my arms and made assumptions about what the goal was. So it's plausible I missed some initialization-stuff. Also, I skipped over some parts, like the bit that clears the memory. (What a weird idea that there's a machine-instruction that can load a value to 16384 memory locations!)

    Aside from that, some parts of the initialization appear to be interspersed in numerous locations across dozens of pages of otherwise unrelated assembly. So, it's even more likely I've missed something. Also, I didn't have the patience to stare at even more assembly to implement the remaining INT10 (video-I/O) functions... After I finally got initialization coded-up, I just started writing data to the VRAM.

    Also, I vaguely recall something about a portion of the memory being used for the character-set above 0x7f, so I should look into that.

    There's also some info 'round the web regarding the card's registers. I didn't look hard, but get the impression the majority of those aren't low-level enough to explain the *very initial* initialization-process... those I found seem to presume the card's already initialized (and you wish to reconfigure it). Though, I'm willing to bet that low-level info is out there, and probably in greater-depth than ever, what with the decades, and the demoscene.

    Also interesting to note: I broke the DRAM write/verify routine up a little bit: instead of writing all 640K then reading it all back (which took about 4 seconds), it now handles that in 256 chunks, that way I can do some "realtime" stuff in the meantime... And... Now the DRAM shows no verification-failures! So, at least as far as that goes, interfacing with the bus is pretty reliable. Earlier verification-failures must've [still] been refresh-related.

    Though, as I found out a log or two ago, different types of memories (and I/O devices)...

Ted Yapo wrote 02/03/2017 at 17:52 point

I "liked" this before I really knew what it was.  Now I wish there was a second like button.  Keep it coming; this is great stuff!

esot.eric wrote 02/03/2017 at 22:00 point

Thank yah, sir!

Mark Sherman wrote 01/24/2017 at 23:55 point

This is a crazy project.  That's the cool kind.

esot.eric wrote 01/25/2017 at 00:20 point

Hey, thanks!

Your #Cat-644 is definitely one of the inspirations. Recently reread your thoughts on an AVR-based virtual-machine. Would be hard to beat that level of "emulation" speed! I've written a draft-log on thoughts on which instruction-set to implement, if/when I get that far... a lot of rambling between 8086, or maybe even a reimplementation of the AVR... What'd you use for your project?

Mark Sherman wrote 01/25/2017 at 20:09 point

Ah, the instruction set.  I took a break from the project for a while.  A coworker helped me make a board, and I was about to sit down and finish the system software when the 1kB challenge hit, so I did a little side project.  I'll return to this one soon.

As for the VM: I wrote a small proof-of-concept test about a year ago, where I implemented enough instructions to run a little test 'for' loop, to see how fast it goes and whether my schemes around ZH instruction pointers work.  It seems to work fine.  I'll get back to the VM soon, but recently I've been brainstorming alternate instruction-set ideas.  Originally I avoided stack machines, because I don't want the hit of pushing and popping every parameter.  I considered a register machine, but even with only 4 registers, the R,R combinations on simple things really add up, and I really want 1-byte instructions.  I did settle on a register-accumulator architecture, where A, B, C, D are registers but every 2-operand instruction has to use 'A' as the destination.  This cuts the number of combos directly.  Lately I've been trying to work out a good register-allocation scheme, because the language running on this will have a Forth-like syntax, and I need to compile that stack paradigm down into reg-accum instructions.  I think I have a new instruction scheme I want to try: a 'rotating accumulator.'  I'll probably make a post about it soon.

esot.eric wrote 01/26/2017 at 09:10 point

@Mark Sherman awesome on the 1K challenge entry!

Great thought-points, here. Interesting coming up with your own instruction-set... Sounds great for Forth. I do like coding in C. ;)

I just had an idea... As long as each instruction handler is less than 128 (or 64, etc.) bytes of code, and there's only 255 opcodes, and you've got 64K of program-space (whoops, your scheme won't work on my 8515; better plan for a bigger proc), it'd be possible to have multiple instruction-sets.

ksemedical wrote 01/28/2017 at 19:35 point

Yeah, it is... KEEP GOING, Don't Stop ;)

I do think you identified some key points: that everything goes through the bus (singular), and that there's what I've been told (and, in my limited experience, proved real) is the "interrupt problem". While I know little of the details, that's not a new issue, and maybe it could stand some revisiting; sure, that's a fundamental change, but worth a bit of discussion since you're doing this from the ground up.

An example of what I mean is how video data does have some independence (implying splitting the load that's on one single bus).

I think there are many areas that could also be put on a 2nd bus for that need, and share the info that's needed.

There are architecture concepts (hierarchy, structure)... I know I'm speaking in an area where I have only limited experience, but I also think some possibilities exist, and your system is a good one for considering these bold questions.

Ted Yapo wrote 12/09/2016 at 15:26 point

You left a hint on your "project list" / "to-do" page.  I think I guessed what it is - but I'm not telling :-)

Good luck!  I'll PM you a pertinent link.

esot.eric wrote 12/09/2016 at 17:41 point

Shhh! :)

Oh, and thanks for the luck-wishing, I'll need it!
