Improbable AVR -> 8088 substitution for PC/XT

Probability this can work: 98%. Working well: 50%. A LOT of work, and utterly ridiculous.

Similar projects worth following
Previously called "Improbable Secret Project"

The idea is to replace a PC/XT clone's 8088 CPU with an AVR. Not an emulation, per se... more a test-bed for AVR development, with access to the XT's peripherals, ISA cards, CGA, and more.
Maybe a full-on AVR-based "Personal Computer"... though x86 emulation is definitely being considered.

(Interesting note: The 8086 allegedly ran at ~300KIPS, that's 0.3MIPS!)

(random note to self: 18868086881)

This project has been an adventure of the most ridiculous sort...

At the time I started this project-page I planned on emulating an 8088 in an AVR, so the AVR could be a drop-in replacement for the original CPU in an IBM PC/XT clone. The discovery that the original 8086 ran at something like 0.3MIPS, and the knowledge that AVRs can run at 20MIPS gave me the idea that maybe it would be possible to emulate an (even slower) 8088 in an AVR at usable-speeds, and maybe even run faster than the original.

I spent quite a bit of the lead-up time learning the x86 architecture, which had been a "black box" to me for decades. This was aided by the fact that the original technical references for the 8088/8086 go into quite a bit of detail explaining *why* they made the decisions they made in this (then new) design, in comparison to Intel's earlier processors, like the 8085, which are much more similar to the Atmel AVRs I'm so familiar with. E.G. Why do they use "segments" in the x86 architecture? And what the heck does 33:4567 mean, as an address, anyhow? (One of the many things I couldn't wrap my head around in previous endeavors trying to understand the x86 architecture.)

From there, I went on to [re]assemble my PC/XT clone from parts that'd been poorly-stored for years in various boxes with other scrap PCBs and no anti-static bags. That endeavor, alone, was utterly ridiculous; leading me to sleep upright on a tiny spot on my couch for days, if not weeks; home filled with open boxes, cat sleeping angrily under the TV. And some completely unexpected, and amazing, results in the process. Including meeting someone who emailed me exactly the (very rare) information I needed within 3 hours of my contacting an email address found in a forum post written 17 years earlier. (Whoa!) Also, the surprising discovery that a PNP's emitter can be connected to the ground-rail and make for an [only slightly-differently]-functional circuit than the NPN I'd intended. Also, some "fun" with computers that seemingly-consciously refused to be of assistance (one, quite literally, glaring back at me in contempt).

As it stands, I've finally got the PC/XT clone assembled and running in an ATX case, and, with the help of a fellow HaDer, even installed my very first x86-Assembly program as a BIOS-Extension ROM, that it boots straight into!


Still, at this point in the story, I'd been planning to emulate the 8088/86 instruction-set within my AVR... And I'd done quite a bit of learning about the instruction-set...

"That's a pretty big lookup-table, nevermind all the code. But I have *some* ideas about how to shrink it... maybe at the cost of execution-speed."

Also, at this point in the story, I started considering other ideas, such as emulating *hardware* within the AVR... Thanks to some other projects 'round HaD. E.G. Why use the 8250 UART, on an ISA card, when the AVR has a USART built-in? I did the math, and discovered that writing a byte to the AVR's USART transmit-register would take ~3 AVR instruction-cycles, whereas doing the same with the 8250 could take more than ten times as long! Similarly, if the 8088's 8KB BIOS ROM was stored within the AVR, rather than on the 8088's "bus-interface", execution of the BIOS (the Basic Input/Output System, used for *way* more than just POST!) could be *significantly* faster.


This is the point in the story where I take a step back and start thinking about what led me here in the first place... not only in terms of what I'd discovered *during* the project, but what led me to start this project...? And I realized, the goal from the beginning, from *years* before starting this "project," was to use an otherwise now-outdated (some might say "useless") system's motherboard and peripherals with the processor-architecture I'm already familiar (AVRs).

So, emulation of the x86 instruction-set/architecture was never really in the original goals... I got sidetracked (what? Me?!). But it has been a useful learning-experience....



IBM PC/XT schematics as layers in a GIMP file. Marked-up a bit to help figure things out. Probably outdated.

x-xcf - 9.77 MB - 02/01/2017 at 23:02


  • Wonky unlabelled CGA, jumpers located?

    Eric Hertz • 08/24/2019 at 04:08 • 0 comments

    Long story short [I'm certain I rambled about it quite a bit, here, in the past]...

    The CGA card didn't sync via composite with my TV [and not at all with an LCD-TV, showing literally nothing]... ultimately I wound-up switching unlabelled jumpers iteratively, and after 8 of 32 possible combinations, managed to find a setting that worked-ish. I stopped there, as I had other goals...

    Later experiments turned-out pretty funky, graphically... [look through that gallery! It's kinda groovy, really...] 

    [This was supposed to be a screen full of red 'A's on a white background. *some* of the wonkiness had to do with improperly timing the ISA bus via an AVR, e.g. the white-on-black portions, but the card itself is responsible for drawing the characters, and those red rows certainly aren't any character I'm familiar with. 

    Further syncing-wonkiness is better seen below, where I used the normal 8088 configuration and regular ol' BIOS calls to cycle through the colors. Note that red is completely illegible.]

    But it was good-'nough for the experiments I'd wanted to run [eventually, and quite a bit, running off the AVR].

    Recently I remembered a great resource for figuring out jumper settings on unlabeled ISA [etc.] cards... and even though this project's been in storage for some time, I thought I'd see if I could find the proper jumper settings.

    BTW, that resource is called "Total Hardware 1999," apparently originally from a company called MicroHouse, and it seems to have quite a story behind it, which is why I won't link it here... but look 'er up. It's *amazingly* handy for old unlabeled hardware.

    The key, here, is that my CGA card barely has anything as far as identifying/informative markings.... [image from a previous post].

    Look Ma! I can turn the printer port off-n-on! And... that's it. Oh, I know enough about these to identify the Light-Pen connector, too.

    So, TH99 has a browsable page full of great art showing boards as line-drawings with identifying features clearly visible. 

    Click the image, find the jumper settings!

    [Screenshot taken from a company clearly making money off of someone else's hard work, and not even giving as much credit as to just list the name of the original document! BTW, I will not link them, here, either.]

    I browsed first by looking at the backplate; surely CGA cards with composite and parallel ports were common...?! Well, only one with a similar configuration, and it happened to be EGA.

    Actually, I'm not surprised, the ports are jammed together so tightly, I ultimately wound-up soldering a header and external composite connector.

    But, not finding my card in TH99 was kinda disheartening, until I had an idea..

    Surely many CGA cards had similar features... at least I could get an idea of what the jumpers *might* do by looking at settings on other cards...

    So I opened up new tabs for several cards which had a similar number of jumpers...

    And, learned about some settings my card's jumpers *might* configure, like 5x7 vs. 7x7 fonts, or LPT port base-address...

    And then I stumbled on this one...

    Note my card's connectors and jumpers match in name almost exactly. J5 is labelled 1-5 on mine, rather'n A-E, but otherwise we have what looks to be a darn-near perfect-matching "spinoff," with a couple extra features [RCA and light-pen connectors] and a couple missing/bypassed [jumpers soldered for color mode only, B/W timing circuitry removed].

    Further investigation shows the jumper-settings I came up with aren't spec'd... a weird mix of RGB-TTL monitor settings but set to composite output, which might explain the funkiness. Let that be a lesson to ye... when trying to identify a board visually, keep in mind there may be some things missing or extra, maybe, I guess.

    [Note, this...


  • Believe it or not...

    Eric Hertz • 11/27/2018 at 05:58 • 0 comments

    I have an idea involving #Incandescent RAM and this project...

  • "Microcode"

    Eric Hertz • 11/27/2018 at 05:19 • 0 comments

    Turns out, it would seem, that what I've off-n-on approached doing, here--in "emulating" the x86 instruction-set on a RISC architecture--is pretty much *exactly* what Intel did in switching from P5->P6, and beyond...

    No kidding! Apparently x86s, since then, are RISC processors with "microcode" to process CISC instructions.

    So, those instruction-handlers I started visualizing early in this project aren't attempting to "emulate" the x86 architecture any more than Intel's own x86 processors emulate x86 processors.


  • Blogged!

    Eric Hertz • 05/25/2017 at 07:07 • 0 comments

    Much Thanks to Jenny List for an excellent write-up about this project!

    Not quite as cool as a jet-engine on a car, but I'm pretty proud of this ;)

    Who knows, maybe I'll get back to it soon!

  • The Beginning...ish.

    Eric Hertz • 03/31/2017 at 06:59 • 2 comments

    Moved here from my project-ideas page... *long* after having-begun.

    Processor Replacement... (2-4-16) (and some new thoughts 2-5-16)

    What about placing a microcontroller on an old motherboard's CPU socket...?

    Kinda digging this idea, but haven't really thought it out too much, yet.

    AVR would be difficult-ish, since code can only run from FLASH. (Though, have seen some impressive work on that front, running 'code' externally from SRAM, SD-Card, etc. via function-calls...?)

    PIC32... well, there's already Linux for that, right? So plug a MIPS core onto an old 486 board, get some ISA slots, PCI, whatnot... Maybe even SDRAM... And plausibly be able to use the already-available Linux drivers for those cards...

    Not sure how much effort this would take... might need some nitty-gritty details on the bridge-chip(s), OTOH, e.g. 486-era bridges were pretty durn simple and pretty standardized... right?

    So, obviously, the BIOS won't be executed (though, I suppose, execution of the BIOS could be emulated), but since it'd be a microcontroller, it could have its own BIOS in firmware...

    Not sure why exactly this seems like a cool idea to me...

    Some thoughts...

    Per my recent experiments with 486 chips, in #Random Ridiculosities and Experiments, it would seem the 486-era was the transition from 5V logic to 3.3V and below... 486DX4's, for instance, have 3.3V core logic, with 5V tolerant I/O, whereas the 486SX is 5V-only... The DX4 was, it would seem, designed to be placed in an SX's socket as an upgrade. (Or, plausibly more important, designed to be used with industry-standard design-practices at the time, which just happen to be somewhat compatible with most hobbyists' abilities). Thus, a 486 mobo being a reasonable starting-point for such an endeavor. I haven't looked into Pentiums and beyond, but I'm guessing as of the later processors, there's probably some likelihood that their interfaces may be 3.3V or even lower. This might lend itself well to e.g. a PIC32 replacement, BUT (again, I haven't looked into them), it's also quite plausible that later processors use a less-standard I/O scheme, being that they may *rely* on the fact of bridge-chips. E.G. newer processors may use 1.8V, or LVDS, or who knows what... It's plausible they don't even use an Address/Data I/O scheme at all, in favor of some sort of newer "transport" scheme made specifically to work with bridge-chips. I really don't know. All I know is that I was pleasantly-surprised to find that 486's (which I just happened to grab at-random) still supported an I/O interface that makes sense to me...

    Oooh, a *QUICK* overview of : Suggests that Pentium processors may be quite similar to 486's interface-wise. A BRIEF overview (and an utter lack of knowledge) suggests that the major difference is a 64-bit data-bus, as opposed to 32-bit. OK, that's Doable... The P66 uses 5V signals, the 75-200MHz chips use 3.3V... Might be doable. 64bit, well... I guess a few+ 74574's could latch two 32-bit data-cycles into 64bit. Doable, anyhow.

    And, who knows what those bridge chips are capable of... The 486 definitely had 8-bit and 16-bit interface-*modes*, the Pentium likely has a 32-bit *mode* (despite having a 64bit data-width)... Is there a PIC32 with a Parallel Master Port that supports 32-bits...? hmm... And, even if not, is it possible to *send* the lower-bit-width mode (rather'n receive)? By which I mean... (it's been a while, I could be *completely* mistaken), I think the bit-width is determined by the device... Would it be possible to specify somehow that *all* devices are less than or equal to 32bits (or whatever bit-width the selected uC can support)...? Then, maybe, it'd be possible to rely on the "bridge" to break even a 64-bit device into 8, 16, or 32 bit transactions for our processor... and avoid the necessity for latches altogether......?

    Ridonculous? Probably.

    Here's a cool one: Someone...

  • No More About The KayPro Diskette...

    Eric Hertz • 03/03/2017 at 20:52 • 0 comments

    *This* "project page" is about installing an AVR in the CPU-socket of an 8088-based PC/XT computer.

    If you've read the recent logs, you know I've gotten a tad-bit sidetracked on what I thought was going to be a one-night endeavor that turned into several weeks.

    So, now... There's a new "project page" for that, and I'll quit yammering about it *here*.

    #Omni4--A KayPro-based Logic Analyzer

    (See the next log in that endeavor here: )


    Since that guy's occupying so much time/space, it'll probably be a while until I revisit *this* project. But here's a quick overview of where *this* project is at, currently.

    I've managed to plop the AVR (ATmega8515) into the CPU socket and interface with components via the '8088-bus'.

    Can read/write all 640K of DRAM with nearly 100% reliability.

    Can read/write the CGA card (in text-mode), in color, no less. And, after a bit of an ordeal, have managed to get the reads/writes down to near 100% reliability.

    (Turns out, I was reading the "READY" (aka "WAIT") signal too early, so the CGA card didn't have time to respond that it wasn't yet ready).

    As a side-thing trying to solve the CGA problem, I also can read/write the parallel port.


    Future-endeavors, maybe... I've looked into the specs for my SoundBlaster card... and I think I've enough info in there to start producing sound without too much difficulty. Thankfully it can be done without using DMA, because I've little interest in learning to use that blasted thing.

    Looks like there are basically two different systems on the sound-card. There's the raw-sample-based system (e.g. for playing/recording WAV files), but there's also a waveform-synthesizer-chip which can be used to generate surprisingly (to me) sophisticated waveforms...

    Frankly, my artistry when it comes to graphics/sound is limited... So, I really don't know what I'll *do* with this thing when I get it running. (Same status as the CGA card, which is functional.) Maybe some copy of some game... Tetris or something. But don't be expecting any "demoscene"-worthy demos coming from my endeavors.

    I suppose I should probably get the keyboard working at some point... Though I could probably more-easily use the RS-232 port.


    For now, back to backing-up that blasted floppy-disk.

  • "The Trick" analyzed...

    Eric Hertz • 03/02/2017 at 16:21 • 0 comments

    UPDATE: A more in-depth analysis of another sector-transition:

    More analysis at the bottom...


    Using "the trick" described two logs ago, and the last log's theories of why it's not oft-used, I've been analyzing a track-extraction from the floppy-disk...

    (I'll go into the waveforms more, later).


    Between each data-section there's a bunch of housekeeping-data. But some of that data is (by design) very recognizable.

    Immediately after the data-section (and its CRC) there appear to be 24 bytes containing the value 0x4e. This allows the disk-controller's clock to resynchronize between/with each sector.

    The first two sectors on the track I'm analyzing appear to be synchronized with each other. And, thus, the clock maintained its sync starting with the first sector, and into the second sector. I can view data starting at 0x0000, and if I count 512 bytes (or just search for address 0x1ff), I can see that data end, followed by two (CRC) bytes, then followed by the very-recognizable 24 copies of 0x4e.

    If I continue from there, I can determine some low-level details of the format of the floppy. This one's definitely different than an IBM-PC format (as I've read). One example is that the IBM-PC format uses *80* bytes of 0x4e, rather than 24. (This makes sense... this disk appears to have 10 sectors/track whereas IBM uses 9... those extra bytes have to fit somewhere... so reduce some of those redundant-bytes...) Similar elsewhere. After those 0x4e's are 8 bytes containing 0x00. IBM uses 12 bytes, but these changes in "gap" size are basically the only major difference.

    So, it's easy to see where the sector-header starts (immediately after the 0x00's), and so-on. Thus, I've determined there's 595 bytes used for each sector. 512 for data, and the remaining for sector-headers, CRCs, gaps, etc.

    So, if I advance through the file to address 595=0x253, or thereabouts, I actually see the end of the first sector, and into the next sector, and see that the data and header-stuff is aligned just as I expect.

    From there I advance to address 595*2... but this time it looks different. Instead of 24 bytes containing 0x4e, I get 23 bytes containing 0x21.

    As described, in the previous log, that kinda makes sense... Those gaps are there, largely, for the purpose of allowing the disk-controller to resynchronize its clock with each sector. That way slight timing-variations from one drive to the next won't cause issues like we're seeing in the data-stream here... where 0x4e is coming through as 0x21.

    The thought, then, is that what's happening is a slight bit-shift... those 0x4e's are probably written properly to the disk, but since I'm not reading each sector *individually*, and instead reading the entirety of the track as though it's one gigantic sector, the error is due to bit-shift likely caused by the sectors' being written at different times on different drives.

    But... 0x21 is nowhere near similar to 0x4e shifted-left or shifted-right by a few bits... so what's happening?


    MFM was my theory, and I think I've proven it...

    Briefly, storing the data on magnetic media requires both data *and* clock-synchronization information to be stored along with that data. (If you're familiar with SPI or other synchronous serial protocols, this is another way of doing that, on "one wire").

    So, MFM is a scheme to assure that the clock stays synchronized despite the fact that one might store twenty bytes containing '0' consecutively. In that case, clock-bits are artificially-inserted... Go check out that link, it served me better than the wikipedia article, and more concise than I could be...

    I wrote a shell-script to take in raw byte-data and "draw" the MFM-encoding for comparison-purposes... And, look-here... 0x4e looks darn-near exactly like 0x21, when encoded in MFM, and shifted by *one* MFM-clock.

    So, probably, what happened is that one sector was written at one time... then the second was written... and it just...


  • The Friggin Trick! And why it fails...

    Eric Hertz • 03/02/2017 at 05:00 • 0 comments

    Why the method of reading floppy-disk data-tracks, laid out in the past log-entry, is not de-facto...

    Again, the normal methods for data-extraction are sector-by-sector. Even the "read track" command technically does so on a sector-by-sector basis.

    (In fact, now that I think about it, I think I could've just as easily used the "read-sector" command, with the trick laid-out in the previous log).

    Why *not* extract the entirety of a track, with *all* its data, including CRC, sector-IDs, gaps, and more... and let the PC process it?

    Well, here's something:

    ...the sector is finally terminated by another gap, GAP3, of 80 bytes of 4eh. This gap is designed to be long enough to allow for any variations in recording speed and thus avoid the inadvertent overwriting of the following sector.

    That's apparently an extreme oversimplification...

    Basically, between each section of information, be that section the "Sector ID", the "Data", or the "Track ID", there's a "gap" followed by a bunch of "sync" bytes. (Not going to call those sections 'data', since that's confusable with the data section... and not going to call them 'sectors', for similar reasons.)

    In other words, each of these sections of information is kinda like a barcode-label, the read/write head a bit like a barcode-reader. So, on a single track, on a single side of a floppy-disk, containing 10 sectors of user-data, it's somewhat like 21-ish pieces of sticky-backed paper, each with a printed-on bar-code.

    They're most-likely *not* perfectly-aligned with each other, nor perfectly-spaced. And most-likely each bar-code scanner (human) will scan each barcode at a different rate, most-likely very different from the rate it was written at. Thus, each barcode has its own syncing information. Likewise, each section of information on the diskette has its own syncing information.

    So... treating it as "the trick" does... is a bit misleading. In fact, I've just been browsing the hex-dump of a single track and found that by the time the second sector is read, data comes through *completely* wrong.

    The sync-bytes are supposed to be 0x00, and yet they appear as 0xff! The gap3 bytes are supposed to be 0x4e, yet they're coming through as 0x21!

    (Wait, what?!)

    OK, I could expect a certain amount of bit-shift... ... but 0x00 is *nothing* like 0xff, bit-shift-wise, right...? Nor is 0x21 a simple bit-shift away from 0x4e.

    So now... I haven't analyzed it *completely*, but my thought, here, is that it's not actually shifted by a whole *bit* worth of data, but by *half* a bit as stored on the magnetic-media... Wherein it's necessary to look into MFM. Essentially, each data-bit on the magnetic-media is stored as *two* "bits" such that each data-bit contains a transition between 0 and 1. (or, maybe, North and South polarization?). The purpose being to allow the disk-controller to *sync* to the inherent "bit-clock" stored in those transitions.

    But, of course, since I'm using "the trick" it did that syncing *long* ago, (512 data-bytes, + a bunch of other information/header-bytes), and kept that sync with every bit-transition thereafter, rather than trying to resynchronize with every "sync" section, as it would've if I'd've requested the read of each properly-sized *sector.*

    So, for the *first* sector's worth of data everything's aligned properly... The syncing happened on its sector-header. But every "sync" and "gap" section, thereafter, could be *completely* misaligned... maybe a full-bit which would be easy to see, or maybe a half-bit as I think I've found, here... Maybe *several* bits/half-bits... Or, judging by that earlier-quote, it could even be misaligned by *several bytes*.

    Further-still... who's to guarantee that each section is aligned at the magnetic-bit-level...? Maybe they're askew by 1/3 or 1/10th of a bit...?

    I *think* what'd be seen, then, is a few bytes that don't make sense (after the end of one section and upon entry of the next)......


  • The Friggin Trick! - Now that's a hack!

    Eric Hertz • 03/01/2017 at 19:23 • 2 comments

    The problem: The floppy-disk-controller IC used in PCs is pretty high-level.

    Reading the raw data on each track is not within-specs. When you attempt to read a sector, the disk-controller scans the track for a sector-ID matching your request. Thing is, the sector-ID may be corrupt, in which case it'll never be found. In which case, you can't extract that sector. Similarly complicated: for various reasons, sector-IDs can be written with *any* value... E.G. while your heads may be located, physically, on cylinder 20, and while you may be reading physical head 1, it's entirely plausible that there may be a sector on that track (track 20, on head 1) that's identified as something completely different, say logical cylinder 1023, logical head 0, logical sector 63. If you don't know to request that ID at that cylinder on that head, then you can't use the "read sector" command to get it.

    So, I don't know why, but I can't read certain sectors when I assume that the entire disk is formatted the same as the first track (10sectors/cylinder/side, marked 0-9 on physical head 0, 10-19 on physical head 1, both on logical head 0)... The error message is "sector not found."

    Then, how can one extract data when a sector's ID is corrupt, or weird...?

    fdrawcmd has an *undocumented* command called "read_track". I came across this somewhat randomly... But then, if you look into the floppy-disk-controller's documentation, there's a problem...

    5.1.3 READ TRACK

    This command is similar to the READ DATA command except that the entire data field is read continuously from each of the sectors of a track. Immediately after encountering a pulse on the IDX pin, the 82077AA starts to read all data fields on the track as continuous blocks of data without regard to logical sector numbers. If the 82077AA finds an error in the ID or DATA CRC check bytes, it continues to read data from the track and sets the appropriate error bits at the end of the command. The 82077AA compares the ID information read from each sector with the specified value in the command, and sets the ND flag of Status Register 1 to a "1" if there is no comparison. ...
    This command terminates when the EOT specified number of sectors have been read. If the 82077AA does not find an ID Address Mark on the diskette after the second occurrence of a pulse on the IDX pin, then it sets the IC code in Status Register 0 to "01" (Abnormal termination), sets the MA bit in Status Register 1 to "1", and terminates the command.

    (Intel 82077 datasheet)

    So, even though it ignores the sector-ID information (unlike the read-sector command), it still *only* reads the data-fields... (and, what when the *other* bytes related to each sector are corrupt...? How would it locate the data-field if it can't even locate the sector-field?)

    So the trick came from a Japanese website... I had to translate it. More on that later...

    The trick is simple... just tell it that the sector-length is longer than a track, and request more bytes than exist on a track.

    It'll locate the first sector and begin reporting its data field, but now it thinks the sector itself is *really long* so it continues reading data *past* the actual sector-data... which just happens to include the following sector-header (as well as the CRC bytes, etc.). And... by telling it the sector-size is larger than the number of bytes that can fit on a single track, you wind-up extracting *the entire* track, as raw data. Every sector ID, corrupt or oddly-labelled, every CRC field, every gap-byte... (And, probably, some corrupt data, as well). Friggin' amazing! I've been fighting this for *weeks* (omg).

    (Fhwew! I really didn't have the patience to build a custom low-level reader... for this one-off backup for a system I may never even use...)

    Dunno why I had to go to Japan to find this technique... I've some theories, though... which is why I'm not linking it here. Yay!

    All this because somehow apparently I acquired a unique system... If you believe my search-fu abilities,...


  • The Never-Ending Tangent!

    Eric Hertz • 02/28/2017 at 11:46 • 1 comment


    It seems amongst all the sector-extraction-attempts, I've managed to recover all but 34

    missing: 29:
    only existing with errors: 5:

    There appears to be a pattern...

    There're a *lot* of sector=10's missing..

    If I understood correctly, this diskette *appears* to be formatted such that it has 39 physical cylinders, 2 physical heads, and 10 sectors /track/side... Furthermore, unlike most diskettes, it appears to have a sector '0'.


    Further-still, it appears that the "logical" sectors completely disregard the physical...

    Logical Sector 10 on Logical Head 0 is actually Physical Sector 0 on Physical Head 1.


    I've obviously managed to extract data from various cylinders with these assumptions.

    But I also see, from the list of errors/missings, that this assumption may not always be the case.

    Plausibly: It's actually plausible, (in fact, mentioned 'round the interwebs) that some tracks/cylinders *may be* formatted *differently* than others. (what?!)

    yeahp. And, the data here seems to suggest that may be the case.

    I haven't looked into *all* these cases, but I looked at a handful, and the errors related to the missing sectors look to be related to "sector not found".


    So, I'm a bit wonky on my understanding of this... but it seems cylinder 0 is the outer-most cylinder/track. So, there *could* be some justification that the outermost cylinder might be able to accept more sectors than the innermost, where the circumference is smaller. BUT, that seems somewhat irrelevant, because the data-rate is constant and the rotational-speed is constant... So the only difference, then, between the outer and inner tracks, if written with a different number of sectors, would be the amount of space (the "gap") *between* sectors. Which... really shouldn't matter, because, if it's capable of discerning data-bits at the same data-rate at the same rotational-speed, then adding a larger/smaller gap between sectors shouldn't change anything.

    Further still, if you look at the list of missing sectors, it seems the majority are related to physical head 1. Again, from what I've read, it *is* possible to format the tracks differently, not only across different cylinders, but *also* across different heads.


    It's *plausible* one physical side of the disk may be formatted with ten sectors/track, while the other side might be formatted with nine. Further-still, it's *plausible* the first side might start with sector '0' while the other side might start with sector '1' (while, again, *also* having 10sectors/track on side 0, but 9sectors/track on side 1).

    But, then, since the *logical* sectors completely disregard the heads, that'd mean sectors 0-9 are on head 0, while *11*-19 are on head 1.... But Apparently Only On *some* cylinders!!!

    Since... again, that "missing" list assumes that the format is constant across all cylinders/heads... assuming that sectors 0-9 are on head 0 and sectors 10-19 are on head 1... and... that missing-list shows that there are some sector-tens that are *not* missing, which again implies that *some* tracks/cylinders on head 1 might in fact be ten sectors/track, rather than 9.

    This is friggin' insane.

    But we've barely scratched the surface! (hopefully, since once the surface is scratched, it's entirely plausible the data may never be recovered)....




Dr Salica wrote 05/28/2017 at 21:51 point

Wow! That's very impressive!

You wrote that the original technical references explain the various design decisions and answer a lot of 'why did they do that' questions. Could you share the references so we can also educate ourselves? Thanks :-)


Eric Hertz wrote 08/06/2017 at 22:04 point

Oh my... no wonder I had this one under "unread emails".

It's been a minute... I'm pretty sure the file I was referring to is called: "1981_iAPX_86_88_Users_Manual.pdf" which I found online somewhere... Unfortunately, it's too large to upload (58MB!).

I also tried to OCR it, but the utility did some weird stuff (and made it larger) so I only used that for searches, then referred back to the original (scanned) document.

The explanations, as I recall, were related to how they moved from the previous architecture (8085?), which is much more similar to 8-bit microcontrollers of today, like the AVR or 8051, which helps to explain why they did things like segment-addressing and whatnot in the (then new) 8088/86 in a manner that makes some amount of sense to someone familiar with 8-bitters.


adam.klotblixt wrote 05/22/2017 at 07:03 point

Very interesting and good to read, keep it up!


Eric Hertz wrote 08/06/2017 at 22:16 point

Why thank you, Sir! Your custom-computer project looks pretty interesting, as well!


Ted Yapo wrote 02/03/2017 at 17:52 point

I "liked" this before I really knew what it was.  Now I wish there was a second like button.  Keep it coming; this is great stuff!


Eric Hertz wrote 02/03/2017 at 22:00 point

Thank yah, sir!


Mars wrote 01/24/2017 at 23:55 point

This is a crazy project.  That's the cool kind.


Eric Hertz wrote 01/25/2017 at 00:20 point

Hey, thanks!

Your #Cat-644 is definitely one of the inspirations. Recently reread your thoughts on an AVR-based virtual-machine. Would be hard to beat that level of "emulation" speed! I've written a draft-log on thoughts on which instruction-set to implement, if/when I get that far... a lot of rambling between 8086, or maybe even a reimplementation of the AVR... What'd you use for your project?


Mars wrote 01/25/2017 at 20:09 point

Ah, the instruction set.  I took a break from the project for a while.  A coworker helped me make a board, and I was about to sit down and finish the system software when the 1kB challenge hit, so I did a little side project.  I'll return to this one soon.

As for the VM, I wrote a small proof-of-concept test about a year ago, where I implemented enough instructions to have a little test 'for' loop, to see how fast it goes and whether my schemes around ZH instruction pointers work.  It seems to work fine.  I will work on the VM soon, but recently I've been brainstorming alternate instruction-set ideas.  Originally, I avoided stack machines, because I don't want the hit of pushing and popping every parameter.  I considered a register machine, but even with only 4 registers, the R,R combinations on simple things really add up, and I really want 1-byte instructions.  I did settle on a register-accumulator architecture, where A, B, C, D are registers, but every 2-operand instruction needs to use 'A' as the destination.  This cuts the number of combos directly.  Lately, I have been trying to work out a good register-allocation scheme, because the language running on this will have a Forth-like syntax, but I need to compile that stack paradigm down into reg-accum instructions.  I think I have a new instruction scheme I want to try: a 'rotating accumulator.'  I will probably make a post about it soon.


Eric Hertz wrote 01/26/2017 at 09:10 point

@Mark Sherman awesome on the 1K challenge entry!

Great thought-points, here. Interesting coming up with your own instruction-set... Sounds great for Forth. I do like coding in C. ;)

I just had an idea... As long as each instruction's handler is less than 128 (or 64, etc.) bytes of code, and there's only 255 of them, and you've got 64K of program-space (whoops, your scheme won't work on my 8515; better plan for a bigger proc), it'd be possible to have multiple instruction-sets.


ksemedical wrote 01/28/2017 at 19:35 point

Yeah, it is, ... KEEP GOING Dont Stop ;) 

I do think you identified some key points: that everything goes through the bus (singular), and there is what I have been told (and in my limited experience proved real), the "interrupt problem". While I know little of the details, that is not a new issue, and maybe it could stand some revisiting (sure, that is a fundamental change, but worth a bit of discussion since you are doing this from the ground up).

An example of what I mean is how video data does have some independence (implying splitting the load on one single bus).

I think there are many areas that could also be on a 2nd bus for that need, and share info that is needed.

There are architectural concepts (hierarchy, structure) to consider here. I know I am speaking in an area where I have only limited experience, but I also think some possibilities exist, and your system is a good one to consider these bold questions with.


Ted Yapo wrote 12/09/2016 at 15:26 point

You left a hint on your "project list" / "to-do" page.  I think I guessed what it is - but I'm not telling :-)

Good luck!  I'll PM you a pertinent link.


Eric Hertz wrote 12/09/2016 at 17:41 point

Shhh! :)

Oh, and thanks for the luck-wishing, I'll need it!

