
While I'm backing-up...

A project log for Vintage Z80 palmtop compy hackery (TI-86)

It even has a keyboard!

Eric Hertz, 09/10/2021 at 17:43

Brief (yeah, right...) recap... 

What I need is a somewhat-accurate reading of the CPU frequency so that I can bitbang a bidirectional UART.

This differs from my previous "UARto[S]T" (Universal Asynchronous Receiver to [Synchronous] Transmitter), because there I could control the interface at the byte/packet level on the computer side, using the computer's data transmission as a clock for the calculator's data transmission. This time I don't have that luxury, so I need a "real" UART implementation that can read and write packets which actually are truly asynchronous. ...Well, mostly.

The idea, then, comes down to watching /another/ communications port, and deriving timing from it. 

That port is not a UART, nor does it communicate at a similar baud. But, it /does/ have a defined and steady-enough bit-rate, such that if I can count/calculate the number of calculator CPU cycles during its bits, then I can get a pretty accurate idea of how fast the CPU is running. And, from there, can add appropriate CPU-cycle delays in my UART bitbanging.

The logic-level-toggling-rate of that comm-channel, however, is /really/ fast, in comparison. So fast, in fact, that, at least in my z80 skill-set, I can /just barely/ guarantee that every high and low will be detected /at all/. Nevermind trying to measure their duration.

On the plus side, every bit in that protocol has both a high and a low, so clocking is inherent. Also, once a packet is started, all its bits flow back-to-back, even between bytes.

So, I plan to store several hundred samples at a known sampling interval of 21 CPU cycles per sample, then count the toggles. That'll get me a pretty good idea of the CPU frequency at the time those samples were taken. I figure it won't vary /dramatically/ thereafter, at least for a few UART transactions, then I can do it again.
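(The arithmetic, roughly: if T toggles show up across S samples taken 21 CPU cycles apart, those samples span 21*S/f_CPU seconds; if the link toggles at R transitions per second, that same span is T/R seconds. Set the two equal and f_CPU ≈ 21 * S * R / T. R here is a stand-in for whatever the link's real toggle-rate turns out to be; I haven't pinned the actual numbers down yet.)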

Also, luckily, the UART system only communicates after the calculator requests it. So, there's plenty of time to measure/recalculate timing whenever necessary.

...

But, that means processing samples... and I thought it might be handy and easy to do-so with those I'd already taken as logic-analyzer screenshots...

And, I somehow managed to corrupt some data in places that have nothing to do with my project. And now it's acting /really/ weird. So, memory-reset again, despite all those wonderful "Memory Recovered" messages.

That flash backup thing would sure be handy!

...

On the plus side, this other stuff has helped get ready for that, I now know a thing or two about accessing/creating "variables" which includes things like programs and source "files"... so, actually, it shouldn't be too difficult to write that backup/restore utility... yahknow, I imagine a day or two, so it'd probably be two months.

...

Multitasking bit me in the butt...

I recently acquired from a garage sale an external hard disk large enough to back up my internal drive... So I've been writing zeros to it in about 40GB chunks whenever I've got the compy running for other purposes... make sure there are no bad sectors, whatnot.
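Each chunk amounts to something like the following; the device name and offsets are made up for illustration, not the exact ones I used:

# zero 40GiB starting 200GiB into the disk; with bs=1M, seek and count are in 1MiB units
dd if=/dev/zero of=/dev/sdX bs=1M seek=204800 count=40960 status=progress

Next session, bump seek by another 40960 and carry on.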

And, I'd forgotten my internal drive is connected via the same USB hub. Which meant, in this case, that connecting the external drive /before/ power-up could (and did) cause /it/ to be /dev/sda, instead of the boot drive...

'dd if=/dev/zero of=/dev/sda seek=200G'

Yeah, I noticed, thankfully, about 1.5G in, because (again thankfully) it was running much slower than usual. And, looking closer, the wrong hd light was blinking. I do like hd lights.
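The ten-second sanity check I should've done first (the by-id name below is made up; the point is it's tied to the drive itself, not to whichever port enumerated first):

lsblk -o NAME,SIZE,MODEL,TRAN    # which disk is which: sizes, models, usb vs. sata
ls -l /dev/disk/by-id/           # stable names that follow the drive, not the enumeration order
# then aim dd at the by-id path instead of /dev/sdX:
dd if=/dev/zero of=/dev/disk/by-id/usb-SomeVendor_SomeModel_1234-0:0 bs=1M seek=204800 count=40960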

So... shit. Now I'm learning about debugfs and ext4 block sizes and inodes... some 250,000 blocks means potentially up to 250,000 files (especially considering all my small sourcecode includes, or the friggin 512-byte sectors from #OMNI 4 - a Kaypro 2x Logic Analyzer). Thankfully I do have another backup drive to recover many/most(?!)/all(?!!) of them from, but first I have to figure out which files were zeroed!

So, first yah gotta figure out where the zeroed physical sectors land within the partition, then convert those to filesystem blocks... then use debugfs to find out which inode is associated with each block... then you can ask for the filename/path of each inode.
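In command form, that chain looks roughly like this; the partition name, the example LBA, and the inode number are placeholders, but icheck and ncheck are the real debugfs commands doing the work:

# filesystem block size, and the partition's start sector (in 512-byte units)
BLKSZ=$(tune2fs -l /dev/sdXN | awk -F: '/^Block size/{print $2+0}')
START=$(cat /sys/class/block/sdXN/start)

# one zeroed disk sector (LBA), translated to a filesystem block number
LBA=123456789
FSBLOCK=$(( (LBA - START) * 512 / BLKSZ ))

debugfs -R "icheck $FSBLOCK" /dev/sdXN    # block number -> inode number
debugfs -R "ncheck 654321" /dev/sdXN      # inode number -> path (654321 being whatever icheck reported)

Both commands take lists of numbers, so blocks can be fed through in batches.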

So far I have a list of used/unused blocks. From there it takes nearly a minute to convert 100 block numbers (of ~250,000! 2500 minutes!) into inode numbers. I /might/ be about halfway through that process, maybe, yeah, probably not, after many hours. Recall: I can't just leave this running overnight, lest I run out of juice! So, I've got to break this process into chunks. Heh. I've got what looks like a list of nearly 100 inodes/files so far. I looked a couple up and they're photos from 2017. ...which most likely are of the closest friend I've ever had, who I lost last year. I'm definitely not OK with losing those too. But, on the plus side, the bigger they are, the fewer need to be found/recovered. There's probably a couple multi-hundred-megabyte videos in there too. But, it seems, it still takes about a minute to associate 100 blocks with inodes, even if they're all associated with the same inode. Heh.

This brings me to another question, which /maybe/ I'll test... I use rsync to back up my drive... I wonder if it's thorough enough (how could it be?) to check whether a file's /contents/ changed when none of its metadata (size, modification date, etc.) has. It can't be set up that way by default, I don't think; otherwise it'd have to read the entire drive's contents in the roughly 45 minutes it takes to do its incremental backups. So, I think, it probably wouldn't even notice they changed.

That could be beneficial: I could try an incremental backup, now, /after/ the disaster, then do a full restore. Hmm...

But, again, 320GB at 480Mbps/2 and on a battery... and /that/ backup drive has to run off the inverter... I don't like running it for more than a couple hours. And starting the engine to recharge causes the inverter to kick out...

Another alternative, if somehow it /did/ notice they changed (maybe it can look up the checksum more quickly?), would be an incremental backup, then looking at the logfile to see what's changed.
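For what it's worth, rsync does have a --checksum (-c) option that compares by file checksum instead of the usual size-and-mtime quick check; there's no stored checksum to "look up," though, so it has to read every file on both ends, which is exactly the full-drive read I was worried about. Paired with a dry run it would at least just /report/ the damage rather than copy anything (paths are placeholders):

# itemize files whose contents differ between the backup and the live tree, copying nothing
rsync -a --checksum --dry-run --itemize-changes /mnt/backup/ /home/me/

Whether the battery survives that much reading is another question.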

Maybe realistically I should do that backup sooner rather'n later, anyhow. I don't think I have since I started this TI-86 project. Oof.

The new drive is also 320GB, /and/ USB-powered, /and/ slim/laptop-sized... So the idea was I could run backups on it far more often without all those power limitations. The drive could be kept identical, so could even be dropped-in if the other fails. Almost like a mirrored RAID. Though, I doubt there's an option in rsync to do it at the byte-identical level... and I've no idea how to track and then dd small changes, heh. It's not /highly/ important, but could've been handy in a case like this!

Sheesh I'm stupid sometimes.

...

Oh, I /did/ manage to finally parse the logic-analyzer screenshots... wow that was a mess. It should've been so easy. But, did not yet get to parsing the frames/packets, and most importantly /timing/, which, of course, I thought was going to be the hard part. I had that pretty well figured out on paper... with a few calls to getNextSample() which, I'd of course change from grabbing from the screenshot to grabbing from the sample buffer, once I was ready to put it all together... grabbing from the screenshot was supposed to be easy! For early testing! Heh.

On hold.

Back to data recovery.

...

Another day, making nearly a week, and finally I think we're back where we started... Data recovered... about 800 photos, many/most of my cat who disappeared last year... It hurts every day, and after a year of searching I still regularly feel tempted to go try again. I dunno what to think about the slap in the face that came soon after I called off the search, almost losing those photos too. Nor the friggin' irony that it happened *as a result* of implementing preventative measures against such things! But, thankfully, earlier such measures-taken were enough, this time. Frankly, it feels to me, that exact same kind of irony is what resulted in his disappearance. And similar for so many other such prior disasters. Oof.

Anyhow, it seems I was able to recover all the lost data, after several days of 100% CPU and quite a bit of hard disk activity dedicated to nothing but figuring out /which/ files were zeroed... after that was figured out, it was simply (hah!) a matter of deleting those and copying back the originals from my backup drive... all of which actually only took a few hours, despite my flakey script and having to redo it several times. Thankfully it wasn't /too/ flakey, this time. Whew.

BTW: when bash-scripting:

while [ 1 ]
do
  read line || break
  ...
  cp -i sourcefile destination
done < fileListing.txt

BAD IDEA.

When cp asks for a user response, it gets it from fileListing.txt (which is now the whole loop's stdin) rather than from the terminal.

And, I found out, "1" is the same as "yes" in the "overwrite existing file?" prompt.

BETTER:

while [ 1 ]
do
  read line <&3 || break
  ...
  cp -i sourcefile destination
done 3< fileListing.txt

Notice the 3< and <&3

I'm not a newb... I learned that probably a decade ago. The things we forget...

(IIRC, 2 would've redirected stderr, and 1 stdout (0 being stdin), so 3 is the first descriptor that isn't typically used by the system for special purposes.)
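Another way out, for the record: leave the loop's stdin alone and instead point cp's own stdin at the terminal, so its prompt can't eat lines from the listing (sourcefile/destination being placeholders as before):

while read line
do
  ...
  cp -i sourcefile destination < /dev/tty   # the "overwrite?" answer really comes from the keyboard
done < fileListing.txt

The fd-3 trick has the advantage of not assuming there /is/ a terminal attached, though.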

What else...?

Did some mathing during the dayslong automated search for zeroed inodes...

It seems my TI-86 is running around 4.7MHz. I coulda sworn 6 was the magic number. I've been on the same set of dollarstore batteries for a surprisingly long time, and that may be part of the reason... OTOH, I have only barely begun to increase the contrast, at least as far as I recall having to regularly do with name-brand alkalines in my TI-82 days. Huh.

And I started to work-out the T-state-count delay function for my bitbanging the UART.

Last time I did similar was on an AVR at 20MHz... in C...

Therein, I had a UART input and output bitbanging function, a keyboard matrix decoding function, a realtime audio-sampling function, and an SD-card writing function all running round-robin... fast enough for 10-bit audio sampling at 11KS/s. Heh!

Herein, I think I can /only/ handle the UART, and /only/ one direction at a time. And, instead of checking the timer to see if it's time to handle the next bit, I'll have to insert a blocking delay-loop between each bit. Heh. Some 400 clock cycles between bits /sounds/ like a lot, but this simple spin loop, only three or so instructions, will take well over 20... certainly not enough time to jump to another function, determine which state it's in, process that state, then return.
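(Quick math on that: a typical 16-bit countdown on the z80, dec bc / ld a,b / or c / jr nz, runs about 26 T-states per pass, and a one-instruction djnz loop about 13, so ~400 cycles between bits is only on the order of 15 to 30 trips around a spin loop. Those per-pass counts are the textbook figures, not something I've re-measured on this machine.)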

I'm constantly in awe of just how much faster the seemingly lowly AVRs (which I've spent most of my adult years working with, while my peers keep insisting on ever more processing power) are than full-fledged CPUs still in use in the same decade.

And, further, that, really, it's not so much about the underlying transistors' abilities, but more about the design. 7400-series logic was capable of 20MHz in the z80 era. 

But, even if we were talking 4MHz, the fastest z80 instructions take 4 clock cycles: four times the single cycle most AVR instructions need.

I really think something like AVRs /could've/ existed in the z80-era, if they'd just thought that way, then. Exceptions, of course, being in transistor /count/ (MAYBE, seeing as how many were necessary for things like processing variable-length instructions, microcode, bus handling, etc.)... And then, of course, memory. I think the Harvard architecture of AVRs probably makes things much faster, since two separate busses are accessible simultaneously. But, again, back then workarounds could've existed for things like slow memory access times by, say, splitting the memory bus into a state machine: request an address, expect the response four cycles later, meanwhile another access could be requested... they already had /multiple/ chips per byte (remember those arrays of DRAMs on the motherboards?); if, instead of tying all their address lines together, they'd've added two 74LS574 latches to each bank of 8 DRAMs, they could've had 4 times the access speed!

Funny thing to me is that, I think, many such design ideas /were/ implemented back then... just never really made it mainstream. Maybe in things like Cray, etc. But, at some point, I guess, it was mostly market-driven... compatibility was important, thus 8008->8080->z80 having the same backwards-compatible instructions, similar timings, similar busses... nevermind, of course, implementation complexity... 40-pin DIPs were apparently highly desirable over 100-pin doolybabs. Yet, they /obviously/ had the spare transistor-space to design-in significant, near-"supercomputer" levels of circuitry, had they chosen to do-so in the computing side of things instead of the bus-interface side. Then, supercomputers were still being built at the gate-chip-level! Weird to think about.

Anyhow, somewhere in there I get ideas of how to improve the device's abilities, like my "24-bit quick-mapper" idea, or using DRAM refresh cycles /for/ DMA... and it could be interesting to actually build such a system with a real z80 and a slew of TTLs... and some arshole will suggest an FPGA, and then I mightaswell do RISCV and forget about all this.

And, anyhow, none of this has to do with my UART task, which, I'm kinda dumbfounded to realize, will actually pretty much /require/ spinloops burning 400 precious clock cycles between bits! Surely those could be put to better use!

...

...

Time for a new backup of my drive, now that I recovered those files... last backup was July 12. Ironically, looking at the notes, that was when my PiZero (my main compy's brains) decided to bite the dust. Well, several days prior, when I was... trying to run a friggin' backup.

Thankfully, I had a new PiZero sitting around for another project which has yet to happen... that project was years in planning, but pretty much started and ended the night my cat disappeared.

So, once I finally got the new Zero soldered-up, I immediately ran that backup that killed the previous one. And... apparently that was my last backup until today.

I've done a lot /on/ the TI-86 in that time... And have done many backups of it onto the pi's hard drive. But, really, I think that's about all that's new on there... besides, of course, recovering those zeroed files, which, if I did it right, the backup system won't even notice, heh. Except that I also hardlinked those files into a new folder so I could easily see what I'd recovered. Which isn't really so easy, the wounds still being too fresh.

...

Huh, I had something else TI86/UART-related to mention, but I lost it...

Back to UARTing!

...

LO friggin L

Backup failed.

Right, forgot my battery appears to have a dying cell. 10V is too low for the inverter.

Hah!

Next time I feel like idling the engine for 45min, then... I knew there was a reason I was kinda stoked about a USB-powered backup drive. But /that/ first backup will take /many/ hours... days, probably... and have to be done in many steps. And, I wasn't planning to use my same configuration for it: since it's the same size as my main drive, I was planning to essentially mirror it, rather than keep old backups... so, more days setting up the new config. Heh!

...

JESUS!

22MIN is too fast...

It was 45 last I recall.

Now, comparing to my July backup, there are nearly exactly ONE MILLION FEWER FILES.  We're talking: it was some 1930 thousand before, and is now 990 thousand.

...

What did I DO?!

...

Gather thoughts...

OK, I vaguely recall doing a bunch of cleanup... Specifically: #OMNI 4 - a Kaypro 2x Logic Analyzer had dozens of copies of each sector in individual files... the disk was 400k, that's 400,000/512, or some thousand files per attempt... but, surely I didn't have a thousand copies... a dozen, dozens maybe, but nowhere near a thousand.

Shit, where's all my files?!
