Discovered something interesting today that could be quite useful to remember for the future... surely it would've come in handy in the past!

I've been doing digital circuits and embedded software design for well over two decades... so I'm not a noob... and yet I'm at a complete loss as to how something so obvious has eluded me all these years.

Though, I'm not really sure how to put it into words... maybe a table is coming...

1 bit can have two values (0, 1)

2 bits can have 4 (00, 01, 10, 11)

8 bits: 256

I've got them all memorized up to here. From there on I usually grab a calculator (2^numbits) or use some tricks... e.g.

10 bits: has two bits more than 8 bits, 8 bits is 256, two bits is 4, so 10 bits is 256 times 4 = 1024

I always thought it a bit interesting that 2^10 is 1024... the connection between 10 and the binary equivalent of 1000, or 1k.

But, I kinda likened it to the interesting quirks of 2 that don't seem to occur with any other number, e.g. 2+2=4=2×2=2^2... but none of that works for 3 nor 1. So, surely 2^10=1k is a fluke, right? I always just chuckled at 2^10 when I ran into it, then moved on.

Today I needed 2^18...

OK... 1024 * 256 = 256k. Hey wait a minute...

Surely that doesn't contin.... uh... why wouldn't it?

Hey, first off, I'm no moron... well, I probably am... but I can spot patterns /that/ glaringly obvious if they stare me in the face for 20 years. I have an excuse: I /usually/ only need these sorts of numbers when dealing with address bits... and, usually, we do that in hex... and usually address bits are numbered starting with 0.

My FLASH chip has 18 address input bits. Its last address bit is A17. 2^17 is 256K?! Wait a minute... no... 2^18, 18 bits used for addressing. gah. Ok, move on.

So, if you're looking at a device with A19 as its highest address bit, then it stores 2^19=mathTricks=512K... but, wait, that stupid datasheet says 1MB. GAH! I forgot A0, again. It has 20 address bits, it's 1MB... interesting how 2^20 gives a nice round number in binary terms... I mean, it coulda started with 6 or 3 or... I mean, look at the highest value that can be stored in two bytes... 65535? Really? I mean, even in binary terms it's weird, 16 -> 64k? Doesn't really correspond with anything other than 2 bytes.
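That forgot-A0 off-by-one is easy to pin down in a couple of lines (the function name is mine, just for illustration):

```python
# n address lines A0..A(n-1) select 2^n locations -- the highest line is
# numbered n-1, because the numbering starts at A0.
def capacity_from_highest_bit(highest):
    """Bytes addressable when A<highest> is the top address line."""
    return 2 ** (highest + 1)            # the +1 is A0 -- don't forget it!

print(capacity_from_highest_bit(17))     # A17 -> 18 bits -> 262144 (256K)
print(capacity_from_highest_bit(19))     # A19 -> 20 bits -> 1048576 (1MB)
```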

But:

10 -> 1K, 1024 ~1000 (if you sell hard drives)

20 -> 1M, 1024*1024=~1,000,000

30 -> 1G, 1024^3=~1,000,000,000

40 -> 1T, 1024^4=~1,000,000,000,000

WAIT... So 2^n... 2^(a*10+b)...

2^42 ... 4 tens, 2 ones

4 tens is 1024^4... that's... 1, kilo, 2 mega, 3 giga, 4 tera... ok...

2 ones is 4... 2 bits is 4 values

42 bits, then is 4Terabytes? It's /that/ easy?!

So, one more time... 2^ab, where ab is a decimal number... a is the number of tens, b the number of ones... maybe we should use 2^xi, instead... or XI
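The whole tens-and-ones split can be sketched in a few lines of Python (`quick_pow2` and the prefix table are my names, just to illustrate the trick):

```python
# 2^(10*a + b) = (2^10)^a * 2^b = 1024^a * 2^b: split the exponent's decimal
# digits -- the ones digit gives a small power of two, the tens digit picks
# the binary "thousands" prefix.
PREFIX = {0: "", 1: "K", 2: "M", 3: "G", 4: "T"}

def quick_pow2(n):
    """Render 2^n the mental-math way: ones -> small power, tens -> prefix."""
    tens, ones = divmod(n, 10)
    return f"{2 ** ones}{PREFIX[tens]}"

print(quick_pow2(42))   # 4T   (exactly: 2^42 == 4 * 1024^4)
print(quick_pow2(18))   # 256K (exactly: 2^18 == 256 * 1024)
```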

64 bits, X=6, I=4 gives:

6 whatever they're called... prefixes... 1 Kilo, 2 Mega, 3 Giga, 4 Tera, 5 uhh... Peta? 1024T, 6 whotheheckknows (Exa, apparently), 1024*1024T... we'll call it 1FollowedBy6setsOf3zeroesWhichAreReally1024s

<s>And 2^6=1,2,4,8,16,32.

So 64bits has 32F6 values.</s>

Friggin' don't count "1" in 1, 2, 4, 8, 16, 32, 64... 1 bit has 2 values, not 1. See the same mistake, again, below! 64F6 values.

Ok, maybe it's not so simple... but for kilo, mega, and giga, I think I can wrap my head around it pretty quick:

32bits 2bits=4vals, 3: k,m,g. 4GB

16bits 6bits->32vals, .... Wait a minute...

2,4,8,16,32... no 64. Friggin "bit 0 has two values". 10, of course, is k... 64k. (Thought I had those memorized?)

18bits, 8->256, 10->k

22bits, 4 M

27 bits, 128... M

31 bits, handy for knowing whether you'll approach the limit of a signed 32bit integer... 3=G~=1,000,000,000~=10^(3*3), 1bit->2vals, so roughly (and with a safe margin) the maximum value you can store is 2 billion.

24 bits... There've been numerous 8-bit projects of mine where 65535 wasn't nearly enough, but 32 bits was a tremendous waste... think not only of the memory that'll never be used, but also the extra addWithCarry, loadByte, etc. /every time/ that variable is accessed.

Say I'm counting stepper steps; the thing's accurate to 10,000 steps per inch, and my design is currently 11 inches, for a piece of paper. 110,000 is too big for a uint16. And who knows, I might use the same stepper and pulley and software for a huge plotter later... I can't imagine having room for a 100-inch table, but that'd be 1 million steps.

24-bit integers might do the job nicely, but it might be cutting it close... lessee, 2->1,000,000, 4->16, so 16,000,000... that should suffice for even my craziest designs later down the line, and leave plenty of room for even a signed int24, should I decide to have a relocatable coordinate system. int24_t didn't exist in my architecture's C compiler when I had this idea, but it looked worth making that addition, being that the savings wouldn't only affect the current project but /also/ almost any I'd come up with down the line. Is int24 big enough? No paper/pen, no calculator: 20 bits is 1 million (plus change for headroom), 3 more bits is really nice headroom.
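A quick sanity check on those integer-size estimates (nothing project-specific here, just the exact ranges; the 10,000 steps/inch figure is from the text above):

```python
# Exact ranges behind the mental estimates.
INT32_MAX = 2 ** 31 - 1      # 2,147,483,647 -- "roughly 2 billion" checks out
INT24_MAX = 2 ** 23 - 1      # 8,388,607     -- "~ +/-8 million" checks out

steps_per_inch = 10_000
print(INT24_MAX // steps_per_inch)   # 838 -> ~838 inches of signed travel
```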

The TI-86's CPU has up to A19; counting A0, that's 20 address bits... 1M of address space. It has 4 separate chip-selects, which are essentially two additional address bits, demultiplexed... so 22 address bits, 4MB of addresses.

I dunno, a lot of rambling. But I think it's pretty groovy when numbers do interesting things like that... (Who memorizes the 9s in the times-table when it's as simple as: the other number minus 1 in the tens place, then whatever adds to that to make 9 in the ones place? 8*9: 72. 5*9: 45. And doing it that way has, for me, inherent "checking" built in... does 7+2=9? 4+5?)

And, here it crosses base-system boundaries, relating binary and decimal. And does so through exponents, too! WOW!

.

I was going to start coding my FLASH today... A18 is used to toggle /OE. "Interesting that it happens to be 8, 'cause this is a 256(k) device, and yahknow, 8 bits is 256..."

Oh yeah, and it keeps getting really confusing which address bits are "real" (come straight from the z80) and which are used for mapping... and thus, outside the z80 VLSI are controlled by the internal mapping "ports".

So, here we go: each page is 16k.

That's... this is backwards, bear with me... 1k is 10 address bits. 16 times that is 4 more bits. 14 address bits are "real" (come straight from the z80). But, of course they're zero-based, so A0-A13 come from the z80, and A14 and up come from the mapping ports. Is that right? 14bits 4->16 10->K. Nice. Now I think I can remember (or easily verify without my fingers or calculator when I forget) A14 is the first that's mapped, not A15. And A13 is the last that's real, not A14.
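The page math is easy to double-check (the function name is mine; the 16K page size is the TI-86's):

```python
# 16K = 16 * 1024 = 2^4 * 2^10 = 2^14, so 14 address bits stay inside a page:
# A0-A13 come straight from the Z80, A14 and up come from the mapping ports.
PAGE_SIZE = 16 * 1024
PAGE_BITS = PAGE_SIZE.bit_length() - 1   # 14

def split_address(addr):
    """Split a flat address into (page number, offset within the page)."""
    return addr >> PAGE_BITS, addr & (PAGE_SIZE - 1)

print(PAGE_BITS)                # 14 -> A13 is the last "real" bit
print(split_address(0x24000))   # (9, 0): start of the tenth 16K page
```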

Bam.

How'd this slip by me for so long?!

10's are Kilo

20's are Mega

30's are Giga

40's are Tera

And, of course:

1 is 2

2 is 4

3 is 8

4 is 16

5 32

6 64 (!!!)

7 128

8 256

9 512

49 address bits is 512 Terabytes.

23 data bits (signed int24) maxes-out at ~ +/-8million with room to spare.

But, don't forget most bit-positions start with "bit 0"... count that one!

...

Somewhere in there I feel like there might be a trick for algorithmically converting between base2 and base10, e.g. for printing out variables without needing to successively divide by ten (in reverse!). But, right now I'm apparently in ramble-mode. (With one thumb, no less).

...

Also, it seems to me this is similar to musical octaves, wherein the same note one octave higher is twice the frequency. I was stuck on that for a few hours the other day... apparently there was a movement to re-base the pitch standard on 256Hz, and another to try to divide the in-between notes more mathematically... what if they based it around 1024Hz, instead?

## Discussions


Yup - it's /that/ easy! Neat eh?

And if you need a tighter approximation, add 2.5% per 10^3:

2^10 = 1000 + 2.5%

2^20 = 1M + 5%

2^30 = 1G + 7.5%

2^40 = 1T + 10%
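That correction is easy to verify (a quick check of my own, not part of the original comment):

```python
# How good is "add 2.5% per factor of 1000"? Compare against exact 2^n.
for tens in range(1, 5):
    n = 10 * tens
    estimate = 1000 ** tens * (1 + 0.025 * tens)
    error = estimate / 2 ** n - 1
    print(f"2^{n}: estimate {estimate:.0f}, off by {error:+.2%}")
# Every estimate lands within a fraction of a percent of the true value.
```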


genius!

But, I gotta remember to be careful with that one; in most of the cases I run into, it's better to underestimate... (e.g. deciding an integer size... hah, nevermind hard drive sizes, that's another ballgame!).

I think I can remember 2.4% easily enough, but that'd be calculator-time... 2.5% is nice, since 25% is 1/4, and just divide that by ten.

Slick trick. Thankya!

But, wait... is it still 2.5% for 2^15? I'd better check...

Erm... 2.4% is spot-on for 2^10, 2.5% is close enough for many needs, and 5% is a good approximation for the 2^20s, 7.5% for the 2^30s, and so on... but it isn't so easy as 2.4% -> 4.8% -> 7.2% being exact. Ah well.

This is quite handy, thankya!

I can't believe I never caught these things before!


Yer welcome!

If you haven't figured it already, estimating

2^15 = 32000 + 2.5%

works because it's the same as (1000 + 2.5%) [estimate] * 32 [exact].

Same for 2^{10...19}. Then the new estimate 2^20 = 1M + 5% sets the error for 2^{20...29}. The increment from 2^19 to 2^20 is not a doubling but ~+95%, which introduces the next ~2.5% undershoot.

But then thinking about it too much defeats the idea of quick estimation.


Yes, that's a good application of approximating (1+x)^n by 1 + nx, as the rest of the terms in the binomial expansion are much less significant when x << 1.

In daily life, for example, if you're unfortunate enough to have $100 in a savings account paying only 0.5% p.a. compounded, you can quickly estimate that after 3 years you'll have $101.50.
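The savings-account example in numbers (just re-running the arithmetic above):

```python
# (1 + x)^n ~= 1 + n*x when x << 1: $100 at 0.5% p.a. for 3 years.
principal, x, n = 100.0, 0.005, 3
exact = principal * (1 + x) ** n      # compounded: 101.5075...
approx = principal * (1 + n * x)      # linear estimate: 101.50
print(round(exact, 4), round(approx, 2))
```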

A similar approximation is sin(x) ≈ x and tan(x) ≈ x when x is close to 0 and expressed in radians. It comes out of the Taylor series for sin and tan.

https://www.efunda.com/math/taylor_series/trig.cfm


Yup. And in this case fudging x to 0.025 for convenient estimation conveniently improves the estimate for n>2. ("this case" = estimating 2^many)

...hmm...

My math brain has apparently already gone to sleep. How, generally, to figure a fudge factor f for x that minimizes 1+nfx estimation error?...for a given range of n?
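One brute-force way to answer that (a hypothetical sketch of my own, not from the thread: scan candidate fudge factors and keep the one with the smallest worst-case error):

```python
# Minimize the worst-case relative error of the estimate 1 + n*f*x
# against the exact (1 + x)^n, over a chosen range of n.
def worst_error(f, x, ns):
    return max(abs((1 + n * f * x) / (1 + x) ** n - 1) for n in ns)

def best_fudge(x, ns):
    candidates = [0.5 + 0.001 * i for i in range(2001)]   # scan f in [0.5, 2.5]
    return min(candidates, key=lambda f: worst_error(f, x, ns))

# The 2^10 case: x = 0.024 exactly, n = the tens digit, say 1 through 4.
f = best_fudge(0.024, range(1, 5))
print(f)   # lands a little above 1, i.e. nudging 2.4% upward does help
```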
