1kb 6502 Image Compression

Attempt to compress a 1kb image such that the decompression algorithm, display logic, and the compressed image itself all fit in 1kb.

For the 1kb challenge I intend to create an application that decompresses and displays an image, where all of the code and the compressed image itself fit in 1kb of ROM. The project is based on my 6502 portable game system, Dodo, and the code will all be written in 6502 assembly of course!
The OLED screen in Dodo uses the SSD1305 controller, which is purely a bitmap display and, according to the block diagram in its datasheet, contains no internal ROM, so every pixel shown has to come out of the 1kb budget.

Dodo has a monochrome 128x64 pixel display, which happens to be exactly 1kb of uncompressed bitmap data: 128 x 64 = 8192 pixels at 1 bit per pixel is 1024 bytes. I thought it would be fun to see if I can take a 1kb image, compress it, and fit the decompression code and the display code into the leftover space. For example, if I achieve 50% compression, that leaves 512 bytes for code.

Here are the tasks that need to be done:

- Experiment with different compression types, for instance RLE vs. Huffman, to weigh the tradeoff between the compression ratio and the leftover space available for software (see the run-extraction sketch after this list).

- Update my web IDE to allow assembly programming in addition to C. Here is the web IDE displaying the image I intend to compress: https://play.dodolabs.io/?code=f308275e

- Prototype the software in the web IDE

- Take the code and massage it so that the system firmware for Dodo can be abandoned and replaced with just this binary.

- Pray that this idea fits in 1kb!
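
As a rough illustration of what the run-length side of the first task involves, here is a minimal sketch in C of pulling (color, length) runs out of a 1-bit-per-pixel framebuffer. The buffer layout and the names used (`fb`, `WIDTH`, `HEIGHT`) are my own assumptions for illustration; the real implementation will be 6502 assembly.

```c
#include <stdint.h>

#define WIDTH  128
#define HEIGHT  64                      /* 128 x 64 / 8 = 1024 bytes of bitmap */

/* Read pixel (x, y) from a 1bpp framebuffer, assuming row-major order
 * and MSB-first bits within each byte (an assumed layout). */
static int get_pixel(const uint8_t *fb, int x, int y)
{
    return (fb[(y * WIDTH + x) / 8] >> (7 - (x % 8))) & 1;
}

/* Scan the bitmap in raster order and collect (color, length) runs.
 * Returns the total number of runs; writes at most max_runs of them. */
static int extract_runs(const uint8_t *fb, uint8_t *colors, int *lengths,
                        int max_runs)
{
    int n = 0;
    int cur = get_pixel(fb, 0, 0);      /* color of the current run */
    int len = 0;                        /* its length so far        */

    for (int i = 0; i < WIDTH * HEIGHT; i++) {
        int p = get_pixel(fb, i % WIDTH, i / WIDTH);
        if (p == cur) {
            len++;
        } else {
            if (n < max_runs) { colors[n] = (uint8_t)cur; lengths[n] = len; }
            n++;
            cur = p;
            len = 1;
        }
    }
    if (n < max_runs) { colors[n] = (uint8_t)cur; lengths[n] = len; }
    return n + 1;
}
```

The run list is what the experiments above (RLE, Huffman, LZW over run lengths) would consume; the better those runs compress, the more of the 1kb budget is left for code.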

  • Bummed I ran out of time :(

    Peter Noyes • 01/06/2017 at 04:08 • 1 comment

    Unfortunately I didn't have much time over the holiday break, so I didn't get to finish this. I did, however, implement a Huffman encoder, and funny enough the run-length LZW encoding still comes out ahead.

    I plan on finishing this at some point because image compression is still useful for my game system.

  • First Crack: LZW

    Peter Noyes • 12/13/2016 at 06:52 • 5 comments

    Based on all of the initial comments on this project, I decided that LZW is a good initial approach. I implemented an encoder and the results are in: 853 bytes, or 16.6% smaller, for the image above. The big question is: is this good enough? Can I implement a decoder and an image display routine in the remaining 171 bytes??

    To test that things are really working, I also compressed the following image:

    As expected, this one is significantly smaller, coming in at 163 bytes, or 84% smaller.

    The LZW algorithm I implemented compresses run lengths rather than working at the byte level. My intuition tells me that this should give better results, but I might as well see what the difference is. The scheme is as follows (a sketch of the decode side follows the list):

    • 6-bit initial code size, giving a 64-entry starting dictionary (codes 0-31 encode black runs, 32-63 encode white runs).
    • The alphabet therefore encodes a maximum run length of 32 pixels.
    • That leaves room for 192 learned dictionary codes while the code length stays at 8 bits (256 codes total).
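
    To make those parameters concrete, below is a minimal C sketch of the matching decode side: a classic LZW decoder whose dictionary entries are stored as (prefix code, appended symbol) pairs. This is my own illustration, not the actual Dodo code (which is 6502 assembly); `next_code()` and `emit_run()` are assumed helpers for the bitstream reader and the display routine, and details such as bit order and the exact code-widening schedule are guesses.

```c
#include <stdint.h>

#define ALPHABET  64     /* codes 0-31: black runs of 1-32 px; 32-63: white */
#define DICT_MAX 256     /* 64 literal codes + 192 learned sequences        */

/* Learned entries: sequence(c) = sequence(prefix[c]) followed by suffix[c]. */
static int     prefix[DICT_MAX];
static uint8_t suffix[DICT_MAX];

extern int  next_code(void);              /* assumed: reads the next variable-
                                             width code (6 bits at first)    */
extern void emit_run(int white, int len); /* assumed: draws one 1-32 px run  */

/* Output the run sequence for `code` by unwinding its prefix chain.
 * Returns the first symbol of the sequence (needed for the KwKwK case). */
static uint8_t output(int code, uint8_t *stack)
{
    int top = 0;
    while (code >= ALPHABET) {            /* walk back through learned codes */
        stack[top++] = suffix[code];
        code = prefix[code];
    }
    uint8_t first = (uint8_t)code;        /* a literal: one black/white run  */
    emit_run(first >= 32, (first & 31) + 1);
    while (top--)
        emit_run(stack[top] >= 32, (stack[top] & 31) + 1);
    return first;
}

void lzw_decode(int code_count)           /* assumes a well-formed stream    */
{
    uint8_t stack[DICT_MAX];
    int next = ALPHABET;                  /* first free dictionary slot      */
    int prev = next_code();               /* first code is always a literal  */
    uint8_t first = output(prev, stack);

    for (int i = 1; i < code_count; i++) {
        int code = next_code();
        if (code >= next) {               /* KwKwK: define the entry first   */
            prefix[next] = prev;
            suffix[next] = first;
        }
        first = output(code, stack);
        if (next < DICT_MAX) {            /* learn: prev sequence + new first */
            prefix[next] = prev;
            suffix[next] = first;
            next++;
        }
        prev = code;
    }
}
```

    The appeal for a 171-byte budget is that the decoder stores no dictionary in ROM: it rebuilds the encoder's 192 entries as it goes, and the per-entry state is just a prefix code and one symbol.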

    I also took an initial stab at using the PNG UP predictor, which basically subtracts the previous line from the current one, but it didn't make much of a difference. I will see if I can eke out any better performance on my current trajectory, but I need to balance the complexity of the decode algorithm against the compression ratio. Maybe arithmetic coding is next...
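
    For reference, the UP predictor is tiny; here is a sketch of it on the 128x64 bitmap. The log doesn't say whether the difference was taken bytewise or bitwise; this version uses XOR, which for bi-level data plays the role subtraction plays in PNG, leaving bits set only where a pixel differs from the scanline above.

```c
#include <stdint.h>

#define ROW_BYTES 16                    /* 128 px / 8 */
#define ROWS      64

/* UP predictor for a 1bpp bitmap: each row becomes its difference (XOR)
 * with the row above, so regions that repeat vertically turn into zeros.
 * Undo by XORing each filtered row with the previous reconstructed row. */
void up_filter(const uint8_t *in, uint8_t *out)
{
    for (int x = 0; x < ROW_BYTES; x++)
        out[x] = in[x];                 /* first row has no predecessor */

    for (int y = 1; y < ROWS; y++)
        for (int x = 0; x < ROW_BYTES; x++)
            out[y * ROW_BYTES + x] =
                in[y * ROW_BYTES + x] ^ in[(y - 1) * ROW_BYTES + x];
}
```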


Discussions

richard broadhurst wrote 05/26/2021 at 14:10

I haven't tried to compress your image, but my LZ77 (I think - I made it up and then found LZ) decompressor is 77 bytes and needs 5 bytes of zero page, so it wouldn't need to compress much ;)


Yvan256 wrote 12/20/2016 at 21:15

What about encoding/decoding the image in two or more passes? Make a tiny dictionary of 8x8 or smaller blocks (all off, all on, dithered, etc.), and then RLE/LZW/etc. the leftover blocks/pixels that don't fit the dictionary. As for the image you are using, I know that pure on/off pixels make it harder to convert, but I'm not quite sure what it is supposed to be. I see the text and the map of the USA (I think), but otherwise there seems to be a lot of noise. Maybe cleaning it up a little would help with the image compression?


Zorbeltuss wrote 12/02/2016 at 23:55

Since I assume the images won't be made strictly for the Dodo, you could optimize a color dithering algorithm to output data that is easier to compress with whatever algorithm you choose (or vice versa). Images designed specifically for the Dodo will be tougher, but having an algorithm in mind when making the image can give savings too.


Yann Guidon / YGDES wrote 12/02/2016 at 17:38

Good luck in your fight against entropy!

B&W pictures are some of the hardest to compress, and #Recursive Range Reduction (3R) HW&SW CODEC is not adapted to this task. Look at the fax algorithms; I think they use arithmetic coding or range coding.


Peter Noyes wrote 12/02/2016 at 18:00

Thanks! I am well aware of bitonal image compression. At work I have implemented both CCITT and JBIG2 decoders. CCITT is the fax encoding, and it is basically Huffman with some additional tricks, such as coding against the difference from the previous line of data.

JBIG2 uses arithmetic encoding on top of some higher-order constructs for glyphs and halftoning.

The image I am trying to compress will be based on a threshold of foreground vs. background, and should result in some pretty good runs of non-changing data. That is how I generated the 1kb challenge image. I am not going to simply dither the image; a dithered image will suffer much worse compression.

I am going to try just a simple run-length encoding first to see where I am at, but I think all I have room to code is, at best, some sort of simple Huffman scheme. Hopefully I don't need to go down the route of arithmetic encoding, although I guess that would be impressive!


Yann Guidon / YGDES wrote 12/02/2016 at 18:15

Thanks for the overview of your options :-)

I'd favor arithmetic coding because it might give you a better result for not-so-much-larger code...

Any chance of a modified LZW (a bit like in GIF but with a different alphabet)?

Oh, and since there are only 2 tones, you just have to encode the run lengths (in Huffman?), am I right?


Peter Noyes wrote 12/02/2016 at 18:40

Why can't I reply to your last comment??

Anyway,

Actually, LZW is a great idea. I might actually try that first!

Yeah, 2 tones requires just the run length to be encoded.


Yann Guidon / YGDES wrote 12/03/2016 at 17:12

You can't reply because HaD limits the depth of threads to 3; just reply to the parent post.

For the LZW, try using 2 contexts, one for white pixels and one for black?

On second thought, it's probably not a good idea... But I tried to code an LZW once, and when I "got the trick", I understood why it was used so much. It's an easy algorithm once the data layout is figured out :-)

In your case, you can create an escape code for codes > 7 bits, saying "continue and don't change the length" for very long runs. This lets you code 128px-long runs and keep a symbol density that is close to the original pixel size...

For highly dithered pictures, I wonder about the performance of LZW, though. The original algorithm combines an existing sequence with a new symbol, but can't combine 2 existing sequences, which makes it quite inefficient. That enhancement is not hard to do and would make a huge difference in your case.
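
A minimal sketch of the escape-code idea described above, as I read it (the symbol values, MAX_RUN, and the put_symbol() writer are my assumptions, not anything from the project):

```c
#define MAX_RUN 32                 /* longest run a plain code can express  */
#define ESCAPE  0x3F               /* hypothetical "full chunk, same color,
                                      more follows" symbol                  */

extern void put_symbol(int sym);   /* assumed bitstream writer */

/* Encode one run of `len` pixels (len >= 1) in white or black. Runs longer
 * than MAX_RUN are split into escape-prefixed full chunks, so a 128px run
 * costs 4 symbols instead of overflowing the alphabet. */
void encode_run(int white, int len)
{
    while (len > MAX_RUN) {
        put_symbol(ESCAPE);
        len -= MAX_RUN;
    }
    put_symbol((white ? 32 : 0) + (len - 1));
}
```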

