
# image compression question


## Recommended Posts

An application that I'm currently working on will have the need for many small images to be loaded and memory is a large concern. What can anyone recommend for image formats. I've been looking at wavelet compression, but from what I understand it isn't a completed standard (jpeg 2000). Is jpeg and gif the way to go?

##### Share on other sites
If memory is a concern then JPEG/GIF/etc. won't help you much, since they save space on disk, not in memory. You still have to decompress them before use, and don't even think about decompressing them on the fly. The only viable option I can think of right now is the DXTC (DDS) format. It's used by graphics cards to save memory and can be decompressed very fast.

##### Share on other sites
Bah, the above poster was me. Why does it keep logging me out?

##### Share on other sites
Quote:
 Original post by toxy
 Is jpeg and gif the way to go?

No, it's the way not to go. JPEG works badly for games and GIF is generally wonky; TGA and PNG are pretty OK, and DDS works if you want image compression in texture memory.
I don't think your memory problems are as much of a concern as they are on the PS2, where you have about 750 KB of texture memory.
Remember, if you are using recent hardware, you could use NPOT textures and make them smaller to save on texture memory.

##### Share on other sites
I've used a heightmap of Earth in my planet engine. It's 43200x21600x16 bit, so it's about 1 GB uncompressed. I've implemented a wavelet compression scheme (more precisely, first a 5/3 wavelet transform, then embedded zerotree coding and finally Huffman coding). That's more or less JPEG 2000. For the Huffman coding, I use zlib. The zerotree coding is quite straightforward to implement, and the wavelet transform I looked up in a textbook. The compressed heightmap is 20 MB (a 50:1 compression ratio). You can select an arbitrary compression ratio at the cost of image quality; at 50:1, you don't see any compression artifacts. The scheme could also be used for images.

I've split the heightmap into chunks of size 64x64, which are all compressed separately. I load the whole compressed heightmap (the 20MB) into memory and then I decompress single chunks on-the-fly. Decompression of a single chunk takes only a few microseconds, which is fast enough for my purpose.
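For reference, a minimal sketch of the 1-D building block of that scheme: one forward lifting pass of the integer 5/3 wavelet (the transform used in lossless JPEG 2000). The 2-D transform, zerotree coding, and entropy coding layers described above are omitted:

```c
/* One forward lifting pass of the integer 5/3 wavelet.  After the pass,
   odd indices hold the high-pass (detail) band and even indices the
   low-pass band.  n must be even; edges use mirror extension. */
static void lift53_forward(int *x, int n)
{
    int i;
    /* predict: each odd sample minus the average of its even neighbours */
    for (i = 1; i < n - 1; i += 2)
        x[i] -= (x[i - 1] + x[i + 1]) >> 1;
    x[n - 1] -= x[n - 2];                 /* mirrored right edge */
    /* update: even samples become the low-pass band */
    x[0] += (x[1] + 1) >> 1;              /* mirrored left edge */
    for (i = 2; i < n; i += 2)
        x[i] += (x[i - 1] + x[i + 1] + 2) >> 2;
}
```

Applied recursively to the low-pass band, and along both axes for a 2-D chunk, this concentrates the signal's energy into few coefficients so the zerotree coder can throw most of the rest away.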

Which format you finally choose depends on your requirements:
- If you have a HUGE image (>1GB) and decompression doesn't need to be too fast, you can use wavelet compression or JPEG.
- If you need to be very fast, but the data is not too big, use DXTC. DXT1 stores 4 bits per pixel, so it has a 6:1 compression ratio for 24-bit RGB images.
- If your image is cartoon-like (lots of constant areas), you may use GIF or simply run-length encoding (RLE). GIF is lossless compression and has very bad ratios for natural images. Moreover, it can only compress images with 256 colors.
- PNG is rather slow, and I don't know if it's lossy or lossless. There is a lossless version, but someone told me it can also do lossy compression.

##### Share on other sites
Quote:
 PNG is rather slow, and I don't know if it's lossy or lossless.

PNG is lossless.

A definite recommendation depends on the exact requirements of your application.

Lots of small images can be compressed using JPEG, decompression is fast enough for realtime applications in that case. You can try different compression ratios offline (e.g. using Photoshop or Paint Shop Pro) to get an idea of the quality tradeoff. From my experience, compression factors of 1% to 10% are a reasonable compromise.

TGA is a bad choice - simple RLE won't reduce the image size much due to the non-linear pixel correlations. I'd suggest looking at vector quantisation. The compression is very time-consuming, but decompression is very very fast.

So if you can afford a big amount of offline pre-processing and only demand fast decompression, go with vector quantisation.
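For illustration, a minimal sketch of the decompression side of vector quantisation, assuming a hypothetical layout: the image is split into fixed-size vectors (here 2x2 RGB blocks, 12 bytes each), an offline encoder has already built a codebook of up to 256 representative vectors, and the compressed stream is one codebook index per block:

```c
#include <string.h>

#define VEC_BYTES 12  /* 2x2 pixels * 3 bytes (RGB) per vector */

/* Decompression is a single table lookup per block, which is why VQ
   decompression is so fast: no arithmetic, just copies. */
static void vq_decompress(unsigned char codebook[][VEC_BYTES],
                          const unsigned char *indices, int nblocks,
                          unsigned char *out /* nblocks * VEC_BYTES bytes */)
{
    int i;
    for (i = 0; i < nblocks; ++i)
        memcpy(out + i * VEC_BYTES, codebook[indices[i]], VEC_BYTES);
}
```

All the expensive work lives in the offline encoder that clusters the image's blocks into the codebook, which matches the "lots of pre-processing, instant decompression" trade-off described above.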

Good luck,
Pat.

##### Share on other sites
Thank you everyone very much. I suppose I should be more specific about my requirements. First of all, I'm working on a Pocket PC, so that's the reason for the memory concern. However, it's mostly a space concern, not so much one for speed, since I'm not really doing much animation. I'm going to be loading around 20 photo-style images (non-cartoon, but with the need for transparency) when the user presses a specific UI button, and those 20 images will be chosen on the fly from a "large" file of images. So I think I am really only concerned about storing as many images as I can in this "large" file.

So I guess that rules out jpeg since it can't handle transparency. I've checked out everyone's ideas and this is where I'm at:

DXTC - It seems that this would mainly benefit in speed - is that correct?

png - It seems that png would be overkill since I've read that the compression advantage over gif is not that much and I don't think I really need any of its other qualities.

Vector quantization - It sounds good, but I would only have a limited amount of time to implement it and I'm worried it might take quite a bit of time.

I'd appreciate it if anyone has anything to say - if not thanks for the help so far very much.

##### Share on other sites
You can use alpha transparency with JPEGs!
Just split your image into an alpha channel and RGB color data.
Process the color data with jpeglib, and either quantise the alpha channel to 4 bpp (provides a stable 2:1 compression ratio) and process it with Huffman and/or RLE, or process the alpha channel with DXTC compression (4:1 compression ratio) and post-process the result with zlib (fast and very good compression, too).
I'd go with JPEG + alpha. It really isn't that hard to do. Just separate alpha from color data and multiplex (combine) the channels after uncompressing.
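A minimal sketch of the 4 bpp alpha quantisation step described above; the nibble layout is my assumption of one reasonable packing, and the follow-on entropy coding (Huffman/RLE/zlib) is omitted:

```c
/* Quantise an 8-bit alpha channel to 4 bits per pixel and pack two
   samples per byte -- the fixed 2:1 ratio mentioned above. */
static void pack_alpha4(const unsigned char *alpha, int pixels,
                        unsigned char *packed /* pixels/2 bytes; pixels even */)
{
    int i;
    for (i = 0; i < pixels; i += 2)
        packed[i / 2] = (unsigned char)((alpha[i] & 0xF0) | (alpha[i + 1] >> 4));
}

static void unpack_alpha4(const unsigned char *packed, int pixels,
                          unsigned char *alpha)
{
    int i;
    for (i = 0; i < pixels; i += 2) {
        unsigned char hi = packed[i / 2] >> 4;
        unsigned char lo = packed[i / 2] & 0x0F;
        /* replicate each nibble into both halves so 0xF expands to 0xFF */
        alpha[i]     = (unsigned char)((hi << 4) | hi);
        alpha[i + 1] = (unsigned char)((lo << 4) | lo);
    }
}
```

The quantisation is lossy (the low 4 bits are dropped), but for UI transparency masks 16 alpha levels are usually plenty.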

##### Share on other sites
Thanks for the information. However, do you know of any resources that I can use to help with separating the alpha and RGB data? I'm new to this kind of thing, so more details would be ideal.

Thanks,
toxie

##### Share on other sites
It's very simple.

```c
#include <stdlib.h>

// pixel structure. change if colors are stored RGBA or ARGB
typedef struct _pixel_t {
    unsigned int  blue  : 8;
    unsigned int  green : 8;
    unsigned int  red   : 8;
    unsigned int  alpha : 8;
} pixel_t;

// split an interleaved RGBA image into separate rgb and alpha channels
void DeMultiplexImage(unsigned int *rgba, const int pixels,
                      unsigned char **rgb, unsigned char **alpha)
{
    int i, j;
    pixel_t *img = (pixel_t *)rgba;

    *rgb   = (unsigned char *)malloc(pixels * 3);
    *alpha = (unsigned char *)malloc(pixels);

    for (i = 0, j = 0; i < pixels; ++i, j += 3) {
        (*rgb)[j + 0] = img[i].red;
        (*rgb)[j + 1] = img[i].green;
        (*rgb)[j + 2] = img[i].blue;
        (*alpha)[i]   = img[i].alpha;
    }
}

// combine rgb and alpha channels back into one interleaved RGBA image
void MultiplexImage(unsigned char *rgb, unsigned char *alpha,
                    const int pixels, unsigned int **rgba)
{
    int i, j;
    pixel_t *img;

    *rgba = (unsigned int *)malloc(pixels * 4);
    img = (pixel_t *)(*rgba);

    for (i = 0, j = 0; i < pixels; ++i, j += 3) {
        img[i].red   = rgb[j + 0];
        img[i].green = rgb[j + 1];
        img[i].blue  = rgb[j + 2];
        img[i].alpha = alpha[i];
    }
}
```

You can pass the rgb pointer to jpeglib and the alpha pointer to your S3TC compressor.
You see, RGBA values are interleaved for each pixel in most formats. So a pixel is a vector with three (e.g. color only) or four (color plus alpha) components of 8 bits each (other sizes are possible, e.g. 5 bits for red, 6 for green, 5 for blue = 16 bits total, and so on). You can either separate these components using bit operations (masking and shifting) or cast to a suitable struct (like I did in the code sample).
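For completeness, the masking-and-shifting alternative mentioned above, assuming pixels packed as 0xAARRGGBB in a 32-bit integer (struct bit-field layout is compiler-specific, so this is the more portable route):

```c
/* Extract the four 8-bit channels of one pixel with shifts and masks.
   Assumes the 0xAARRGGBB packing; adjust the shift amounts for other
   layouts (e.g. ABGR or 16-bit 5-6-5 formats). */
static void split_pixel(unsigned int p,
                        unsigned char *r, unsigned char *g,
                        unsigned char *b, unsigned char *a)
{
    *b = (unsigned char)( p        & 0xFF);
    *g = (unsigned char)((p >>  8) & 0xFF);
    *r = (unsigned char)((p >> 16) & 0xFF);
    *a = (unsigned char)((p >> 24) & 0xFF);
}
```

Recombining is the mirror image: `(a << 24) | (r << 16) | (g << 8) | b`.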

If you need an algorithm for S3TC, let me know. I have written some and they work very well. I can also provide you with a patched jpeglib that supports JPEG+Alpha in various modes (S3TC and JPEG grayscale compression for alpha channel).