Convert 32-bit image to 8-bit image

5 comments, last by eq 16 years, 11 months ago
Does anyone know how to convert a 32-bit image into an 8-bit image in C++? Or can anyone point me in the right direction?
If you're referring to an 8-bit palettized format, I've used NeuQuant successfully in the past. It's an image quantizer based on neural networks. Neat stuff.
Yes, I do mean an 8-bit palettized format, but I want the actual code. Do you know where I can see the actual code?

Am I in the wrong forums for this?
Quote:Original post by adawg1414
Yes, I do mean an 8-bit palettized format, but I want the actual code. Do you know where I can see the actual code?

Am I in the wrong forums for this?

The code is there, linked at the top of the page. NEUQUANT.C and NEUQUANT.H.
Thanks Zipster,
You seem to have an idea of what is going on. Is there any way you could explain the process to me in layman's terms? Is the compression process just stripping out color and going to grayscale? What's the use of this in games?
Looking up "Dither" on Wikipedia turns up a reasonable start on what the issues involved are.

Hope that helps,
Geoff
Quote:Is the compression process just stripping out color and going to grayscale?

Not at all!

Quote:explain the process to me in layman's terms?

At what level of detail? I assume you're aware of how images are stored?
I.e.: a color has three primary components: red, green and blue.
Normally you need eight bits of precision per component, i.e. 256 levels.
This makes each pixel 3 bytes in size; for an image of dimensions 800x600, that is 1,440,000 (800x600x3) bytes.

Now, if we use a palette to approximate the "true" image, we first create 256 colors, i.e. 768 (256 * 3) bytes.
Then for every pixel we store the index of the color to use (from the palette) instead of the full three-byte color; this requires only one byte (eight bits).
Now the size is 480,768 (768 + 800 * 600) bytes, i.e. about 33% of the original size!
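The byte counts above can be sanity-checked with a short sketch. The `IndexedImage` struct here is a hypothetical layout I made up for illustration; it just shows the two pieces a palettized image needs: the palette and one index byte per pixel.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical indexed-image layout: a 256-entry RGB palette plus
// one byte per pixel referencing a palette entry.
struct IndexedImage {
    uint8_t palette[256][3];       // 256 * 3 = 768 bytes
    std::vector<uint8_t> indices;  // one byte per pixel
};

// Size comparison for the 800x600 example above.
constexpr int W = 800, H = 600;
constexpr int trueColorBytes = W * H * 3;        // 24-bit RGB, no palette
constexpr int indexedBytes   = 256 * 3 + W * H;  // palette + index bytes
```

Compiling this, `trueColorBytes` works out to 1,440,000 and `indexedBytes` to 480,768, matching the numbers in the post.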
The only problem now is to choose what 256 colors to use for the palette.
This can be done in MANY ways; a simple one (but not very good) is to take the 256 most frequently used colors.
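That simple "popularity" method can be sketched in a few lines. `popularityPalette` is a hypothetical helper (not from NeuQuant or any library): it counts how often each 32-bit packed color occurs and keeps the 256 most frequent.

```cpp
#include <algorithm>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Hypothetical helper: build a palette from the most frequent colors
// in an image of packed 32-bit RGB pixels (the simple "popularity"
// method mentioned above -- easy, but not very good quality-wise).
std::vector<uint32_t> popularityPalette(const std::vector<uint32_t>& pixels)
{
    // Count occurrences of each distinct color.
    std::map<uint32_t, int> counts;
    for (uint32_t c : pixels)
        ++counts[c];

    // Sort colors by descending frequency.
    std::vector<std::pair<uint32_t, int>> byFreq(counts.begin(), counts.end());
    std::sort(byFreq.begin(), byFreq.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });

    // Keep at most 256 entries.
    std::vector<uint32_t> palette;
    for (size_t i = 0; i < byFreq.size() && i < 256; ++i)
        palette.push_back(byFreq[i].first);
    return palette;
}
```

Its weakness is exactly what eq implies: rare but visually important colors (e.g. a small bright highlight) get dropped entirely.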

For some uses you can apply dithering to further improve the quality of the output image; typically a form of error diffusion is used.
For textures this doesn't look too good IMO.
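To make "error diffusion" concrete, here is a minimal sketch of Floyd-Steinberg dithering on a single 8-bit channel, quantized to just two levels. This is an illustrative stand-in, not code from the thread: a real palettized quantizer would diffuse the per-channel error against the nearest palette color instead of a 0/255 threshold.

```cpp
#include <vector>

// Illustrative Floyd-Steinberg error diffusion on one channel,
// quantizing each pixel to 0 or 255 and pushing the quantization
// error onto not-yet-visited neighbors.
void ditherToBlackWhite(std::vector<int>& px, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int old = px[y * w + x];
            int neu = old < 128 ? 0 : 255;  // nearest of the two levels
            px[y * w + x] = neu;
            int err = old - neu;
            // Standard Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16.
            if (x + 1 < w)     px[y * w + x + 1]       += err * 7 / 16;
            if (y + 1 < h) {
                if (x > 0)     px[(y + 1) * w + x - 1] += err * 3 / 16;
                               px[(y + 1) * w + x]     += err * 5 / 16;
                if (x + 1 < w) px[(y + 1) * w + x + 1] += err * 1 / 16;
            }
        }
    }
}
```

The diffusion is why dithered output preserves average brightness: each pixel's rounding error is paid back by its neighbors. It's also why eq dislikes it for textures; under bilinear filtering the noise pattern can shimmer.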

Are you interested in this stuff, or do you just need a working solution?
I've written plenty of quantizers in the past (back when palettes were still useful), and I suggest that you start from the beginning, splitting the RGB cube using median cut or something.
Then move on to minimum-variance-based splits.
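For anyone wanting to follow that advice, here is a rough median-cut sketch under my own assumptions (hypothetical helper names, no weighting, no optimization): recursively split the pixel set along its widest channel at the median, then average each final box into one palette entry.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

using Color = std::array<uint8_t, 3>;

// Illustrative median-cut sketch (not production code): split the
// pixel range [lo, hi) along its widest channel at the median,
// recurse, and average each leaf box into one palette color.
// depth = 8 would yield up to 2^8 = 256 palette entries.
static void medianCut(std::vector<Color>& px, size_t lo, size_t hi,
                      int depth, std::vector<Color>& palette)
{
    if (depth == 0 || hi - lo <= 1) {
        // Leaf box: emit the average color of its pixels.
        long sum[3] = {0, 0, 0};
        for (size_t i = lo; i < hi; ++i)
            for (int c = 0; c < 3; ++c) sum[c] += px[i][c];
        long n = long(hi - lo);
        palette.push_back({uint8_t(sum[0] / n), uint8_t(sum[1] / n),
                           uint8_t(sum[2] / n)});
        return;
    }
    // Find the channel with the largest value range in this box.
    int widest = 0, bestRange = -1;
    for (int c = 0; c < 3; ++c) {
        int mn = 255, mx = 0;
        for (size_t i = lo; i < hi; ++i) {
            mn = std::min<int>(mn, px[i][c]);
            mx = std::max<int>(mx, px[i][c]);
        }
        if (mx - mn > bestRange) { bestRange = mx - mn; widest = c; }
    }
    // Partition around the median of that channel and recurse.
    size_t mid = lo + (hi - lo) / 2;
    std::nth_element(px.begin() + lo, px.begin() + mid, px.begin() + hi,
                     [widest](const Color& a, const Color& b) {
                         return a[widest] < b[widest];
                     });
    medianCut(px, lo, mid, depth - 1, palette);
    medianCut(px, mid, hi, depth - 1, palette);
}
```

The minimum-variance variant eq mentions differs mainly in the split choice: instead of always cutting at the median of the widest axis, you pick the box and split point that most reduce total color variance.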
Quote:If you're referring to an 8-bit palettized format, I've used NeuQuant successfully in the past. It's an image quantizer based on neural networks. Neat stuff.

I agree that NeuQuant is one of the easiest solutions out there (implementation and performance); it's not the best in all cases, but good enough in most cases.

Many quantizers work in a different color space than RGB, using biased minimum variance splits in YUV is one of the better approaches IMO.
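Splitting in YUV just means converting each pixel before feeding it to the quantizer. A minimal sketch of that conversion, assuming the classic BT.601 luma weights (the thread doesn't specify which YUV variant eq means):

```cpp
#include <array>
#include <cstdint>

// Hypothetical helper: convert an 8-bit RGB triple to floating-point
// YUV using BT.601 luma weights, so a quantizer can split in a space
// where distances track perceived difference better than raw RGB.
std::array<float, 3> rgbToYuv(uint8_t r, uint8_t g, uint8_t b)
{
    float y = 0.299f * r + 0.587f * g + 0.114f * b;  // luma
    float u = 0.492f * (float(b) - y);               // blue-difference chroma
    float v = 0.877f * (float(r) - y);               // red-difference chroma
    return {y, u, v};
}
```

"Biased" splits would then weight the Y axis more heavily than U and V, since the eye is far more sensitive to luminance error than to chroma error.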

Edit: The explanation about pixels and indices above is the common case; of course there are other bit depths, channel counts, etc.

This topic is closed to new replies.
