16 bit color modes

Started by
3 comments, last by Skizz 19 years, 6 months ago
Hi everyone,

I can't lay my hands on the formula that converts an RGB color (24 bits) into a 565 color (16 bits). This macro:

#define RGBto16(c) ((((c & 0xff) >> 3) << 11) | ((((c >> 8) & 0xff) >> 2) << 5) | (((c >> 16) & 0xff) >> 3))

doesn't seem to be the right one. To test it, I created a 24-bit bitmap with the color (63,56,64) as the first pixel, loaded it into a 16-bit DirectX surface, and then read back the corresponding color value. Weirdly, the resulting color wasn't 14792 but 14759... I don't understand how the conversion works. Does anyone know the TRUE conversion formula?

Thanks in advance,
Mark
It's probably because that macro converts RGB888 into BGR565, that is, it swaps the red and blue components around.

You want a macro like:

#define RGB888toRGB565(c) (((((c) >> 19) & 0x1f) << 11) | ((((c) >> 10) & 0x3f) << 5) | ((((c) >> 3) & 0x1f)))

It works by shifting bits around from the source colour value to the destination colour value.

The first part:
((((c) >> 19) & 0x1f) << 11
takes the 5 most significant bits of the red component (bits 19 to 23) and shifts them down to bits 0 to 4, masks off any higher-order bits (in case it was a signed value, or there was garbage in the top eight bits), then shifts the result up to the position required by the 565 format (bits 11 to 15).

The other two parts work in much the same way.

Skizz
1) That macro will swap blue & red around.

2) how are you reading the colour back, and where from?

3) it's possible that things such as dithering, filtering, some forms of anti-aliasing etc could create slightly different colour values when read back.

4) are you 100% certain the format the surface was created in was D3DFMT_R5G6B5? and are you certain your hardware actually supports that format? (some older cards only support 1555 and x555) - if not, then there may be extra translation happening behind your back.

5) if the output "looks" correct then, unless this is for something such as general-purpose processing on a GPU, is it really an issue?

Simon O'Connor | Technical Director (Newcastle) Lockwood Publishing | LinkedIn | Personal site

Hi again,

Thanks for your reply, but I advise both of you to look at the
way Windows defines RGB values (in wingdi.h) :

#define RGB(r,g,b) ((COLORREF)(((BYTE)(r)|((WORD)((BYTE)(g))<<8))|(((DWORD)(BYTE)(b))<<16)))

The components are stored this way: BBBBBBBBGGGGGGGGRRRRRRRR, so
yes, the blue and the red values are swapped! However, the DirectX
help file specifies that in 565 mode, the color is stored with
the red value preceding the blue one.

To S1CA, I have no idea what you mean when you talk about
the black color... My pixels are stored in a simple bitmap in
an 8x8x8 format. You're right, the output looks correct when I
do a fade-in, for instance, but I guess there's a problem with
my conversion since the BMP processor doesn't work the same way.

To Skizz, your macro is quite similar to mine, though it gives
slightly different results.

Regards,
Mark
Hmmm, fair point about the RGB thing. I think we need to see how you're using those macros. The original macro did have a problem: each use of the 'c' parameter should have been written '(c)' to avoid operator-precedence surprises when an expression is passed in.

Skizz

