pixel format question

1 comment, last by Armedies 20 years, 8 months ago
This morning I was reading a tutorial and noticed that it differed from another I had in the way they used a define for the pixel format. The first (Windows Game Programming Guru) defined the RGB value backwards (BGR) because of something to do with "little-endian", which made sense. The other tutorial didn't do this and defined the RGB value in the order you would expect. Could someone explain why these two defines would be different?

1) #define _RGB16BIT565(r, g, b) ((b & 31) + ((g & 31) << 5) + ((r & 31) << 11))
2) #define RGB(r, g, b) ((r << 11) | (g << 5) | b)
There is very little difference, except that the first one is wrong. As the macro name implies, it is intended to create 5/6/5 bit values. However, the & 31 for the green channel clips it to 5 bits. That should be & 63.
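A corrected version of that macro, sketched here rather than taken from either tutorial, masks each argument to its channel width before shifting:

/* 5/6/5 packing: red in bits 15-11, green in bits 10-5, blue in bits 4-0 */
#define RGB16BIT565(r, g, b) \
    ((unsigned short)((((r) & 31) << 11) | (((g) & 63) << 5) | ((b) & 31)))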

The actual functional difference between the two is that the first one keeps the values within range, while the second one doesn't. See this:

unsigned short a = _RGB16BIT565(0, 0, 32);
unsigned short b = RGB(0, 0, 32);

a == 0x0000;
b == 0x0020;


32 is out of range...
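Wrapping that comparison into a small standalone program (a sketch assuming a plain C compiler, with the green mask already corrected to & 63) makes the difference easy to verify:

#include <stdio.h>

#define _RGB16BIT565(r, g, b) (((b) & 31) + (((g) & 63) << 5) + (((r) & 31) << 11))
#define RGB(r, g, b) (((r) << 11) | ((g) << 5) | (b))

int main(void)
{
    unsigned short a = _RGB16BIT565(0, 0, 32); /* blue masked: 32 & 31 == 0 */
    unsigned short b = RGB(0, 0, 32);          /* unmasked: bit 5 spills into green */

    printf("a = 0x%04X\n", (unsigned)a); /* prints a = 0x0000 */
    printf("b = 0x%04X\n", (unsigned)b); /* prints b = 0x0020 */
    return 0;
}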

Kippesoep
Pardon, my mistake; that was supposed to be 63, not 31.

Thanks for the help.

