Armedies

pixel format question


Recommended Posts

This morning I was reading a tutorial and noticed that it differed from another I had in the way it defined the pixel format. The first (Windows Game Programming Guru) defined the RGB value backwards (BGR) because of something to do with "little-endian", which made sense. The other tutorial didn't do this and defined the RGB value in the order you would expect. Could someone explain why these two defines would be different?

1) #define _RGB16BIT565(r, g, b) ((b & 31) + ((g & 31) << 5) + ((r & 31) << 11))
2) #define RGB(r, g, b) ((r << 11) | (g << 5) | b)
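Here is a quick sketch I put together to compare them (standalone C; the value in the comment is just what both expressions work out to):

#include <stdio.h>

/* Defines copied from the two tutorials. */
#define _RGB16BIT565(r, g, b) ((b & 31) + ((g & 31) << 5) + ((r & 31) << 11))
#define RGB(r, g, b) ((r << 11) | (g << 5) | b)

int main(void)
{
    /* For in-range values both macros pack red into the top 5 bits
       and blue into the bottom 5 bits, so they agree. */
    unsigned short x = _RGB16BIT565(10, 20, 30);
    unsigned short y = RGB(10, 20, 30);
    printf("%04X %04X\n", x, y);  /* prints: 529E 529E */
    return 0;
}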

There is very little difference, except that the first one is wrong. As the macro name implies, it is intended to create 5/6/5 bit values. However, the & 31 for the green channel clips it to 5 bits. That should be & 63.
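With the mask widened to six bits, the corrected define would look like this (my edit, not from the book):

#define _RGB16BIT565(r, g, b) ((b & 31) + ((g & 63) << 5) + ((r & 31) << 11))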

The actual functional difference between the two is that the first one keeps the values within range, whereas the second one doesn't. See this:


unsigned short a = _RGB16BIT565(0, 0, 32);
unsigned short b = RGB(0, 0, 32);

/* a == 0x0000 */
/* b == 0x0020 */


32 is out of range for a 5-bit blue channel: the first macro masks it down to zero, while the second lets it spill into the lowest green bit.
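And to see the green-channel bug from above in isolation (my own example, same style as before):

unsigned short g1 = _RGB16BIT565(0, 63, 0);  /* 63 & 31 == 31, so g1 == 0x03E0 */
unsigned short g2 = RGB(0, 63, 0);           /* 63 << 5, so g2 == 0x07E0: full green */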
