Creating an RGB 16-bit value

Started by seconal. 8 comments, last by Zahlman 17 years, 11 months ago
I've come across a piece of code that I can't quite figure out. I know what the end result is (I think) and how it works. Perhaps I'm making false assumptions when computing this, I don't know. The macro takes 3 values (r, g, b) and creates the 16-bit short value. This is C.

#define RGB(r,g,b) (unsigned short)(r + (g << 5) + (b << 10))

Use: RGB(0, 255, 120);

I've tried calculating assuming the numeric literals were 8-bits each, then 16-bits, and finally 32-bits. All three times I've come up with some weird answers that don't reflect a (5,6,5) bit (r,g,b) half-word. This is assuming 32-bit numeric literals:

R          = 00000000 00000000 00000000 00000000
G          = 11111111 11111111 11111111 11111111 << 5  = 11111111 11111111 11111111 11100000
B          = 00000000 00000000 00000000 01111000 << 10 = 00000000 00000001 11100000 00000000
R+G+B      = 00000000 00000001 11011111 11100000
(short)RGB = 11011 111111 00000

Perhaps I'm doing the shifting wrong? Perhaps I'm over-simplifying the 32-bit to 16-bit conversion? Maybe it's not (5,6,5)? I don't know. Any help would be much appreciated.
Quote:Original post by seconal
I've come across a piece of code that I can't quite figure out. I know what the end result is (I think) and how it works. Perhaps I'm making false assumptions when computing this, I don't know. The macro takes 3 values (r, g, b) and creates the 16-bit short value.


You have to divide your 8-bit values to get them into the 5- and 6-bit ranges before shifting. Also, you are adding them together instead of OR-ing.

A working example:
color = (r / 8) | ((g / 4) * 32) | ((b / 8) * 2048);

Also, some rgb16 formats use 5:5:5 and/or swap the red and the blue channels.
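
For example, a small untested sketch of that approach with red in the low bits and blue in the high bits (the helper name is just for illustration):

#include <stdio.h>

/* Map 0-255 inputs into a 5:6:5 value, blue in the high bits. */
static unsigned short pack_565(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned short)((r / 8) | ((g / 4) * 32) | ((b / 8) * 2048));
}

int main(void)
{
    printf("%04x\n", pack_565(255, 255, 255));  /* ffff - full white      */
    printf("%04x\n", pack_565(255, 0, 0));      /* 001f - red in low bits */
    return 0;
}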

Viktor

This is your problem:

Quote:Use: RGB(0, 255, 120);


If you've only got 5 (or 6) bits then you can't go all the way up to 255. 5 bits only gives you 0-31, and 6 bits gives you 0-63. You either need to change the range you're putting in or rescale the numbers to the appropriate range.
-Mike
Quote:Original post by Anonymous Poster
Also you are adding them together instead of OR-ing.


In cases like this, adding will produce the same result as OR-ing, as long as each value stays within its field so no bits overlap.
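
For instance (untested, just to illustrate): with values that fit their fields, the two give the same bits; the difference only matters once a field overflows.

/* r = 1 and g = 2 both fit, so adding and OR-ing agree.           */
unsigned short by_add = 1 + (2 << 5);   /* 0x0041                  */
unsigned short by_or  = 1 | (2 << 5);   /* 0x0041 - same value     */
/* With g = 255 the shifted green spills into the next field, and  */
/* OR-ing would not have prevented that either.                    */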

Thank you for the solutions. I haven't tested them yet, but I will.

Also, just because it irks me to leave an error: in my first post I represented 255 as 32 ones. It should be 24 zeros followed by 8 ones.

Update:
Quote:Original post by Anon Mike
This is your problem:

Quote:Use: RGB(0, 255, 120);


If you've only got 5 (or 6) bits then you can't go all the way up to 255. 5 bits only gives you 0-31, and 6 bits gives you 0-63. You either need to change the range you're putting in or rescale the numbers to the appropriate range.


Alright. I just tried this (limiting green to 63) and I got the same result as with 255. I see where you get the ranges from. It's kind of off-putting that the author would overlook that and, as a result, potentially mislead the reader. I guess the macro just ignores anything greater than 31 or 63, or perhaps wraps around to 0. Not sure. If anyone knows, please share.

[Edited by - seconal on May 23, 2006 4:33:59 PM]
Quote:Original post by seconal
...

#define RGB(r,g,b) (unsigned short)(r + (g << 5) + (b << 10))

...
All three times I've come up with some weird answers that don't reflect a (5,6,5) bit (r,g,b) half-word.

...
Perhaps I'm doing the shifting wrong? Perhaps I'm over-simplifying the 32-bit to 16-bit conversion? Maybe it's not (5,6,5)?I don't know. Any help would be much appreciated.


That macro is designed to shift three 5-bit values into a 555 16-bit colour like this:

xBBBBBGGGGGRRRRR

If any of your input values exceed the 5-bit range (0 - 31), the shift will cause them to overflow into the neighbouring colours in the resulting 16-bit value.
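
For example (untested, using the macro from the original post): pass in a green of 63 and the extra bit lands in the blue field.

unsigned short c = RGB(0, 63, 0);  /* 63 << 5 = 0x07e0                    */
/* bits 5-9 (green) read back as 31, and bit 10 has spilled into blue     */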


To change the macro to make a 565 16-bit colour, you can shift the green by 5 and the blue by 11, making sure that red and blue inputs are restricted to 0-31 and green is restricted to 0-63:

#define MAKE_565BGR(r, g, b) (unsigned short)(r + (g << 5) + (b << 11))
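
A quick untested sanity check of that version:

unsigned short white = MAKE_565BGR(31, 63, 31);  /* 0xffff - every bit set */
unsigned short green = MAKE_565BGR(0, 63, 0);    /* 0x07e0                 */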


Probably more useful would be a macro that took three 8-bit input rgb values in the range 0-255 and converted them to 565 16-bit colour:

#define GREEN_MASK 0x7e0  // 0000011111100000
#define BLUE_MASK 0xf800  // 1111100000000000
#define CONVERT_888RGB_TO_565BGR(r, g, b) ((r >> 3) | ((g << 3) & GREEN_MASK) | ((b << 8) & BLUE_MASK))

Explanation:

red source:   10110011
green source: 10011110
blue source:  11001011

red >> 3:                    10110  <-- low 3 bits are shifted out so no need to mask
green << 3:            10011110000
green mask:            11111100000  <-- AND green mask to get rid of low 2 bits of 8-bit green
(g << 3) & mask:       10011100000
blue << 8:        1100101100000000
blue mask:        1111100000000000  <-- AND blue mask to get rid of low 3 bits of 8-bit blue
(b << 8) & mask:  1100100000000000

result:           1100110011110110  <-- result is 565bgr


I did this in notepad and it's untested, so apologies if there are any errors - it should at least help you understand the concept.

If you need to convert to 555 instead, adjust the mask and shift values accordingly.

I'm assuming all values are unsigned shorts - you might want to use an inline function rather than a macro so you can benefit from type-checking. It should work fine with shorts and ints though.
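
Something like this, for instance (untested, C99-style, just a sketch of the same 565bgr conversion):

static inline unsigned short convert_888rgb_to_565bgr(unsigned short r, unsigned short g, unsigned short b)
{
    /* Same shifts and masks as the macro, with type-checked parameters. */
    return (unsigned short)((r >> 3) | ((g << 3) & 0x07e0) | ((b << 8) & 0xf800));
}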

(Edit: Oops... had green and blue masks switched round)

[Edited by - MisterMoot on May 24, 2006 12:56:13 AM]
Ok I see how that works. However, forgive me this might be a dumb question, but if the user were to enter 1 as the value for red, wouldn't red>>3 shave off that one? How would it make it to the final 5,6,5 result?
Quote:Original post by seconal
Ok I see how that works. However, forgive me this might be a dumb question, but if the user were to enter 1 as the value for red, wouldn't red>>3 shave off that one? How would it make it to the final 5,6,5 result?


Yes, the 1 would be lost, but converting from 8 bits to 5 bits is always going to result in a loss of colour resolution, as you can no longer represent all the possible values. You're reducing a range of 256 possible values down to a range of 32. 8-bit values of 0-7 will all be converted to 0.

Another version of the macro is this, which might make it clearer what's happening:
#define CONVERT_888RGB_TO_565BGR(r, g, b) ((r >> 3) | ((g >> 2) << 5) | ((b >> 3) << 11))


For red and blue, you are passed a value in the range 0-255 which you need to map to the range 0-31. You can do this by dividing by 8 (shifting right by 3 is equivalent to integer division by 8). The remaining 5 bits then need to be shifted into their correct position (for 565bgr, red is already in the correct position in the lowest 5 bits; blue needs to be shifted left 11 places).

Green is similar, but because you have 6 bits available (0-63) you need only divide the source 8-bit value by 4 (or shift right by 2) and then shift the result into the correct position.
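
To put some numbers on it (untested, just for illustration):

/* 888 input (200, 100, 50):                          */
/*   red:   200 >> 3 = 25          -> bits 0-4        */
/*   green: 100 >> 2 = 25, << 5    -> bits 5-10       */
/*   blue:   50 >> 3 = 6,  << 11   -> bits 11-15      */
unsigned short c = CONVERT_888RGB_TO_565BGR(200, 100, 50);  /* 0x3339 */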

Hope that makes sense.
Ok. Makes perfect sense now. I just wasn't completely getting that you suffer a loss even if the value, before converting, would already fit within the 2^5, 2^6, 2^5 ranges.
There is seriously no reason to use defines for this in C++. (If you're stuck with C, never mind me.) We can do optimization stuff by just taking care of certain things at compile time:

// This can probably be made more generic still, but probably no point.
// We base stuff off of this:
template <int r_bits, int g_bits, int b_bits>
inline unsigned short RGB(unsigned char r, unsigned char g, unsigned char b) {
  return ((((r >> (8 - r_bits)) << g_bits) |
            (g >> (8 - g_bits))) << b_bits) |
            (b >> (8 - b_bits));
}

// We can make wrappers like this to handle other colour orders:
template <int r_bits, int g_bits, int b_bits>
inline unsigned short BGR(unsigned char r, unsigned char g, unsigned char b) {
  return RGB<b_bits, g_bits, r_bits>(b, g, r);
}

// And then more wrappers for syntactic sugar:
inline unsigned short RGB565(unsigned char r, unsigned char g, unsigned char b) {
  return RGB<5, 6, 5>(r, g, b);
}


Each type of invocation will get instantiated and optimized separately, so the compiler will be able to constant-fold and remove useless shifts when we define RGB888, for example, and inline transitively to avoid a function call for BGR555. I like interlacing the shifts and bitwise-ors this way because I feel the field sizes are more explicit as a result. Shouldn't make a big difference to performance either way, compared to the other stuff I mentioned anyway...

