Servant of the Lord

RGBA to/from Hexadecimal: Odd conundrum

Recommended Posts

I'm writing some code to allow conversion between a Uint32 and a RGBA-type color struct, but I ran into an odd problem.

I want my Uint32 value to be in the format 0xRRGGBBAA. That is, the most significant byte is red, then green, then blue, and lastly an optional alpha.
I also want to ensure that the alpha is optional, and so accept values in the format 0xRRGGBB.

[size="1"](Note: I'm perfectly aware that 'hexadecimal' is just a way to display a value. The same problem would exist if I was displaying it in decimal or binary.)[/size]

Here's my current code:
[code]void cColor::SetHexadecimal(Uint32 hexValue)
{
    hexValue = LocalToBigEndian(hexValue);

    if(hexValue > 0x00FFFFFF)
    {
        //If we have alpha, it's in the format 0xRRGGBBAA
        this->r = (hexValue >> 24) & 0xff;
        this->g = (hexValue >> 16) & 0xff;
        this->b = (hexValue >> 8) & 0xff;
        this->alpha = hexValue & 0xff;
    }
    else
    {
        //If we *don't* have alpha, it's in the format 0x00RRGGBB
        this->r = (hexValue >> 16) & 0xff;
        this->g = (hexValue >> 8) & 0xff;
        this->b = hexValue & 0xff;
        this->alpha = 255;
    }
}[/code]


I realize, however, that if I have red set to 0, it will mistakenly think I'm passing an RGB instead of an RGBA value. It will mistake something like [b]0x00FFFFFF[/b] for [b]0xFFFFFF[/b].
This is probably why people use ARGB, but I would prefer RGBA if I can find a safe way to do it.
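The ambiguity can be shown directly in code (a minimal sketch; the variable names are mine): leading zeroes in a literal don't survive into the stored value.

```cpp
#include <cstdint>

// Both literals produce the identical Uint32 value, so SetHexadecimal
// cannot tell an RGBA value whose red byte is 0 from an RGB value
// that simply has no alpha byte.
const std::uint32_t rgbaRedZero = 0x00FFFFFF; // meant as R=00, G=FF, B=FF, A=FF
const std::uint32_t rgbWhite    = 0xFFFFFF;   // meant as R=FF, G=FF, B=FF
static_assert(rgbaRedZero == rgbWhite, "the literals are indistinguishable");
```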

Any ideas?

Where do these values come from?

Is it that, in code, you want to be able to write [font="Courier New"]0xRRGGBB[/font] instead of [font="Courier New"]0xRRGGBB00[/font], or are you parsing hex-strings at some point?

If you're parsing strings, have the string-parser append the extra 00 for you, or have it record the fact that the user only entered 3 bytes worth of digits.

If you're hard-coding values, use a macro (like [font="Courier New"]D3DCOLOR_ARGB[/font] and friends) or an inline function with an optional alpha parameter.
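For illustration, such an inline function might look like this (a sketch; [font="Courier New"]MakeColorRGBA[/font] is a hypothetical name, packing to the thread's 0xRRGGBBAA layout):

```cpp
#include <cstdint>

// Hypothetical packer for the 0xRRGGBBAA layout; the default
// argument makes the alpha channel optional at the call site.
inline std::uint32_t MakeColorRGBA(std::uint8_t r, std::uint8_t g,
                                   std::uint8_t b, std::uint8_t a = 0xFF)
{
    return (std::uint32_t(r) << 24) | (std::uint32_t(g) << 16) |
           (std::uint32_t(b) << 8)  |  std::uint32_t(a);
}
```

With this, [font="Courier New"]MakeColorRGBA(0x12, 0x34, 0x56)[/font] yields 0x123456FF, and the three-argument call site never has to encode the "missing alpha" in the literal itself.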

Hate to break it to you, but the best option is to just use ARGB. You don't really have any other choice here anyway.
There's usually a reason everyone else does something a particular way.

[quote name='Hodgman' timestamp='1306981819' post='4818498']
Where do these values come from?

Is it, that in code, you want to be able to write [font="Courier New"]0xRRGGBB[/font] instead of [font="Courier New"]0xRRGGBB00[/font], or are you parsing hex-strings at some point?

If you're parsing strings, have the string-parser append the extra 00 for you, or have it record the fact that the user only entered 3 bytes worth of digits.

If you're hard-coding values, use a macro (like [font="Courier New"]D3DCOLOR_ARGB[/font] and friends) or an inline function with an optional alpha parameter.
[/quote]

No, it's not a string. Having it as a string would be easy. =)

[quote name='iMalc' timestamp='1306997563' post='4818552']
Hate to break it to you, but the best option is to just use ARGB. You don't really have any other choice here anyway.
There's usually a reason everyone else does something a particular way.
[/quote]

That's what I figured; I was just hoping there was some trick I was overlooking.

This causes a new problem though: I still want the alpha to be optional. If I do [b]0xRRGGBB[/b] wanting the alpha to by default be 255, it'll mistakenly read it as [b]0x00RRGGBB[/b], making alpha instead be 0.

It seems the only real option is having two separate functions (or macros): [b]SetRGB()[/b] and [b]SetRGBA()[/b], so that's what I'll do unless anyone else has an idea.

That's certainly a possibility, but kinda defeats the purpose. If I'm using multiple parameters, I might as well do: [b]SetColor(red, green, blue, (optional) alpha)[/b].

Here's how I'm currently doing it:
[code]//Must be the format: RGB. Alpha is assumed to be 255.
void cColor::SetRGB(Uint32 rgb)
{
    //If we accidentally included alpha... remove it.
    if(rgb > 0x00FFFFFF)
    {
        rgb = (rgb >> 8);
    }

    //Shift it over and add the alpha.
    rgb = ((rgb << 8) + 0xFF);

    this->SetRGBA(rgb);
}

//Must be the format: RGBA.
void cColor::SetRGBA(Uint32 rgba)
{
    this->r = (rgba >> 24) & 0xff;
    this->g = (rgba >> 16) & 0xff;
    this->b = (rgba >> 8) & 0xff;
    this->alpha = rgba & 0xff;
}[/code]

Fixed:
[source lang="cpp"]// Must be the format: RGB. Alpha is forced to 255.
void cColor::SetRGB(Uint32 rgb)
{
    this->SetRGBA(rgb | 0xff000000);
}[/source]


Your code seems inconsistent about which bits are expected to hold alpha and which are expected to hold blue...

[quote name='ApochPiQ' timestamp='1307046950' post='4818844']
Fixed:
[source lang="cpp"]// Must be the format: RGB. Alpha is forced to 255.
void cColor::SetRGB(Uint32 rgb)
{
    this->SetRGBA(rgb | 0xff000000);
}[/source]

Your code seems inconsistent about which bits are expected to hold alpha and which are expected to hold blue...
[/quote]
The low byte is supposed to hold alpha. ([b]RGB[color="#ff0000"]A[/color][/b]). If you do '[b]rgb | 0xff000000[/b]' wouldn't that make alpha the high byte?

If you re-look at my code... I take [b]0x00RRGGBB[/b] and shift it over to be [b]0xRRGGBBAA[/b]:
[code]//Must be the format: RGB. Alpha is assumed to be 255.
void cColor::SetRGB(Uint32 rgb)
{
    //Shift it over and add the alpha.
    rgb = ((rgb << 8) + 0xFF);

    this->SetRGBA(rgb);
}[/code]


But I also have a safety check involved, to make sure I wasn't accidentally already passing in RGBA.
[code]//Must be the format: RGB. Alpha is assumed to be 255.
void cColor::SetRGB(Uint32 rgb)
{
    //----------------------------------------------------------
    //This is the safety check.
    if(rgb > 0x00FFFFFF)
    {
        rgb = (rgb >> 8);
    }
    //----------------------------------------------------------

    //Shift it over and add the alpha.
    rgb = ((rgb << 8) + 0xFF);

    this->SetRGBA(rgb);
}[/code]

It's quite possible I'm getting this wrong; I'm new to bitshifting. Did I make a mistake somewhere?

[quote name='Servant of the Lord' timestamp='1307074900' post='4818947']
The low byte is supposed to hold alpha. ([b]RGB[color="#ff0000"]A[/color][/b]). If you do '[b]rgb | 0xff000000[/b]' wouldn't that make alpha the high byte?

It's quite possible I'm getting this wrong; I'm new to bitshifting. Did I make a mistake somewhere?
[/quote]

The solution ApochPiQ provided is correct for the ARGB order, not RGBA.

You're not being consistent with your bit ordering. Your SetRGB function expects the input in XRGB order, and your 'safety' check is not really a check; you always default the alpha value. What happens when you want to change the RGB values with that function while keeping your alpha value? Unexpected behaviour is a bad thing.

In rgb = ((rgb << 8) + 0xFF), change 0xFF to this->alpha.
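As a free-function sketch (the name ReplaceRGB is mine, so it stands alone here), the alpha-preserving variant would be:

```cpp
#include <cstdint>

// Replaces the R, G and B bytes of an existing 0xRRGGBBAA value
// while keeping its current alpha (low) byte intact.
inline std::uint32_t ReplaceRGB(std::uint32_t current, std::uint32_t rgb)
{
    return (rgb << 8) | (current & 0xFFu); // 0x00RRGGBB -> 0xRRGGBBaa
}
```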

[quote name='Servant of the Lord' timestamp='1306981257' post='4818494']
I want my Uint32 value to be of the format 0xRRGGBBAA. That is, the most signifcant byte is red, then green, then blue, and last-wise an optional alpha.
I also want to ensure that the alpha is optional, and so accept values in the format of 0xRRGGBB.
[/quote]

This isn't possible. Just use ARGB as suggested. Your "safety check" breaks if red is zero and the incoming format is already RGBA, like you said. The test will result in the shift right not taking place, and then on the subsequent line you're going to shift the red component away, leaving something like 0xGGBBAAFF rather than 0xRRGGBBFF. There isn't a way around this.

I agree with what everyone is telling you, but here is another idea: perhaps you could use a format I would call (1-A)RGB, where the top 8 bits express transparency instead of opacity. A value of 0xRRGGBB will be interpreted as a solid color, just as you want.
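Decoding that scheme is a one-liner (a sketch; the function name is hypothetical):

```cpp
#include <cstdint>

// In the "(1-A)RGB" scheme the top byte stores transparency, so a
// plain 0x00RRGGBB literal (top byte zero) decodes as fully opaque.
inline std::uint8_t AlphaFromInvertedARGB(std::uint32_t value)
{
    return std::uint8_t(0xFFu - ((value >> 24) & 0xFFu));
}
```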

[quote name='Mussi' timestamp='1307099053' post='4819008']
What happens when you want to change the RGB values with that function while keeping your alpha value? Unexpected behaviour is a bad thing.

In rgb = ((rgb << 8) + 0xFF), change 0xFF to this->alpha.[/quote]
If I want to alter the RGB without altering the alpha, I'll just use a different function.

[quote name='Hodgman' timestamp='1307101256' post='4819018']
This seems to be a solution for a non-problem anyway...

what's wrong with [font="Courier New"]u32 color( u8 r, u8 g, u8 b, u8 a=0xff ) {...}[/font]?
[/quote]
I have that already. The struct holds its colors as four separate Uint8s, for red, green, blue and alpha, which I can already access and alter.
The point of this function is to ease storing of the color struct when saving/loading from binary files.
In my code itself, I normally alter the R,G,B and Alpha channels individually, or else go color = cColor(blah, blah, blah, optional blah);

When saving/reading bytes to a file, however, it's nice to have a function that gives you the channels already packed into a Uint32.
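For that use case, a pack/unpack pair is enough (a sketch with a stand-in struct, since the real cColor isn't shown in full):

```cpp
#include <cstdint>

struct Color { std::uint8_t r, g, b, alpha; }; // stand-in for cColor

// Packs the channels into 0xRRGGBBAA for writing to a binary file.
inline std::uint32_t PackRGBA(const Color& c)
{
    return (std::uint32_t(c.r) << 24) | (std::uint32_t(c.g) << 16) |
           (std::uint32_t(c.b) << 8)  |  std::uint32_t(c.alpha);
}

// Inverse: rebuilds the channels from a stored 0xRRGGBBAA value.
inline Color UnpackRGBA(std::uint32_t v)
{
    return Color{ std::uint8_t(v >> 24), std::uint8_t(v >> 16),
                  std::uint8_t(v >> 8),  std::uint8_t(v) };
}
```

Because the packing is done with shifts rather than by reinterpreting memory, the file format stays the same regardless of the machine's endianness.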

[quote name='Michael Tanczos' timestamp='1307102096' post='4819022']
This isn't possible. Just use ARGB as suggested. Your code "safety check" breaks if red is zero and the incoming format is RGBA already like you said. The test will result in the shift right not taking place and then on the subsequent line you're going to shift the red component away leaving something like 0XGGBBAAFF rather than 0XRRGGBBFF. There isn't a way around this.
[/quote]
What about this - Is this any better?

[code]//Must be the format: RGB. Alpha is assumed to be 255.
void cColor::SetRGB(Uint32 rgb)
{
    //Make sure it's the right format. Technically not needed, since
    //the top byte is shifted away anyway, but it documents the intent.
    rgb = (rgb & 0x00FFFFFF);

    //Shift it over and add the alpha.
    rgb = ((rgb << 8) + 0xFF);

    this->SetRGBA(rgb);
}[/code]

It doesn't make any 'safety check'; it just takes the lowest three bytes and ignores the top byte. Is this better design?
The only problem I see with this is if I mistakenly use SetRGB() when I meant SetRGBA() - but the code will work fine regardless. That would be user error; however, I feel I am [i]more[/i] likely to mistakenly pass an RGBA value when the function wants an ARGB value. Both run into the same problem: what if the user passes in the wrong format? And it can't be helped.

If the SetRGB() and SetRGBA() functions are actually worse than a SetARGB() function, I'll switch to ARGB... but I want to see [i]how[/i] it's worse, or if it's just a matter of personal preference. :)

[quote]
The point of this function is to ease storing of the color struct when saving/loading from binary files.[/quote]
The points in this thread only apply to [i]hex literals in the source[/i]. If you are loading the values from a file, you can always load/save the full 4-byte colour directly.

[quote name='rip-off' timestamp='1307116755' post='4819106']
[quote]
The point of this function is to ease storing of the color struct when saving/loading from binary files.[/quote]
The points in this thread only apply to [i]hex literals in the source[/i].[/quote]
Yes, but if I expose the function publicly, it needs to be usable not just in the circumstances I'm expecting it to be used in, but in all reasonable circumstances. It is reasonable to expect that it may be used with literals. Originally, my problem was discerning (when using a literal in the source code) whether the user was trying to pass an RGB or an RGBA. There is no real solution for that.

So instead, I decided to let the user tell me whether he is passing an RGB or an RGBA by which function he calls (SetRGB() or SetRGBA()). However, people are still saying to use ARGB, despite it [i]appearing to me[/i] that the two-function method works fine. So I'm wanting to make sure that I'm not making a mistake somewhere.
([size="1"]Because my personal preference is for RGBA, not ARGB, but I will use ARGB if I have to... I'd just like to know [i]why[/i] that is the significantly better choice.[/size])

If I'm not making sense, let me know. ;) Sometimes my mind stubbornly fails to recognize a bad idea.

That method works if and only if you never pass an RGBA value to it, since the R value would get masked off (or shifted off). I'm kind of losing the point of the method, though, since if there is no check you can avoid the method call and just do (rgb << 8) | 0xFF. This is getting into simple macro territory.

You know what else would work? Use ARGB... then you don't need two methods, SetRGB and SetRGBA, and special cases.

[quote]I have that already. The struct holds its colors as four separate Uint8s, for red, green, blue and alpha, which I can already access and alter.
The point of this function is to ease storing of the color struct when saving/loading from binary files.
In my code itself, I normally alter the R,G,B and Alpha channels individually, or else go color = cColor(blah, blah, blah, optional blah);

When saving/reading bytes to a file, however, it's nice to have a function that gives you the channels already packed into a Uint32.[/quote]

A union would be great for that :).
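A sketch of that union, with two caveats worth flagging: reading the member that wasn't last written is technically undefined behaviour in C++ (though many compilers support it), and the byte order of the packed view depends on the machine's endianness, so shift-based packing is safer for file I/O.

```cpp
#include <cstdint>

// The same four bytes viewed either as channels or as one Uint32.
// Note: which channel lands in which byte of 'packed' is endian-
// dependent, and type-punning through a union is UB in C++
// (it is well-defined in C).
union ColorUnion
{
    struct { std::uint8_t r, g, b, alpha; } channels;
    std::uint32_t packed;
};
```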
