Servant of the Lord

RGBA to/from Hexadecimal: Odd conundrum



I'm writing some code to allow conversion between a Uint32 and an RGBA-type color struct, but I ran into an odd problem.

I want my Uint32 value to be of the format 0xRRGGBBAA. That is, the most significant byte is red, then green, then blue, and lastly an optional alpha.
I also want the alpha to be optional, so I accept values in the format 0xRRGGBB as well.

[size="1"](Note: I'm perfectly aware that 'hexadecimal' is just a way to display a value. The same problem would exist if I was displaying it in decimal or binary)
[size="1"]

Here's my current code:
void cColor::SetHexadecimal(Uint32 hexValue)
{
    hexValue = LocalToBigEndian(hexValue);

    if(hexValue > 0x00FFFFFF)
    {
        //If we have alpha, it's in the format 0xRRGGBBAA
        this->r = (hexValue >> 24) & 0xff;
        this->g = (hexValue >> 16) & 0xff;
        this->b = (hexValue >> 8) & 0xff;
        this->alpha = hexValue & 0xff;
    }
    else
    {
        //If we *don't* have alpha, it's in the format 0x00RRGGBB
        this->r = (hexValue >> 16) & 0xff;
        this->g = (hexValue >> 8) & 0xff;
        this->b = hexValue & 0xff;
        this->alpha = 255;
    }
}



I realize, however, that if I have red set to 0, it will mistakenly think I'm passing an RGB value instead of an RGBA value. It will mistake something like 0x00FFFFFF for 0xFFFFFF.
This is probably why people use ARGB, but I would prefer RGBA if I can find a safe way to do it.

Any ideas?

Where do these values come from?

Is it that, in code, you want to be able to write 0xRRGGBB instead of 0xRRGGBB00, or are you parsing hex strings at some point?

If you're parsing strings, have the string parser append the extra 00 for you, or have it record the fact that the user only entered 3 bytes' worth of digits.

If you're hard-coding values, use a macro (like D3DCOLOR_ARGB and friends) or an inline function with an optional alpha parameter.
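
For the inline-function route, a rough, untested sketch (PackRGBA is just an illustrative name, not anything standard) might look like this:

[source lang="cpp"]// Packs the channels into the 0xRRGGBBAA layout described above;
// alpha defaults to 255 when the caller leaves it out.
inline Uint32 PackRGBA(Uint32 r, Uint32 g, Uint32 b, Uint32 a = 0xFF)
{
    return ((r & 0xFF) << 24) | ((g & 0xFF) << 16) | ((b & 0xFF) << 8) | (a & 0xFF);
}

// Example: PackRGBA(0x12, 0x34, 0x56) gives 0x123456FF,
//          PackRGBA(0x12, 0x34, 0x56, 0x80) gives 0x12345680.[/source]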

Hate to break it to you, but the best option is to just use ARGB. You don't really have any other choice here anyway.
There's usually a reason everyone else does something a particular way.


[quote]
Where do these values come from?

Is it that, in code, you want to be able to write 0xRRGGBB instead of 0xRRGGBB00, or are you parsing hex strings at some point?

If you're parsing strings, have the string parser append the extra 00 for you, or have it record the fact that the user only entered 3 bytes' worth of digits.

If you're hard-coding values, use a macro (like D3DCOLOR_ARGB and friends) or an inline function with an optional alpha parameter.
[/quote]


No, it's not a string. Having it as a string would be easy. =)

[font="arial, verdana, tahoma, sans-serif"]

Hate to break it to you, but the best option is to just use ARGB. You don't really have any other choice here anyway.
There's usually a reason everyone else does something a particular way.
[/font]
[font="arial, verdana, tahoma, sans-serif"][/font] [font="arial, verdana, tahoma, sans-serif"] [/font][font="arial, verdana, tahoma, sans-serif"]That's what I figured, I was just hoping there was some trick I was overlooking.[/font]
[font="arial, verdana, tahoma, sans-serif"] [/font][font="arial, verdana, tahoma, sans-serif"]This causes a new problem though: I still want the alpha to be optional. If I do 0xRRGGBB wanting the alpha to by-default be 255, it'll mistakenly read it as 0x00RRGGBB making alpha instead be 0.[/font]
[font="arial, verdana, tahoma, sans-serif"] [/font]It seems the only real option is having two separate functions (or macroes): SetRGB(), and SetRGBA(), so that's what I'll do unless anyone else has an idea.[font="arial, verdana, tahoma, sans-serif"] [/font]

What about Color::SetHexadecimal(Uint32 rgb, Uint32 a = 0xff), or does it really need to be in one literal?
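
For instance, a rough sketch of that signature, reusing the r/g/b/alpha members from your earlier code (the default value for 'a' would go in the declaration):

[source lang="cpp"]// RGB arrives in one 0x00RRGGBB literal; alpha is a separate, optional
// parameter, so a leading zero byte can never be misread as "no alpha".
void cColor::SetHexadecimal(Uint32 rgb, Uint32 a /* = 0xff in the declaration */)
{
    this->r     = (rgb >> 16) & 0xff;
    this->g     = (rgb >> 8)  & 0xff;
    this->b     = rgb         & 0xff;
    this->alpha = a           & 0xff;
}[/source]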

That's certainly a possibility, but kinda defeats the purpose. If I'm using multiple parameters, I might as well do: SetColor(red, green, blue, (optional) alpha).

Here's how I'm currently doing it:
//Must be the format: RGB. Alpha is assumed to be 255.
void cColor::SetRGB(Uint32 rgb)
{
    //If we accidentally included alpha... remove it.
    if(rgb > 0x00FFFFFF)
    {
        rgb = (rgb >> 8);
    }

    //Shift it over and add the alpha.
    rgb = ((rgb << 8) + 0xFF);

    this->SetRGBA(rgb);
}

//Must be the format: RGBA.
void cColor::SetRGBA(Uint32 rgba)
{
    this->r = (rgba >> 24) & 0xff;
    this->g = (rgba >> 16) & 0xff;
    this->b = (rgba >> 8) & 0xff;
    this->alpha = rgba & 0xff;
}

Fixed:
[source lang="cpp"]// Must be the format: RGB. Alpha is forced to 255.
void cColor::SetRGB(Uint32 rgb)
{
this->SetRGBA(rgb | 0xff000000);
}[/source]


Your code seems inconsistent about which bits are expected to hold alpha and which are expected to hold blue...


[quote]
Fixed:
[source lang="cpp"]// Must be the format: RGB. Alpha is forced to 255.
void cColor::SetRGB(Uint32 rgb)
{
    this->SetRGBA(rgb | 0xff000000);
}[/source]

Your code seems inconsistent about which bits are expected to hold alpha and which are expected to hold blue...
[/quote]

The low byte is supposed to hold alpha (RGBA). If you do 'rgb | 0xff000000', wouldn't that make alpha the high byte?

If you look at my code again... I take 0x00RRGGBB and shift it over to be 0xRRGGBBAA:
//Must be the format: RGB. Alpha is assumed to be 255.
void cColor::SetRGB(Uint32 rgb)
{
    //Shift it over and add the alpha.
    rgb = ((rgb << 8) + 0xFF);

    this->SetRGBA(rgb);
}



But I also have a safety check involved, to make sure I wasn't accidentally already passing in RGBA.
//Must be the format: RGB. Alpha is assumed to be 255.
void cColor::SetRGB(Uint32 rgb)
{
    //----------------------------------------------------------
    //This is the safety check.
    if(rgb > 0x00FFFFFF)
    {
        rgb = (rgb >> 8);
    }
    //----------------------------------------------------------

    //Shift it over and add the alpha.
    rgb = ((rgb << 8) + 0xFF);

    this->SetRGBA(rgb);
}


It's quite possible I'm getting this wrong; I'm new to bitshifting. Did I make a mistake somewhere?

I suggest you step through all of your code in a debugger, sending in 'valid' RGB values, and seeing what your code does to the value at each step.



The solution ApochPiQ provided is correct for the ARGB order, not RGBA.

You're not being consistent with your bit ordering. Your SetRGB function expects the input in the 0x00RRGGBB (XRGB) order, and your 'safety' check is not really a check: you always overwrite the alpha value with a default. What happens when you want to change the RGB values with that function while keeping your alpha value? Unexpected behaviour is a bad thing.

In rgb = ((rgb << 8) + 0xFF), change 0xFF to this->alpha.
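
A rough sketch of that change, keeping whatever alpha the color already holds instead of forcing 255:

[source lang="cpp"]// RGB arrives as 0x00RRGGBB; shift it up into 0xRRGGBB00 and re-attach the
// alpha this color currently holds, rather than hard-coding 0xFF.
void cColor::SetRGB(Uint32 rgb)
{
    this->SetRGBA((rgb << 8) | this->alpha);
}[/source]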
