VanillaSnake21

Correct RGBA macro


Hey all, I'm following LaMothe's book on 2D programming (I know it's old), and I'm having a few problems building the RGB values.
This is the code I have, but for some reason it's not working.


#define BUILD_ARGB32(a,r,g,b) ( b + g<<8 + r<<16 + a<<24 )
#define BUILD_XRGB32(r,g,b) ( b + g<<8 + r<<16 + 255<<24 )

A few other questions:

Since I'm on Windows 7 64-bit, does the endianness matter (i.e. does it matter if I have alpha first and blue last, or vice versa)?

Also, I looked at the Windows RGB() macro, and it casts every value (something like b | (BYTE)(g << 8) | (WORD)(BYTE)(r << 16), etc.). Is this necessary? And what's the point of doing a double cast (first to BYTE and then to WORD, instead of straight to WORD)?

Another thing I'm wondering: does it matter what sort of gfx card I have? Do different manufacturers use different endianness on the card, or are they all the same?

And the last question: on a modern system like Windows 7 and a modern GPU (nvidia 260), is it still OK to access the video memory in a raw manner like LaMothe does in his book? If you're not familiar, I'm basically using DirectDraw4, locking the surface and getting a memory pointer to which I manually write pixel values. Or is the card going to be quirky about it?
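For reference, the raw access I mean looks roughly like this (just a sketch in the spirit of the book; lpddsback is whatever the back-buffer surface is called, and error handling is omitted):

// Lock the back buffer, write one 32-bit pixel, unlock.
DDSURFACEDESC2 ddsd;
memset(&ddsd, 0, sizeof(ddsd));
ddsd.dwSize = sizeof(ddsd);

lpddsback->Lock(NULL, &ddsd, DDLOCK_SURFACEMEMORYPTR | DDLOCK_WAIT, NULL);

DWORD* video_buffer = (DWORD*)ddsd.lpSurface;
int pitch32 = ddsd.lPitch >> 2;                      // pitch in 32-bit pixels, not bytes
video_buffer[y * pitch32 + x] = BUILD_XRGB32(r, g, b);

lpddsback->Unlock(NULL);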

Thanks!

b + g << 8 is parsed as (b + g) << 8,
so you need b + (g << 8).
And since it's a macro, wrap each variable in brackets:
(b) + ((g) << 8)
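For example, here's the difference with small numbers and a fully parenthesized version of the macro (just a sketch of the fix being described):

// With b = 1, g = 2:
//   b + g << 8    parses as (b + g) << 8  == 768
//   b + (g << 8)  == 513, which is what was intended
#define BUILD_ARGB32(a,r,g,b) ( (b) + ((g) << 8) + ((r) << 16) + ((a) << 24) )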

Hahaha, operator precedence always sneaks up on me. But the problem's not over yet: my XRGB seems to work OK, but now ARGB fails! The alpha has no effect at all!

New code:

#define BUILD_ARGB32(a,r,g,b) ( (b) + (g<<8) + (r<<16) + (a<<24) )
#define BUILD_XRGB32(r,g,b) ( (b) + (g<<8) + (r<<16) + (255<<24) )

First of all, shouldn't you be using | instead of +? I know that with valid parameter values the result will be the same, but it'd make the real intention more obvious (and compilers may optimize better when they understand what you're actually trying to do).

And you forgot to wrap the parameters in parentheses.


[quote]
First of all, shouldn't you be using | instead of +? I know that with valid parameter values the result will be the same, but it'd make the real intention more obvious (and compilers may optimize better when they understand what you're actually trying to do).

And you forgot to wrap the parameters in parentheses.
[/quote]

I thought the '+' seemed more logical since I'm pretty much adding the components into one, but I changed it to see if it does anything, and also wrapped each component like you said (although I'm not sure why that's needed). Still the same problem: color works but alpha doesn't! Setting it to zero gives me a fully lit pixel.

New code:


#define BUILD_ARGB32(a,r,g,b) ( (b) | ( (g)<<8 ) | ( (r) <<16) | ( (a) <<24 ) )
#define BUILD_XRGB32(r,g,b) BUILD_ARGB32(255, r, g, b)
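A quick sanity check does show the macro packing the bits I'd expect, so presumably the problem is somewhere else (printf needs <cstdio>):

unsigned int c = BUILD_ARGB32(0, 255, 0, 0);   // alpha = 0, pure red
printf("%08X\n", c);                           // prints 00FF0000, so the alpha byte is on top as expected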

Is it possible that my surface is created as XRGB, so it disregards the alpha altogether? I used the GetPixelFormat function and it says the mask for alpha is 0. Is that right? Isn't alpha supposed to be the highest-order component (i.e. last)?
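This is roughly how I'm checking it, in case I'm misreading the result (a sketch; lpddsprimary is my primary/back surface, error handling omitted):

DDPIXELFORMAT ddpf;
memset(&ddpf, 0, sizeof(ddpf));
ddpf.dwSize = sizeof(ddpf);
lpddsprimary->GetPixelFormat(&ddpf);

// On a 32-bit XRGB surface I get back something like:
//   dwRGBBitCount     = 32
//   dwRBitMask        = 0x00FF0000
//   dwGBitMask        = 0x0000FF00
//   dwBBitMask        = 0x000000FF
//   dwRGBAlphaBitMask = 0x00000000   <- no alpha bits stored in the surface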

Why does anybody still use a macro for this? You can use a function instead. Modern compilers are very good at inlining little functions; just make sure the compiler has a chance to inline the function (e.g., put it in a header file using the keyword "inline").

I can only see advantages to the function solution. In particular, you can do things like assert that the arguments are within the limits they should be, for ease of debugging.

I'm with alvaro. Here's a better solution:


// Of course, you could make it so a isn't a default parameter and rearrange the parameters into a, r, g, b if you wanted,
// and make a second function buildRGB32(). Putting a at the end for ARGB is a little backwards, I'll admit, and having two
// functions for RGB and ARGB might be better.
#include <cassert>

inline unsigned int buildARGB32(unsigned int r, unsigned int g, unsigned int b, unsigned int a = 255)
{
    assert(r < 256 && g < 256 && b < 256 && a < 256);
    return ( b | (g << 8) | (r << 16) | (a << 24) );
}


The good thing about this is that r, g, b, and a are all going to be the correct data type. With that macro, what happens if a, r, g, or b are chars or shorts? Plus, I bet it'll compile into the exact same code as that macro (if not something better).
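For example (assuming char is signed, as it is by default with MSVC), a negative char silently corrupts the macro's result via sign extension, while the function's unsigned int parameters let the assert catch it:

char blue = (char)0xFF;                          // bit pattern 11111111, value -1

unsigned int m = BUILD_ARGB32(255, 0, 0, blue);  // macro: blue sign-extends to 0xFFFFFFFF,
                                                 // so every field of m ends up corrupted
unsigned int f = buildARGB32(0, 0, blue);        // function: the same bad value converts to
                                                 // 0xFFFFFFFF and trips the assert instead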

[quote]
Since I'm on Windows 7 64-bit, does the endianness matter (i.e. does it matter if I have alpha first and blue last, or vice versa)?
[/quote]
Yes, the ordering of RGBA vs. ABGR and such matters as well.
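On x86 (little-endian) the packed 32-bit value ends up byte-reversed in memory, which is why the component order convention matters; a small illustration:

unsigned int c = 0x80FF4020;            // A = 0x80, R = 0xFF, G = 0x40, B = 0x20
unsigned char* p = (unsigned char*)&c;
// In memory: p[0] == 0x20 (B), p[1] == 0x40 (G), p[2] == 0xFF (R), p[3] == 0x80 (A),
// i.e. an "ARGB" DWORD is a B, G, R, A byte sequence in the framebuffer.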

[quote]
Also, I looked at the Windows RGB() macro, and it casts every value (something like b | (BYTE)(g << 8) | (WORD)(BYTE)(r << 16), etc.). Is this necessary?
[/quote]
Maybe. Implicit conversions are a killer. So unless you're absolutely sure what each type is and how it gets promoted, it's better to take it slow and safe.

[quote]
Another thing I'm wondering: does it matter what sort of gfx card I have? Do different manufacturers use different endianness on the card, or are they all the same?
[/quote]
It depends on the display mode and on whatever method you use to interact with display buffers or surfaces. For common operations the GPU itself shouldn't matter, unless you're specifically working with obscure internal encodings.

[quote]
If you're not familiar, I'm basically using DirectDraw4, locking the surface and getting a memory pointer to which I manually write pixel values. Or is the card going to be quirky about it?
[/quote]
That is just about the slowest way possible to interact with the GPU.

Today it would be done through textures. It's not perfect, but IMHO a more modern option. One advantage is that texture updates are a common operation and well optimized, while surface locks are (IIRC) obsolete and likely not optimized. Especially for DX10/11, where everything is just buffers/textures, shaders, and multi-threading.
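For instance, the rough D3D11 equivalent of "lock and write pixels" is mapping a dynamic texture and drawing it as a full-screen quad (just a sketch of the idea; device, context, tex, width and height are assumed to exist, and error handling is omitted):

// CPU-writable texture created once...
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;       // matches the ARGB packing discussed above
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DYNAMIC;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
device->CreateTexture2D(&desc, NULL, &tex);

// ...then updated each frame.
D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
// copy your pixel rows into mapped.pData, respecting mapped.RowPitch
context->Unmap(tex, 0);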

It's all a big kettle of fish really...
