
RGB Craziness


Guest Anonymous Poster

You can't use the GDI RGB macro for 16-bit color. You gotta roll yer own 16-bit macro.

From Andre LaMothe's latest:

// this builds a 16 bit color value in 5.5.5 format (1-bit alpha mode)
#define _RGB16BIT555(r,g,b) ((b%32) +((g%32) << 5) + ((r%32) << 10))

// this builds a 16 bit color value in 5.6.5 format
#define _RGB16BIT565(r,g,b) ((b%32) + ((g%64) << 5) + ((r%32) << 11))

Hope that helps.

/Alex

You should always get the PixelFormat of your Surfaces. If you don't, your GFX will be messed up on these §"@$&( ATIs.
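Something like this does the check (a rough sketch, assuming a surface pointer named lpddsBack; the green mask is the easiest way to tell 555 from 565):

DDPIXELFORMAT ddpf;
ZeroMemory(&ddpf, sizeof(ddpf));
ddpf.dwSize = sizeof(ddpf);              // DirectDraw structs need dwSize set

BOOL is565 = FALSE;
// Ask the surface how it packs its pixels.
if (SUCCEEDED(lpddsBack->GetPixelFormat(&ddpf)))
{
    // 565 has a 6-bit green mask (0x07E0); 555 has a 5-bit one (0x03E0).
    is565 = (ddpf.dwGBitMask == 0x07E0);
}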

LaMothe is not god! Shifts are faster (methinks) than integer mods.

#define _RGB16BIT555(r,g,b) ((b >> 3) + ((g >> 3) << 5) + ((r >> 3) << 10))

#define _RGB16BIT565(r,g,b) ((b >> 3) + ((g >> 2) << 5) + ((r >> 3) << 11))

- Splat

Guest Anonymous Poster

You can just use a fast modulus: replace %32 with &31.
It only works with powers of 2. The identity is x % n == x & (n - 1),
so %32 = &(32 - 1) = &31.
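A quick sanity check in plain C (just to watch the identity hold):

#include <assert.h>

int main(void)
{
    /* For a power of two n, x % n equals x & (n - 1). */
    assert((187 % 32) == (187 & 31));   /* both give 27 */
    assert((200 % 64) == (200 & 63));   /* both give 8  */
    return 0;
}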

Vader

Shoot, I should've seen the error before...

The problem with taking the modulus of the color values is that we want the MOST significant bits of the source color, not the least. Both & 31 and % 32 cut off the top three bits, which carry the _most_ relevant data for a 5-bit / 6-bit version of the color.

With shifting (the correct way), 10111011 (187) becomes 10111 (23) in 5-bit, while using modulus gets you 11011 (27). Converted back to 8-bit, the shifted 10111 becomes 10111000 (184) and the modulus one becomes 11011000 (216). Which is closer to the original 187?

An even more ridiculous example: 00011111 (31) in 8-bit converts under modulus to 11111 (31), the brightest possible 5-bit value, even though in 8-bit it is quite dark. I can't believe I read this in LaMothe and didn't realize. I guess that book has more errors than I saw at first...
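A tiny test program makes the difference obvious (plain C, using the 187 example above):

#include <stdio.h>

int main(void)
{
    unsigned char c = 0xBB;        /* 10111011 = 187 */

    unsigned shifted = c >> 3;     /* top 5 bits: 10111 = 23 */
    unsigned modded  = c & 31;     /* low 5 bits: 11011 = 27 */

    /* Expand back to 8 bits to compare against the original 187. */
    printf("shift: %u -> %u\n", shifted, shifted << 3);   /* 23 -> 184 */
    printf("mod:   %u -> %u\n", modded,  modded  << 3);   /* 27 -> 216 */
    return 0;
}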

- Splat

[This message has been edited by Splat (edited November 26, 1999).]

Ok, I tried those different macros, but they're still not working for me. When I change the RGB values, they don't come out correctly; all I can get are greens and blues. Any other ideas?

Guest Anonymous Poster

What I should have explained is that the modulus is not a substitute for shifting;
I was just optimizing the use of %32.

To use the LaMothe 16-bit macro correctly, you have to feed it components that
already fit in 5 bits each. You can't stick in 8-bit-per-channel values like
(255,255,255) directly, even though this particular example happens to come out
correctly as (1F,1F,1F). You have to shift FIRST, then use the 16-bit macro.
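In other words, something like this (using the 555 macro quoted above; 31 is the largest 5-bit value):

// Reduce each 8-bit component to 5 bits first, THEN pack:
unsigned char r5 = 255 >> 3, g5 = 255 >> 3, b5 = 255 >> 3;   // each = 31 = 0x1F
unsigned short white = _RGB16BIT555(r5, g5, b5);             // packs to 0x7FFF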

Vader

It may be that your bitmap_buffer pointer is a char pointer when it needs to be a 16-bit one. Since you are using x*2 to get word alignment, you are storing only one byte of the color...

Jim

I removed the *2 in the x*2 part, and that seemed to give me a LITTLE control over my colors, although it put my pixel farther to the left on the screen. Here is the code I use to define my RGB macro, set up my buffer, and draw my pixel:

#define _RGB16BIT(r,g,b) ((b%32) + ((g%32) << 5) + ((r%32) << 10))

typedef unsigned char UCHAR;
UCHAR *bitmap_buffer = NULL;
bitmap_buffer = (UCHAR *)ddsd.lpSurface;

bitmap_buffer[x+ddsd.lPitch*y]=_RGB16BIT(255,255,255);

Now the colors are better, but not perfect. (0,0,0) makes black and (255,255,255) makes white, but (0,0,255) makes a really bright blue (I expected dark blue), and other combinations don't work either.

Now, about shifting my color values: I am fairly new to programming, and I am only beginning to grasp the whole shifting thing. Vader said that (255,255,255) are 24-bit (8-bit-per-channel) values, so how EXACTLY do I convert those to 16-bit values?

[btw, I tried those other macros and they all worked the same for me; however, I didn't time them, so their speeds may vary]

[This message has been edited by CoolMike (edited November 27, 1999).]

I prefer to shift the values:

#define _RGB16BIT(r,g,b) (((r>>3)<<10) | ((g>>3)<<5) | (b>>3))

But, you should replace the pointer as such:

typedef unsigned short USHORT;
USHORT *bitmap_buffer = NULL;
bitmap_buffer = (USHORT *)ddsd.lpSurface;
bitmap_buffer[x + ddsd.lPitch*y] = _RGB16BIT(255,255,255);

I don't remember offhand whether you have to divide lPitch by two beforehand. Try both ways.
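Actually, you can sidestep the question: lPitch is always in bytes, so do the row math with a byte pointer and only cast to USHORT for the write. A sketch (assuming the surface is locked and ddsd is filled in):

UCHAR  *row   = (UCHAR *)ddsd.lpSurface + y * ddsd.lPitch;   // pitch is in bytes
USHORT *pixel = (USHORT *)row;                               // 16-bit view of the row
pixel[x] = _RGB16BIT(255, 255, 255);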

BTW - RGB(0,0,255) should make a bright blue (unless you mean it's adding some green for a teal color).

Remember - when writing 16-bit colors, you should use a 16-bit pointer (USHORT), not a byte pointer (UCHAR).

Jim

I have 3 notes about this:
1/ Your explanation is very clear, Splat! I have written 16-bit color code before. The point is that we must keep the 5 most significant bits of each component and ignore the 3 low bits, so we must use the shift-based macro.
2/ The pixel format differs from one video card to another. Some cards are 555 and others are 565 (the S3 Trio64V+ is one example), so you must query the pixel format to pick the correct macro - see the snippet below.
3/ I had a problem on an ATI card too! Sorry, I don't remember which model. But be warned: mixing GDI with DirectDraw surfaces broke things for me. In my program I used LoadBitmap to load the external bitmap file, got a device context for the surface with GetDC(), and BitBlt'd the memory device context to the surface - and then blitting with a source color key failed. I had to load the bitmap into the surface manually, without the GDI functions, and then it worked well.
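For note 2, something like this picks the packing at runtime (a rough sketch using Splat's shift-based macros from above; is565 would come from a GetPixelFormat check like the one earlier in the thread):

// r, g, b are 8-bit components; is565 comes from the pixel-format query.
unsigned short pixel = is565 ? _RGB16BIT565(r, g, b)
                             : _RGB16BIT555(r, g, b);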
Good luck.


I am getting nowhere with this. It still failed when I changed my UCHARs to USHORTs, and I know my ATI card is 555 because I can already load sprites from bitmaps and display them just fine on the screen. It is only when I plot pixels with my _RGB16BIT macro that it doesn't work right.
BTW, would anyone be willing to have my project emailed to them? I would really like it if someone could look at the entire project (as small as it is) and tell me what's wrong. It is about 70kb and is for Visual C++ 5.0. Thanks in advance to any gracious soul willing to undertake this task!

-Mike

I have two problems: my blits are not transparent when I set the color key of the source surface to RGB(0,0,0), and when I plot a 16-bit pixel, it doesn't display the right color. For example, if I plot an RGB(255,255,255) pixel, it displays a blue pixel, and if I plot a pixel as RGB(255,255,0), it comes out dark green.
Here is the piece of code I use to plot my pixels (RGBred, RGBgreen, and RGBblue are my color values, and bitmap_buffer is a pointer to the locked backbuffer):

bitmap_buffer[x*2 + ddsd.lPitch*y] = RGB(RGBred, RGBgreen, RGBblue);

I have a feeling both problems come from my use of RGB. Is it my video card being screwy? I have an ATI All-In-Wonder PRO; does it maybe have some strange hardware quirk I have to work around? I think I might need to use GetPixelFormat or something like that, but I'm not sure how. Can anybody help me with this?

Thanks in advance.
