16bit

Hey, I can get my DDraw programs running fine in 32-bit color, but if I try to switch over to 16-bit the colors get messed up.

The macro I'm using to convert r, g, b values:

#define _16BIT(r,g,b) ((b & 31) + ((g & 63) << 5) + ((r & 31) << 11))

The part where the BMP is copied to the surface:

//red, green, and blue are defined with color values
//x and y are the coordinates
int pixel = _16BIT(red,green,blue);
int lpitch16 = (int)(ddsd.lPitch) >> 1;
primary_buffer[x + y*lpitch16] = pixel;

Thanks for the help!

Umm, then don't use 16 bit?

If you're using DDraw, then you should have access to a helper like D3DCOLOR_XRGB(r, g, b) (integer 0-255 inputs) or D3DCOLOR_COLORVALUE(r, g, b, a) (float 0-1 inputs), or something of the sort. I would suggest using one of those (or one of the other DX color makers) in preference to your own.


Mushu - trying to help those he doesn't know, with things he doesn't know.
Why won't he just go away? A question the universe may never have an answer to...

Your macro is wacky; it takes the least significant bits from each color channel. What's the range of each of the inputs to _16BIT? If it's 0-255, you'll need to change the macro.
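To see the problem concretely (assuming 0-255 inputs; 200 is just an example value), masking keeps the low bits and throws the brightness information away, while shifting keeps the high bits:

#include <stdio.h>

int main(void)
{
    int red = 200;                /* 11001000 in binary -- a fairly bright red  */
    printf("%d\n", red & 31);     /* prints 8  -- low 5 bits, far too dark      */
    printf("%d\n", red >> 3);     /* prints 25 -- high 5 bits, correctly scaled */
    return 0;
}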

What macro should I use?

And...

I just want my game to run in 16-bit as well, so all the video cards that only support 16- and 24-bit color can run it too. The last thing I made only supported 32-bit.

Also, I remember from my DirectDraw programming that there are at least two 16-bit color modes. One (5.6.5) uses 6 bits for green and 5 for each of the other two channels. The other (5.5.5) uses 5 bits for each channel and has one bit left over. I think you have to detect which one the video card uses and convert your colors with the appropriate method.
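A rough sketch of that detection, assuming a DirectDraw 7 surface pointer named lpddsPrimary (the pointer name and the helper functions are just illustrative): GetPixelFormat fills in the channel bit masks, and the green mask tells the two formats apart.

#include <windows.h>
#include <ddraw.h>

// Assumed to be your already-created primary (or back) surface.
extern LPDIRECTDRAWSURFACE7 lpddsPrimary;

int g_green6 = 0;   // 1 = 5.6.5 mode, 0 = 5.5.5 mode

void DetectPixelFormat(void)
{
    DDPIXELFORMAT ddpf;
    ZeroMemory(&ddpf, sizeof(ddpf));
    ddpf.dwSize = sizeof(ddpf);
    lpddsPrimary->GetPixelFormat(&ddpf);

    // In 5.6.5 the green mask is 0x07E0; in 5.5.5 it is 0x03E0.
    g_green6 = (ddpf.dwGBitMask == 0x07E0);
}

// r, g, b in 0..255
USHORT RGB16(int r, int g, int b)
{
    if (g_green6)
        return (USHORT)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    else
        return (USHORT)(((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3));
}

You would call DetectPixelFormat once right after creating the surface, then use RGB16 wherever you build a pixel value.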

Surely there's a way to have a 32 bpp surface internally and automatically convert it when blitting to whatever your actual display format is? I know that with OpenPTC and TinyPTC you can do that (they have some very fast converters).

cheers
sam
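Outside of those libraries you can do the same thing by hand: draw everything into your own 32 bpp buffer and convert to 16-bit only when copying to the locked surface. A minimal sketch, assuming a 5.6.5 display mode and the same ddsd surface description the original post uses (backbuf32, width and height are assumed names for the off-screen buffer and its size):

#include <windows.h>
#include <ddraw.h>

// Copy a 32 bpp 0x00RRGGBB back buffer into a locked 16-bit (5.6.5) surface.
void Blit32To16(const DWORD *backbuf32, int width, int height,
                const DDSURFACEDESC2 &ddsd)
{
    USHORT *dest    = (USHORT *)ddsd.lpSurface;
    int     pitch16 = (int)ddsd.lPitch >> 1;     // pitch in 16-bit pixels

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            DWORD c = backbuf32[x + y * width];
            int r = (c >> 16) & 0xFF;
            int g = (c >> 8)  & 0xFF;
            int b =  c        & 0xFF;
            dest[x + y * pitch16] =
                (USHORT)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
        }
    }
}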

In case you have masochistic tendencies and want to just fix the macro to look at the most significant bits instead:

Quote:
Original post by eal
#define _16BIT(r,g,b) ((b >> 3) + ((g >> 2) << 5) + ((r >> 3) << 11))


Here I assume your input r,g,b values are in the 0..255 range.

Do not try to combine the shifts together; it's like that for a reason. If you can't figure it out (or for that matter, don't understand how this works)... then you shouldn't have been using the original macro in the first place. :P
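Plugging that corrected macro back into the plot code from the first post, also make sure the surface pointer is treated as a pointer to 16-bit values so each write is two bytes wide. A sketch reusing the names from the original post (red, green, blue in 0..255, ddsd from the Lock call):

#define _16BIT(r,g,b) ((b >> 3) + ((g >> 2) << 5) + ((r >> 3) << 11))

// The surface must be addressed as 16-bit pixels, not ints,
// otherwise each write stomps on the neighbouring pixel.
USHORT *primary_buffer = (USHORT *)ddsd.lpSurface;
int     lpitch16       = (int)ddsd.lPitch >> 1;   // pitch in USHORTs

USHORT pixel = (USHORT)_16BIT(red, green, blue);
primary_buffer[x + y * lpitch16] = pixel;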
