This topic is now archived and is closed to further replies.

Furie

16bit colors?


I'm having a problem getting 16-bit colors in DirectDraw to work. I have declared the video buffer as unsigned short *video_buffer, and I place pixels with:

video_buffer[x + y*(ddsd.lPitch>>1)] = color;

The problem is that I can't seem to get the right colors. If I assume the 16 bits are laid out 5,6,5 and try to get white (all bits set to 1), I get a light pink color. If I assume it's 5,5,5 with the top bit 0, I get a blue-green color. I have been using these defines without success:

#define RGB16B(r,g,b) ((b%32)+((g%64)<<6)+((r%32)<<12));
#define _16BIT(r,g,b) (((r&0xf8)<<8) | ((g&0xfc)<<3) | (b>>3))

And I made this so I can place the bits myself:

#define COLAZ(a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p) ((32768 * a) + (16384 * b) + (8192 * c) + (4096 * d) + (2048 * e) + (1024 * f) + (512 * g) + (256 * h) + (128 * i) + (64 * j) + (32 * k) + (16 * l) + (8 * m) + (4 * n) + (2 * o) + (1 * p));

But I still can't get it to work. So how are the colors distributed across the 16 bits?

The placement of the bits is totally dependent on your video card/driver, so you need to check it at runtime. I don't have any reference material in front of me, but DirectX can report the pixel format of a surface (including where each color component sits), and you should build your colors from that information rather than hard-coding a layout.
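To make the idea concrete, here is a minimal sketch of building a pixel from bit masks. The mask values in the comments (0xF800/0x07E0/0x001F for 565, 0x7C00/0x03E0/0x001F for 555) are the common layouts; in a real program you would read them from the DDPIXELFORMAT that IDirectDrawSurface::GetPixelFormat() fills in (dwRBitMask, dwGBitMask, dwBBitMask) instead of assuming them:

```c
/* Derive each field's position (shift) and width (bit count) from its
 * mask, then scale an 8-bit component into that field. This works for
 * any 16-bit layout the driver reports. */

/* count trailing zero bits: where the field starts */
static unsigned mask_shift(unsigned mask)
{
    unsigned s = 0;
    while (mask && !(mask & 1)) { mask >>= 1; ++s; }
    return s;
}

/* count set bits: how wide the field is */
static unsigned mask_bits(unsigned mask)
{
    unsigned n = 0;
    while (mask) { n += mask & 1; mask >>= 1; }
    return n;
}

/* scale an 8-bit component (0..255) into the masked field */
static unsigned pack_component(unsigned c8, unsigned mask)
{
    return (c8 >> (8 - mask_bits(mask))) << mask_shift(mask);
}

/* combine r,g,b using the masks reported by the driver,
 * e.g. 0xF800/0x07E0/0x001F (565) or 0x7C00/0x03E0/0x001F (555) */
static unsigned pack_rgb(unsigned r, unsigned g, unsigned b,
                         unsigned rmask, unsigned gmask, unsigned bmask)
{
    return pack_component(r, rmask) |
           pack_component(g, gmask) |
           pack_component(b, bmask);
}
```

With 565 masks, white packs to 0xFFFF; with 555 masks, to 0x7FFF (the top bit is unused), which is why assuming the wrong layout tints the colors.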

This is the method I've been using:

///////////////
// MACROS
///////////////

#define _RGB16BIT555(r,g,b) ((b&31)+((g&31)<<5)+((r&31)<<10))
#define _RGB16BIT565(r,g,b) ((b&31)+((g&63)<<5)+((r&31)<<11))

#define DDRAW_STRUCT_INIT(ddstruct)            \
{                                              \
    memset(&ddstruct,0,sizeof(ddstruct));      \
    ddstruct.dwSize=sizeof(ddstruct);          \
}
///////////////
// function ptr to RGB16 builder
///////////////
USHORT (*RGB16Bit)(int r, int g, int b)=NULL;

///////////////
// root functions
///////////////
USHORT RGB16Bit565(int r, int g, int b)
{
r>>=3; g>>=2; b>>=3;
return(_RGB16BIT565((r),(g),(b)));
}
USHORT RGB16Bit555(int r, int g, int b)
{
r>>=3; g>>=3; b>>=3;
return(_RGB16BIT555((r),(g),(b)));
}

Use code similar to this in your function that initializes DirectDraw
...
DDPIXELFORMAT ddpf;
DDRAW_STRUCT_INIT(ddpf);
lpddsPrimarySurface->GetPixelFormat(&ddpf);
// the green mask distinguishes the two 16-bit layouts:
// 0x03E0 means 5 green bits (555), 0x07E0 means 6 green bits (565)
if (ddpf.dwGBitMask == 0x03E0)
RGB16Bit = RGB16Bit555;
else
RGB16Bit = RGB16Bit565;
...
This code checks which pixel format is in use and sets up the RGB16Bit() function pointer accordingly. Once it is set, you don't have to worry about whether the machine is running in 565 or 555 mode.


Just place your pixels like so:
video_buffer[x + y*(dds.lPitch>>1)]=RGB16Bit(red,green,blue);
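As a sanity check, here is a self-contained version of the builders above (macros repeated so it compiles on its own, with argument parentheses added) along with the values they should produce:

```c
/* Self-contained check of the 555/565 color builders. */
typedef unsigned short USHORT;

#define _RGB16BIT555(r,g,b) ((USHORT)((((b)&31))+(((g)&31)<<5)+(((r)&31)<<10)))
#define _RGB16BIT565(r,g,b) ((USHORT)((((b)&31))+(((g)&63)<<5)+(((r)&31)<<11)))

USHORT RGB16Bit565(int r, int g, int b)
{
    r >>= 3; g >>= 2; b >>= 3;   /* 8-bit components down to 5/6/5 */
    return _RGB16BIT565(r, g, b);
}

USHORT RGB16Bit555(int r, int g, int b)
{
    r >>= 3; g >>= 3; b >>= 3;   /* 8-bit components down to 5/5/5 */
    return _RGB16BIT555(r, g, b);
}

/* RGB16Bit565(255,255,255) == 0xFFFF   all 16 bits set
 * RGB16Bit555(255,255,255) == 0x7FFF   top bit unused in 555
 * RGB16Bit565(255,0,0)     == 0xF800   red field only */
```

If white comes out pink or blue-green, the surface is almost certainly in the other mode than the builder you called, which is exactly what the runtime check above guards against.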

Hope this helps.

[edited by - Asch on May 2, 2003 3:26:08 AM]
