How to load a 24-Bit BMP on a 16-Bit DDraw Surface

Hi, I am trying to write a function to load a 24-bit BMP into a 16-bit DirectDraw surface. I have already loaded my BMP and stored all of the BMP data in a char *Data array, but my trouble is how to extract the RGB values of each pixel and convert them to fit exactly in the 16-bit surface (without loss of color). I am using the formulas below to convert the 24-bit image for the 16-bit DirectDraw surface, but I don't know how to get the RGB values out of my BMP data and store the converted values on my DirectDraw surface. Any idea? Here are the formulas, based on an article by TANSTAAFL... http://www.gamedev.net/reference/articles/article863.asp

GetPixelFormat takes a pointer to a DDPIXELFORMAT structure (not surprisingly). I looked at the DDPIXELFORMAT structure, and I found a couple of fields that interested me, namely dwRBitMask, dwGBitMask, and dwBBitMask. Now I had all of the information I wanted, just not in a format I could use.

DDPIXELFORMAT ddpf;
memset(&ddpf, 0, sizeof(DDPIXELFORMAT));
ddpf.dwSize = sizeof(DDPIXELFORMAT);
lpDDS->GetPixelFormat(&ddpf);

What now? Well, in order to figure out how far to shift bits to the left, I had to do this:

// warning, this code is UGLY
DWORD dwBMask = ddpf.dwBBitMask;
DWORD dwRMask = ddpf.dwRBitMask;
DWORD dwGMask = ddpf.dwGBitMask;
DWORD dwBSHL = 0;
DWORD dwRSHL = 0;
DWORD dwGSHL = 0;
while ((dwBMask & 1) == 0) { dwBMask = dwBMask >> 1; dwBSHL++; }
while ((dwGMask & 1) == 0) { dwGMask = dwGMask >> 1; dwGSHL++; }
while ((dwRMask & 1) == 0) { dwRMask = dwRMask >> 1; dwRSHL++; }

At this point, if we assumed that I had the correct number of bits stored in three UCHARs named Red, Green, and Blue, I could calculate the 16-bit color by doing this:

Color = (Blue << dwBSHL) | (Green << dwGSHL) | (Red << dwRSHL);

However, right now we have 8 bits in each of Red, Green, and Blue, and we need only 5 or 6. We have to shift the bits of the colors to the right before shifting them to the left to make a 16-bit color. So, let's determine how many bits to the right we have to shift them:

DWORD dwBSHR = 8;
DWORD dwRSHR = 8;
DWORD dwGSHR = 8;
while ((dwBMask & 1) == 1) { dwBMask = dwBMask >> 1; dwBSHR--; }
while ((dwGMask & 1) == 1) { dwGMask = dwGMask >> 1; dwGSHR--; }
while ((dwRMask & 1) == 1) { dwRMask = dwRMask >> 1; dwRSHR--; }

Now we can accurately calculate a 16-bit color from a 24-bit color:

Color = ((Blue >> dwBSHR) << dwBSHL) | ((Green >> dwGSHR) << dwGSHL) | ((Red >> dwRSHR) << dwRSHL);
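
For example, if I understand it right, on a 565 surface the masks are dwRBitMask = 0xF800, dwGBitMask = 0x07E0 and dwBBitMask = 0x001F, so the loops above should give:

// Worked example for a 565 surface (masks 0xF800 / 0x07E0 / 0x001F):
//   dwBSHL = 0,  dwGSHL = 5,  dwRSHL = 11   (where each field sits in the word)
//   dwBSHR = 3,  dwGSHR = 2,  dwRSHR = 3    (8 bits squeezed down to 5/6/5)
// So a pixel with Red = 255, Green = 128, Blue = 0 becomes:
USHORT Color = ((0 >> 3) << 0) | ((128 >> 2) << 5) | ((255 >> 3) << 11);   // = 0xFC00
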
hmmm

16-bit modes come in two flavours: 565 and, more rarely, 555.

As the name 565 says, that's 5 bits for red, 6 bits for green and 5 bits for blue.

After you get the RGB components from the 24-bit data, you AND them with:
0x1f for red
0x3f for green
0x1f for blue

(this is on a 5:6:5 board)

and then shift them back into position to build the 16-bit
word of data, et voila, that's your pixel...

Of course you will lose color... but you can't make the same thing in 16 bits as in 24 bits, obviously.

Bogdan
obysoft
Oscar: When you want to display a 24-bit bitmap on a 16-bit surface, you have two options. The first is to do it manually, one pixel at a time. From what you said in your post, I assume you already have a char* that points to the actual image data in the bitmap. You have to read it in one pixel at a time. Each pixel will have one byte for red, one byte for green, and one byte for blue. To scale these down to the appropriate range for 16-bit color, you would use this for 565-mode:

#define RGB16BIT565(r,g,b) ((((r)>>3)<<11) | (((g)>>2)<<5) | ((b)>>3))

Or this for 555-mode:

#define RGB16BIT555(r,g,b) ((((r)>>3)<<10) | (((g)>>3)<<5) | ((b)>>3))

Each macro takes the red, green, and blue bytes and yields a value that fits in a USHORT, representing one pixel in the appropriate 16-bit pixel format. So then you write the bitmap onto your surface one pixel at a time. To avoid any sign extension screwing up the results, you should probably cast your char* to a UCHAR* before reading from it.
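
Off the top of my head, the whole loop would look something like this (untested, and assuming a 565 surface at least as large as the bitmap; lpDDS, Data, dwWidth and dwHeight stand for your surface pointer, image data pointer, and bitmap dimensions):

// Untested sketch: copy a 24-bit BMP into a locked 565 surface, pixel by pixel.
DDSURFACEDESC2 ddsd;
memset(&ddsd, 0, sizeof(DDSURFACEDESC2));
ddsd.dwSize = sizeof(DDSURFACEDESC2);
if (FAILED(lpDDS->Lock(NULL, &ddsd, DDLOCK_WAIT, NULL)))
    return;

UCHAR* pSrc       = (UCHAR*)Data;               // raw 24-bit pixel data from the BMP
UCHAR* pDest      = (UCHAR*)ddsd.lpSurface;     // surface memory
DWORD  dwSrcPitch = (dwWidth * 3 + 3) & ~3;     // BMP rows are padded to 4 bytes

for (DWORD y = 0; y < dwHeight; y++)
{
    // BMP scanlines are stored bottom-up, so read the source rows in reverse.
    UCHAR*  pRow  = pSrc + (dwHeight - 1 - y) * dwSrcPitch;
    USHORT* pLine = (USHORT*)(pDest + y * ddsd.lPitch);

    for (DWORD x = 0; x < dwWidth; x++)
    {
        UCHAR b = pRow[x * 3 + 0];              // BMP stores each pixel as B, G, R
        UCHAR g = pRow[x * 3 + 1];
        UCHAR r = pRow[x * 3 + 2];
        pLine[x] = (USHORT)RGB16BIT565(r, g, b);
    }
}

lpDDS->Unlock(NULL);

If the surface turns out to be in 555 mode instead, just swap in the other macro.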

Bogdan: I know what you're saying with the bit masks, but I don't think it will yield the correct result. The way you have it, a pixel in the 24-bit bitmap whose red component is 31 will be as bright as possible, and a pixel whose red component is 32 will be as dark as possible. Know what I mean? I think the values should be scaled down rather than just masking out the low bits, hence all those bit shifts in my macros.
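
To see what I mean, take two red values that are right next to each other and run them through both approaches (just an illustration with made-up numbers):

UCHAR r1 = 31;              // two adjacent red values from the 24-bit image
UCHAR r2 = 32;

// Masking keeps only the low 5 bits, so the field jumps from full to empty:
UCHAR masked1 = r1 & 0x1f;  // 31 -> brightest possible red
UCHAR masked2 = r2 & 0x1f;  // 0  -> no red at all

// Scaling keeps the high 5 bits, so neighbouring values stay neighbours:
UCHAR scaled1 = r1 >> 3;    // 3
UCHAR scaled2 = r2 >> 3;    // 4

Masking makes the brightness wrap around at every multiple of 32, while scaling just coarsens the gradient, which is what you want.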

Oscar again: A much easier solution is to use device contexts. Load your bitmap by using LoadImage(), then create a memory device context with CreateCompatibleDC() and select the bitmap into the DC using SelectObject(). Next, create another DC (the destination DC) from your DirectDraw surface, using IDirectDrawSurface7::GetDC(), and copy the bitmap from one to the other using BitBlt() -- or StretchBlt() if you want to scale it. In this case, the conversion of color depths is automatic.
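
Roughly like this (untested, and assuming lpDDS is your IDirectDrawSurface7 pointer; "image.bmp" is just a placeholder filename):

// Rough sketch of the GDI route: let Windows do the color-depth conversion.
HBITMAP hBitmap = (HBITMAP)LoadImage(NULL, "image.bmp", IMAGE_BITMAP,
                                     0, 0, LR_LOADFROMFILE | LR_CREATEDIBSECTION);

HDC hSrcDC = CreateCompatibleDC(NULL);          // memory DC for the bitmap
HBITMAP hOldBitmap = (HBITMAP)SelectObject(hSrcDC, hBitmap);

BITMAP bm;
GetObject(hBitmap, sizeof(BITMAP), &bm);        // get the bitmap's dimensions

HDC hDestDC;
if (SUCCEEDED(lpDDS->GetDC(&hDestDC)))          // DC for the DirectDraw surface
{
    BitBlt(hDestDC, 0, 0, bm.bmWidth, bm.bmHeight, hSrcDC, 0, 0, SRCCOPY);
    lpDDS->ReleaseDC(hDestDC);                  // always release the surface DC
}

SelectObject(hSrcDC, hOldBitmap);
DeleteDC(hSrcDC);
DeleteObject(hBitmap);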

If that sounds like something that would work for your particular program, and you want to see it in detail, I have an article covering a bunch of GDI stuff on my website (link below). Just take the function I use as an example, and replace the destination device context with one you get from your surface, and it should work fine.

-Ironblayde
 Aeon Software

"Your superior intellect is no match for our puny weapons!"

