Getting a pixel from a bitmap

Alpha_ProgDes    6934
Hello again! OK, I'm trying to get a pixel from a bitmap, then use that value to set the transparency. Here's the horror:
    EraseBackground();
    // Read a pixel through GDI. Note: GetPixel returns a COLORREF
    // (0x00BBGGRR), not a value in the surface's native pixel format,
    // and trans_color is never actually used below
    G.lpDDSRes->GetDC(&G.hDC);
    DWORD trans_color = (DWORD)GetPixel(G.hDC, 48, 48);
    G.lpDDSRes->ReleaseDC(G.hDC);
    DDCOLORKEY trans_key;
    trans_key.dwColorSpaceHighValue = 0x008000; // values taken from Photoshop
    trans_key.dwColorSpaceLowValue = 0x006B00;  // values taken from Photoshop
    G.lpDDSRes->SetColorKey((DDCKEY_SRCBLT | DDCKEY_COLORSPACE), &trans_key);
    G.lpDDSBack->Blt(&bulletDest, G.lpDDSRes, &bulletSrc, (DDBLT_WAIT | DDBLT_KEYSRC), NULL);
    G.lpDDSBack->Blt(&dest, G.lpDDSRes, &src, (DDBLT_WAIT | DDBLT_KEYSRC), NULL);
    G.lpDDSPrimary->Flip(NULL, 0);
Unfortunately, all I get is either the original sprite or nothing at all. I don't know what I'm doing wrong. So. Help. Please. [sad]

Alpha_ProgDes    6934
A hack, but it works nonetheless:

static DWORD value = -1;  // DWORD is unsigned, so the first ++value wraps to 0

// timing and movement code here
// timing set for 1 pixel per second, so the value can be read
moveTickCount += G.diffTickCount;
while (moveTickCount >= (1000 / moveRate)) {

    if (move == walk)
        ++Jin_xPos;
    else if (move == pedal)
        --Jin_xPos;

    bullet_xPos += 1;
    ++value;
    if (Jin_xPos > SCREEN_WIDTH)
        Jin_xPos = -100;
    if ((Jin_xPos + 100) < 0)
        Jin_xPos = SCREEN_WIDTH;
    if (bullet_xPos > SCREEN_WIDTH) {
        bullet_xPos = Jin_xPos + 30;
        onScreen = false;
    }
    moveTickCount -= (1000 / moveRate);
}

// this goes right before the blt
DDCOLORKEY trans_key;
trans_key.dwColorSpaceHighValue = value;
trans_key.dwColorSpaceLowValue = value;
G.lpDDSRes->SetColorKey(DDCKEY_SRCBLT, &trans_key);

if (value > 255)
    value = 0;
char fpsBuffer[32];
sprintf(fpsBuffer, "This value is transparent: %lu", value);
SetTextColor(G.hDC, RGB16COLOR(16, 2, 16));
SetBkColor(G.hDC, RGB16COLOR(0, 0, 0));
TextOut(G.hDC, 0, 0, fpsBuffer, strlen(fpsBuffer));

G.lpDDSBack->Blt(&bulletDest, G.lpDDSRes, &bulletSrc, (DDBLT_WAIT | DDBLT_KEYSRC), NULL);
G.lpDDSBack->Blt(&dest, G.lpDDSRes, &src, (DDBLT_WAIT | DDBLT_KEYSRC), NULL);
G.lpDDSPrimary->Flip(NULL, 0);

But if someone could show me the "official" way or point me to a website, that'd be great.

Oh! The point of the code is to cycle through the 256 color indexes until your background color disappears. Since the movement rate is set so slow, you'll be able to read off the color index value at the moment the background color disappears.
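
For reference, the "official" route here is what the DirectX SDK samples call DDColorMatch: temporarily write a known GDI color onto the surface, lock it, and read the raw pixel back in the surface's native format. A minimal sketch along those lines (error handling trimmed; pdds and rgb are placeholder names):

DWORD DDColorMatch(IDirectDrawSurface *pdds, COLORREF rgb)
{
    COLORREF rgbSaved = CLR_INVALID;
    DWORD dw = 0;
    HDC hdc;
    DDSURFACEDESC ddsd;

    // Plant the color at (0,0) via GDI, remembering what was there
    if (pdds->GetDC(&hdc) == DD_OK)
    {
        rgbSaved = GetPixel(hdc, 0, 0);
        SetPixel(hdc, 0, 0, rgb);
        pdds->ReleaseDC(hdc);
    }

    // Lock the surface and read the raw value back in native format
    ZeroMemory(&ddsd, sizeof(ddsd));
    ddsd.dwSize = sizeof(ddsd);
    if (pdds->Lock(NULL, &ddsd, DDLOCK_WAIT, NULL) == DD_OK)
    {
        dw = *(DWORD *)ddsd.lpSurface;
        if (ddsd.ddpfPixelFormat.dwRGBBitCount < 32)
            dw &= (1 << ddsd.ddpfPixelFormat.dwRGBBitCount) - 1;  // drop bits past the depth
        pdds->Unlock(NULL);
    }

    // Put the original pixel back
    if (pdds->GetDC(&hdc) == DD_OK)
    {
        SetPixel(hdc, 0, 0, rgbSaved);
        pdds->ReleaseDC(hdc);
    }
    return dw;
}

Feed the result into both dwColorSpaceLowValue and dwColorSpaceHighValue, and you can drop the DDCKEY_COLORSPACE flag since it's a single key color.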

Jiia    592
Is G.lpDDSRes a DirectDraw surface?

I'm not sure, but you may need to convert to a different pixel format between DirectDraw and the GDI.

Why not set the transparency as you load the bitmap in? This way you know the exact format of the colors before sending them off to DirectX land :)

If you can't do that, I would just lock the buffer and grab the pixel that way.

Also, and again I'm not sure, but I think the low and high transparency values should be the same.
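
A minimal sketch of that load-time idea, keying off the top-left pixel through a lock (it assumes the Utils_LoadBitmap loader that shows up later in the thread, and that pixel (0,0) holds the background color):

LPDIRECTDRAWSURFACE surf = Utils_LoadBitmap(G.lpDD, "Jun_Attack.bmp", 0, 0);
if (surf != NULL)
{
    DDSURFACEDESC ddsd;
    ZeroMemory(&ddsd, sizeof(ddsd));
    ddsd.dwSize = sizeof(ddsd);
    if (surf->Lock(NULL, &ddsd, DDLOCK_WAIT | DDLOCK_READONLY, NULL) == DD_OK)
    {
        // Read pixel (0,0) straight out of surface memory: already in native format
        DWORD raw = *(DWORD *)ddsd.lpSurface;
        if (ddsd.ddpfPixelFormat.dwRGBBitCount < 32)
            raw &= (1 << ddsd.ddpfPixelFormat.dwRGBBitCount) - 1;  // mask to the real depth
        surf->Unlock(NULL);

        // Low == high: key out exactly one color
        DDCOLORKEY key;
        key.dwColorSpaceLowValue = raw;
        key.dwColorSpaceHighValue = raw;
        surf->SetColorKey(DDCKEY_SRCBLT, &key);
    }
}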

Jiia    592
You'll have to show me some code first. I don't have any code for what you're doing, and I have no idea what environment you're working in. How do you load the images in? What version of Direct3D or DirectDraw are you using?

Alpha_ProgDes    6934
Dev-C++ 4.9.9.1, DirectDraw 6.1.

int DD_Init()
{
    HRESULT hRet;
    DDSURFACEDESC ddsd;
    DDSCAPS ddscaps;

    // Create our DirectDraw object
    hRet = DirectDrawCreate(NULL, &G.lpDD, NULL);
    if (FAILED(hRet)) return -1;

    // Set the cooperative level
    hRet = G.lpDD->SetCooperativeLevel(G.hWnd, DDSCL_EXCLUSIVE | DDSCL_FULLSCREEN);
    if (FAILED(hRet)) return -2;

    // Set the video mode
    hRet = G.lpDD->SetDisplayMode(SCREEN_WIDTH, SCREEN_HEIGHT, SCREEN_BITDEPTH);
    if (FAILED(hRet)) return -3;

    // Create our primary surface with one backbuffer
    ZeroMemory(&ddsd, sizeof(ddsd));
    ddsd.dwSize = sizeof(ddsd);
    ddsd.dwFlags = DDSD_CAPS | DDSD_BACKBUFFERCOUNT;
    ddsd.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE | DDSCAPS_FLIP | DDSCAPS_COMPLEX;
    ddsd.dwBackBufferCount = 1;
    hRet = G.lpDD->CreateSurface(&ddsd, &G.lpDDSPrimary, NULL);
    if (FAILED(hRet)) return -4;

    // Attach G.lpDDSBack to the backbuffer
    ZeroMemory(&ddscaps, sizeof(ddscaps));
    ddscaps.dwCaps = DDSCAPS_BACKBUFFER;
    hRet = G.lpDDSPrimary->GetAttachedSurface(&ddscaps, &G.lpDDSBack);
    if (FAILED(hRet)) return -5;

    // Create a clipper for the entire display
    hRet = G.lpDD->CreateClipper(0, &G.lpDDClipper, NULL);
    if (FAILED(hRet)) return -6;

    // Set the clipper to clip the entire window
    hRet = G.lpDDClipper->SetHWnd(0, G.hWnd);
    if (FAILED(hRet)) return -7;

    // Attach the clipper to the backbuffer surface
    hRet = G.lpDDSBack->SetClipper(G.lpDDClipper);
    if (FAILED(hRet)) return -8;

    // Create and set the palette (8-bit modes only)
    if (SCREEN_BITDEPTH == 8)
    {
        G.lpDDPalette = Utils_LoadPalette(G.lpDD, "Jun_Attack.bmp");
        if (G.lpDDPalette) hRet = G.lpDDSPrimary->SetPalette(G.lpDDPalette);
        if (!G.lpDDPalette || FAILED(hRet)) return -9;
    }

    // Create the offscreen surface by loading our bitmap
    G.lpDDSRes = Utils_LoadBitmap(G.lpDD, "Jun_Attack.bmp", 0, 0);
    if (G.lpDDSRes == NULL) return -10;

    return 0;
}

Jiia    592
I'm not sure if this code will help ya. It's from an old game engine.

ULONG KSprite::GetPixel(INT x, INT y)
{
    DDSURFACEDESC2 SurfaceDesc;
    memset(&SurfaceDesc, 0, sizeof(DDSURFACEDESC2));
    SurfaceDesc.dwSize = sizeof(DDSURFACEDESC2);

    // Lock the surface so we can read its memory directly
    if (DDSurface->Lock(NULL, &SurfaceDesc, DDLOCK_WAIT | DDLOCK_SURFACEMEMORYPTR | DDLOCK_READONLY, NULL) != DD_OK)
        return Error("Cannot Lock Surface to GetPixel");

    // Step x pixels across (BitDepth / 8 bytes each) and y rows down (lPitch bytes each)
    ULONG Color = *(ULONG*)(((BYTE*)SurfaceDesc.lpSurface) + (x * (BitDepth / 8)) + (y * SurfaceDesc.lPitch));
    DDSurface->Unlock(NULL);
    return Color;
}


"BitDepth" is the display bit depth that should match your image. You need to either make a different version of this function for 8, 16, 24, and 32 bit, or just mask the pixel value for each mode.

To make a different function version (or use a template), change the "ULONG"s to BYTE for 8-bit or USHORT for 16-bit. You still have to mask for 24-bit, though.
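
A sketch of that template version, written as a hypothetical free function (the DDSurface member above looks like an IDirectDrawSurface4 given the DDSURFACEDESC2; ReadPixel is a made-up name to avoid clashing with GetPixel):

// PIXEL selects the read size: BYTE for 8-bit, USHORT for 16-bit,
// ULONG for 32-bit. 24-bit still needs the masked ULONG read above.
template <typename PIXEL>
ULONG ReadPixel(LPDIRECTDRAWSURFACE4 surface, INT x, INT y)
{
    DDSURFACEDESC2 sd;
    memset(&sd, 0, sizeof(DDSURFACEDESC2));
    sd.dwSize = sizeof(DDSURFACEDESC2);
    if (surface->Lock(NULL, &sd, DDLOCK_WAIT | DDLOCK_SURFACEMEMORYPTR | DDLOCK_READONLY, NULL) != DD_OK)
        return 0;
    ULONG Color = *(PIXEL *)((BYTE *)sd.lpSurface + x * sizeof(PIXEL) + y * sd.lPitch);
    surface->Unlock(NULL);
    return Color;
}

// Usage: ULONG c = ReadPixel<USHORT>(DDSurface, 48, 48);  // 16-bit mode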

If you want to mask the pixel instead of making different function versions:

To mask the pixel for 8-bit:
ULONG Color = 0x000000FF & *(ULONG*) ( ( (BYTE*) ...
To mask the pixel for 16-bit:
ULONG Color = 0x0000FFFF & *(ULONG*) ( ( (BYTE*) ...
To mask the pixel for 24-bit:
ULONG Color = 0x00FFFFFF & *(ULONG*) ( ( (BYTE*) ...

So you could pass a 32-bit mask to the function depending on which depth you're in. If you already have a GETCOLOR(r,g,b)-type function that computes colors from the bit depth, you can use that instead: GETCOLOR(255,255,255) gives exactly the mask you need.
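
If you don't have a GETCOLOR-style helper handy, the mask falls straight out of the bit depth. A small sketch (PixelMask is a made-up name):

// 8 -> 0x000000FF, 16 -> 0x0000FFFF, 24 -> 0x00FFFFFF, 32 -> 0xFFFFFFFF
ULONG PixelMask(int bitDepth)
{
    if (bitDepth >= 32)
        return 0xFFFFFFFF;           // every bit is pixel data
    return (1UL << bitDepth) - 1;    // sets the low 'bitDepth' bits
}

// Then, inside GetPixel:
// ULONG Color = PixelMask(BitDepth) & *(ULONG*) ( ( (BYTE*) ...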

Sorry if I didn't make sense. Feel free to ask questions.
