sirlemonhead

Offsetting into image data


I've got a bitmap font here with one character per row. Each character is 30 pixels wide and 33 pixels high, and there are 224 of them, so the image is 30 pixels wide and 7392 pixels high (33 × 224). Here's a small sample section of that texture: http://homepage.eircom.net/~duncandsl/sample_of_font.jpg

I want to create a Direct3D texture from this, so I want to restructure the character layout. I've figured out I can fit the characters onto a 450 × 495 image, with 15 characters per row and 15 rows. So I create a blank 512 × 512 D3D texture and fill it entirely with black first, just to ensure I have some padding for the leftover space.

I then copy the characters onto the D3D texture one character at a time, setting an offset into the D3D texture where each character will be drawn. This lets me loop through the source image data as normal. Here's my function code for doing all of this:
LPDIRECT3DTEXTURE8 CreateD3DTallFontTexture(D3DTexture *tex) {

	LPDIRECT3DTEXTURE8 tempTexture = NULL;
	LPDIRECT3DTEXTURE8 destTexture = NULL;

	int height = 495;
	int width = 450;

	int pad_height = 512;
	int pad_width = 512;

//	int size = 30 * 7392;

	if(FAILED(d3d.lpD3DDevice->CreateTexture(pad_width, pad_height, 0, D3DUSAGE_DYNAMIC, D3DFMT_R5G6B5, D3DPOOL_SYSTEMMEM, &tempTexture))){
		return NULL;
	}

	if(FAILED(d3d.lpD3DDevice->CreateTexture(pad_width, pad_height, 0, D3DUSAGE_DYNAMIC, D3DFMT_R5G6B5, D3DPOOL_DEFAULT, &destTexture))){
		return NULL;
	}

	D3DLOCKED_RECT lock;
	
	if(FAILED(tempTexture->LockRect(0, &lock, NULL, D3DLOCK_DISCARD ))){
		return NULL;
	}

	unsigned short *destPtr;
	unsigned char *srcPtr;
	
	srcPtr = (unsigned char *)tex->buf;

//	D3DCOLOR pad_colour = D3DCOLOR_XRGB(120,120,120);
	D3DCOLOR pad_colour = D3DCOLOR_XRGB(0,0,0);

	// let's pad the whole thing black first
	for (int y = 0; y < pad_height; y++)
	{
		destPtr = ((unsigned short *)(((unsigned char *)lock.pBits) + y*lock.Pitch));

		for (int x = 0; x < pad_width; x++)
		{
			// extract each channel from the packed D3DCOLOR (0x00RRGGBB), then
			// >>3 keeps 5 bits for red and blue in a 16 bit texture, >>2 keeps 6 for green
				*destPtr =	((((pad_colour>>16)&0xFF)>>3)<<11) | // R
					((((pad_colour>>8)&0xFF)>>2)<<5 ) | // G
					(((pad_colour)&0xFF)>>3); // B
				
			destPtr+=1;
		}
	}

	for (int i = 0; i < 224; i++) {

		int row = i / 15; // get row 
		int column = i % 15; // get column from remainder value

		int offset = (column * 30) + ((row * 33) * lock.Pitch);

		for (int y = 0; y < 33; y++) {

			destPtr = ((unsigned short *)(((unsigned char *)lock.pBits + offset) + y*lock.Pitch));

			for (int x = 0; x < 30; x++) {
				*destPtr =	((srcPtr[0]>>3)<<11) | // R
					((srcPtr[1]>>2)<<5 ) | // G
					((srcPtr[2]>>3)); // B

				destPtr+=1;
				srcPtr+=4;
			}
		}
	}

	tempTexture->UnlockRect(0);

	if(FAILED(d3d.lpD3DDevice->UpdateTexture(tempTexture, destTexture))) {
		OutputDebugString("UpdateTexture failed \n");
	}

	tempTexture->Release();
	tempTexture = NULL;

	return destTexture;
}

So my problem is that this doesn't quite work: I'm getting overlapping characters on my final D3D image. Here's what I get if I draw the bitmap font image to the screen: http://homepage.eircom.net/~duncandsl/bmp_font.jpg

There's stuff drawn underneath it that makes things look a little unclear (ignore the garbage after the letter C), but you'll see the problem where the numbers are, and in the letters (particularly 'M', which is badly cut off).

So I'm guessing I'm not calculating my offset correctly? I haven't taken into account the 62 pixels of pad space (the 512 'real' width minus the 450 actual width), but I'm not sure how to go about this. I have other functions to create padded textures where I don't take the pad space into account and they come out OK. What am I missing? Thanks guys :)

Sorry if I'm missing something, but why are you using a pointer to a char as the source pointer and a pointer to a short as the dest pointer? In other words, why are you not using the same type? Also, you advance the src pointer by 4, but shouldn't it be advanced by 2?

BTW, you should really put the code for constructing a pixel from RGB values into a separate function or macro. Typing it each time is just asking for trouble.

You should also replace the magic numbers with named constants, so that it's harder to accidentally put one number where another should be (and for the other usual reasons).

Sorry, forgot to say originally: my source data is a 32-bit image, and I'm using a 16-bit image as my destination image (i.e. my D3D texture).

I'm advancing by 4 for all my image functions, so I'm sure that's correct for my source image data.

What do you mean by magic numbers? (edit: oh, you mean my '30' and '33' values and whatnot? Yeah, good point)

Another couple of points:

1. You can use D3DXSaveSurfaceToFile to check that the texture is generated as you expect.

2. Why on earth are you creating two textures like that? You should really be creating one texture in the managed pool, unless you're going to be updating the texture frequently (i.e. locking it more than once every half second or so). Unless you have a good reason otherwise (e.g. you're creating a render target), all your non-dynamic textures should be in the managed pool.

EDIT: Also, you're not cleaning up resources properly if any part of that function fails. For instance, if you fail to create destTexture, you don't Release() tempTexture.
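Following that advice, the two-texture approach could collapse into something like the sketch below. This is untested and assumes the same names as the original code (`d3d.lpD3DDevice`, `pad_width`, `pad_height`); note it asks for 1 mip level explicitly, and locks with flags 0 because D3DLOCK_DISCARD is only valid on dynamic textures:

```cpp
// One managed texture: D3D keeps its own system-memory copy for locking
// and uploads it to video memory as needed, so no UpdateTexture() is required.
LPDIRECT3DTEXTURE8 destTexture = NULL;
if (FAILED(d3d.lpD3DDevice->CreateTexture(pad_width, pad_height, 1, 0,
                                          D3DFMT_R5G6B5, D3DPOOL_MANAGED,
                                          &destTexture))) {
	return NULL;
}

D3DLOCKED_RECT lock;
if (FAILED(destTexture->LockRect(0, &lock, NULL, 0))) {
	destTexture->Release(); // clean up on failure, per the point above
	return NULL;
}

// ... fill the pixels exactly as before ...

destTexture->UnlockRect(0);
return destTexture;
```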

Yeah, I've tried getting the D3DX into this project before, but I get lots of conflicts with the d3dxmaths.ini (redefinition of 'operator *' and other stuff)

So I don't need to create two textures; I can write my data straight onto a managed-pool texture? Cool :) I wasn't aware I could do the locking any other way and then have D3D use it on primitives.

Quote:
Original post by sirlemonhead
Yeah, I've tried getting the D3DX into this project before, but I get lots of conflicts with the d3dxmaths.ini (redefinition of 'operator *' and other stuff)
That sounds pretty suspicious. You should just be able to #include <d3dx9.h> and add d3dx9.lib to your project settings. You might want to start another thread about that. You're not using VC6, are you? That would certainly be an extremely bad thing and would explain it.
Alternatively, you could try adding this at the top of the file you want to use D3DXSaveTextureToFile() in (just noticed I said / linked to the wrong function last time):

typedef enum _D3DXIMAGE_FILEFORMAT
{
	D3DXIFF_BMP = 0,
	D3DXIFF_JPG = 1,
	D3DXIFF_TGA = 2,
	D3DXIFF_PNG = 3,
	D3DXIFF_DDS = 4,
	D3DXIFF_PPM = 5,
	D3DXIFF_DIB = 6,
	D3DXIFF_HDR = 7, // high dynamic range formats
	D3DXIFF_PFM = 8, //
	D3DXIFF_FORCE_DWORD = 0x7fffffff
} D3DXIMAGE_FILEFORMAT;

HRESULT WINAPI D3DXSaveTextureToFileA(LPCSTR pDestFile, D3DXIMAGE_FILEFORMAT DestFormat, LPDIRECT3DBASETEXTURE9 pSrcTexture, PALETTEENTRY* pSrcPalette);

#pragma comment(lib, "d3dx9.lib")


That should avoid including the entire d3dx9.h header (Although it's very hacky, and I wouldn't recommend keeping it there).

Quote:
Original post by sirlemonhead
So I don't need to create two textures, I can write my data straight onto a managed pool texture? cool :) I wasn't aware I could do the locking any other way, and then have d3d use it on primitives.
Correct. When you create a managed texture, D3D internally creates two textures, one in system memory and one in the default pool. It'll then lock the system memory texture when you ask it to, and update the default pool texture when needed (Which is pretty much what your code does). D3D will also swap the texture in and out of video memory as required, so you could create 400MB of resources on a card with only 256MB VRAM - Although if you use more resources in one frame than can fit into VRAM you're in for some severe performance penalties.

No, I'm using VS2005, but I do have VC 6.0 on the system. Windows shouldn't know of it though, as I just copied its folder over from an old Windows install, so the registry shouldn't know about it.

Saying that though, I tried the code you posted to get that D3DX function working, and the linker is telling me it can't find libci.lib, which I believe is a VC 6.0 file?

edit: never mind, apparently I can just tell the linker to ignore this..

I'm using the DX 8.1 SDK by the way, Steve. I need to for eventually moving this code onto an Xbox.

Oh, and what should my CreateTexture() function look like?

I tried:

if(FAILED(d3d.lpD3DDevice->CreateTexture(pad_width, pad_height, 0, NULL, D3DFMT_R5G6B5, D3DPOOL_MANAGED, &destTexture))){
	//tempTexture->Release();
	return NULL;
}

But all I get is a white quad when I try to render that texture.

Cheers for the help Steve :)

Quote:
Original post by sirlemonhead
Saying that though, I tried the code you posted to get that D3DX function working, and the linker is telling me it can't find libci.lib, which I believe is a vc 6.0 file?

edit: never mind, apparently I can just tell the linker to ignore this..
Are you linking to any other libs that aren't from the platform SDK? I've had all sorts of problems using libs that were compiled against VC6.

Quote:
Original post by sirlemonhead
oh, and what should my CreateTexture() function look like?

I tried:

if(FAILED(d3d.lpD3DDevice->CreateTexture(pad_width, pad_height, 0, NULL, D3DFMT_R5G6B5, D3DPOOL_MANAGED, &destTexture))){
	//tempTexture->Release();
	return NULL;
}

But all I get is a white quad when I try to render that texture.

Cheers for the help Steve :)
That looks correct to me. Are you using the Debug Runtimes? Are any of your functions failing? Have you filled the texture properly (should be exactly the same as what you currently have though: just Lock() it, fill it and Unlock() it)?


if(FAILED(d3d.lpD3DDevice->CreateTexture(pad_width, pad_height, 0, NULL, D3DFMT_R5G6B5, D3DPOOL_MANAGED, &destTexture))){
	//tempTexture->Release();
	return NULL;
}

D3DLOCKED_RECT lock;

if(FAILED(destTexture->LockRect(0, &lock, NULL, D3DLOCK_DISCARD ))){
	return NULL;
}



Doing that, and filling as before. If you didn't see my previous edit, I'm using the DX 8.1 SDK (sorry, should have mentioned that once we got DX specific).

Debugging is on full; nothing is failing and nothing out of the ordinary appears in the debug window.
