Archived

This topic is now archived and is closed to further replies.

Directx8 Locked surfaces/textures


Hey all. Right, I'll cut to the chase: I have an image which is 96 x 96 pixels, stored as a .tga file. I load this in using D3DXCreateTextureFromFileInMemoryEx() and get a texture, requesting a format of A8R8G8B8. Then a bit later on (before I actually draw it), I lock the texture (requesting level 0) and write a black dot to it (4 zero bytes).

The first black dot does indeed appear in the top left. However, if I put a black dot at pixel 48 (byte offset 48 * 4), it does not appear anywhere near half-way across the image! In fact, a pixel that is supposed to be at the top right of the image actually shows up 3/4 of the way across.

It's like DirectX loaded my file and decided that 96x96 just wasn't good enough, so it internally upsized the texture!!! I am aware that it may be stored as a 128x128 surface, but it shouldn't be stretching the 96x96 to fill the extra space, should it? Hmmm! Comments?

Oh... not sure if this is your problem, but you might wanna check it. This is how D3D stores images in memory:


  
XXXXXXXXXXX----
XXXXXXXXXXX----
XXXXXXXXXXX----
XXXXXXXXXXX----
XXXXXXXXXXX----


Where "X" = image data and "-" = wasted memory. But that's just the way it is.

So, when you lock your surface, you get back a structure describing the locked memory (D3DLOCKED_RECT in D3D8), right?

Well, between that and the surface description you have Width, Height, and Pitch (not sure of the exact names).

Width/Height refer to the actual image (the X's). The Pitch, however, is the length of a row in bytes, including both the X's and the -'s. So when you calculate the byte offset of a pixel (x, y) in a 32-bit format like A8R8G8B8, you do it like so:

offset = (y * Pitch) + (x * 4)

Use "Pitch" instead of "Width", and remember that x has to be scaled by the pixel size in bytes.

BTW that happens with BMP images too, when you load them with the GDI.

Oh and BTW, no, it doesn't have to be stored as 128x128. Graphics cards have different capabilities. Really old cards could only handle square textures with power-of-two sides, starting at 64x64: 64x64, 128x128, 256x256 (e.g. Voodoo 2). Then some cards went up to 512x512 and 1024x1024.

All modern high-end cards, however, support textures of pretty much any size (the cap was 4096x4096 for the GeForce3, I believe) and non-square dimensions too.

Hope that helps. What kind of graphics card have you got?

------------------------
CRAZY_DUSIK* pCrazyDuSiK = new CRAZY_DUSIK;
pCrazyDuSiK->EatMicroshaft(MS_MUNCH_BILL_GATES | MS_CHEW_BILL_GATES);

Thanks for your comments, but I have already solved it! It always happens this way: you can't figure it out, so you post a message asking people for help, and then BAM! you figure it out.

Here's what was happening, if you're interested:

Original image (96 x 96):
XXXXXX
XXXXXX
XXXXXX
XXXXXX


When DirectX loaded it, it wouldn't create a 96x96 area of memory (I have a TNT2), so it made it 128 across. BUT when it loaded in the 96 pixels, it DIDN'T fill up the remainder with blank space; it stretched the texture to fit 128 across!

So after loading it was (128 texels across!!):
XXXXXX000
XXXXXX000
XXXXXX000
XXXXXX000

But the '0's were not blank: it stretched the texture to fill the full 128 pixels. I found out that this is the default behaviour of the D3DX_DEFAULT filter, so I had to specify D3DX_FILTER_NONE to stop it from stretching the image.
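If it helps, the two behaviours can be sketched in plain C++ (no D3DX; both helpers are illustrative, and I'm assuming simple nearest-neighbour sampling for the stretch, whereas the real default filter does proper filtering):

```cpp
#include <vector>

// Roughly what the default filter did: resample the 96-texel source row
// across all 128 destination texels (nearest-neighbour here).
std::vector<int> StretchRow(const std::vector<int>& src, int dstWidth)
{
    std::vector<int> dst(dstWidth);
    for (int x = 0; x < dstWidth; ++x)
        dst[x] = src[x * static_cast<int>(src.size()) / dstWidth];
    return dst;
}

// What D3DX_FILTER_NONE gave: the 96 texels copied verbatim, with the
// remaining columns left as padding.
std::vector<int> PadRow(const std::vector<int>& src, int dstWidth, int pad)
{
    std::vector<int> dst(dstWidth, pad);
    for (std::size_t x = 0; x < src.size(); ++x)
        dst[x] = src[x];
    return dst;
}
```

With StretchRow, source texel 48 ends up around destination texel 64 (which is exactly the "3/4 of the way across" symptom above); with PadRow, it stays at texel 48.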

Once that was done, it correctly padded out the extra space with blank data. The problem being it still says the surface is 128 pixels across (which it is), and so the UV co-ordinates map across the whole texture: (0,0) is texel (0,0) and (1,1) is texel (128,128).

So I had to scale my UV co-ordinates by the ratio of the image size to the texture size, so that 1.0 becomes 96/128 = 0.75, which lands on the 96th pixel across.
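That scaling is just a multiply; a tiny sketch (ScaleU is a hypothetical helper, and in practice you'd compute the image/texture ratio once and reuse it):

```cpp
// Map a logical UV coordinate (0..1 over the 96-texel image) into the
// padded texture's UV space (0..1 over 128 texels), so that u = 1.0
// samples texel 96 instead of texel 128.
float ScaleU(float u, int imageSize, int textureSize)
{
    return u * static_cast<float>(imageSize) / static_cast<float>(textureSize);
}
```

For the 96-in-128 case the ratio is exactly 0.75, so u = 1.0 maps to 0.75 and u = 0.5 maps to 0.375.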

Phew! If you understood all that, congrats: you deserve some fancy degree, because if I weren't the one writing it, I wouldn't be able to make sense of it.

hehe... well if you tile a texture you still need to stretch it

I don't use the D3DX functions, so I wrote my own BMP loader, which stretches/shrinks the bitmap if desired.

Then again, by the time I release my engine, I doubt texture dimensions will be a problem on any cards hehe

------------------------
CRAZY_DUSIK* pCrazyDuSiK = new CRAZY_DUSIK;
pCrazyDuSiK->EatMicroshaft(MS_MUNCH_BILL_GATES | MS_CHEW_BILL_GATES);
