maxest

D3DLOCKED_RECT's pitch parameter and locking textures



I read everywhere that D3DLOCKED_RECT's pitch doesn't have to be the same as the locked texture's width. So here's my question: when is pitch != width? Is it, for example, when the texture is non-power-of-two?

Yes, usually only non-power-of-two textures will get padded; however, IMO it is good practice to always use the pitch parameter.

Also, the pitch is the number of bytes in a scanline, so unless you're using an 8-bit texture format, it will not be equal to the width.
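
For example, a copy that respects the pitch looks something like this (just a rough sketch; width, height, src and lockedRect stand in for your own texture size, packed source pixels and lock result, and a 32-bit format is assumed):

const unsigned char *srcRow = src;                      // src: tightly packed source pixels (width * height * 4 bytes)
unsigned char *dstRow = (unsigned char*)lockedRect.pBits;
const unsigned int bytesPerRow = width * 4;             // 4 bytes per pixel for a 32-bit format

for (unsigned int y = 0; y < height; ++y)
{
    memcpy(dstRow, srcRow, bytesPerRow);                // copy one scanline only
    srcRow += bytesPerRow;                              // source rows are tightly packed
    dstRow += lockedRect.Pitch;                         // destination rows may be padded
}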

So if, for example, I use D3DFMT_A8R8G8B8, can I be sure that pitch == width?
I know I should always take the pitch value into account, but it makes the code look worse :)



I don't want to create a new thread, so I will ask about it here...

I have code like this:

bitmapFile bitmap;
loadBitmapFromFileWithAlpha("logo.bmp", &bitmap);

D3DXCreateTexture(device, 256, 512, 1, 0, D3DFMT_A8B8G8R8, D3DPOOL_MANAGED, &texture);

D3DLOCKED_RECT lockedRect;
texture->LockRect(0, &lockedRect, NULL, 0);
unsigned char *data = (unsigned char*)lockedRect.pBits;
memcpy(data, bitmap.data, 256*512*4);
texture->UnlockRect(0);

// bitmap.data contains the data describing the bitmap in the following format: RGBARGBARGBA...

When I copy the data I get the texture with the R and B channels swapped. So I changed the texture format to D3DFMT_A8R8G8B8 and... I got the same result! No matter what format I pass, lockedRect.pBits contains data in BGRA order! I thought that pBits depends on what the texture format is. Does pBits always contain data in BGRA order?

btw: don't tell me to use D3DXCreateTextureFromFile - I know about that. I only want to test locking/unlocking the texture :)

Quote:
Original post by Maxest
So if, for example, I use D3DFMT_A8R8G8B8, can I be sure that pitch == width?
No, you can *NEVER* be sure. The whole reason that field exists is that the pitch can be different from the width. Never assume.

Quote:
Original post by Maxest
I know I should always take the pitch value into account, but it makes the code look worse :)
Make your decision - are you going to be a good programmer or a sloppy programmer? Cutting corners is an unfortunate reality, but choosing to do so out of laziness only looks bad on you. Who are your players/customers going to come crying to when your application comes crashing down due to your sloppy code? (c'mon, you're not even using safe memory copies [wink])


Quote:
Original post by Maxest
So I changed the texture format to D3DFMT_A8R8G8B8 and... I got the same result! No matter what format I pass, lockedRect.pBits contains data in BGRA order! I thought that pBits depends on what the texture format is. Does pBits always contain data in BGRA order?
No, when locking a texture the returned pointer is direct access to the pixel data in whatever format the texture is stored in. There is no conversion or adaptation to or from any other format.
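
To make that concrete: D3DFMT_A8R8G8B8 is a 32-bit value of the form 0xAARRGGBB, and on a little-endian x86 machine that DWORD sits in memory as the byte sequence B, G, R, A - which is why source data you treat as RGBA ends up with red and blue swapped. Roughly (just an illustration, writing a single pixel):

DWORD alpha = 255, red = 255, green = 0, blue = 0;          // some colour, purely for illustration
unsigned char *pixel = (unsigned char*)lockedRect.pBits;    // first pixel of the locked surface

pixel[0] = (unsigned char)blue;     // lowest byte of the 0xAARRGGBB value
pixel[1] = (unsigned char)green;
pixel[2] = (unsigned char)red;
pixel[3] = (unsigned char)alpha;    // highest byte

// or, equivalently, writing the whole 32-bit value at once:
*(DWORD*)lockedRect.pBits = (alpha << 24) | (red << 16) | (green << 8) | blue;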

If you're seeing behaviour like this then it's indicative of an error elsewhere, or some other sloppy/incorrect code you've written. Do the usual checks - debug runtimes, reference rasterizer, return codes, enumeration, etc.

hth
Jack

Quote:

No, when locking a texture the returned pointer is direct access to the pixel data in whatever format the texture is stored in. There is no conversion or adaptation to or from any other format.

You're right. I've just noticed in the DirectX SDK documentation that:
Quote:

D3DFMT_A8R8G8B8 (value 21): 32-bit ARGB pixel format with alpha, using 8 bits per channel.
...
D3DFMT_A8B8G8R8 (value 32): 32-bit ARGB pixel format with alpha, using 8 bits per channel.

That's quite surprising, but it seems that ARGB and ABGR are the same format.

Quote:

No, you can *NEVER* be sure.

Tell me at least, what does it depend on? The graphics card?

Quote:

Make your decision - are you going to be a good programmer or a sloppy programmer? Cutting corners is an unfortunate reality, but choosing to do so out of laziness only looks bad on you. Who are your players/customers going to come crying to when your application comes crashing down due to your sloppy code? (c'mon, you're not even using safe memory copies )

I'll keep that in mind :).
And... what else should I use instead of memcpy? It seems like the simplest and most common method (it's also used by the DX SDK tutorial authors).

If you ask the D3D device to create a texture, it will make it in the format you've asked for, or fail.

When you ask D3DX to create a texture, it will try to make what you've asked for, but will substitute parameters if needed to make it work. That's the whole point of the "X" library. It wraps commonly wanted logic around D3D so you don't have to waste your time handling everything. Your card probably doesn't support A8B8G8R8, and D3DX is changing it behind your back to a pretty much identical format that your card does support. Use GetLevelDesc to find which format the texture actually is.
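
Something like this (a quick sketch, assuming texture is the pointer D3DXCreateTexture gave you back; error checking omitted):

D3DSURFACE_DESC desc;
texture->GetLevelDesc(0, &desc);        // describes the top mip level

if (desc.Format != D3DFMT_A8B8G8R8)
{
    // D3DX substituted another format (most likely D3DFMT_A8R8G8B8),
    // so lay out the bytes you write through pBits accordingly.
}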

As others have said, always use pitch. It may depend on graphics card, driver, formats used, phase of the moon, or how many burgers McDonald's sold on Tuesday. It doesn't matter... because you're not going to hope pitch=width*bytesPerPixel. I wish the debug runtimes would pass a flag to the driver to ensure that pitch was oddly formed just to make people use it correctly. I should suggest it for the REF device.

Because of pitch you can only memcpy a line at a time, not the entire surface. You should also verify that the lock actually succeeded before writing to the pointer.
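
In other words, something along these lines (just a sketch):

D3DLOCKED_RECT lockedRect;
HRESULT hr = texture->LockRect(0, &lockedRect, NULL, 0);
if (FAILED(hr))
{
    // handle the error - don't touch lockedRect.pBits if the lock failed
    return;
}

// ... copy one scanline at a time, stepping by lockedRect.Pitch ...

texture->UnlockRect(0);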

I changed the way I create the texture to something like this:

device->CreateTexture(256, 512, 1, 0, D3DFMT_A8B8G8R8, D3DPOOL_MANAGED, &texture, NULL);

And... the program just crashed at startup. So I switched the format to D3DFMT_A8R8G8B8 and now everything's OK.
I would never have expected that my GeForce 6600 doesn't support the D3DFMT_A8B8G8R8 format! Thanks for clearing that up :)

And I will try to remember to check for all the errors that may occur, as you've all told me to :). Thanks a lot for all of your advice and explanations.

Quote:
Original post by Namethatnobodyelsetook
As others have said, always use pitch. It may depend on graphics card, driver, formats used, phase of the moon, or how many burgers McDonald's sold on Tuesday. It doesn't matter... because you're not going to hope pitch=width*bytesPerPixel. I wish the debug runtimes would pass a flag to the driver to ensure that pitch was oddly formed just to make people use it correctly. I should suggest it for the REF device.

Because of pitch you can only memcpy a line at a time, not the entire surface. You should also verify that the lock actually succeeded before writing to the pointer.
To expand on this, the graphics card is free to store any information it wants at the end of a scanline. It could use that space for extra data (random example: perhaps if it was an alpha-only texture and a scanline was all zero alpha, the card could store a flag at the end of that scanline so it can optimize that line somehow), or for padding (what if the graphics card needs texture scanlines padded to a 32-byte boundary or something?).

Also, although you can't in general memcpy() the whole surface, you can, if you wish, check whether the pitch is equal to the width times the bytes per pixel, and if so memcpy() the entire surface; otherwise you need to do it line by line. I do this, but I doubt the performance difference is really worth it (memcpy is pretty optimised already).
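
Something like this (rough sketch; width, height and src are placeholders for your own texture size and packed source data, 32-bit format assumed):

const unsigned int bytesPerRow = width * 4;             // 4 bytes per pixel

if ((unsigned int)lockedRect.Pitch == bytesPerRow)
{
    // no padding - one big copy is safe
    memcpy(lockedRect.pBits, src, bytesPerRow * height);
}
else
{
    // padded rows - copy one scanline at a time
    unsigned char *dst = (unsigned char*)lockedRect.pBits;
    for (unsigned int y = 0; y < height; ++y)
    {
        memcpy(dst, src + y * bytesPerRow, bytesPerRow);
        dst += lockedRect.Pitch;
    }
}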

Quote:
Original post by Maxest
I changed the way I create the texture to something like this:

device->CreateTexture(256, 512, 1, 0, D3DFMT_A8B8G8R8, D3DPOOL_MANAGED, &texture, NULL);

And... the program just crashed at startup. So I switched the format to D3DFMT_A8R8G8B8 and now everything's OK.
I would never have expected that my GeForce 6600 doesn't support the D3DFMT_A8B8G8R8 format! Thanks for clearing that up :)

And I will try to remember to check for all the errors that may occur, as you've all told me to :). Thanks a lot for all of your advice and explanations.
You should also be using the debug runtimes with the debug output level set to maximum; they'll spit out a lot more information and warnings that will help, and they'll give detailed information about errors. For example you might get: "D3D (ERROR): D3DFMT_A8B8G8R8 is not a supported texture format, CreateTexture fails."
