Texture Lock, buffer write, no effect


I'm locking a texture and writing data to the buffer, with no access problems. But when I unlock the texture and render, it's solid white. After testing a bit, I realized it was already solid white before I locked it, so the lock and write don't seem to have any effect. I've tried rendering the texture onto objects, and as an image with ID3DXSprite (to verify that there were no lighting problems). Even worse, the texture is in system memory. I don't understand what I'm doing wrong.

Does anyone know off-hand what this might be? There wouldn't happen to be some sort of UpdateTextureDirtyRegions or EmptyBuffers type function needed to make my lock take effect?

I'll include the buffer-write code, but I'm pretty sure this part is okay: http://www.rafb.net/paste/results/8U5hzw61.html I had to move some things around to make the code make sense, so let me know if I messed that up. I've tried changing the lock flags around, but that didn't help. Thanks for any advice.
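
In case the paste link dies, here's a rough sketch of the general pattern I'm using (not my exact code; texture, width, and height are placeholders, and the format is A8R8G8B8):

D3DLOCKED_RECT lr;
if ( SUCCEEDED( texture->LockRect( 0, &lr, NULL, 0 ) ) )
{
    for ( UINT y = 0; y < height; ++y )
    {
        // Pitch is in bytes and can be wider than width * 4, so step rows by Pitch
        DWORD* row = reinterpret_cast<DWORD*>( static_cast<BYTE*>( lr.pBits ) + y * lr.Pitch );
        for ( UINT x = 0; x < width; ++x )
            row[x] = D3DCOLOR_ARGB( 255, 255, 0, 0 ); // opaque red test fill
    }
    texture->UnlockRect( 0 );
}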

How are you creating your texture?

Below is a function I have that creates and returns a texture that fades from one colour to another.


namespace DXGTF
{
    void VerticalFade( const DXGDevice& device,
                       const D3DXVECTOR3& colTop,
                       const D3DXVECTOR3& colBot,
                       const size_t width,
                       const size_t height,
                       DXGTexture& out )
    {
        DWORD* pixels;
        DWORD finalColour;
        D3DXVECTOR3 finalColourVec;
        size_t count = 0;

        // Validate the device
        assert( device.Get() != 0 );

        pixels = new DWORD[ width * height ];

        // First of all create a texture to copy to the output
        D3DXCreateTexture( device.Get(),
                           static_cast<UINT>( width ),
                           static_cast<UINT>( height ),
                           1,
                           0,
                           D3DFMT_A8R8G8B8,
                           D3DPOOL_MANAGED,
                           out.Address() );

        for ( size_t row = 0; row < height; ++row )
        {
            for ( size_t col = 0; col < width; ++col )
            {
                // Weight each colour by the row's distance from the top/bottom
                float topWeighting = static_cast<float>( row ) / static_cast<float>( height );
                float botWeighting = 1.0f - topWeighting;

                finalColourVec.x = ( topWeighting * colTop.x ) + ( botWeighting * colBot.x );
                finalColourVec.y = ( topWeighting * colTop.y ) + ( botWeighting * colBot.y );
                finalColourVec.z = ( topWeighting * colTop.z ) + ( botWeighting * colBot.z );

                finalColour = RGBToARGBDWORD( finalColourVec );

                pixels[ count ] = finalColour;
                ++count;
            }
        }

        WriteToTexture( out, pixels, width * height );

        delete[] pixels; // don't leak the staging buffer
    }
}



Hope this gives you some idea of how I do things, at least.

Dave

It looks very similar:
IDirect3DTexture9* texture = NULL;
Device->CreateTexture( EngineData.Display.Size.X,
                       EngineData.Display.Size.Y,
                       1, 0, D3DFMT_A8R8G8B8,
                       D3DPOOL_SYSTEMMEM, &texture, NULL );


There are no errors during the creation.

I threw together some code to lock the texture surface and read certain pixels from it right before I draw it, and the pixel colors are exactly what I wrote into the buffer.

I'm confused. I'm drawing other textures right next to this code. In fact, it looks like this:

texture_that_doesnt_work->Draw();
textures_that_do_work->Draw();

And the result is still pure white for the texture I wrote to. Are there other texture properties that I need to set after I create a texture? Does anyone know why it only renders white, even though I can lock it and see that the color is not white? If I modulate the texture with a light grey, the result is pure light grey.

Any stabs at an answer would be appreciated.

Pure white is what you get when no texture is actually set, on either NVIDIA or ATI cards (I don't remember which; I think NVIDIA).

Are you getting anything to the debug output? What does the program show when running with the reference device?

What I'd suggest is using a managed texture instead of sysmem, and seeing what happens. Have you made sure (using the caps) that the card supports rendering from sysmem textures?
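
Checking the caps would be something along these lines (a sketch; D3DDEVCAPS_TEXTURESYSTEMMEMORY is the relevant flag):

D3DCAPS9 caps;
Device->GetDeviceCaps( &caps );
if ( caps.DevCaps & D3DDEVCAPS_TEXTURESYSTEMMEMORY )
{
    // the device can use textures that live in system memory
}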

Quote:
Original post by ET3D
Pure white is what you get when no texture is actually set, on either NVIDIA or ATI cards (I don't remember which; I think NVIDIA).

Are you getting anything to the debug output? What does the program show when running with the reference device?

What I'd suggest is using a managed texture instead of sysmem, and seeing what happens. Have you made sure (using the caps) that the card supports rendering from sysmem textures?

Apparently it doesn't support rendering from system memory textures. Thanks for the heads up. But why does D3DPOOL_MANAGED work? D3DPOOL_DEFAULT doesn't, because you can't lock a default-pool texture unless it's dynamic.
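
For the record, the dynamic route would be roughly this (a sketch; it assumes the card reports D3DCAPS2_DYNAMICTEXTURES):

IDirect3DTexture9* dynTexture = NULL;
Device->CreateTexture( EngineData.Display.Size.X,
                       EngineData.Display.Size.Y,
                       1, D3DUSAGE_DYNAMIC, // makes the DEFAULT-pool texture lockable
                       D3DFMT_A8R8G8B8,
                       D3DPOOL_DEFAULT, &dynTexture, NULL );

D3DLOCKED_RECT lr;
dynTexture->LockRect( 0, &lr, NULL, D3DLOCK_DISCARD ); // discard old contents
// ... write pixels through lr.pBits, stepping rows by lr.Pitch ...
dynTexture->UnlockRect( 0 );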

I've always thought that D3DPOOL_MANAGED meant video memory data backed up in system memory. I normally wouldn't need that, since my data is stored on the hard drive and can be reloaded when bad things happen.

Could the reason managed works be something it does in the background, like updating a system memory surface and then transferring that to a video memory version? I actually planned to do that myself; I just didn't expect my card not to support rendering from system memory textures. I didn't realize that was an issue.
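
If I understand it right, the manual version of that would look something like this (a sketch using UpdateTexture; sysTexture would be a D3DPOOL_SYSTEMMEM texture and vidTexture a D3DPOOL_DEFAULT one with matching size, format, and level count):

// write pixels into sysTexture via LockRect/UnlockRect as before, then:
Device->UpdateTexture( sysTexture, vidTexture );
// render with vidTexture, not sysTexture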

I really appreciate your extremely accurate stab at my problem.

In case anyone else runs into this: the API provides IDirect3DDevice9::UpdateSurface to do exactly what I've been needing to do since I started this thread.
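
A rough usage sketch, for anyone searching later (sysTexture in D3DPOOL_SYSTEMMEM, vidTexture in D3DPOOL_DEFAULT):

IDirect3DSurface9 *src = NULL, *dst = NULL;
sysTexture->GetSurfaceLevel( 0, &src ); // system memory source
vidTexture->GetSurfaceLevel( 0, &dst ); // video memory destination
Device->UpdateSurface( src, NULL, dst, NULL ); // copy the whole surface
src->Release();
dst->Release();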

Appreciate the help.
