rlange

SlimDX - Texture from stream causes FPS drop



I'm using EasyHook to attach myself right before EndScene is called. The code is pretty self-explanatory and contained within my EndScene hook function. When I run this, the FPS drops from 60 to 30. Any ideas on how to optimize this code?



using (Surface backBuffer = device.GetBackBuffer(0, 0))
{
    // GetRenderTargetData needs a lockable system-memory surface matching
    // the back buffer's size and format as its destination.
    using (Surface renderTarget = Surface.CreateOffscreenPlain(device,
        backBuffer.Description.Width, backBuffer.Description.Height,
        backBuffer.Description.Format, Pool.SystemMemory))
    {
        // Copy the back buffer contents from the GPU to the CPU.
        device.GetRenderTargetData(backBuffer, renderTarget);

        // Encode the cropped region to a DDS stream, then load it back as a texture.
        using (Texture texture = Texture.FromStream(device,
            Surface.ToStream(renderTarget, ImageFileFormat.Dds, new Rectangle(1620, 910, 289, 285))))
        using (SlimDX.Direct3D9.Sprite sprite = new SlimDX.Direct3D9.Sprite(device))
        {
            sprite.Begin(SpriteFlags.AlphaBlend);
            sprite.Draw(texture, null, new SlimDX.Vector3(610, 280, 0), new SlimDX.Color4(0.4f, 1, 1, 1));
            sprite.End();
        }
    }
}

I should mention that the slowdown comes from creating the texture. Maybe I could create it directly from the render target's memory? I'm not sure how I'd go about doing that. Could someone provide a sample?


Texture texture = Texture.FromStream(device, Surface.ToStream(renderTarget, ImageFileFormat.Dds, new Rectangle(1620, 910, 289, 285)))
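
One possible direction, sketched here rather than taken from the thread: copy the sub-rectangle GPU-side with Device.StretchRectangle instead of reading it back and re-encoding it. This assumes the device supports StretchRect from the back buffer to a texture surface, and overlayTexture is a hypothetical render-target texture created once at startup, not per frame.

// Hypothetical one-time setup: a render-target texture sized to the cropped region.
Texture overlayTexture = new Texture(device, 289, 285, 1,
    Usage.RenderTarget, Format.A8R8G8B8, Pool.Default);

// Per frame: copy the sub-rectangle entirely on the GPU; no readback, no DDS encode.
using (Surface backBuffer = device.GetBackBuffer(0, 0))
using (Surface overlaySurface = overlayTexture.GetSurfaceLevel(0))
{
    device.StretchRectangle(backBuffer, new Rectangle(1620, 910, 289, 285),
        overlaySurface, new Rectangle(0, 0, 289, 285), TextureFilter.None);
}

overlayTexture could then be drawn with the sprite exactly as before.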

You're not going to be able to optimize this code, because the code itself is not the cause of your performance drop. Pulling data back from the GPU to the CPU is an inherently slow operation. Your bandwidth in that direction may be limited if you have an older GPU (a hardware limitation, nothing to do with code), and even if it isn't, the copy forces the GPU to stall and perform a GPU/CPU synchronization.

Creating another texture from that data just exacerbates the problem: you're round-tripping data from the GPU to the CPU and then back to the GPU again, whereas what you need is to rearchitect your renderer so that everything stays on the GPU all the time. There is a further problem in that you're creating the new texture using DXT compression, meaning the data also has to go through a compression step when the new texture is created.

So you'll need to do this in a completely different way. See this thread for a discussion of almost the very same problem you're having (and note my reply near the end).

In summary (and bearing in mind that I don't know SlimDX, so I'm going to be talking in some native D3D speak here):
  • Create a texture with render target usage during startup. It should be the same size as your back buffer.
  • At the beginning of the scene, use GetRenderTarget to save out the current back buffer surface.
  • Obtain the surface interface for the render target texture (using GetSurfaceLevel in native D3D), then SetRenderTarget to that surface.
  • Draw the scene as normal.
  • SetRenderTarget back to the original back buffer surface (then Release the two surface interfaces you hold, as GetRenderTarget and GetSurfaceLevel will have incremented their reference counts).
  • Draw the render target texture as a fullscreen quad.

Again, sorry for not being able to provide actual code here (like I said, I don't know SlimDX, so I can't), but I've tried to keep this in terms you should be able to translate across, perhaps with some help from the SlimDX documentation; a rough translation is sketched below.
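
A minimal SlimDX sketch of those steps, under assumptions: device is the D3D9 device, width and height match the back buffer, and DrawScene and DrawFullscreenQuad are placeholders for the application's own rendering.

// Startup: a render-target texture the same size as the back buffer.
Texture sceneTexture = new Texture(device, width, height, 1,
    Usage.RenderTarget, Format.A8R8G8B8, Pool.Default);

// Per frame:
using (Surface backBuffer = device.GetRenderTarget(0))         // save the current back buffer
using (Surface sceneSurface = sceneTexture.GetSurfaceLevel(0))
{
    device.SetRenderTarget(0, sceneSurface);   // redirect rendering into the texture
    DrawScene();                               // draw the scene as normal

    device.SetRenderTarget(0, backBuffer);     // restore the original back buffer
    DrawFullscreenQuad(sceneTexture);          // draw the texture as a fullscreen quad
}
// Disposing the two surfaces here mirrors the Release() calls in native D3D.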

