JoeMM

How to manage latency when filling buffers


Recommended Posts

Hi,

 

I'm having a problem with my shadows in D3D that I can only assume is fairly common. I have a lot of experience in console development, but I'm not so hot on D3D...

 

The problem is that D3D seems to be introducing some sort of buffering into the filling of my structured buffers. The light view and projection matrices I use when generating the shadow map (read via constant buffers) are what I expect, but they don't match the values used when sampling the shadow map (read from a structured buffer of light properties). So as soon as the light moves (and/or the camera, for cascades) the shadow wobbles, because the light position (etc.) used for the shadow map is a few frames ahead of what's in the structured buffer.

 

I can "work around" this by adding latency into the values I put in the constant buffer, but that's not a very robust solution...

 

It's quite possible I've done something stupid, as this is nothing fancy - the only slightly unusual thing is that my light data is read from a structured buffer during the lighting pass.

 

I've tried N-buffering my structured buffers to see if that helps remove any internal buffering but to no avail. In both cases (constant buffer and structured buffer) I'm updating the buffer with UpdateSubresource().
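For context, here's a minimal sketch of the update path described above, with hypothetical names (LightProps, the buffer pointers, and UpdateLightBuffers are illustrative, not the poster's actual code). Both the constant buffer and the structured buffer are filled from the same CPU-side light data via UpdateSubresource:

```cpp
#include <d3d11.h>

// Hypothetical per-light data; the real struct would hold the light
// view/projection matrices, position, etc.
struct LightProps
{
    float viewProj[16];
    float position[4];
};

void UpdateLightBuffers(ID3D11DeviceContext* ctx,
                        ID3D11Buffer* lightCB,   // constant buffer
                        ID3D11Buffer* lightSB,   // structured buffer
                        const LightProps* lights,
                        UINT lightCount)
{
    // Constant buffers must be updated whole: pDstBox must be NULL.
    ctx->UpdateSubresource(lightCB, 0, nullptr, &lights[0], 0, 0);

    // Structured buffer: same call. Both updates are queued on this
    // context, so draws issued later on the same context should see
    // the new data.
    ctx->UpdateSubresource(lightSB, 0, nullptr, lights,
                           lightCount * sizeof(LightProps), 0);
}
```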

 

How do I control / remove / manage this latency?

Any thoughts much appreciated..! 

 

Cheers,

Joe


There's no latency when updating resources in D3D; data lifetime is completely tied to D3D commands. Once you've issued a command to update a resource, any commands following the update should have the new data visible to the GPU. UpdateSubresource can be suboptimal in terms of performance (when the GPU is still using the resource, the runtime will copy the data into the command buffer first, to avoid stomping on data the GPU is currently reading from), but it should still work fine in terms of data visibility. I would try using PIX or the VS graphics debugger to step through the commands for a frame and inspect your buffer contents, to see where things are going wrong.
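As an aside on the performance point above: the common alternative for data rewritten every frame is a D3D11_USAGE_DYNAMIC buffer updated with Map and WRITE_DISCARD, which lets the driver hand back fresh memory rather than copying through the command buffer. A minimal sketch (lightSB, lights, and lightCount are assumed to exist in scope):

```cpp
#include <d3d11.h>
#include <cstring>

// Assumes lightSB was created with D3D11_USAGE_DYNAMIC and
// D3D11_CPU_ACCESS_WRITE.
void UpdateLightsDynamic(ID3D11DeviceContext* ctx,
                         ID3D11Buffer* lightSB,
                         const void* lights, size_t byteSize)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    // WRITE_DISCARD tells the driver the old contents are dead, so it
    // can "rename" the buffer instead of stalling on the GPU.
    if (SUCCEEDED(ctx->Map(lightSB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, lights, byteSize);
        ctx->Unmap(lightSB, 0);
    }
}
```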


I got to the bottom of this in the end, sort of.

 

To support generating the GPU command list from multiple threads, I use one main immediate D3D context, which executes command lists recorded on multiple child deferred contexts. The deferred contexts are built from various threads, and the immediate context doesn't do much other than kick off the deferred contexts' command lists.

 

The latency was introduced by the fact that I was calling UpdateSubresource from a deferred context. Once I changed this to use the immediate context, all was well. I can't say I fully understand why it wasn't OK to do this from a deferred context, or how it managed to introduce three frames of latency, but I am aware there are restrictions on deferred contexts, so fair enough...! Unfortunately I don't have time to investigate further right now.
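A sketch of the resulting frame structure, with hypothetical names (RenderFrame and the parameters are illustrative): per-frame buffer updates go on the immediate context, before the recorded command lists are executed. Note that on a deferred context, UpdateSubresource can't see the live resource, so the runtime must stash a copy of the source data and apply it only when the command list is executed, which is where ordering surprises like this can creep in.

```cpp
#include <d3d11.h>

void RenderFrame(ID3D11DeviceContext* immediate,
                 ID3D11CommandList* const* lists, UINT listCount,
                 ID3D11Buffer* lightSB,
                 const void* lights, UINT byteSize)
{
    // Update on the IMMEDIATE context, not a deferred one: the update is
    // applied right away in GPU command order, ahead of the lists below.
    immediate->UpdateSubresource(lightSB, 0, nullptr, lights, byteSize, 0);

    // Command lists were recorded earlier on deferred contexts via
    // FinishCommandList(); FALSE means don't restore context state.
    for (UINT i = 0; i < listCount; ++i)
        immediate->ExecuteCommandList(lists[i], FALSE);
}
```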

 

Hope this helps someone else..!

Joe

