Why call DiscardResource?


This is most useful in conjunction with IDXGIAdapter3::QueryVideoMemoryInfo and the other IDXGIAdapter3 functions that deal with the memory budget. When you are working within a memory budget, and other applications can lead Windows to ask you to reduce your footprint, Discard is a tool that lets you say: "I do not need this resource at the moment, but I may need it later. If you need the memory, you can take it back; if you do not, I would rather you left it alone."
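As a rough sketch, polling that budget might look like this (assuming you already hold a valid IDXGIAdapter3 obtained elsewhere; error handling omitted, and this needs a Windows/DXGI 1.4 environment to build):

```cpp
#include <dxgi1_4.h>

// Hedged sketch: query the OS-assigned budget for local (GPU) memory
// and report whether we are over it. 'adapter' is assumed valid.
bool IsOverBudget(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (SUCCEEDED(adapter->QueryVideoMemoryInfo(
            0,                                // NodeIndex (single GPU)
            DXGI_MEMORY_SEGMENT_GROUP_LOCAL,  // dedicated video memory
            &info)))
    {
        // If our usage exceeds the budget, it is time to start
        // trimming, e.g. by giving idle resources back to the OS.
        return info.CurrentUsage > info.Budget;
    }
    return false;
}
```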


This is an improvement over Destroy/Create, which can give the driver a harder time managing allocations. It may also skip some lifetime management, and it preserves knowledge of which memory is best for your resource as the driver moves things around based on usage and load.


Galop1n, you're thinking of OfferResources/ReclaimResources.


DiscardResource enables an optimization when rendering to the resource, particularly on tile-based deferred rasterizers. It's essentially saying that you're going to opaquely render to a region of a resource, so there's no need to read in those tiles next time you access it.
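On D3D12, for example, that hint looks roughly like this (a minimal sketch; `cmdList` and `renderTarget` are assumed to exist and the resource is assumed to be in the render-target state, as the API requires):

```cpp
#include <d3d12.h>

// Hedged sketch: tell the driver we will fully overwrite this render
// target, so a tiler need not load its previous contents on-chip.
void BeginFullyOverwrittenPass(ID3D12GraphicsCommandList* cmdList,
                               ID3D12Resource* renderTarget)
{
    // A null region means: discard every subresource in its entirety.
    cmdList->DiscardResource(renderTarget, nullptr);

    // ... bind render targets and draw. Every pixel must be written,
    // since the previous contents are now undefined.
}
```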


The use case you describe here does not work as intended (at least on AMD and NVIDIA). When you do not care about a surface's previous contents, the best option is to clear it, because clearing enables the fast-clear system, which is the only path that actually saves GPU cycles: it uses an extra metadata buffer to track the status of each surface tile, with techniques such as depth-buffer compression, fast-clear elimination, and so on.


Discard is the DX11.1 ancestor of the offer/reclaim system in DX11.2.

Edited by galop1n


Agreed, compression metadata serves a similar purpose, but Discard definitely does not have any implications for memory usage. In fact, even offer/reclaim doesn't really have any impact on memory usage until IDXGIDevice4::OfferResources1 with the ALLOW_DECOMMIT flag.
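For completeness, a hedged sketch of that newer path (assuming an IDXGIDevice4 and an IDXGIResource interface obtained via QueryInterface; error handling omitted):

```cpp
#include <dxgi1_5.h>

// Hedged sketch: offer a resource's memory back to the OS, allowing it
// to actually decommit the pages, then reclaim the resource later.
void OfferAndReclaim(IDXGIDevice4* device, IDXGIResource* resource)
{
    IDXGIResource* resources[] = { resource };

    // ALLOW_DECOMMIT is what lets this genuinely reduce memory usage.
    device->OfferResources1(1, resources,
                            DXGI_OFFER_RESOURCE_PRIORITY_NORMAL,
                            DXGI_OFFER_RESOURCE_FLAG_ALLOW_DECOMMIT);

    // ... later, when the resource is needed again:
    DXGI_RECLAIM_RESOURCE_RESULTS result;  // one entry per resource
    device->ReclaimResources1(1, resources, &result);
    if (result != DXGI_RECLAIM_RESOURCE_RESULT_OK)
    {
        // Contents were discarded (or never committed); recreate them.
    }
}
```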


Discard is intended for tile-based deferred rasterizers (TBDRs). See this StackOverflow post for more comments.


galop1n is incorrect. DiscardResource has NOTHING to do with memory reclaiming.


DiscardResource has two purposes:

  1. TBDR: When tilers rasterize, they keep data in on-chip memory. When they need to flush, they write the data back to RAM. If, for example, the next passes will read from the colour render target you're now writing to, but you don't care at all about the depth buffer, discarding the depth buffer tells the driver that the GPU should not flush the depth buffer's contents back to RAM. This saves bandwidth and power. Even desktop GPUs might benefit from this, as NVIDIA's Pascal is an immediate tiler, though I doubt the NV driver actually listens to DiscardResource for this optimization.
  2. Resolving SLI/CrossFire inter-frame dependencies. When clearing is expensive or unnecessary for some reason, DiscardResource tells the driver that the GPU should not synchronize the resource's contents between the two cards. Whether a clear is better is arguable. Depth buffers do benefit from clears, as they have a lot of algorithms behind the scenes (Early Z, Hi-Z, Z compression) that can use a state reset. Colour buffers were often believed not to, but that changed when AMD introduced DCC (Delta Colour Compression) and other minor algorithms. MSAA resources also benefit from clears.
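The depth-buffer case in point 1 can be sketched with the D3D11.1 API (assuming an ID3D11DeviceContext1 and an existing depth-stencil view; an illustration of the idea, not a drop-in implementation):

```cpp
#include <d3d11_1.h>

// Hedged sketch: after a pass whose depth results are no longer
// needed, discard the depth-stencil view so a tiler can skip
// flushing the depth tiles back to RAM.
void EndPassDiscardingDepth(ID3D11DeviceContext1* ctx,
                            ID3D11DepthStencilView* dsv)
{
    // ... draw calls for the pass go here ...

    // The depth buffer's contents become undefined after this call;
    // the colour render target is untouched and can still be sampled.
    ctx->DiscardView(dsv);
}
```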

Edit: See http://developer.download.nvidia.com/assets/events/GDC15/GEFORCE/SLI_GDC15.pdf

Edited by Matias Goldberg

