Funkymunky

DX12 [D3D12] Descriptor Heap Strategies


Recommended Posts

A lot of DX12 articles talk about implementing the descriptor heap entries as a ring buffer (notably the NVIDIA Do's and Don'ts). I've also read in these forums that some people prefer a stack-allocated scheme. I don't see why these methods would be the preferred way of solving this problem. A ring buffer of descriptors is great if you're always adding new descriptors while deleting the oldest ones, but what happens when you want to remove a descriptor from the middle of the active set? And as for a stack-allocated scheme, wouldn't that involve copying in the descriptors every frame? Why wouldn't something like a free-list or buddy allocator be preferable to either of these setups?
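For reference, the ring-buffer scheme usually assumes descriptors are allocated linearly each frame and retired a whole frame at a time once a fence signals. A minimal sketch of that idea, with plain indices standing in for descriptor handles (all names here are illustrative, not D3D12 API):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical ring-buffer descriptor allocator: allocations advance a
// monotonic head; a whole frame's worth is retired at once when the GPU
// signals it has finished with that frame.
struct DescriptorRing {
    uint32_t capacity;
    uint64_t head = 0; // next slot to hand out (monotonic counter)
    uint64_t tail = 0; // oldest slot still in flight (monotonic counter)

    explicit DescriptorRing(uint32_t cap) : capacity(cap) {}

    // Returns a heap slot for one descriptor, or ~0u if the ring is full
    // (i.e., the GPU hasn't retired enough old frames yet).
    uint32_t Allocate() {
        if (head - tail >= capacity) return ~0u;
        return static_cast<uint32_t>(head++ % capacity);
    }

    uint64_t Mark() const { return head; } // record at end-of-frame

    // Called when a fence confirms everything allocated before `mark`
    // is no longer referenced by the GPU.
    void RetireUpTo(uint64_t mark) { tail = mark; }
};
```

This makes the trade-off in the question concrete: freeing the oldest allocations is trivial, but there is no way to release a descriptor from the middle of the active range.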


I guess what I don't understand is why there would be a lot of objects with transient lifetimes. It seems like most textures and constant buffers are going to stick around for a while. In fact, it seems like adding new descriptors and removing old ones would happen pretty infrequently. Can you describe a use case where a majority of objects would require new descriptors every frame? And also, are you saying to call CreateShaderResourceView/CreateConstantBufferView every frame?


The problem isn't the individual textures having transient lifetimes, but rather the sets/tables of textures having transient lifetimes, as they're arbitrarily combined by the engine. I've seen both sides of this: one where every descriptor table was pre-allocated at engine initialization, and another where everything was dynamic. In the static case, the unit of allocation was a descriptor table of fixed size, managed with a heap allocator. In the dynamic case, the unit of allocation was the individual descriptor or view.

 

For the dynamic case, a common pattern is to use a set of "offline" descriptor heaps which exist on the CPU timeline to stage the descriptors, and CopyDescriptors on a per-frame basis to gather them into "online" descriptor heaps, into tables for binding. The Create*View APIs only need to be called on these "offline" descriptor heaps.
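A minimal model of that gather step, with plain integers standing in for descriptors. In real D3D12 the copy would be ID3D12Device::CopyDescriptors from non-shader-visible ("offline") heaps into the shader-visible ("online") heap; everything else here is a hypothetical sketch:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative "online" heap: a per-frame linear allocator that gathers
// arbitrary sets of staged descriptors into contiguous tables so each set
// can be bound with a single root-table offset.
struct OnlineHeap {
    std::vector<int> slots;   // shader-visible heap contents (modeled)
    size_t cursor = 0;        // per-frame linear cursor

    explicit OnlineHeap(size_t cap) : slots(cap, -1) {}

    void BeginFrame() { cursor = 0; } // reset once the GPU is done with it

    // Copy an arbitrary set of staged ("offline") descriptors into one
    // contiguous range; returns the table's base offset for binding.
    // In D3D12 this loop would be a CopyDescriptors call.
    size_t GatherTable(const std::vector<int>& staged) {
        size_t base = cursor;
        for (int d : staged) slots[cursor++] = d;
        return base;
    }
};
```

The point is that the offline heaps can be organized however is convenient for the CPU (and mutated freely), while the online heap only ever sees cheap, contiguous, per-frame copies.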


Until now I've just been creating two descriptors for any per-frame resources (mostly buffers that are constantly updated). But I've also been calling SetGraphicsRootDescriptorTable for every bound resource, rather than batching things into contiguous regions and minimizing those calls. This has worked fine for the relatively small shaders I've tested my scenes with, but it's clear now that this strategy could quickly hit a wall.

 

It's pretty much a classic allocation problem, except that there's no reason not to throw extra memory and processing power at making the allocations and deallocations as fast as possible. I've been trying to dream up a faster scheme, but so far it seems like the ring buffer / stack allocator strategy is the way to go.

 

That bindless strategy is intriguing.  That approach would use a common root signature with access to every descriptor, right?  It might be tricky getting root constants to work with that, but the tradeoff for not having to manage the descriptor heap is enticing...

Edited by Funkymunky


After some deliberation, I think I'm going to adopt the following scheme:

 

I'll create one or more "offline" heaps to create descriptors in, since that will let me create resources on separate threads. For the "online" heap, I'll use a free-list allocator to hand out descriptor ranges. I'll maintain three lists for this. The first will be a list of available allocations, sorted by size (a normal free list). The second will be a list of both allocated and deallocated ranges, sorted by their offset from the start of the heap. The third will be a list of just the deallocated ranges, also sorted by their offset from the start of the heap. (The second and third lists will share the same structures, with each structure holding pointers to its neighbors.)

 

Every frame I will run a basic defragmentation pass. It will look at the first entry in that third list (deallocations). If the neighbor to the right of that entry is also a deallocation, I'll coalesce the two into a single deallocation. If the neighbor is instead an allocated range, I'll shuffle that range to the left, essentially bubbling the deallocation toward the end of the heap.
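A simplified sketch of the core of this scheme: free ranges keyed by offset in a std::map (playing the role of the offset-sorted lists), with eager coalescing of adjacent free neighbors on every free instead of a deferred per-frame defragmentation pass. All names are illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <map>

// Hypothetical free-range allocator for descriptor heap slots.
struct RangeAllocator {
    std::map<size_t, size_t> free_; // offset -> size, sorted by offset

    explicit RangeAllocator(size_t heapSize) { free_[0] = heapSize; }

    // First-fit allocation of `count` contiguous descriptors; returns the
    // range's offset, or SIZE_MAX on failure.
    size_t Allocate(size_t count) {
        for (auto it = free_.begin(); it != free_.end(); ++it) {
            if (it->second < count) continue;
            size_t offset = it->first, remaining = it->second - count;
            free_.erase(it);
            if (remaining) free_[offset + count] = remaining;
            return offset;
        }
        return SIZE_MAX;
    }

    void Free(size_t offset, size_t count) {
        auto next = free_.lower_bound(offset);
        // Coalesce with the free neighbor on the right, if adjacent.
        if (next != free_.end() && offset + count == next->first) {
            count += next->second;
            next = free_.erase(next);
        }
        // Coalesce with the free neighbor on the left, if adjacent.
        if (next != free_.begin()) {
            auto prev = std::prev(next);
            if (prev->first + prev->second == offset) {
                prev->second += count;
                return;
            }
        }
        free_[offset] = count;
    }
};
```

Coalescing on free handles the merge case; what it can't do is relocate live allocations, which is exactly what the shuffling pass described above adds on top.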

 

In practice, I'll probably split the "online" heap into multiple regions (one for each frame in flight). This way I can shuffle descriptors after a fence without disrupting anything that's still in use. As long as I don't hammer the heap with constant allocations and fragmenting deallocations, this should keep the heap relatively well managed. And even if I do, I can always run more defragmentation passes per frame to keep things in check.


My approach was to give every command list its own portion of the heap, so there's no need for any kind of fence synchronization. This maps well to Vulkan too, where every command list has its own descriptor pool. The user has the option to specify the size of the sub-allocation, along with some other customizations. This approach guarantees lock-free descriptor allocation, except for the one time the command list sub-allocates its "local" descriptor heap from the global one.
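A sketch of that idea: the global heap hands out fixed-size chunks with a single atomic fetch-add, and each command list then bump-allocates inside its chunk with no further synchronization (names and sizes here are illustrative, not from any particular engine):

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>

// The shared heap: the only cross-thread operation is carving out a chunk.
struct GlobalHeap {
    std::atomic<size_t> cursor{0};
    size_t capacity;
    explicit GlobalHeap(size_t cap) : capacity(cap) {}

    // One atomic fetch-add per command list; returns SIZE_MAX on overflow.
    size_t AllocateChunk(size_t chunkSize) {
        size_t base = cursor.fetch_add(chunkSize);
        return base + chunkSize <= capacity ? base : SIZE_MAX;
    }
};

// Per-command-list allocator: lock-free by construction, since only the
// thread recording that command list ever touches it.
struct CommandListAllocator {
    size_t base, size, used = 0;
    CommandListAllocator(GlobalHeap& heap, size_t chunkSize)
        : base(heap.AllocateChunk(chunkSize)), size(chunkSize) {}

    size_t Allocate(size_t count) {
        if (used + count > size) return SIZE_MAX; // chunk exhausted
        size_t offset = base + used;
        used += count;
        return offset;
    }
};
```

The trade-off is internal fragmentation: descriptors a command list doesn't use in its chunk are wasted for that frame, which is the price paid for skipping all synchronization on the hot path.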

Edited by mark_braga

