Shnoutz

[DX12] Updating a descriptor heap using a command list?


Hello,

 

Is there a way to update a (GPU visible) descriptor in a heap from a command list?

 

As an example, I have a list of 100 texture descriptors:

 

- Draw call 1 uses this list.

- Modify one of the descriptors (let's say descriptor #17)... something like "commandList->UpdateDescriptor(descHeap, 17, srvDesc)"

- Draw call 2 uses the modified list.

 

As I understand it, descriptors are written from the CPU, so to be sure I do not modify something the GPU may still be using, I would need to create a brand-new list of 100 descriptors with only the one descriptor changed.

 

- Draw call 1 uses first list.

- Create a second list of 100 descriptors, with #17 different from the first list.

- Draw call 2 uses the second list.

 

That does not look too bad, but I have a feeling it could become problematic once the number of descriptors scales up to 10k.
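In code, that second approach would amount to something like the rough sketch below. I'm assuming a CPU-only staging heap that mirrors the table plus spare room in the shader-visible heap; g_device, g_stagingHeap, g_gpuHeap, g_nextFreeSlot, newTexture and srvDesc are placeholder names, not a real API:

const UINT kTableSize = 100;
const UINT inc = g_device->GetDescriptorHandleIncrementSize(
    D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

// Patch slot #17 in the staging heap. This is safe because the GPU never
// reads a non-shader-visible heap.
D3D12_CPU_DESCRIPTOR_HANDLE staging =
    g_stagingHeap->GetCPUDescriptorHandleForHeapStart();
D3D12_CPU_DESCRIPTOR_HANDLE slot17 = staging;
slot17.ptr += 17 * inc;
g_device->CreateShaderResourceView(newTexture, &srvDesc, slot17);

// Copy the whole 100-descriptor table into a fresh region of the GPU heap.
// D3D12 only allows copying *from* non-shader-visible heaps, so the staging
// heap is the source.
D3D12_CPU_DESCRIPTOR_HANDLE dst =
    g_gpuHeap->GetCPUDescriptorHandleForHeapStart();
dst.ptr += g_nextFreeSlot * inc;
g_device->CopyDescriptorsSimple(kTableSize, dst, staging,
                                D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

// Draw call 2 binds the new copy of the table.
D3D12_GPU_DESCRIPTOR_HANDLE table =
    g_gpuHeap->GetGPUDescriptorHandleForHeapStart();
table.ptr += g_nextFreeSlot * inc;
commandList->SetGraphicsRootDescriptorTable(0, table);
g_nextFreeSlot += kTableSize; // recycle old regions with a fence in real code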

 

Hopefully you can understand what I mean...

 

Thanks.


Currently no, there is no way to do what you want. If you want to have per-draw lists, keep them small so that renaming them is cheap. If you want large lists, consider just using new elements instead of replacing them in place.
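To sketch the "new elements" idea: never overwrite a live slot, just append. The heap bookkeeping (g_gpuHeap, g_nextFreeSlot) and the root-signature layout (a 32-bit root constant at parameter 1 feeding a dynamically indexed texture array in HLSL) are assumptions for illustration, not the only way to do it:

// Write the updated SRV into a brand-new slot at the end of the
// shader-visible heap instead of touching slot #17 in place.
const UINT newSlot = g_nextFreeSlot++;
D3D12_CPU_DESCRIPTOR_HANDLE h = g_gpuHeap->GetCPUDescriptorHandleForHeapStart();
h.ptr += newSlot * g_device->GetDescriptorHandleIncrementSize(
    D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
g_device->CreateShaderResourceView(newTexture, &srvDesc, h);

// Tell the shader which slot stands in for #17, e.g. via a 32-bit root
// constant used to index a Texture2D array in HLSL.
commandList->SetGraphicsRoot32BitConstant(1, newSlot, 0);

Draw call 1's table is never modified, so it stays valid, and stale slots can be recycled once a fence confirms the GPU has moved past them.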

I bet it's like this because of limitations of some hardware (Intel? NVIDIA?).
It's a pity, because it makes things a lot harder than they should be.


It seems this was a planned feature and was in the documentation at one point:

http://www.gamedev.net/topic/670726-d3d12-copying-descriptors/

IIRC it's even mentioned in the Frank Luna book...

This would be hell for driver teams to implement safely though, so it's probably best they've scrapped it... :(


