DX12 Debug layer on/off will change DX behavior?


Recommended Posts

Hey Guys,

 

Does anyone know how the debug layer can affect DX12 behavior? I spent a whole day fighting a bug that looks like GPU data corruption with atomic ops (I also tried explicitly clearing the data before use, but that only produced even weirder behavior...). But it turns out that with the debug layer turned on, everything is just all right....

So any comments, suggestions or even ideas will be greatly appreciated.

 

P.S. I do suppress some debug layer warnings. In case that matters, here they are:

 

D3D12_MESSAGE_ID_EXECUTECOMMANDLISTS_GPU_WRITTEN_READBACK_RESOURCE_MAPPED

D3D12_MESSAGE_ID_INVALID_DESCRIPTOR_HANDLE

D3D12_MESSAGE_ID_CREATEGRAPHICSPIPELINESTATE_PS_OUTPUT_RT_OUTPUT_MISMATCH

D3D12_MESSAGE_ID_COMMAND_LIST_DESCRIPTOR_TABLE_NOT_SET
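For context, here is a rough sketch of how message IDs like these are typically suppressed — via `ID3D12InfoQueue`'s deny-list storage filter. The function name `SuppressDebugLayerMessages` is illustrative, not from the post, and this assumes `device` was created with the debug layer enabled:

```cpp
#include <d3d12.h>
#include <d3d12sdklayers.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Push a storage filter that hides the four message IDs listed above.
void SuppressDebugLayerMessages(ID3D12Device* device)
{
    ComPtr<ID3D12InfoQueue> infoQueue;
    if (SUCCEEDED(device->QueryInterface(IID_PPV_ARGS(&infoQueue))))
    {
        D3D12_MESSAGE_ID denyIds[] = {
            D3D12_MESSAGE_ID_EXECUTECOMMANDLISTS_GPU_WRITTEN_READBACK_RESOURCE_MAPPED,
            D3D12_MESSAGE_ID_INVALID_DESCRIPTOR_HANDLE,
            D3D12_MESSAGE_ID_CREATEGRAPHICSPIPELINESTATE_PS_OUTPUT_RT_OUTPUT_MISMATCH,
            D3D12_MESSAGE_ID_COMMAND_LIST_DESCRIPTOR_TABLE_NOT_SET,
        };
        D3D12_INFO_QUEUE_FILTER filter = {};
        filter.DenyList.NumIDs  = _countof(denyIds);
        filter.DenyList.pIDList = denyIds;
        infoQueue->PushStorageFilter(&filter);
    }
}
```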

 

I have tested this on an nvidia GTX 680 and a 1080; they both have the same problem, so I don't think this is a HW-related 'bug'.

 

Thanks

Not sure if this affects the debug layer of DX12 specifically, but it smells a bit like an uninitialized variable or property.


Not sure if this affects the debug layer of DX12 specifically, but it smells a bit like an uninitialized variable or property.

You mean a GPU-side variable or property? I tested it in both debug and release builds; the bug only appears when the debug layer is off...

 

Also, I have a UI button which resets all the related GPU buffers (some through a compute shader, some through API calls like ClearUnorderedAccessViewUint/Float), but calling that doesn't help at all.

 

Also, I find the ClearUnorderedAccessViewUint/Float calls very suspicious though...


P.S. I do suppress some debug layer warnings. In case that matters, here they are:   D3D12_MESSAGE_ID_EXECUTECOMMANDLISTS_GPU_WRITTEN_READBACK_RESOURCE_MAPPED D3D12_MESSAGE_ID_INVALID_DESCRIPTOR_HANDLE D3D12_MESSAGE_ID_CREATEGRAPHICSPIPELINESTATE_PS_OUTPUT_RT_OUTPUT_MISMATCH D3D12_MESSAGE_ID_COMMAND_LIST_DESCRIPTOR_TABLE_NOT_SET
My first assumption would be that these are the source of your issue...

If you don't suppress these message IDs, what text gets printed in the debug output, what are you doing to cause these messages, and what makes you sure that it's safe?


The debug layer has several features which can be enabled/disabled. By default, the debug layer runs with one relatively big behavior change, which is that any GPU-side waits get turned into CPU-side waits, in order to run more validation as work is executing, instead of just when it's submitted, when a lot of things (e.g. resource state, descriptor heap contents) might not be knowable.

 

Beyond that, unless you turn on GPU-based validation, which is a pretty heavy-handed feature, the behavior changes should be very minimal.

 

I agree with Hodgman, the errors you're suppressing are likely the problem: D3D12_MESSAGE_ID_COMMAND_LIST_DESCRIPTOR_TABLE_NOT_SET sounds like it would cause the behavior that you're seeing, because that would mean you're using uninitialized data.


Implementing the debug layer is not an easy thing, and small differences in behavior can happen. But D3D12 drivers are robust enough now that 99% of the time the bug is on the user's side.

 

Now that the user has the task of synchronization, even a warning-free run is no guarantee of bug-free code.

 

Things to try :

* Run with the software adapter

* Enable GPU-based validation (and cry if it is your first time; you will have at least a dozen new errors to fix).
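The "software adapter" suggestion means creating the device on WARP, the reference-quality software rasterizer. A minimal sketch (the function name `CreateWarpDevice` is mine, not from the post):

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a D3D12 device on the WARP software adapter instead of the GPU.
HRESULT CreateWarpDevice(ComPtr<ID3D12Device>& device)
{
    ComPtr<IDXGIFactory4> factory;
    HRESULT hr = CreateDXGIFactory1(IID_PPV_ARGS(&factory));
    if (FAILED(hr)) return hr;

    ComPtr<IDXGIAdapter> warpAdapter;
    hr = factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter));
    if (FAILED(hr)) return hr;

    // If the bug disappears on WARP, the hardware driver becomes a suspect;
    // if it persists, the bug is almost certainly in your own code.
    return D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0,
                             IID_PPV_ARGS(&device));
}
```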

 

Of the 4 messages you suppress: having a resource mapped while the GPU writes to it is quite ugly; your sync points and fences are definitely missing something. As for the not-set descriptor tables, you end up not doing what you think you are doing with some resources. You can also try Nsight or RenderDoc, which may give you a better idea of what is wrong in your frame.

 

But rule number one of D3D12 programming: if there is a debug layer warning or error, fix it first; you do not want this to escalate into a giant mess. And do not assume that once you are clean it will stay that way; it is easy to introduce new bugs, and even a Windows update can bring new validation.


Thanks guys. If you are interested, here are the error messages I get if I don't suppress anything:


D3D12 ERROR: ID3D12CommandList::DrawIndexedInstanced: Root Parameter Index [2] is not set. On a Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the currently set Root Signature (0x0000026D2DE9E200:'TSDFVolume') must be populated, even if the shaders do not need the descriptor. [ EXECUTION ERROR #991: COMMAND_LIST_DESCRIPTOR_TABLE_NOT_SET]
D3D12 ERROR: ID3D12CommandList::DrawIndexedInstanced: Root Parameter Index [2] is not set. On a Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the currently set Root Signature (0x0000026D2DE9E200:'TSDFVolume') must be populated, even if the shaders do not need the descriptor. [ EXECUTION ERROR #991: COMMAND_LIST_DESCRIPTOR_TABLE_NOT_SET]
D3D12 ERROR: ID3D12CommandList::DrawInstanced: Root Parameter Index [2] is not set. On a Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the currently set Root Signature (0x0000026D2DE9E200:'TSDFVolume') must be populated, even if the shaders do not need the descriptor. [ EXECUTION ERROR #991: COMMAND_LIST_DESCRIPTOR_TABLE_NOT_SET]
D3D12 ERROR: ID3D12CommandList::DrawInstanced: Root Parameter Index [2] is not set. On a Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the currently set Root Signature (0x0000026D2DE9E200:'TSDFVolume') must be populated, even if the shaders do not need the descriptor. [ EXECUTION ERROR #991: COMMAND_LIST_DESCRIPTOR_TABLE_NOT_SET]
D3D12 ERROR: ID3D12CommandList::DrawIndexedInstanced: Root Parameter Index [2] is not set. On a Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the currently set Root Signature (0x0000026D2DE9E200:'TSDFVolume') must be populated, even if the shaders do not need the descriptor. [ EXECUTION ERROR #991: COMMAND_LIST_DESCRIPTOR_TABLE_NOT_SET]
D3D12 ERROR: ID3D12CommandQueue::ExecuteCommandLists: Specified GPU Descriptor Handle (ptr=0x26d29360370 at 12 offsetInDescriptorsFromDescriptorHeapStart) of type SRV, for Root Signature (0x0000026D2DE9E8F0:'NormalGenerator')'s Descriptor Table (at Parameter Index [2])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_UAV) have mismatching types, at Dispatch Index: [0]. On Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the set Root Signature must be populated and initialized, even if the shaders do not need the descriptor. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
D3D12 ERROR: ID3D12CommandQueue::ExecuteCommandLists: Specified GPU Descriptor Handle (ptr=0x26d293603b0 at 14 offsetInDescriptorsFromDescriptorHeapStart) of type SRV, for Root Signature (0x0000026D2DE9E8F0:'NormalGenerator')'s Descriptor Table (at Parameter Index [2])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_UAV) have mismatching types, at Dispatch Index: [1]. On Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the set Root Signature must be populated and initialized, even if the shaders do not need the descriptor. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
D3D12 ERROR: ID3D12CommandQueue::ExecuteCommandLists: Specified GPU Descriptor Handle (ptr=0x26d29360590 at 30 offsetInDescriptorsFromDescriptorHeapStart) of type SRV, for Root Signature (0x0000026D2DE9E200:'TSDFVolume')'s Descriptor Table (at Parameter Index [2])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_UAV) have mismatching types, at Dispatch Index: [1]. On Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the set Root Signature must be populated and initialized, even if the shaders do not need the descriptor. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
D3D12 ERROR: ID3D12CommandQueue::ExecuteCommandLists: Specified GPU Descriptor Handle (ptr=0x26d29360590 at 31 offsetInDescriptorsFromDescriptorHeapStart) of type SRV, for Root Signature (0x0000026D2DE9E200:'TSDFVolume')'s Descriptor Table (at Parameter Index [2])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_UAV) have mismatching types, at Dispatch Index: [1]. On Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the set Root Signature must be populated and initialized, even if the shaders do not need the descriptor. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
D3D12 ERROR: ID3D12CommandQueue::ExecuteCommandLists: Specified GPU Descriptor Handle (ptr=0x26d29360590 at 34 offsetInDescriptorsFromDescriptorHeapStart) of type SRV, for Root Signature (0x0000026D2DE9E200:'TSDFVolume')'s Descriptor Table (at Parameter Index [2])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_UAV) have mismatching types, at Dispatch Index: [1]. On Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the set Root Signature must be populated and initialized, even if the shaders do not need the descriptor. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
D3D12 ERROR: ID3D12CommandQueue::ExecuteCommandLists: Specified GPU Descriptor Handle (ptr=0x26d29360610 at 34 offsetInDescriptorsFromDescriptorHeapStart) of type SRV, for Root Signature (0x0000026D2DE9E200:'TSDFVolume')'s Descriptor Table (at Parameter Index [2])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_UAV) have mismatching types, at Dispatch Index: [1]. On Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the set Root Signature must be populated and initialized, even if the shaders do not need the descriptor. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
D3D12 ERROR: ID3D12CommandQueue::ExecuteCommandLists: Specified GPU Descriptor Handle (ptr=0x26d29360610 at 35 offsetInDescriptorsFromDescriptorHeapStart) of type SRV, for Root Signature (0x0000026D2DE9E200:'TSDFVolume')'s Descriptor Table (at Parameter Index [2])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_UAV) have mismatching types, at Dispatch Index: [1]. On Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the set Root Signature must be populated and initialized, even if the shaders do not need the descriptor. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
D3D12 ERROR: ID3D12CommandQueue::ExecuteCommandLists: Specified GPU Descriptor Handle (ptr=0x26d29360610 at 38 offsetInDescriptorsFromDescriptorHeapStart) of type SRV, for Root Signature (0x0000026D2DE9E200:'TSDFVolume')'s Descriptor Table (at Parameter Index [2])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_UAV) have mismatching types, at Dispatch Index: [1]. On Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the set Root Signature must be populated and initialized, even if the shaders do not need the descriptor. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
D3D12 ERROR: ID3D12CommandQueue::ExecuteCommandLists: Specified GPU Descriptor Handle (ptr=0x26d29360690 at 38 offsetInDescriptorsFromDescriptorHeapStart) of type SRV, for Root Signature (0x0000026D2DE9E200:'TSDFVolume')'s Descriptor Table (at Parameter Index [2])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_UAV) have mismatching types, at Dispatch Index: [2]. On Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the set Root Signature must be populated and initialized, even if the shaders do not need the descriptor. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
D3D12 ERROR: ID3D12CommandQueue::ExecuteCommandLists: Specified GPU Descriptor Handle (ptr=0x26d29360690 at 39 offsetInDescriptorsFromDescriptorHeapStart) of type SRV, for Root Signature (0x0000026D2DE9E200:'TSDFVolume')'s Descriptor Table (at Parameter Index [2])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_UAV) have mismatching types, at Dispatch Index: [2]. On Resource Binding Tier 2 hardware, all descriptor tables of type CBV and UAV declared in the set Root Signature must be populated and initialized, even if the shaders do not need the descriptor. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE] 

So from the messages, all those errors target Resource Binding Tier 2 hardware. If these do cause the problem on my GTX 680m, why do they also cause the same bug on the GTX 1080 (which is Tier 3, right)?

The main reason for those errors is that I want to share a root signature among different PSOs which have different amounts of resources to bind, so there will be unused slots, but my shaders will not touch the unbound resources.
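One common way to keep a shared root signature legal on Tier 2 hardware is to point unused table slots at null descriptors; this is a sketch of standard practice, not something from this thread, and `device`/`cpuHandle` plus the format and dimension are placeholder assumptions:

```cpp
// Fill an unused UAV slot with a null descriptor so Tier 2 validation is
// satisfied. The view desc must still be fully specified and must match the
// range type declared in the root signature.
D3D12_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
uavDesc.Format              = DXGI_FORMAT_R32_FLOAT;
uavDesc.ViewDimension       = D3D12_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.NumElements  = 1;

device->CreateUnorderedAccessView(
    /*pResource*/        nullptr,  // null resource => null descriptor
    /*pCounterResource*/ nullptr,
    &uavDesc,
    cpuHandle);
```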

 

If you don't suppress these message ID's, what text gets printed in the debug output, what are you doing to cause these messages, and what makes you sure that it's safe? 

 

The error messages are as mentioned above, along with their causes. I do have concerns about how safe it is, but since Microsoft's MiniEngine (in their DX12 sample repo) also suppresses those errors, I took the safety for granted (if it were not safe, they shouldn't have put it in their DX12 samples, right?). Also, their samples work properly, and my previous projects also worked properly (even on my Tier 2 GPU). So I never thought this would be the source of my bugs... (maybe I was totally wrong...)

But how could an unbound slot cause issues if I never access it? I understand those warnings (or errors) totally make sense for a validation layer, but how does an 'uninitialized variable cause problems if you never use that variable'?

 

Thanks

 


D3D12_MESSAGE_ID_COMMAND_LIST_DESCRIPTOR_TABLE_NOT_SET sounds like it would cause the behavior that you're seeing, because that would mean you're using uninitialized data.

 

Those unbound root signature slots come from sharing a root signature among different PSOs, and my shaders won't access those descriptors... It still causes problems?

Thanks 


Of the 4 messages you suppress: having a resource mapped while the GPU writes to it is quite ugly; your sync points and fences are definitely missing something

 

Thanks for the reply. The mapped resource is only for roughly printing data on a debug UI panel, and I wait on a fence to make sure the copy from the default heap to the readback heap is done before the CPU reads... It's not triggered in my project; I just put it there during a quick debug session though...


Enable GPU-based validation (and cry if it is your first time; you will have at least a dozen new errors to fix).

How do I enable it? Any resources on that? Unfortunately it seems it's my first time... Is it something different from the debug layer?

Thanks 

You need to be on a recent enough Windows update and use the proper Windows SDK (in the project settings). Then you can query ID3D12Debug1, which has a function to enable GPU-based validation, with a whole new world of new errors for you. One of its big catches is resource barrier mismatches, for example.
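A minimal sketch of what that looks like, assuming a recent SDK (the helper name `EnableGpuValidation` is illustrative); it must run before the device is created:

```cpp
#include <d3d12.h>
#include <d3d12sdklayers.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Turn on the debug layer plus GPU-based validation. Call before
// D3D12CreateDevice, or the settings have no effect.
void EnableGpuValidation()
{
    ComPtr<ID3D12Debug1> debug;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
    {
        debug->EnableDebugLayer();                // the regular debug layer
        debug->SetEnableGPUBasedValidation(TRUE); // the heavy-handed part
    }
}
```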


Is the SetXXXRootSignature API call really cheap, so that sharing a root signature among multiple PSOs is discouraged? I mean, if you have PSOs which need different numbers of resources bound, should you just create a root signature for each of them and make the extra SetXXXRootSignature call at runtime when you change PSOs?
