  • Similar Content

    • By Jason Smith
      While working on a project using D3D12, I was getting an exception thrown while trying to get a D3D12_CPU_DESCRIPTOR_HANDLE. The project is written in plain C, so it uses the COBJMACROS interface macros. The following application replicates the problem happening in the project.

      #define COBJMACROS
      #pragma warning(push, 3)
      #include <Windows.h>
      #include <d3d12.h>
      #include <dxgi1_4.h>
      #pragma warning(pop)

      IDXGIFactory4 *factory;
      ID3D12Device *device;
      ID3D12DescriptorHeap *rtv_heap;

      int WINAPI wWinMain(HINSTANCE hinst, HINSTANCE pinst, PWSTR cline, int cshow)
      {
          (hinst), (pinst), (cline), (cshow);

          HRESULT hr = CreateDXGIFactory1(&IID_IDXGIFactory4, (void **)&factory);
          hr = D3D12CreateDevice(0, D3D_FEATURE_LEVEL_11_0, &IID_ID3D12Device, (void **)&device);

          D3D12_DESCRIPTOR_HEAP_DESC desc;
          desc.NumDescriptors = 1;
          desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
          desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
          desc.NodeMask = 0;
          hr = ID3D12Device_CreateDescriptorHeap(device, &desc, &IID_ID3D12DescriptorHeap, (void **)&rtv_heap);

          D3D12_CPU_DESCRIPTOR_HANDLE rtv = ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart(rtv_heap);
          (rtv);
      }

      The call to ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart throws an exception. Stepping into the disassembly for ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart shows that the error occurs on the instruction

      mov  qword ptr [rdx],rax

      which seems odd since rdx doesn't appear to be used. Any help would be greatly appreciated. Thank you. (A sketch of one commonly suggested workaround is included just after this Similar Content list.)
       
    • By lubbe75
      As far as I understand, there is no real random or noise function in HLSL.
      I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient and the standard way is really to use a pre-calculated noise texture, but anyway...
      Does anyone have any quick and dirty HLSL shader code that fakes water normals, and that doesn't look too repetitious? 
    • By turanszkij
      Hi,
      I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer is throwing error messages that this is not supported by the spec, so it might not work on other hardware. There is also the possibility of flipping the clip-space Y coordinate before writing it out from the vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I would need to track down everywhere in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders. (See the negative-viewport sketch just after this Similar Content list.)
    • By NikiTo
      Some people say "discard" has no positive effect on optimization. Other people say it will at least spare the texture fetches.
       
      if (color.A < 0.1f)
      {
          //discard;
          clip(-1);
      }
      // tons of reads of textures following here
      // and loops too
      Some people say that "discard" will only mask out the output of the pixel shader, while still evaluating all the statements after the "discard" instruction.

      MSDN says:
      discard: Do not output the result of the current pixel.
      clip: Discards the current pixel.

      As usual it is unclear, but it suggests that "clip" could discard the whole pixel (maybe stopping execution too).

      I think that, at least for thermal and energy-consumption reasons, the GPU should not evaluate the statements after "discard", but some people on the internet say the GPU computes those statements anyway. What I am more worried about are the texture fetches after discard/clip.

      (What if, after the discard, I have an expensive branch decision that makes the approved cheap-branch neighbor pixels stall for nothing? This is crazy.)
    • By NikiTo
      I have a problem. My shaders are huge, in the sense that they have a lot of code inside. Many of my pixels should be completely discarded. I could use a comparison and discard at the very beginning of the shader, but as far as I understand, the discard statement does not save workload at all, as the pixel has to stall until the long, huge neighbor shaders complete.
      Initially I wanted to use the stencil to discard pixels before the execution flow even enters the shader, before the GPU distributes/allocates resources for it, avoiding stalls in the pixel shader execution flow. I assumed that the depth/stencil test discards pixels before the pixel shader, but I see now that it happens in the very last Output Merger stage. It seems extremely inefficient to render a little mirror in a scene with a big viewport that way. Why did they put the stencil test in the Output Merger anyway? Handling of the stencil is so limited compared to other resources. Do people use the stencil functionality at all for games, or do they prefer discard/clip?

      Will the GPU stall the pixel if I issue a discard at the very beginning of the pixel shader, or will the GPU immediately start using the freed-up resources to render another pixel?



       
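A note on the descriptor-handle crash in the first Similar Content post above: the access violation at mov qword ptr [rdx],rax is the classic symptom of the C declaration of GetCPUDescriptorHandleForHeapStart not matching the x64 calling convention the compiled method actually uses. On x64, COM methods return structures through a hidden pointer passed in the second argument register (RDX), which the single-argument C prototype in some SDK headers never supplies. The sketch below shows the commonly suggested workaround of calling through the vtable with an explicit return-value pointer; the helper name and the exact cast are assumptions to verify against your d3d12.h version.

    #define COBJMACROS
    #include <Windows.h>
    #include <d3d12.h>

    /* Assumed workaround sketch: call the method with the ABI the x64
       implementation actually uses (hidden return-value pointer passed as the
       second argument). Check your d3d12.h before relying on this. */
    typedef void (STDMETHODCALLTYPE *PFN_GetCPUHandleForHeapStart)(
        ID3D12DescriptorHeap *self, D3D12_CPU_DESCRIPTOR_HANDLE *out);

    static D3D12_CPU_DESCRIPTOR_HANDLE get_heap_start(ID3D12DescriptorHeap *heap)
    {
        D3D12_CPU_DESCRIPTOR_HANDLE handle;
        PFN_GetCPUHandleForHeapStart fn =
            (PFN_GetCPUHandleForHeapStart)heap->lpVtbl->GetCPUDescriptorHandleForHeapStart;
        fn(heap, &handle); /* result is written through the pointer (rdx) */
        return handle;
    }

Newer Windows SDK headers reportedly change the C binding to take the return-value pointer explicitly, so it is worth checking which form your header declares before applying the cast.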
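On the Vulkan clip-space question above: one low-effort option, assuming the VK_KHR_maintenance1 extension (promoted to core in Vulkan 1.1) can be enabled, is a negative-height viewport, which flips Y without touching any shaders and is accepted by the validation layer. A rough sketch, where cmd, width and height are placeholders:

    #include <vulkan/vulkan.h>

    /* Sketch: flip Y with a negative-height viewport. Requires
       VK_KHR_maintenance1 (or Vulkan >= 1.1); without it the validation
       layer correctly flags negative heights as out of spec. */
    static void set_flipped_viewport(VkCommandBuffer cmd, uint32_t width, uint32_t height)
    {
        VkViewport viewport;
        viewport.x        = 0.0f;
        viewport.y        = (float)height;   /* origin moves to the bottom edge  */
        viewport.width    = (float)width;
        viewport.height   = -(float)height;  /* negative height flips the Y axis */
        viewport.minDepth = 0.0f;
        viewport.maxDepth = 1.0f;
        vkCmdSetViewport(cmd, 0, 1, &viewport);
    }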

DX12 validation error spam


Recommended Posts

Hi,

I was wondering if I was the only one getting spammed with this message:

D3D12 ERROR: ID3D12CommandAllocator::Reset: A command allocator is being reset before previous executions associated with the allocator have completed.

 

I am only running the basic HelloWorld sample from Microsoft. I also get awful crashes (the full fatal reboot-your-PC kind) when using the graphics debugger.

 

I have an NVIDIA mobile adapter (970M) and note that none of this happens if I use the Intel integrated GPU.


I am using driver 364.72.

I did a few more tests, and it seems that on top of all the other issues mentioned above, the "Present" call does not seem to respect the command list fences, as I get very unusual tearing, as if the back buffer were presented midway through the draw calls. I know it sounds silly, but I can see the clear color rip through the triangle shown on screen.

I was not able to take a screen capture of the issue. It reminds me of the good old days with mode 13h, where you could render directly to the screen and see the pixels show up as they were updated in memory.

 


It could very possibly be just a driver issue. The D3D12 API is still quite new, and the hardware vendors are still very much tweaking their drivers for D3D12. That being said (and not having seen the Microsoft HelloWorld example), is it possible that the CPU and GPU aren't being synchronized before the allocator is reset? (Which is what your error message strongly suggests.)

 

Perhaps some flow control is preventing the function that signals the fence on the GPU (and the CPU wait on its event) from ever being called? I'd step through the code, pay special attention when you reach the synchronization function or code, and see if something is amiss.
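For reference, the kind of wait that has to complete before the allocator is reset looks roughly like the sketch below, written in plain C with COBJMACROS to match the code style earlier on this page. The names (queue, fence, fence_event, fence_value, allocator) are placeholders, not the sample's actual variables:

    #define COBJMACROS
    #include <Windows.h>
    #include <d3d12.h>

    /* Hypothetical helper: block the CPU until the GPU has finished the work
       submitted so far, then reset the command allocator. */
    static HRESULT wait_then_reset(ID3D12CommandQueue *queue,
                                   ID3D12Fence *fence,
                                   HANDLE fence_event,
                                   UINT64 *fence_value,
                                   ID3D12CommandAllocator *allocator)
    {
        /* GPU-side signal queued after the previously submitted command lists. */
        UINT64 wait_value = ++(*fence_value);
        HRESULT hr = ID3D12CommandQueue_Signal(queue, fence, wait_value);
        if (FAILED(hr)) return hr;

        /* CPU-side wait until the GPU has passed that signal. */
        if (ID3D12Fence_GetCompletedValue(fence) < wait_value)
        {
            hr = ID3D12Fence_SetEventOnCompletion(fence, wait_value, fence_event);
            if (FAILED(hr)) return hr;
            WaitForSingleObject(fence_event, INFINITE);
        }

        /* Only now is it safe to reset the allocator; resetting earlier is what
           produces the "reset before previous executions ... have completed" error. */
        return ID3D12CommandAllocator_Reset(allocator);
    }

If the sample already does the equivalent of this and the message still appears, that does point back toward the driver.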

 

P.S. That driver, I believe, is bleeding edge for NVIDIA. If my previous suggestions get you nowhere, it wouldn't hurt to consider rolling back to an older version of the driver.

 

Marcus

Edited by markypooch


I believe it's a driver bug.

The message indicates an error that would normally throw an exception, which it does not. The call to Reset actually returns S_OK and still produces the message.
I've looked at the code and compared it to my own, and I believe the sample is fine. I also ran it with WARP without any issue.
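As far as I know, the debug layer only reports through the info queue and does not turn its messages into exceptions or failing HRESULTs, which would explain Reset still returning S_OK. If you want the debugger to break exactly where the message is generated, something along these lines should work (plain C with COBJMACROS; device stands in for your already-created ID3D12Device, and the QueryInterface only succeeds while the debug layer is enabled):

    #define COBJMACROS
    #include <Windows.h>
    #include <d3d12.h>
    #include <d3d12sdklayers.h>

    /* Hypothetical helper: ask the debug layer to break into the debugger when
       it logs an error, instead of only printing the message. */
    static void break_on_d3d12_errors(ID3D12Device *device)
    {
        ID3D12InfoQueue *info_queue = NULL;
        if (SUCCEEDED(ID3D12Device_QueryInterface(device, &IID_ID3D12InfoQueue,
                                                  (void **)&info_queue)))
        {
            ID3D12InfoQueue_SetBreakOnSeverity(info_queue, D3D12_MESSAGE_SEVERITY_CORRUPTION, TRUE);
            ID3D12InfoQueue_SetBreakOnSeverity(info_queue, D3D12_MESSAGE_SEVERITY_ERROR, TRUE);
            ID3D12InfoQueue_Release(info_queue);
        }
    }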

 

Looks like I'll have to live with my Intel GPU until they improve the drivers.

 

Thanks for the answers!


I can confirm such issues with the GeForce GTX 980M too. They show up when:

  • V-Sync is enabled (if it is disabled, there aren't any issues).
  • The render targets are running on the NVIDIA GPU instead of the Intel iGPU.

On desktop GeForce GTX 980s and on Intel iGPUs everything works fine.


What builds of Windows are you both running?

 

If you run 'winver' from a command prompt, it'll say something like "OS Build 14318.1000"; what's that number for you?

 

I've got a GTX 970 here that I've just put on the 364.72 driver, and I get no such warnings from the debug layer. I'm compiling/running on the 10586 Windows SDK too; could you have some out-of-date bits?


Hi,

 

Windows 10 x64 Build 10586 (End-User Build)

Current NVIDIA Driver 364.72

But this bug has been around since the 350 driver series, as I remember.

 

But this issue seems to be specific to NVIDIA mobile GPUs.


My mistake, I missed the 'm' in the original post and thought we had a case of one desktop and one mobile part with the issue. I'll raise it with the team and see if we can have someone chase this up.
