Shnoutz

DX12 validation error spam


Hi,

I was wondering if I was the only one getting spammed with this message:

D3D12 ERROR: ID3D12CommandAllocator::Reset: A command allocator is being reset before previous executions associated with the allocator have completed.

 

I am only running the basic HelloWorld sample from Microsoft. I also get awful crashes (the full fatal, reboot-your-PC kind) when using the graphics debugger.

 

I have an NVIDIA mobile adapter (GeForce GTX 970M), and note that none of this happens if I use the Intel integrated GPU.
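
For what it's worth, the message can be turned into a debugger break so you can see exactly where it fires. A minimal sketch using the standard ID3D12InfoQueue interface (the "device" variable is assumed to be an already-created ID3D12Device with the debug layer enabled):

#include <d3d12.h>
#include <d3d12sdklayers.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Break into the debugger the moment an ERROR-severity validation message
// (like the allocator-reset one above) is emitted.
void BreakOnValidationErrors(const ComPtr<ID3D12Device>& device)
{
    ComPtr<ID3D12InfoQueue> infoQueue;
    if (SUCCEEDED(device.As(&infoQueue))) // QI only succeeds with the debug layer on
    {
        infoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_ERROR, TRUE);
    }
}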


I am using driver 364.72.

I did a few more tests, and it seems that on top of all the other issues mentioned above, the Present call does not respect command-list fences: I get very unusual tearing, as if the back buffer were presented midway through the draw calls. I know it sounds silly, but I can see the clear color rip through the triangle shown on screen.

I was not able to take a screen capture of the issue. It reminds me of the good old days of mode 13h, where you could render directly to the screen and see the pixels show up as they were updated in memory.

 


It could very possibly be just a driver issue. The D3D12 API is still quite new, and the hardware vendors are still very much tweaking their drivers for D3D12. That being said (and not having seen Microsoft's HelloWorld example), is it possible that the CPU and GPU aren't being synchronized before the allocator is reset? That's what your error message strongly suggests.

 

Could some flow-control path be skipping the call that signals the fence on the GPU, or keeping the CPU wait on the fence event from ever being reached? I'd step through the code (and pay special attention when you reach the synchronization function or code) and see if something is amiss.
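
Something along the lines of the per-frame wait the Microsoft samples do before resetting. A rough sketch of that pattern (member names and the ThrowIfFailed helper follow the sample's conventions, not necessarily your code):

// Wait until the GPU has finished all work submitted so far; only after
// this returns is it safe to call m_commandAllocator->Reset().
void WaitForPreviousFrame()
{
    const UINT64 fence = m_fenceValue;
    ThrowIfFailed(m_commandQueue->Signal(m_fence.Get(), fence));
    m_fenceValue++;

    // If the GPU hasn't reached the signal yet, block the CPU on the fence event.
    if (m_fence->GetCompletedValue() < fence)
    {
        ThrowIfFailed(m_fence->SetEventOnCompletion(fence, m_fenceEvent));
        WaitForSingleObject(m_fenceEvent, INFINITE);
    }
}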

 

P.S. I believe that driver is bleeding edge for NVIDIA. If my previous suggestions get you nowhere, it wouldn't hurt to consider rolling back to an older version of the driver.

 

Marcus



I believe it's a driver bug.

The message indicates an error that would normally cause the call to fail (and the sample to throw an exception), but it does not: the call to Reset actually returns S_OK and still produces the message.
I've looked at the code and compared it to my own, and I believe the sample is fine. I also ran it with WARP without any issue.
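
For reference, running it with WARP just means creating the device on the software adapter instead of the NVIDIA one. A sketch of the usual path (assuming "factory" is an IDXGIFactory4, with the sample's ThrowIfFailed for error handling):

#include <dxgi1_4.h>
#include <d3d12.h>

// Create the D3D12 device on the WARP (software rasterizer) adapter.
ComPtr<IDXGIAdapter> warpAdapter;
ThrowIfFailed(factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter)));

ComPtr<ID3D12Device> device;
ThrowIfFailed(D3D12CreateDevice(warpAdapter.Get(),
                                D3D_FEATURE_LEVEL_11_0,
                                IID_PPV_ARGS(&device)));
// With this device, the allocator-reset spam does not appear.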

 

Looks like I'll have to live with my Intel GPU until they improve the drivers.

 

Thanks for the answers!


I can confirm such issues with the GeForce GTX 980M too. They occur only when:

  • V-Sync is enabled (if it is disabled, there aren't any issues; see the Present sketch below).
  • The render targets are running on the NVIDIA GPU instead of the Intel iGPU.

On desktop GeForce GTX 980s and on Intel iGPUs everything works fine.
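
To be concrete about the V-Sync condition: by V-Sync I mean the sync interval passed to Present. A sketch (the swap-chain variable name is illustrative):

// Sync interval 1 = V-Sync on: the validation errors appear on the 980M.
swapChain->Present(1, 0);

// Sync interval 0 = V-Sync off: no issues.
// swapChain->Present(0, 0);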


What builds of Windows are you both running?

 

If you run 'winver' from a command prompt, it'll say something like "OS Build 14318.1000". What's that number?

 

I've got a GTX 970 here that I've just put on the 364.72 driver, and I get no such warnings from the debug layer. I'm compiling/running against the 10586 Windows SDK too; you might have some out-of-date bits?
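
For anyone comparing setups: the warnings in question come from the debug layer, which has to be enabled before device creation. The standard pattern (not specific to any SDK build):

#include <d3d12.h>
#include <d3d12sdklayers.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Enable the D3D12 debug layer; this is what emits the validation messages.
// Must be called before D3D12CreateDevice.
ComPtr<ID3D12Debug> debugController;
if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController))))
{
    debugController->EnableDebugLayer();
}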


Thanks for letting us know. We think you're hitting an OS bug that we're in the process of tracking down and fixing, which is specific to laptop configurations with an integrated and discrete GPU. There's a pretty good chance the error will go away if you launch your app on an external monitor too.

