Yu Liu

DX12 No way to debug D3D12's HLSL code with Visual Studio 2015


I'm using the built-in capture mechanism in VS2015 (VC2015, to be precise). When I tried to open the UI for shader tracing, it showed "such functionality is not supported for DX12".

 

Is that true? If so, does Microsoft really not support its flagship API as well as the existing D3D11?


 

> Is that true? If so, does Microsoft really not support its flagship API as well as the existing D3D11?

I haven't tried it yet.

You could always try RenderDoc in the meantime. It's better than the MS tools anyway.

 

Thanks. I briefly read through RenderDoc's introduction; it says it supports D3D11 only.

 

BTW, VS2015 actually can debug DX12 shader code if you set the device feature level to 11.
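
A minimal sketch of what I mean by that workaround, assuming a standard D3D12 setup (the function name and first-adapter selection here are illustrative, and error handling is omitted):

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

// Create the D3D12 device at feature level 11_0 instead of 12_x so that
// VS2015 Graphics Diagnostics will allow shader debugging on the capture.
ComPtr<ID3D12Device> CreateDeviceAtFeatureLevel11()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter); // first hardware adapter, for brevity

    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(adapter.Get(),
                      D3D_FEATURE_LEVEL_11_0, // the workaround: 11_0, not 12_0
                      IID_PPV_ARGS(&device));
    return device;
}
```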

 

Though neither is perfect.


> I'm using the built-in capture mechanism in VS2015 (VC2015, to be precise). When I tried to open the UI for shader tracing, it showed "such functionality is not supported for DX12".
>
> Is that true? If so, does Microsoft really not support its flagship API as well as the existing D3D11?

 

D3D12 is still young -- vendors have only gotten validated drivers out within the past handful of months. Tools always lag behind; it just takes time. And D3D12 is fundamentally different from 11, not only in the API but also in the way you would want to expose API usage in the Graphics Diagnostics UI. It's a lot of work on their plate.

Edited by Ravyne


I can access resource data, but I can't do shader debugging like before. Trying to debug a compute shader works great:

 

http://i.imgur.com/Dw1Xx1K.png

 

Since the data visualizers seem okay, that's how I've been working.

 

https://www.youtube.com/watch?v=VOTJYzOrdxY

MS demonstrated everything but the shader debugging.

 

But we can debug on a DX12 device at feature level 11.


 

> I'm using the built-in capture mechanism in VS2015 (VC2015, to be precise). When I tried to open the UI for shader tracing, it showed "such functionality is not supported for DX12".
>
> Is that true? If so, does Microsoft really not support its flagship API as well as the existing D3D11?
>
> D3D12 is still young -- vendors have only gotten validated drivers out within the past handful of months. Tools always lag behind; it just takes time. And D3D12 is fundamentally different from 11, not only in the API but also in the way you would want to expose API usage in the Graphics Diagnostics UI. It's a lot of work on their plate.

 

It truly takes time for vendors to follow. But the thing is, NVIDIA and AMD have already offered their DX12 drivers, while Microsoft, the creator of DX12 itself, hasn't finished its dev tools.


Use the IHVs' debugging tools. On AMD cards, GPU PerfStudio and CodeXL have pretty nice D3D12 support; Intel GPA support is in alpha, but that's better than nothing, while NVIDIA Nsight support isn't that great right now.

It's a shame that Microsoft, the first party, is lagging behind its followers.

Edited by Yu Liu

http://zerotutorials.com/DirectX12/Tutorial04

You can definitely debug the shaders; I do it all the time in D3D12 and it's pretty amazing. Once you open Graphics Diagnostics from the Debug menu, capture a frame by clicking Capture or pressing Print Screen. Next, double-click that frame and go to the pipeline view. Once in the pipeline view, there will be a play button under the vertex shader stage and another under the pixel shader stage. You can also step through each individual vertex by clicking the play button next to the vertex you want to debug through the shader.


> http://zerotutorials.com/DirectX12/Tutorial04
>
> You can definitely debug the shaders; I do it all the time in D3D12 and it's pretty amazing. Once you open Graphics Diagnostics from the Debug menu, capture a frame by clicking Capture or pressing Print Screen. Next, double-click that frame and go to the pipeline view. Once in the pipeline view, there will be a play button under the vertex shader stage and another under the pixel shader stage. You can also step through each individual vertex by clicking the play button next to the vertex you want to debug through the shader.

 

Well, I think I see what you mean now; I may have been wrong. The "only supports DX11" error message may not come from the shader-debugging sheet (the UI looks extremely complex).

 

However, one thing I still want to know: when I debug vertex shader code for a draw call with, say, 6 vertices, I can only debug it with the first vertex. Is there really no way to jump to the execution of the other vertices?
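
For context, a minimal sketch of the kind of draw call I mean (commandList and vertexBufferView are assumed to be set up elsewhere; the names are illustrative):

```cpp
// The vertex shader runs once per vertex, so six times here, but the
// debugger appears to drop into the first invocation only.
// commandList is an ID3D12GraphicsCommandList*; vertexBufferView is a
// D3D12_VERTEX_BUFFER_VIEW prepared elsewhere (illustrative sketch).
commandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
commandList->IASetVertexBuffers(0, 1, &vertexBufferView);
commandList->DrawInstanced(6, 1, 0, 0); // 6 vertices, 1 instance
```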


First, open the graphics pipeline view.

[sharedmedia=gallery:images:7352]

Then click on the GREEN PLAY BUTTON to the left of the vertex (VTX) you want to debug.

[sharedmedia=gallery:images:7351]

Then debug the shader vertex like you would normal code.

[sharedmedia=gallery:images:7353]
 

 

 

