DX12 [D3D12] Render wireframe on top of solid rendering

Recommended Posts

I would like to render a wireframe on top of the normal solid rendering in DirectX 12.

Right now I am using Microsoft's MiniEngine as a base to experiment with: https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/MiniEngine

 

I initially needed to draw the wireframe to debug my understanding of the structure of the MiniEngine's vertex and index buffers, so I created a separate vertex and index buffer for the wireframe (I know this is not necessary). I constructed the index buffer to render a line list of all the edges of the model, along the lines of the sketch below.
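A minimal sketch of that edge extraction (a hypothetical helper, assuming 16-bit indices; edges shared by two triangles are emitted twice, which is harmless for debugging):

#include <cstdint>
#include <vector>

// Build a line-list index buffer from a triangle-list one by emitting the
// three edges (a-b, b-c, c-a) of every triangle.
std::vector<uint16_t> BuildWireframeIndices(const std::vector<uint16_t>& tris)
{
    std::vector<uint16_t> lines;
    lines.reserve(tris.size() * 2); // 3 triangle indices become 6 line indices
    for (size_t i = 0; i + 2 < tris.size(); i += 3)
    {
        const uint16_t a = tris[i], b = tris[i + 1], c = tris[i + 2];
        lines.insert(lines.end(), { a, b, b, c, c, a });
    }
    return lines;
}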

This technique gave me the result I was expecting, which can be seen below. In this image, the wireframe is correctly drawn on top of the solid rendering:

[attachment=30253:ModelWireframeIndexBufferLineList.jpg]

 

Now I want to make a more optimized version that saves space by using the original vertex/index buffers, rendering the wireframe just by changing the rasterizer's fill mode:

// Start from the default rasterizer state and switch the fill mode to wireframe,
// reusing the original triangle-list vertex/index buffers.
CD3DX12_RASTERIZER_DESC l_cWireframeRasterizer(D3D12_DEFAULT);
l_cWireframeRasterizer.FillMode = D3D12_FILL_MODE_WIREFRAME;
m_gWireframePSO.SetRasterizerState(l_cWireframeRasterizer);

The problem with this can be seen below. It only seems to render the wireframe along the silhouette edges of objects, which looks like a depth-test issue:

[attachment=30254:ModelWireframeFillModeTriangleList.jpg]

 

I'm not sure why I am having this problem, because I am using the same depth test function in both the index-buffer/line-list technique and the wireframe-fill-mode/triangle-list technique. Both use D3D12_COMPARISON_FUNC_GREATER_EQUAL.
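For reference, the depth state both PSOs use would look something along these lines (a sketch with the CD3DX12 helpers; the write mask is an assumption, since a debug wireframe pass typically shouldn't write depth):

CD3DX12_DEPTH_STENCIL_DESC l_cDepthState(D3D12_DEFAULT);
l_cDepthState.DepthEnable = TRUE;
l_cDepthState.DepthFunc = D3D12_COMPARISON_FUNC_GREATER_EQUAL; // reversed-Z style comparison
l_cDepthState.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ZERO;    // assumption: wireframe pass reads but doesn't write depth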

 

Since this seems to be a depth fighting issue, I tried setting depth bias like:

l_cWireframeRasterizer.DepthBias = 10;

The depth bias doesn't seem to have any impact. I tried values of 1, 10, 100, 1000, -1, and -10. I think the correct value should be positive, but at that point I was just trying things.

 

Does anyone know how to fix this issue? How to get the wireframe to render correctly on top of the solid rendering using the wireframe fill mode and the original vertex/index buffers?

Does the depth bias not work when you render with a wireframe fill mode?

 

To make it easier to see the difference between the two techniques, below are screenshots of just the wireframe, rendered without the solid pass.

The expected result rendered with the line list and second index buffer:

[attachment=30255:WireframeIndexBufferLineList.jpg]

 

The incorrect result rendered with the original vertex/index buffers and the fill mode set to wireframe:

[attachment=30256:WireframeFillModeTriangleList.jpg]

 

Some additional information that might help:

In the MiniEngine the depth buffer is rendered in a separate pass. So the description of all the rendering passes is as follows:

Pass 1: Depth

Pass 2: Shadow

Pass 3: Color

Pass 4: Wireframe

 

Thanks in advance for any help.

 

EDIT:

Depth Buffer Rendering Example:

[attachment=30278:DepthExample.jpg]

 

The depth bias value is documented here. As it explains, the interpretation depends on the format of the depth buffer.
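For reference, the formula that page gives for the effective bias is roughly

Bias = DepthBias * r + SlopeScaledDepthBias * MaxDepthSlope

where r is the smallest representable increment of the depth buffer format (for example 1/2^24 for a 24-bit UNORM buffer); for a floating-point depth buffer, r instead depends on the exponent of the maximum depth in the primitive, so the same DepthBias value produces very different offsets depending on the format.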

You mentioned that you're using GREATER_EQUAL for your depth testing...does this mean that you're using a reversed depth range, where your near and far clip planes are swapped?

Just to be sure, did you check that you're not accidentally clearing the backbuffer between the solid and wireframe rendering?


I had the same problem, using DX11 though.

 

I got wireframes to appear correctly by using Bias=-1000, BiasClamp=0.0001f, SlopeScaledBias=0.01f :)

The values you input are dependent on your ZNear/ZFar I think, so try messing around some more with your bias settings (including Clamp and Slope) and see if you can reach something that looks right?
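Translated to the D3D12 rasterizer description used earlier in this thread, that suggestion would look roughly like this (the exact numbers depend on your projection and depth format, so treat them as starting points, not known-good values):

CD3DX12_RASTERIZER_DESC l_cWireframeRasterizer(D3D12_DEFAULT);
l_cWireframeRasterizer.FillMode = D3D12_FILL_MODE_WIREFRAME;
l_cWireframeRasterizer.DepthBias = -1000;            // integer bias, scaled by the depth format
l_cWireframeRasterizer.DepthBiasClamp = 0.0001f;     // caps the total bias applied
l_cWireframeRasterizer.SlopeScaledDepthBias = 0.01f; // extra bias for steeply sloped triangles
m_gWireframePSO.SetRasterizerState(l_cWireframeRasterizer);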

Thanks for your replies guys.
 
Thanks MJP, I'll have a look at the Depth Bias documentation more attentively. I had just skimmed it before because it was for DirectX 11, but I don't think there is a DirectX 12 version.
 
No, I don't think the near and far planes are swapped, although this is set up by the MiniEngine, so I would have to find where it's done to make sure. But I think the depth function compares values in the depth buffer, and in the depth buffer far away objects have a lower value (darker/black) while closer objects have a higher value (lighter/white). Below I rendered the depth buffer to the screen as an example. It's in black and red instead of black and white because I didn't change the shaders, but you can see that closer objects are lighter than far away objects:
 
Well, I can't seem to upload images in a reply, so I added the image to the first post: Depth Buffer Rendering Example.
 
So D3D12_COMPARISON_FUNC_GREATER_EQUAL will draw something if it has a higher value (is closer) than what is previously in the depth buffer. That's my understanding anyway.
 
Thanks for your reply cozzie. No, I don't think I am accidentally clearing the backbuffer (colorbuffer) between the solid rendering pass and the wireframe rendering pass. Just as a test I tried clearing the colorbuffer between the two passes, and it gives me the same result as if I hadn't done the solid pass at all, like the image above with the wireframe on the black background.
 
Thanks for your reply vinterberg. I had high hopes that these values would fix the problem, but unfortunately they did not. I'll look into the documentation more closely and experiment with different values.



With a "standard" projection, your depth buffer will have a value of 0.0 at the near clipping plane and 1.0 at the far clipping plane. Based on your description and the image that you posted, you're definitely using a "reversed" projection. This is fine, in fact it gives you better precision for floating point depth buffers. The reason I asked, is because it means that you'll want to use a *positive* depth bias instead of the normal negative bias.

If you can't get the depth bias to work, you could always do something custom in the pixel shader. For instance, you could output SV_DepthGreaterEqual from the pixel shader and add a small bias directly in the shader. Alternatively, you can do a "manual" depth test using alpha blending or discard: for each pixel, read in the depth buffer value, compare it against the depth of the pixel being shaded, and set alpha to 1 if the depth is close enough within some threshold (or set it to 0 otherwise).
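A minimal HLSL sketch of the first suggestion, assuming a reversed-Z depth buffer (so "closer" means a larger depth value); the entry point name and bias amount are hypothetical:

float4 WireframePS(float4 pos : SV_Position,
                   out float biasedDepth : SV_DepthGreaterEqual) : SV_Target
{
    // Push the wireframe slightly toward the camera. Writing SV_DepthGreaterEqual
    // (rather than SV_Depth) promises the hardware we only ever increase depth,
    // so it can keep some of its early-Z optimizations.
    biasedDepth = pos.z + 0.0001f; // bias amount is a placeholder to tune
    return float4(1.0f, 1.0f, 1.0f, 1.0f); // wireframe color
}

And a sketch of the "manual" depth test variant, assuming the scene depth buffer is bound as a shader resource (sceneDepth and the threshold are hypothetical):

Texture2D<float> sceneDepth : register(t0);

float4 WireframePS(float4 pos : SV_Position) : SV_Target
{
    // Keep the fragment only if its depth is close enough to the scene depth
    // at this pixel; otherwise discard it.
    float scene = sceneDepth.Load(int3(pos.xy, 0));
    if (abs(pos.z - scene) > 0.0001f) // threshold is a placeholder to tune
        discard;
    return float4(1.0f, 1.0f, 1.0f, 1.0f);
}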


Alright, I fixed it. When I added a quick and dirty depth bias in the vertex shader, I realized the wireframe was being rendered for back-facing triangles only and not front-facing ones. To fix it, all I had to do was correct the winding order the rasterizer uses by setting FrontCounterClockwise to TRUE:

CD3DX12_RASTERIZER_DESC l_cWireframeRasterizer(D3D12_DEFAULT);
l_cWireframeRasterizer.FillMode = D3D12_FILL_MODE_WIREFRAME;
// Match the MiniEngine's winding convention: counter-clockwise triangles are front-facing.
l_cWireframeRasterizer.FrontCounterClockwise = TRUE;

FrontCounterClockwise = TRUE is the same value the MiniEngine uses, but it is different from the default value set by D3D12_DEFAULT (FALSE).

 

Thanks for your help guys.
