DX11 Depth buffer and depth stencil state comparison function confusion


I was wondering if anyone could explain the depth buffer and the depth stencil state comparison function to me, as I'm a little confused.

So I have set up a depth stencil state where the DepthFunc is set to D3D11_COMPARISON_LESS, but what am I actually comparing here? What is actually written to the buffer, the depth of the pixel that should show up in front?
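For context, the state is created along these lines. This is just a rough sketch, not my exact code; device and deviceContext stand in for my real D3D11 objects:

// Rough sketch of the depth stencil state setup (illustrative, stencil left disabled)
D3D11_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable    = TRUE;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc      = D3D11_COMPARISON_LESS; // the comparison in question

ID3D11DepthStencilState* dsState = nullptr;
device->CreateDepthStencilState(&dsDesc, &dsState);
deviceContext->OMSetDepthStencilState(dsState, 0);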

I have these 2 quad faces, a Red Face and a Blue Face. The Blue Face is further away from the viewer, with a Z value of -100.0f, while the Red Face is close to the viewer, with a Z value of 0.0f.

When DepthFunc is set to D3D11_COMPARISON_LESS, the Red Face shows up in front of the Blue Face, like it should based on the Z values. BUT if I change the DepthFunc to D3D11_COMPARISON_LESS_EQUAL, the Blue Face shows in front of the Red Face. This does not make sense to me; I would think that with the function set to D3D11_COMPARISON_LESS_EQUAL, the Red Face would still show up in front of the Blue Face, since the Red Face's Z value is still closer to the viewer.

Am I thinking of this comparison function all wrong?

Vertex data, just in case:


// Vertex data that makes up the 2 faces
Vertex verts[] = {

    // Red face
    Vertex(Vector4(0.0f,   0.0f,   0.0f), Color(1.0f, 0.0f, 0.0f)),
    Vertex(Vector4(100.0f, 100.0f, 0.0f), Color(1.0f, 0.0f, 0.0f)),
    Vertex(Vector4(100.0f, 0.0f,   0.0f), Color(1.0f, 0.0f, 0.0f)),
    Vertex(Vector4(0.0f,   0.0f,   0.0f), Color(1.0f, 0.0f, 0.0f)),
    Vertex(Vector4(0.0f,   100.0f, 0.0f), Color(1.0f, 0.0f, 0.0f)),
    Vertex(Vector4(100.0f, 100.0f, 0.0f), Color(1.0f, 0.0f, 0.0f)),

    // Blue face
    Vertex(Vector4(0.0f,   0.0f,   -100.0f), Color(0.0f, 0.0f, 1.0f)),
    Vertex(Vector4(100.0f, 100.0f, -100.0f), Color(0.0f, 0.0f, 1.0f)),
    Vertex(Vector4(100.0f, 0.0f,   -100.0f), Color(0.0f, 0.0f, 1.0f)),
    Vertex(Vector4(0.0f,   0.0f,   -100.0f), Color(0.0f, 0.0f, 1.0f)),
    Vertex(Vector4(0.0f,   100.0f, -100.0f), Color(0.0f, 0.0f, 1.0f)),
    Vertex(Vector4(100.0f, 100.0f, -100.0f), Color(0.0f, 0.0f, 1.0f)),
};

 


Screen-space (post-projection / NDC) Z goes from 0 to 1 in D3D (or 0 to w in clip space, before the perspective divide). Your vertex shader transforms your vertex data into this space, usually using a projection matrix.

Maybe your projection matrix is squashing your Z range so that both triangles end up at NDC z = 0.
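As a rough sketch of where the value being tested comes from (illustrative names, assuming the standard pipeline):

// clipPos is the SV_Position the vertex shader outputs (clip space).
// The hardware does the perspective divide; D3D expects the result in [0, 1].
float ndcZ = clipPos.z / clipPos.w;

// The viewport then maps NDC z into the depth buffer's range:
float depth = viewportMinDepth + ndcZ * (viewportMaxDepth - viewportMinDepth);

// If the projection collapses z (or the viewport depth range is degenerate),
// every triangle lands on the same depth value and the comparison always ties.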

On 11/18/2017 at 1:33 AM, noodleBowl said:

but what am I actually comparing here?

You're comparing the value that's already in the depth buffer to the z value of the fragment you want to write to the screen.

Edit: did I get that backwards...
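In pseudocode the LESS test is roughly this (just a sketch, not real API code; the incoming fragment's depth is on the left of the comparison):

// Per-fragment depth test with D3D11_COMPARISON_LESS (conceptual sketch)
float stored = depthBuffer[y][x];        // value already in the depth buffer
if (fragmentDepth < stored)              // new fragment vs. stored value
{
    colorBuffer[y][x] = fragmentColor;   // test passed: fragment is drawn
    depthBuffer[y][x] = fragmentDepth;   // and written, if DepthWriteMask is ALL
}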


9 hours ago, Infinisearch said:

You're comparing the value that's already in the depth buffer to the z value of the fragment you want to write to the screen.

This is what I thought, which was why I was really confused. It turns out my problem had nothing to do with my depth buffer or depth stencil state. The viewport has MinDepth and MaxDepth properties, and I never set these values for my viewport. After setting them, everything seems to work correctly depth-wise.
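In case it helps anyone else, the fix amounted to something like this (backBufferWidth, backBufferHeight, and deviceContext stand in for your own objects):

// Viewport with an explicit depth range. If MinDepth/MaxDepth are left
// uninitialized (e.g. both 0), every fragment can map to the same depth.
D3D11_VIEWPORT vp = {};
vp.TopLeftX = 0.0f;
vp.TopLeftY = 0.0f;
vp.Width    = static_cast<float>(backBufferWidth);
vp.Height   = static_cast<float>(backBufferHeight);
vp.MinDepth = 0.0f; // standard depth range
vp.MaxDepth = 1.0f;
deviceContext->RSSetViewports(1, &vp);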

 

Side question about the Z near and Z far planes for projection matrices. If I have a right-handed coord system, so positive Z points towards the viewer (or me) and negative Z goes further into the screen, does this mean my Z near plane should be a positive value and my Z far plane should be a negative value? Does this change between different projection matrix types, such as perspective vs orthographic?


That depends on the function or library that you're using for creating your projection matrix. If you're using D3DX or DirectXMath functions for creating a "right-handed" perspective projection matrix, then you'll specify your near and far clipping plane parameters as the absolute distance from the camera to the plane. So they would both be positive, typically with zFar > zNear (although you can flip them if you want a reversed Z buffer, which can be useful when working with floating-point depth buffer formats). The same applies to the orthographic projections in both of those libraries. If you're using a different library, you should check the documentation to see what values it expects, although it's very likely to be the same as the D3DX/DirectXMath functions.
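For example, with DirectXMath (the actual values here are just illustrative):

#include <DirectXMath.h>
using namespace DirectX;

// Both distances are positive, measured from the camera along the view direction.
XMMATRIX proj = XMMatrixPerspectiveFovRH(
    XM_PIDIV4,         // vertical field of view (45 degrees)
    1280.0f / 720.0f,  // aspect ratio
    0.1f,              // zNear: positive distance to the near plane
    1000.0f);          // zFar:  positive distance to the far plane

// The orthographic equivalent follows the same convention:
XMMATRIX ortho = XMMatrixOrthographicRH(800.0f, 600.0f, 0.1f, 1000.0f);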

FYI the documentation for the D3DX matrix functions can be helpful to look at, since they show you exactly how the matrix is constructed (you can also look at the actual implementations for the DirectXMath functions, since it's all in inline header files). If  you look at the RH projection functions, you'll see that they essentially end up negating the Z value so that it ends up positive in clip space, since this is what D3D expects.
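For reference, the docs give the construction for D3DXMatrixPerspectiveFovRH as roughly the following (quoting from memory, row-major with row vectors). The -1 in the third row is what negates Z:

yScale = cot(fovY / 2)
xScale = yScale / aspectRatio

xScale    0         0                    0
0         yScale    0                    0
0         0         zf / (zn - zf)      -1
0         0         zn * zf / (zn - zf)  0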


