# [DX11] Problem with depth buffer getting no writes

## Recommended Posts

I already posted this on the beginners forum, but it seems to me it is out of place there... Since I wrote to a couple of forum mods and the thread hasn't been moved, I'll repost it here.
Link to the old post: [url="http://www.gamedev.net/topic/607898-directx-11-depth-buffer-problem/"]http://www.gamedev.net/topic/607898-directx-11-depth-buffer-problem/[/url] (if any mod would be so kind as to delete that).

Hi, I'm kinda new to these forums, so I'm not sure if this is the correct board for posting this, but I have a problem with my depth buffer not working (properly)...

I have set it up like this (following the code from [url="http://www.rastertek.com/dx11tut03.html"]http://www.rastertek.com/dx11tut03.html[/url]):

[code]
// Initialize the description of the depth buffer.
D3D11_TEXTURE2D_DESC dbDesc;
ZeroMemory(&dbDesc, sizeof(dbDesc));

// Set up the description of the depth buffer.
dbDesc.Width = SCREEN_WIDTH;
dbDesc.Height = SCREEN_HEIGHT;
dbDesc.MipLevels = 1;
dbDesc.ArraySize = 1;
dbDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dbDesc.SampleDesc.Count = 1;
dbDesc.SampleDesc.Quality = 0;
dbDesc.Usage = D3D11_USAGE_DEFAULT;
dbDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
dbDesc.CPUAccessFlags = 0;
dbDesc.MiscFlags = 0;

// Create the texture for the depth buffer using the filled out description.
hr = m_mainPtrs.m_pDev->CreateTexture2D(&dbDesc, NULL, &m_depthStencilBuffer);
if (FAILED(hr)) {SHOWERRORMSG("Creating depth stencil buffer target failed"); return FALSE;}

// Initialize the description of the stencil state.
D3D11_DEPTH_STENCIL_DESC dsDesc;
ZeroMemory(&dsDesc, sizeof(dsDesc));

// Set up the description of the stencil state.
dsDesc.DepthEnable = true;
dsDesc.DepthFunc = D3D11_COMPARISON_LESS;

dsDesc.StencilEnable = false;

// Stencil operations if pixel is front-facing.
dsDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
dsDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
dsDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
dsDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;

// Stencil operations if pixel is back-facing.
dsDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
dsDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR;
dsDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
dsDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;

// Create the depth stencil state.
hr = m_mainPtrs.m_pDev->CreateDepthStencilState(&dsDesc, &m_depthStencilState);
if (FAILED(hr)) {SHOWERRORMSG("Creating depth stencil state description failed"); return FALSE;}

// Set the depth stencil state.
m_mainPtrs.m_pDevCon->OMSetDepthStencilState(m_depthStencilState, 1);

// Initialize the description of the depth stencil view.
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc;
ZeroMemory(&dsvDesc, sizeof(dsvDesc));

// Set up the depth stencil view description.
dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
dsvDesc.Texture2D.MipSlice = 0;

// Create the depth stencil view.
hr = m_mainPtrs.m_pDev->CreateDepthStencilView(m_depthStencilBuffer, &dsvDesc, &m_mainPtrs.m_depthStencilView);
if (FAILED(hr)) {SHOWERRORMSG("Creating depth stencil view description failed"); return FALSE;}

// Bind the render target view and depth stencil buffer to the output render pipeline.
m_mainPtrs.m_pDevCon->OMSetRenderTargets(1, &m_mainPtrs.m_pBackbuffer, m_mainPtrs.m_depthStencilView);
[/code]
Of course I clear the depth buffer with:
[code]
mptrs.m_pDevCon->ClearDepthStencilView(mptrs.m_depthStencilView, D3D11_CLEAR_DEPTH|D3D11_CLEAR_STENCIL, 1.0f, 0);
[/code]

Now, strangely enough, depth testing isn't working... The code compiles correctly. I also checked whether the DXGI_FORMAT_D24_UNORM_S8_UINT format is supported for D3D11_FORMAT_SUPPORT_DEPTH_STENCIL, and it is. I further tried clearing only the depth part of the buffer, turning the stencil test off, and various depth functions.
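(For reference, the support check looks roughly like this; a minimal sketch using ID3D11Device::CheckFormatSupport with the same device pointer as in the code above:)
[code]
// Sketch of the format support check (same device pointer as above).
UINT support = 0;
hr = m_mainPtrs.m_pDev->CheckFormatSupport(DXGI_FORMAT_D24_UNORM_S8_UINT, &support);
if (SUCCEEDED(hr) && (support & D3D11_FORMAT_SUPPORT_DEPTH_STENCIL))
{
    // The format can back a depth stencil view on this device.
}
[/code]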

The different depth functions also don't give the results I'd expect.
I draw two intersecting objects, and one of them always completely obscures the other, depending on the depth comparison function. In the tables below, a 1 in an object's row means that object ends up in front (obscuring the other), a 2 means it ends up behind, and a - means nothing is drawn at all; obj1 is always sent to the pipeline first:
depth buffer cleared to 1.0f:
[code]
       LESS  LEQ  ALWAYS  NEVER  GRTR  GREQ  EQ
Obj1    1     2     2       -      -     -    -
Obj2    2     1     1       -      -     -    -
[/code]

depth buffer cleared to 0.0f:
[code]
       LESS  LEQ  ALWAYS  NEVER  GRTR  GREQ  EQ
Obj1    1     2     2       -      -     -    2
Obj2    2     1     1       -      -     -    1
[/code]

It looks like the whole geometry is collapsed onto the near clipping plane. Can a projection matrix collapse geometry onto the near clipping plane (I use one of the D3DX perspective functions to calculate my projection matrix)? I never thought about that... And if so, why does the LESS depth func behave differently? Obj1 is still sent to the pipeline first, so it should end up below obj2, but it doesn't.
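(For context, my projection setup is along these lines; this is just a sketch assuming the D3DX math library used by the tutorials, and the fov/near/far values here are illustrative rather than my exact ones:)
[code]
// Sketch of a typical left-handed perspective projection (D3DX math).
// The field of view and clip plane values are placeholders.
D3DXMATRIX matProj;
D3DXMatrixPerspectiveFovLH(&matProj,
                           D3DX_PI / 4.0f,                      // vertical field of view
                           (float)SCREEN_WIDTH / SCREEN_HEIGHT, // aspect ratio
                           0.1f,                                // near plane, must be > 0
                           1000.0f);                            // far plane
[/code]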

Also, obj1 should be partly behind obj2 (since their geometries intersect), yet one of them is always drawn completely on top of the other...

I've been sitting at this for hours! Please help!

System: Windows 7 (obviously with DX11)
Compiler: VS 2010 Express

EDIT: Update: by passing the vertex shader's output position down to the pixel shader, I tried drawing the actual depth values as color. The pixel shader now just returns
[code]
float4((depthValue - 0.9f) * 10.0f, 0.0f, 0.0f, 1.0f)
[/code]
where depthValue is position.z / position.w. The subtraction of 0.9 and multiplication by 10 just remaps the range 0.9 to 1.0 so it gets better color resolution.
I can see clearly that my depth values are calculated correctly; they simply aren't being written to the depth buffer properly. :///

[code]

cbuffer pfConst
{
matrix matView;
matrix matProj;
}

cbuffer poConst
{
matrix matWorld;
}

struct VIn
{
float4 position : Position;
float3 normal : Normal;
float2 tex : TexCoord0;
float4 tangent : Tangent0;
};

struct PIn
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
float3 lightDirSurf : TEXCOORD1;
float4 lightParam : TEXCOORD2;
};

sampler texSamp;
Texture2D <float4> colorTex;

// Vertex shader (entry point name assumed; the signature was missing from the paste).
PIn VShader(VIn input)
{
PIn output;
input.position.w = 1.0f;
float4 worldPos = mul(input.position,matWorld);
float4 viewPos = mul(worldPos,matView);
output.position = mul(viewPos,matProj);
output.tex = input.tex; //ignore this
output.lightDirSurf = float3(0.0f, 0.0f, 0.0f); //ignore this
output.lightParam = output.position; //ignore this
// output.lightParam = float4(lightDist, LightPos.w, 0.0f, 0.0f); //used this texccord for debugging depth
return output;
}

// Pixel shader (entry point name assumed; the signature was missing from the paste).
float4 PShader(PIn input) : SV_TARGET
{
float4 color = colorTex.Sample( texSamp, input.tex );
// float depthValue;
// float4 color;
// depthValue = input.lightParam.z / input.lightParam.w;
// color = float4((depthValue-0.9)*10.0f, 0.0f, 0.0f, 1.0f);

return color;
}
[/code]
Here are two screenshots (ignore the ugly texture in textured.jpg; it's drawn as expected):
[url="http://imageshack.us/g/200/depthq.jpg/"]http://imageshack.us/g/200/depthq.jpg/[/url]
Brighter red areas are drawn in front of darker red areas, as seen in textured.jpg, though they should actually be hidden by the black heightfield geometry. depth.jpg was taken with exactly the same scene setup as textured.jpg; I only swapped the comments in the pixel shader.

##### Share on other sites
Actually, from what I've found out so far, the title should be: all my geometry writes 0.0f to the depth buffer, regardless of the z and w coordinates output by the vertex shader.

##### Share on other sites
Are you setting the viewport correctly?

##### Share on other sites
[quote name='MJP' timestamp='1312743136' post='4845857']
Are you setting the viewport correctly?
[/quote]

.....
for *$§%$%&"\$% sake
how could I forget that?

Of course, that solved it. Thanks!
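For anyone else hitting this: the missing piece was the viewport setup. A minimal sketch of what it should look like (the sizes are assumed to match the backbuffer; the crucial part for depth is that MinDepth/MaxDepth span 0.0f to 1.0f):
[code]
// The viewport setup that was missing (sizes assumed to match the backbuffer).
D3D11_VIEWPORT viewport;
ZeroMemory(&viewport, sizeof(viewport));
viewport.TopLeftX = 0.0f;
viewport.TopLeftY = 0.0f;
viewport.Width = (float)SCREEN_WIDTH;
viewport.Height = (float)SCREEN_HEIGHT;
viewport.MinDepth = 0.0f; // these two map NDC z into the depth buffer;
viewport.MaxDepth = 1.0f; // leaving both at 0 collapses every depth write to 0.0f
m_mainPtrs.m_pDevCon->RSSetViewports(1, &viewport);
[/code]
That would also explain the earlier observation: a depth range of 0 to 0 maps every fragment to a 0.0f depth write, no matter what the vertex shader outputs.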

EDIT: Seriously, I spent like 8 hours searching for this problem... Then again, now I know everything about depth buffering there is to know^^; I even know every enum that pertains to depth in some way by heart. =)

##### Share on other sites
Well, then at least now you're an expert on depth buffers.
