
# DX11 - Depth Mapping - How?

## 5 posts in this topic

Hi guys,

right now I'm having a small issue with depth mapping. The issue might be small, but it's blocking my development:

Screenshot:

As you can see, there are some problems...

The shader (it's not optimized in any way, as that's not my current goal; the depth mapping happens in the "if (state == 5 || state == 2)" branch):

```hlsl
cbuffer ConstantObjectBuffer : register(b0)
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;

    float state;
    float _instance;
    float _alphamap;
    float _diffusealpha;
};

struct VOut
{
    float4 position : SV_POSITION;
    float4 normal : NORMAL;
    float2 texcoord : TEXCOORD;
    float4 depthPosition : TEXTURE0;
};

Texture2D t_alphamap;
Texture2D t_dffalpha;
SamplerState ss;

VOut VShader(float4 position : POSITION, float4 normal : NORMAL, float2 texcoord : TEXCOORD, float3 instancePosition : INSTANCEPOS)
{
    VOut output;

    if (_instance == 1)
    {
        position.x += instancePosition.x;
        position.y += instancePosition.y;
        position.z += instancePosition.z;
    }

    position.w = 1.0f;
    output.texcoord = texcoord;

    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.position = mul(position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);

    output.normal = normal;
    output.depthPosition = output.position;

    return output;
}

float4 PShader(VOut input) : SV_TARGET
{
    float4 color = float4(1, 1, 1, 1);

    if (state == 5 || state == 2)
    {
        float depthValue;
        depthValue = input.depthPosition.z / 25.0f;
        color = float4(depthValue, depthValue, depthValue, 1.0f);
    }
    else if (state == 6)
    {
        float3 viewSpaceNormalizedNormals = input.normal; // 0.5 * normalize(input.normal) + 0.5
        color = float4(viewSpaceNormalizedNormals, 1);
    }

    if (_alphamap == 1)
    {
        color.a *= t_alphamap.Sample(ss, input.texcoord).a;
    }

    if (_diffusealpha == 1)
    {
        color.a *= t_dffalpha.Sample(ss, input.texcoord).a;
    }

    return color;
}
```


Now what on earth did I do wrong?


##### Share on other sites

I'm not really sure what your exact problem is or what you're trying to accomplish here. However, I can tell you that dividing post-perspective z by 25.0 is not going to give you anything meaningful. Normally you would divide by w in order to get the same [0, 1] depth value that's stored in the depth buffer. However, this value isn't typically useful for visualizing, since it's non-linear. Instead you usually want to use your view-space z value (which is the w component of mul(position, projectionMatrix), AKA depthPosition.w) and divide it by your far-clip distance. This gives you a linear [0, 1] value.
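For example, here is a minimal sketch of both values inside the pixel shader above (farClip is an assumed constant holding the far-plane distance your projectionMatrix was built with):

```hlsl
// Minimal sketch, assuming depthPosition was copied from the
// post-projection position in the vertex shader, as in the code above.
float farClip = 1000.0f; // assumption: use your projection's actual far plane

// Non-linear [0, 1] depth, the same value the depth buffer stores:
float hwDepth = input.depthPosition.z / input.depthPosition.w;

// Linear [0, 1] depth, usually what you want for visualization:
float linearDepth = input.depthPosition.w / farClip;

color = float4(linearDepth, linearDepth, linearDepth, 1.0f);
```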


##### Share on other sites

Objective: To visualize depth in a texture

___

So instead I should do the following:

```hlsl
depthValue = input.depthPosition.z / (input.depthPosition.w / 1000.0f);
```

or

```hlsl
depthValue = (input.depthPosition.w / 1000.0f);
```


Is this correct?

1

##### Share on other sites

Why not pass the depth of the vertex from the vertex shader to the pixel shader in the color?

```hlsl
// pseudocode
vertex()
{
    out.color = posttransformvertex.z;
}

frag()
{
    return in.color;
}
```
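A compilable version of that idea might look like this (just a sketch; the cbuffer layout and identifier names are illustrative, not from the thread):

```hlsl
// Sketch only: assumes a pre-combined world * view * projection matrix.
cbuffer PerObject : register(b0)
{
    matrix worldViewProj;
};

struct VSOut
{
    float4 position : SV_POSITION;
    float  depth    : TEXCOORD1; // post-transform z, interpolated per pixel
};

VSOut VShader(float4 position : POSITION)
{
    VSOut output;
    output.position = mul(position, worldViewProj);
    output.depth = output.position.z; // pass the post-transform depth through
    return output;
}

float4 PShader(VSOut input) : SV_TARGET
{
    // Visualize the interpolated depth as greyscale.
    return float4(input.depth, input.depth, input.depth, 1.0f);
}
```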


##### Share on other sites

You want the second one: `depthPosition.w / 1000.0`
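Applied to the depth branch of the original shader, that would look something like this (assuming the projection's far-clip distance is 1000):

```hlsl
if (state == 5 || state == 2)
{
    // Linear [0, 1] depth: view-space z (depthPosition.w) over the far-clip distance.
    float depthValue = input.depthPosition.w / 1000.0f;
    color = float4(depthValue, depthValue, depthValue, 1.0f);
}
```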


##### Share on other sites

Thank you, the depth buffer now works perfectly.

Sorry for bringing an old topic back up, but this is about SSAO: the depth problem originally lay in my SSAO, and I believe the normals are being output correctly as well.

```hlsl
Texture2D t_depthmap : register(t0);
Texture2D t_normalmap : register(t1);
Texture2D t_random : register(t2);
SamplerState ss;

cbuffer SSAOBuffer : register(c0)
{
    float g_scale;
    float g_bias;
    float g_intensity;
    float ssaoIterations;
    float3 pppspace;

    matrix view;
};

struct VS_Output
{
    float4 Pos : SV_POSITION;
    float2 Tex : TEXCOORD0;
};

VS_Output VShader(uint id : SV_VertexID)
{
    VS_Output Output;
    Output.Tex = float2((id << 1) & 2, id & 2);
    Output.Pos = float4(Output.Tex * float2(2, -2) + float2(-1, 1), 0, 1);
    return Output;
}

// Helper for modifying the saturation of a color.
float3 AdjustSaturation(float3 color, float saturation)
{
    // The constants 0.3, 0.59, and 0.11 are chosen because the
    // human eye is more sensitive to green light, and less to blue.
    float grey = dot(color, float3(0.3, 0.59, 0.11));

    return lerp(grey, color, saturation);
}

// Ambient Occlusion Stuff --------------------------------------------------

float3 getPosition(in float2 uv)
{
    return mul(t_depthmap.Sample(ss, uv).xyz, view);
}

float3 getNormal(in float2 uv)
{
    return normalize(mul(t_normalmap.Sample(ss, uv).xyz * 2.0f - 1.0f, view));
}

float2 getRandom(in float2 uv)
{
    //return normalize(t_random.Sample(ss, uv).xy * 2.0f - 1.0f); // ~100FPS
    return normalize(t_random.Sample(ss, float2(600, 800) * uv / float2(60, 60)).xy * 2.0f - 1.0f);
}

float doAmbientOcclusion(in float2 tcoord, in float2 uv, in float3 p, in float3 cnorm)
{
    float3 diff = getPosition(tcoord + uv) - p;
    const float3 v = normalize(diff);
    const float d = length(diff) * g_scale;
    return max(0.0, dot(cnorm, v) - g_bias) * (1.0 / (1.0 + d)) * g_intensity;
}

// End

float4 PShader(VS_Output input) : SV_TARGET
{
    const float2 vec[4] = { float2(1,0), float2(-1,0),
                            float2(0,1), float2(0,-1) };

    float3 p = getPosition(input.Tex);
    float3 n = getNormal(input.Tex);
    float2 rand = getRandom(input.Tex);

    float ao = 0.0f;

    //**SSAO Calculation**//
    int iterations = 4;
    for (int j = 0; j < iterations; ++j)
    {
        // coord1's definition is missing from this listing; in the SSAO
        // article this code follows, it is reflect(vec[j], rand) * rad.
        float2 coord2 = float2(coord1.x * 0.707 - coord1.y * 0.707,
                               coord1.x * 0.707 + coord1.y * 0.707);

        ao += doAmbientOcclusion(input.Tex, coord1 * 0.25, p, n);
        ao += doAmbientOcclusion(input.Tex, coord2 * 0.5, p, n);
        ao += doAmbientOcclusion(input.Tex, coord1 * 0.75, p, n);
        ao += doAmbientOcclusion(input.Tex, coord2, p, n);
    }
    ao /= (float)iterations * 4.0;

    return ao;
}
```


The SSAO looks like the following:

At the bottom there is a plane being rendered, but the SSAO seems to be affected by depth somehow. Why? Also, I have no idea whether the values I used are correct, so please say if they're not.

Edited by Migi0027
