DX store normalized depth in texture

Started by Lewa
2 comments, last by JohnnyCode 4 years, 6 months ago

Hi!

I've been stuck on this one issue for a while and can't find a solution.

So, I'm currently working with DirectX and want to write the depth value of the geometry into a texture (for shadow mapping).

The code currently looks like this:


vec4 object_space_pos = vec4( in_Position.x, in_Position.y, in_Position.z, 1.0);
gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;//vertex position
depth = (gm_Matrices[MATRIX_WORLD_VIEW] * object_space_pos).z;//depth in view space

Now, this works as expected. The issue is that storing the raw view-space depth this way isn't practical: I would like to normalize it into a 0-1 range so that I can encode it into an RGB texture. (Don't ask. It's a limitation of the system I'm working with.)

Note that I'm using an orthographic projection, and while the shader might look like GLSL, the backend is still running on DirectX.
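
As an aside, the convertRGBtoFloat helper used further down isn't shown in the post; a common way to pack a value in the 0-1 range into three 8-bit channels and back looks roughly like this (the names and the packing scheme here are illustrative, not necessarily what the original code does):


vec3 packFloatToRGB(float v) {
    // split a value in [0, 1) across three 8-bit channels
    vec3 enc = fract(v * vec3(1.0, 255.0, 65025.0));
    enc.xy -= enc.yz * (1.0 / 255.0); // remove the part already carried by the next channel
    return enc;
}

float convertRGBtoFloat(vec3 rgb) {
    // recombine the three channels into a single value
    return dot(rgb, vec3(1.0, 1.0 / 255.0, 1.0 / 65025.0));
}

After the render target quantizes each channel to 8 bits, this gives roughly 24 bits of effective depth precision, which is usually plenty for an orthographic shadow map.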

Now, from my understanding, I can use the NDC coordinates and write the depth like this:


depth = gl_Position.z/gl_Position.w;
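
For what it's worth, here is a sketch of the other option: normalize the view-space depth from the first snippet directly, using the same zNear/zFar that the reconstruction further down expects (u_zNear and u_zFar are hypothetical uniforms the application would have to supply):


uniform float u_zNear; // hypothetical uniforms: zNear/zFar of the sun's orthographic projection
uniform float u_zFar;

float viewZ = (gm_Matrices[MATRIX_WORLD_VIEW] * object_space_pos).z; // view-space depth, as in the first snippet
depth = (viewZ - u_zNear) / (u_zFar - u_zNear); // remap [zNear, zFar] to [0, 1]

This is exactly the inverse of the getViewZ_from_NdcZ function below, so the write and the reconstruction stay consistent by construction.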

In another shader (during the shadow-map occlusion test), I read the pixel back and want to reconstruct the view-space depth for comparison.

Now, here is the issue: I'm not sure I'm doing it right. The code looks like this:


float getViewZ_from_NdcZ(float ndcZ, float zNear, float zFar){
    return zNear + ndcZ * (zFar - zNear);
}

float texDepth = convertRGBtoFloat(texture2D(sSunDepth, deptCoords.xy).rgb); // returns the stored depth in NDC space (0-1)
texDepth = getViewZ_from_NdcZ(texDepth, -10000.0, 10000.0); // zNear and zFar of the sun's orthographic projection

Am I overlooking something in the normalization of the depth values? Any hints/directions would be greatly appreciated.


What is view-space depth and why do you need it? Do you mean linear depth? You don't need linear depth for shadow mapping; you can use the native depth buffer. You say you write depth like this:

9 hours ago, Lewa said:

depth = gl_Position.z/gl_Position.w;

If this is in the pixel shader and you write it to a render target, you get a native depth buffer stored in a texture that you can use for shadow mapping straight away. If this is an orthographic projection, you don't even need to divide by .w.
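
A rough sketch of that suggestion (the varying names are illustrative, and it assumes the projection maps z into the 0-1 range, as the D3D convention does):


varying float v_clipZ; // hypothetical varyings, filled with gl_Position.z and gl_Position.w
varying float v_clipW; // in the vertex shader (w is 1.0 for an orthographic projection)

void main() {
    float depth = v_clipZ / v_clipW;                 // native depth; the divide is a no-op for ortho
    gl_FragColor = vec4(packFloatToRGB(depth), 1.0); // pack into RGB as sketched earlier
}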

With shadow mapping, you transform the pixel's world position (from the camera perspective) by the light's ViewProjection matrix to get its native depth in light space, and compare that value against the corresponding value in the shadow-map texture that was rendered from the light's perspective.
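
A hedged sketch of that comparison step (u_sunViewProjection, v_worldPos and the bias value are illustrative and would have to match however the rest of the pipeline is set up):


uniform mat4 u_sunViewProjection; // the sun's view * projection matrix, assumed supplied by the application
uniform sampler2D sSunDepth;      // shadow map rendered from the sun's point of view
varying vec3 v_worldPos;          // pixel world position, assumed passed from the vertex shader

float shadowFactor() {
    vec4 lightClip = u_sunViewProjection * vec4(v_worldPos, 1.0);
    vec3 lightNdc  = lightClip.xyz / lightClip.w;           // w is 1.0 for an orthographic sun
    vec2 shadowUv  = lightNdc.xy * 0.5 + 0.5;               // remap [-1, 1] to [0, 1] texture coordinates
    float pixelDepth  = lightNdc.z;                         // this pixel's depth in light space
    float storedDepth = convertRGBtoFloat(texture2D(sSunDepth, shadowUv).rgb);
    float bias = 0.002;                                     // small offset against shadow acne
    // both depths must be expressed in the same range for the comparison to make sense
    return (pixelDepth - bias > storedDepth) ? 0.0 : 1.0;   // 0 = in shadow, 1 = lit
}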

If you are using a DirectX 10 or newer API, you don't need to write the depth into a separate texture; you can bind the depth buffer as a shader resource view and read it in shaders as a texture straight away.

Post your orthographic projection matrix construction code.
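
For reference (not the poster's code), a GL-convention orthographic matrix maps clip-space z to [-1, 1], while a D3D-convention one maps it to [0, 1]; in GLSL, with its column-major mat4 constructor, the GL-style version looks like this:


mat4 orthoGL(float l, float r, float b, float t, float n, float f) {
    return mat4(
        2.0 / (r - l),      0.0,                0.0,                0.0, // column 0
        0.0,                2.0 / (t - b),      0.0,                0.0, // column 1
        0.0,                0.0,                -2.0 / (f - n),     0.0, // column 2
        -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1.0  // column 3
    );
}

Which convention the projection follows decides whether the stored z value still needs a *0.5+0.5 remap into the 0-1 range before being packed, which is presumably why the construction code matters here.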

This topic is closed to new replies.
