Hi!
I've been stuck on one issue for a while now and can't find a solution.
I'm currently working with DirectX and want to write the depth value of the geometry into a texture (for shadow mapping).
The code currently looks like this:
vec4 object_space_pos = vec4( in_Position.x, in_Position.y, in_Position.z, 1.0);
gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;//vertex position
depth = (gm_Matrices[MATRIX_WORLD_VIEW] * object_space_pos).z;//depth in view space
This works as expected. The issue is that storing the raw view-space depth isn't practical here: I would like to normalize it into a 0-1 range so that I can encode it into an RGB texture. (Don't ask. It's a limitation of the system I'm working with.)
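For reference, the encode/decode pair I have in mind is along these lines (a simplified sketch of a standard 24-bit packing, not my exact code; packFloatToRGB is a placeholder name, and convertRGBtoFloat is its counterpart used further down):

vec3 packFloatToRGB(float v){//expects v in the 0-1 range
    vec3 enc = fract(v * vec3(1.0, 255.0, 65025.0));
    enc.xy -= enc.yz / 255.0;//subtract the part that was carried into the next channel
    return enc;
}

float convertRGBtoFloat(vec3 rgb){
    return dot(rgb, vec3(1.0, 1.0/255.0, 1.0/65025.0));
}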
Note that I'm using an orthographic projection, and while the shader code might look like GLSL, the backend is still running on DirectX.
From my understanding, I can use the NDC z coordinate and write it into the depth like this:
depth = gl_Position.z/gl_Position.w;
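The shadow-pass fragment shader then just packs the interpolated value, roughly like this (sketch; depth is the varying written by the vertex shader above, and packFloatToRGB is the placeholder from the earlier sketch):

varying float depth;//ndc-space depth passed down from the vertex shader

void main(){
    gl_FragColor = vec4(packFloatToRGB(depth), 1.0);
}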
In another shader (during the shadow-map occlusion test) I read the pixel back and want to reconstruct the view-space depth for comparison.
Here is the issue: I'm not sure I'm doing the reconstruction right. The code looks like this:
float getViewZ_from_NdcZ(float ndcZ, float zNear, float zFar){
    return zNear + ndcZ * (zFar - zNear);//linear remap from 0-1 back to the zNear..zFar range
}
float texDepth = convertRGBtoFloat(texture2D(sSunDepth, deptCoords.xy).rgb);//returns the stored depth in ndc space (0-1)
texDepth = getViewZ_from_NdcZ(texDepth, -10000.0, 10000.0);//-10000.0 and 10000.0 are the zNear/zFar of the sun's orthographic projection
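For completeness, the comparison afterwards is roughly this (sketch; fragViewZ is the current fragment's depth in the sun's view space, computed the same way as in the first snippet, and the bias value is just a guess):

float bias = 0.5;//small offset against shadow acne
float lit = (fragViewZ > texDepth + bias) ? 0.0 : 1.0;//0.0 = in shadow; the comparison direction depends on the view-space z convention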
Am I overlooking something in the normalization of the depth values? Any hints or directions would be greatly appreciated.