Hi Everybody,

I'm on a long journey of implementing deferred shading. I've got a clear idea of the algorithm, but I'm having some trouble reconstructing object positions. I've followed this article http://mynameismjp.w...ion-from-depth/ but still easily get lost. There are too many uncertainties, so I want to go over it step by step and make sure I did not make any mistakes in the previous steps.

Storing the depth into a texture:

// Depth Pass Vertex Shader
varying float depth;
void main (void)
{
    vec4 pos = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * pos;
    depth = pos.z;
}

// Depth Pass Fragment Shader
varying float depth;
uniform float FarClipPlane;
void main (void)
{
    // view-space z is negative in front of the camera,
    // so negate before dividing to get a value in [0 .. 1]
    float d = -depth / FarClipPlane;
    gl_FragColor.r = d;
}

My understanding is that this stores the linear depth value in the texture. Is that correct?

Thank you,

Ruben

Also, I easily get lost when it comes to the perspective division. Can somebody suggest an article where I can learn about it?

Ashaman73 is correct; however, you seem to be having trouble with the theory itself.

Before reading on, take a look at this (namely method #2):

http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/

OK, so you store the depth like this:

[vertex shader]
varying float depth;
const float far_plane_distance = 1000.0; // far plane is at z = -1000 in view space
[...]
// transform the input vertex to view (or eye) space
vec4 viewspace_position = modelview_matrix * input_vertex;
// this is not a perspective division, just a rescale of the depth value:
// view-space z is already linear, in the range [-near .. -far distance],
// so dividing by the negative far plane distance maps it to
// [near / far .. 1], which fits a normalized texture format
// (and makes the result easy to inspect in the texture)
depth = viewspace_position.z / -far_plane_distance;

[pixel shader]
varying float depth;
out vec4 out_depth;
[...]
// store the depth in the R channel of the texture,
// assuming you chose an R16F or R32F format
out_depth.x = depth;

and retrieve (decode) the depth like this:

[vertex shader]
varying vec4 viewspace_position;
varying vec2 texture_coordinates;
[...]
// store the view-space position of the far plane corner;
// this gets interpolated, as you can see in the mjp drawing
viewspace_position = modelview_matrix * input_vertex;
texture_coordinates = input_texture_coordinates;

[pixel shader]
varying vec4 viewspace_position;
varying vec2 texture_coordinates;
uniform sampler2D depth_texture;
const float far_plane_distance = 1000.0;
[...]
// sample the depth from the texture's R channel;
// this will be in the range [0 .. 1]
float linear_depth = texture(depth_texture, texture_coordinates).x;
// construct the view vector: extrapolate the interpolated view-space
// position onto the far plane by scaling it with -far / z.
// If you feed the far plane corners in as the input positions, this
// division gives 1 and the step is unnecessary; either way the z
// coordinate ends up at the far distance (negative, since we're in view space)
vec3 view_vector = vec3(viewspace_position.xy * (-far_plane_distance / viewspace_position.z), -far_plane_distance);
// scaling the view vector by the linear depth gives the true view-space position
vec3 reconstructed_viewspace_position = view_vector * linear_depth;

Hope this helped.

I had trouble with reconstructing position myself, so search for it on gamedev; there are plenty of threads about it.