I'm rendering linear depth into a texture (a 32-bit floating-point texture, so the overhead stays small). From linear depth it's easy to calculate the world-space position with the following steps:
- calculate an eye ray that points from the eye position to the corresponding point on the far plane
- once you have this ray, the world-space position is simply eyePosition + eyeRay * depth, provided the depth is in the [0, 1] range.
This method is the same one Styves mentioned in the post above. There are variations of this technique where the linear depth is stored in the range [0, far - near] or [near, far] or similar, but the algorithm is the same.
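The steps above can be sketched in a few lines of Python (this is just an illustration of the math, not shader code; all names are made up for the example):

```python
def reconstruct_position(eye_position, eye_ray, depth):
    """eye_ray points from the eye to the far plane; depth is linear in [0, 1]."""
    return tuple(p + r * depth for p, r in zip(eye_position, eye_ray))

# A point halfway along a ray that reaches the far plane:
eye = (0.0, 0.0, 0.0)
ray_to_far_plane = (10.0, 0.0, -100.0)  # from the eye to a point on the far plane
print(reconstruct_position(eye, ray_to_far_plane, 0.5))  # (5.0, 0.0, -50.0)
```

The whole reconstruction is one multiply-add per pixel, which is why this path is usually the cheaper one.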
However, the depth stored by default is non-linear (hyperbolic), so if you'd like to use that, there's a really simple (but not so cost-effective) way to do it:
- if you have a texcoord in the [0, 1] range, convert it to [-1, 1] with "texcoord.xy * 2 - 1"
- set the depth value as the z coordinate
- then apply a homogeneous transformation with the inverse of the view-projection matrix, and divide the result by its w component
Something like this (GLSL - note: I didn't test it, just wrote it here):
// read depth at the coordinate (assumed to already be in the [-1, 1] NDC range)
float depth = getDepthValue(texcoord);
// build the clip-space position
vec4 pos;
pos.xy = texcoord * 2.0 - 1.0;
pos.z = depth;
pos.w = 1.0;
// transform back to world space
pos = invViewProj * pos; // or pos * mat, depending on the matrix convention
// perspective divide
pos /= pos.w;
vec3 worldPos = pos.xyz;
Since you have to do this for each pixel, this method can be slower than the previous one.
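As a sanity check of the inverse-transform path above, here is a small self-contained Python sketch (plain lists, no GLSL; the view matrix is taken as the identity, so only the projection is involved): project a view-space point, do the perspective divide, rebuild a position from the NDC values exactly as the shader does, and recover the original point with the inverse matrix plus a second w divide.

```python
import math

def perspective(fovy_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix (row-major)."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def inverse(m):
    """4x4 matrix inverse via Gauss-Jordan elimination with partial pivoting."""
    n = 4
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col:
                factor = a[r][col]
                a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

proj = perspective(60.0, 16.0 / 9.0, 0.1, 1000.0)
point = [1.0, 2.0, -5.0, 1.0]          # view-space point (view matrix = identity here)

clip = mat_vec(proj, point)
ndc = [c / clip[3] for c in clip]       # perspective divide -> [-1, 1] range

pos = [ndc[0], ndc[1], ndc[2], 1.0]     # rebuilt exactly as in the shader
pos = mat_vec(inverse(proj), pos)
pos = [c / pos[3] for c in pos]         # divide by w again

print(pos[:3])  # recovers approximately [1.0, 2.0, -5.0]
```

The second divide by w is essential: after multiplying by the inverse matrix, the result is homogeneous and only equals the original point up to that scale factor.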
I tried your method because it was written in GLSL, so it was the easiest for me to understand, but I got some odd results. When I move the camera, the reconstructed position seems to shift a bit; I don't really know how to describe it. Here's my code:
float depth = texture2D(positionMap,texcoord).r;
//linearize the depth
//depth = ((2.0 + 0.1) / (1000.0 + 0.1 - depth * (1000.0 - 0.1)));
vec4 position = vec4(texcoord*2.0 - 1.0,depth,1.0);
position = ((inverseView*inverseProj)*position);
and on the CPU:
glm::mat4 invProj = glm::inverse(projection);
glm::mat4 invView = glm::inverse(modelView);
glUniformMatrix4fv(glGetUniformLocation(LightingShader.Shader, "inverseProj"), 1, GL_FALSE, glm::value_ptr(invProj));
glUniformMatrix4fv(glGetUniformLocation(LightingShader.Shader, "inverseView"), 1, GL_FALSE, glm::value_ptr(invView));
I know it's not the most efficient method, but I'm not using linear depth, so I want to try this first before I mess with linear depth.
Stupid me. I was multiplying the matrices wrong. It should be this on the CPU:
glm::mat4 invProjView = glm::inverse(projection * modelView);
glUniformMatrix4fv(glGetUniformLocation(LightingShader.Shader, "inverseProj"), 1, GL_FALSE, glm::value_ptr(invProjView));
and then in the shader:
position = inverseProj * position;
I also found somewhere that I had to do this:
float depth = texture2D(positionMap, texcoord).r * 2.0 - 1.0; // remap [0, 1] depth to [-1, 1]
And it works great. Thanks everyone!
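For anyone wondering why that * 2.0 - 1.0 is needed, here's a minimal sketch: with the default glDepthRange(0, 1), the depth buffer stores window_z = ndc_z * 0.5 + 0.5, so the sampled value has to be remapped back to NDC before unprojecting (helper names are made up for the example):

```python
def ndc_to_window_depth(ndc_z):
    """What the hardware stores with the default glDepthRange(0, 1)."""
    return ndc_z * 0.5 + 0.5

def window_to_ndc_depth(window_z):
    """The remap the shader needs before unprojecting."""
    return window_z * 2.0 - 1.0

ndc_z = -0.25                          # some NDC depth after the w divide
stored = ndc_to_window_depth(ndc_z)    # what the depth texture holds: 0.375
print(window_to_ndc_depth(stored))     # -0.25, back in [-1, 1]
```

Skipping the remap feeds a [0, 1] value into a transform that expects [-1, 1], which is exactly the kind of position drift that shows up when the camera moves.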
Edited by BlueSpud, 03 August 2014 - 08:33 AM.