
Reconstructing Position From Depth Buffer



#1 BlueSpud   Members   -  Reputation: 426


Posted 28 July 2014 - 06:39 PM

Hey,

So I'm trying to optimize the G-buffer for my deferred shading. Right now I render color, normal, and position, each to its own attachment of a framebuffer. I've seen it hinted in places that you can reconstruct the position from the depth buffer, which would reduce the G-buffer's overhead by about a third. I haven't really been able to find anything that explains it well, just code, and I want to know the actual math behind it. If anyone could point me in the right direction or offer some tips, it would be much appreciated.

 

Thanks.




#2 REF_Cracker   Members   -  Reputation: 489


Posted 28 July 2014 - 07:28 PM

Hi BlueSpud,

 

It's actually quite simple to understand if you think of it this way. To retrieve a view-space coordinate:

 

- the screen coordinate comes in as 0 to 1 in x and y
- you remap that to -1 to 1 like so.... screencoord.xy * 2 - 1 (might have to flip the sign of the results depending on API)
- you now have an xy value you can picture as lying on the near plane of your camera.... so the z coordinate of the vector is whatever your near plane value is.
- you now have to figure out how to scale the -1 to 1 xy values properly to represent the dimensions of the near plane "quad" in view space... this is easy
- you just use some trig to figure out the width and height of the "quad" at the near plane.... basically it's .... tan(FOV * 0.5) * near
- also you'll have to multiply that by the aspect ratio for x, probably.

- after you do all this you will have calculated a vector from the eye to the near plane.
- now you just have to scale this vector by whatever the depth of that pixel is..... you'll have to account for the ratio of how close your near plane is

Basically that's how you think of it... if you draw some pictures you should be able to get it (there's also a sketch below).
To recover the position in another space it's basically the same... you move along that space's axis directions in x/y/z and then add the eye position.
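
Here's a rough GLSL sketch of those steps (untested, all names are placeholders; it normalizes the ray at z = 1 instead of at the near plane, which folds the near-plane/depth ratio mentioned above into a single scale):

// screenCoord is in [0, 1], viewZ is the view-space depth of the pixel,
// fovY is the vertical field of view in radians, aspect = width / height.
vec3 reconstructViewPosition(vec2 screenCoord, float viewZ, float fovY, float aspect)
{
    // remap [0, 1] -> [-1, 1]  (flip y here if your API needs it)
    vec2 ndc = screenCoord * 2.0 - 1.0;

    // half-extents of the frustum cross-section at z = 1 in view space
    float halfHeight = tan(fovY * 0.5);
    float halfWidth  = halfHeight * aspect;

    // ray through this pixel, scaled so its z component is 1,
    // then pushed out to the pixel's actual view-space depth
    vec3 ray = vec3(ndc.x * halfWidth, ndc.y * halfHeight, 1.0);

    // in a right-handed OpenGL view space the camera looks down -z,
    // so you may need to negate z depending on your conventions
    return ray * viewZ;
}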


Edited by REF_Cracker, 28 July 2014 - 09:17 PM.

Check out my project @ www.exitearth.com


#3 Styves   Members   -  Reputation: 1052


Posted 29 July 2014 - 04:32 AM

To go with the explanation above, here's some HLSL example code for constructing the frustum corners in the vertex shader. This is just quick pseudo-code, so it may have an issue or two. It's also better to compute this on the CPU and pass it in as a constant to the shader instead of computing it per vertex (you can also reuse it for all quads in a frame).

float3 vFrustumCornersWS[8] =
{
	float3(-1.0,  1.0, 0.0),	// near top left
	float3( 1.0,  1.0, 0.0),	// near top right
	float3(-1.0, -1.0, 0.0),	// near bottom left
	float3( 1.0, -1.0, 0.0),	// near bottom right
	float3(-1.0,  1.0, 1.0),	// far top left
	float3( 1.0,  1.0, 1.0),	// far top right
	float3(-1.0, -1.0, 1.0),	// far bottom left
	float3( 1.0, -1.0, 1.0)		// far bottom right
};

for(int i = 0; i < 8; ++i)
{
	float4 vCornerWS = mul(mViewProjInv, float4(vFrustumCornersWS[i].xyz, 1.0));
	vFrustumCornersWS[i].xyz = vCornerWS.xyz / vCornerWS.w;
}

for(int i = 0; i < 4; ++i)
{
	vFrustumCornersWS[i + 4].xyz -= vFrustumCornersWS[i].xyz;
}

// Passes to the pixel shader - there are two ways to do this:
// 1. Pass the corner with the vertex of your quad as part of the input stage.
// 2. If your quad is created dynamically with SV_VertexID, as in this example, just pick one from a constant array based on the position.
float3 vCamVecLerpT = (OUT.vPosition.x>0) ? vFrustumCornersWS[5].xyz : vFrustumCornersWS[4].xyz;
float3 vCamVecLerpB = (OUT.vPosition.x>0) ? vFrustumCornersWS[7].xyz : vFrustumCornersWS[6].xyz;	
OUT.vCamVec.xyz = (OUT.vPosition.y<0) ? vCamVecLerpB.xyz : vCamVecLerpT.xyz;

To get position in the pixel shader:

float3 vWorldPos = fLinearDepth * IN.vCamVec.xyz + vCameraPos;

Edited by Styves, 29 July 2014 - 06:39 AM.


#4 BlueSpud   Members   -  Reputation: 426


Posted 29 July 2014 - 02:38 PM

[quote of REF_Cracker's post above]

 

I've been doing my lighting in world space. Would it be easier to get the position in world space directly, or will I just have to multiply the view-space position by the inverse camera matrix to get it into world space?



#5 csisy   Members   -  Reputation: 256


Posted 30 July 2014 - 07:47 AM

I'm using linear depth rendered into a texture (a 32-bit floating-point texture, so it's a small overhead reduction). From linear depth it's easy to calculate the world-space position with the following steps:

- calculate an eye ray, which points from the eye position to the corresponding position on the far plane

- when you have this ray, you can simply calculate the world-space position as eyePosition + eyeRay * depth, if the depth is in the [0, 1] range.

 

This method is the same as the one Styves described in the post above. There are some variations of this technique where you use linear depth in the range [0, far - near] or [near, far] or something like that, but the "algorithm" is the same (a GLSL sketch of this variant is below).
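
A minimal sketch of this eye-ray variant, assuming linear depth was stored as viewSpaceZ / farPlane (so it reaches 1.0 at the far plane) and that vEyeRay is interpolated from the four eye-to-far-plane-corner vectors of the full-screen quad; all names here are placeholders, not taken from the posts above:

uniform sampler2D linearDepthMap;   // linear depth in [0, 1]
uniform vec3 eyePosition;           // camera position in world space

varying vec2 texcoord;
varying vec3 vEyeRay;               // eye -> far plane for this fragment, world space

vec3 reconstructWorldPosition()
{
    float linearDepth = texture2D(linearDepthMap, texcoord).r;
    return eyePosition + vEyeRay * linearDepth;
}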

 

However, the "basic" stored depth is exponential, so if you'd like to use that, there's a really simple (but not so cost-effective) method to do that:

- if you have a texcoord in range [0, 1], you have to convert it into [-1, 1] by "texcoord.xy * 2 - 1"

- you set the depth value to the z coordinate

- then apply a homogenous matrix transformation with the inverse of the view * projection

 

Something like this (GLSL - NOTE: I didn't test it, just wrote it here):

// read depth at the coordinate
float depth = getDepthValue(texcoord);

// get NDC-space position
vec4 pos;
pos.xy = texcoord * 2.0 - 1.0;
pos.z = depth * 2.0 - 1.0; // an OpenGL depth buffer also stores [0, 1], so remap it to [-1, 1] as well
pos.w = 1.0;

// get world-space position
pos = invViewProj * pos; // or pos * mat, depends on the matrix representation
pos /= pos.w;

vec3 worldPos = pos.xyz;

Since you have to do this for each pixel, this method can be slower than the previous one.


Edited by csisy, 30 July 2014 - 07:50 AM.

sorry for my bad english :)

#6 Samith   Members   -  Reputation: 2260


Posted 30 July 2014 - 09:34 AM


Since you have to do this for each pixel, this method can be slower than the previous one.

 

There's no need for an entire matrix mul per pixel if you do it correctly.

 

I've explained reconstructing linear z from the depth buffer a few times before. You can look at this thread from a month ago that explains the impetus behind the math required to reconstruct linear z from a depth buffer. I've used this method in my deferred renderers in the past and it's always worked well.
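
For reference (the linked thread isn't reproduced here), a minimal GLSL sketch of recovering linear view-space depth from a hardware depth value, assuming a standard OpenGL perspective projection; near and far are the camera's clip plane distances, and the names are placeholders:

float linearizeDepth(float depth, float near, float far)
{
    float ndcZ = depth * 2.0 - 1.0; // [0, 1] -> NDC [-1, 1]
    // positive distance in front of the camera
    return (2.0 * near * far) / (far + near - ndcZ * (far - near));
}

This is only a handful of scalar operations per pixel, which is the point about avoiding a full matrix multiply.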



#7 Kaptein   Prime Members   -  Reputation: 2174


Posted 31 July 2014 - 10:19 AM

This is also a solution:

http://www.opengl.org/wiki/Compute_eye_space_from_window_space#Optimized_method_from_XYZ_of_gl_FragCoord

 

read from your depth-texture, then:

vec3 eyePos = eye_direction * linearDepth(texCoord);  // which uses window-space depth-texture as source

 

from eye to world (which can probably be optimized if matview is very specific):

vec3 wpos = (vec4(eyePos, 1.0) * matview).xyz;

 

and the camera to point direction is simply:

vec3 ray = normalize(wpos);

 

from this you can do cool stuff, like adding suncolor to fog:

float sunAmount = max(0.0, dot(ray, sunDirection));
 



#8 BlueSpud   Members   -  Reputation: 426


Posted 02 August 2014 - 09:47 PM

[quote of csisy's post above]

So I tried your method because it was written in GLSL, which I understand best, and it was fairly simple to follow, but I got some odd results. When I move the camera, the position seems to change a bit; I don't really know how to describe it. Here's my code:

vec4 getWorldSpacePositionFromLinearDepth()
{
    float depth = texture2D(positionMap, texcoord).r;

    //linearize the depth
    //depth = ((2.0 + 0.1) / (1000.0 + 0.1 - depth * (1000.0 - 0.1)));

    vec4 position = vec4(texcoord * 2.0 - 1.0, depth, 1.0);

    position = (inverseView * inverseProj) * position;
    return position / position.w;
}

and on the CPU:

RenderEngine.translateToCameraSpace(Player->PlayerCamera);

glm::mat4 modelView, projection;
glGetFloatv(GL_MODELVIEW_MATRIX, &modelView[0][0]);
glGetFloatv(GL_PROJECTION_MATRIX, &projection[0][0]);

glUniformMatrix4fv(glGetUniformLocation(LightingShader.Shader, "inverseProj"), 1, GL_FALSE, &glm::inverse(projection)[0][0]);
glUniformMatrix4fv(glGetUniformLocation(LightingShader.Shader, "inverseView"), 1, GL_FALSE, &glm::inverse(modelView)[0][0]);

I know it's not the most efficient method, but I'm not using linear depth yet, so I want to try this first before I mess with linear depth.

 

EDIT:

Stupid me. I was multiplying the matrices wrong. It should be this on the CPU:

glUniformMatrix4fv(glGetUniformLocation(LightingShader.Shader, "inverseProj"), 1, GL_FALSE, &glm::inverse(projection*modelView)[0][0]);

and then in the shader:

position = inverseProj * position;

I also found somewhere that I had to do this:

float depth = texture2D(positionMap, texcoord).r * 2.0 - 1.0;

And it works great. Thanks everyone!
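
For anyone following along, a consolidated GLSL sketch of the fixed version described above, assuming a single combined uniform holding glm::inverse(projection * modelView) is uploaded from the CPU (the uniform and function names here are placeholders):

uniform sampler2D depthMap;      // hardware depth buffer, values in [0, 1]
uniform mat4 inverseViewProj;    // glm::inverse(projection * modelView)

varying vec2 texcoord;

vec3 getWorldSpacePosition()
{
    // remap both the depth value and the texcoord from [0, 1] to NDC [-1, 1]
    float ndcZ = texture2D(depthMap, texcoord).r * 2.0 - 1.0;
    vec4 ndc = vec4(texcoord * 2.0 - 1.0, ndcZ, 1.0);

    // unproject and divide by w to undo the perspective projection
    vec4 world = inverseViewProj * ndc;
    return world.xyz / world.w;
}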


Edited by BlueSpud, 03 August 2014 - 08:33 AM.


#9 csisy   Members   -  Reputation: 256


Posted 05 August 2014 - 08:39 AM

 


[quote of Samith's post above]

Nice one :) I talked about the simpler solution (which requires doing that calculation per pixel), but of course you can "extract" the behind-the-scenes math as you did :) Btw, thanks for the information, I'll store it on my HDD :D

 

@BlueSpud:

Sorry, I wasn't here for some days, but I'm glad that it's working :)

 

You should also check Samith's post about reconstructing linear depth from the depth buffer. With linear depth, the world-space position reconstruction is faster, since you just need the eye ray and the eye position (and of course the linear depth).


sorry for my bad english :)



