Reconstructing World Position from Depth


Hello, I am in the process of converting to a deferred renderer, and I am a little stuck on position reconstruction from depth. I have been reading a lot about it. I have read all of MJP's blog posts and the thread that started them, and I feel like I have a somewhat solid understanding of how it works, but my implementation has some issues. If some of you could give me some insight into my problems, I would appreciate it.

I have tried many variations on what I have right now, but this version gets the closest to the expected results.

First I compute the frustum corner rays (in camera space, I believe):


// Frustum plane centers along the view direction.
Vector3f NearCenterPosition = Look * nearplane;
Vector3f FarCenterPosition = Look * farplane;

// Vertical FOV in radians (0.0174532925 = pi/180).
float angle = fov * 0.0174532925;
float NearHeight = 2 * (tan(angle / 2) * nearplane);
float NearWidth = NearHeight * aspectratio;
float FarHeight = 2 * (tan(angle / 2) * farplane);
float FarWidth = FarHeight * aspectratio;

// Normalized directions toward the four far-plane corners.
FrustumPoints[0] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[1] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) + Up*(FarHeight/2));
FrustumPoints[2] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[3] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) + Up*(FarHeight/2));
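For reference, the variant MJP describes keeps the corner offsets unnormalized, so the rays actually reach the far plane and a [0,1] linear depth scales them directly. A minimal sketch using the names above (FrustumRays is a hypothetical second array, not something from my code):

// Unnormalized rays from the eye to the far-plane corners.
// These interpolate correctly across the quad because the far
// plane is planar, so reconstruction becomes simply:
//   WorldPosition = CameraPosition + Ray * LinearDepth
// where LinearDepth = viewZ / farplane is in [0,1].
Vector3f FrustumRays[4];
FrustumRays[0] = FarCenterPosition - Right*(FarWidth/2) - Up*(FarHeight/2);
FrustumRays[1] = FarCenterPosition - Right*(FarWidth/2) + Up*(FarHeight/2);
FrustumRays[2] = FarCenterPosition + Right*(FarWidth/2) - Up*(FarHeight/2);
FrustumRays[3] = FarCenterPosition + Right*(FarWidth/2) + Up*(FarHeight/2);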

Then I pass those to my lighting shader as a uniform, and I give each vertex of the full-screen quad an index into the frustum points:


Vector3f FullScreenVert1 = Vector3f( 1.0, -1.0, 0.0);
int FullScreenVert1FrustumIndex = 1;

Vector3f FullScreenVert2 = Vector3f( 1.0,  1.0, 0.0);
int FullScreenVert2FrustumIndex = 0;

Vector3f FullScreenVert3 = Vector3f(-1.0,  1.0, 0.0);
int FullScreenVert3FrustumIndex = 2;

Vector3f FullScreenVert4 = Vector3f(-1.0, -1.0, 0.0);
int FullScreenVert4FrustumIndex = 3;
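Roughly, the plumbing for that looks like the sketch below (FrustumPointsLocation and IndexAttribLocation stand in for the actual uniform/attribute locations, which I am omitting):

// Upload the four corner rays as a vec3[4] uniform array.
float Rays[12];
for (int i = 0; i < 4; ++i)
{
    Rays[i*3 + 0] = FrustumPoints[i][0];
    Rays[i*3 + 1] = FrustumPoints[i][1];
    Rays[i*3 + 2] = FrustumPoints[i][2];
}
glUniform3fv(FrustumPointsLocation, 4, Rays);

// The per-vertex frustum index is an integer attribute, so it must be
// set up with glVertexAttribIPointer (note the I), not glVertexAttribPointer.
glVertexAttribIPointer(IndexAttribLocation, 1, GL_INT, sizeof(int), 0);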

In the lighting vertex shader I just index into the frustum points and pass the ray to the pixel shader.

Lighting vertex shader:

uniform vec3 FrustumPoints[4];  // the four corner rays computed above
in int index;                   // per-vertex index into FrustumPoints

out vec3 CameraRay;
void main(void)
{
	CameraRay = FrustumPoints[index];
	...
}
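As an aside, if the quad's vertex order is fixed, a common trick avoids the integer attribute entirely by indexing with the built-in vertex ID (assuming GLSL 1.30 or later, where gl_VertexID is available in the vertex shader):

CameraRay = FrustumPoints[gl_VertexID];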

Then in the lighting fragment shader I first convert the depth to a linear value using:


float DepthToLinear(float depth)
{
	// Maps the stored depth-buffer value to view-space depth / ViewClipFar.
	vec2 g_ProjRatio = vec2( ViewClipFar / (ViewClipFar - ViewClipNear),
	                         ViewClipNear / (ViewClipNear - ViewClipFar) );
	return g_ProjRatio.y / (depth - g_ProjRatio.x);
}
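For what it's worth, that is algebraically the same as the more common form below, assuming the stored depth is already in [0,1] with a D3D-style mapping; the returned value is view-space depth divided by ViewClipFar, so it lies in [Near/Far, 1]. (A stock GL projection stores depth as (ndcZ+1)/2, which linearizes differently, so the convention is worth double-checking.)

float DepthToLinearAlt(float depth)
{
	// View-space depth in [ViewClipNear, ViewClipFar]...
	float viewZ = (ViewClipNear * ViewClipFar)
	            / (ViewClipFar - depth * (ViewClipFar - ViewClipNear));
	// ...normalized by the far plane.
	return viewZ / ViewClipFar;
}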

Finally I get the world position by scaling the interpolated camera ray by the linear depth times the view clip far.


vec3 WorldPosition = CameraPosition - (CameraRay * (LinearDepth * ViewClipFar));

I know you're supposed to add the camera position, but subtracting like this gets closest to the desired results. I am comparing against simply outputting the pixel position from the G-buffer pass.

Here are some screenshots showing the comparisons.

The correct results (what I am expecting):

http://farm9.staticflickr.com/8075/8301716099_44e9f527dc_k.jpg

Depth reconstructed results:

http://farm9.staticflickr.com/8213/8301716005_9d86ec6cc4_k.jpg

Also, when I move the camera higher up, the z value of all the world positions increases, turning the green light blue, the yellow one white, etc. When I turn the camera up, the "horizon line" where the z value changes from 0 to 1 moves down, and when I look down it moves up. When I move the camera in x and y, the cross slowly creeps in the opposite direction of the movement. I thought this might be because of subtracting the camera position, but when I add the camera position it moves twice as fast in the other direction. If you need more information on the behavior of the implementation, just let me know. It's hard to explain and show in screenshots, but hopefully you can see what I'm doing wrong from the code. Any help is greatly appreciated.

I should also mention that my engine uses Z as the up axis.

Thanks,

David

I'm working on a first person zombie shooter that's set in a procedural open world. I'm using a custom game engine created from scratch with opengl and C++. Follow my development blog for updates on the game and the engine. http://www.Subsurfacegames.com


Any Ideas?


Hi Davidtse, your scene is projected onto the near plane, but you are computing the far-plane corners:



FrustumPoints[0] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[1] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) + Up*(FarHeight/2));
FrustumPoints[2] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[3] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) + Up*(FarHeight/2));

It should be:

FrustumPoints[0] = cml::normalize(NearCenterPosition - Right*(NearWidth/2) - Up*(NearHeight/2));
FrustumPoints[1] = cml::normalize(NearCenterPosition - Right*(NearWidth/2) + Up*(NearHeight/2));
FrustumPoints[2] = cml::normalize(NearCenterPosition + Right*(NearWidth/2) - Up*(NearHeight/2));
FrustumPoints[3] = cml::normalize(NearCenterPosition + Right*(NearWidth/2) + Up*(NearHeight/2));

I am not sure if you have more errors, but this one is an important one. Also, don't forget to add the camera position at the end (not subtract it).

Thank you Marcel,

I don't really know what you mean by my scene being projected onto the near plane. Do you mean the final render is on the near plane? Everything I have been reading says to use the far-plane points. I tried the near plane as you suggested but get pretty much the same results. Also, I know I am supposed to be adding the camera position, but when I do, the center point (where all the colors converge) moves in the opposite direction of the camera. I have been trying for a while now to get this to work, but I still can't figure out what I'm doing wrong.

Thanks, David


You can use either plane (far or near), but you need to transform the depth differently for each one. If CameraRay is a normalized vector, then either will work.

vec3 WorldPosition = CameraPosition + (CameraRay * (LinearDepth * ViewClipFar));

You have (LinearDepth * ViewClipFar), so I think LinearDepth is in [0,1] and (LinearDepth * ViewClipFar) should be in [0, ViewClipFar]; in that case CameraRay needs to be a normalized vector. (I don't know if it is.)
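To make the two consistent pairings concrete, here is a sketch in GLSL (this assumes LinearDepth = viewZ / ViewClipFar as in your DepthToLinear, and that FarPlaneRay and Look, the camera forward vector, are available; those names are placeholders, not code from your engine):

// Pairing A: unnormalized ray that reaches the far plane.
vec3 WorldPositionA = CameraPosition + FarPlaneRay * LinearDepth;

// Pairing B: normalized ray (re-normalize after interpolation!).
// LinearDepth * ViewClipFar is depth along the view direction, not
// distance along the ray, so divide by the cosine between them:
vec3 ray = normalize(CameraRay);
float viewZ = LinearDepth * ViewClipFar;
vec3 WorldPositionB = CameraPosition + ray * (viewZ / dot(ray, Look));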

Did you check that the FrustumPoints are correct? You can also unproject them if you have the modelview and projection matrices (an easy way to do it: http://www.opengl.org/sdk/docs/man2/xhtml/gluUnProject.xml).
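A sketch of that check (this assumes the fixed-function matrix stacks hold your matrices; with a custom engine you would fill the modelview/projection arrays yourself instead):

#include <GL/glu.h>

// Unproject the bottom-left pixel at the far plane (winZ = 1.0), then
// compare normalize(obj - CameraPosition) against the matching corner
// ray (FrustumPoints[0] with the ordering above).
GLdouble modelview[16], projection[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);

GLdouble objX, objY, objZ;
gluUnProject(viewport[0], viewport[1], 1.0,
             modelview, projection, viewport,
             &objX, &objY, &objZ);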

The CameraRay is the normalized frustum point interpolated from the vertex shader, from here:

FrustumPoints[0] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[1] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) + Up*(FarHeight/2));
FrustumPoints[2] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[3] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) + Up*(FarHeight/2));

And now that I think about it, since the frustum points are normalized, it doesn't matter whether the near or far plane is used. I am pretty sure my frustum points are correct because I use them for my cascaded shadow maps, which work fine, and I visualized them while I was working on that and they looked correct.

