Hello, I am in the process of converting to a deferred renderer, and I am a little stuck on position reconstruction from depth. I have been reading a lot about it: I have read all of MJP's blog posts and the thread that started them, and I feel like I have a fairly solid understanding of how it works, but my implementation has some issues. If some of you could give me some insight into my problems, I would appreciate it.
I have tried many variations on the approach below, but this one gets the closest to the expected results.
First I get the frustum points, in what I believe is camera (view) space:
// Centers of the near and far planes along the view direction
Vector3f NearCenterPosition = Look * nearplane;
Vector3f FarCenterPosition = Look * farplane;

// Vertical FOV from degrees to radians (0.0174532925 ~= pi/180)
float angle = fov * 0.0174532925;
float NearHeight = 2 * (tan(angle / 2) * nearplane);
float NearWidth = NearHeight * aspectratio;
float FarHeight = 2 * (tan(angle / 2) * farplane);
float FarWidth = FarHeight * aspectratio;

// Normalized directions to the four far-plane corners
FrustumPoints[0] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[1] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) + Up*(FarHeight/2));
FrustumPoints[2] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[3] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) + Up*(FarHeight/2));
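Just to rule out the basic trigonometry, I checked the plane-size formulas above with a small standalone C++ snippet (the FOV, aspect ratio, and plane distances here are example numbers, not my real settings, and all names are mine):

```cpp
#include <cassert>
#include <cmath>

// Plane dimensions at a given distance for a vertical FOV in degrees
// (same math as in my frustum setup; names are just for this check).
struct PlaneSize { float width, height; };

PlaneSize PlaneSizeAtDistance(float fovDegrees, float aspect, float distance)
{
    const float angle = fovDegrees * 0.0174532925f;  // degrees -> radians
    const float height = 2.0f * std::tan(angle / 2.0f) * distance;
    return { height * aspect, height };
}
```

The far-plane rectangle comes out exactly (far/near) times the size of the near-plane one, so I don't think the corner math itself is my problem.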
Then I pass those to my lighting shader as a uniform, and give each vertex of the full-screen quad an index into the frustum points:
Vector3f FullScreenVert1 = Vector3f(1.0, -1.0, 0.0);
int FullScreenVert1FrustumIndex = 1;
Vector3f FullScreenVert2 = Vector3f(1.0, 1.0, 0.0);
int FullScreenVert2FrustumIndex = 0;
Vector3f FullScreenVert3 = Vector3f(-1.0, 1.0, 0.0);
int FullScreenVert3FrustumIndex = 2;
Vector3f FullScreenVert4 = Vector3f(-1.0, -1.0, 0.0);
int FullScreenVert4FrustumIndex = 3;
In the lighting vertex shader I just index into the frustum points and pass the result on to the fragment shader.
Lighting vertex shader:
out vec3 CameraRay;

void main(void)
{
    CameraRay = FrustumPoints[index];
    ...
}
Then in the lighting fragment shader I first convert the sampled depth to linear depth using:
float DepthToLinear(float depth)
{
    vec2 g_ProjRatio = vec2( ViewClipFar / (ViewClipFar - ViewClipNear),
                             ViewClipNear / (ViewClipNear - ViewClipFar) );
    return g_ProjRatio.y / (depth - g_ProjRatio.x);
}
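Working through the algebra, if the stored depth follows the D3D-style convention depth = far*(z - near) / (z*(far - near)), then this function should return z / far, i.e. view-space depth normalized to the far plane. I ported it to C++ to check that (example near/far values; I am not 100% sure this convention matches what my GL projection matrix actually writes to the depth buffer, which may itself be part of my problem):

```cpp
#include <cassert>
#include <cmath>

// Same function as in my fragment shader, ported to C++.
float DepthToLinear(float depth, float ViewClipNear, float ViewClipFar)
{
    const float x = ViewClipFar / (ViewClipFar - ViewClipNear);
    const float y = ViewClipNear / (ViewClipNear - ViewClipFar);
    return y / (depth - x);
}

// D3D-style post-projection depth for a point at view-space distance z
// (the convention I *think* the formula above assumes).
float HardwareDepth(float z, float n, float f)
{
    return f * (z - n) / (z * (f - n));
}
```

With near = 1 and far = 100, a point at the far plane comes back as 1.0, the near plane as 0.01 (= near/far), and z = 50 as exactly 0.5, so the function itself seems internally consistent.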
Finally I get the world position by multiplying the interpolated camera ray by the linear depth and the far clip distance:
vec3 WorldPosition = CameraPosition - ( CameraRay * (LinearDepth*ViewClipFar ) ) ;
I know you're supposed to add the camera position, but subtracting like this gets closest to the desired results. I am comparing it against simply outputting the pixel position in the G-buffer pass.
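For what it's worth, I also tried the reconstruction step on the CPU with made-up numbers. The variant that works there is the one with an unnormalized ray whose length reaches the far plane, multiplied by linear depth (z/far) and added to the camera position, so maybe my normalize() calls and the subtraction are related to my problem? (Camera at the origin looking down +Y since Z is my up axis; all names below are mine, not engine code.)

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Reconstruction with an UNNORMALIZED ray that reaches the far plane:
// worldPos = cameraPos + rayToFarPlanePoint * linearDepth, linearDepth = z / far.
Vec3 Reconstruct(Vec3 cameraPos, Vec3 rayToFarPlanePoint, float linearDepth)
{
    return cameraPos + rayToFarPlanePoint * linearDepth;
}
```

With far = 100 and a far-plane point offset 20 right and 10 up, a point at view depth 50 along that ray reconstructs to exactly half those offsets, which is what I'd expect.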
Here are some screenshots showing the comparisons.
The correct results (what I am expecting):
http://farm9.staticflickr.com/8075/8301716099_44e9f527dc_k.jpg
Depth reconstructed results:
http://farm9.staticflickr.com/8213/8301716005_9d86ec6cc4_k.jpg
Also, when I move the camera higher up, the z value of all the world positions increases, turning the green light blue, the yellow light white, etc. When I turn the camera up, the "horizon line" where the z value changes from 0 to 1 moves down, and when I look down it moves up. When I move the camera in x and y, the cross slowly creeps in the opposite direction of the movement. I thought this might be caused by subtracting the camera position, but when I add the camera position instead, it moves twice as fast in the other direction. If you need more information on the behavior of the implementation, just let me know. It's hard to explain and show in screenshots, but hopefully you can see what I'm doing wrong from the code. Any help is greatly appreciated.
I should also mention that my engine uses Z as the up axis.
Thanks,
David