Prompt

[SOLVED] Crytek approach, 3d pixel reconstruction


Hi there! I'm trying to reconstruct the 3D pixel position from depth. I have read many threads and papers over the past few weeks, and I want to make a little demo. I believe you can help me; I'd really appreciate it.

I have implemented two approaches:
- Comparison in world space: in the deferred pass I store the world-space pixel position (xyz in one texture of my G-buffer), and in the light pass I compare it with the light position in world space.
- Comparison in view space: the same as above but in view space; I need to do mModelView * lightPosition for the comparison.

Now I'm trying to implement the Crytek / policarpo approach that is discussed in many threads and on MJP's blog. First I save depth in the W/A channel of my color buffer:

float pixelDepth = (viewPosition.z - zNear) / (zFar - zNear);

Later, in my application, I put the far-plane corners in multiTexCoord1, extracting them from the projection matrix as MJP suggests in a comment:

top-left: -500, 500, 1000
top-right: 500, 500, 1000
bottom-left: -500, -500, 1000
bottom-right: 500, -500, 1000

In my light pass I try to restore the 3D pixel position:

// Restore depth
float pixelDepthView = ((zFar - zNear) * viewPosition.w) + zNear;
// This is definitely wrong
vec3 pixelPosition = eyePosition + (farPlaneCoord.xyz * viewPosition.w);

I have tried other things, like tracing a ray from the pixel and applying the modelview matrix (rotation only)... I'm stuck at this point. Thanks for your time.

[Edited by - Prompt on July 24, 2009 11:13:31 AM]
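A quick way to sanity-check the depth encoding is to round-trip it on the CPU before touching the shader. A minimal sketch (plain Python, hypothetical zNear/zFar values):

```python
# Round trip for the linear depth encoding used above:
# encode: d = (viewZ - zNear) / (zFar - zNear)
# decode: viewZ = d * (zFar - zNear) + zNear
z_near, z_far = 15.0, 1000.0  # example clip planes

def encode_depth(view_z):
    return (view_z - z_near) / (z_far - z_near)

def decode_depth(d):
    return d * (z_far - z_near) + z_near

for view_z in (15.0, 250.0, 999.9):
    assert abs(decode_depth(encode_depth(view_z)) - view_z) < 1e-6
```

Note that the decode line in the shader above does match this inverse; the catch is that the ray-scaling reconstruction `eyePosition + farPlaneCoord * depth` assumes depth stored as viewZ / zFar, so the (z - zNear)/(zFar - zNear) encoding is off by the zNear offset.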

Hi
Crytek describes a technique in this paper : http://developer.amd.com/assets/D3DTutorial_Crytek.pdf

In the vertex shader you create a ray using the corner coordinate and the inverse
view-projection matrix. Something like:

float3 start = mul(float4(input.uv, 0.f, 1.f), InvViewProj).xyz;
output.ray = normalize(start - ViewPos);

or

float3 start = mul(float4(input.uv, 0.f, 1.f), InvViewProj).xyz;
float3 end   = mul(float4(input.uv, 1.f, 1.f), InvViewProj).xyz;
output.ray = normalize(end - start);

In the pixel shader you just have to do:

float depth = tex2D(...); // the depth needs to be the linear distance between the camera and the object (not the value stored in the depth buffer)
float3 pos = input.ray * depth;
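The multiply by InvViewProj above glosses over the homogeneous divide; written out in full it looks like this (a numpy sketch, assuming OpenGL-style clip space with NDC z running from -1 at the near plane to +1 at the far plane, and an identity view matrix so InvViewProj is just the inverse projection):

```python
import numpy as np

def unproject(ndc, inv_view_proj):
    """NDC point -> view/world space, including the perspective divide."""
    p = inv_view_proj @ np.array([ndc[0], ndc[1], ndc[2], 1.0])
    return p[:3] / p[3]

def make_ray(uv, inv_view_proj):
    # uv in NDC ([-1, 1]); GL convention: near plane at z = -1, far at z = +1
    start = unproject((uv[0], uv[1], -1.0), inv_view_proj)
    end   = unproject((uv[0], uv[1],  1.0), inv_view_proj)
    ray = end - start
    return ray / np.linalg.norm(ray)
```

For the center pixel (uv = 0, 0) this should give a ray pointing straight down the view axis, which is an easy first check.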

I never did it myself, but I think it's something like that.

good luck

Yes, it's that simple, but something is wrong in my code.

How can I test my farPlane?

I test the world position by rendering a red band like this:
if ( pixelPosition.x > 0.0 && pixelPosition.x < 10.0 )
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
else
discard;

I need to be sure that my farPlane coords are ok, but how?

I use this method:
glMultiTexCoord2i(GL_TEXTURE0, 0, 1); glMultiTexCoord3f(GL_TEXTURE1, _texCoord[0].x, _texCoord[0].y, _texCoord[0].z); glVertex2i(0, 0);
glMultiTexCoord2i(GL_TEXTURE0, 1, 1); glMultiTexCoord3f(GL_TEXTURE1, _texCoord[1].x, _texCoord[1].y, _texCoord[1].z); glVertex2i(1, 0);
glMultiTexCoord2i(GL_TEXTURE0, 1, 0); glMultiTexCoord3f(GL_TEXTURE1, _texCoord[2].x, _texCoord[2].y, _texCoord[2].z); glVertex2i(1, 1);
glMultiTexCoord2i(GL_TEXTURE0, 0, 0); glMultiTexCoord3f(GL_TEXTURE1, _texCoord[3].x, _texCoord[3].y, _texCoord[3].z); glVertex2i(0, 1);


The vertices are in the correct OpenGL order:
0, 0
1, 0
1, 1
0, 1

The texCoords are in a different order so that they correspond to the camera looking down Z:
0, 1
1, 1
1, 0
0, 0

And far clip plane:
+[0] { x=-557.75 y= 557.75 z=1000.0 }
+[1] { x= 557.75 y= 557.75 z=1000.0 }
+[2] { x= 557.75 y=-557.75 z=1000.0 }
+[3] { x=-557.75 y=-557.75 z=1000.0 }

I think this is OK too. Am I wrong?
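One way to check the far-plane coords independently of the hand math is to unproject the four NDC far-plane corners through the inverse of the projection matrix and compare; a numpy sketch (OpenGL conventions, symmetric frustum assumed):

```python
import numpy as np

def far_plane_corners(proj):
    """View-space positions of the four far-plane corners (GL: NDC z = +1)."""
    inv = np.linalg.inv(proj)
    corners = []
    for x, y in ((-1, 1), (1, 1), (1, -1), (-1, -1)):  # TL, TR, BR, BL
        p = inv @ np.array([x, y, 1.0, 1.0], dtype=float)
        corners.append(p[:3] / p[3])  # perspective divide
    return corners
```

Each corner should come out with z = -zFar (view space looks down -z in GL) and with x/y equal to the hand-computed half-width/half-height values; if they don't, the projection matrix and the corner math disagree.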

This is my code for the far-plane point calculation:

float hFovTan = (float)Math.Tan(Camera.FOV*0.5f);
float H = hFovTan*Camera.Clip.Y;//Far (eg. 10000)
float W = H*Camera.Aspect;//>1

FarPoint[0] = Vector3.Transform(new Vector3(-W,  H, -Camera.Clip.Y), Camera.RotationMatrix);
FarPoint[1] = Vector3.Transform(new Vector3( W,  H, -Camera.Clip.Y), Camera.RotationMatrix);
FarPoint[2] = Vector3.Transform(new Vector3( W, -H, -Camera.Clip.Y), Camera.RotationMatrix);
FarPoint[3] = Vector3.Transform(new Vector3(-W, -H, -Camera.Clip.Y), Camera.RotationMatrix);

in shader:
Decoding:
CameraPos.xyz + tex2D(DepthTexture, UV.xy).r*InterpolatedCorners.xyz;

Encoding:
(vertex shader)
OUT.Depth = mul(float4(IN.Position.xyz, 1), WorldViewProj).w * Clip.w; // Clip.w = 1/far
W value is more "stable" than Z (in my case)
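For reference, the same corner math as a quick offline check (plain Python, identity RotationMatrix assumed; note that Math.Tan wants radians, so a degree FOV must be converted first):

```python
import math

def far_corners(fov_y, aspect, far):
    """Far-plane frustum corners in camera space (camera looks down -z).

    fov_y is the vertical field of view in radians.
    """
    h = math.tan(fov_y * 0.5) * far  # half-height of the far plane
    w = h * aspect                   # half-width
    return [(-w,  h, -far), ( w,  h, -far),
            ( w, -h, -far), (-w, -h, -far)]
```

With fov_y = 90 degrees and aspect = 1, every corner lands at (+/-far, +/-far, -far), which is easy to eyeball.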

Quote:
Original post by kociolek
This is my code for FarPlane point calculation

float hFovTan = (float)Math.Tan(Camera.FOV*0.5f);
float H = hFovTan*Camera.Clip.Y;//Far (eg. 10000)
float W = H*Camera.Aspect;//>1

FarPoint[0] = Vector3.Transform(new Vector3(-W, H, -Camera.Clip.Y), Camera.RotationMatrix);
FarPoint[1] =Vector3.Transform( new Vector3(W, H,-Camera.Clip.Y), Camera.RotationMatrix);
FarPoint[2] =Vector3.Transform( new Vector3(W, -H,-Camera.Clip.Y), Camera.RotationMatrix);
FarPoint[3] = Vector3.Transform(new Vector3(-W, -H, -Camera.Clip.Y), Camera.RotationMatrix);

in shader:
Decoding:
CameraPos.xyz + tex2D(DepthTexture, UV.xy).r*InterpolatedCorners.xyz;

Encoding:
(vertex shader)
OUT.Depth = mul(float4(IN.Position.xyz, 1), WorldViewProj).w * Clip.w; // Clip.w = 1/far
W value is more "stable" than Z (in my case)


Yes this is correct :)

Some documentation of my implementation, based on your code:
- In application:
float eyeFOV = m_pActiveCamera->getFov( );
float aspectRatio = m_pActiveCamera->getAspectRatio( );

float radFOV = eyeFOV * Math::PI / 180.0f;
float hFovTan = tan(radFOV * 0.5f); // Note: need radians here!
float H = hFovTan * zFar; // Far (eg. 10000)
float W = H * aspectRatio; // >1

Matrix4 modelViewInvRotation = viewMatrix.inverse( ); // The inverse of the modelview is needed
modelViewInvRotation.translate( 0.0f, 0.0f, 0.0f ); // Zero out the translation: rotation only
texCoord[0] = Vector3(-W, H, -zFar) * modelViewInvRotation;
texCoord[1] = Vector3( W, H, -zFar) * modelViewInvRotation;
texCoord[2] = Vector3( W, -H, -zFar) * modelViewInvRotation;
texCoord[3] = Vector3(-W, -H, -zFar) * modelViewInvRotation;



I have this camera config:
fovX = 45
zFar = 1e+006
zNear = 15

While testing things, I now store depth / zFar in the W channel of my color buffer. Before, I used:
float pixelDepth = (viewPosition.z - zNear) / (zFar - zNear);

But for me this is the correct way:
float pixelDepth = viewPosition.z / zFar;

So in deferred pass:
- vertex shader:
positionView = gl_ModelViewMatrix * positionWorld;

- fragment shader:
gl_FragData[1] = vec4(normal.xyz, viewPosition.z / zFar);

In light pass:
- vertex shader:
farPlaneCoord = gl_MultiTexCoord1;

- fragment shader:
vec3 lightPosition = lightPos.xyz;
vec3 pixelPosition = eyePosition.xyz - viewPosition.w * farPlaneCoord.xyz;

This is a very strange thing... I have to subtract from eyePosition. I need to debug:
Matrix4 modelViewInvRotation = viewMatrix.inverse( );
modelViewInvRotation.translate( 0.0f, 0.0f, 0.0f );

It's very strange; maybe it's because I use the inverse of the modelview matrix.

Thanks for everything, kociolek and Squallc.
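Assuming the corner setup is right, the whole encode/reconstruct pipeline can be verified offline; a numpy sketch with the camera at the origin and identity rotation (so view space equals world space), storing depth as -viewZ / zFar to match the viewPosition.z / zFar encoding (the sign depends on convention, since view-space z is negative in OpenGL):

```python
import numpy as np

z_far = 1000.0
eye = np.zeros(3)                     # camera at origin, identity rotation

p = np.array([120.0, -45.0, -400.0])  # a view-space point (z < 0 in GL)

# G-buffer encode: linear depth in [0, 1]
depth = -p[2] / z_far

# What the rasterizer hands the fragment shader: the interpolated far-plane
# vector along this pixel's ray is the ray's hit point on the far plane.
far_vec = p * (z_far / -p[2])

# Light-pass reconstruction:
reconstructed = eye + depth * far_vec
assert np.allclose(reconstructed, p)
```

If this round-trips on paper but the shader still needs `eyePosition - depth * farPlaneCoord`, a sign has flipped somewhere, most likely in the stored depth (positive vs. negative viewZ) or in the corner transform.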
