# About reconstruct position from pixel


## Recommended Posts

I have read a lot of threads about reconstructing view-space position from a pixel using the frustum corners, but I still don't get it. I don't understand why "fPixelDepth * vFrustumRay" is the final result. fPixelDepth is only the z coordinate in view space (divided by fFarPlane, so it ranges from 0.0 to 1.0). Is vFrustumRay a vector pointing from the origin to one of the four far-plane corners of the frustum? Is vFrustumRay equal to the vector from the camera's position (0, 0, 0) to a far-plane corner? Finally, why does multiplying these two things produce the position in view space? What is the math theory behind that formula? From the "Picking" example in DX9, I thought that to compute a point in view space we must have a unit vector (vRay) pointing at that point and the length (fLength) from the origin to that point, so that the point = vRay * fLength. But I really don't understand why fPixelDepth * vFrustumRay is correct. What is the vector math behind that formula? Thanks, all. My English is poor, so please bear with me.

##### Share on other sites
Quote:
 Original post by Game_XinBing
fPixelDepth is only the z coordinate in view space (divided by fFarPlane, so it ranges from 0.0 to 1.0). Is vFrustumRay a vector pointing from the origin to one of the four far-plane corners of the frustum?

When you apply a post-processing pass you normally render a fullscreen quad. The four corners of the frustum are assigned to the corresponding four vertices of the fullscreen quad. The rasterizer then linearly interpolates these four frustum-corner vectors across the quad, so the interpolated frustum ray is available in the pixel shader.
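The interpolation the rasterizer performs can be sketched in plain Python. Everything here (the `corners` values, the function names, the symmetric frustum with the far plane at z = 100) is a hypothetical illustration, not code from the thread:

```python
# Bilinear interpolation of the four frustum-corner rays, mimicking what the
# GPU does across a fullscreen quad. Each ray points from the camera origin
# to one far-plane corner. Corner values below are made up for illustration.

def lerp(a, b, t):
    """Componentwise linear interpolation between two 3-tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def interpolate_frustum_ray(corners, u, v):
    """Interpolate the corner rays at normalized screen position (u, v).

    corners: dict with 'tl', 'tr', 'bl', 'br' corner rays (3-tuples)
    u, v:    normalized screen coordinates in [0, 1]
    """
    top = lerp(corners['tl'], corners['tr'], u)
    bottom = lerp(corners['bl'], corners['br'], u)
    return lerp(top, bottom, v)

# Hypothetical symmetric frustum, far plane at z = 100:
corners = {
    'tl': (-50.0, 50.0, 100.0), 'tr': (50.0, 50.0, 100.0),
    'bl': (-50.0, -50.0, 100.0), 'br': (50.0, -50.0, 100.0),
}

# The center pixel's ray points straight down the view axis.
center_ray = interpolate_frustum_ray(corners, 0.5, 0.5)
```

In a real shader you do not write this loop yourself: you store the corner ray in a vertex attribute and the hardware interpolator produces the per-pixel ray for free.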

Quote:
 Is vFrustumRay equal to the vector from the camera's position (0, 0, 0) to a far-plane corner?

Yes.

Quote:
 Finally, why does multiplying these two things produce the position in view space? What is the math theory behind that formula?

It's a kind of reverse engineering. When a graphics API like DX9 or OpenGL renders a triangle, the depth of each pixel, in other words its view-space depth normalized by the farthest displayed depth (the far plane), is written to a depth buffer.

If you invert this process, you have a normalized value between 0 and 1 (the depth) and a ray (the frustum ray) on which all the view-space points that project to the current pixel lie. The depth value tells you which point on the ray it is, so multiplying the depth by the frustum ray gives you the desired point in view space. The underlying math is similar triangles: every point P on the ray is a scalar multiple of the ray R that reaches the far plane, and since R.z = fFarPlane, that scalar is exactly P.z / fFarPlane, which is the stored depth.
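The similar-triangles argument can be checked numerically. This is a minimal sketch with made-up values (far plane at 100, an arbitrary view-space point P), not code from the thread:

```python
# For any view-space point P seen through a pixel, the frustum ray R through
# that pixel is P scaled out to the far plane: R = P * (far / P.z), so
# R.z == far. Then depth * R = (P.z / far) * R = P, componentwise, which is
# why x and y are recovered too, not just z.

far = 100.0

# Hypothetical view-space point seen through some pixel:
P = (3.0, -2.0, 40.0)

# The frustum ray through the same pixel, scaled to reach the far plane:
scale = far / P[2]
R = tuple(c * scale for c in P)          # R == (7.5, -5.0, 100.0)

depth = P[2] / far                       # normalized depth, here 0.4
reconstructed = tuple(c * depth for c in R)
```

Note that depth scales all three components of R by the same factor, which is exactly why the formula works per component.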

There are a few more details and traps (handling the near plane, a non-linear z-buffer, etc.), but the basic idea is quite simple.
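One of those traps is worth spelling out: a hardware depth buffer stores a non-linear value, so it must be linearized before the multiply. The sketch below assumes a standard D3D-style left-handed perspective projection with hypothetical near/far planes; the function names are made up for illustration:

```python
# A perspective projection writes z_ndc = far/(far-near) - near*far/((far-near)*z)
# to the depth buffer (D3D convention, z_ndc in [0, 1]). Inverting that recovers
# the linear view-space z needed for the depth * frustumRay reconstruction.

near, far = 1.0, 100.0

def view_z_to_hw_depth(z):
    """Non-linear depth value the projection writes for view-space z."""
    return far / (far - near) - (near * far) / ((far - near) * z)

def hw_depth_to_view_z(d):
    """Invert the projection to recover linear view-space z."""
    return (near * far) / (far - d * (far - near))

z = 40.0
d = view_z_to_hw_depth(z)        # non-linear, biased toward 1.0
linear_z = hw_depth_to_view_z(d) # back to 40.0 (up to float rounding)
normalized = linear_z / far      # the 0..1 depth to multiply the ray by
```

If instead you render linear depth (view z / far) yourself into a texture, as many deferred renderers do, this linearization step disappears.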

##### Share on other sites
Thank you.
I still wonder why vFrustumRay.x * fPixelDepth is the x component of the point in view space, and why vFrustumRay.y * fPixelDepth is the y component.
What is the theory?

