Sadly, it seems like a lot of the threads on this are old, so I opted to open a new topic about it. I'm currently working on deferred rendering, and I wanted to use the usual trick of reconstructing world-space position from the depth value. I had this working using this method:
// Calculate depth (stored in the G-buffer as post-projection z/w)
Project = mul(Position, Transform);
Depth = Project.z / Project.w;
// Reconstruct position from the stored depth
Project.xy = mathamagical; // rebuild clip-space XY from the overlay texcoords
Project.z = Depth;
Project.w = 1;
Position = mul(Project, InvTransform);
Position /= Position.w; // undo the perspective divide
The problem is that this adds an extra matrix multiplication every time I want to reconstruct the position (and the mathamagical part, rebuilding the clip-space XY from the overlay texture coordinates, is annoying and adds extra code). Instead, I opted for the linear depth method:
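For context, that first method is just a project/unproject round trip. Here's a quick numpy sketch of it outside the shader (the projection matrix and point are made up for illustration, and I'm using the row-vector convention to match mul(Position, Transform)):

```python
import numpy as np

# Made-up perspective projection (90-degree FOV, aspect 1, near 1, far 100),
# row-vector convention: clip = world @ proj, like mul(Position, Transform).
n, f = 1.0, 100.0
proj = np.array([
    [1.0, 0.0, 0.0,              0.0],
    [0.0, 1.0, 0.0,              0.0],
    [0.0, 0.0, f / (f - n),      1.0],
    [0.0, 0.0, -n * f / (f - n), 0.0],
])

world = np.array([3.0, -2.0, 10.0, 1.0])  # some world-space point

# G-buffer pass: project, then store the post-divide depth (z/w)
clip = world @ proj
depth = clip[2] / clip[3]

# Lighting pass: rebuild clip-ish coords from screen XY plus the stored
# depth, run them back through the inverse transform, and divide by w.
ndc_xy = clip[:2] / clip[3]                 # the "mathamagical" XY part
reproj = np.array([ndc_xy[0], ndc_xy[1], depth, 1.0]) @ np.linalg.inv(proj)
reconstructed = reproj / reproj[3]          # the divide by w is essential

print(np.allclose(reconstructed[:3], world[:3]))  # True
```

Leaving out that final divide by w is an easy way to get subtly wrong positions, since the inverse projection lands you in homogeneous coordinates.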
// Calculate depth (stored as linear view-space z, normalized by the far plane)
ViewSpace = mul(Position, View);
Depth = ViewSpace.z / FarClip;
// Reconstruct position: walk along the per-pixel ray from the camera
Position = (ViewDir * Depth) + CameraPosition;
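As far as I can tell, that identity only holds when ViewDir is the frustum ray expressed in world space, i.e. the view-space ray rotated by the camera's rotation. Here's the little numpy sketch I used to convince myself (camera position, rotation, and test point are all made up):

```python
import numpy as np

def rot_y(a):
    # Camera yaw: rotation taking view-space directions into world space
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

far_clip = 100.0
cam_pos  = np.array([1.0, 2.0, 3.0])
cam_rot  = rot_y(0.7)                     # the camera is turned

world = np.array([5.0, -1.0, 40.0])       # the point we want back

# What the G-buffer stores: linear depth = view-space z / far plane
view  = cam_rot.T @ (world - cam_pos)     # world -> view space
depth = view[2] / far_clip

# The per-pixel ray, scaled so it reaches the far plane (view z = FarClip)
ray_view  = view * (far_clip / view[2])   # frustum ray in VIEW space
ray_world = cam_rot @ ray_view            # the same ray in WORLD space

good = cam_pos + ray_world * depth        # recovers `world`
bad  = cam_pos + ray_view  * depth        # wrong once the camera rotates

print(np.allclose(good, world))   # True
print(np.allclose(bad,  world))   # False
```

With the identity rotation the two versions coincide, which would explain only seeing the problem once the camera turns.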
This works fine as long as my camera isn't rotated (the rotation is the identity matrix), but as soon as I turn, my lighting turns with it. Here are some screenshots:
Correct,
Wonky. I'm using code that MJP posted in another thread to calculate the corners of my far frustum plane, then passing those in as texture coordinates for my ViewDir value. Anyone have any idea why it's off like this?
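For reference, this is roughly how I understand the far-corner calculation (not MJP's actual code, just the usual unproject-the-NDC-corners idea; the projection matrix below is made up, row-vector convention as before):

```python
import numpy as np

def far_plane_corners(inv_view_proj):
    """Unproject the four far-plane corners from NDC into world space."""
    out = []
    for x, y in [(-1.0, 1.0), (1.0, 1.0), (1.0, -1.0), (-1.0, -1.0)]:
        p = np.array([x, y, 1.0, 1.0]) @ inv_view_proj  # NDC z = 1 -> far plane
        out.append(p[:3] / p[3])                        # perspective divide
    return out

# Quick check: made-up 90-degree projection (near 1, far 100), identity view
n, f = 1.0, 100.0
proj = np.array([[1.0, 0.0, 0.0,              0.0],
                 [0.0, 1.0, 0.0,              0.0],
                 [0.0, 0.0, f / (f - n),      1.0],
                 [0.0, 0.0, -n * f / (f - n), 0.0]])
corners = far_plane_corners(np.linalg.inv(proj))
print(corners[1])  # roughly [100., 100., 100.]
```

With an actual view matrix folded into inv_view_proj these come out in world space, and the per-vertex ViewDir would then be corner minus CameraPosition.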
EDIT: Here are two screens showing the reconstructed positions from the above images:
Correct,
Wonky. You can see that the values are literally turning with the camera, but I don't understand why (I assume I need to adjust my frustum corners somehow?).
[Edited by - xycsoscyx on December 31, 2007 3:46:29 PM]