Getting view space coordinates for a pre-transformed vertex

Hi there! I'm currently trying to get a deferred renderer working, but I'm having serious problems getting correct view-space coordinates for the lighting pass. In the first pass I fill a texture with view-space depth values (the z component of each point in view space), but I seem to fail miserably at reconstructing the view-space coordinates in the second pass: enabling my light shader results in a pitch-black scene. Since the z component is read straight from the texture filled in the first pass, I really hope it's the x and y components that are wrong. As far as I understand it, I should be able to compute the coordinates of one point along the ray through the current pixel in view space (let's call that point p) and then do the following:
p.xy = (p.xy / p.z) * depthFromTexture;
p.z = depthFromTexture;
After that, p should contain the coordinates I need. However, I seem unable to calculate an initial set of values for p on that ray. So my question is: how can I calculate the view-space coordinates of a pre-transformed vertex, for example at the near plane, which I could use in the calculation above?
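
To make the question concrete, this is roughly the shader I'm trying to write (an untested sketch; invProjection and depthSampler are placeholder names, and the near-plane unprojection is exactly the part I'm unsure about):

// untested sketch; invProjection and depthSampler are placeholder names
float4x4 invProjection;   // inverse of the projection matrix
sampler depthSampler;     // view-space depth written in the first pass

float3 ReconstructViewPos(float2 ndcXY, float2 texCoord)
{
    // unproject a point on the near plane (z = 0 in D3D clip space)
    // back into view space
    float4 p = mul(float4(ndcXY, 0.0, 1.0), invProjection);
    p.xyz /= p.w;

    // read the stored view-space depth and slide along the ray
    float depthFromTexture = tex2D(depthSampler, texCoord).r;
    p.xy = (p.xy / p.z) * depthFromTexture;
    p.z = depthFromTexture;
    return p.xyz;
}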

I think it is a bit more complex than that.

To convert the coordinates back, I think you will need to multiply them by the inverse projection matrix.

The logic should be something like the following:
// normalized window coords (might already have these in the shader);
// wx, wy are window-space pixel coordinates, view_* is the viewport
nx = (wx - view_x) / view_w;
ny = (wy - view_y) / view_h;

// remap coords to -1..1 range (including depth)
nx = nx * 2 - 1;
ny = ny * 2 - 1;
d = d * 2 - 1;

// multiply by inverse projection matrix
pos = Multiply(InverseProjection, vec4(nx, ny, d, 1.0));

// final 3d vector position
pos.x /= pos.w;
pos.y /= pos.w;
pos.z /= pos.w;

I haven't implemented this myself (I will soon, though), so I may be incorrect; the logic is from memory, so someone correct me if I am wrong.
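
In HLSL I imagine it would look roughly like this (again just a sketch from memory; windowPos in [0,1] and invProjection are assumed inputs):

// sketch only: unproject a pixel back to view space via the
// inverse projection matrix
float3 UnprojectToView(float2 windowPos, float d, float4x4 invProjection)
{
    // remap to the [-1,1] range (including depth, assuming GL-style NDC)
    float3 ndc = float3(windowPos, d) * 2.0 - 1.0;

    // multiply by the inverse projection matrix (row-vector convention)
    float4 pos = mul(float4(ndc, 1.0), invProjection);

    // final 3D position after the divide by w
    return pos.xyz / pos.w;
}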

Hope that helps

I guess you're partially right.
Mapping relative window coordinates to projection space and using the inverse projection matrix should work to get one point along the ray in view space.
However, I don't understand why I should map the depth from [0,1] to [-1,1]: the depth I stored is not in projection space, it's the depth in view space, which I normalized.
I use these equations to normalize and restore the view-space depth:

// map depth to [0,1]
texValue = (depth - fNear) / (fFar - fNear);

// restore depth
depth = texValue * (fFar - fNear) + fNear;
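
Or, as small HLSL helpers (just a sketch using the same fNear/fFar constants):

// map linear view-space depth to [0,1] for storage
float PackViewDepth(float viewZ, float fNear, float fFar)
{
    return (viewZ - fNear) / (fFar - fNear);
}

// restore linear view-space depth from the stored [0,1] value
float UnpackViewDepth(float texValue, float fNear, float fFar)
{
    return texValue * (fFar - fNear) + fNear;
}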


Using that value for depth, I thought I could find the point in view space along the ray through the current pixel by taking the point on that ray whose view-space z equals 'depth', which would be:

// windowPos is a float2 containing relative window positions in [0,1]
float3 p = float3(windowPos * 2 - 1, 0.0);
// now viewPos is a point on the ray from the camera "through the pixel"
float3 viewPos = mul(p, inv_ProjectionMatrix);
// follow the ray to the point with the stored depth
viewPos.xy = viewPos.xy * depth / viewPos.z;
viewPos.z = depth;

However, it seems that I don't have to do that "follow the ray" step at all. When I set the z-coordinate of p to 0.0 as I did above, I can just replace the z-component of viewPos with the depth stored in the texture. So I end up doing this:

// windowPos is a float2 containing relative window positions in [0,1]
float3 p = float3(windowPos * 2 - 1, 0.0);
// now viewPos is a point on the ray from the camera "through the pixel"
float3 viewPos = mul(p, inv_ProjectionMatrix);
viewPos.z = depth;


At least that works on the CPU, where I can debug and print the results to a file. Even though it works on the CPU, I don't have a clue why: I don't use an orthographic projection, so I shouldn't be able to just replace the z-component of viewPos and get correct results.
It doesn't seem to work on the GPU, though, as I get wrong lighting results when I use that viewPos.

So I'll give your suggestion a try and remap my stored depth value to [-1,1] before multiplying by the inverse projection matrix; perhaps that works :)

Still haven't got it to work properly.
The problem remains that I seem unable to calculate the view-space coordinates of the point visible at a pixel. A quick recap of the problem:
I have the x and y coordinates of the pixel in projection space (that is, in the range [-1,1]) and need to find the view-space coordinates of the corresponding point. The z-coordinate of the point in view space is stored in a texture and available.
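
In other words, I'm looking for a function like the following. My guess for a standard symmetric perspective projection would be to use the _11 and _22 entries of the projection matrix, but I'm not sure this is right (untested sketch):

// sketch: reconstruct a view-space position from NDC x/y in [-1,1]
// and the linear view-space z stored in the texture; assumes a
// symmetric perspective projection (row-vector convention), where
//   ndc.x = viewPos.x * P._11 / viewPos.z
//   ndc.y = viewPos.y * P._22 / viewPos.z
float3 ViewPosFromDepth(float2 ndc, float viewZ, float4x4 Projection)
{
    float3 viewPos;
    viewPos.x = ndc.x * viewZ / Projection._11;
    viewPos.y = ndc.y * viewZ / Projection._22;
    viewPos.z = viewZ;
    return viewPos;
}

(Depending on conventions, the sign of ndc.y may need flipping when it comes from window coordinates.)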

Any help? I think this problem has been solved many times before, but I can't find a good explanation of it. See the posts above for my thoughts so far, which all seem to be miserably wrong :(
Thanks for the help and for reading!

Sorry for posting once again, but I'm even more confused than I was yesterday.
Until today I thought that coordinates in projection space should be in the range [-1,1] for the x and y components and [0,1] for the z component. But I tried the following:
I defined a single point in world space and multiplied it by my view matrix, which yields reasonable results for the eye and lookAt vectors used. The result of this multiplication (i.e., the point in view space) I then multiplied by my projection matrix (which matches the projection matrix described in the DXSDK docs under "Projection Transform"). Now the coordinates should be in the ranges mentioned above, but they are not; they are way outside them, e.g. (32.404, 102.221, 1246.950) for a point that is definitely in view.
Still, using those matrices unchanged in the transformation pipeline works and gives the results I would expect.

Now, because the coordinates in projection space don't seem to be in this nice [-1,1] range for x and y, it's no wonder my projection-space-to-view-space transformation yields erroneous results.
It seems I first need to find the right projection-space coordinates for a pre-transformed vertex.
Can anybody help me clear that up?
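
One thought that just occurred to me: maybe what I'm missing is the homogeneous divide by w that the pipeline performs automatically after the projection multiply, something like this (sketch):

// clipPos is the homogeneous clip-space result of the projection multiply;
// only after dividing by w should x and y land in [-1,1]
// (and z in [0,1] for D3D)
float4 clipPos = mul(float4(viewPos, 1.0), ProjectionMatrix);
float3 ndc = clipPos.xyz / clipPos.w;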

