Why use viewspace normals for deferred shading

I recently read a bit of the paper that explains the deferred shading used in Stalker. It states that using viewspace normals has obvious benefits. Why is it obvious?
I believe their engine stores/stored viewspace XYZ positions, in which case it would make sense to also store viewspace normals.

The only other reason I can think of to use viewspace normals is so that you can store X and Y and then reconstruct Z by assuming it is always negative (or positive, depending on your handedness). However this leads to artifacts, since this won't always be true with a perspective projection. You can also store viewspace or worldspace normals as spherical coordinates, and just assume that rho = 1 (which is always true).
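To illustrate the two-component scheme, here's a minimal C++ sketch (helper names are made up; it assumes a convention where a visible normal's view-space z is positive):

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Only x and y go into the G-buffer; z is reconstructed on read.
    // This assumes the normal faces the camera, i.e. its view-space z
    // is positive (the sign depends on your handedness convention).
    Vec3 decodeNormalXY(float x, float y)
    {
        float z = std::sqrt(std::max(0.0f, 1.0f - x * x - y * y));
        return { x, y, z };
    }

    // Alternative: spherical coordinates with rho assumed to be 1,
    // which holds for any unit-length normal in any space.
    void encodeSpherical(const Vec3& n, float& theta, float& phi)
    {
        theta = std::atan2(n.y, n.x);
        phi   = std::acos(n.z);
    }

    Vec3 decodeSpherical(float theta, float phi)
    {
        float s = std::sin(phi);
        return { s * std::cos(theta), s * std::sin(theta), std::cos(phi) };
    }
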
Quote:Original post by MJP
The only other reason I can think of to use viewspace normals is so that you can store X and Y and then reconstruct Z by assuming it is always negative (or positive, depending on your handedness).
that's the point: gbuffer size is usually an issue, and you usually try to fit as much as possible into as little space as possible.
Quote:
However this leads to artifacts, since this won't always be true with a perspective projection.

that's not about perspective projection; it's not possible for the normal to point in the other Z direction, as that's how you determine a backface, which is not visible.

the problem arises because normals from normal maps might face in the negative z direction, since they don't represent the real geometry's facing. one simple way to hide this issue to some degree is to clamp the viewspace z of the normal to [0, 1] and normalize it again before you store xy. you'll barely notice it.
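something like this, as a rough C++ sketch (the function name is made up; it assumes the toward-the-camera direction is positive z):

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Clamp the normal's view-space z to [0, 1] and renormalize, so the
    // sqrt-based reconstruction never has to deal with a back-facing z.
    Vec3 clampNormalZ(Vec3 n)
    {
        n.z = std::min(std::max(n.z, 0.0f), 1.0f);
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
        return n; // store n.x and n.y in the G-buffer afterwards
    }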


Quote:Original post by Krypt0n
that's the point: gbuffer size is usually an issue, and you usually try to fit as much as possible into as little space as possible.


Right, of course, but Stalker doesn't do that: they store the normal as XYZ in an R16G16B16A16F surface. Plenty of details here.

Quote:Original post by Krypt0n
that's not about perspective projection; it's not possible for the normal to point in the other Z direction, as that's how you determine a backface, which is not visible.


It is absolutely possible. Insomniac talks a bit about the issue here, starting on page 11.

Also, the face normal isn't what's used for backface culling; it's the winding order of the triangle's vertices.


Quote:Original post by MJP...

oh, sorry dude, you're right on both points.

but if they store the complete normal, I don't really see the benefit of doing it in viewspace. worldspace is the way I'd go in this case.
well, positions are more easily reconstructed in view space. It follows that incident vectors are easier to compute in view space too, hence the need for view space normals.
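for example, in a rough C++ sketch (with the camera at the view-space origin):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // In view space the camera sits at the origin, so the incident
    // (view) vector at a shaded point is just the negated, normalized
    // position; no camera-position parameter is needed.
    Vec3 incidentVector(const Vec3& viewSpacePos)
    {
        float len = std::sqrt(viewSpacePos.x * viewSpacePos.x +
                              viewSpacePos.y * viewSpacePos.y +
                              viewSpacePos.z * viewSpacePos.z);
        return { -viewSpacePos.x / len,
                 -viewSpacePos.y / len,
                 -viewSpacePos.z / len };
    }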

In the end, it doesn't matter at all. Pick a space, and stick to it. All the rest is optimization, and is best done at the end, when you see all the implications.
Quote:Original post by harveypekar
well, positions are more easily reconstructed in view space. It follows that incident vectors are easier to compute in view space too, hence the need for view space normals.

In the end, it doesn't matter at all. Pick a space, and stick to it. All the rest is optimization, and is best done at the end, when you see all the implications.


is there a way to reconstruct position in screen space?
Do you mean camera space or post-projective space? Both are possible; the second one allows the use of hardware depth buffers.

Check here
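The gist of the post-projective route, as a rough C++ sketch (parameter names are mine; it assumes a D3D-style left-handed projection with depth in [0, 1] and view-space z positive into the screen):

    struct Vec3 { float x, y, z; };

    // Reconstruct the view-space position of a pixel from a hardware
    // depth buffer sample. ndcX/ndcY are the pixel's normalized device
    // coordinates in [-1, 1] and d is the sampled depth in [0, 1].
    // projA and projB are the projection-matrix terms chosen so that
    // d = projA + projB / viewZ; for a standard D3D projection that is
    // projA = zf / (zf - zn) and projB = -zn * zf / (zf - zn).
    Vec3 reconstructViewPos(float ndcX, float ndcY, float d,
                            float projA, float projB,
                            float tanHalfFovY, float aspect)
    {
        float viewZ = projB / (d - projA);                  // linearize depth
        float viewX = ndcX * tanHalfFovY * aspect * viewZ;  // undo x projection
        float viewY = ndcY * tanHalfFovY * viewZ;           // undo y projection
        return { viewX, viewY, viewZ };
    }
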
that's exactly what I was looking for! thank you my friend!
Ok, thanks for the answers!
