(Dis-)advantages of view space vs. world space in a deferred renderer


I was talking to a colleague about view-space vs. world-space storage in a deferred renderer, and he asked me whether I knew of an objective discussion of the two approaches. I have only gathered small bits from different sources, and some points have come up in this recent thread.

I have found examples of both approaches used in production environments.

View space

World space

I haven't bothered to look for more examples, but I'm under the impression that view space is more popular. I find it interesting though that both UE4 and Cry Engine 3 chose world space.

Here are the points I found most important, described from a "pro view space" point of view.

View space

+

  • Slightly faster to reconstruct position from depth
  • Better options for compressing normals
  • Smaller position values benefit from floating-point precision

-

  • View space normal not suitable for cubemap look-up, e.g. IBL/light probes of some sort

Seeing as all calculations are possible in world space, I guess it boils down to deciding whether the potential view-space advantages outweigh the fact that you have to fall back to world-space normals in some cases.
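For reference, the fallback itself is cheap: the rotation part of the view matrix is orthonormal, so its transpose rotates a stored view-space normal back into world space before the cubemap fetch. A minimal numpy sketch (the matrix values are purely illustrative):

[code]
import numpy as np

# Rotation part of a hypothetical world-to-view matrix (orthonormal).
view_rot = np.array([[ 0.0, 0.0, 1.0],
                     [ 0.0, 1.0, 0.0],
                     [-1.0, 0.0, 0.0]])

n_view = np.array([0.0, 0.0, 1.0])  # normal as stored in the G-buffer (view space)

# For a pure rotation the inverse is the transpose, so one extra 3x3
# multiply per pixel recovers the world-space normal for the IBL lookup.
n_world = view_rot.T @ n_view
print(n_world)  # -> [-1.  0.  0.]
[/code]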

Do you know of some sources that discuss these questions? Can you think of more arguments for the world-space normals that UE4/CryEngine seem to favor?


What do you mean by approach? I think all engines use both, whichever suits a certain feature best.

E.g. the G-buffer reconstructs the position from the depth only, so why should you save the world-space position (3 components vs. 1 component)? And if you save the position in view space, why should you save the normal in world space?

Some features might use the world-space normal, e.g. to traverse voxels given in world space (both UDK and CE use voxel-based GI approaches); I think in that case they just convert the necessary vectors into world space to handle the voxels better. Therefore I think there's no general rule other than that a view-space G-buffer is often more suitable (more compact, easier to hide normal artifacts, faster to decode), and that world space vs. view space for the calculations depends on each individual algorithm and might change from shader to shader.

In the Elemental presentation they talk only about world-space particles and voxels (as far as I can see), because these data structures are often more stable and survive most camera changes until you need to rebuild/modify them.

Reconstructing the world-space position or the view-space position from depth both take the same time (one is a mul and one is a madd).
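A minimal sketch of that, assuming linear depth is available in the G-buffer and a per-pixel view ray is interpolated from the frustum corners (all names and values here are illustrative):

[code]
import numpy as np

linear_z = 12.5                            # linear depth sampled from the G-buffer
view_ray = np.array([0.25, -0.10, 1.0])    # ray through the pixel in view space (z = 1)

# View-space position: one multiply.
pos_view = view_ray * linear_z

# World-space position: the ray can be pre-rotated into world space, so the
# per-pixel cost is a multiply-add with the camera position.
cam_to_world = np.eye(3)                   # rotation part of the inverse view matrix
cam_pos      = np.array([100.0, 5.0, -20.0])
world_ray    = cam_to_world @ view_ray
pos_world    = world_ray * linear_z + cam_pos
[/code]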

[quote name='B_old' timestamp='1357813275' post='5019821']View space normal not suitable for cubemap look-up, e.g. IBL/light probes of some sort[/quote]Yeah, so if the majority of your lighting requires world-space IBL lookups (and the rest can also be done in world-space), then storing world-space normals saves you from converting from view-to-world per pixel.

Likewise, if all your lighting operations can be done in view-space, then storing view-space normals lets you take advantage of the compression benefits.

If you've got some operations that have to be done in one space, and some that have to be done in another space, then you're stuck with a conversion per-pixel either way you go (unless you instead choose to spend more bandwidth to eliminate the conversion...), in which case I'd go with the one that saves the most bandwidth by default.

Any opinions on the precision of world-space vs. view-space positions? I assume this is only noticeable with rather large numbers.

Yeah in a space simulator you might notice the precision (but those games are already at such a scale where you don't really have a world-space -- you're forced to use many different spaces already to avoid precision issues). Note that positions are mainly used for attenuation calculations in lighting though, whereas normals are used to determine the shape of fiddly specular highlights, which would be more sensitive to loss of precision.

If you're working on an architecture where you can do fp16 math faster than fp32, then that would exaggerate the benefits of view-space's compacted positions.
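The precision side of that trade-off is easy to check with half floats; a quick numpy check with purely illustrative numbers:

[code]
import numpy as np

world_x = 4096.7   # a coordinate far from the world origin
view_x  = 3.7      # the same point is usually close to the camera in view space

# Round-trip through fp16 and compare against the original value.
print(float(np.float16(world_x)) - world_x)  # ~ -0.7    (fp16 spacing is 4.0 at this magnitude)
print(float(np.float16(view_x))  - view_x)   # ~ -0.0008 (far finer spacing near the origin)
[/code]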

I think I read in one of Crytek's papers that although they use world-space normals, they actually move the positional origin to be centred at the camera. So it's not really "world-space"; it's the rotational basis from world-space, and the translation from view-space -- which gives them the same positional precision benefits.
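In other words, the stored positions keep the world-space axes but are measured relative to the camera. A rough sketch of the idea (not taken from any engine's actual code):

[code]
import numpy as np

cam_pos = np.array([12000.0, 350.0, -8000.0])   # camera far from the world origin
p_world = np.array([12003.2, 351.0, -7998.5])   # a surface point near the camera

# "Camera-centred world space": world-space orientation, view-space-sized values.
p_stored = p_world - cam_pos                    # small numbers -> good float precision
n_stored = np.array([0.0, 1.0, 0.0])            # normals stay in plain world space

# The absolute position can be recovered whenever an algorithm really needs it.
p_reconstructed = p_stored + cam_pos
[/code]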

Also, IIRC, Crytek use 24-bit "best fit normal" storage, whereas many of the view-space schemes store normals in two 16-bit components, which is actually larger!
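To make the size comparison concrete, a two-component 16-bit scheme looks roughly like the sketch below (plain x/y quantisation for simplicity; spheremap or octahedral mappings are common in practice, and the helper names are made up):

[code]
import numpy as np

def pack_normal_xy16(n):
    """Store view-space x,y as two unsigned 16-bit values (32 bits total)."""
    xy = np.clip(n[:2] * 0.5 + 0.5, 0.0, 1.0)         # [-1,1] -> [0,1]
    return np.round(xy * 65535.0).astype(np.uint16)

def unpack_normal_xy16(packed, z_sign=-1.0):
    """Reconstruct z from x,y; the sign has to come from a convention or elsewhere."""
    xy = packed.astype(np.float64) / 65535.0 * 2.0 - 1.0
    z = z_sign * np.sqrt(max(0.0, 1.0 - xy[0] ** 2 - xy[1] ** 2))
    return np.array([xy[0], xy[1], z])

n = np.array([0.3, 0.4, -np.sqrt(0.75)])              # unit view-space normal
print(unpack_normal_xy16(pack_normal_xy16(n)))        # close to n, but costs 32 bits vs. 24
[/code]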

That's some interesting information.

Makes it even harder to decide.

[quote name='Hodgman' timestamp='1357823173' post='5019853']
I think I read in one of Crytek's papers that although they use world-space normals, they actually move the positional origin to be centred at the camera. So it's not really "world-space"; it's the rotational basis from world-space, and the translation from view-space -- which gives them the same positional precision benefits.[/quote]

This is roughly what I do for rendering space/planet scenes. The ranges are such that you really need the extra precision for positions, while normal compression buys me a bit of (much-needed) bandwidth.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

[quote name='swiftcoder']This is roughly what I do for rendering space/planet scenes. The ranges are such that you really need the extra precision for positions, while normal compression buys me a bit of (much-needed) bandwidth.[/quote]

May I ask in what space and what compression you store your normals?

[quote name='B_old' timestamp='1357825068' post='5019858']
May I ask in what space and what compression you store your normals?[/quote]

Currently it's just view-space X,Y (with Z reconstructed on the fly). There are a few edge cases to the Z reconstruction though, so I'll have to switch to a fancier scheme at some point.
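One candidate for such a fancier scheme is an octahedral mapping, which covers the whole sphere and therefore avoids the z-sign ambiguity (view-space normals aren't guaranteed to point towards the camera at grazing angles under a perspective projection). A rough numpy sketch of the idea, not the actual code used here:

[code]
import numpy as np

def oct_encode(n):
    """Map a unit vector to two values in [-1, 1] via the octahedral mapping."""
    n = n / np.sum(np.abs(n))                  # project onto the octahedron |x|+|y|+|z| = 1
    if n[2] < 0.0:                             # fold the lower hemisphere over
        return (1.0 - np.abs(n[[1, 0]])) * np.sign(n[:2])
    return n[:2]

def oct_decode(e):
    z = 1.0 - np.abs(e[0]) - np.abs(e[1])
    xy = e if z >= 0.0 else (1.0 - np.abs(e[[1, 0]])) * np.sign(e)
    v = np.array([xy[0], xy[1], z])
    return v / np.linalg.norm(v)

n = np.array([0.3, 0.4, -np.sqrt(0.75)])
print(oct_decode(oct_encode(n)))               # ~ [0.3, 0.4, -0.866]
# (A production version also needs to treat sign(0) as +1.)
[/code]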

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

