
(Dis-)advantages of view space vs. world space in a deferred renderer


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
8 replies to this topic

#1 B_old   Members   -  Reputation: 644


Posted 10 January 2013 - 04:21 AM

I was talking to a colleague about view space vs. world space storage in a deferred renderer and he asked me whether I know about an objective discussion of the two approaches. I have only gathered small bits from different sources and some points have come up in this recent thread.

I have found examples of both approaches used in production environments.

 

 

View space

World space

 

I haven't bothered to look for more examples, but I'm under the impression that view space is more popular. I find it interesting though that both UE4 and Cry Engine 3 chose world space.

 

Here is the information I found most important, described from a "pro view space" point of view.

 

View space

+

  • Slightly faster to reconstruct position from depth
  • Better possibilities to compress normals
  • Smaller values for position benefit from floating point precision 

-     

  • View space normal not suitable for cubemap look-up, e.g. IBL/light probes of some sort

Seeing as all calculations are possible in world space, I guess it boils down to deciding whether the potential view space advantages outweigh the fact that you have to fall back to world space normals in some cases.

 

Do you know of any sources that discuss these questions? Can you think of more arguments for world space normals, which seem to be favored by UE4/Cry Engine?


Edited by B_old, 10 January 2013 - 06:27 AM.



#2 Ashaman73   Crossbones+   -  Reputation: 6735


Posted 10 January 2013 - 05:12 AM

What do you mean by approach? I think that all engines use both, whichever suits a certain feature best.

E.g. the g-buffer reconstructs the position from the depth alone, so why would you save the world space position (3 components vs. 1 component)? And if you save the position in view space, why would you save the normal in world space?

 

Some features might use the world space normal, e.g. to traverse voxels given in world space (both UDK and CE use voxel-based GI approaches); I think that in this case they just convert the necessary vectors into world space to handle the voxels better. Therefore I think there's no general rule, other than that a view space g-buffer is often more suitable (more compact, easier to hide normal artifacts, faster to decode), and that world space vs. view space calculation depends on each target algorithm and might change from shader to shader.

 

In the elemental presentation they talk only about world space particles and voxels (as far as I can see), because these data structures are often more stable and survive most camera changes until you need to rebuild/modify them.



#3 Hodgman   Moderators   -  Reputation: 27841


Posted 10 January 2013 - 05:26 AM

Reconstructing the world-space position or the view-space position from depth both take the same time (one is a mul and one is a madd).
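To make that cost comparison concrete, here is a sketch of both reconstructions in plain NumPy; the frustum-ray setup and all names here are illustrative, not taken from any particular engine:

```python
import numpy as np

def view_space_position(ray_vs, depth):
    """View-space reconstruction: one multiply per component.
    ray_vs is the interpolated view ray, scaled so that ray_vs.z == 1."""
    return ray_vs * depth

def world_space_position(ray_ws, depth, cam_pos):
    """World-space reconstruction: one multiply-add per component.
    ray_ws is the same ray rotated into world space."""
    return ray_ws * depth + cam_pos

# Toy example: a ray through one pixel, linear depth of 10 units
ray_vs = np.array([0.25, -0.1, 1.0])
cam_pos = np.array([100.0, 5.0, -20.0])
rot = np.eye(3)                 # identity camera rotation, for simplicity
ray_ws = rot @ ray_vs

p_vs = view_space_position(ray_vs, 10.0)            # [2.5, -1.0, 10.0]
p_ws = world_space_position(ray_ws, 10.0, cam_pos)  # adds the camera offset
```

Per component that is exactly one `mul` versus one `madd`, which is why the two are effectively the same speed.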

View space normal not suitable for cubemap look-up, e.g. IBL/light probes of some sort

Yeah, so if the majority of your lighting requires world-space IBL lookups (and the rest can also be done in world-space), then storing world-space normals saves you from converting from view-to-world per pixel.
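That per-pixel conversion is just a 3x3 rotation. A minimal sketch, assuming a pure-rotation view matrix (names hypothetical):

```python
import numpy as np

def view_to_world_normal(n_vs, view_rot):
    """Rotate a view-space normal back into world space.
    view_rot is the 3x3 rotation part of the view matrix (world -> view);
    for a pure rotation, its transpose is its inverse."""
    return view_rot.T @ n_vs

# 90-degree rotation about Z as a stand-in view rotation
view_rot = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])
n_ws = np.array([1.0, 0.0, 0.0])
n_vs = view_rot @ n_ws                             # what the g-buffer would store
recovered = view_to_world_normal(n_vs, view_rot)   # back to [1, 0, 0]
```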

 

Likewise, if all your lighting operations can be done in view-space, then storing view-space normals lets you take advantage of the compression benefits.

 

If you've got some operations that have to be done in one space, and some that have to be done in another space, then you're stuck with a conversion per-pixel either way you go (unless you instead choose to spend more bandwidth to eliminate the conversion...), in which case I'd go with the one that saves the most bandwidth by default.



#4 B_old   Members   -  Reputation: 644


Posted 10 January 2013 - 06:45 AM

Any opinion on precision of world space position vs. view space positions? I assume this is only noticeable with rather large numbers.



#5 Hodgman   Moderators   -  Reputation: 27841


Posted 10 January 2013 - 07:06 AM

Yeah in a space simulator you might notice the precision (but those games are already at such a scale where you don't really have a world-space -- you're forced to use many different spaces already to avoid precision issues). Note that positions are mainly used for attenuation calculations in lighting though, whereas normals are used to determine the shape of fiddly specular highlights, which would be more sensitive to loss of precision.
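As a rough illustration of the scale issue: the spacing between adjacent representable 32-bit floats (one ULP) grows with magnitude, which you can check directly in NumPy:

```python
import numpy as np

# One ULP of a 32-bit float at two magnitudes a position buffer might hold:
near_origin = np.spacing(np.float32(10.0))       # ~9.5e-07: sub-micrometre steps
far_away    = np.spacing(np.float32(100_000.0))  # ~7.8e-03: roughly 8 mm steps
```

At 100 km from the world origin, adjacent float32 positions are almost 8 mm apart, so camera-relative (or view-space) positions keep values small and precise where it matters.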

 

If you're working on an architecture where you can do fp16 math faster than fp32, then that would exaggerate the benefits of view-space's compacted positions.

 

I think I read in one of Crytek's papers that although they use world-space normals, they actually move the positional origin to be centred at the camera. So it's not really "world-space"; it's the rotational basis from world-space, and the translation from view-space -- which gives them the same positional precision benefits.

 

Also, IIRC, Crytek use 24-bit "best fit normal" storage, whereas many of the view-space schemes store normals in two 16-bit components, which is actually larger!
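For reference, one popular two-component scheme is octahedral encoding; this is just an illustrative option, not necessarily the scheme being compared against above. A NumPy sketch:

```python
import numpy as np

def oct_encode(n):
    """Project a normal onto the octahedron and fold the lower hemisphere
    over the upper one, yielding two values in [-1, 1].
    (Note: np.sign(0) == 0, so exactly axis-aligned lower-hemisphere
    normals need extra care in production code.)"""
    n = n / np.abs(n).sum()
    if n[2] < 0.0:
        return (1.0 - np.abs(n[[1, 0]])) * np.sign(n[:2])
    return n[:2].copy()

def oct_decode(e):
    """Invert oct_encode and return a unit-length normal."""
    z = 1.0 - np.abs(e[0]) - np.abs(e[1])
    if z < 0.0:
        e = (1.0 - np.abs(e[::-1])) * np.sign(e)
    v = np.array([e[0], e[1], z])
    return v / np.linalg.norm(v)

# Round trip on a lower-hemisphere normal
n = np.array([0.2, 0.1, -0.9])
n /= np.linalg.norm(n)
assert np.allclose(oct_decode(oct_encode(n)), n)
```

The two encoded values can then be quantized to whatever bit depth the g-buffer budget allows, e.g. 2x12 bits would still be smaller than 24-bit best-fit normals.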


Edited by Hodgman, 10 January 2013 - 07:08 AM.


#6 B_old   Members   -  Reputation: 644


Posted 10 January 2013 - 07:23 AM

That's some interesting information.

Makes it even harder to decide.



#7 swiftcoder   Senior Moderators   -  Reputation: 9637


Posted 10 January 2013 - 07:33 AM

I think I read in one of Crytek's papers that although they use world-space normals, they actually move the positional origin to be centred at the camera. So it's not really "world-space"; it's the rotational basis from world-space, and the translation from view-space -- which gives them the same positional precision benefits.

This is roughly what I do for rendering space/planet scenes. The ranges are such that you really need the extra precision for positions, while normal compression buys me a bit of (much needed) bandwidth.


Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#8 B_old   Members   -  Reputation: 644


Posted 10 January 2013 - 07:37 AM

This is roughly what I do for rendering space/planet scenes. The ranges are such that you really need the extra precision for positions, while normal compression buys me a bit of (much needed) bandwidth.

May I ask in what space and what compression you store your normals?



#9 swiftcoder   Senior Moderators   -  Reputation: 9637


Posted 10 January 2013 - 10:51 AM

May I ask in what space and what compression you store your normals?

Currently it's just view-space X,Y (with Z reconstructed on the fly). There are a few edge cases to the Z reconstruction though, so I'll have to switch to a fancier scheme at some point.
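That reconstruction presumably looks something like the sketch below (names and sign convention are assumptions). The edge case is that under perspective projection, visible surfaces at grazing angles can have view-space normals whose Z has the opposite sign, which a single-root reconstruction cannot represent:

```python
import math

def decode_normal_z(x, y):
    """Reconstruct a unit view-space normal from its stored X and Y,
    assuming the hidden Z component is the non-negative (camera-facing)
    root. Grazing-angle normals with the opposite Z sign are lost."""
    z_sq = max(0.0, 1.0 - x * x - y * y)   # clamp against rounding error
    return (x, y, math.sqrt(z_sq))

n = decode_normal_z(0.6, 0.0)   # -> (0.6, 0.0, ~0.8)
```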


Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]




