Lighting space

Started by
6 comments, last by turanszkij 6 years, 2 months ago

From a theoretical point of view, it does not really matter in which space, or coordinate system, you evaluate the rendering equation. Without introducing artificial spaces, you can choose among object, world, camera and light space. Since the number of objects and lights in typical scenes is far larger than the number of cameras, object and light space can be ruled out immediately. If you selected object space, all the lights (whose number is dynamic) would need to be transformed to object space in the pixel shader. If you selected light space, all objects would need to be transformed to each light's space in the pixel shader. Both cases clearly waste precious resources on "useless" transformations at a per-fragment level.

So world and camera space remain. Starting with a single camera in the scene, camera space seemed the most natural choice. Lights can be transformed just once in advance. Objects can be transformed inside the vertex shader (at a per-vertex level). Furthermore, a surface position in camera space is just the negated, unnormalized view direction used in BRDFs, so no offset calculations need to be performed: the camera is always located at the origin of its own space.

Given that you can use multiple cameras, each having its own viewport, the process repeats itself, but now for a different camera space. The data used in the shaders must be updated accordingly. For example, the object-to-camera transformation matrix should reflect the right camera. This implies many map/unmap invocations for object data. If, however, lighting is performed in world space instead of camera space, I could just allocate a constant buffer per object, update all the object data once per frame, and bind it multiple times in the case of multiple passes. Finally, given my current and possible future voxelization problems, world space seems more efficient than camera space. It is possible to use a hybrid of camera and world space, but this would involve many "useless" transforms back and forth, so I would rather stick to one space.

Given all this, I wonder whether camera space is still appealing. Even for LOD purposes, length(p_view) and length(p_world - eye_world) are pretty much equivalent with regard to performance.
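As a sanity check on that last claim: the world-to-camera transform is rigid (rotation plus translation), so it preserves distances and the two expressions are mathematically identical. A minimal sketch in plain Python (the camera pose and point are made up):

```python
import math

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def length(v):
    return math.sqrt(sum(x * x for x in v))

# Hypothetical camera: positioned at eye_world, rotated 90 degrees about Y.
eye_world = [3.0, 1.0, -2.0]
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
rot = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]  # world-to-view rotation

def world_to_view(p):
    d = sub(p, eye_world)  # translate the eye to the origin
    return [sum(rot[i][j] * d[j] for j in range(3)) for i in range(3)]

p_world = [10.0, 5.0, 7.0]
p_view = world_to_view(p_world)

# The view transform is rigid, so it preserves lengths:
assert abs(length(p_view) - length(sub(p_world, eye_world))) < 1e-9
```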



I agree with your conclusions and think that's a pretty common way to go. 

If you're dealing with a very large world, 32-bit floats lose enough precision a few kilometres away from the origin to cause moving lights to flicker as their positions are quantised. In that case, it's common to use an offset world space, where you move the "shading origin" to the camera's position from time to time.
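That quantisation is easy to reproduce on the CPU; a minimal sketch in plain Python, where f32 emulates 32-bit storage (the 10 km distance and 0.1 mm animation step are made-up numbers):

```python
import struct

def f32(x):
    # Round to the nearest 32-bit float, as a GPU register/buffer would store it.
    return struct.unpack('f', struct.pack('f', x))[0]

light_x = 10_000.0   # a light 10 km from the world origin
step = 0.0001        # a 0.1 mm animation step

# Plain world space: the float32 ulp at 10 km is ~0.001 m, so the tiny step
# is rounded away entirely and the light snaps/flickers instead of moving.
moved = f32(f32(light_x) + step)
print(moved == f32(light_x))  # True: the light did not move at all

# Offset world space: rebase positions relative to the camera before shading.
camera_x = 10_000.0
rebased = f32(light_x - camera_x)  # ~0.0, full float precision available
print(f32(rebased + step))         # ~0.0001: the step survives
```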

Also, going from object to world to camera has worse precision than going from object to camera directly. In something like a planetary / solar scale renderer you would notice this, so you would be back to updating constant data per object per frame just to get stable vertex positions, in which case you may as well do camera-space shading. 

Another older convention you missed is tangent space, where +Z is the surface normal and X/Y are the surface tangents. This was popular during the early normal mapping era, as you could transform your light and view directions into tangent space in the vertex shader, and then in the pixel shader the raw normal map sample is the shading normal (no TBN matrix multiply required per pixel). 
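The trick described above can be sketched without any shader code: rotate the light direction into the surface's TBN frame once per vertex, and the per-pixel work reduces to a dot product with the raw normal-map sample. A minimal sketch in plain Python (the frame and directions are made up):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical orthonormal surface frame in world space.
T = [1.0, 0.0, 0.0]   # tangent   -> tangent-space X
B = [0.0, 0.0, 1.0]   # bitangent -> tangent-space Y
N = [0.0, 1.0, 0.0]   # surface normal -> tangent-space +Z

L_world = [0.6, 0.8, 0.0]  # unit light direction in world space

# "Vertex shader": rotate L into tangent space (rows of the TBN matrix).
L_tangent = [dot(T, L_world), dot(B, L_world), dot(N, L_world)]

# "Pixel shader": the raw normal-map sample is already the shading normal,
# so no per-pixel TBN multiply is needed.
flat_sample = [0.0, 0.0, 1.0]        # an unperturbed normal-map texel
print(dot(flat_sample, L_tangent))   # equals dot(N, L_world), i.e. 0.8
```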

12 minutes ago, Hodgman said:

Another older convention you missed is tangent space, where +Z is the surface normal and X/Y are the surface tangents. This was popular during the early normal mapping era, as you could transform your light direction into tangent space in the vertex shader, and then in the pixel shader the raw normal map is the shading normal (no TBN matrix multiply required per pixel). 

Oh yeah, I missed that one. I currently use this to apply tangent-space normal mapping (with the surface normal in view space), though without pre-computation; the transform is done on the fly in the PS.

12 minutes ago, Hodgman said:

Also, going from object, to world, to camera has worse precision when going from object to camera directly. In something like a planetary / solar scale renderer, you would notice this, so would be back to updating constant data per object per frame. 

Currently, I have separate object-to-camera and camera-to-projection transforms inside my vertex shader. If I switch, object-to-world and world-to-projection are the "obvious" replacements. For precision, though, I had best keep camera-to-projection separate, so then I am back at the object-to-world, world-to-camera, camera-to-projection chain :)

 

What do you advise for typical "Unity3D-kind-of" games? All my code uses camera space, though I am tempted to switch to world space (which can also be changed more easily later on to offset world space, I guess). Especially the reduction in map/unmap invocations seems like a holy grail at the moment. Not that I have experienced any bottlenecks so far, but it still seems wasteful in the case of multiple passes.


Correct me if I am wrong (since I haven't worked with all of them), but I found on the web that Unity3D, Unreal Engine 4 and CryEngine 3 all perform shading in world space. Blender uses camera space, but has an explicit option to switch to world-space shading to be compatible with other engines.

Also, camera space can be better for compressing normals in the GBuffer, since all normals face the camera. On the other hand, you can probably see flickering specular highlights when using camera space. World space somehow seems more temporally coherent and robust by default.
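To illustrate the compression argument: if every visible view-space normal had a non-negative z component (pointing back towards the camera), you could store only x and y and reconstruct z. A minimal sketch in plain Python; the z >= 0 assumption is exactly what a naive scheme relies on:

```python
import math

def encode(n):
    # Store only x and y; assumes the view-space normal faces the camera (z >= 0).
    return (n[0], n[1])

def decode(xy):
    x, y = xy
    # Reconstruct z from unit length; the sign of z cannot be recovered.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return [x, y, z]

n = [0.6, 0.0, 0.8]        # a unit view-space normal facing the camera
print(decode(encode(n)))   # round trip recovers z ~ 0.8

# A normal with negative z decodes with the wrong sign, which is why this
# naive scheme fails whenever visible normals can point away from the camera.
bad = [0.6, 0.0, -0.8]
print(decode(encode(bad)))  # z comes back positive: sign is lost
```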

 


9 minutes ago, matt77hias said:

Also, camera space can be better for compressing normals in the GBuffer since all normals face the camera.

I think that is no longer true, though, if you are using a perspective projection, smooth normals or normal mapping.

For the topic though I would always go with world space instead of view space, because this way you can use the global set of light data for all passes using different cameras. 

Another contender would be texture-space shading, with a completely different approach: it can decouple shading from the screen resolution as well as from the update/render loop frequency. See this: https://gpuopen.com/texel-shading/

21 minutes ago, turanszkij said:

For the topic though I would always go with world space instead of view space, because this way you can use the global set of light data for all passes using different cameras. 

Indeed. Though, on the other hand, you can apply light culling per camera. Furthermore, shadow maps, tiling and clustering need to be regenerated per camera anyway.

But I was already convinced :) (Though, given the many refactorings waiting ahead, I would rather have woken up with this idea than gone to sleep with it :o )

3 minutes ago, matt77hias said:

Furthermore, shadow maps, tiling and clustering need to be regenerated per camera anyway.

An indirection with a light index list (tiled and clustered shading) actually makes a lot of sense. You just need to regenerate the index list. Without an index list, you would need to remove the culled lights themselves.
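A minimal sketch of the index-list idea in plain Python (a 1-D "screen" of tiles and made-up lights): the global light array is shared by all cameras, and only the small per-tile index list is rebuilt when the camera, and hence the culling, changes:

```python
# Global light array: world-space x position and radius. It is shared by all
# cameras and would be uploaded once per frame.
lights = [
    {"x": 2.0, "radius": 1.5},
    {"x": 9.0, "radius": 2.0},
    {"x": 14.0, "radius": 1.0},
]

TILE_SIZE = 4.0

def build_index_list(num_tiles, view_offset):
    """Cull lights per tile for one camera; view_offset models its frustum."""
    per_tile = []
    for t in range(num_tiles):
        lo = view_offset + t * TILE_SIZE
        hi = lo + TILE_SIZE
        # Keep the index of every light whose interval overlaps the tile.
        per_tile.append([i for i, l in enumerate(lights)
                         if l["x"] + l["radius"] >= lo
                         and l["x"] - l["radius"] <= hi])
    return per_tile

print(build_index_list(3, 0.0))  # camera A: [[0], [1], [1]]
print(build_index_list(3, 6.0))  # camera B: [[1], [1, 2], [2]]
```

Only the index lists differ between the two cameras; the light data itself is untouched.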

(Note that this is a funny aspect of gamedev.net: I have a quote to myself in the same post :D )


7 minutes ago, matt77hias said:

An indirection with a light index list (tiled and clustered shading) makes actually lots of sense. You just need to regenerate the index list. Without an index list, you need to remove the culled lights.

Actually, right now I like to use light indices for every kind of shading (deferred, forward, tiled...). I can update one huge entity array once (which also contains the lights); then every shader that needs any entity just needs an index offset instead of a whole updated structure. From an old-school deferred shading standpoint, though, the shader would be faster if it loaded from a small constant buffer instead of indexing into a huge entity array. I still need to make a decision on that, but at least this way all the shading paths are a bit more unified and easier to manage.

This topic is closed to new replies.
