Questions about advanced lighting techniques (IBL, SH)

Started by
1 comment, last by Lightness1024 11 years, 3 months ago
Hey guys,
For the last couple of days I've been reading about image-based lighting (IBL), spherical harmonics (SH), and precomputed radiance transfer (PRT), trying to get a grip on how these techniques are used and implemented.

Now that I seem to be getting closer to understanding how all of this works, a few questions have popped up, and I hoped you guys could help me with them...

So here I go:

1. If I understood correctly, both of these general ideas try to approximate indirect lighting from the environment.
But when I read about spherical harmonic lighting, people still talk about using an environment map to precompute parts of the lighting equation and storing the result via SH, which is then what you'd call spherical harmonic lighting. If you use an "image" to compute the incoming radiance, how does that differ from "image-based lighting"? Is it the same thing, or a combination of both? There's also talk about "prefiltering" an environment map(?)

2. When people talk about IBL, they mostly mean indirect lighting using an environment probe, but this is where some confusion comes in, since the meaning of IBL seems to differ quite a lot: there's distant IBL and local IBL. I understand that distant IBL would only make sense when rendering the hemisphere (the sky) to simulate sky lighting on the scenery. Another meaning of IBL seems to be "glossy reflections", but as far as I can tell, isn't that just good old environment mapping? Or is it special because a "local" technique is being used? If not, how does it differ, and how does it work?

3. Now for something more practical: if I were to implement image-based lighting in my application, the first thing to do would be placing environment probes in the scene. Those render the final scene image of the surrounding area into six render targets as a precompute pass, correct? That basically means the reflections I'm getting for everything will be completely static, won't they?
I've read somewhere that you could render a single probe at your player's position (in real time?) to solve this problem.
But where exactly would I place it? On top of the player's head?

4. This is a design question. In my deferred renderer I have a handful of material types predefined, e.g. default, skin, sky, ...
Should I just assume that everything is reflective to some degree, or would I need to define a material for this? If so, I'd also need another variable telling the lighting shader how reflective the material is. Is there a good way around this? I've never seen a G-buffer layout with this kind of parameter packed somewhere.

5. Finally, which advanced or indirect lighting approximation technique would you recommend reading up on and trying to implement? Is PRT worth getting into? I'm planning to implement dynamic time-of-day changes later on, so something dynamic, at least for the sky, would be quite nice.
1. SH can be used as a structure to store data that varies around a sphere, much like a cube map. It's basically a 'frequency space cube map'.
Because of the above, it's not explicitly tied to any particular lighting algorithm (you can use cube maps for infinitely many different purposes). It just turns out that SH is sometimes a good tool for storing light in an IBL system.
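To make the "frequency-space cube map" idea concrete, here is a minimal sketch (my own illustration, not from either post) of projecting a spherical function onto the first two real SH bands via Monte Carlo integration, then evaluating the smoothed reconstruction. The basis constants are the standard real SH normalization factors; the function names are hypothetical.

```python
import math
import random

def sh_basis(x, y, z):
    """Real SH basis, bands 0 and 1 (4 coefficients), for a unit direction."""
    return (0.282095,        # Y_0^0
            0.488603 * y,    # Y_1^-1
            0.488603 * z,    # Y_1^0
            0.488603 * x)    # Y_1^1

def project_sh(f, n_samples=100000, seed=1):
    """Monte Carlo projection of a spherical function onto 4 SH coefficients."""
    rng = random.Random(seed)
    coeffs = [0.0, 0.0, 0.0, 0.0]
    for _ in range(n_samples):
        # Draw a uniformly distributed direction on the sphere.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        x, y = r * math.cos(phi), r * math.sin(phi)
        fv = f(x, y, z)
        for i, b in enumerate(sh_basis(x, y, z)):
            coeffs[i] += fv * b
    # Each sample stands for 4*pi / N steradians of solid angle.
    w = 4.0 * math.pi / n_samples
    return [c * w for c in coeffs]

def eval_sh(coeffs, x, y, z):
    """Reconstruct the (band-limited) function in a given direction."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(x, y, z)))

# Example: a 'sky light' that is brightest toward +Z and dark below.
sky = lambda x, y, z: max(0.0, z)
c = project_sh(sky)
top = eval_sh(c, 0.0, 0.0, 1.0)  # ~0.75, not 1.0: low-order SH smooths the peak
```

Note how only 4 numbers capture a blurred version of the whole sphere; that lossy smoothing is exactly why low-order SH works well for diffuse lighting and poorly for sharp reflections.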

2. Yes, you can think of traditional 'environment mapping' as a way to produce Phong-specular lighting via IBL. IBL is any technique where your light is sourced from an image.
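For reference, the "environment mapping as specular IBL" case boils down to mirroring the view/incident direction about the surface normal and using the result as the cube-map lookup direction. A small sketch (the function name is my own; the formula is the standard GLSL-style reflect):

```python
def reflect(incident, normal):
    """Mirror an incident direction about a unit surface normal:
    R = I - 2*(N.I)*N, where I points toward the surface.
    R is then used as the cube-map lookup direction."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

# A ray coming straight down onto an upward-facing surface bounces straight up.
r = reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))  # -> (0.0, 1.0, 0.0)
```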

3. Yes, precomputed lighting will be static. For dynamic probes you'd usually use the centre of the object that you're collecting light for.

4. Your 'specular mask' (usually present in a GBuffer) is a reflectance coefficient.
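So no extra G-buffer channel is strictly needed: the existing specular mask can be read as "how reflective is this pixel". A quantized 8-bit channel, as a G-buffer would store it, could be round-tripped like this (a sketch; the helper names are hypothetical):

```python
def pack_unorm8(value):
    """Quantize a [0,1] reflectance coefficient into one 8-bit G-buffer channel."""
    v = min(max(value, 0.0), 1.0)  # clamp to the representable range
    return int(round(v * 255.0))

def unpack_unorm8(byte):
    """Recover the approximate [0,1] value in the lighting shader."""
    return byte / 255.0

# Worst-case quantization error is half a step: 0.5/255.
stored = pack_unorm8(0.3)
recovered = unpack_unorm8(stored)  # ~0.3, within 1/510
```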

5. At the moment, I'm excited by deferred irradiance volumes:
http://codeflow.org/entries/2012/aug/25/webgl-deferred-irradiance-volumes/

[quote name='lipsryme' timestamp='1357075631' post='5016475']
1. If I understood correctly, both of these general ideas try to approximate indirect lighting from the environment.
But when I read about spherical harmonic lighting, people still talk about using an environment map to precompute parts of the lighting equation and storing the result via SH, which is then what you'd call spherical harmonic lighting. If you use an "image" to compute the incoming radiance, how does that differ from "image-based lighting"? Is it the same thing, or a combination of both? There's also talk about "prefiltering" an environment map(?)

[/quote]

Like Hodgman said, SH is a tool. In this case it happens to be efficient not only at storing the result but also at performing the convolution.

The filter you are talking about is a necessary calculation: it computes the average (weighted by the Lambert term) of the light arriving at a point with a given surface normal.

For example, a flat ground receives energy from the whole upper hemisphere; the point that matters most is the one exactly at the top pole, but roughly, to light the ground you need to sum up all points of the hemisphere. This is the convolution that SH can perform very quickly. There is a tool for this convolution, AMD CubeMapGen: you will see it is extremely slow when working in spatial space, but much faster in frequency space (in the version modified by Lagarde).
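The reason the frequency-space version is so fast: convolving radiance with the clamped-cosine (Lambert) lobe is just a per-band scale of the SH coefficients, using the well-known band factors A0 = pi, A1 = 2*pi/3, A2 = pi/4. A minimal sketch (function names are my own):

```python
import math

# Per-band scale factors for convolution with the clamped-cosine lobe.
A = [math.pi, 2.0 * math.pi / 3.0, math.pi / 4.0]

def convolve_lambert(radiance_sh9):
    """Turn 9 SH radiance coefficients (bands 0..2) into irradiance
    coefficients. In frequency space the hemisphere integral collapses
    to one multiply per coefficient."""
    bands = [0, 1, 1, 1, 2, 2, 2, 2, 2]  # which band each coefficient belongs to
    return [c * A[bands[i]] for i, c in enumerate(radiance_sh9)]

# Sanity check: constant unit radiance projects to c00 = 2*sqrt(pi) only;
# the resulting irradiance in any direction should be pi.
c = [2.0 * math.sqrt(math.pi)] + [0.0] * 8
e = convolve_lambert(c)
irradiance = e[0] * 0.282095  # evaluate with Y_0^0 -> ~pi
```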

So, you can indeed do IBL with env maps or SH interchangeably; however, if you don't convolve, you cannot do diffuse lighting, only pure reflective lighting (perfect speculars).

That is one huge disadvantage of precomputed methods: you cannot easily have varying specular coefficients. In the literature we mostly find preparations for diffuse surfaces (hence the Lambert term in the convolution).


[quote]
3. Now for something more practical: if I were to implement image-based lighting in my application, the first thing to do would be placing environment probes in the scene. Those render the final scene image of the surrounding area into six render targets as a precompute pass, correct? That basically means the reflections I'm getting for everything will be completely static, won't they?

[/quote]

That's correct, but you can re-render it at any time: once per frame, for example, or tile by tile over a few frames...
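Spreading the refresh over frames usually means round-robining the six cube faces, one per frame, so a full probe update completes every six frames. A toy scheduler to illustrate the idea (class and method names are hypothetical):

```python
class ProbeUpdater:
    """Amortize a dynamic environment-probe refresh: re-render one
    cube-map face per frame instead of all six at once."""
    FACES = ("+X", "-X", "+Y", "-Y", "+Z", "-Z")

    def __init__(self):
        self.next_face = 0
        self.updated = []  # faces rendered so far, for illustration

    def tick(self, render_face):
        """Call once per frame; render_face(face) re-renders one face."""
        face = self.FACES[self.next_face]
        render_face(face)
        self.updated.append(face)
        self.next_face = (self.next_face + 1) % len(self.FACES)

# After six frames every face has been refreshed exactly once.
u = ProbeUpdater()
for _ in range(6):
    u.tick(lambda face: None)  # stand-in for the actual face render
```

The trade-off is latency: fast-moving objects or lights can be up to six frames stale in the reflection.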


[quote]
I've read somewhere that you could render a single probe at your player's position (in real time?) to solve this problem.
But where exactly would I place it? On top of the player's head?
[/quote]

At the camera position is a common technique, but using that env map for objects, particularly distant ones, gives very wrong results. However, it has the advantage of requiring only one cube map, which allows dynamic re-rendering. If every object needed its own probe (to be realistic), they could only be static, because there would be too many to re-render.


[quote]
5. Finally, which advanced or indirect lighting approximation technique would you recommend reading up on and trying to implement? Is PRT worth getting into? I'm planning to implement dynamic time-of-day changes later on, so something dynamic, at least for the sky, would be quite nice.
[/quote]

Screen-Space Local Reflections, Sparse Voxel Cone Tracing, Light Propagation Volumes, Radiance Hints, Imperfect Shadow Maps, and many more.

