Difference between IBL and old-school environment mapping?


I was just going through some old-school OpenGL articles[0], and I wondered what the relationship is between the old full-mirror cubemap environment mapping and contemporary IBL. Is it simply that nowadays we use mipmaps to light rough objects, whereas back in the day everything was rendered as a perfect mirror? Is it that the environment maps are now 'pre-convolved' (not entirely sure what that means; it seems to involve integration)?

Or in modern IBL are there many local cubemaps sampled instead of one? Or is it that back in the day they used the raw cubemap pixels as-is and nowadays we use the cubemap pixels as input to conventional specular-diffuse lighting models? I am truly confused!

I guess what I'm getting at is: why does old-school environment mapping look so crappy, while Marmoset Toolbag looks so great? They seem to be using the same raw material of environment cubemaps.

Thanks!

[0] http://www.nvidia.com/object/cube_map_ogl_tutorial.html


Well, there's specular and diffuse lighting. You can't use an old full-mirror cubemap for diffuse lighting, because each lookup only gives you the incoming light from a single direction (the mirror reflection direction). It's used for specular, and you can utilize the different mip levels to add roughness to your specular reflections.
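To make that concrete, here's a minimal sketch (Python/numpy rather than shader code, just for illustration) contrasting the two lookups. sample_cubemap_lod is a hypothetical textureLod-style fetch, and the linear roughness-to-mip mapping is an assumption:

```python
import numpy as np

def reflect(incident, normal):
    # Mirror-reflect an incident direction about the (unit) surface normal.
    return incident - 2.0 * np.dot(incident, normal) * normal

def mirror_and_rough_lookup(sample_cubemap_lod, n, v, roughness, mip_count):
    # 'Old-school' full mirror: always sample the sharpest mip (0).
    r = reflect(-v, n)                       # v points from surface to eye
    mirror = sample_cubemap_lod(r, 0.0)
    # Roughness-aware IBL: blurrier mips stand in for rougher reflections.
    rough = sample_cubemap_lod(r, roughness * (mip_count - 1))
    return mirror, rough
```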

Diffuse, however, requires you to gather all the incoming light and integrate it, weighted by a function called a BRDF. This is what convolving is. It's an expensive process, since each pixel of the irradiance map factors in every pixel of the radiance map (unless you use importance sampling). Pre-convolving just means performing that process offline rather than in real time.
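To make 'convolving' a bit more concrete, here's a minimal Monte Carlo sketch of what computing a single texel of the irradiance map involves, assuming a Lambert diffuse BRDF. The sample_radiance function is a hypothetical lookup into the source (radiance) cubemap:

```python
import numpy as np

def convolve_irradiance_texel(sample_radiance, normal, num_samples=1024):
    # Monte Carlo estimate of the diffuse convolution for one output
    # direction; this runs once per texel of the irradiance map.
    rng = np.random.default_rng(0)
    total = np.zeros(3)
    for _ in range(num_samples):
        d = rng.normal(size=3)              # uniform direction on the sphere
        d /= np.linalg.norm(d)
        cos_theta = np.dot(d, normal)
        if cos_theta > 0.0:                 # hemisphere above the surface only
            total += sample_radiance(d) * cos_theta
    # Uniform-sphere pdf is 1/(4*pi); the final 1/pi is the Lambert BRDF's
    # normalization, giving the value you'd store in the irradiance map.
    return total * (4.0 * np.pi) / num_samples / np.pi
```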

Here's a shot from a GPU Gems article that shows a radiance map (left) and the generated irradiance map (right):

[Image: a radiance map alongside the irradiance map generated from it]

I'm relatively new to the whole topic so some details above may be incorrect, but that's my understanding.

Should you use two cubemaps, one for the diffuse and one for the specular?

Yes. While it may look like the irradiance map is just a mip-mapped/blurred radiance map, that's not the case. Each texel of the irradiance map was computed by a BRDF-weighted integral over all the light in the radiance map. Since the two maps are quite different and serve different functions, you have to keep them separate.

You don't necessarily have to ask your users for two maps though. You can always take the radiance map and generate an irradiance map from it.

This page from Unreal says:

The blurry versions are computed in a way that they can be used for specular lighting with varying glossiness (sharp vs. blurry reflections) and they can be used for diffuse lighting as well.

It's strange wording, but my guess is that a technical writer didn't put a lot of effort into that article. You might be able to get an acceptable result from using lower mip levels of your radiance map, but it won't be physically accurate.

You can store pre-convolved lighting info for specular in mips 0 through MipCount-2, and then store diffuse lighting in mip MipCount-1.

Often, diffuse light is stored in another type of data structure altogether though -- e.g. as spherical harmonics instead of a cubemap.
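A rough sketch of sampling that packed single-cubemap layout (Python-style again; cubemap_lod is a hypothetical trilinear fetch, and the linear roughness-to-mip mapping is just an assumption):

```python
import numpy as np

def sample_packed_ibl(cubemap_lod, n, v, roughness, mip_count):
    # Specular lives in mips 0 .. MipCount-2, indexed by roughness;
    # the very last mip (MipCount-1) holds the pre-convolved diffuse term.
    r = 2.0 * np.dot(n, v) * n - v           # mirror reflection of the view
    specular = cubemap_lod(r, roughness * (mip_count - 2))
    diffuse = cubemap_lod(n, mip_count - 1)  # diffuse sampled along the normal
    return diffuse, specular
```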


As for the difference between old and new - it's the difference between an empirical model and a theoretical model.
In an empirical model, you use the knowledge gathered by your senses to intuit how something should look, and distill that down into a formula which seems to look correct.
In a theoretical model, you use the laws of physics to predict how light will behave, and distill that down into a formula which will always be correct within your assumptions -- hopefully your assumptions match the real world well enough for it to look right.
Lastly, in an experimental model, you use a gonioreflectometer to collect a massive amount of data on how light actually does reflect off a particular material, and use that to build a gigantic look-up table, etc.

Experimental models are often used to validate theoretical models.

Old-school reflection mapping is empirical -- it's an obvious approach from intuition, and it mostly looks OK.
Modern IBL is theoretical -- there's a massive amount of physics and math at its foundations, which allows for a lot of flexibility in how it's used. Instead of stemming from our senses, it stems from knowledge of the microscopic world.

You can store pre-convolved lighting info for specular in mips 0 through MipCount-2, and then store diffuse lighting in mip MipCount-1.

Interesting, I've never heard of this. Can you clarify what you mean by pre-convolved specular? I was under the impression that no algorithms needed to be run for radiance; you just render the map and sample it at varying mip levels for roughness.

Regarding storing diffuse in a low mip level: why would you use such low-resolution diffuse lighting? At that point you'd be down to 2x2, and that's not a lot of data for your diffuse ambient term. Is this in the context of probes or something? I'm just thinking that for something like a skylight, 2x2 diffuse ambient isn't far off from just using a flat ambient term.

Can you clarify what you mean by pre-convolved specular? I was under the impression that no algorithms needed to be run for radiance; you just render the map and sample it at varying mip levels for roughness.

It's exactly what you mentioned above -- the mips have to be carefully generated with regard to the BRDF.
The state of the art in games at the moment seems to be Epic's approach of pre-convolving the radiance with only the 'D' term of the specular BRDF, and then correcting the sampled values with a look-up table. It's impossible to (efficiently) pre-convolve the radiance with the full specular BRDF because there are too many variables -- not just roughness, but also viewing angle -- so Epic's approach is a very nice approximation.
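At runtime that works out to something like the following sketch (after Karis's 2013 course notes; prefiltered_lookup and brdf_lut are hypothetical stand-ins for the pre-convolved cubemap fetch and the 2D integration LUT):

```python
import numpy as np

def split_sum_specular(prefiltered_lookup, brdf_lut, n, v, f0, roughness):
    # Split-sum: (pre-convolved radiance) * (pre-integrated BRDF terms).
    r = 2.0 * np.dot(n, v) * n - v              # mirror reflection of the view
    n_dot_v = max(np.dot(n, v), 0.0)
    prefiltered = prefiltered_lookup(r, roughness)
    scale, bias = brdf_lut(n_dot_v, roughness)  # LUT indexed by angle+roughness
    return prefiltered * (f0 * scale + bias)    # corrects for Fresnel/geometry
```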

Regarding storing diffuse in a low mip level: why would you use such low-resolution diffuse lighting? At that point you'd be down to 2x2, and that's not a lot of data for your diffuse ambient term.

The Lambert diffuse BRDF doesn't change with roughness, so there's no need for multiple mip levels in that cubemap. It also varies very slowly, because each texel gathers light from a 180-degree cone, so it copes fairly well with low resolutions.
Ten years ago, the state of the art in realtime was a 1-pixel cubemap (Valve's "ambient cube" -- 6 RGB values). You can of course use higher resolutions, but you don't gain a lot after a point. We did just ship a game that uses a 64px cubemap for pre-convolved diffuse, but that was pretty wasteful...
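Evaluating the ambient cube is just a weighted blend of its 6 values. A minimal sketch, assuming the cube is stored as a (6, 3) RGB array ordered +X, -X, +Y, -Y, +Z, -Z:

```python
import numpy as np

def eval_ambient_cube(cube, normal):
    # Blend weights are the squared components of the (unit) normal,
    # which conveniently sum to 1; the sign picks the +/- face per axis.
    n2 = normal * normal
    result = np.zeros(3)
    for axis in range(3):
        face = 2 * axis + (0 if normal[axis] >= 0.0 else 1)
        result += n2[axis] * cube[face]
    return result
```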

Lots of games also use spherical harmonics instead of cubemaps -- second-order spherical harmonics use 9 RGB values and give very good results for diffuse lighting. Some games also use first-order spherical harmonics, which only require 4 RGB values.
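For completeness, here's a sketch of evaluating second-order SH diffuse lighting, using the constants from Ramamoorthi & Hanrahan's irradiance environment maps paper; the (9, 3) RGB coefficient layout is my assumption:

```python
import numpy as np

def eval_sh_irradiance(sh, n):
    # sh: assumed (9, 3) array of SH coefficients, in the usual band order
    # L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22. n is a unit normal.
    x, y, z = n
    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
    return (c4 * sh[0]
            + 2.0 * c2 * (sh[3] * x + sh[1] * y + sh[2] * z)
            + c3 * sh[6] * z * z - c5 * sh[6]
            + 2.0 * c1 * (sh[4] * x * y + sh[5] * y * z + sh[7] * x * z)
            + c1 * sh[8] * (x * x - y * y))
```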

Can you clarify what you mean by pre-convolved specular? I was under the impression that no algorithms needed to be run for radiance; you just render the map and sample it at varying mip levels for roughness.

It's exactly what you mentioned above -- the mips have to be carefully generated with regard to the BRDF.
The state of the art in games at the moment seems to be Epic's approach of pre-convolving the radiance with only the 'D' term of the specular BRDF, and then correcting the sampled values with a look-up table. It's impossible to (efficiently) pre-convolve the radiance with the full specular BRDF because there are too many variables -- not just roughness, but also viewing angle -- so Epic's approach is a very nice approximation.

Aww, I actually wasn't aware that the mip levels of the radiance map were generated with a BRDF, though it makes perfect sense. I thought only the irradiance map was done that way. Good to know, thanks for clarifying.

