Global illumination techniques

Hmm, you're right. It looks like they're using cascaded shadow maps for both the static and dynamic geometry, which is interesting. I assume they bake only the indirect lighting and then just add in the direct lighting on the fly. If nothing else, it's probably easier to implement than storing the contribution of direct light onto static geometry.

Guys, I understand the part with the shadows. It's not that interesting if they are using static shadow maps for the static level geometry. I don't think they just bake the indirect lighting and that's it. The actors and other objects moving through the level receive indirect lighting as well. I have a feeling they have some sort of lightmap on the static levels, and also have some "fill lights" placed here and there to simulate bounced light and to illuminate the dynamic objects that move around.

My guess would be that they are using baked irradiance volumes for the indirect part. I haven't seen the game in action yet, though.

The Last of Us has a quite good-looking dynamic ambient occlusion technique. It's just a bit too dark, but it doesn't have any screen-space limitations. I'm thinking it could be Ambient Occlusion Fields http://www.gdcvault.com/play/1015320/Ambient-Occlusion-Fields-and-Decals but that paper says the source objects have to be rigid, so maybe they have something completely different.

It's fairly common to bake ambient lighting into probes located throughout the level, and then have dynamic objects sample from those probes as they move through the level.

You're right that I got sidetracked.

I think what you're suggesting ("fill lights") probably most closely resembles "virtual point lights" which is something that is sometimes used in radiosity/deferred lighting systems. I've toyed with it a bit and found that it is probably best suited to non-realtime/"interactive" rendering, as it takes a fairly large number of VPLs as well as some kind of shadowing/occlusion to make it work well. Like I said, I've only played with this a bit, so there might well be some interesting optimizations/approximations that I'm not aware of.

That said, MJP suggested baking light probes, which is a fairly similar idea. I'm not an expert on the subject, but I'll fill in the details to the best of my ability. I think the three most common ways of achieving this are probes that store just color information (with no directional information), spherical harmonic probes, or irradiance maps.

They all typically involve interpolating between the probes based on the position of the dynamic actor that is being rendered.

In the first case (color only), we have stored the amount/color of light that has reached a given probe, so that the probes make something that resembles a point cloud of colors, or a (very coarse) 3D texture. When we want to light a dynamic object, we just interpolate between the colors (based on position) and use that color as the indirect term.
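
As a minimal sketch of that first case, here's what the lookup could look like in HLSL, assuming the probe colors have been baked into a volume texture covering the level's bounds so the hardware does the position-based interpolation for us (the texture and constant names are made up):

// Probe colors baked into a 3D texture that covers the level's bounding box;
// trilinear filtering interpolates between neighbouring probes automatically.
sampler3D ProbeGrid;
float3 GridMin;      // world-space minimum corner of the probe grid
float3 GridInvSize;  // 1.0 / (world-space size of the probe grid)

float3 SampleAmbientColor(float3 worldPos)
{
    // Map the world position into [0,1] texture coordinates and fetch the color
    float3 uvw = (worldPos - GridMin) * GridInvSize;
    return tex3D(ProbeGrid, uvw).rgb;
}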

In the second and third cases (irradiance or spherical harmonic), the probes also define a way of looking up the light color based on the normal of the dynamic object we are rendering; aside from that, the idea is the same: we interpolate between the probes based on the dynamic object's position, and then do the look-up, only with the normal as input.
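
In the irradiance map case, for example, the render-time lookup can be as simple as a single cube map fetch with the probe's (interpolated) map bound, something like this (names are made up):

// Irradiance environment map for the probe blended to the object's position:
// each texel stores the diffuse irradiance for a normal pointing in that direction.
samplerCUBE IrradianceMap;

float3 SampleIrradiance(float3 worldNormal)
{
    return texCUBE(IrradianceMap, worldNormal).rgb;
}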

These links might help you understand how to compute the probes for those cases. They're not explicitly geared toward pre-computing multiple probes (but rather computing irradiance for a single point efficiently) but they might help to point you in the right direction: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter10.html and http://codeflow.org/entries/2011/apr/18/advanced-webgl-part-3-irradiance-environment-map/

I invite anyone to correct my imperfect understanding of the subject.

-~-The Cow of Darkness-~-

Yeah that's the gist of it: you store ambient diffuse lighting encoded in some basis at sample locations, and interpolate between the samples based on the position of the object sampling the probes as well as the structure of the probes themselves (grid, loose points, etc.). Some common bases used for encoding diffuse lighting:

  • Single value - store a single diffuse color, use it for all normal directions. Makes objects look very flat in ambient lighting, since the ambient diffuse doesn't change with normal direction. There's no way to actually compute diffuse lighting without a surface normal, so typically it will be the average of the computed diffuse in multiple directions.
  • Hemispherical - basically you compute diffuse lighting for a normal pointed straight up, and one for a normal pointed straight down. Then you interpolate between the two values using the Y component of the surface normal used for rendering.
  • Ambient cube - this is what Valve used in HL2. Similar to hemispherical, except diffuse lighting is computed and stored in 6 directions (usually aligned to world space axes). The contribution from each direction is determined by taking the dot product of the surface normal with that axis of the cube (a minimal evaluation sketch follows this list).
  • Spherical harmonics - very commonly used in modern games. Basically you can store a low-frequency version of any spherical signal by projecting onto the SH basis functions up to a certain order, with the order determining how much detail will be retained as well as the number of coefficients that you need to store. SH has all kinds of useful properties, for instance they're essentially a frequency-domain representation of your signal which means you can perform convolutions with a simple multiplication. Typically this is used to convolve the lighting environment with a cosine lobe, which essentially gives you Lambertian diffuse. However you can also convolve with different kernels, which allows you to use SH with non-Lambertian BRDF's as well. You can even do specular BRDF's, however the low-frequency nature of SH typically limits you to very high roughnesses (low specular power, for Phong/Blinn-Phong BRDF's).
  • Spherical Radial Basis Functions - with these you basically approximate the full lighting environment surrounding a probe point with a set number of lobes (usually Gaussian) oriented at arbitrary directions about a sphere. These can be cool because they can let you potentially capture high-frequency lighting. However they're also difficult because you have to use a non-linear solver to "fit" a set of lobes to a lighting environment. You can also have issues with interpolation, since each probe can potentially have arbitrary lobe directions.
  • Cube Maps - this isn't common, but it's possible to integrate the irradiance for a set of cubemap texels where each texel represents a surface normal of a certain direction. This makes your shader code for evaluating lighting very simple: you just lookup into the cubemap based on the surface normal. However it's generally overkill, since something like SH or Ambient Cube can store diffuse lighting with relatively little error while having a very compact representation. Plus you don't have to mess around with binding an array of cube map textures, or sampling from them.
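
Here's the ambient cube sketch mentioned above, roughly following the formulation Valve published for Source, which uses the squared normal components as the blend weights (the names here are otherwise made up):

// 'ambientCube' holds the diffuse color computed for the +X, -X, +Y, -Y, +Z, -Z
// directions, already interpolated to the object's position.
float3 EvaluateAmbientCube(in float3 ambientCube[6], in float3 n)
{
    float3 nSqr = n * n;            // squared components, which sum to 1
    int3 isNegative = (n < 0.0f);   // selects the -X/-Y/-Z entries
    return nSqr.x * ambientCube[isNegative.x]     +
           nSqr.y * ambientCube[isNegative.y + 2] +
           nSqr.z * ambientCube[isNegative.z + 4];
}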

For all of these (with the exception of SRBF's) you can generate the probes by either ray-tracing and directly projecting onto the basis, or by rasterizing to a cubemap first and then projecting. This can potentially be very quick; in fact, you could do it in real time for a limited number of probe locations. SRBF's are trickier because of the non-linear solve, which is typically an iterative process.
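
To make the "rasterize to a cubemap, then project" path concrete, here's a minimal sketch of the per-texel projection onto the same 3rd-order SH basis used below. You'd call this for every texel of the probe's cubemap, with 'dir' being the direction through the texel center and 'weight' its solid angle (the function name is made up):

// Accumulates one radiance sample into 9 SH coefficients. Summing this over all
// six cubemap faces, with proper solid-angle weights, gives the probe's 'lightingSH'.
void AccumulateSH9(inout float3 sh[9], in float3 radiance, in float3 dir, in float weight)
{
    // Band 0
    sh[0] += radiance * 0.282095f * weight;

    // Band 1
    sh[1] += radiance * 0.488603f * dir.y * weight;
    sh[2] += radiance * 0.488603f * dir.z * weight;
    sh[3] += radiance * 0.488603f * dir.x * weight;

    // Band 2
    sh[4] += radiance * 1.092548f * dir.x * dir.y * weight;
    sh[5] += radiance * 1.092548f * dir.y * dir.z * weight;
    sh[6] += radiance * 0.315392f * (3.0f * dir.z * dir.z - 1.0f) * weight;
    sh[7] += radiance * 1.092548f * dir.x * dir.z * weight;
    sh[8] += radiance * 0.546274f * (dir.x * dir.x - dir.y * dir.y) * weight;
}

Note that the cosine-lobe convolution (the A0/A1/A2 factors) is applied at evaluation time in the code below, so it isn't baked into the stored coefficients here.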

EDIT: I looked at the links posted, and there are two things I'd like to point out. In that GPU Gems article they evaluate the diffuse from the SH lighting environment by pre-computing the set of "lookup" SH coefficients into a cube map lookup texture, but this is totally unnecessary since you can just directly compute these coefficients in the shader. Something like this should work:


// 'lightingSH' is the lighting environment projected onto SH (3rd order in this case),
// and 'n' is the surface normal
float3 ProjectOntoSH9(in float3 lightingSH[9], in float3 n)
{
    float3 result = 0.0f;
    
    // Cosine kernel
    const float A0 = 1.0f;
    const float A1 = 2.0f / 3.0f;
    const float A2 = 0.25f;

    // Band 0
    result += lightingSH[0] * 0.282095f * A0;

    // Band 1
    result += lightingSH[1] * 0.488603f * n.y * A1;
    result += lightingSH[2] * 0.488603f * n.z * A1;
    result += lightingSH[3] * 0.488603f * n.x * A1;

    // Band 2
    result += lightingSH[4] * 1.092548f * n.x * n.y * A2;
    result += lightingSH[5] * 1.092548f * n.y * n.z * A2;
    result += lightingSH[6] * 0.315392f * (3.0f * n.z * n.z - 1.0f) * A2;
    result += lightingSH[7] * 1.092548f * n.x * n.z * A2;
    result += lightingSH[8] * 0.546274f * (n.x * n.x - n.y * n.y) * A2;

    return result;
}

This brings me to my second point, which is that the WebGL article mentions the lookup texture as a disadvantage of SH which really isn't valid since you don't need it at all. This makes SH a much more attractive option for storing irradiance, especially if your goal is runtime generation of irradiance maps since with SH you don't need an expensive convolution step. Instead your projection onto SH is basically a repeated downsampling process, which can be done very quickly. This is especially true if you use compute shaders, since you can use a parallel reduction to perform integration using shared memory with fewer steps.
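
As a rough illustration of that last point, here's a minimal compute shader sketch of one reduction pass (SM5-style HLSL; the buffer layout and names are my own assumptions). Each thread group sums the per-sample SH contributions it owns in shared memory and writes one partial result; you'd run further passes (or a small final loop) until a single set of coefficients remains:

// One parallel-reduction pass: 64 threads sum 64 per-sample SH9 contributions.
StructuredBuffer<float3> InputSH;      // numSamples * 9 coefficients, sample-major
RWStructuredBuffer<float3> OutputSH;   // numGroups * 9 partial sums

groupshared float3 SharedSH[64][9];

[numthreads(64, 1, 1)]
void ReduceSH9(uint3 groupThreadID : SV_GroupThreadID,
               uint3 dispatchThreadID : SV_DispatchThreadID,
               uint3 groupID : SV_GroupID)
{
    uint local = groupThreadID.x;
    uint i;

    // Load this thread's sample into shared memory
    for (i = 0; i < 9; ++i)
        SharedSH[local][i] = InputSH[dispatchThreadID.x * 9 + i];

    GroupMemoryBarrierWithGroupSync();

    // Halve the number of active threads each step
    for (uint s = 32; s > 0; s >>= 1)
    {
        if (local < s)
        {
            for (i = 0; i < 9; ++i)
                SharedSH[local][i] += SharedSH[local + s][i];
        }
        GroupMemoryBarrierWithGroupSync();
    }

    // Thread 0 writes this group's partial sum
    if (local == 0)
    {
        for (i = 0; i < 9; ++i)
            OutputSH[groupID.x * 9 + i] = SharedSH[0][i];
    }
}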

For an introduction to using SH for this purpose, I would definitely recommend reading Ravi Ramamoorthi's 2001 paper on the subject. Robin Green's paper is also a good place to start.

This seemed like a good thread to show off the latest progress on my voxel cone tracing engine before I create another video:

[screenshot of the voxel cone tracing engine]

100% fully dynamic; it runs at 30-40 fps at 1024x768 on a GTX 485M, depending on how many of the 3 shadow-casting emissive objects I turn on.

Guys, thanks for the explanations.

I have a bit more questions, though...

Let's say I have built the world grid of SH probes. I can understand the part about rendering a cube map at every probe position and converting each cube map to SH coefficients for storage and bandwidth reduction.

What I don't really get is how I am supposed to render the level geometry with that grid of SH probes. The papers I read only explain how to render movable objects and characters.

I can understand that - they find the closest probe to the object and shade it with that probe. But they seem to do it per object, because the movable objects and characters are relatively small compared to the grid.

But I want to render level geometry and be able to beautifully illuminate indoor scenes. In my test room, I have many light probes - the room is big and I don't think I should search for the single closest light probe to the room - after all, they are all contained in the room.

I need to do it per vertex, I guess. Am I supposed to find the closest probe to every vertex and calculate lighting for that vertex based on that probe's SH coefficients? But I render a whole room with one draw call - how do I pass so many ambient probes to the vertex shader, and in the shader find the closest probe for every vertex?

And what if I want to do it per pixel? There is definitely something I'm missing here...

Thanks in advance.

If you're using deferred shading, you can render a sphere volume at the location of each probe, pass the probe data in, and shade the volume using the G-buffer. You can even merge the closer ones together and pass them in groups to the shader to save on fill rate if that becomes a problem.
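
A minimal sketch of the pixel shader for one such probe volume, assuming a G-buffer with world-space normals and a depth you can reconstruct position from, plus an SH evaluation like the ProjectOntoSH9 function posted above (the helper and constant names here are made up):

// Rasterize a sphere at the probe's position with additive blending; each covered
// pixel gets this probe's contribution, faded out toward the probe's radius.
sampler2D GBufferNormal;   // world-space normal, stored as 0..1
sampler2D GBufferDepth;    // depth used to reconstruct the world position
float3    ProbeSH[9];      // this probe's SH coefficients
float3    ProbePosition;
float     ProbeRadius;

float4 ShadeProbeVolume(float2 screenUV : TEXCOORD0) : COLOR0
{
    // screenUV is this pixel's screen-space UV (e.g. derived from the projected position)
    float3 n = normalize(tex2D(GBufferNormal, screenUV).xyz * 2.0f - 1.0f);
    float3 worldPos = ReconstructWorldPos(screenUV, tex2D(GBufferDepth, screenUV).r); // assumed helper

    // Weight falls to zero at the probe radius so neighbouring probes blend smoothly
    float w = saturate(1.0f - distance(worldPos, ProbePosition) / ProbeRadius);

    float3 ambient = ProjectOntoSH9(ProbeSH, n);
    return float4(ambient * w, w);
}

Since overlapping probes just add up, you'd typically accumulate the weights in the alpha channel as above and normalize by the summed weight in a later pass, or tune the falloffs so the weights roughly sum to one.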

Ok, thanks.

If I have to choose between the method you propose and a volume texture covering the entire level, what would you people suggest? A volume texture could be a huge memory hog. For a big level it would need to be pretty high resolution in order to prevent noticeable errors in the ambient illumination, especially ambient light (or shadow) bleeding through thin level geometry. I'm also not sure how to build that volume texture, because Direct3D 9.0 doesn't support rendering to a volume texture. I could probably unwrap it into a single 2D texture, but then I'd need to interpolate manually in the shaders between the different "depth" slices, I guess.
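
Something like this is what I imagine for the unwrapped version, assuming the Z slices are laid out side by side in one wide 2D texture (the names and layout are just my guess):

// Manual trilinear sampling of a volume stored as horizontal 2D slices (D3D9-friendly).
sampler2D ProbeAtlas;   // NumSlices slices laid out left to right
float     NumSlices;

float3 SampleProbeVolume(float3 uvw)   // uvw in [0,1]^3 within the probe grid
{
    // Pick the two nearest Z slices and the blend factor between them
    float slice  = uvw.z * NumSlices - 0.5f;
    float slice0 = clamp(floor(slice), 0.0f, NumSlices - 1.0f);
    float slice1 = min(slice0 + 1.0f, NumSlices - 1.0f);
    float t      = saturate(slice - slice0);

    // Horizontal offset of each slice within the atlas
    float2 uv0 = float2((uvw.x + slice0) / NumSlices, uvw.y);
    float2 uv1 = float2((uvw.x + slice1) / NumSlices, uvw.y);

    // Bilinear filtering handles X/Y; we lerp across Z ourselves
    // (in practice you'd clamp uvw.x slightly to avoid bleeding between adjacent slices)
    return lerp(tex2D(ProbeAtlas, uv0).rgb, tex2D(ProbeAtlas, uv1).rgb, t);
}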
