
# Global illumination techniques


61 replies to this topic

### #21 solenoidz - Members - Reputation: 531


Posted 10 July 2013 - 09:05 AM

> Hmm, you're right. It looks like they're using cascaded shadow maps for both the static and dynamic geometry, which is interesting. I assume they bake only the indirect lighting and then just add in the direct lighting on the fly. If nothing else, it's probably easier to implement than storing the contribution of direct light onto static geometry.

Guys, I understand the part with shadows. It's not interesting if they are using static shadow maps for static level geometry. I don't think they just bake the indirect lighting and that's it. The actors and other objects moving through the level receive indirect lighting as well. I have a feeling they have some sort of lightmap on static levels and also some "fill lights" placed here and there to simulate bounced light and to illuminate the dynamic objects that move around.

Edited by solenoidz, 10 July 2013 - 09:20 AM.

### #22 Tasty Texel - Members - Reputation: 1366


Posted 10 July 2013 - 12:21 PM

My guess would be that they are using baked irradiance volumes for the indirect part. I haven't seen the game in action yet, though.

Edited by Bummel, 10 July 2013 - 12:22 PM.

### #23 kalle_h - Members - Reputation: 1570


Posted 10 July 2013 - 01:41 PM

The Last of Us has a quite good-looking dynamic ambient occlusion technique. It's just a bit too dark, but it doesn't have any screen-space limitations. I'm thinking it could be Ambient Occlusion Fields http://www.gdcvault.com/play/1015320/Ambient-Occlusion-Fields-and-Decals but that paper says the sources have to be rigid, so maybe they have something completely different.

### #24 MJP - Moderators - Reputation: 11790


Posted 10 July 2013 - 03:06 PM

> Hmm, you're right. It looks like they're using cascaded shadow maps for both the static and dynamic geometry, which is interesting. I assume they bake only the indirect lighting and then just add in the direct lighting on the fly. If nothing else, it's probably easier to implement than storing the contribution of direct light onto static geometry.

> Guys, I understand the part with shadows. It's not interesting if they are using static shadow maps for static level geometry. I don't think they just bake the indirect lighting and that's it. The actors and other objects moving through the level receive indirect lighting as well. I have a feeling they have some sort of lightmap on static levels and also have some "fill lights" placed here and there simulate bounced light and to illuminate dynamic objects, that move around.

It's fairly common to bake ambient lighting into probes located throughout the level, and then have dynamic objects sample from those probes as they move through the level.

### #25 cowsarenotevil - Crossbones+ - Reputation: 2108


Posted 13 July 2013 - 09:21 PM

> Hmm, you're right. It looks like they're using cascaded shadow maps for both the static and dynamic geometry, which is interesting. I assume they bake only the indirect lighting and then just add in the direct lighting on the fly. If nothing else, it's probably easier to implement than storing the contribution of direct light onto static geometry.

> Guys, I understand the part with shadows. It's not interesting if they are using static shadow maps for static level geometry. I don't think they just bake the indirect lighting and that's it. The actors and other objects moving through the level receive indirect lighting as well. I have a feeling they have some sort of lightmap on static levels and also have some "fill lights" placed here and there simulate bounced light and to illuminate dynamic objects, that move around.

You're right that I got sidetracked.

I think what you're suggesting ("fill lights") probably most closely resembles "virtual point lights" which is something that is sometimes used in radiosity/deferred lighting systems. I've toyed with it a bit and found that it is probably best suited to non-realtime/"interactive" rendering, as it takes a fairly large number of VPLs as well as some kind of shadowing/occlusion to make it work well. Like I said, I've only played with this a bit, so there might well be some interesting optimizations/approximations that I'm not aware of.

That said, MJP suggested baking light probes, which is a fairly similar idea. I'm not an expert on the subject, but I'll fill in the details to the best of my ability. I think that the three most common ways of achieving this are probes that store just color information (with no directional information), spherical harmonics, or irradiance maps.

They all typically involve interpolating between the probes based on the position of the dynamic actor that is being rendered.

In the first case (color only), we have stored the amount/color of light that has reached a given probe, so that the probes make something that resembles a point cloud of colors, or a (very coarse) 3D texture. When we want to light a dynamic object, we just interpolate between the colors (based on position) and use that color as the indirect term.

In the second and third cases (irradiance or spherical harmonic), the probes also define a way of looking up the light color based on the normal of the dynamic object we are rendering; aside from that, the idea is the same: we interpolate between the probes based on the dynamic object's position, and then do the look-up, only with the normal as input.
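
To make the color-only and position-interpolation ideas concrete, here is a minimal Python sketch (the function name and grid layout are illustrative, not from any particular engine): probes live on a coarse axis-aligned grid of baked RGB colors, and a dynamic object samples its ambient term by trilinearly interpolating the eight surrounding probes.

```python
def sample_probe_grid(grid, pos):
    """Trilinearly interpolate grid[x][y][z] -> (r, g, b) at a continuous
    position given in grid units. Assumes at least 2 probes per axis."""
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])

    def clamp(v, n):
        # Keep the 2x2x2 neighborhood of probes inside the grid.
        return min(max(v, 0.0), n - 1.0 - 1e-6)

    def lerp(a, b, t):
        return tuple(p + (q - p) * t for p, q in zip(a, b))

    x, y, z = clamp(pos[0], nx), clamp(pos[1], ny), clamp(pos[2], nz)
    ix, iy, iz = int(x), int(y), int(z)
    fx, fy, fz = x - ix, y - iy, z - iz

    # Interpolate along x on the four edges, then along y, then along z.
    c00 = lerp(grid[ix][iy][iz], grid[ix + 1][iy][iz], fx)
    c10 = lerp(grid[ix][iy + 1][iz], grid[ix + 1][iy + 1][iz], fx)
    c01 = lerp(grid[ix][iy][iz + 1], grid[ix + 1][iy][iz + 1], fx)
    c11 = lerp(grid[ix][iy + 1][iz + 1], grid[ix + 1][iy + 1][iz + 1], fx)
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)
```

The SH and irradiance-map cases replace the stored RGB tuple with SH coefficients or a small map, and add the normal-based lookup after the same positional interpolation.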

These links might help you understand how to compute the probes for those cases. They're not explicitly geared toward pre-computing multiple probes (but rather computing irradiance for a single point efficiently) but they might help to point you in the right direction: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter10.html and http://codeflow.org/entries/2011/apr/18/advanced-webgl-part-3-irradiance-environment-map/

I invite anyone to correct my imperfect understanding of the subject.

-~-The Cow of Darkness-~-

### #26 MJP - Moderators - Reputation: 11790


Posted 14 July 2013 - 08:57 PM

Yeah, that's the gist of it: you store ambient diffuse lighting encoded in some basis at sample locations, and interpolate between the samples based on the position of the object sampling the probes as well as the structure of the probes themselves (grid, loose points, etc.). Some common bases used for encoding diffuse lighting:

• Single value - store a single diffuse color, use it for all normal directions. Makes objects look very flat in ambient lighting, since the ambient diffuse doesn't change with normal direction. There's no way to actually compute diffuse lighting without a surface normal, so typically it will be the average of the computed diffuse in multiple directions.
• Hemispherical - basically you compute diffuse lighting for a normal pointed straight up, and one for a normal pointed straight down. Then you interpolate between the two value using the Y component of the surface normal used for rendering.
• Ambient cube - this is what Valve used in HL2. Similar to hemispherical, except diffuse lighting is computed and stored in 6 directions (usually aligned to world space axes). The contribution for each light is determined by taking the dot product of the surface normal with the direction for each axis of the cube.
• Spherical harmonics - very commonly used in modern games. Basically you can store a low-frequency version of any spherical signal by projecting onto the SH basis functions up to a certain order, with the order determining how much detail will be retained as well as the number of coefficients that you need to store. SH has all kinds of useful properties, for instance they're essentially a frequency-domain representation of your signal which means you can perform convolutions with a simple multiplication. Typically this is used to convolve the lighting environment with a cosine lobe, which essentially gives you Lambertian diffuse. However you can also convolve with different kernels, which allows you to use SH with non-Lambertian BRDF's as well. You can even do specular BRDF's, however the low-frequency nature of SH typically limits you to very high roughnesses (low specular power, for Phong/Blinn-Phong BRDF's).
• Spherical Radial Basis Functions - with these you basically approximate the full lighting environment surrounding a probe point with a set number of lobes (usually Gaussian) oriented at arbitrary directions about a sphere. These can be cool because they can let you potentially capture high-frequency lighting. However they're also difficult because you have to use a non-linear solver to "fit" a set of lobes to a lighting environment. You can also have issues with interpolation, since each probe can potentially have arbitrary lobe directions.
• Cube Maps - this isn't common, but it's possible to integrate the irradiance for a set of cubemap texels where each texel represents a surface normal of a certain direction. This makes your shader code for evaluating lighting very simple: you just lookup into the cubemap based on the surface normal. However it's generally overkill, since something like SH or Ambient Cube can store diffuse lighting with relatively little error while having a very compact representation. Plus you don't have to mess around with binding an array of cube map textures, or sampling from them.
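
As a concrete illustration of the ambient cube entry above, here is a small Python sketch of the HL2-style evaluation (the face ordering and names are my own; Valve's actual code differs in detail):

```python
def eval_ambient_cube(cube, n):
    """cube: six RGB tuples ordered +X, -X, +Y, -Y, +Z, -Z.
    n: unit surface normal. The squared normal components weight the
    positive or negative face on each axis, and the weights sum to 1."""
    n2 = (n[0] * n[0], n[1] * n[1], n[2] * n[2])
    out = [0.0, 0.0, 0.0]
    for axis in range(3):
        # Pick the face the normal points toward on this axis.
        face = cube[2 * axis] if n[axis] >= 0.0 else cube[2 * axis + 1]
        for c in range(3):
            out[c] += n2[axis] * face[c]
    return tuple(out)
```

Because the squared components of a unit normal sum to 1, a cube with six identical faces returns exactly that color for any normal.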

For all of these (with the exception of SRBF's) you can generate the probes by either ray-tracing and directly projecting onto the basis, or by rasterizing to a cubemap first and then projecting. This can potentially be very quick, in fact you could do it in real time for a limited number of probe locations. SRBF's are trickier, because of the non-linear solve which is typically an iterative process.

EDIT: I looked at the links posted, and there are two things I'd like to point out. In that GPU Gems article they evaluate the diffuse from the SH lighting environment by pre-computing the set of "lookup" SH coefficients into a cube map lookup texture, but this is totally unnecessary since you can just directly compute these coefficients in the shader. Something like this should work:

```hlsl
// 'lightingSH' is the lighting environment projected onto SH (3rd order in this case),
// and 'n' is the surface normal
float3 ProjectOntoSH9(in float3 lightingSH[9], in float3 n)
{
    float3 result = 0.0f;

    // Cosine kernel
    const float A0 = 1.0f;
    const float A1 = 2.0f / 3.0f;
    const float A2 = 0.25f;

    // Band 0
    result += lightingSH[0] * 0.282095f * A0;

    // Band 1
    result += lightingSH[1] * 0.488603f * n.y * A1;
    result += lightingSH[2] * 0.488603f * n.z * A1;
    result += lightingSH[3] * 0.488603f * n.x * A1;

    // Band 2
    result += lightingSH[4] * 1.092548f * n.x * n.y * A2;
    result += lightingSH[5] * 1.092548f * n.y * n.z * A2;
    result += lightingSH[6] * 0.315392f * (3.0f * n.z * n.z - 1.0f) * A2;
    result += lightingSH[7] * 1.092548f * n.x * n.z * A2;
    result += lightingSH[8] * 0.546274f * (n.x * n.x - n.y * n.y) * A2;

    return result;
}
```


This brings me to my second point, which is that the WebGL article mentions the lookup texture as a disadvantage of SH which really isn't valid since you don't need it at all. This makes SH a much more attractive option for storing irradiance, especially if your goal is runtime generation of irradiance maps since with SH you don't need an expensive convolution step. Instead your projection onto SH is basically a repeated downsampling process, which can be done very quickly. This is especially true if you use compute shaders, since you can use a parallel reduction to perform integration using shared memory with fewer steps.
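
To make the "project, then evaluate" path concrete, here is a hedged CPU-side Python sketch: it brute-force integrates a spherical radiance function against the nine SH basis functions (the same constants as the HLSL snippet above) and evaluates diffuse with the cosine kernel. The Fibonacci-sphere sampling and sample count are arbitrary choices for illustration, not anything from the linked articles; a real implementation would integrate over cubemap texels or use the compute-shader reduction described above.

```python
import math

# Real 3rd-order SH basis functions, matching the constants in the HLSL above.
SH_BASIS = [
    lambda n: 0.282095,
    lambda n: 0.488603 * n[1],
    lambda n: 0.488603 * n[2],
    lambda n: 0.488603 * n[0],
    lambda n: 1.092548 * n[0] * n[1],
    lambda n: 1.092548 * n[1] * n[2],
    lambda n: 0.315392 * (3.0 * n[2] * n[2] - 1.0),
    lambda n: 1.092548 * n[0] * n[2],
    lambda n: 0.546274 * (n[0] * n[0] - n[1] * n[1]),
]
COS_KERNEL = [1.0] + [2.0 / 3.0] * 3 + [0.25] * 5  # A0, A1, A2 per band

def fibonacci_sphere(count):
    """Roughly uniform directions on the unit sphere."""
    ga = math.pi * (3.0 - math.sqrt(5.0))
    for i in range(count):
        z = 1.0 - 2.0 * (i + 0.5) / count
        r = math.sqrt(max(0.0, 1.0 - z * z))
        yield (math.cos(ga * i) * r, math.sin(ga * i) * r, z)

def project_radiance(radiance, samples=4096):
    """Project a scalar radiance function over the sphere onto SH9."""
    coeffs = [0.0] * 9
    for d in fibonacci_sphere(samples):
        value = radiance(d)
        for i, basis in enumerate(SH_BASIS):
            coeffs[i] += value * basis(d)
    weight = 4.0 * math.pi / samples  # equal solid angle per sample
    return [c * weight for c in coeffs]

def eval_diffuse(coeffs, n):
    """Cosine-convolved lookup: the Python analogue of ProjectOntoSH9."""
    return sum(c * b(n) * k for c, b, k in zip(coeffs, SH_BASIS, COS_KERNEL))
```

A useful sanity check: projecting a constant environment of radiance 1 and evaluating with this kernel returns 1 for every normal, and a directional environment is brighter for normals facing it.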

For an introduction to using SH for this purpose, I would definitely recommend reading Ravi Ramamoorthi's 2001 paper on the subject. Robin Green's paper is also a good place to start.

Edited by MJP, 14 July 2013 - 09:19 PM.

### #27 gboxentertainment - Members - Reputation: 770


Posted 07 September 2013 - 08:22 PM

This seemed like a good thread to show off the latest progress on my voxel cone tracing engine, so I created another video:

100% fully dynamic; runs at 30-40 fps @ 1024x768 on a GTX 485M, depending on how many of the 3 shadow-casting emissive objects I turn on.

### #28 solenoidz - Members - Reputation: 531


Posted 16 September 2013 - 05:15 AM

Guys, thanks for the explanations.

I have a bit more questions, though...

Let's say I have built the world grid of SH probes. I can understand the part about rendering a cube map at every probe position and converting each cube map to SH coefficients for storage and bandwidth reduction.

What I don't really get is how I'm supposed to render level geometry with that grid of SH probes. The papers I read only explain how to render movable objects and characters.

I can understand that - they find the closest probe to the object and shade it with that probe. But they seem to do it per object, because the movable objects and characters are relatively small compared to the grid.

But I want to render level geometry and be able to beautifully illuminate indoor scenes. In my test room I have many light probes - the room is big, and I don't think I should search for a single closest light probe for the whole room - after all, they are all contained in the room.

I need to make it per vertex, I guess. Am I supposed to find the closest probe to every vertex and calculate lighting for that vertex based on that probe's SH coefficients? But I render a whole room with one draw call - how do I pass so many ambient probes to the vertex shader and, in the shader, find the closest probe for every vertex?

And what if I want to make it per pixel? There is definitely something I'm missing here...

Edited by solenoidz, 16 September 2013 - 05:23 AM.

### #29 Styves - Members - Reputation: 1078


Posted 16 September 2013 - 02:07 PM

If you're using deferred shading, you can render a sphere volume in the location of your probes and pass the data in and shade the volume using the g-buffer. You can even merge the closer ones together and pass them in groups into the shader to save on fill rate if it's a problem.

### #30 solenoidz - Members - Reputation: 531


Posted 17 September 2013 - 10:59 AM

Ok, thanks.

If I'm forced to choose between the method you propose and a volume texture covering the entire level, what would you people suggest? A volume texture could be a huge memory hog; for a big level it would have to be pretty high resolution to prevent noticeable errors in the ambient illumination, especially ambient light (or shadow) bleeding through thin level geometry. I'm also not really sure how to build that volume texture, because Direct3D 9.0 doesn't support rendering to a volume texture. I could probably unwrap it into a single 2D texture, but then I'd need to interpolate manually in the shader between the different "depth" slices, I guess.

Edited by solenoidz, 17 September 2013 - 11:01 AM.

### #31 solenoidz - Members - Reputation: 531


Posted 29 September 2013 - 02:07 PM

> If you're using deferred shading, you can render a sphere volume in the location of your probes and pass the data in and shade the volume using the g-buffer. You can even merge the closer ones together and pass them in groups into the shader to save on fill rate if it's a problem.

Thank you, but I still have some uncertainties.

Yes, I'm using a deferred renderer, and so does Crytek, but their method is much more complex: after generating the SH data for every light probe in space, they finally end up with three volume textures and sample from those per pixel.

In my case, I'm not sure how big this sphere volume should be. In the case of a point light I make it as big as the light radius, but here the light probe doesn't store a distance to the surface, just the incoming radiance. Also, as far as I get it, I need to shade every pixel with its closest probe. If I simply draw sphere volumes as you suggested, I could shade pixels far behind the probe I'm currently rendering (in screen space) that should be shaded by a different probe behind the current one. Should I pass the positions and other data of all the probes that fall within the view frustum and, in the shader, for every pixel, get its world-space position, find the closest probe, and shade the pixel with that?

That seems a bit limiting, and slow as well...

Thanks.

Edited by solenoidz, 29 September 2013 - 02:08 PM.

### #32 Styves - Members - Reputation: 1078


Posted 29 September 2013 - 04:09 PM

I think you have LPV and environment probes confused. ;)

Environment probes follow the same principles as deferred lights (well.. they basically are deferred lights): you render a volume shape (quad, sphere, etc.) at the position of your "probe" (with culling for depth and such, to avoid shading unnecessary pixels). The size of this shape is based on the radius of your probe. The shader reads the g-buffer and accesses a cubemap based on the normal and reflection vectors (for diffuse and specular shading, respectively). These shapes are blended over the ambient lighting results (before lights). You need to sort them, though, to avoid popping from the alpha-blending, or you can come up with some neat blending approach that handles it (there are a few online; I think Guerilla had some ideas?).

The data for the probes is stored as cubemaps, like I mentioned. A 1x1x1 cubemap stores irradiance for every direction (you can use the lowest mip of your specular cubemap as an approximation) and an NxNxN cubemap stores the specular data (the resolution depends on the scene; scenes with low-glossiness objects can use a lower-res cubemap for memory savings). The specular probe's mip levels contain convolved versions of the top mip level (usually some Gaussian function that matches your lighting BRDF). You index into a mip level based on the glossiness of the pixel you're shading.
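
The gloss-to-mip indexing can be sketched in one line; the linear mapping below is an assumption for illustration (engines typically fit a curve so each mip's blur matches a roughness in their BRDF):

```python
def specular_mip(gloss, mip_count):
    """Map glossiness in [0, 1] to a mip of a pre-convolved specular cubemap:
    mip 0 is the sharpest (gloss = 1), the last mip the roughest (gloss = 0)."""
    g = min(max(gloss, 0.0), 1.0)  # clamp out-of-range g-buffer values
    return (1.0 - g) * (mip_count - 1)
```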

For LPV, Crytek is using a 64x64x64 volume (IIRC). They place it over the camera and shift it towards the look direction to get the maximum amount of GI around the camera (while still keeping GI from behind the camera); it's shifted by something like 20% of the distance. As you'd expect, leaking can be a problem, but there are ways around it (you really just need some occlusion information for the propagation portion of the technique). You can cascade it as well (mentioned in the paper) for better results. The final lighting is done only once over the entire screen using a full-screen quad.

Edited by Styves, 29 September 2013 - 04:09 PM.

### #33 jcabeleira - Members - Reputation: 694


Posted 30 September 2013 - 03:30 AM

> Environment probes follow the same principles as deferred lights (well.. they basically are deferred lights): you render a volume shape (quad, sphere, etc) at the position of your "probe" and render it (with culling for depth and such, to avoid shading unnecessary spaces). The size of this object is based on the radius of your probe. The shader reads the g-buffer and accesses a cubemap based on the normal and reflection vectors (for diffuse and specular shading, respectively). These objects are blended over the ambient lighting results (before lights). You need to sort them though to avoid popping from the alpha-blending, or you can come up with some form of neat blending approach that handles it (there are a few online, I think Guerilla had some ideas?).

Environment probes do not work like deferred lights. They are points in the scene where you calculate the incoming lighting in a pre-processing step.

At runtime, you apply them to an object by finding the nearest probes to that object and interpolating between them accordingly. If I'm not mistaken, this is usually done in a forward rendering pass because you need to find the nearest probes to the object and pass them to the shader.

> The data for the probes is stored as cubemaps like I mentioned. A 1x1x1 cubemap stores irradiance at every direction (you can use the lowest mip-map of your specular cubemap to approximate) and an NxNxN cubemap for specular data (depends on the scene, scenes with low-glosiness objects can use a lower res cubemap for memory savings)

A 1x1x1 cubemap will give you a very poor irradiance representation; you'll need more resolution than that. And yes, you can use the lower mipmaps of the specular cubemap to approximate diffuse, but once again the quality will be poor. The best way is to generate a separate cubemap for diffuse with a diffuse convolution, and a higher-resolution cubemap for specular generated with a specular convolution. The mipmaps of the specular map can then be sampled to approximate different specular roughnesses, as you said.

Edited by jcabeleira, 30 September 2013 - 03:39 AM.

### #34 Hodgman - Moderators - Reputation: 31964


Posted 30 September 2013 - 04:25 AM

> But I want to render level geometry and be able to beatufuly illuminate indoor scenes. In my test room, I have many light probes - the room is big and I don't think I should search for the closest light probe to the room - after all they are all contained in the room.

> Environment probes follow the same principles as deferred lights (well.. they basically are deferred lights): you render a volume shape (quad, sphere, etc) at the position of your "probe" and render it (with culling for depth and such, to avoid shading unnecessary spaces). The size of this object is based on the radius of your probe. The shader reads the g-buffer and accesses a cubemap based on the normal and reflection vectors (for diffuse and specular shading, respectively). These objects are blended over the ambient lighting results (before lights). You need to sort them though to avoid popping from the alpha-blending, or you can come up with some form of neat blending approach that handles it (there are a few online, I think Guerilla had some ideas?).

> Environment probes do not work like deferred lights. They are points in the scene where you calculate the incoming lighting in a pre-processing step.
> In runtime, you apply them to an object by finding the nearest probes to that object and interpolate between them accordingly. If I'm not mistaken this is usually done in a forward rendering pass because you need to find which are the nearest probes to the object and pass them to the shader.

There's an example here, called deferred irradiance volumes by the author, which uses typical deferred shading techniques to apply environment probes.

For each probe, he renders a sphere like you would for a typical point light in a deferred renderer. He outputs a colour into the RGB channels and 1.0 into the alpha channel, but both values are attenuated over distance.
After all probes have been drawn to the screen, the colour channel is divided by the alpha channel. This "normalizes" the result, so that a pixel lit by four of these probes is not four times as bright, and it also produces smooth transitions between probes.
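
The accumulate-then-normalize step can be sketched like this in Python (the probe layout and linear falloff are illustrative; the original demo may attenuate differently):

```python
import math

def splat_probes(probes, pixel_pos):
    """probes: iterable of (position, rgb_color, radius). Accumulate
    (color * w, w) for every probe reaching the pixel, then divide the
    color sum by the weight sum, mirroring the RGB / alpha division."""
    acc = [0.0, 0.0, 0.0]
    alpha = 0.0
    for pos, color, radius in probes:
        d = math.dist(pos, pixel_pos)
        w = max(0.0, 1.0 - d / radius)  # linear falloff, zero at the radius
        for c in range(3):
            acc[c] += color[c] * w
        alpha += w
    if alpha <= 0.0:
        return (0.0, 0.0, 0.0)  # pixel reached by no probe
    return tuple(c / alpha for c in acc)
```

A pixel equidistant from two probes gets the average of their colors, and a pixel covered by a single probe gets that probe's full color regardless of the weight magnitude, since the division renormalizes.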

After all the probes have been applied like this, he then performs regular deferred shading for the direct lights.

### #35 Styves - Members - Reputation: 1078


Posted 30 September 2013 - 07:16 AM

> Environment probes do not work like deferred lights. They are points in the scene where you calculate the incoming lighting in a pre-processing step.
> In runtime, you apply them to an object by finding the nearest probes to that object and interpolate between them accordingly. If I'm not mistaken this is usually done in a forward rendering pass because you need to find which are the nearest probes to the object and pass them to the shader.

Read that last line again, because you just defeated your own argument.

What I described is an alternative to picking the nearest cubemap in a forward pass, like you mentioned. The environment probes that I'm talking about are exactly like deferred lights except they don't do point-lighting in the pixel shader and instead do cubemap lighting. This is the exact same technique Crytek uses for environment lighting (in fact, much of the code is shared between them and regular lights). So, in short, they do work like deferred lights. Just not like deferred point lights.

> A 1x1x1 cubemap will give you a very poor irradiance representation, you'll you need more resolution than that. And yes, you can use the lower mipmaps of the specular cubemap to approximate diffuse but once again the quality will be poor. The best way is to generate a cubemap for diffuse with a diffuse convolution and a separate cubemap with higher resolution for specular generated with a specular convolution. The mipmaps of the specular map can then be sampled to approximate different specular roughness as you said.

A diffuse convolution into a separate texture is, of course, better. I never said it wasn't (but I did forget to mention it). But those who can't/don't want to/don't know how to do this can use the lowest mip-level of their specular texture as a rough approximation, or for testing. This works well in practice even if it's not accurate. It really comes down to what kind of approximations you are willing to accept.

### #36 jcabeleira - Members - Reputation: 694


Posted 30 September 2013 - 10:13 AM

> What I described is an alternative to picking the nearest cubemap in a forward pass, like you mentioned. The environment probes that I'm talking about are exactly like deferred lights except they don't do point-lighting in the pixel shader and instead do cubemap lighting. This is the exact same technique Crytek uses for environment lighting (in fact, much of the code is shared between them and regular lights). So, in short, they do work like deferred lights. Just not like deferred point lights.

You're right, I didn't know they used light probes like that. It's a neat trick, useful when you want to give GI to some key areas of the scene.

### #37 solenoidz - Members - Reputation: 531


Posted 30 September 2013 - 01:05 PM

Thank you, people. The floating-point texture that stores color plus the number of times a pixel has been lit, and selecting the closest pixels to each probe with this kind of rendering, were interesting things to learn.

### #38 agleed - Members - Reputation: 342


Posted 24 February 2014 - 07:15 PM

Hi, sorry for bumping this topic for attention.

I recently completed my Bachelor's thesis (in CS) where I wrote about (potential) real-time diffuse GI algorithms for fully dynamic geometry + light sources. Due to time constraints I was only able to look at baseline algorithms that could be used for future investigation. I implemented and compared (in some limited tests for performance and image quality) the original Reflective Shadow Maps algorithm, Splatting Indirect Illumination and Crytek's Cascaded Light Propagation Volumes.

I put some effort into writing the thesis so that someone could read it and - hopefully, maybe by also looking at source code for the project - implement the algorithms themselves without too much research into the original sources.

Would you guys be interested if I polished the source code up a little (practically no comments anywhere in there) and published it online along with the paper?

edit: I have uploaded the thesis and corresponding stuff (compiled win32 application, and source code) later in this thread.

Edited by agleed, 01 May 2014 - 09:39 AM.

### #39 MJP - Moderators - Reputation: 11790


Posted 25 February 2014 - 01:01 AM

> Would you guys be interested if I polished the source code up a little (practically no comments anywhere in there) and published it online along with the paper?

Of course! Code is always great to have.

### #40 Krypt0n - Crossbones+ - Reputation: 2685


Posted 25 February 2014 - 01:47 AM