
Global illumination techniques


solenoidz    591

Hello,

Recently I was wondering how modern game engines simulate global illumination these days. I'm watching, for example, this walkthrough of "The Last of Us", and I have to say the indoor environments are rendered very nicely. Sunlight coming through the windows illuminates the rooms naturally and bounces off surfaces.

Here is a video of what I'm talking about:

http://www.youtube.com/watch?v=fkjpxBS-wHk#t=12m28s

 

Is it good old lightmaps?

ATEFred    1700

Interesting topic, I would love to hear about different approaches as well.
Not sure what they are doing in TLoU, but some possible approaches are:

• Voxel cone tracing (what all the cool kids are trying these days) - not in any released products that I know of, at least.
• Light propagation volumes - Crytek uses this. I've never seen it look very good myself.
• PRT probes, modulated at runtime by lighting information from the scene (Far Cry 3 used this).
• Masses of virtual point lights generated through RSMs (reflective shadow maps), merged per tile/pixel. The latest Ghost Recon on PC did this.
• Lightmaps and precomputed AO factors, which are a safe option for static scenes, I guess.
• Hundreds of manually placed fill lights.

 

I'm sure there are plenty more of course.

FreneticPonE    3294

For the current generation, lightmaps are most definitely in, even if most engines have moved to spherical harmonic terms in order to give normal maps some nice directional light. Even CryEngine 3's image-based lighting is very similar in result and application, though it was first used in driving games (hey, you've got a real-time updated environment map anyway).
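To give a rough idea of what "directional light for normal maps" means here: a common approach is to store a couple of spherical harmonic bands per lightmap texel and evaluate them along the per-pixel normal at shading time. A minimal sketch - the storage layout and names below are my own, not any particular engine's:

```cpp
// Hypothetical layout: each lightmap texel stores 2-band (4-coefficient) SH
// irradiance per colour channel; 'sh' below is one channel of one texel.
struct SH2 { float c[4]; };   // L00, L1-1, L10, L11

// Evaluate irradiance along the per-pixel (normal-mapped) normal. The
// constants fold the cosine-lobe convolution into the basis functions.
float EvaluateIrradiance(const SH2& sh, float nx, float ny, float nz)
{
    const float c0 = 0.886227f;   // pi * Y00
    const float c1 = 1.023327f;   // (2*pi/3) * first-band scale
    return c0 * sh.c[0]
         + c1 * (sh.c[1] * ny + sh.c[2] * nz + sh.c[3] * nx);
}
```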

 

The one exception I can think of is Far Cry 3. They take a volume and apply a spherical harmonic lighting term to it, but keep updating that term. Basically they've pre-calculated a function for light bounce for each area, encoded that into a spherical harmonic probe, and use that to light. Each probe covers something like 4 square meters of terrain, they load an X by X grid of probes around the player (there isn't enough memory to extend the probes all the way to the horizon), and they update something like one probe per frame. Because the sun isn't moving too fast, the slow update isn't too noticeable. "Deferred radiance transfer volumes", if that's what you want to call it: http://fileadmin.cs.lth.se/cs/Education/EDAN35/lectures/L10b-Nikolay_DRTV.pdf
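Very roughly, the probe-grid idea looks something like the sketch below. Everything here (the names, the per-coefficient "relighting", the grid layout) is an illustrative guess rather than Far Cry 3's actual implementation; the real baked transfer data is more involved than a simple per-coefficient product.

```cpp
#include <cstddef>
#include <vector>

struct SHProbe {
    float transfer[9];   // precomputed (baked) bounce transfer, 3-band SH
    float radiance[9];   // runtime result: transfer "relit" by the current sun
};

struct ProbeGrid {
    int dimX, dimY, dimZ;            // e.g. a grid of probes a few meters apart
    std::vector<SHProbe> probes;     // streamed in around the player
    std::size_t nextToUpdate = 0;

    // Amortised update: relight only one probe per frame; the sun moves
    // slowly enough that the lag is hard to notice.
    void UpdateOneProbe(const float sunSH[9])
    {
        if (probes.empty()) return;
        SHProbe& p = probes[nextToUpdate];
        for (int i = 0; i < 9; ++i)
            p.radiance[i] = p.transfer[i] * sunSH[i];   // toy stand-in for relighting
        nextToUpdate = (nextToUpdate + 1) % probes.size();
    }
};
```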

 

Then for the next generation, well, there's something like what Far Cry 3 does, just using more compute/memory (a higher-density grid? higher-order spherical harmonics? etc.). Or a bunch of hacks that are extremely limited and more of a neat special effect than something that actually works for lighting an entire level - a lot of attempts produce results that aren't good enough, have too many artifacts, take up too many milliseconds, etc. And voxel cone tracing, if you can actually get it working. E.g.:

 

http://www.youtube.com/watch?v=fAsg_xNzhcQ

 

http://www.youtube.com/watch?v=tn607OoVoRw

 

One of the biggest problems I can think of is the "everything is shiny" problem. The specular contribution of materials is just as prevalent as the diffuse, and yet so many attempts at more real-time GI out there are diffuse only - simply because diffuse is easier: it's lower frequency, i.e. less memory and fewer samples.

Chris_F    3030

The Last of Us appears to be using reflective shadow maps for the flashlight. Everything else is likely baked.

Share this post


Link to post
Share on other sites
Promit    13246

I believe that voxel cone tracing is state of the art if you want to do real time GI. I think Crytek and Unreal 4 have it, though I'm not sure if anybody's shipped an actual game with it yet.

http://on-demand.gputechconf.com/gtc/2012/presentations/SB134-Voxel-Cone-Tracing-Octree-Real-Time-Illumination.pdf

In terms of slightly more feasible technology, a lot of people are using prebaked SH environment probe based approaches. Essentially, set up environment probes as usual, compute SH to however many terms you feel like having, and save them off. Interpolate at runtime between nearby probes with whatever clever hacks you feel like applying and create a lighting environment like that.
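A minimal sketch of the runtime side of that, assuming a regular grid of baked 3-band probes and a plain trilinear blend of the eight surrounding probes (the "clever hacks" are left out, and all names here are made up):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

using SH9 = std::array<float, 9>;

struct ProbeVolume {
    int nx, ny, nz;           // number of probes along each axis
    float spacing;            // world-space distance between probes
    std::vector<SH9> probes;  // baked offline, size nx * ny * nz

    const SH9& At(int x, int y, int z) const {
        return probes[(z * ny + y) * nx + x];
    }

    // Trilinearly blend the 8 probes surrounding a world-space position
    // (assumes the volume starts at the origin; offsets omitted).
    SH9 Sample(float px, float py, float pz) const {
        const float fx = px / spacing, fy = py / spacing, fz = pz / spacing;
        const int x0 = std::clamp((int)std::floor(fx), 0, nx - 2);
        const int y0 = std::clamp((int)std::floor(fy), 0, ny - 2);
        const int z0 = std::clamp((int)std::floor(fz), 0, nz - 2);
        const float tx = std::clamp(fx - x0, 0.0f, 1.0f);
        const float ty = std::clamp(fy - y0, 0.0f, 1.0f);
        const float tz = std::clamp(fz - z0, 0.0f, 1.0f);

        SH9 out{};
        for (int dz = 0; dz <= 1; ++dz)
        for (int dy = 0; dy <= 1; ++dy)
        for (int dx = 0; dx <= 1; ++dx) {
            const float w = (dx ? tx : 1.0f - tx) * (dy ? ty : 1.0f - ty) * (dz ? tz : 1.0f - tz);
            const SH9& p = At(x0 + dx, y0 + dy, z0 + dz);
            for (int i = 0; i < 9; ++i)
                out[i] += w * p[i];
        }
        return out;   // feed into your usual SH irradiance evaluation
    }
};
```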

allingm    539

I believe that voxel cone tracing is state of the art if you want to do real time GI. I think Crytek and Unreal 4 have it, though I'm not sure if anybody's shipped an actual game with it yet.

http://on-demand.gputechconf.com/gtc/2012/presentations/SB134-Voxel-Cone-Tracing-Octree-Real-Time-Illumination.pdf

In terms of slightly more feasible technology, a lot of people are using prebaked SH environment probe based approaches. Essentially, set up environment probes as usual, compute SH to however many terms you feel like having, and save them off. Interpolate at runtime between nearby probes with whatever clever hacks you feel like applying and create a lighting environment like that.

Look at those numbers though.  That technique looks next-next gen.  At least, out of the box it isn't viable.

solenoidz    591

Thanks for the replies. 

Well, in my case I can't use pure lightmaps, since my scenes can be pretty dynamic and the gameplay will be mostly physics-based. I can't even bake the lighting on the static geometry only, because when dynamic objects from one area are moved somewhere else, they won't contribute to the lighting in that new area, etc. I'm thinking of combining lightmaps with dynamic lights. For example, a room with baked lightmaps, and in my editor I'd place a deferred light of the same color by the window to illuminate any dynamic objects that end up near the window, since they can't be affected by the lightmap. If that light cast shadows too, it could probably solve some other issues, but it seems like too much duplication of the lighting with different techniques.
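Something like the sketch below is what I have in mind for the dynamic objects: the static room keeps its lightmap, while anything dynamic sums up a few hand-placed fill lights that approximate the same bounce. All the names and the falloff here are just placeholders:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// A hand-placed light that only approximates bounced light for dynamic
// objects; the static room gets the same bounce from its baked lightmap.
struct FillLight { Vec3 position; Vec3 color; float radius; };

static float Distance(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Indirect term for a dynamic object at 'pos': sum the fill lights with a
// smooth distance falloff. A static surface would sample its lightmap instead.
Vec3 IndirectForDynamic(const Vec3& pos, const std::vector<FillLight>& fills) {
    Vec3 sum{0.0f, 0.0f, 0.0f};
    for (const FillLight& l : fills) {
        const float t = 1.0f - std::fmin(Distance(pos, l.position) / l.radius, 1.0f);
        const float w = t * t;   // quadratic-ish falloff, purely a guess
        sum.x += l.color.x * w;
        sum.y += l.color.y * w;
        sum.z += l.color.z * w;
    }
    return sum;
}
```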

MJP    19756

I believe that voxel cone tracing is state of the art if you want to do real time GI. I think Crytek and Unreal 4 have it, though I'm not sure if anybody's shipped an actual game with it yet.

 

Epic has since moved away from it, they're using pre-baked lightmaps and specular probes now. Crytek was using an entirely different technique (Cascaded Light Propagation Volumes) which has a different set of tradeoffs. They shipped it for the PC version of Crysis 2 but not the console version, and I'm not sure if they used it in Crysis 3.

GFalcon    399


Epic has since moved away from it, they're using pre-baked lightmaps and specular probes now

 

Do you know why they gave up on it? Voxel cone tracing seemed very promising to me, even for the upcoming "next gen".

MJP    19756

 


Epic has since moved away from it, they're using pre-baked lightmaps and specular probes now

 

Do you know why they gave up on it? Voxel cone tracing seemed very promising to me, even for the upcoming "next gen".

 

 

I'm sure it was the performance. Per-pixel octree traversals + 3D texture lookups are not fast, even on high-end GPUs.

However I'm quite sure someone will end up shipping a game with something similar, perhaps tuned specifically to the needs of that title. In fact at E3 I noticed that Knack has dynamic reflections and soft shadows, which looked like they might be generated using voxelization.

jcabeleira    723


In fact at E3 I noticed that Knack has dynamic reflections and soft shadows, which looked like they might be generated using voxelization.

 

If you look at the video you can see that it's screen-space reflections, because the reflection disappears when the reflected object goes off screen; pay attention to the glowing lights on the wall on the right: http://www.youtube.com/watch?feature=player_detailpage&v=iG98LuaYj_g#t=267s.

 

IMO, soft shadows are probably done with a common shadow mapping technique.

MJP    19756

 


In fact at E3 I noticed that Knack has dynamic reflections and soft shadows, which looked like they might be generated using voxelization.

 

If you look at the video you can see that it's screen-space reflections, because the reflection disappears when the reflected object goes off screen; pay attention to the glowing lights on the wall on the right: http://www.youtube.com/watch?feature=player_detailpage&v=iG98LuaYj_g#t=267s.

 

IMO, soft shadows are probably done with a common shadow mapping technique.

 

Indeed, you can actually see where the reflections stop once the view angle is too oblique. I guess I wasn't paying close enough attention at the demo. :P

Kryzon    4629

I like the way the shadows in Naughty Dog's "The Last of Us" are unified.

That is, the actors' and environment's shadows merge together like they're supposed to.

Most games just place lightmaps on the environment and then dynamic shadows from actors blend over them - that is, they further darken the lightmaps.

This is unrealistic, since both these shadow representations come from the same light source and should merge together instead of darkening one another.

 

Based on the "low-resolution" appearance of the environment shadows in this game, I can assert that they're not lightmaps but actual static shadow maps that are rendered in most likely the same pass as the actors' shadows so that they merge together.

There are most likely other lighting contributions involved in the sophisticated visuals for this game, but static shadow maps for the environment are participating.

cowsarenotevil    3005

I like the way the shadows in Naughty Dog's "The Last of Us" are unified.

That is, the actors' and environment's shadows merge together like they're supposed to.

Most games just place lightmaps on the environment and then dynamic shadows from actors blend over them - that is, they further darken the lightmaps.

This is unrealistic, since both these shadow representations come from the same light source and should merge together instead of darkening one another.

 

Based on the "low-resolution" appearance of the environment shadows in this game, I can assert that they're not lightmaps but actual static shadow maps that are rendered in most likely the same pass as the actors' shadows so that they merge together.

There are most likely other lighting contributions involved in the sophisticated visuals for this game, but static shadow maps for the environment are participating.

 

I agree that this is the right effect to aim for, but I disagree that most games do it the "wrong" way. Unreal Engine 3, for instance, separates the "direct" component of dominant lights from the indirect component for exactly this reason (among others, such as doing different filtering on the sharp edges of direct lights* versus the smoother gradients of the indirect component). This makes it relatively efficient to combine lightmaps with dynamic shadow maps, since shadow maps only block out the direct component and leave the indirect light unchanged (which is of course an approximation itself, but one that is generally acceptable).

 

*In fact, this may be an alternative explanation for the "low-resolution" appearance of the shadows you're seeing, meaning it might still be lightmaps rather than shadow maps - it seems weird to recalculate even just the direct-light shadows for static lights/geometry every frame for no reason.
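In shader terms that split boils down to something like this (an illustrative sketch, not Epic's actual code):

```cpp
struct Color { float r, g, b; };

// The lightmap stores just the indirect term, the dominant light's direct
// term stays analytic, and the dynamic shadow map attenuates only that
// direct term. Overlapping actor and environment shadows then scale the
// same term and merge instead of stacking/double-darkening.
Color Shade(Color albedo,
            Color bakedIndirect,    // sampled from the lightmap
            Color directLight,      // dominant light: N.L * light colour
            float dynamicShadow)    // 0 = fully shadowed, 1 = fully lit
{
    return { albedo.r * (bakedIndirect.r + directLight.r * dynamicShadow),
             albedo.g * (bakedIndirect.g + directLight.g * dynamicShadow),
             albedo.b * (bakedIndirect.b + directLight.b * dynamicShadow) };
}
```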

Kryzon    4629

I forgot to mention this. Part of the reason for me thinking the environment uses static shadow maps is that these environment shadows not only blend with actor shadows but they are also projected onto the actors themselves, making it look like everything is part of the "same world", belonging together.

I've taken some snapshots to illustrate (pardon the low resolution).

 

• Here the actor's shadow is merging with static environment shadows.

[screenshot]

 

• Here you can see how the environment shadows are pixelated, and how they're projected onto the actor.

[screenshot]

 

• Look at that partially lit seat in the center of the screen. The shadow projected onto it is very pixelated and comes from the environment. When you approach the seat, this shadow suddenly gains a higher resolution.

[screenshots]

 

• In some parts of the levels, such as behind large buildings (places where the actors are always covered in shadow), it's easier to notice that not only are shadow maps involved but also some sort of baked AO term - you can see this below the car and below that ledge on the gray building to the left.

Additionally, in these cases the actors do cast dynamic shadows (even darkening the environment shadows), but they're not hard-edged like when under the sun. Instead they're blurred to the point of being smooth and made almost transparent, looking like a subtle shade.

[screenshot]

cowsarenotevil    3005

Hmm, you're right. It looks like they're using cascaded shadow maps for both the static and dynamic geometry, which is interesting. I assume they bake only the indirect lighting and then just add in the direct lighting on the fly. If nothing else, it's probably easier to implement than storing the contribution of direct light onto static geometry.

solenoidz    591

Hmm, you're right. It looks like they're using cascaded shadow maps for both the static and dynamic geometry, which is interesting. I assume they bake only the indirect lighting and then just add in the direct lighting on the fly. If nothing else, it's probably easier to implement than storing the contribution of direct light onto static geometry.

 

Guys, I understand the part about the shadows. Whether they're using static shadow maps for the static level geometry isn't the interesting part to me. I don't think they just bake the indirect lighting and that's it - the actors and other objects moving through the level receive indirect lighting as well. I have a feeling they have some sort of lightmap on the static levels and also some "fill lights" placed here and there to simulate bounced light and to illuminate the dynamic objects that move around.

Bummel    1888

My guess would be that they are using baked irradiance volumes for the indirect part. I haven't seen the game in action yet, though.

kalle_h    2464

The Last of Us has a quite good-looking dynamic ambient occlusion technique. It's just a bit too dark, but it doesn't have any screen-space limitations. I'm thinking it could be Ambient Occlusion Fields (http://www.gdcvault.com/play/1015320/Ambient-Occlusion-Fields-and-Decals), but that paper says the sources have to be rigid, so maybe they have something completely different.
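If it is something along those lines, the runtime side might look roughly like the sketch below. This is just my reading of the general idea, with made-up names and layout, not the actual technique from the talk:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-occluder field: a small baked 3D grid of visibility values
// in the occluder's local space (1 = unoccluded, 0 = fully occluded).
struct AOField {
    int dim;                       // grid is dim x dim x dim
    float extent;                  // half-size of the grid in object space
    std::vector<float> occlusion;  // baked offline, size dim^3

    // Point-sample the field at an object-space position
    // (trilinear filtering left out for brevity).
    float Sample(float x, float y, float z) const {
        auto idx = [&](float v) {
            float t = v / (2.0f * extent) + 0.5f;   // map to [0, 1]
            return std::clamp(static_cast<int>(t * (dim - 1)), 0, dim - 1);
        };
        return occlusion[(idx(z) * dim + idx(y)) * dim + idx(x)];
    }
};

// At shading time a receiver would multiply the visibility from every nearby
// occluder's field into its ambient term, e.g.
//   ambient *= chairField.Sample(px, py, pz) * tableField.Sample(qx, qy, qz);
```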

