What is the best indirect lighting technique for a game?

Started by
8 comments, last by LHLaurini 8 years, 6 months ago

I'm making my own game engine and I want it to have good graphics. I thought about baking ray-traced scenes into textures in Blender, but that's not really a good idea because it takes too much time and works only for static objects. Then I discovered some indirect lighting techniques such as "Reflective Shadow Maps" and "Cascaded Light Propagation Volumes for Real Time Indirect Illumination".

So I'm wondering: what's the best indirect lighting technique for a game?

Of course, I first need to define "best technique". It should...

  • ...be efficient
  • ...be scalable (be able to increase performance by decreasing quality)
  • ...look good
  • ...be easy to implement (not very important though; I wouldn't sacrifice quality just because a technique is hard to implement, but it does count)

Even though this may be a little opinion-based, I want to know what you think, so feel free to post your thoughts. If you suggest another technique, please include a link to documentation (articles) and, if available, a tutorial.


The best technique is highly dependent on your exact requirements: scene type, scale, number of dynamic objects, lighting complexity, how many bounces you want, how many months or years you want to spend researching this, what your target hardware is, and so on. I think it's safe to say no GI technique is trivial to implement in a way that looks good in real-life scenarios and runs fast (no light leaking, no bugs, etc.). It's an area of active research, so you won't find ready-made solutions.

Maybe it's best to start by implementing some of the building-block techniques, like basic RSM or lightmap baking, and take it from there.
If you want to read up on modern approaches for dynamic GI, Remedy just released a paper covering their approach. Not miles apart is Ubisoft's solution used on Far Cry 3/4. Another trendy approach I haven't seen used in larger commercial games yet is voxel cone tracing. Google will lead you to a bunch of papers on all of these, but realistically this is not the kind of thing where you just follow the paper and get awesome results fast.


Yeah, I know I'll probably spend weeks or maybe months just to get some simple lighting working. Thanks for the tips though.

I intend to use huge scenes with not many dynamic objects (like most games), supporting high-end machines but also low-end ones.

Sadly enough, a "best" technique that scales from low-end to high-end hardware, handles huge scenes with both static and dynamic geometry, runs in a reasonable amount of time within a reasonable amount of memory (and is also easy to implement) does not exist.

Dynamic indirect lighting is still a ridiculously difficult problem to solve accurately in a rasterizer for real-time applications.

If you're working with scenes with static lighting, for example (and really, this is just an example, not the best or most flexible solution), you can look at using light probes: an offline process in which you capture diffuse and specular lighting. Diffuse lighting can be stored using spherical harmonics, while specular lighting can be captured in a cubemap which you then resolve against your specular BRDF. This gives you a fairly cheap approximation of indirect lighting coming off your environment, but dynamic objects will not influence your bounce lighting. For that you could use a local solution like reflective shadow maps.
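To make the spherical-harmonics part concrete, here is a minimal single-channel sketch of a 2nd-order (9-coefficient) SH diffuse probe: project incoming radiance onto the SH basis offline, then evaluate irradiance per normal at runtime. The function names are illustrative, not from any particular engine; the cosine-lobe constants are the standard Ramamoorthi & Hanrahan ones.

```python
# Sketch: 2nd-order spherical harmonics diffuse light probe
# (9 coefficients per colour channel; shown for a single channel).
import math, random

def sh_basis(x, y, z):
    """Real SH basis, bands 0-2 (9 values), for a unit direction."""
    return [
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ]

def project_radiance(radiance_fn, n_samples=20000, seed=1):
    """Monte Carlo projection of incoming radiance onto the SH basis."""
    rng = random.Random(seed)
    coeffs = [0.0] * 9
    for _ in range(n_samples):
        # Uniform direction on the sphere.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        x, y = r * math.cos(phi), r * math.sin(phi)
        L = radiance_fn(x, y, z)
        for i, b in enumerate(sh_basis(x, y, z)):
            coeffs[i] += L * b
    scale = 4.0 * math.pi / n_samples  # solid angle of sphere / sample count
    return [c * scale for c in coeffs]

# Cosine-lobe convolution constants per band (Ramamoorthi & Hanrahan).
A = [math.pi, 2.0 * math.pi / 3.0, math.pi / 4.0]
BAND = [0, 1, 1, 1, 2, 2, 2, 2, 2]

def irradiance(coeffs, nx, ny, nz):
    """Diffuse irradiance at a surface point with normal (nx, ny, nz)."""
    basis = sh_basis(nx, ny, nz)
    return sum(A[BAND[i]] * coeffs[i] * basis[i] for i in range(9))
```

As a sanity check, a constant unit radiance environment should give an irradiance of pi at any normal (and thus a reflected value of 1 after dividing by pi for a Lambertian surface).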

If you do want lighting to be dynamic you'll have to do some very careful research on your exact requirements and the techniques available out there. That, or you could do your very own cutting edge research and solve this problem for all of us once and for all. We would be very grateful.




I probably should've expected that. What I think I'm going to do is use pre-baked lightmaps for low-end, RSM for mid-range and CLPV for high-end. Or maybe something like that; I don't know. I need to do a ton of research.
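A trivial sketch of that tiered plan, just to show the shape of the dispatch (the names and the benchmark-score thresholds are entirely made up):

```python
# Hypothetical quality-tier dispatch: baked lightmaps on low-end,
# RSM on mid-range, cascaded LPV on high-end.
from enum import Enum

class GITechnique(Enum):
    BAKED_LIGHTMAPS = "baked lightmaps"
    REFLECTIVE_SHADOW_MAPS = "reflective shadow maps"
    CASCADED_LPV = "cascaded light propagation volumes"

def pick_gi_technique(gpu_score: float) -> GITechnique:
    """Map a rough GPU benchmark score to an indirect-lighting tier."""
    if gpu_score < 30.0:
        return GITechnique.BAKED_LIGHTMAPS
    if gpu_score < 70.0:
        return GITechnique.REFLECTIVE_SHADOW_MAPS
    return GITechnique.CASCADED_LPV
```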

That, or you could do your very own cutting edge research and solve this problem for all of us once and for all. We would be very grateful.

Yeah, we all would. You shouldn't expect that from me though, but who knows? :-)

If you decide to try voxel cone tracing, do it on (nested) dense grids. A sparse voxel octree, besides not really being affordable performance-wise in a setting where you do other stuff besides dynamic GI, is also a pain in the butt to implement and maintain. Trust me, I have been there. :\
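A minimal sketch of what "nested dense grids" can mean in practice: a few concentric grids of the same resolution centred on the camera, each covering twice the extent of the previous one (clipmap-style), with a lookup that returns the finest grid containing a point. The layout constants are illustrative assumptions.

```python
# Sketch: nested dense voxel grids for cone tracing.
RESOLUTION = 32      # voxels per axis, at every level
BASE_EXTENT = 8.0    # world-space half-extent of the finest level
NUM_LEVELS = 4       # levels double in extent: 8, 16, 32, 64

def locate_voxel(px, py, pz, cam=(0.0, 0.0, 0.0)):
    """Return (level, ix, iy, iz) of the finest level containing the
    point, or None if it lies outside all levels."""
    rel = (px - cam[0], py - cam[1], pz - cam[2])
    for level in range(NUM_LEVELS):
        half = BASE_EXTENT * (2 ** level)
        if all(-half <= c < half for c in rel):
            cell = 2.0 * half / RESOLUTION  # voxel size at this level
            ix, iy, iz = (int((c + half) / cell) for c in rel)
            return (level, ix, iy, iz)
    return None
```

A cone trace then steps through the grids coarse-to-fine as the cone footprint grows, but the addressing above is the core of it.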

Or you could do like the big boys and use Autodesk Beast, though I have no idea how much that would cost you money-wise.

If you're asking about the most scalable technique with fast computation times that is also robust, I can recommend only one thing, and that is path tracing (to be precise, bi-directional path tracing with multiple importance sampling). It is scalable, robust, physically correct and also fast (actually one of the fastest ways to correctly compute GI), yet getting it real-time without noise is close to impossible (with today's hardware).
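The "multiple importance sampling" part boils down to a small weighting function. A sketch of the standard balance heuristic (Veach): when a sample could have been generated by several sampling strategies (e.g. BSDF sampling and light sampling), weight its contribution by its own pdf relative to the sum of all strategies' pdfs for that sample.

```python
# Sketch: balance heuristic for multiple importance sampling.
def balance_heuristic(pdf_this, pdf_others):
    """MIS weight for a sample drawn from the strategy with pdf
    `pdf_this`, given the pdfs the other strategies would assign to
    the same sample. Weights across all strategies sum to one."""
    total = pdf_this + sum(pdf_others)
    return pdf_this / total if total > 0.0 else 0.0
```

Each strategy's estimate is multiplied by its weight before being added to the pixel, which is what keeps bi-directional path tracing robust across both sharp and diffuse transport.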

Now, there are a few solutions that are fast enough and give you a quite convincing effect (closely resembling what GI should look like) at solid speed. For a fully dynamic scene and lighting I've so far used a solution similar to reflective shadow maps: for each light in the scene you cast rays that hit a surface at some position, and that is where a virtual point light (VPL) is placed.

After this step you have generated literally a ton of virtual point lights, so some algorithm is used to merge neighbours into one (you can place them into a grid, average their colour and intensity per cell, and use one light at, e.g., the cell centre). Then you pick some (let's say N) of those VPLs (based on distance from the camera, intensity, and in general how much effect they will have on the final image) and generate a small shadow map for each (using either a ray tracer or rasterization). This map is used to generate secondary shadows (it doesn't need high resolution; blurry is good here). I've used VSM to keep them nice and blurry. To accelerate this process, simplified scene geometry can be used.
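The merge-into-a-grid step can be sketched in a few lines. This follows the post's choice of averaging each cell's intensity (summing instead would conserve total energy; either is a design choice); the importance metric here is just intensity, whereas a real implementation would also weigh camera distance as described above.

```python
# Sketch: merge VPLs into a uniform grid, keep the N most important.
from collections import defaultdict

def merge_vpls(vpls, cell_size, keep_n):
    """vpls: list of (x, y, z, intensity) tuples.
    Returns at most keep_n merged VPLs, most important first."""
    cells = defaultdict(list)
    for x, y, z, inten in vpls:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        cells[key].append(inten)
    merged = []
    for key, intensities in cells.items():
        avg = sum(intensities) / len(intensities)      # average per cell
        centre = tuple((c + 0.5) * cell_size for c in key)  # cell centre
        merged.append(centre + (avg,))
    merged.sort(key=lambda v: v[3], reverse=True)      # importance = intensity
    return merged[:keep_n]
```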

Each VPL's shadow map can be kept for the next frame (unless something dynamic moved within its range), and this can also factor into its importance, so in the end you will quickly have shadow casting on all VPLs (though with quite a lot of overdraw in general). This handles diffuse-only global illumination (sorry, no caustics; there are other solutions for those, mostly pre-computed).

Advantages:

  • no need to generate voxels (or an SVO)
  • better secondary shadows compared to SVO
  • supports fully dynamic scenes
  • can store shadow maps and precompute them for non-dynamic parts

Disadvantages:

  • large overdraw
  • needs fast generation of shadow maps
  • shadow map storage (I've used a texture atlas of shadow maps)
  • might need two scene representations (you can use the more complex one, but your shadow map generation phase will be slow)

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com



Thanks. Can you repost your comment so I can upvote it? I accidentally pressed downvote on my phone. I'm so sorry.

This topic is closed to new replies.
