Techniques used for precomputing lightmaps


I'm currently interested in writing a lightmap generator for the old Quake3 BSP format as a side project and was wondering what techniques the latest and greatest lightmap generators are using. I've come across a few such as photon mapping of some sort (UE4), rasterization + irradiance gradients (The Witness), and as far as I can tell the map compiler for Quake3 used some radiosity-based technique.

What techniques are games using nowadays? I'm looking for something which gives fast convergence and doesn't require too much tweaking to get the desired results. I initially thought to use photon mapping and actually made a start, but I thought it would be good to get some more opinions first. I'm initially writing this for CPU, and then moving to GPU when I have a good understanding of how the algorithm works. Any advice is welcome :D

I think photon mapping is the simplest and most feature-rich way to go. You can gather indirect illumination as well as direct illumination (such as projected colored light) without adding complexity, as every light source can be handled in its own pass.
Gathering can also easily add features like caustics and ambient occlusion.
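To make that concrete, here is a rough sketch of a per-light emission pass in C++. TraceRay, RandomDirectionOnSphere and CosineSampleHemisphere are hypothetical helpers standing in for your BSP ray caster and sampling code; the point is just the structure: each light gets its own pass, each photon carries a share of the light's power, and every diffuse hit gets stored, so direct and indirect lighting end up in the same map.

// Sketch only: TraceRay and the sampling helpers are hypothetical stand-ins.
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct Photon { Vec3 position; Vec3 direction; Vec3 power; };
struct Hit { Vec3 position; Vec3 normal; Vec3 albedo; bool valid; };

// Hypothetical scene/sampling interface.
Hit TraceRay(const Vec3& origin, const Vec3& dir);
Vec3 RandomDirectionOnSphere(std::mt19937& rng);
Vec3 CosineSampleHemisphere(const Vec3& normal, std::mt19937& rng);

void EmitPhotonsFromPointLight(const Vec3& lightPos, const Vec3& lightPower,
                               int photonCount, std::vector<Photon>& photonMap)
{
    std::mt19937 rng(1234);
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);

    for (int i = 0; i < photonCount; ++i)
    {
        // Each photon carries an equal share of this light's power.
        Vec3 power = { lightPower.x / photonCount,
                       lightPower.y / photonCount,
                       lightPower.z / photonCount };
        Vec3 origin = lightPos;
        Vec3 dir = RandomDirectionOnSphere(rng);

        for (int bounce = 0; bounce < 8; ++bounce)
        {
            Hit hit = TraceRay(origin, dir);
            if (!hit.valid)
                break;

            // Store every diffuse hit: the first hit gives direct lighting,
            // later hits give indirect lighting, all in the same map.
            photonMap.push_back({ hit.position, dir, power });

            // Russian roulette based on the surface albedo decides survival.
            float p = std::max({ hit.albedo.x, hit.albedo.y, hit.albedo.z });
            if (uniform(rng) > p)
                break;

            // Surviving photons are reweighted so the expected power is unchanged.
            power = { power.x * hit.albedo.x / p,
                      power.y * hit.albedo.y / p,
                      power.z * hit.albedo.z / p };
            origin = hit.position;
            dir = CosineSampleHemisphere(hit.normal, rng);
        }
    }
}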

The real challenge in my experience is UV generation. As a first step you might want to skip it and use an existing tool.

Check out the Square Enix paper on fast global illumination baking, and the one from The Last of Us.

I find that path tracers are easy to understand, and can be relatively simple to implement. You can check out Physically Based Rendering if you're looking for a good book on the subject, or you can look at pbrt or Mitsuba if you'd like to see the code for a working implementation. Aside from being straightforward, path tracers have a few nice properties:

  • If they're unbiased, then adding more samples will always converge towards the correct result. So adding more rays means better quality. This is not the case for photon mapping, which is a biased rendering method.
  • You can handle pretty much any shading model, and by using importance sampling techniques you can improve convergence as well.
  • Depending on how you integrate, it's possible to write a progressive renderer. This means that you can show low-quality results right away, and continuously update those results as more rays come in.

Probably the biggest downside of path tracing is that a simple implementation can be rather noisy compared to some other techniques, especially for certain scenes where the light transport is particularly complicated. At the very least you'll typically need some form of importance sampling, and more complex scenes may require bidirectional path tracing in order to converge more quickly.
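To illustrate, here is a rough diffuse-only sketch in C++ of what a progressive lightmap path tracer might look like per texel. TraceRay and CosineSampleHemisphere are hypothetical helpers; the function returns one sample of the texel's irradiance divided by pi (a convenient lightmap quantity, since the runtime shader just multiplies it by the surface albedo), and averaging more samples converges towards the correct answer.

// Sketch only: TraceRay and CosineSampleHemisphere are hypothetical helpers.
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
struct Hit { Vec3 position; Vec3 normal; Vec3 albedo; Vec3 emission; bool valid; };

Hit TraceRay(const Vec3& origin, const Vec3& dir);              // hypothetical
Vec3 CosineSampleHemisphere(const Vec3& n, std::mt19937& rng);  // hypothetical

// One Monte Carlo sample of (irradiance / pi) arriving at a lightmap texel.
Vec3 SampleIrradianceOverPi(const Vec3& texelPos, const Vec3& texelNormal,
                            int maxBounces, std::mt19937& rng)
{
    Vec3 radiance = { 0, 0, 0 };
    Vec3 throughput = { 1, 1, 1 };

    Vec3 origin = texelPos;
    Vec3 dir = CosineSampleHemisphere(texelNormal, rng);

    for (int bounce = 0; bounce < maxBounces; ++bounce)
    {
        Hit hit = TraceRay(origin, dir);
        if (!hit.valid)
            break;

        // Emitted light seen along the path, weighted by the path throughput.
        radiance.x += throughput.x * hit.emission.x;
        radiance.y += throughput.y * hit.emission.y;
        radiance.z += throughput.z * hit.emission.z;

        // Lambertian BRDF (albedo / pi) with cosine-weighted sampling
        // (pdf = cos / pi): the cos and pi factors cancel, so the
        // per-bounce weight reduces to the surface albedo.
        throughput.x *= hit.albedo.x;
        throughput.y *= hit.albedo.y;
        throughput.z *= hit.albedo.z;

        origin = hit.position;
        dir = CosineSampleHemisphere(hit.normal, rng);
    }
    return radiance;
}

// Progressive accumulation: average independent samples per texel, so a
// noisy lightmap can be shown immediately and refined as more rays come in.
Vec3 AccumulateTexel(const Vec3& pos, const Vec3& normal, int samples)
{
    std::mt19937 rng(std::random_device{}());
    Vec3 sum = { 0, 0, 0 };
    for (int i = 0; i < samples; ++i)
    {
        Vec3 s = SampleIrradianceOverPi(pos, normal, 4, rng);
        sum.x += s.x; sum.y += s.y; sum.z += s.z;
    }
    return { sum.x / samples, sum.y / samples, sum.z / samples };
}

The cosine-weighted hemisphere sampling here is the simplest useful form of importance sampling for diffuse surfaces; explicit light sampling and more advanced techniques slot into the same structure.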

Thanks for the advice guys. I'll be starting off with diffuse-only reflections, so that simplifies things a lot. I have a fairly basic photon mapping solution going at the moment (visualising it just by rendering the scene from some point in the map, instead of generating the lightmaps) but I'm having trouble with this case:


+-------------------+
|               L   |
|   ----------------+
|               X   |
+-------------------+
L = Light, X = problem area

I'm getting very noisy results at X, even with around 30M photons. I would guess this is because only a small percentage of photons reach it from L - is this just an inherent issue with photon mapping, short of increasing the number of photons even further?

I might try out path tracing as well to compare the results.
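For reference, the noise at X falls out of the standard photon-map radiance estimate: you gather the k nearest photons around the shading point and divide their summed power by the area of the gather disc. Where the photon density is low (as behind that wall), the search radius grows and a handful of photons dominate the estimate, which shows up as splotches. A rough sketch, with FindNearestPhotons standing in for a hypothetical kd-tree query:

// Sketch only: FindNearestPhotons is a hypothetical k-nearest-neighbour query.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Photon { Vec3 position; Vec3 direction; Vec3 power; };

// Hypothetical: returns the k photons nearest to 'pos' and the radius of
// the smallest sphere that contains them.
std::vector<Photon> FindNearestPhotons(const Vec3& pos, int k, float& outRadius);

Vec3 EstimateRadiance(const Vec3& pos, const Vec3& albedo, int k)
{
    float radius = 0.0f;
    std::vector<Photon> nearest = FindNearestPhotons(pos, k, radius);

    // Sum the gathered photon power.
    Vec3 flux = { 0, 0, 0 };
    for (const Photon& p : nearest)
    {
        flux.x += p.power.x;
        flux.y += p.power.y;
        flux.z += p.power.z;
    }

    // Density estimate: divide by the area of the gather disc.
    const float invArea = 1.0f / (3.14159265f * radius * radius);
    const float invPi = 1.0f / 3.14159265f;  // Lambertian BRDF = albedo / pi
    return { albedo.x * invPi * flux.x * invArea,
             albedo.y * invPi * flux.y * invArea,
             albedo.z * invPi * flux.z * invArea };
}

Emitting more photons helps, but the usual mitigations are to compute direct lighting separately with shadow rays and keep the photon map only for the indirect part, or to do a final gather at each texel instead of visualising the map directly.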

