Irradiance environment map and visibility

Started by
3 comments, last by MJP 8 years, 11 months ago
Ok, so I have a question. If I wanted to render an object with a non-filtered environment map and take self-occlusion into account, I could render the object to a hemicube from a point on its surface, effectively giving me a mask that can be multiplied with the environment map. What should you do, however, if you are using a filtered irradiance environment map?

Same thing, but it's now no longer actually correct.

Another hack is to generate bent normals from the "mask" map -- find the average unoccluded direction, and get prefiltered lighting from that direction instead. Maybe also use the solid angle of the largest cone you can fit around that bent normal without hitting an occluder, compared to the hemisphere's total solid angle, as some kind of "ao" factor.
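A minimal sketch of that bent-normal idea in Python, assuming a caller-supplied `visible(d)` predicate over the +z hemisphere (all names here are illustrative, and the AO factor below is just the unoccluded fraction of the hemisphere rather than the largest-cone solid angle described above):

```python
import math
import random

def bent_normal_and_ao(visible, n_samples=4096, seed=0):
    """Estimate a bent normal and an AO factor from a visibility
    function over the upper (+z) hemisphere.

    visible(d) returns True if direction d (a unit 3-vector with
    d[2] >= 0) is unoccluded.
    """
    rng = random.Random(seed)
    acc = [0.0, 0.0, 0.0]
    hits = 0
    for _ in range(n_samples):
        # Uniform sampling over the hemisphere: cos(theta) is uniform.
        z = rng.random()
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        if visible(d):
            hits += 1
            acc = [a + di for a, di in zip(acc, d)]
    ao = hits / n_samples                      # unoccluded fraction
    norm = math.sqrt(sum(a * a for a in acc)) or 1.0
    bent = tuple(a / norm for a in acc)        # average unoccluded direction
    return bent, ao
```

With a fully open hemisphere the bent normal comes back as roughly (0, 0, 1) and the AO factor is 1; occluding one half of the hemisphere tilts the bent normal toward the open side.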

So basically you have two signals: incoming lighting (L) and visibility (V). What you'd really want to compute is "Integral(L * V * cos(thetaN))", but currently all you have is "Integral(L * cos(thetaN))" and "V". Probably the simplest thing to do is to also pre-integrate your visibility mask with the same cosine term, so that you end up with "Integral(L * cos(thetaN)) * Integral(V * cos(thetaN))". The second factor, once normalized by pi (the hemisphere's cosine integral), is essentially the definition of ambient occlusion. It's not correct, but it's a decent approximation to the first equation (which wasn't really "correct" in the first place, since it ignores bounce lighting off the occluding surfaces).
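A quick Monte Carlo sanity check of that split, sketched in Python with made-up L and V signals (both are just for illustration); the visibility integral is normalized by pi, the hemisphere's cosine integral, so the AO factor is 1 for a fully open hemisphere:

```python
import math
import random

def hemi_integral(f, n=100000, seed=1):
    """Monte Carlo estimate of Integral(f(d) dOmega) over the +z
    hemisphere, using uniform direction sampling (pdf = 1/(2*pi))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.random()
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        total += f(d)
    return total * 2.0 * math.pi / n

# Example signals (made up): lighting brighter toward +x,
# and the +x half of the hemisphere occluded.
L = lambda d: 1.0 + d[0]
V = lambda d: 1.0 if d[0] < 0.0 else 0.0
cos_n = lambda d: d[2]   # cos(thetaN) for a +z surface normal

# The integral we actually want:
exact = hemi_integral(lambda d: L(d) * V(d) * cos_n(d))

# The split approximation: pre-integrated lighting times an AO factor.
irr = hemi_integral(lambda d: L(d) * cos_n(d))
ao = hemi_integral(lambda d: V(d) * cos_n(d)) / math.pi
approx = irr * ao
```

For this particular L and V the analytic values are exact = pi/2 - 2/3 (about 0.90) versus approx = pi/2 (about 1.57), which shows concretely how the split form loses the correlation between lighting and visibility.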

Integral(L * cos(thetaN)) * Integral(V * cos(thetaN))

Unless I'm mistaken, though, that would not give the same result as Integral(L * V * cos(thetaN)). I should have kept reading, because you go on to say as much. I was hoping there would be a way to "correctly" modulate irradiance from an irradiance volume using something like an H-basis-encoded occlusion map.

It is possible to approximate the "proper" integral if you have both signals represented using a higher-order basis, such as spherical harmonics. With SH you normally work with "double products", where you're computing Integral(A * B). The double product is pretty simple, since it basically boils down to a dot product of the coefficients. However in your case we already have 2 terms without the visibility, which means that you need to evaluate an SH "triple product" if you want to add visibility into the mix. Triple products are doable, but more complicated and more expensive than a double product. There have been some papers that used triple products for the purpose of pre-computing light transport through a scene (PRT) and storing it as SH, so you may find some info on how to evaluate a triple product efficiently.
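The double-product identity is easy to demonstrate. A small Python sketch using real SH up to l = 1, with band-limited (linear) test functions so the projection is exact rather than sampled; the helper names are made up for the example:

```python
import math

def sh4(d):
    """Real spherical harmonics basis up to l = 1 (4 coefficients),
    evaluated at unit vector d = (x, y, z)."""
    x, y, z = d
    k1 = math.sqrt(3.0 / (4.0 * math.pi))
    return [
        0.5 * math.sqrt(1.0 / math.pi),  # Y_0^0
        k1 * y,                          # Y_1^-1
        k1 * z,                          # Y_1^0
        k1 * x,                          # Y_1^1
    ]

def project_linear(a, b):
    """Exact order-1 SH coefficients of f(d) = a + b * d_z, which is
    band-limited and therefore represented exactly by 4 coefficients."""
    return [a * 2.0 * math.sqrt(math.pi), 0.0,
            b * math.sqrt(4.0 * math.pi / 3.0), 0.0]

# Double product: Integral(f * g dOmega) = dot(coeffs_f, coeffs_g).
f = project_linear(1.0, 0.5)    # f(d) = 1 + 0.5 * z
g = project_linear(2.0, -1.0)   # g(d) = 2 - z
double_product = sum(cf * cg for cf, cg in zip(f, g))

# Analytic check over the full sphere:
# Integral((a1 + b1*z)(a2 + b2*z) dOmega) = 4*pi*a1*a2 + (4*pi/3)*b1*b2
```

The dot product of the coefficient vectors matches the analytic integral exactly here, because both functions are fully captured by order-1 SH; for general signals the identity holds for the band-limited projections.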

It's also possible to do this with spherical gaussians (SGs), which have an analytical form for a vector product. Basically you can take the vector product of L and V, which yields a new spherical gaussian lobe, and then convolve that with an SG approximation of the cosine term using a scalar product. For more info, look through the paper and presentation for "All-Frequency Rendering of Dynamic, Spatially-Varying Reflectance". In that paper they store visibility using signed distance fields, but it would also be possible to represent the occlusion directly as a set of SGs.
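For reference, the SG vector product has a simple closed form: the pointwise product of two lobes is itself an SG. A small Python sketch, using the common parameterization amp * exp(sharpness * (dot(axis, v) - 1)); the degenerate case of exactly opposed axes with equal sharpness is not handled:

```python
import math

# An SG is represented as (axis, sharpness, amplitude), axis a unit vector.

def sg_eval(sg, v):
    """Evaluate G(v) = amp * exp(sharpness * (dot(axis, v) - 1))."""
    axis, sharp, amp = sg
    return amp * math.exp(sharp * (sum(a * b for a, b in zip(axis, v)) - 1.0))

def sg_product(sg1, sg2):
    """Vector product of two SGs: their pointwise product is a new SG.
    This is the closed form you would use for L * V."""
    (ax1, l1, a1), (ax2, l2, a2) = sg1, sg2
    p = [l1 * u + l2 * w for u, w in zip(ax1, ax2)]
    lm = math.sqrt(sum(c * c for c in p))    # new sharpness
    axis = [c / lm for c in p]               # new axis
    amp = a1 * a2 * math.exp(lm - l1 - l2)   # new amplitude
    return (axis, lm, amp)

# Example: a "lighting" lobe around +z times a "visibility" lobe
# tilted toward +x (both made up for illustration).
L_sg = ([0.0, 0.0, 1.0], 8.0, 1.5)
V_sg = ([0.6, 0.0, 0.8], 4.0, 0.9)
prod = sg_product(L_sg, V_sg)
```

Evaluating the product lobe at any direction gives exactly the product of the two input lobes' values, which is what makes SGs convenient for folding visibility into the lighting before the cosine convolution.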

This topic is closed to new replies.
