maya18222

What is the difference between Final Gather and Irradiance caching?


Final gather seems to be explained as performing an initial pass that computes indirect lighting at the eye ray's first bounce by sampling incident light in a number of random directions over the hemisphere above that point. The process isn't performed for every pixel, but rather more frequently in places where the geometry is less flat. In the second pass, ray tracing is performed as usual, interpolating between the final gather points for the indirect illumination.
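A minimal sketch of that gather step (function names are made up for illustration; uniform hemisphere sampling, with the cosine term of the irradiance integral kept in the Monte Carlo sum):

```python
import math
import random

def sample_hemisphere():
    # Uniformly sample a direction on the hemisphere above the z axis.
    u1, u2 = random.random(), random.random()
    r = math.sqrt(max(0.0, 1.0 - u1 * u1))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), u1)   # z = cos(theta)

def gather_irradiance(incident_radiance, n_rays=256):
    # Monte Carlo irradiance estimate at a point with normal (0, 0, 1):
    #   E ~= (2*pi / N) * sum L(w_i) * cos(theta_i)
    # (uniform hemisphere pdf = 1 / (2*pi); cos comes from the integrand).
    total = 0.0
    for _ in range(n_rays):
        d = sample_hemisphere()
        total += incident_radiance(d) * d[2]
    return (2.0 * math.pi / n_rays) * total
```

With constant incident radiance L = 1 the estimate converges to pi, the analytic irradiance for that case.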

Irradiance caching sounds very similar. What's the difference?

(and if anyone has a good description of Importons and Irradiance particles, that would help too)

I think you've generally got the final gathering idea down.

Irradiance volumes/particles are pretty much the same thing, but they're more of an "area" effect than a per-pixel one. There are a few variations of the idea. Essentially you cover an area in a grid of "spheres". In every direction you sample light (from a pre-calculated "real" lighting model) and save that value onto the sphere. When you're actually rendering you just grab the nearest sphere and interpolate the stored colors appropriate to the pixel's direction. This also allows dynamically moving objects to use them. [url="http://docs.google.com/viewer?a=v&q=cache:0h7YyH7B0KIJ:citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.7.9873%26rep%3Drep1%26type%3Dpdf+irradiance+volume&hl=en&gl=us&pid=bl&srcid=ADGEESjwVm6DjeSKjcjjExJVWQWJESRi32ryPFHY-1iNQftvIpNGO3UGuvi_MxtDqdm8-e9CE6b48vb_bDtbhJWhVeE1qo3DZwqPhbrJll0hpOJRBxyaqJwsHR7vJQoNljZI6lWgHr1P&sig=AHIEtbSmFhc8jfhFQZy-zy7fgyflFGro1g"]Here's The Paper[/url]
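A toy sketch of that lookup, assuming an "ambient cube" style probe (six values per probe, one per principal axis direction); the storage layout and names are invented for illustration:

```python
def nearest_probe(probes, position):
    # probes: dict mapping grid position (x, y, z) -> six stored values
    # [+x, -x, +y, -y, +z, -z]; pick the probe closest to the shaded point.
    return min(probes, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, position)))

def sample_probe(axis_values, direction):
    # Blend the six stored values by how strongly the (unit) lookup
    # direction points along each axis; the d**2 weights sum to 1.
    result = 0.0
    for i, d in enumerate(direction):
        idx = 2 * i + (0 if d >= 0 else 1)
        result += axis_values[idx] * d * d
    return result
```

A dynamic object just calls `nearest_probe` at its position each frame, which is why moving objects can use a pre-baked volume.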

There are other styles of that technique that allow you to dynamically change the lighting instead of pre-computing the lighting with a real system and then sampling. I know the Battlefield 3 engine is doing it. >_<

Hi. Irradiance caching calculates irradiance at a sparse set of world-space points and interpolates between them during rendering. New irradiance cache points are created on the fly (by calculating irradiance through shooting many paths over the hemisphere) when the error from the interpolation becomes too great. The initial irradiance cache points are often created from every pixel of a downsampled image. This only works in diffuse scenes. Radiance caching is very good and more general, but harder to implement.
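The interpolation and its error test can be sketched with a Ward-style weight, 1 / (d/R + sqrt(1 - n·n')); the record layout and function names here are hypothetical:

```python
import math

def cache_weight(point, normal, rec):
    # Ward-style weight for cached record rec = (pos, normal, irradiance, R),
    # where R is the harmonic mean distance to surfaces seen from the record.
    pos, n, _, r = rec
    dist = math.dist(point, pos)
    ndot = max(0.0, min(1.0, sum(a * b for a, b in zip(normal, n))))
    denom = dist / r + math.sqrt(1.0 - ndot)
    return 1e9 if denom == 0.0 else 1.0 / denom

def lookup(cache, point, normal, tolerance=0.1):
    # Weighted average of records whose error is acceptable. Returning None
    # is the "error too great" case: the renderer must then create a new
    # record here by actually sampling the hemisphere.
    wsum, esum = 0.0, 0.0
    for rec in cache:
        w = cache_weight(point, normal, rec)
        if w > 1.0 / tolerance:
            wsum += w
            esum += w * rec[2]
    return esum / wsum if wsum > 0.0 else None
```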

Final gather happens at every pixel. It effectively replaces low frequency error from methods such as radiosity or photon mapping with high frequency noise from the Monte Carlo evaluation of radiance at a point visible through the pixel, which is more visually pleasing. By the way, the error is still there, but is masked by the high frequency noise. This noise vanishes as the number of final gather rays tends to infinity, but in practice a few hundred will generally do the trick.
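That "noise vanishes as the ray count grows" behaviour is easy to see with a toy estimator (constant unit radiance, uniform hemisphere sampling, in which case cos(theta) is uniform in [0, 1]); names are made up for illustration:

```python
import math
import random

def fg_estimate(n_rays):
    # Final gather estimate under constant unit radiance: 2*pi * mean(cos),
    # where cos(theta) is uniform in [0, 1] for uniform hemisphere sampling.
    # The exact answer is pi.
    return 2.0 * math.pi * sum(random.random() for _ in range(n_rays)) / n_rays

def noise(n_rays, trials=400):
    # Standard deviation of the estimator over repeated gathers.
    xs = [fg_estimate(n_rays) for _ in range(trials)]
    mean = sum(xs) / trials
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / trials)
```

Quadrupling the ray count roughly halves the noise, the usual Monte Carlo 1/sqrt(N) behaviour, which is why a few hundred rays is usually enough.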

So the difference is that irradiance caching is NOT every pixel, otherwise its advantages would vanish, but final gather generally is.

Importons are shot from the camera, and can either be used to calculate lighting (they transport importance, which is the adjoint of radiance) or to accelerate photon mapping (although this only really works in certain scenes I have found).

To be honest, I have never heard the term "Irradiance particle". Do you mean photons (in the graphics sense)?

Final gathering is usually used in conjunction with another illumination technique. It's where, for every 2D point in the final image, you perform high-quality ray-tracing to determine the illumination for that point.

An example use might be:
1) Perform approximate global illumination for the entire scene (including faces the camera cannot see) using path tracing, etc
2) Store the results in a cache
3) Find every point that the camera can see, trace gorillions of rays out from each point, and gather results from the cache.

Parts 1+2 are your quick/approximate GI solution that operates on the whole scene, while part 3 is your high-quality final gather step, which only operates on what the camera can see.
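Those steps can be reduced to a toy data flow (everything here is invented for illustration: a fake coarse GI pass, and a gather step that just averages cache hits):

```python
import random

def bake_rough_gi(n_cells=16):
    # Steps 1+2: pretend a coarse GI pass left one noisy radiance value
    # per scene cell (here: values scattered around 1.0).
    return [1.0 + random.uniform(-0.3, 0.3) for _ in range(n_cells)]

def final_gather_pixel(cache, n_rays=256):
    # Step 3: each gather ray from the visible point lands in some cell of
    # the coarse solution; averaging many rays smooths the cached noise.
    hits = (cache[random.randrange(len(cache))] for _ in range(n_rays))
    return sum(hits) / n_rays
```

The point of the split is visible even in the toy: the per-pixel gather averages out the low-quality cache, so the expensive work is only spent on surfaces the camera sees.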

It seems to be missing from the English wikipedia, but you can google-translate the German wikipedia for an explanation here: [url="http://de.wikipedia.org/wiki/Final_Gathering"]http://de.wikipedia....Final_Gathering[/url]

So with final gather, when you perform the first pass of computing indirect illumination over the hemisphere for some number of points, are these the same points as the ones produced by the camera's pixel rays' first bounce?

The 3ds Max renderer (Mental Ray) lets you set the point density for this step, allowing you to have a lower ratio of final gather points to pixels, and then interpolating. In the second pass, where you trace a ray for every pixel, compute that ray's intersection, and then sample surrounding final gather points, would you then have to check the path between the intersection and the final gather points for obstructing geometry?

If you were to combine photon tracing and final gather like Mental Ray can do, would this then be 3 passes?
- Shoot photons and store intersections in a map
- Compute the FG map using some point density, shooting rays into the hemisphere around each FG point, and computing indirect illumination from the photon-map points the rays intersect
- Perform normal ray tracing, picking up indirect illumination from the FG map
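That three-pass structure can be sketched as a toy on a unit-square "floor" (the scene and all names are invented; a real gather shoots hemisphere rays that land on photon-map surfaces, whereas here the FG points read the photon map directly):

```python
import math
import random

def pass1_photon_map(n=2000):
    # Pass 1: photons land on a unit-square floor; store (x, y, power).
    return [(random.random(), random.random(), 1.0 / n) for _ in range(n)]

def density_estimate(photons, x, y, r=0.15):
    # Photon-map radiance estimate: power inside a disc / disc area.
    p = sum(pw for (px, py, pw) in photons
            if (px - x) ** 2 + (py - y) ** 2 < r * r)
    return p / (math.pi * r * r)

def pass2_fg_points(photons, spacing=0.25):
    # Pass 2: a sparse grid of final gather points; each reads the photon
    # map (standing in for hemisphere rays that would hit lit surfaces).
    return {(i * spacing, j * spacing):
            density_estimate(photons, i * spacing, j * spacing)
            for i in range(5) for j in range(5)}

def pass3_shade(fg_points, x, y):
    # Pass 3: a camera ray hits (x, y); use the nearest FG point (a real
    # renderer would blend several, and your occlusion question is exactly
    # about whether this interpolation needs a visibility check).
    key = min(fg_points, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
    return fg_points[key]
```

With uniformly scattered photons of total power 1, the density estimate near the middle of the square comes out near 1, so the shaded value does too.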
