What on earth is Far Cry 3's Deferred Radiance Transfer?

Started by
11 comments, last by TheLastOfUs 10 years, 5 months ago

Can someone be kind enough to explain their GDC slides to me:

http://fileadmin.cs.lth.se/cs/Education/EDAN35/lectures/L10b-Nikolay_DRTV.pdf

I just need a nice detailed description of what the system essentially is with their "probes"

From what I gathered, they use spherical harmonics to deduce irradiance in a scene, and dynamic objects sample nearby probes - using a sort of voxel cone tracing on the camera that calculates an X+1 and Y-1 shift so that they don't have to render every probe outside the player's view?

Please note, I am not a game programmer... nor do I want to be one. I am just a consumer trying to gain a better understanding so that I can keep up with Current Gen vs Next Gen, etc.

A probe is a view of the scene from some specific point of view. Probes have been extremely common in film for a long time, under the name IBL (image-based lighting).

You can "compress" a probe down to SH data, which makes it smaller, but blurrier.

The innovation in FC3 is solving the problem of figuring out which probes to use when shading each pixel. The probes are arbitrarily scattered around the levels, so this is a search problem.

FC3 chose to compress the probes and copy them into the cells of a regular 3D grid covering the view. Instead of searching for the nearest probe, you can find it instantly, at a constant cost, by reading the grid cell that you're in -- this allows them to use any number of probes that they like; millions across the island if need be.
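To make the "constant cost" point concrete, here's a minimal sketch of that grid lookup. The grid resolution, world bounds, and probe names are made up for illustration; a real engine would store the SH data in a flat 3D texture, not a dict.

```python
def world_to_cell(pos, grid_origin, cell_size):
    """Map a world-space position to integer grid-cell coordinates."""
    return tuple(int((p - o) // cell_size) for p, o in zip(pos, grid_origin))

# Pretend the baked probe data already lives in this dict, keyed by cell.
probe_grid = {
    (0, 0, 0): "probe_A",
    (1, 0, 0): "probe_B",
}

# One divide per axis and one read -- no search, regardless of probe count.
cell = world_to_cell((12.5, 3.0, 4.0), grid_origin=(0.0, 0.0, 0.0), cell_size=10.0)
print(cell)                  # (1, 0, 0)
print(probe_grid.get(cell))  # probe_B
```

The cost of the lookup is the same whether the level has ten probes or millions, which is exactly why the grid approach scales.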

This lets them add diffuse IBL extremely cheaply (the probes are pre-generated and stored on disc) to every pixel.
The downsides are that they don't support dynamic geometry (due to the pre-generation), they're low-res/blurry due to using SH (this means you see diffuse bounced light, but no sharp/glossy reflections), and that on 360/PS3 there was not enough memory to store much probe data, so dynamic lights are ignored by the GI system.
I.e. on current consoles, their system allows the sun and sky to reflect off objects and cause "bounce lighting", but the PC also supports "bounce lighting" caused by other light sources as well.

Hi,

Thank you SOOO much for your reply. Someone who really knows what he/she is talking about.

If I may ask - dynamic lights meaning gunfire and flashlights I suppose?

Also... could you please decipher the following from the Black Flag team for me:

"LP: Our Global Illumination is based on previous work that was done internally at Ubisoft (deferred radiance transfer volumes), but we improved it greatly. Using the navmesh, we automatically populate our world with thousands of probes. For each probe, we then compute the irradiance for 8 different times of day. Those computations are done on the build machine GPU, so they are really fast: we can compute thousands of probes per minute. At runtime, on the player machine, we then interpolate these data to get the proper bounce lighting for a given time of day, world position and weather. This bounce sun lighting is then combined with ambient occlusion and sky lighting to achieve a full indirect lighting and a Global Illumination solution. This system works on both current gen and next gen."
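The runtime interpolation the quote describes can be sketched very simply: 8 baked irradiance values per probe (one every 3 hours), blended by the current clock time. The keyframe spacing and the sample values below are invented for illustration; the real data would be SH coefficients per color channel, not single floats.

```python
def sample_time_of_day(keyframes, hour):
    """keyframes: 8 baked irradiance values, one every 3 hours starting at 0:00.
    Linearly blend the two keyframes bracketing `hour` (wrapping at midnight)."""
    slot = (hour / 3.0) % 8      # fractional keyframe index
    i = int(slot)
    t = slot - i                 # blend weight between keyframe i and i+1
    a, b = keyframes[i], keyframes[(i + 1) % 8]
    return a + (b - a) * t

baked = [0.0, 0.1, 0.5, 1.0, 1.2, 1.0, 0.4, 0.05]  # made-up sun-bounce strength
print(sample_time_of_day(baked, 4.5))  # halfway between the 3:00 and 6:00 bakes, ~0.3
```

Weather would add another blend axis on top of this (e.g. clear vs. overcast sets of keyframes), but the principle is the same: everything expensive is precomputed, and the runtime only interpolates.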

So as I understand it... probes essentially function to hold lighting information in the form of cubemaps, which are essentially textures - or are they actually equations which take light inputs from other light sources and essentially "bounce" them like a transparent, reflective object?

probes essentially function to hold lighting information in the form of cubemaps which are essentially textures

Pretty much correct, though instead of a cubemap they use spherical harmonics (fancy maths) to turn those textures into an approximation that uses a lot less memory. As for what they did for Black Flag, it sounds like they just made it even less dynamic: figuring out what each probe looks like for a given time of day/weather, storing ALL of that on disc, and just blending between whatever probes were correct for the game's current weather/time of day.
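To give a feel for the "cubemap to SH" compression mentioned above, here's a sketch that projects a set of radiance samples onto the first two SH bands (4 coefficients instead of thousands of cubemap texels). The sample directions and values are made up, and a real bake does this per color channel.

```python
import math

def project_sh_l1(samples):
    """samples: list of (direction, value) pairs; directions must be unit length
    and roughly uniform over the sphere. Returns [L00, L1m1, L10, L11]."""
    c = [0.0, 0.0, 0.0, 0.0]
    Y00 = 0.2820947918           # SH basis constant: 1 / (2*sqrt(pi))
    Y1 = 0.4886025119            # SH basis constant: sqrt(3 / (4*pi))
    weight = 4.0 * math.pi / len(samples)   # Monte Carlo weight for uniform dirs
    for (x, y, z), v in samples:
        c[0] += v * Y00 * weight            # band 0: overall (ambient) term
        c[1] += v * Y1 * y * weight         # band 1: directional terms
        c[2] += v * Y1 * z * weight
        c[3] += v * Y1 * x * weight
    return c

# Six axis-aligned samples of a uniformly lit environment:
dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
c = project_sh_l1([(d, 1.0) for d in dirs])
# Uniform light -> only the band-0 term survives; directional terms cancel.
```

This is why SH probes look blurry: four (or nine) numbers can only describe very smooth variation over directions, which is fine for diffuse bounce light but hopeless for sharp reflections.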

Wow - thank you both for the depth of information.

So the way the Black Flag people did it is they calculated for weather...baked it for that, and then blended?

FC3 chose to compress the probes and copy them into the cells of a regular 3D grid covering the view. Instead of searching for the nearest probe, you can find it instantly, at a constant cost, by reading the grid cell that you're in -- this allows them to use any number of probes that they like; millions across the island if need be.

So I am so glad you mentioned this - this means they made a grid that constantly evaluates probes along the viewer's camera? So it is not dependent on probes being constantly used...but rather in the direction the dynamic character faces?

Considering Far Cry 3 is a 1st person game - how does it work when say a 3rd person character walks into a set?


It's not based on direction, but rather position. One or more nearby probes are read and blended together, depending on the XYZ of the object being rendered.
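A tiny sketch of that position-based blend, on one axis: take the two probes bracketing the object and lerp their coefficients by where the object sits between them. A full implementation does this trilinearly across the 8 surrounding cells; the function and probe values here are invented for illustration.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def blend_probes(probe_lo, probe_hi, x, x_lo, x_hi):
    """Blend two probes' SH coefficient lists by the object's position x
    between the probes at x_lo and x_hi."""
    t = (x - x_lo) / (x_hi - x_lo)
    return [lerp(a, b, t) for a, b in zip(probe_lo, probe_hi)]

# An object exactly halfway between two probes gets the average of both:
print(blend_probes([1.0, 0.0], [3.0, 2.0], x=5.0, x_lo=0.0, x_hi=10.0))  # [2.0, 1.0]
```

Note that the camera's facing direction never appears anywhere in this: only the rendered object's XYZ position matters.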

I see - still have 2 questions:

What happens when a 3rd person view of a character walks into the scene with probes?

and also @ Frentic - so the way the Black Flag people did it is they calculated for weather...baked it for that, and then blended? So essentially everything for theirs is precomputed and nothing is actually very dynamic?

FC3 chose to compress the probes and copy them into the cells of a regular 3D grid covering the view. Instead of searching for the nearest probe, you can find it instantly, at a constant cost, by reading the grid cell that you're in -- this allows them to use any number of probes that they like; millions across the island if need be.


So I am so glad you mentioned this - this means they made a grid that constantly evaluates probes along the viewer's camera? So it is not dependent on probes being constantly used...but rather in the direction the dynamic character faces?

Considering Far Cry 3 is a 1st person game - how does it work when say a 3rd person character walks into a set?
The probes are all static / pre-generated. The grid is quite large and is *centered* around the camera. That means that when rotating the camera, the grid still covers your view.
It's only when the camera moves that the grid needs to be updated. If you move in the +X direction, then the columns of cells right at the -X edge are discarded and a new face full of columns is appended at the +X end of the cube. When appending these new cells, the costly "search" algorithm is run, which locates the closest probes to each new cell and merges them in order to generate the SH values to store in that cell.

Can anyone explain Black Flag's GI in more detail?

This topic is closed to new replies.
