DirectX SDK PRT tools, light probes

Ubisoft has a very good overview of what they're doing for Far Cry 3 here: http://www.gdcvault....-Volumes-Global

They also use a neat trick with precomputed radiance transfer, but the basis of it is a low-cost spherical harmonic probe grid.

I am sorry to bring this topic back, but I need some help.

If you're just starting out with light probes, then you can implement them like this:

1. Pick your probe locations throughout the scene. The easiest way is a 3D grid.
2. For each probe location, render a cubemap by rendering in all 6 directions.
3. Convert the cubemap to SH (you can use the D3DX utility functions for this if you'd like, but it's not too hard to do on your own).

2. Render what, exactly?

3. Do you mean the D3DXSHProjectCubeMap function?


Then at runtime you just look up and interpolate the probes...

Is this for a "forward" renderer? Do you pick some arbitrary number of the closest probes and interpolate those?

If I use deferred rendering, should I render the probes as "volumes" (like my other lights) in screen space and add them additively?


...and look up the irradiance in the direction of the normal by performing an SH dot product (just make sure that you include the cosine kernel). This will give you indirect lighting, and you can add in direct lighting on top of this.

How exactly do I use those SH coefficients?

How do I look up the probe cubemap, and what do I use for texcoords?

How is this all combined?

Thank you for your time and patience.

For the cubemap, you want to render exactly what you normally render to the screen: your scene lit by direct light sources.

D3DXSHProjectCubeMap is indeed the function that I was referring to.
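Here's a rough sketch of what baking one probe might look like in D3D9. MakeCubeFaceViewMatrix and RenderScene are stand-ins for whatever camera setup and scene rendering you already have, and the face size, format, and near/far planes are just placeholder values:

```cpp
#include <d3dx9.h>

// Assumed helpers (not D3DX functions): you will already have equivalents.
D3DXMATRIX MakeCubeFaceViewMatrix(UINT face, const D3DXVECTOR3& eye);
void RenderScene(IDirect3DDevice9* device,
                 const D3DXMATRIX& view, const D3DXMATRIX& proj);

void BakeProbe(IDirect3DDevice9* device, const D3DXVECTOR3& probePos,
               float shRed[9], float shGreen[9], float shBlue[9])
{
    const UINT faceSize = 64; // small faces are plenty for low-frequency lighting
    IDirect3DCubeTexture9* cube = NULL;
    device->CreateCubeTexture(faceSize, 1, D3DUSAGE_RENDERTARGET,
                              D3DFMT_A16B16G16R16F, D3DPOOL_DEFAULT, &cube, NULL);

    // A 90-degree FOV with a 1:1 aspect ratio covers each face exactly.
    D3DXMATRIX proj;
    D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI * 0.5f, 1.0f, 0.1f, 1000.0f);

    for (UINT face = 0; face < 6; ++face)
    {
        IDirect3DSurface9* surf = NULL;
        cube->GetCubeMapSurface((D3DCUBEMAP_FACES)face, 0, &surf);
        device->SetRenderTarget(0, surf); // matching depth-stencil assumed bound
        device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);

        // Look down this face's axis from the probe position and draw the
        // scene with your normal direct-lighting shaders.
        D3DXMATRIX view = MakeCubeFaceViewMatrix(face, probePos);
        RenderScene(device, view, proj);

        surf->Release();
    }

    // Note: D3DXSHProjectCubeMap locks the texture, so in practice you would
    // copy the render target into a lockable system-memory cube texture first.
    D3DXSHProjectCubeMap(3, cube, shRed, shGreen, shBlue); // order 3 = 9 coeffs
    cube->Release();
}
```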

It really doesn't matter whether you use forward rendering or deferred rendering, you can use light probes in either setup. For forward rendering, you need to sample the probes in the vertex or pixel shader and add the probe contribution to the lighting that you compute for direct light sources. For deferred rendering you can do it exactly the same way if you wish, by doing the same thing during the G-Buffer pass. Or alternatively you can add in the probes in a deferred pass, as described in this thread. There are advantages and disadvantages to both approaches.
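For the forward path, feeding a probe to a shader can be as simple as packing the 9 RGB coefficients into pixel shader constants. The start register here (c16) is an arbitrary choice for illustration, not anything D3DX mandates:

```cpp
#include <d3dx9.h>

// Sketch: pack 9 RGB SH coefficients into 9 float4 pixel shader constants
// (rgb = coefficient, w unused) so the pixel shader can evaluate them.
void SetProbeConstants(IDirect3DDevice9* device, const float sh[9][3])
{
    D3DXVECTOR4 c[9];
    for (int i = 0; i < 9; ++i)
        c[i] = D3DXVECTOR4(sh[i][0], sh[i][1], sh[i][2], 0.0f);
    device->SetPixelShaderConstantF(16, (const float*)c, 9);
}
```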

Interpolating your probes depends on how you organize them. If the probes are in a regular grid, then you can do linear interpolation by grabbing the 8 neighboring samples and blending between them just like a GPU would when sampling a volume texture (in fact, you can even store the probes in a volume texture and let the GPU do the interpolation for you). If your probes aren't organized into any structure (just placed at arbitrary points), then it gets a bit more complicated. The blog post that was linked in the OP actually has a good overview of different approaches.
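A minimal CPU-side sketch of the grid case, assuming the probes sit in a flat array indexed [x + y*nx + z*nx*ny] with a uniform spacing (the layout details are assumptions, not requirements):

```cpp
#include <algorithm> // std::min, std::max

struct Probe { float sh[9][3]; }; // 9 SH coefficients, RGB each

void InterpolateProbes(const Probe* grid, int nx, int ny, int nz,
                       const float origin[3], float spacing,
                       const float pos[3], float outSH[9][3])
{
    // Convert the world position into grid space and find the lower corner
    // of the cell it falls in, plus the fractional blend weights.
    int   i0[3];
    float t[3];
    const int dims[3] = { nx, ny, nz };
    for (int a = 0; a < 3; ++a)
    {
        float g = (pos[a] - origin[a]) / spacing;
        i0[a] = std::max(0, std::min((int)g, dims[a] - 2));
        t[a]  = std::max(0.0f, std::min(g - (float)i0[a], 1.0f));
    }

    for (int c = 0; c < 9; ++c)
        for (int ch = 0; ch < 3; ++ch)
            outSH[c][ch] = 0.0f;

    // Accumulate the 8 surrounding probes with trilinear weights, exactly
    // like the GPU does when sampling a volume texture.
    for (int dz = 0; dz < 2; ++dz)
    for (int dy = 0; dy < 2; ++dy)
    for (int dx = 0; dx < 2; ++dx)
    {
        float w = (dx ? t[0] : 1.0f - t[0])
                * (dy ? t[1] : 1.0f - t[1])
                * (dz ? t[2] : 1.0f - t[2]);
        const Probe& p = grid[(i0[0] + dx) + (i0[1] + dy) * nx
                            + (i0[2] + dz) * nx * ny];
        for (int c = 0; c < 9; ++c)
            for (int ch = 0; ch < 3; ++ch)
                outSH[c][ch] += w * p.sh[c][ch];
    }
}
```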

Your SH coefficients will be a set of RGB values that you store in an array. Typically you'll work with 3rd-order SH, which gives you 9 coefficients per channel for a total of 27 floating-point values. These coefficients give you a very low-frequency version of the lighting environment at the probe location, which means they can tell you a very blurry version of the lighting in any particular direction. To compute the irradiance for a surface with a given normal direction, you construct a cosine lobe centered on the normal direction and integrate it against the lighting environment. This is really efficient to do in SH: it amounts to computing 9 coefficients with a bit of math and then performing a dot product with the lighting environment coefficients. Ravi Ramamoorthi's paper from 2001 covers all of the details, and I would suggest at least attempting to read through it a few times to become familiar with the process.
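In code, the dot product looks something like this. It's a sketch of the evaluation from that paper, assuming the standard real SH basis ordering and coefficients that came from something like D3DXSHProjectCubeMap:

```cpp
// Evaluate irradiance from 3rd-order SH: scale each band by the cosine-kernel
// factors (pi, 2pi/3, pi/4), evaluate the 9 basis functions in the normal
// direction, and take the dot product per color channel.
void EvalIrradiance(const float sh[9][3], const float n[3], float outRGB[3])
{
    const float x = n[0], y = n[1], z = n[2]; // n must be a unit vector

    // 3rd-order real SH basis evaluated at the normal direction.
    const float basis[9] = {
        0.282095f,                          // Y(0, 0)
        0.488603f * y,                      // Y(1,-1)
        0.488603f * z,                      // Y(1, 0)
        0.488603f * x,                      // Y(1, 1)
        1.092548f * x * y,                  // Y(2,-2)
        1.092548f * y * z,                  // Y(2,-1)
        0.315392f * (3.0f * z * z - 1.0f),  // Y(2, 0)
        1.092548f * x * z,                  // Y(2, 1)
        0.546274f * (x * x - y * y)         // Y(2, 2)
    };

    // Per-band cosine kernel: pi for band 0, 2pi/3 for band 1, pi/4 for band 2.
    const float A[9] = { 3.141593f,
                         2.094395f, 2.094395f, 2.094395f,
                         0.785398f, 0.785398f, 0.785398f, 0.785398f, 0.785398f };

    for (int ch = 0; ch < 3; ++ch)
    {
        outRGB[ch] = 0.0f;
        for (int i = 0; i < 9; ++i)
            outRGB[ch] += A[i] * sh[i][ch] * basis[i];
    }
}
```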

