Theory of Light Propagation Volumes in Detail


Ok, I'm working with Light Propagation Volumes at the moment, and there are some things in the theory I don't know how to implement correctly.
This is from http://blog.blackhc.net/wp-content/uploads/2010/07/lpv-annotations.pdf

Let's start with the reflective shadow map and the flux that should be stored in it.
I have a "directional" light source with an arbitrary light color, a total incoming flux, a diffuse material color, the width and height of the reflective shadow map texture, and an angle theta between the light direction and the normal.

My flux for a surfel of the map should then be:
fluxOut = diffuseMaterialColor * 1/6 * 1/(rsmWidth * rsmHeight) * totalFlux * cos(theta)

The injection of light is then: for every surfel of the reflective shadow map, take the flux and divide it by pi, and add up the values in the appropriate cell of the light volume. You then have an intensity function in each cell.
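A minimal sketch of that accumulation (C++, with hypothetical names, and a single scalar per cell instead of the 4 SH coefficients per color channel a real LPV stores):

```cpp
#include <vector>

struct LPV {
    int nx, ny, nz;
    std::vector<float> cellIntensity;  // one scalar per cell for brevity
    LPV(int x, int y, int z)
        : nx(x), ny(y), nz(z),
          cellIntensity(static_cast<size_t>(x) * y * z, 0.0f) {}
    float& at(int x, int y, int z) {
        return cellIntensity[(static_cast<size_t>(z) * ny + y) * nx + x];
    }
};

// Inject one RSM surfel: the flux stored in the RSM becomes an intensity
// contribution I = flux / pi, accumulated into the cell containing the surfel.
void injectSurfel(LPV& lpv, int cx, int cy, int cz, float surfelFlux) {
    const float kPi = 3.14159265f;
    lpv.at(cx, cy, cz) += surfelFlux / kPi;
}
```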

Ok, now we come to the occlusion injection, or in my case two of them: one from the reflective shadow map and one from the G-buffer.
In the paper above I believe only the injection from the reflective shadow map is described. The formula there is
surfelArea = 4.0 * distance * distance / (rsmWidth * rsmHeight), where distance is the distance from the light source to the object. That part is clear, since the area grows with the square of the distance, but where does the 4 come from?
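My best guess, writing it out: if the RSM were rendered with a 90 degree field of view, the frustum cross-section at distance d would have side length 2*d*tan(45°) = 2d, which gives

```latex
\text{surfelArea} \approx \frac{(2d)^2}{w_{\mathrm{rsm}}\,h_{\mathrm{rsm}}}
                = \frac{4\,d^2}{w_{\mathrm{rsm}}\,h_{\mathrm{rsm}}}
```

That would also fit the 90 degree assumption behind the 1/6 in the flux formula, but I am not sure it is the intended reading.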
And for the G-buffer injection I am also wondering what the right way to do it is. The difference is that one is an orthographic projection and the other (the G-buffer) a perspective one.

Propagation is clear to me:

I compute how much flux arrives through each face from the six neighbour cells, reproject this amount, and divide it by pi to get an intensity.
That is just the short version, but as I said, this is clear to me.
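For anyone reading along, a heavily simplified sketch of that gathering step (scalar intensity instead of SH coefficients, and a placeholder face solid angle; a real implementation reprojects through all five visible faces of each neighbour):

```cpp
#include <array>

// One propagation step for a single cell, reduced to scalar intensity.
// solidAngleFace is the solid angle under which a neighbour's intensity
// reaches us through the shared face; the value below is a placeholder.
float propagateToCell(const std::array<float, 6>& neighbourIntensity) {
    const float kPi = 3.14159265f;
    const float solidAngleFace = 0.4f;           // placeholder, see the papers
    float gathered = 0.0f;
    for (float intensity : neighbourIntensity) {
        float flux = intensity * solidAngleFace; // flux through the face
        gathered += flux / kPi;                  // reproject flux -> intensity
    }
    return gathered;
}
```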

Now I have the volume with the propagated light, and I want to use it to render the indirect light into the scene.

The question here is: I have intensity (I) functions in my volume, and what I need for rendering is the radiance (L).

The relation is L = I/A, so I either need the area of the fragment I am currently rendering, or I can use the intensity to compute the irradiance E: E = I/r². The LPV paper from Crytek says they use half the cell size as r, so I can come up with the irradiance, but what do I have to do then? I mean L = E/ω (with ω the solid angle, but what is the solid angle in that case)?

If someone has some explanations I would appreciate it, and if someone has questions regarding the described theory, feel free to ask.
Thanks



fluxOut = diffuseMaterialColor * 1/6 * 1/(rsmWidth * rsmHeight) * totalFlux * cos(theta)

-Why do you multiply by 1/6?

-Where is the light color?


Now I have the volume with the propagated light, and I want to use it to render the indirect light into the scene.
The question here is: I have intensity (I) functions in my volume, and what I need for rendering is the radiance (L).
The relation is L = I/A, so I either need the area of the fragment I am currently rendering, or I can use the intensity to compute the irradiance E: E = I/r². The LPV paper from Crytek says they use half the cell size as r, so I can come up with the irradiance, but what do I have to do then? I mean L = E/ω (with ω the solid angle, but what is the solid angle in that case)?

The functions in the volume are spherical harmonics that represent radiance (energy flowing through space), and what you need for rendering is irradiance (the incoming light at a point). To do that, you sample the spherical harmonics using the surface normal as the sampling vector together with a set of special coefficients, where these coefficients represent the hemisphere above the point being processed; in particular, they represent the cosine convolution of that hemisphere. The area of the pixel doesn't play an important role here and can be ignored.
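As a concrete illustration, a minimal C++ sketch of that sampling step (not the exact Crytek code; `SH4` and the function names are made up here, and SH sign/order conventions differ between implementations):

```cpp
#include <algorithm>

struct SH4 { float c[4]; };  // 2-band SH: 1 + 3 coefficients

// Clamped-cosine lobe around direction (nx, ny, nz), projected into 2-band SH.
// 0.886227f = sqrt(pi)/2, 1.023327f = sqrt(pi/3).
SH4 shCosineLobe(float nx, float ny, float nz) {
    return { { 0.886227f, -1.023327f * ny, 1.023327f * nz, -1.023327f * nx } };
}

// Irradiance estimate at a surface point: dot the cell's SH coefficients
// with the cosine lobe of the surface normal and clamp negative results.
// Depending on how the propagated intensity is stored, the normal may need
// to be negated (sampling the light arriving at the surface).
float evaluateIrradiance(const SH4& cell, float nx, float ny, float nz) {
    const SH4 lobe = shCosineLobe(nx, ny, nz);
    float e = 0.0f;
    for (int i = 0; i < 4; ++i) e += cell.c[i] * lobe.c[i];
    return std::max(e, 0.0f);
}
```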

BTW, please forgive my honesty, but I think you're paying too much attention to the theoretical details and forgetting that the whole technique is a huge hack, so mathematical correctness won't bring you the benefits you expect. There are several limitations that jeopardize mathematical accuracy, such as the low resolution of the LPV, the low resolution of the spherical harmonics (not enough bands), the limited propagation distance, the poor blocking of light, the limited number of bounces, and so on. In the end you'll probably need to disregard mathematical correctness and hand-tune the effect to achieve better practical results (just like Crytek itself did).

Hello,
thanks again for your advice.


-Why do you multiply by 1/6?

-Where is the light color?

Oh, I forgot the light color in the formula; of course it has to be in there.

According to Andreas Kirsch and his annotations paper http://blog.blackhc.net/wp-content/uploads/2010/07/lpv-annotations.pdf, the 1/6 comes from the approximated solid angle of one surfel of the texture viewed with a 90 degree field of view. This gives us for the solid angle approximately:

4*Pi/6 * 1/(rsmWidth * rsmHeight). The 4*Pi is then canceled, because the equation up to that point is:
outgoing flux of an RSM surfel = diffuseMaterialColor * deltaOmega/(4*Pi) * totalFlux * cos(theta), with deltaOmega being the solid angle described above.
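Putting that together, and with the light color included this time, my reading is:

```latex
\Delta\Omega \approx \frac{4\pi}{6}\cdot\frac{1}{w_{\mathrm{rsm}}\,h_{\mathrm{rsm}}},
\qquad
\Phi_{\mathrm{out}}
= c_{\mathrm{diffuse}}\,c_{\mathrm{light}}\,\frac{\Delta\Omega}{4\pi}\,\Phi_{\mathrm{total}}\cos\theta
= c_{\mathrm{diffuse}}\,c_{\mathrm{light}}\,\frac{\Phi_{\mathrm{total}}\cos\theta}{6\,w_{\mathrm{rsm}}\,h_{\mathrm{rsm}}}
```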

I have seen other formulas for the RSM flux computation as well, but this one was well documented and makes sense to me; if there are other recommendations, I would love to hear them.


The functions in the volume are spherical harmonics that represent radiance (energy flowing through space), and what you need for rendering is irradiance (the incoming light at a point). To do that, you sample the spherical harmonics using the surface normal as the sampling vector together with a set of special coefficients, where these coefficients represent the hemisphere above the point being processed; in particular, they represent the cosine convolution of that hemisphere. The area of the pixel doesn't play an important role here and can be ignored.

Oh ok, that sounds like the way I do it at the moment: create the spherical basis functions in the direction of the negative normal and take a dot product with the coefficients stored in the corresponding volume cell. The reason I was thinking about changing this is that the Crytek paper points out to use half the cell size to convert intensity to incident radiance: "However, since we store intensity we need to convert it into incident radiance and due to spatial discretization we assume that the distance between the cell’s center (where the intensity is assumed to be), and the surface to be lit is half the grid size s." (Cascaded Light Propagation Volumes for Real-Time Indirect Illumination by Kaplanyan and Dachsbacher; with grid size they mean cell size.) And in a way I would like to use the cell size somewhere in the process, because it feels wrong not to, but as I said, I think I already do it the way you recommend.
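For reference, spelled out with my E = I/r² from above and r = s/2, that conversion would just be:

```latex
E = \frac{I}{r^2} = \frac{I}{(s/2)^2} = \frac{4\,I}{s^2}
```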


BTW, please forgive my honesty, but I think you're paying too much attention to the theoretical details and forgetting that the whole technique is a huge hack, so mathematical correctness won't bring you the benefits you expect.

You are welcome, and I really appreciate your hints. The reason I am thinking about the theory is that I want to understand it completely; whether I implement it exactly that way is another question. I already have some factors I can tune at runtime, and you are right that they can make the scene look better.

Do you have any thoughts about the G-buffer occlusion injection? At the moment I do it with the squared distance to the camera and a factor I can set myself at runtime.
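If the frustum argument from above carries over, my guess is that the hand-set factor for a perspective G-buffer with an arbitrary field of view would be, for a texel at distance d from the camera:

```latex
\text{surfelArea} \approx
\frac{4\,d^2 \tan\!\left(\tfrac{\mathrm{fov}_x}{2}\right)\tan\!\left(\tfrac{\mathrm{fov}_y}{2}\right)}
     {w_{\mathrm{gbuf}}\,h_{\mathrm{gbuf}}}
```

which reduces to the RSM formula for a 90 degree field of view. That is just my own reasoning, though.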

Thanks a lot

