
# MightTower2

Member Since 09 Jun 2013
Offline Last Active Oct 23 2013 05:00 AM

### In Topic: Theory of Light Propagation Volumes in Detail

04 October 2013 - 08:17 AM

Hello,

-Why do you multiply by 1/6?

-Where is the light color?

Oh, I forgot the lightColor in the formula; of course it has to be in there.

According to Andreas Kirsch and his annotations paper http://blog.blackhc.net/wp-content/uploads/2010/07/lpv-annotations.pdf, the 1/6 comes from the approximated solid angle of one surfel of the texture viewed with a 90 degree field of view. This gives us for the solid angle approximately:

rho = 4 Pi/6 * 1/(rsmWidth*rsmHeight)

The 4 Pi then cancels, because the equation up to that point is:

outgoing flux of an RSM surfel = diffuse material color * rho/(4 Pi) * total flux * cos(Theta),

with rho being the solid angle described before.

I saw other formulas for the RSM flux computation as well, but this one was well documented and makes sense to me. If there are other recommendations I would love to hear them.
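That cancellation can be checked numerically. The sketch below is illustrative only: the function and parameter names (`surfelSolidAngle`, `surfelFlux`, `rsmWidth`, `totalFlux`) are my own, not from the Kirsch annotations or any real implementation.

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Approximate solid angle rho of one RSM texel seen through a 90-degree FOV:
// rho = (4*Pi/6) / (rsmWidth * rsmHeight)
double surfelSolidAngle(int rsmWidth, int rsmHeight) {
    return (4.0 * PI / 6.0) / (double(rsmWidth) * double(rsmHeight));
}

// Outgoing flux of one RSM surfel (scalar stand-in for the color channels):
// flux = diffuse * rho/(4*Pi) * totalFlux * cos(theta)
// The 4*Pi cancels, leaving the 1/6 factor asked about above.
double surfelFlux(double diffuse, int rsmWidth, int rsmHeight,
                  double totalFlux, double cosTheta) {
    return diffuse * (surfelSolidAngle(rsmWidth, rsmHeight) / (4.0 * PI))
                   * totalFlux * cosTheta;
}
```

For a 1x1 map the prefactor reduces to exactly 1/6, so `surfelFlux(1.0, 1, 1, 6.0, 1.0)` comes out to 1.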

The functions in the volume are spherical harmonics that represent radiance (energy flowing through space), and what you need for rendering is irradiance (incoming radiance at a point). To get that, you sample the spherical harmonics using the surface normal as the sampling vector and a set of special coefficients, where these coefficients represent the hemisphere above the point being processed; in particular, they represent the cosine convolution of that hemisphere. The area of the pixel doesn't play an important role here and can be ignored.
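A minimal band-0/1 sketch of that cosine-lobe evaluation is below. The struct and function names (`SH4`, `shEvaluate`, `shCosineLobe`, `irradiance`) are hypothetical; the constants are the standard SH basis values (Y00 = 0.282095, band-1 factor 0.488603) and zonal scales (pi for band 0, 2*pi/3 for band 1). Whether you feed the normal or the negative normal depends on the storage convention mentioned in this thread.

```cpp
#include <cassert>
#include <cmath>

// Four SH coefficients: Y00, Y1-1, Y10, Y11.
struct SH4 { float c[4]; };

// SH basis evaluated in direction (x, y, z); unit length assumed.
SH4 shEvaluate(float x, float y, float z) {
    return { { 0.282095f, 0.488603f * y, 0.488603f * z, 0.488603f * x } };
}

// Clamped cosine lobe around the normal, projected to SH:
// band 0 scaled by pi, band 1 by 2*pi/3 (standard zonal factors).
SH4 shCosineLobe(float nx, float ny, float nz) {
    SH4 s = shEvaluate(nx, ny, nz);
    s.c[0] *= 3.14159265f;
    for (int i = 1; i < 4; ++i) s.c[i] *= 2.0f * 3.14159265f / 3.0f;
    return s;
}

// Irradiance estimate: dot product of the stored radiance SH with the lobe.
float irradiance(const SH4& radiance, float nx, float ny, float nz) {
    SH4 lobe = shCosineLobe(nx, ny, nz);
    float sum = 0.0f;
    for (int i = 0; i < 4; ++i) sum += radiance.c[i] * lobe.c[i];
    return sum;
}
```

As a sanity check, a constant unit radiance projects to a single band-0 coefficient of 0.282095 * 4*pi ≈ 3.54491, and the dot product then yields pi, which matches the analytic hemisphere integral of cos(theta).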

Oh OK, that sounds like the way I do it at the moment: create spherical basis functions in the direction of the negative normal and do a dot product with the coefficients saved in the corresponding volume cell. The reason why I was thinking about changing this was that the Crytek paper points out to use half the cell size to convert intensity to incident radiance: "However, since we store intensity we need to convert it into incident radiance and due to spatial discretization we assume that the distance between the cell’s center (where the intensity is assumed to be), and the surface to be lit is half the grid size s." (Cascaded Light Propagation Volumes for Real-Time Indirect Illumination, Kaplanyan and Dachsbacher) With grid size they mean cell size. In a way I would like to use the cell size somewhere in the process, because it feels wrong not to use it, but as said, I think I already do it the way you recommend right now.
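The conversion quoted from the paper is just an inverse-square falloff evaluated at half the cell size. The sketch below is only an illustration of that sentence; the function and variable names are mine, not from the paper's code.

```cpp
#include <cassert>
#include <cmath>

// Intensity stored at the cell center is converted to incident radiance by
// assuming the lit surface sits half a cell size s away (inverse-square law).
float intensityToIncidentRadiance(float intensity, float cellSize) {
    float d = 0.5f * cellSize;   // assumed distance: half the grid/cell size s
    return intensity / (d * d);  // point-source inverse-square conversion
}
```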

BTW, please forgive my honesty but I think you're paying too much attention to the theoretical details and forgetting that the whole technique is a huge hack so mathematical correctness won't bring you the benefits you expect.

You are welcome, and I really appreciate your hints. The reason why I am thinking about the theory is that I want to understand it completely; whether I implement it then is another question. I already have some factors I can tune at runtime, and you are right that they can make the scene visually better.

Do you have any thoughts about the G-buffer occlusion injection? At the moment I do it with the squared distance to the camera and one factor I can set myself at runtime.

Thanks a lot

### In Topic: Problem with Cascaded Light Propagation Volumes

09 September 2013 - 06:08 PM

Overlapping cascades? On the cells where the cascades overlap, each cascade only contributes half the luminance, so you sum to your full normal luminance but get a halfway transition?

Sorry, I am not sure that I understand what you are saying.
In the first pictures I posted I had no overlapping cascades (the big one is left empty where the smaller one fits in, and the propagation also accounts for this), and I had propagation between cascades.

The new picture uses overlapping cascades (much easier to implement) where each cascade propagates independently.
Then for rendering I took (3*Cascade1Value+2*Cascade2Value+Cascade3Value)/6 for a position in the finest cascade. I know that summing things up there is physically completely wrong; I just tried it to see the visual result.
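For reference, that ad-hoc blend (acknowledged above as physically incorrect, just a visual test) is simply a fixed-weight average; the function name `blendCascades` is illustrative only.

```cpp
#include <cassert>
#include <cmath>

// Fixed-weight blend of the three cascade values for a position in the
// finest cascade: (3*c1 + 2*c2 + c3) / 6. Not energy-conserving, just a test.
float blendCascades(float c1, float c2, float c3) {
    return (3.0f * c1 + 2.0f * c2 + c3) / 6.0f;
}
```

The weights sum to 6, so three equal cascade values pass through unchanged.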

At the moment I am just trying out how well this cascade approach works, but I really appreciate your ideas.

### In Topic: Problem with Cascaded Light Propagation Volumes

09 September 2013 - 12:48 PM

Yes, you are more or less right: I look up which is the finest cascade at the current position and use that one.

I tried your suggestion and scaled the radiance from the biggest cascade by 3, the middle one by 2, and the smallest by 1.

This results in the following Image:

It is in a way not as bad as before but not really great. I could play with the scale factors and so on.

However, you also seem to agree that the base problem is there and that it might be difficult or impossible to overcome without "cheating" a little bit.
Any other ideas, someone?

### In Topic: Problem with Cascaded Light Propagation Volumes

08 September 2013 - 04:35 PM

Yes, I use the same amount of iterations (propagation steps) for each cascade. And yes, what you are saying about the obvious fact that light "goes further" with larger cells is exactly what I would expect. Most of the theses about LPVs, and most implementations, do not consider cascades (maybe for this reason?). But the original ones from Crytek just gave me the feeling that I was missing something important and that it should just look fine, or at least not as bad as it did with my implementation.

Besides that, the Crytek paper mentions that they convert the intensity of the spherical harmonics to incoming radiance by using half of the cell size, but only when rendering the result, not in the propagation; therefore this cannot be the solution to the problem.

I will try it with the Sponza scene in the next days. At the moment I think that with centering around the player, or maybe selecting the cascade to use with caution (as an example, first evaluating whether the complete object is in that cascade), it might not be terrible.

Thank you for your thoughts and the acknowledgement that I am not completely crazy.

30 August 2013 - 06:01 AM

Drawing is done by VBOs only (my hardware doesn't support VAOs AFAIK), in the usual way as far as I can see.

Every frame, after drawing:

```
int projectionMatrixLocation = glGetUniformLocation(programShaderID, "projectionMatrix"); // Get the location of our projection matrix in the shader
int viewMatrixLocation = glGetUniformLocation(programShaderID, "viewMatrix"); // Get the location of our view matrix in the shader
int modelMatrixLocation = glGetUniformLocation(programShaderID, "modelMatrix"); // Get the location of our model matrix in the shader

glUniformMatrix4fv(projectionMatrixLocation, 1, GL_FALSE, &projectionMatrix[0][0]); // Send our projection matrix to the shader
glUniformMatrix4fv(viewMatrixLocation, 1, GL_FALSE, &viewMatrix[0][0]); // Send our view matrix to the shader
glUniformMatrix4fv(modelMatrixLocation, 1, GL_FALSE, &modelMatrix[0][0]); // Send our model matrix to the shader
```

Here is one problem:

I guess you first do something like glUseProgram(programShaderID), and at the end of your drawing glUseProgram(0)? Or do you just use one shader? Then setting it once in the initialisation is OK.
When you set the uniforms after drawing, the shaders don't have the matrices when they draw/process everything. Or, if the shader is active all the time, then you draw everything with the matrices from the last draw step.
The right sequence would be:

Initialisation:

1. glUseProgram (if you only use one shader)
2. glGetUniformLocation

Each frame:

1. set uniforms
2. draw
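
That sequence could be sketched as follows. This is illustrative only: the GL calls need a live context to actually run, and `programShaderID` plus the matrix variables are taken from the snippet earlier in this post.

```cpp
// Initialisation (once, after the program has been compiled and linked):
glUseProgram(programShaderID); // OK to bind once if this is the only shader
int projectionMatrixLocation = glGetUniformLocation(programShaderID, "projectionMatrix");
int viewMatrixLocation       = glGetUniformLocation(programShaderID, "viewMatrix");
int modelMatrixLocation      = glGetUniformLocation(programShaderID, "modelMatrix");

// Each frame, BEFORE issuing the draw calls:
glUniformMatrix4fv(projectionMatrixLocation, 1, GL_FALSE, &projectionMatrix[0][0]);
glUniformMatrix4fv(viewMatrixLocation,       1, GL_FALSE, &viewMatrix[0][0]);
glUniformMatrix4fv(modelMatrixLocation,      1, GL_FALSE, &modelMatrix[0][0]);
// ... then draw (glDrawArrays / glDrawElements with the bound VBOs) ...
```

The key point is that glUniformMatrix4fv affects the currently bound program, so the uniforms must be set while the program is active and before the geometry is drawn.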