
Progressive Refinement Radiosity Energy


Hi,

Intro
After seeing the Lightsprint demos and wanting to emulate them, I was at first unsuccessful in finding out how they worked. Eventually, after months of searching, I found http://www.cs.cmu.edu/~radiosity/emprad-tr.pdf which mentions (in passing) that they are based on progressive refinement radiosity.

Today, I hacked together a little Python class to plug into my library to execute progressive refinement radiosity. It's surprisingly speedy, even though I haven't tried to optimize it. A preliminary result (first bounce):


Problem
Right now, I'm stuck on energy conservation. The (admittedly pseudo-)code in the link above doesn't seem to conserve energy--or perhaps I've ported it wrong. They suggest:
for (j in patches) { // Shoot patch i's unshot radiosity to every patch j
    dB = radToShoot * FormFactor(i, j) * Vis1(i, j) * Reflectance[j];
    B[j] += dB; // total radiosity accumulated so far
    S[j] += dB; // unshot radiosity still to be shot from j
}
It seems to me that if there are many other patches, the energy of the system will increase without bound. Indeed, I've seen this happen: the total energy of my radiosity simulation approaches infinity.

I tried dividing the radiant exitance by the number of patches, which seemed to help, but it gives incorrect results, since energy ought to be able to leave the scene without hitting anything.

What's the preferred algorithm here? I.e., can someone replace this inner loop with something that works?

Thanks,
G

Quote:
Original post by Geometrian

It seems to me that if there are many other patches, the energy of the system will increase without bound. Indeed, I've seen this happen: the total energy of my radiosity simulation approaches infinity.

No, this should not happen. The total number of patches is not important; what matters is how much of the shooter's view the other patches cover--that is, the visible portion of each patch. The form factors from any one patch sum to at most 1, so if you only count the visible portion of each patch and reflectance is less than 1, the algorithm approaches a stable state.
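To see this numerically, here's a toy sketch (my own made-up reflectances and form factors, not from any real scene): two patches shooting energy back and forth. Because each row of form factors sums to less than 1 (some energy escapes) and reflectance is below 1, the totals keep increasing but level off as a geometric series.

```python
# Toy progressive-refinement shooting between two patches.
# All constants are illustrative; form factors sum to < 1 per row
# (energy can leave the scene) and reflectance is < 1.
REFLECTANCE = [0.5, 0.5]
FORM_FACTOR = [[0.0, 0.6],   # FORM_FACTOR[i][j]: fraction of i's energy reaching j
               [0.6, 0.0]]

def shoot(B, unshot):
    """One shooting step: the patch with the most unshot energy
    distributes it to every other patch, then resets its own to zero."""
    i = max(range(len(B)), key=lambda k: unshot[k])
    for j in range(len(B)):
        dB = unshot[i] * FORM_FACTOR[i][j] * REFLECTANCE[j]
        B[j] += dB        # accumulated radiosity
        unshot[j] += dB   # energy j still has to shoot
    unshot[i] = 0.0       # patch i has now shot everything

B = [1.0, 0.0]            # patch 0 starts as the emitter
unshot = [1.0, 0.0]
for _ in range(50):
    shoot(B, unshot)
print(B)                  # bounded: a geometric series with ratio 0.6*0.5*0.6*0.5 = 0.09
```

The key detail is the `unshot[i] = 0.0` reset: once a patch has shot its energy, that energy must not be shot again, or the totals really will diverge.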

About 10 years ago I wrote a progressive refinement radiosity program which worked like a charm. If I remember correctly, I rendered the scene from the view of the surface with the highest unshot energy onto a hemicube. I encoded the surface id in the RGBA channels, then read the pixels back and summed the pixels for each receiving surface. Separately, I calculated the number of pixels each surface would theoretically cover as seen from the emitter, which yields a visibility factor < 1.

Progressive refinement radiosity basically uses the Jacobi method to solve an energy-balance equation (which can be found here). Each "bounce" of radiosity is one iteration of the Jacobi algorithm. So the first thing to do is to look at the energy-transport equation and understand it.
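As an illustration (the emissions, reflectances, and form factors below are made up, not from any real scene), the balance equation B = E + diag(R)·F·B can be iterated Jacobi-style, one sweep per bounce:

```python
# Jacobi iteration on the radiosity system B = E + diag(R) * F * B.
# Each sweep corresponds to one bounce of light; B increases every
# bounce but converges because the spectral radius of diag(R)*F is < 1.

def jacobi_bounce(B, E, R, F):
    """One Jacobi sweep: B_new[i] = E[i] + R[i] * sum_j F[i][j] * B[j]."""
    n = len(B)
    return [E[i] + R[i] * sum(F[i][j] * B[j] for j in range(n))
            for i in range(n)]

E = [1.0, 0.0, 0.0]            # only patch 0 emits
R = [0.5, 0.5, 0.5]            # reflectances < 1
F = [[0.0, 0.4, 0.4],          # rows sum to <= 1 (some energy escapes)
     [0.4, 0.0, 0.4],
     [0.4, 0.4, 0.0]]

B = E[:]                       # zeroth bounce: direct emission only
for bounce in range(100):
    B = jacobi_bounce(B, E, R, F)
print(B)                       # converges to the solution of (I - diag(R) F) B = E
```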

The other thing to understand is that a function can keep increasing while still converging to a finite limit. Here's an example:

f(t) = 1 - exp(-t)

This always increases... but converges asymptotically to 1.

I think that with these two insights most of your cognitive dissonance should go away. :-)

Hi,

I had been computing everything analytically, but evidently it's better to render the scene from the patch's perspective.

When doing this, is there any special way to count the number of pixels corresponding to another patch? It seems to me that you'd need a fairly large viewport (64x64 or 32x32) to get decent results. Is there a good way to have that done automatically? I'm assuming not, but just checking.

Thanks,
G

Quote:

When doing this, is there any special way to count the number of pixels corresponding to another patch? Seems to me that you'd need a fairly large viewport (64x64 or 32x32) to get decent results. Is there a good way to have that done automatically? I'm assuming not, but just checking.


I used quite a simple approach:
1. Put all your patches into a simple array.
2. Create a secondary array of the same size, which will hold each patch's visible-pixel count.
3. Encode the patch index into RGBA (r = index & 0xff, g = (index >> 8) & 0xff, ...).

4. Determine the patch with the highest unshot energy; if that energy < threshold, stop.
5. Clear the secondary array.
6. Render the scene from the patch's view: flat shaded, z-buffer on, encoded index as color.
7. Read the buffer back and iterate over each pixel.
8. For each pixel, reconstruct the index and increment the pixel count at that index in the secondary array.
9. Transfer energy from the patch to the visible patches (those with at least one pixel in the secondary array).
10. Go back to 4.

As viewport size I used 128x128 or 256x256, on 10-year-old hardware! On the other hand, the number of patches has increased since then. :-) There's still plenty of potential to speed it up, but it worked very well for me.
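The index-encoding, read-back counting, and energy-transfer steps above can be sketched like this. The GPU render is faked with a small hard-coded "framebuffer" so the snippet stands alone, and all names are my own:

```python
# Item-buffer counting sketch: the render pass is replaced by a
# hand-made framebuffer of encoded patch indices.

def encode_index(index):
    """Patch index -> (r, g, b) color channels, low byte first."""
    return (index & 0xFF, (index >> 8) & 0xFF, (index >> 16) & 0xFF)

def decode_index(rgb):
    """Inverse of encode_index: (r, g, b) -> patch index."""
    r, g, b = rgb
    return r | (g << 8) | (b << 16)

NUM_PATCHES = 4
# Pretend this was read back after rendering from the shooter's view:
# each pixel holds the encoded index of the patch visible there.
framebuffer = [encode_index(1)] * 6 + [encode_index(2)] * 3 + [encode_index(3)]

pixel_count = [0] * NUM_PATCHES          # the "secondary array"
for pixel in framebuffer:
    pixel_count[decode_index(pixel)] += 1

total = len(framebuffer)
for j, count in enumerate(pixel_count):
    if count > 0:
        coverage = count / total         # fraction of the view patch j covers
        # ...here you would transfer the shooter's unshot energy to
        # patch j, weighted by coverage and its reflectance...
        print(j, coverage)
```

With a real renderer you would obtain `framebuffer` by reading the color buffer back (e.g. glReadPixels) after the flat-shaded, z-buffered pass; the counting and decoding logic stays the same.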
