jeremie009

Radiosity


If your scene is a closed box with walls that have albedo 1 and some light source inside, then of course the energy doesn't have any way of leaving and your formulas won't converge. But as long as there is something dark in the scene (outer space will do, if the scene is not completely closed), it will converge. You can also make albedo less than 1, as it should be.
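To put that in symbols (notation mine, not the poster's: B for radiosity, E for emission, \rho for albedo, F for the form-factor matrix with row sums at most 1):

    B^{(n+1)} = E + \rho F B^{(n)}
    \|B^{(n)}\|_\infty \;\le\; \|E\|_\infty \sum_{k=0}^{n} \rho^k \;\longrightarrow\; \frac{\|E\|_\infty}{1 - \rho} \quad (\rho < 1)

With \rho = 1 in a fully closed scene the geometric series diverges, which is exactly the non-convergence described above.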

I've had explosion issues when double buffering received light
(storing the results elsewhere and updating all lrec values at once before the next iteration).
In this case a bit of damping helps (receiver->lrec = accumulated light * 0.98).

After porting to the GPU it's unavoidable that many values update at the same time, but I never got explosions,
just fluctuations when there are heavy changes in the scene.
The highest albedo value I use is 0.9.
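A minimal C++ sketch of that damped, double-buffered update; the Patch layout, the form-factor list and the function names are my assumptions, only lrec and the 0.98 damping come from the post:

    // One Jacobi-style radiosity iteration with damping.
    // All receivers gather from the previous iteration's buffer (lrecPrev);
    // results go to lrecNext, and the buffers are swapped afterwards.
    #include <utility>
    #include <vector>

    struct Patch {
        float emission = 0.0f;
        float albedo   = 0.9f;   // kept below 1 so the series converges
        float lrecPrev = 0.0f;   // light received, previous iteration
        float lrecNext = 0.0f;   // light received, next iteration
        std::vector<std::pair<int, float>> formFactors; // (sender index, form factor)
    };

    void iterate(std::vector<Patch>& patches, float damping = 0.98f)
    {
        for (Patch& r : patches) {
            float gathered = 0.0f;
            for (auto& [sender, ff] : r.formFactors) {
                const Patch& s = patches[sender];
                gathered += ff * (s.emission + s.albedo * s.lrecPrev);
            }
            r.lrecNext = gathered * damping; // damping suppresses oscillation/explosion
        }
        for (Patch& p : patches)
            p.lrecPrev = std::exchange(p.lrecNext, 0.0f); // swap buffers
    }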

When you ported your radiosity to the GPU, did you use some sort of hierarchy or just brute force?

In my case (rendering a view from the lumel), I started with brute force. Later I moved to an iterative lattice distribution pretty much exactly matching Hugo Elias' description of such an approach: render every fourth lumel, then visit the lumels in between and lerp them if their rendered neighbours are close enough, otherwise render them too, then repeat for the lumels in between that subset. The only addition is a "dominant plane" check, where I dot the lumel's normal against the 6 cardinal planes and classify the lumel by the best-fitting plane (see the sketch below); so far that's been sufficient.
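A sketch of that dominant-plane classification, assuming a bare Vec3 type (the names and layout are mine, not the poster's):

    // Classify a lumel by the cardinal axis plane its normal fits best.
    // Returns 0..5 for +X, -X, +Y, -Y, +Z, -Z.
    struct Vec3 { float x, y, z; };

    int dominantPlane(const Vec3& n)
    {
        const Vec3 axes[6] = {
            { 1, 0, 0 }, { -1, 0, 0 },
            { 0, 1, 0 }, { 0, -1, 0 },
            { 0, 0, 1 }, { 0, 0, -1 },
        };
        int   best    = 0;
        float bestDot = -2.0f;
        for (int i = 0; i < 6; ++i) {
            float d = n.x * axes[i].x + n.y * axes[i].y + n.z * axes[i].z;
            if (d > bestDot) { bestDot = d; best = i; }
        }
        return best;
    }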

For CPU/GPU real-time, I cluster "emitters" by dominant plane and distance, then calculate a limited number of form factors (usually 3) for each lumel against those clusters. I then create a vertex buffer for the clusters. On the CPU I just do a simple directional light; on the GPU I create a point cloud of multiple samples for each cluster (random distribution) and use transform feedback to calculate the results at the end of the frame (the shadow maps already exist at that point). I then propagate that to the lumels and apply the data back to the lightmap, averaging the sample values. It's quite fast; sending the updated lightmap to the GPU is slower than everything else combined.
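A rough CPU-side sketch of the per-lumel gather against a few clusters; the Cluster/LumelLink layout and all names are my guesses at the data involved, not the poster's actual code:

    #include <vector>

    struct Cluster {
        float dir[3];     // direction from lumels toward the cluster
        float radiance;   // current bounce light stored on the cluster
    };

    struct LumelLink {    // one of the ~3 strongest clusters for a lumel
        int   cluster;
        float formFactor; // precomputed, normalized over the lumel's links
    };

    // Accumulate bounce light for one lumel from its linked clusters,
    // treating each cluster as a simple directional light (the CPU path).
    float gatherLumel(const float normal[3],
                      const std::vector<LumelLink>& links,
                      const std::vector<Cluster>& clusters)
    {
        float sum = 0.0f;
        for (const LumelLink& l : links) {
            const Cluster& c = clusters[l.cluster];
            float ndotl = normal[0] * c.dir[0]
                        + normal[1] * c.dir[1]
                        + normal[2] * c.dir[2];
            if (ndotl > 0.0f)
                sum += l.formFactor * c.radiance * ndotl;
        }
        return sum;
    }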

 

Rendering environment maps for each texel may be good for performance, but for accuracy you need high resolution.
The number of pixels describes both the distance and the area of an object, so it makes a big difference whether a small object with high emission
ends up covering 1 or 2 pixels in the environment map, resulting in banding or noise. I found 256x256 env maps still not good enough for my needs.
That's not an issue if you handle light sources separately, of course.

Pretty sure you have that backwards. Banding and noise don't present themselves until you start bumping up the resolution in my experience so far. I normally use 32x32 for each hemicube face.
 
As you increase the resolution you also decrease the coarseness of the multiplier "map" (whether that map is a real map you precompute or a value you calculate on the fly). Even though that map is normalized, the number of values that end up becoming significant contributing factors is substantially higher as the resolution increases. That coarseness at low resolution also really saves you when the hemicube penetrates corner geometry: most methods of dealing with that offset the sampling point to avoid interpenetration, which gives poor occlusion at edges because you no longer capture the foreign exitance seen by the actual point (requiring an ambient occlusion pass to reacquire it).
 
Penetration will generally be quite severe in one or maybe two faces; at low res the coarseness makes those penetrated values practically meaningless to the overall contribution.
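To make the "multiplier map" above concrete, here is a sketch of the classic hemicube delta form factors (in the style of Cohen & Greenberg's hemicube method) for the top face only; the resolution, layout and normalization details are assumptions, not the poster's code:

    #include <vector>

    // Weight ("multiplier") map for the TOP face of a hemicube: res x res
    // pixels spanning [-1,1]^2 at height 1 above the lumel.
    // Per-pixel delta form factor: dF = dA / (pi * (x^2 + y^2 + 1)^2).
    std::vector<float> topFaceWeights(int res)
    {
        std::vector<float> w(res * res);
        const float dA = 4.0f / float(res * res); // pixel area on the [-1,1]^2 face
        const float pi = 3.14159265f;
        for (int j = 0; j < res; ++j)
            for (int i = 0; i < res; ++i) {
                float x = -1.0f + (i + 0.5f) * 2.0f / res;
                float y = -1.0f + (j + 0.5f) * 2.0f / res;
                float r = x * x + y * y + 1.0f;
                w[j * res + i] = dA / (pi * r * r);
            }
        // A full implementation also builds the four half side faces
        // (dF = z * dA / (pi * (y^2 + z^2 + 1)^2)) and normalizes so all
        // five face maps together sum to 1.
        return w;
    }

At 32x32 each delta form factor is coarse, which is the coarseness the post credits with making penetrated corner values insignificant.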

I've never seen banding at 128x128 hemicube faces or lower. It does appear to be a problem for some, but I'd imagine it has more to do with a faulty lumel-to-world-unit ratio and misguided attempts at irradiance caching (do it in surface space, not world space) than with the actual process of rendering from a lumel. The other villain of the approach is specular. GPU hardware texture filtering creates far more undesirable artifacts, as far as I've seen.
 
The big problem with that approach is specular response. It's really easy to end up with an implementation that cannot converge (as in it'll bleach out to white). By rendering from the lumel via your normal rendering path you're subject to the specular response of your materials, so you have to account for that; and when doing multiple passes against a lightmapping technique that also includes a specular response (ambient + directional, for example) you have to account for that as well.
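One simple guard, purely my own suggestion rather than anything described above: keep each material's combined diffuse and specular reflectance below 1, so every bounce pass removes energy and the iteration stays a contraction. A minimal sketch:

    // Hedge against non-convergent bounces: ensure each material reflects
    // less energy than it receives across the diffuse + specular terms.
    struct Material {
        float kd;  // diffuse reflectance  (0..1)
        float ks;  // specular reflectance (0..1)
    };

    void clampEnergy(Material& m, float maxAlbedo = 0.95f)
    {
        float total = m.kd + m.ks;
        if (total > maxAlbedo) {
            float s = maxAlbedo / total;
            m.kd *= s;
            m.ks *= s;
        }
    }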

Almost all radiosity research and study focuses on diffuse transfer. The original theory of rendering from the lumel also assumed a purely diffuse environment.
Edited by JSandusky
