Radiosity calculation question



Hi, I have two questions concerning radiosity calculations for my progressive refinement solution:

1. Is it worth using double precision for the calculations? Does this have a noticeable effect on image quality?

2. I'm currently trying to implement adaptive subdivision (dividing patches into elements where there are high discontinuities). I understand the approach, but I just can't put it into practice. The problem is that I finally bake my radiosity results into lightmaps. Since each patch stands for a texel (or lumel) in a lightmap, it gets messy when dealing with patch elements, because these have to be remapped to a single color value for the lumel - and then I see no advantage in the adaptive subdivision. Can someone give me some pseudocode for realizing this, and shed some light on using patch element radiosities together with lightmaps?

Thanks,
Gammastrahler

Quote:
Original post by Gammastrahler
1. Is it worth using double precision for the calculations? Does this have a noticeable effect on image quality?

Hmm, it depends on the energy levels you're dealing with. It also depends on how far you let the solution converge: many very subtle details only appear at a convergence of over 98%. At that point the transferred energy levels are very small, yet they can still make a difference, because there are millions of them. So in this case, going double precision can be worthwhile.

Even if you don't want to use doubles for the patch energy (memory!), you should most definitely use doubles in the form factor calculations, including the final energy * FF multiply. The form factor can get very, very small - yet the energy can be very large. Floats are often not accurate enough to perform a precise multiply on numbers with such a large dynamic range.
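A minimal sketch of what that can look like in practice (the Patch layout, the point-to-disc form factor approximation and the Shoot() routine are illustrative assumptions, not code from this thread): the geometry and the energy * FF multiply run in double, while per-patch storage stays in float.

```cpp
// Sketch: double-precision form factor and shoot step for progressive
// refinement. The Patch layout and the point-to-disc approximation are
// assumptions made for this example.
#include <cmath>

struct Patch {
    float pos[3];        // patch center
    float normal[3];     // unit normal
    float area;          // patch area
    float unshot[3];     // unshot RGB energy (float storage to save memory)
    float radiosity[3];  // accumulated RGB radiosity
};

// Point-to-disc form factor approximation from patch i towards patch j:
//   FF = (cos(theta_i) * cos(theta_j) * A_j) / (pi * r^2 + A_j)
// Computed entirely in double to handle the large dynamic range.
double FormFactor(const Patch& i, const Patch& j)
{
    const double d[3] = { j.pos[0] - i.pos[0], j.pos[1] - i.pos[1], j.pos[2] - i.pos[2] };
    const double r2 = d[0] * d[0] + d[1] * d[1] + d[2] * d[2];
    if (r2 <= 0.0) return 0.0;
    const double r = std::sqrt(r2);

    const double cosI =  (i.normal[0] * d[0] + i.normal[1] * d[1] + i.normal[2] * d[2]) / r;
    const double cosJ = -(j.normal[0] * d[0] + j.normal[1] * d[1] + j.normal[2] * d[2]) / r;
    if (cosI <= 0.0 || cosJ <= 0.0) return 0.0;

    return (cosI * cosJ * j.area) / (3.14159265358979323846 * r2 + j.area);
}

// Progressive-refinement shoot: the energy * FF multiply happens in double,
// and only the final result is stored back as float.
void Shoot(const Patch& shooter, Patch& receiver, const float reflectance[3])
{
    // By reciprocity, delta_B(receiver) = rho * B_unshot(shooter) * FF(receiver -> shooter).
    const double ff = FormFactor(receiver, shooter);
    for (int c = 0; c < 3; ++c) {
        const double delta = double(shooter.unshot[c]) * ff * double(reflectance[c]);
        receiver.radiosity[c] += float(delta);
        receiver.unshot[c]    += float(delta);
    }
}
```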

Quote:
Original post by Gammastrahler
2. I'm currently trying to implement adaptive subdivision (dividing patches into elements where there are high discontinuities). I understand the approach, but I just can't put it into practice. The problem is that I finally bake my radiosity results into lightmaps. Since each patch stands for a texel (or lumel) in a lightmap, it gets messy when dealing with patch elements, because these have to be remapped to a single color value for the lumel - and then I see no advantage in the adaptive subdivision.

Basically, forget about the storage advantages of adaptive patching with lightmaps - there aren't any (or rather: they're not worth the trouble). See subdivision for what it really is: a way to accelerate the convergence of the solution. Patch elements are sometimes also used for antialiasing purposes, but again this is generally not worth it - just multisample the shadow queries over a subpatch grid (and only the shadow queries, not the energy transfer), as in the sketch below, and you'll be fine.
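A rough sketch of that multisampled shadow query, assuming the engine already has some ray/occlusion test to plug in (the callback, the corner-based sampling and all names here are illustrative assumptions):

```cpp
// Sketch: multisample only the visibility term over an N x N grid of sample
// points on the receiving patch; the energy transfer itself still uses a
// single form factor per patch. The callback stands in for whatever
// ray/occlusion query is already available.
#include <functional>

using VisibilityQuery = std::function<bool(const double*, const double*)>; // (from, to)

double FractionVisible(const double shooterPos[3],
                       const double receiverCorners[4][3], // patch corners (quad)
                       const VisibilityQuery& visible,
                       int gridRes = 4)
{
    int hits = 0;
    for (int y = 0; y < gridRes; ++y) {
        for (int x = 0; x < gridRes; ++x) {
            // Bilinear sample position inside the receiving patch.
            const double u = (x + 0.5) / gridRes;
            const double v = (y + 0.5) / gridRes;
            double sample[3];
            for (int c = 0; c < 3; ++c) {
                const double top = receiverCorners[0][c] * (1.0 - u) + receiverCorners[1][c] * u;
                const double bot = receiverCorners[3][c] * (1.0 - u) + receiverCorners[2][c] * u;
                sample[c] = top * (1.0 - v) + bot * v;
            }
            if (visible(shooterPos, sample))
                ++hits;
        }
    }
    // This fraction scales the single patch-to-patch form factor instead of
    // subdividing the patch for the energy transfer.
    return double(hits) / (gridRes * gridRes);
}
```

The returned fraction simply scales the single patch-to-patch form factor; no extra patches or storage are created.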

Say you have a fixed-resolution patch grid, a lightmap for instance. This grid is the finest patch resolution available; we will not go finer than that (except for shadows, as stated above, but that is different because it doesn't require extra storage). Now, create an image-space quadtree for each lightmap: level 0 is your original map, level 1 is your map resolution divided by 2 (i.e. one level 1 texel corresponds to 2*2 level 0 texels), and so on. Compute the energy transfer at a higher level first. Then check for discontinuities: if they are over the threshold, invalidate the result, recurse into the map, and repeat. If the discontinuities are under the threshold, propagate the energy down the hierarchy and go to the next texel.
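A rough sketch of that loop, assuming the lightmap levels are already allocated (the LightmapLevel struct, the gather callback and the simple neighbour-difference discontinuity metric are all assumptions made for illustration):

```cpp
// Sketch: hierarchical solve over a lightmap quadtree. Level 0 is the full
// lightmap resolution; level k halves the resolution k times.
#include <algorithm>
#include <cmath>
#include <functional>
#include <vector>

struct LightmapLevel {
    int width = 0, height = 0;
    std::vector<float> energy; // one RGB triplet per texel, row-major
};

// Per-texel energy gather (form factors + shadow queries), supplied by the solver.
using GatherFn = std::function<void(int, int, int)>; // (level, x, y)

// Simple discontinuity estimate: maximum channel difference against the
// left and upper neighbours at the same level.
static double Discontinuity(const LightmapLevel& map, int x, int y)
{
    const float* self = &map.energy[(y * map.width + x) * 3];
    double d = 0.0;
    const int nx[2] = { x - 1, x };
    const int ny[2] = { y, y - 1 };
    for (int n = 0; n < 2; ++n) {
        if (nx[n] < 0 || ny[n] < 0) continue;
        const float* nb = &map.energy[(ny[n] * map.width + nx[n]) * 3];
        for (int c = 0; c < 3; ++c)
            d = std::max(d, std::fabs(double(self[c]) - double(nb[c])));
    }
    return d;
}

static void SolveTexel(std::vector<LightmapLevel>& levels, const GatherFn& gather,
                       int level, int x, int y, double threshold)
{
    gather(level, x, y); // compute the energy transfer for this texel

    if (level > 0 && Discontinuity(levels[level], x, y) > threshold) {
        // Over the threshold: ignore the coarse result and recurse into the
        // 2x2 block of child texels on the next finer level.
        for (int dy = 0; dy < 2; ++dy)
            for (int dx = 0; dx < 2; ++dx)
                SolveTexel(levels, gather, level - 1, 2 * x + dx, 2 * y + dy, threshold);
        return;
    }

    // Under the threshold (or already at level 0): propagate the result down
    // the hierarchy so every level-0 texel covered by this texel inherits it.
    const int scale = 1 << level;
    const float* src = &levels[level].energy[(y * levels[level].width + x) * 3];
    for (int dy = 0; dy < scale; ++dy)
        for (int dx = 0; dx < scale; ++dx) {
            float* dst = &levels[0].energy[((y * scale + dy) * levels[0].width + (x * scale + dx)) * 3];
            dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2];
        }
}

// Drive the solve from the coarsest level towards level 0.
void SolveLightmap(std::vector<LightmapLevel>& levels, const GatherFn& gather,
                   double threshold)
{
    const int top = int(levels.size()) - 1;
    for (int y = 0; y < levels[top].height; ++y)
        for (int x = 0; x < levels[top].width; ++x)
            SolveTexel(levels, gather, top, x, y, threshold);
}
```

The gather callback is where the actual form factor and visibility work plugs in; the quadtree only decides at which resolution each region of the lightmap needs to be evaluated.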
