Radiosity: What Visibility Algorithm is Fastest?

11 comments, last by VladR 18 years, 10 months ago
Quote:Original post by davepermen
at least the ones that want to do realtime raytracing all use kd-trees, which are just another bsp tree..

the stuff on www.openrt.de is all without any octree, and bsp only. with axis aligned planes, of course..

the stuff we've seen on flipcode, too.. no octrees.

Well, I don't know about their internal structures, but according to our tests (and we have done a lot of them, since performance was one of our topmost goals), tracing shadows (i.e. simple intersection tracing without needing to know the first intersection point) using octrees with optimized traversal is easily 50% faster than non-overlapping binary AABB trees (everything with hand-optimized SIMD ASM, of course, partially processing multiple rays simultaneously). Although it could depend on the type of geometry you use.
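The distinction made here between shadow tracing and nearest-hit tracing is worth making concrete: an occlusion query can stop at the first blocker it finds. A minimal sketch, with a brute-force list of boxes standing in for the octree or kd-tree (all names are illustrative, not from the posters' code):

```cpp
#include <vector>

struct Ray  { float o[3], d[3]; };     // origin, direction
struct AABB { float lo[3], hi[3]; };   // axis-aligned box

// Slab test: does the ray hit the box somewhere in [tMin, tMax]?
bool intersectAABB(const Ray& r, const AABB& b, float tMin, float tMax) {
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / r.d[i];
        float t0  = (b.lo[i] - r.o[i]) * inv;
        float t1  = (b.hi[i] - r.o[i]) * inv;
        if (t0 > t1) { float t = t0; t0 = t1; t1 = t; }
        if (t0 > tMin) tMin = t0;
        if (t1 < tMax) tMax = t1;
        if (tMin > tMax) return false;
    }
    return true;
}

// Any-hit occlusion query: a shadow ray can stop at the FIRST blocker
// found -- no sorting, no nearest-hit bookkeeping. A real implementation
// would walk the spatial structure instead of this flat list.
bool isOccluded(const Ray& r, const std::vector<AABB>& occluders, float maxT) {
    for (const AABB& b : occluders)
        if (intersectAABB(r, b, 1e-4f, maxT))
            return true;               // early out: the big win for shadow rays
    return false;
}
```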
Yann, while your tests showed a 50% gain for octrees over hand-optimized kd-trees, you also have to realize that the heuristic for choosing the split-plane position of a kd node has been shown to have dramatic effects on traversal time.

Plain vanilla SAH was an order of magnitude better than median splits, but beyond that there are other clever heuristics that offer even greater improvements in tree quality: early isolation of empty space, double splits, and even optimizing the structure of the tree with adaptive termination criteria.
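For reference, the basic surface-area heuristic those refinements build on can be sketched as follows (constants and names are illustrative, not from any particular builder): the cost of a candidate split is the traversal cost plus the surface-area-weighted intersection cost of the two children, and the builder keeps the cheapest candidate.

```cpp
#include <cmath>

// Surface area of an axis-aligned box given its two corners.
float surfaceArea(const float lo[3], const float hi[3]) {
    float dx = hi[0] - lo[0], dy = hi[1] - lo[1], dz = hi[2] - lo[2];
    return 2.0f * (dx * dy + dy * dz + dz * dx);
}

// SAH cost of one candidate split of a kd node:
//   C_trav + SA(left)/SA(node)  * N_left  * C_isect
//          + SA(right)/SA(node) * N_right * C_isect
// Empty-space isolation falls out naturally: a split with N == 0 on
// one side zeroes that term out.
float sahCost(const float lo[3], const float hi[3],
              int axis, float splitPos, int nLeft, int nRight,
              float cTraverse = 1.0f, float cIntersect = 4.0f) {
    float hiL[3] = { hi[0], hi[1], hi[2] };
    float loR[3] = { lo[0], lo[1], lo[2] };
    hiL[axis] = splitPos;   // left child:  [lo, split]
    loR[axis] = splitPos;   // right child: [split, hi]
    float saP = surfaceArea(lo, hi);
    return cTraverse
         + (surfaceArea(lo, hiL) / saP) * nLeft  * cIntersect
         + (surfaceArea(loR, hi) / saP) * nRight * cIntersect;
}
```

For a unit cube with all primitives packed near one face, a tight split isolating the empty space scores noticeably cheaper than a median split, which is exactly the effect the heuristics above exploit.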

Now, maybe you were doing all this. But you only talked about optimizing traversal, when really that's an incremental improvement, less significant than split-heuristic improvements.

And obviously, in-order traversal is one of the great things about kd-trees, so if you don't need that then... I'm curious now. It seems like octree traversal would be very similar to kd-tree traversal, just combining three binary splits into one 8-way split step. Is it significantly different?
Quote:Original post by Yann L
You mean H(ij)? It can get fractional with multisampling, i.e. shooting several rays between the same two patches (or patch groups). This allows a kind of antialiasing, and becomes important when you want to create very high quality images.
The same antialiasing could also be achieved with a higher patch resolution, right? But it would probably be a better idea to subdivide only the patches at the shadow boundary (using some threshold for the difference between two neighbouring patches), so that we don't lose precious time on non-shadowed patches, right?

Quote:Original post by Yann L
No, the octree just speeds up the raytracing process itself. You always use the hemisphere in the form factor calculation, but you might not be aware of it :) It's simply encoded in the cos(theta_i) * cos(theta_j) term.
Yeah, I had already forgotten that my classical equation, F(i,j) = (cos(theta_i) * cos(theta_j)) / (pi * r^2) * H(i,j) * dA(j), is derived from the hemicube.
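That point-to-point form factor can be written out directly. A sketch of the equation above, with H(i,j) passed in as a precomputed visibility term (names are illustrative):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Point-to-point form factor estimate between patch i and patch j:
//   F(i,j) = (cos(theta_i) * cos(theta_j)) / (pi * r^2) * H(i,j) * dA(j)
// H is the visibility term: 1 if fully visible, 0 if occluded
// (or fractional with multisampling, as discussed above).
float formFactor(Vec3 pi, Vec3 ni, Vec3 pj, Vec3 nj, float areaJ, float H) {
    Vec3  v  = sub(pj, pi);          // vector from patch i to patch j
    float r2 = dot(v, v);
    if (r2 < 1e-8f) return 0.0f;     // degenerate: coincident patches
    float invR = 1.0f / std::sqrt(r2);
    float cosI =  dot(ni, v) * invR; // angle at the receiver i
    float cosJ = -dot(nj, v) * invR; // angle at the sender j (v points i -> j)
    if (cosI <= 0.0f || cosJ <= 0.0f) return 0.0f;   // patches face away
    return (cosI * cosJ) / (3.14159265f * r2) * H * areaJ;
}
```

For two unit-area patches facing each other at distance 1, this gives 1/pi, which is a handy sanity check for an implementation.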

Quote:Original post by Yann L
By using probabilistic sampling (or other forms of path, or even volume flux tracing), you can use that hemisphere directly. The basic idea of QMC sampling is as follows: instead of blindly shooting a ray to every possible other patch, you shoot rays into certain directions over the hemisphere, and check if they intersect with another patch. The directions are not isotropic over the hemisphere, but selected by importance. This selection is done on probabilistic principles.
Interesting idea, I'll google how it's done and what the theory behind it is.
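One common concrete instance of the importance selection Yann describes is cosine-weighted hemisphere sampling (Malley's method): more rays go toward the surface normal, where the cosine term in the form factor makes contributions largest. A sketch, assuming two uniform random numbers in [0,1) as input (this is a standard technique, not necessarily the exact scheme Yann uses):

```cpp
#include <cmath>

// Cosine-weighted hemisphere sampling via Malley's method: pick a point
// uniformly on the unit disc, then project it up onto the hemisphere.
// The resulting direction density is proportional to cos(theta), so
// directions near the surface normal are generated more often.
// u1, u2 are uniform random numbers in [0,1).
void cosineSampleHemisphere(float u1, float u2, float dir[3]) {
    const float kPi = 3.14159265f;
    float r   = std::sqrt(u1);       // disc radius
    float phi = 2.0f * kPi * u2;     // disc angle
    dir[0] = r * std::cos(phi);
    dir[1] = r * std::sin(phi);
    dir[2] = std::sqrt(1.0f - u1);   // z is along the surface normal
}
```

Feeding in a quasi-Monte-Carlo low-discrepancy sequence instead of pseudorandom numbers for (u1, u2) is what turns this into the QMC sampling mentioned in the quote.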

Quote:Original post by Yann L
It wouldn't make much sense without knowing much more about the details of your implementation. In this older post, I listed various papers on the subject. You might want to take a look at the one by Bernard Kwok. Even though a little dated, it contains an excellent overview of the basics and some advanced concepts.
Well, I haven't had any luck googling that paper. Does anyone have a direct link to it?
As for the details of my implementation, it's a basic progressive refinement shooting algorithm:
1. Divide the scene (including lights) into patches (quads). I keep 3 energy values per patch (R, G, B), not limited to the range 0-255.
2. Assign energy to the lights (usually around 1000). With higher patch tessellation the overall brightness decreases, because I take the patch area into account in the form-factor equation, so the smaller the patch, the less light gets distributed. But this is easily fixed with a higher exposure (at least I haven't yet noticed that a smaller patch, with the same energy as a bigger one, reduces image quality).
3. Pick the patch with the highest undistributed energy [repeat for R/G/B].
4. Distribute it to all other patches in the scene [per particular R/G/B from the above step].
5. Repeat steps 3-4 until the highest undistributed energy drops below 1% (or some other constant) of the very first, brightest shooter's.
6. Apply exposure.
7. Enjoy the rendering, or curse the wrongly set-up lights.
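The steps above can be sketched as a loop. This is a generic progressive-refinement skeleton, not the poster's actual code; the form-factor/visibility query is passed in as a callback, and all names are illustrative:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Patch {
    float radiosity[3];    // accumulated R,G,B energy
    float unshot[3];       // energy not yet distributed
    float reflectance[3];  // per-channel surface reflectance
    float area;            // would feed the real form-factor query
};

// Generic progressive-refinement shooting loop. 'threshold' is the
// energy cutoff (the post uses 1% of the first brightest shooter).
void progressiveRefinement(std::vector<Patch>& patches, float threshold,
                           float (*formFactor)(const Patch&, const Patch&)) {
    for (;;) {
        // Step 3: pick the patch with the most undistributed energy.
        std::size_t s = 0;
        float best = 0.0f;
        for (std::size_t i = 0; i < patches.size(); ++i) {
            float e = patches[i].unshot[0] + patches[i].unshot[1] + patches[i].unshot[2];
            if (e > best) { best = e; s = i; }
        }
        if (best < threshold) break;   // step 5: converged enough

        // Step 4: shoot the energy to every other patch.
        for (std::size_t j = 0; j < patches.size(); ++j) {
            if (j == s) continue;
            float F = formFactor(patches[s], patches[j]);
            for (int c = 0; c < 3; ++c) {
                float dE = patches[j].reflectance[c] * patches[s].unshot[c] * F;
                patches[j].radiosity[c] += dE;   // received energy...
                patches[j].unshot[c]    += dE;   // ...which it may re-shoot later
            }
        }
        for (int c = 0; c < 3; ++c) patches[s].unshot[c] = 0.0f;
    }
}
```

Lights are just patches whose `unshot` starts non-zero, which matches step 2 of the description.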

Quote:Original post by Yann L
That's wrong. Using the typical exponential exposure equation (1 - exp(-colour * exposure)), the exposure exponent is usually somewhere between 1 and 10, typically around 2 or 3.
I use this exposure function: (1.0f - exp(-light * K)) * 255.0f
I can't see anything if K is 1.0 or 2.0, no matter whether it's one pass or 1000 passes. But values of K around 1000 start showing the picture. Maybe I should assign higher energy values to the lights (currently about 1000), so that the patches receive more energy; what I mean is, I'm not sure whether something gets lost this way because of floating-point precision. If the lights had a higher intensity (100,000 for example), the final radiosity values should also be higher than mine (around 0.00000014534534f), and then I might not need exposure values of 1000, right?
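One thing worth noting about that exposure function: only the product light * K enters the formula, so multiplying all light energies by some constant and dividing K by the same constant produces the identical image. A sketch of the function as written:

```cpp
#include <cmath>

// The exposure mapping discussed above: compress an unbounded light
// value into the displayable range [0, 255).
float expose(float light, float K) {
    return (1.0f - std::exp(-light * K)) * 255.0f;
}
```

For example, expose(1.0e-7f, 1.0e7f) equals expose(1.0f, 1.0f), so the absolute energy scale is arbitrary as far as exposure is concerned; very small radiosity values matter mainly for float precision during accumulation, not for the exposure step itself.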

Quote:Original post by Yann L
This does not impact the exposure. I implement my GI systems using an SPD (a spectral power distribution) of usually 16 bands. But in the end, the display needs RGB data, so this SPD is integrated back into an RGB triplet, and the well known exposure equation (and gamma correction) is applied. In your case, you would basically use a 3-band distribution (red, green, and blue). Going to more selective wavelength bands is only useful for more precise light interaction between surfaces. It is required if you want perfectly photorealistic results (using spectral emission profiles for existing light sources, such as IESNA), but it's not worth it for a game.
I see. If you say it's not worth it, I definitely believe you. Besides, one of the games we're working on needs excessively colorful scenes (many different bright lights) which have nothing to do with reality; they're just pretty to look at. Still, I'll google some papers regarding SPDs, just in case I have time for this in the future.

EDIT: In that old thread with the rendering of the room with two lights, you mentioned something like this: "The algorithm used was multi-spectral PR with transfer tracking, using an inital HDRI shooting pass for the atmospheric light." Could you please explain what that initial HDRI shooting pass actually means? Is it some kind of light with a very high energy value?

VladR My 3rd person action RPG on GreenLight: http://steamcommunity.com/sharedfiles/filedetails/?id=92951596

