Yann L

Wavelet radiosity


OK, so my current radiosity system (slightly modified progressive refinement with adaptive substructuring) doesn't scale very well with my current project... While the visual results are good, it just takes too long to compute on a very high polycount scene. So I'm currently investigating alternative algorithms. I just finished implementing a stochastic solver (using Monte Carlo random walks). It definitely is faster (a lot), but the visual quality isn't as good as expected.

I was thinking about trying wavelet radiosity. From what I've heard/read, the visual results are supposed to be excellent, and convergence is much faster than with standard PR radiosity. But the implementation is far from trivial and will probably take some time.

So I'd be interested to hear if anyone here has ever implemented a wavelet radiosity algorithm. What are your experiences with it, regarding the quality/performance ratio (esp. on complex scenes, 1+ million faces)? Is it worth the trouble? The info I could find on the net is unfortunately very obfuscated, to say the least... Or does anyone know of a good site objectively comparing the quality/performance of advanced radiosity algorithms (on a standardized model, e.g. the Cornell box)?

Thanks!

/ Yann

[edited by - Yann L on July 28, 2002 12:56:47 PM]
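For reference, all of these methods approximate the same discrete radiosity system (standard textbook notation, nothing specific to my implementation):

$$B_i = E_i + \rho_i \sum_{j=1}^{n} F_{ij} B_j$$

where B_i is the radiosity of patch i, E_i its emission, \rho_i its diffuse reflectance and F_{ij} the form factor from patch i to patch j. Progressive refinement solves it by repeatedly shooting the patch with the most unshot energy, the Monte Carlo walk estimates it by random sampling, and wavelet radiosity projects the radiosity function and the transport kernel onto a hierarchical wavelet basis, so that most of the O(n^2) coupling coefficients become negligible and can be discarded.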

Apparently, no one has tried yet... Oh, well.

For those interested, I found a nice paper that compares the algorithms I mentioned.

As I don't see much point in radiosity as it is done today (diffuse radiosity), I haven't ever implemented those algorithms. I don't even understand most parts of them either...

I'm more in favour of BRDFs/shaders that describe the material, and then you render iteratively to accumulate incoming and outgoing light... that way you get all the features.
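Roughly speaking, that iterative accumulation is just the rendering equation evaluated at every surface point:

$$L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, \cos\theta_i \, d\omega_i$$

where f_r is the BRDF. Each pass feeds the outgoing light of the previous pass back in as incoming light, so diffuse, glossy and specular transport all come out of the same formula.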

Try Monte Carlo with the Metropolis technique; if you get it working, you're about the third person in the world to have done it.

"take a look around" - limp bizkit
www.google.com

Yann: I can't really answer your question, but I have another one

Radiosity models diffuse interaction. All realtime engines I know of that use radiosity precalculate radiosity lightmaps and then just add specular interaction on top. I suppose non-realtime engines use raytracing for specular calculations and then combine the results. This approach cannot produce correct results, because at any given point diffuse light can become specular and specular can become diffuse (assuming you're breaking the lighting up into the diffuse/specular extremes, but I couldn't find/think of any other way of doing it on current hardware). We're not even close to getting rid of diffuse/specular extremes and implementing "true" raytracing where each ray correctly breaks up into many rays when it hits the surface.
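To make the split explicit, the usual decomposition treats the BRDF as

$$f_r(\omega_i, \omega_o) \approx \frac{k_d}{\pi} + k_s\, f_{spec}(\omega_i, \omega_o)$$

Precomputed radiosity only solves the transport for the constant k_d / \pi term, and the specular term is layered on afterwards, so energy that should move between the two terms at every bounce is never accounted for.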

Do you know of any models that have been developed to correctly merge radiosity and raytracing?

quote:
Original post by kill
Do you know of any models that have been developed to correctly merge radiosity and raytracing?


There are plenty of good articles about this to be found in the SIGGRAPH proceedings. Do you have access to an academic library?



Don't listen to me. I've had too much coffee.

Guest Anonymous Poster
> each ray correctly breaks up into many rays when it hits the surface.

That's pretty standard for non-realtime, low-poly scenes.
(You already wrote "many", i.e. a simulated finite number of rays, not an infinite/real one.)

quote:

Radiosity models diffuse interaction. All realtime engines I know of that use radiosity precalculate radiosity lightmaps and then just add specular interaction on top. I suppose non-realtime engines use raytracing for specular calculations and then combine the results.


Yep, that's a common technique.

quote:

This approach cannot produce correct results, because at any given point diffuse light can become specular and specular can become diffuse (assuming you're breaking the lighting up into the diffuse/specular extremes, but I couldn't find/think of any other way of doing it on current hardware).


Well, the whole 'diffuse reflector' idea is inherently flawed. You don't have 100% diffuse surfaces in reality. But it's the only way to make radiosity computable on current HW - and it makes it view-independent. There's a lot of ongoing research on the specular issue. Currently, the best approach is to drop the diffuse/specular idea altogether (it was just a hack anyway) and replace the view-independent reflectance factor of every surface with a BRDF. Which, of course, makes the radiosity solution view-dependent. You also have to drop the RGB idea and trace the complete wavelength spectrum separately, since BRDFs are highly wavelength-dependent.
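Just to make that concrete, here's a minimal sketch of what a wavelength-sampled BRDF might look like (the struct names, the Phong lobe and the 16-bin spectrum are arbitrary illustration choices, not anything from my engine):

#include <algorithm>
#include <array>
#include <cmath>

// Spectral distribution sampled at a few fixed wavelengths across the
// visible range (the bin count is arbitrary).
constexpr int kSpectralSamples = 16;
using Spectrum = std::array<float, kSpectralSamples>;

struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Toy wavelength-dependent BRDF: Lambertian base plus a Phong-style lobe.
// Both the diffuse reflectance and the specular strength vary per wavelength,
// which is what makes the solution view- and wavelength-dependent.
struct SpectralPhongBRDF {
    Spectrum diffuse;   // rho_d(lambda), 0..1 per wavelength bin
    Spectrum specular;  // k_s(lambda)
    float    shininess; // Phong exponent

    // Evaluate f_r(wi, wo) at a surface point with normal n (all normalized).
    Spectrum eval(const Vec3& wi, const Vec3& wo, const Vec3& n) const {
        // Mirror direction of wi about n.
        float cosI = dot(n, wi);
        Vec3 refl = { 2.0f*cosI*n.x - wi.x, 2.0f*cosI*n.y - wi.y, 2.0f*cosI*n.z - wi.z };
        float lobe = std::pow(std::max(0.0f, dot(refl, wo)), shininess);

        Spectrum f;
        for (int i = 0; i < kSpectralSamples; ++i)
            f[i] = diffuse[i] / 3.14159265f + specular[i] * lobe;
        return f;
    }
};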

There is an interesting approach using recursive photon maps and form-factor mapping. The idea is to raytrace the photon flux through pre-computed diffuse form factors, while taking view-dependent properties into account. Again, it's only an approximation (and it takes huge amounts of memory), but you get radiosity quality with specular light, caustics, etc. And it's very non-realtime...
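For reference, the basic photon-map radiance estimate (the standard Jensen-style density estimate, not the specific recursive/form-factor variant above) boils down to something like the sketch below; the kd-tree query that collects the nearest photons is assumed and not shown:

#include <vector>

struct Vec3 { float x, y, z; };

struct Photon {
    Vec3  position;      // where it landed
    Vec3  incomingDir;   // direction it arrived from
    float power[3];      // RGB flux carried by this photon
};

// Radiance estimate from the k nearest photons around a surface point:
//   L ~= sum_p f_r * flux_p / (pi * r^2)
// where r is the radius enclosing those photons.
void estimateRadiance(const std::vector<Photon>& nearest,   // k nearest photons
                      float searchRadius,                   // radius enclosing them
                      const float diffuseReflectance[3],    // simple Lambertian surface
                      float outRadiance[3])
{
    const float kPi = 3.14159265f;
    float area = kPi * searchRadius * searchRadius;
    for (int c = 0; c < 3; ++c) outRadiance[c] = 0.0f;
    for (std::size_t p = 0; p < nearest.size(); ++p)
        for (int c = 0; c < 3; ++c)
            outRadiance[c] += (diffuseReflectance[c] / kPi) * nearest[p].power[c];
    for (int c = 0; c < 3; ++c) outRadiance[c] /= area;
}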

quote:

We're not even close to getting rid of diffuse/specular extremes and implementing "true" raytracing where each ray correctly breaks up into many rays when it hits the surface.


This is more or less what radiosity does, only without the specular component. If you don't use hemicubes, you'll have to shoot rays anyway. The Monte Carlo technique uses explicit raytracing/bouncing (which leads to ugly noise artifacts if the scene is 'underbounced' by shooting too few rays). The difference is that Monte Carlo uses random bounces to simulate diffuse reflection (and iterates towards the correct result through probabilities), whereas a view-dependent (specular) solution would require (directional) BRDFs.
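A minimal sketch of one such random diffuse bounce (cosine-weighted hemisphere sampling around the surface normal; the helper names are just for illustration):

#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

static float frand() { return std::rand() / (float)RAND_MAX; }

// Build an orthonormal basis (t, b, n) around a normalized normal n.
static void buildBasis(const Vec3& n, Vec3& t, Vec3& b) {
    if (std::fabs(n.x) > 0.5f) t = Vec3{ n.z, 0.0f, -n.x };
    else                       t = Vec3{ 0.0f, -n.z, n.y };
    float len = std::sqrt(t.x*t.x + t.y*t.y + t.z*t.z);
    t = Vec3{ t.x/len, t.y/len, t.z/len };
    b = Vec3{ n.y*t.z - n.z*t.y, n.z*t.x - n.x*t.z, n.x*t.y - n.y*t.x };
}

// Cosine-weighted direction on the hemisphere around n. Because the pdf is
// cos(theta)/pi, the cosine term of diffuse reflection cancels out and each
// bounce just carries the surface reflectance - which is exactly the
// "random bounce simulates diffuse reflection" trick of the random walk.
Vec3 sampleDiffuseBounce(const Vec3& n) {
    float u1 = frand(), u2 = frand();
    float r = std::sqrt(u1);
    float phi = 2.0f * 3.14159265f * u2;
    float x = r * std::cos(phi), y = r * std::sin(phi);
    float z = std::sqrt(1.0f - u1);      // cos(theta)
    Vec3 t, b;
    buildBasis(n, t, b);
    return Vec3{ t.x*x + b.x*y + n.x*z,
                 t.y*x + b.y*y + n.y*z,
                 t.z*x + b.z*y + n.z*z };
}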

[Edit] Tons of typos. Damn, it's already 4am. I'm going to bed.

/ Yann

[edited by - Yann L on July 31, 2002 10:08:00 PM]

I haven't implemented a radiosity system in a long time, but they are notoriously slow algorithms and are not used much these days.

If you want something that performs well, you should investigate some recent papers on ray tracing using hardware. There are many techniques that use hardware to achieve radiosity-like results. Some techniques render the scene from each polygon's perspective (I had a friend implement a radiosity engine this way once). Others are more complicated. A GF4 can render 60 million triangles a second.

Most of the current research is on achieving realistic radiosity-style results in real time. These are author-time (precompute) algorithms, and are generally not optimized for performance, since computing power is so good these days.

And, quite frankly, most professionals don't use this kind of radiosity anymore.





quote:

If you want something that performs well, you should investigate some recent papers on ray tracing using hardware. There are many techniques that use hardware to achieve radiosity-like results. Some techniques render the scene from each polygon's perspective


That's not raytracing, that's called the hemicube method. And it's notoriously imprecise, and not an option for our requirements.
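(For reference: the hemicube approach rasterizes the scene onto the five faces of a half-cube above the patch and sums per-pixel delta form factors; for a pixel at (x, y) on the top face of a unit half-cube with pixel area \Delta A, the weight is

$$\Delta F = \frac{\Delta A}{\pi \,(x^2 + y^2 + 1)^2}$$

The imprecision comes mainly from the finite pixel resolution and from aliasing of small or distant patches.)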

quote:

And, quite frankly, most professionals don't use this kind of radiosity anymore


Which one?

Actually, radiosity is one of the hottest issues in the professional visualization field, and it's getting used more and more often, especially since parallel computing systems are widely available. Radiosity is the method to create really photorealistic images; there is no way around it. The current industry standard in radiosity is progressive refinement, because it's reasonably fast and produces the most accurate results. Most professional radiosity packages use it (e.g. Lightscape). Hardware-assisted approximations (such as hemicubes) are not used in the professional domain, simply because they are not accurate enough.
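For anyone unfamiliar with it, the heart of a progressive refinement solver is a shooting loop like the bare-bones sketch below (form factors are assumed precomputed here for brevity; a real implementation computes them on the fly with rays or hemicubes and adds adaptive subdivision):

#include <cstddef>
#include <vector>

struct Patch {
    float emission[3];      // E_i
    float reflectance[3];   // rho_i
    float area;
    float radiosity[3];     // B_i, accumulated result
    float unshot[3];        // delta B_i still waiting to be shot
};

// One progressive-refinement iteration: pick the patch with the most unshot
// energy and distribute it to every other patch through the form factors.
// formFactor[i][j] = F_ij (assumed precomputed).
void shootOnce(std::vector<Patch>& patches,
               const std::vector<std::vector<float>>& formFactor)
{
    // 1. Find the shooter: largest unshot power (radiosity * area).
    std::size_t shooter = 0;
    float best = -1.0f;
    for (std::size_t i = 0; i < patches.size(); ++i) {
        float e = (patches[i].unshot[0] + patches[i].unshot[1] + patches[i].unshot[2]) * patches[i].area;
        if (e > best) { best = e; shooter = i; }
    }

    // 2. Distribute its unshot radiosity to all receivers.
    const Patch& s = patches[shooter];
    for (std::size_t j = 0; j < patches.size(); ++j) {
        if (j == shooter) continue;
        // Reciprocity: F_ji = F_ij * A_i / A_j
        float Fji = formFactor[shooter][j] * s.area / patches[j].area;
        for (int c = 0; c < 3; ++c) {
            float dB = patches[j].reflectance[c] * Fji * s.unshot[c];
            patches[j].radiosity[c] += dB;
            patches[j].unshot[c]    += dB;
        }
    }

    // 3. The shooter's energy has now been shot.
    for (int c = 0; c < 3; ++c) patches[shooter].unshot[c] = 0.0f;
}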

/ Yann


quote:

That's not raytracing, that's called the hemicube method. And it's notoriously imprecise, and not an option for our requirements.


Nice observation. No, it's not, and yes, there are real-time ray tracing algorithms implemented in hardware (presented at SIGGRAPH this year). Most professionals (that I know) get away with solutions that researchers might consider 'hacky' but give far more controllable results. I don't know anyone who actually uses Lightscape (which, incidentally, hasn't been updated in years).







