
Yann L

Wavelet radiosity


OK, so my current radiosity system (slightly modified progressive refinement, with adaptive substructuring) doesn't scale very well with my current project... While the visual results are good, it just takes too long to compute on a very high polycount scene. So I'm currently investigating alternative algorithms. I just finished implementing a stochastic solver (using Monte Carlo random walks). It definitely is faster (a lot), but the visual quality isn't as good as expected.

I was thinking about trying wavelet radiosity. From what I've heard/read, the visual results are supposed to be excellent, and convergence is much faster than with standard PR radiosity. But the implementation is far from trivial, and will probably take some time. So I'd be interested whether anyone here has ever implemented a wavelet radiosity algorithm. What are your experiences with it, regarding the quality/performance ratio (esp. on complex scenes, 1+ million faces)? Is it worth the trouble? The info I could find on the net is unfortunately very obfuscated, to say the least...

Or does anyone know of a good site objectively comparing the quality/performance of advanced radiosity algorithms (on a standardized model, e.g. the Cornell box)?

Thanks!

/ Yann

[edited by - Yann L on July 28, 2002 12:56:47 PM]
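PS: for anyone wondering what I mean by 'Monte Carlo random walks', the solver boils down to something like this. A stripped-down sketch with made-up types, not my actual code:

```cpp
#include <cstdlib>
#include <vector>

// Made-up minimal types, only to show the shape of the algorithm.
struct Patch {
    float reflectance;  // diffuse albedo (scalar here, RGB in practice)
    float emission;     // self-emitted power
    float collected;    // power deposited by the walks
};

static float frand() { return std::rand() / (float)RAND_MAX; }

// One random walk: a particle carrying the emitter's power bounces
// through the scene. Each hit deposits power on the patch it lands on;
// Russian roulette (survive with probability = reflectance) terminates
// the walk without biasing the estimate. 'traceBounce' stands in for
// the actual cosine-distributed ray cast (returns -1 if the ray leaves
// the scene). Averaging many such walks converges to the radiosity.
void randomWalk(std::vector<Patch>& patches, int emitter,
                int (*traceBounce)(int fromPatch))
{
    float power = patches[emitter].emission;
    int   cur   = emitter;
    for (;;) {
        int next = traceBounce(cur);
        if (next < 0) break;                               // escaped
        patches[next].collected += power;                  // deposit
        if (frand() > patches[next].reflectance) break;    // absorbed
        cur = next;                                        // survived
    }
}
```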

As I don't see much point in radiosity as it is done today (diffuse radiosity), I have never implemented those algorithms. I don't even understand most parts of them either...

I'm in favor of BRDFs/shaders that describe the material; you then render iteratively to accumulate incoming and outgoing light. That way you get all the features...

Try Monte Carlo with the Metropolis technique; if you get it working, you're the third person in the world.

"take a look around" - limp bizkit
www.google.com

Yann: I can't really answer your question, but I have another one.

Radiosity models diffuse interaction. All realtime engines I know of that use radiosity precalculate radiosity lightmaps and then just add specular interaction on top. I suppose non-realtime engines use raytracing for the specular calculations and then combine the results. This approach cannot give correct results, because at any given point diffuse light can become specular and specular can become diffuse (assuming you're breaking the lighting up into the diffuse/specular extremes; I couldn't find/think of any other way of doing it on current hardware). We're not even close to getting rid of the diffuse/specular extremes and implementing "true" raytracing, where each ray correctly breaks up into many rays when it hits the surface.

Do you know of any models that have been developed to correctly merge radiosity and raytracing?

quote:
Original post by kill
Do you know of any models that have been developed to correctly merge radiosity and raytracing?


There are plenty of good articles about this to be found in the SIGGRAPH proceedings. Do you have access to an academic library?



Don't listen to me. I've had too much coffee.

Guest Anonymous Poster
> each ray correctly breaks up into many rays when it hits the surface.

That's pretty standard for non-realtime, low-poly scenes.
(You already wrote "many", i.e. simulated, not infinite/real.)

quote:

Radiosity models diffuse interaction. All realtime engines I know of that use radiosity precalculate radiosity lightmaps and then just add specular interaction on top. I suppose non-realtime engines use raytracing for the specular calculations and then combine the results.


Yep, that's a common technique.

quote:

This approach cannot give correct results, because at any given point diffuse light can become specular and specular can become diffuse (assuming you're breaking the lighting up into the diffuse/specular extremes; I couldn't find/think of any other way of doing it on current hardware).


Well, the whole 'diffuse reflector' idea is inherently flawed; you don't have 100% diffuse surfaces in reality. But it's the only way to make radiosity computable on current HW, and it makes the solution view-independent. There is a lot of ongoing research on the specular issue. Currently, the best approach is to drop the diffuse/specular split altogether (it was just a hack anyway) and replace the view-independent reflectance factor of every surface with a BRDF, which of course makes the radiosity solution view-dependent. You also have to drop the RGB idea and trace the complete wavelength spectrum separately, since BRDFs are highly wavelength-dependent.
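To make that concrete, a toy sketch (the material model and all constants are invented, purely illustrative): the single view-independent factor becomes a function of two directions and the wavelength.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b)
{ return a.x*b.x + a.y*b.y + a.z*b.z; }

// Classic radiosity: one view-independent reflectance per surface and
// colour band. That single number is what a BRDF replaces.
const float kReflectance = 0.6f;

// BRDF version: reflectance depends on the incoming direction 'in'
// (pointing towards the surface), the outgoing direction 'out', the
// surface normal 'n' and the wavelength in nm. Toy model: a
// wavelength-tinted Lambert term plus a Phong lobe around the mirror
// direction (all constants are placeholders).
float brdf(const Vec3& in, const Vec3& out, const Vec3& n, float nm)
{
    float diffuse = kReflectance * (nm / 700.0f) / 3.14159265f;
    Vec3 r = { in.x - 2*dot(in,n)*n.x,      // mirror of 'in' about 'n'
               in.y - 2*dot(in,n)*n.y,
               in.z - 2*dot(in,n)*n.z };
    float spec = 0.3f * std::pow(std::max(0.0f, dot(r, out)), 20.0f);
    return diffuse + spec;
}
```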

There is an interesting approach using recursive photon maps and form factor mapping. The idea is to raytrace the photon flux through pre-computed diffuse form factors, while taking view-dependent properties into account. Again, it's only an approximation (and it takes huge amounts of memory), but you get radiosity quality with specular light, caustics, etc. And it's very non-realtime...

quote:

We're not even close to getting rid of the diffuse/specular extremes and implementing "true" raytracing, where each ray correctly breaks up into many rays when it hits the surface.


This is more or less what radiosity does, only without the specular component. If you don't use hemicubes, you'll have to shoot rays anyway. The Monte Carlo technique uses explicit raytracing/bouncing (which leads to ugly noise artifacts if the scene is 'underbounced' by shooting too few rays). The difference is that Monte Carlo uses random bounces to simulate diffuse reflection (and iterates towards the correct result through probabilities), whereas the view-dependent (specular) solution would require (directional) BRDFs.
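For completeness, the standard way those random diffuse bounces are generated is cosine-weighted hemisphere sampling, so every ray can carry equal weight. A sketch (building the tangent frame around the normal is assumed to happen elsewhere):

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };
static float frand() { return std::rand() / (float)RAND_MAX; }

// Sample a bounce direction with density proportional to cos(theta)
// around the normal 'n' (Malley's method: sample the unit disk
// uniformly, then project up onto the hemisphere). 't' and 'b' are two
// tangents completing an orthonormal frame with 'n'.
Vec3 cosineSampleHemisphere(const Vec3& n, const Vec3& t, const Vec3& b)
{
    float u1 = frand(), u2 = frand();
    float r   = std::sqrt(u1);            // radius on the unit disk
    float phi = 6.2831853f * u2;          // 2*pi*u2
    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    float z = std::sqrt(1.0f - u1);       // lift the disk point up
    return Vec3{ x*t.x + y*b.x + z*n.x,
                 x*t.y + y*b.y + z*n.y,
                 x*t.z + y*b.z + z*n.z };
}
```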

[Edit] Tons of typos. Damn, it's already 4am. I'm going to bed.

/ Yann

[edited by - Yann L on July 31, 2002 10:08:00 PM]

I haven't implemented a radiosity system in a long time, but they are notoriously slow algorithms and are not used much these days.

If you want something that performs well, you should investigate some recent papers on ray-tracing using hardware. There are many techniques using hardware to achieve radiosity-like results. Some techniques render the scene from each polygon's perspective (I had a friend once implement a radiosity engine this way). Others are more complicated. A GF4 can render 60 million triangles a second.

Most of the current research is on achieving realistic radiosity-style results in real time. These are author-time algorithms (pre-compute) and are not generally optimized for performance, since computing power is so good these days.

And, quite frankly, most professionals don't use this kind of radiosity anymore.





quote:

If you want something that performs well, you should investigate some recent papers on ray-tracing using hardware. There are many techniques using hardware to achieve radiosity-like results. Some techniques render the scene from each polygon's perspective


That's not raytracing; that's called the hemicube method. And it's notoriously imprecise, and not an option for our requirements.

quote:

And, quite frankly, most professionals don't use this kind of radiosity anymore


Which one?

Actually, radiosity is one of the hottest issues in the professional visualization field, and it's getting used more and more often, especially since parallel computing systems are widely available. Radiosity is the method for creating really photorealistic images; there is no way around it. The current industry standard in radiosity is progressive refinement, because it's reasonably fast and produces the most accurate results. Most professional radiosity packages use it (e.g. Lightscape). Hardware-assisted approximations (such as hemicubes) are not used in the professional domain, simply because they are not accurate enough.
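For readers who haven't implemented it: the heart of progressive refinement is 'always shoot from the patch holding the most unshot power'. A bare-bones sketch (all names invented, the form factor evaluation is assumed to exist):

```cpp
#include <vector>

// Invented minimal patch record for the sketch.
struct Patch {
    float area;
    float reflectance;
    float radiosity;   // emitted + gathered so far
    float unshot;      // power not yet distributed to the scene
};

// One progressive-refinement step: pick the patch with the largest
// unshot power and shoot it at every other patch through the form
// factors. 'formFactor(i, j)' is assumed (traced or analytic). Each
// step visibly improves the image, which is what makes the method
// progressive: you can stop whenever the residual power is negligible.
void shootOnce(std::vector<Patch>& p, float (*formFactor)(int, int))
{
    const int n = (int)p.size();
    int shooter = 0;
    for (int i = 1; i < n; ++i)
        if (p[i].unshot * p[i].area > p[shooter].unshot * p[shooter].area)
            shooter = i;

    for (int j = 0; j < n; ++j) {
        if (j == shooter) continue;
        // reciprocity: F(j->shooter) = F(shooter->j) * A_shooter / A_j
        float dB = p[j].reflectance * p[shooter].unshot
                 * formFactor(shooter, j) * (p[shooter].area / p[j].area);
        p[j].radiosity += dB;
        p[j].unshot    += dB;
    }
    p[shooter].unshot = 0.0f;   // everything has been distributed
}
```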

/ Yann


quote:

That's not raytracing; that's called the hemicube method. And it's notoriously imprecise, and not an option for our requirements.


Nice observation. No, it's not, and yes, there are real-time ray tracing algorithms implemented in hardware (presented at SIGGRAPH this year). Most professionals (that I know) get away with solutions that researchers might consider 'hacky' but give far more controllable results. I don't know anyone who actually uses Lightscape (which, incidentally, hasn't been updated in years).







quote:

Most professionals (that I know) get away with solutions that researchers might consider 'hacky' but give far more controllable results.


Radiosity itself is already a huge hack; no need to make it even worse using hemicubes. Hemicubes are totally uncontrollable, that's exactly the problem. If you use a radiosity solution that computes form factors by tracing them, you can be pretty certain to get a good result. You can't if you use hemicubes; you'll always have to run tests first: are there aliasing problems, resolution issues, etc.?
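To be concrete about what 'computing form factors by tracing them' means: you estimate the form factor integral from explicit sample pairs, so every error source is visible and controllable. A sketch (visibility ray cast assumed, normals assumed unit length):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  sub(const Vec3& a, const Vec3& b)
{ return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static float dot(const Vec3& a, const Vec3& b)
{ return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Sample { Vec3 pos, normal; };   // one point on a patch

// Differential form factor kernel between two surface points:
//     cos(theta_a) * cos(theta_b) / (pi * r^2), times visibility.
// Averaged over many sample pairs and multiplied by the receiving
// patch's area, this converges to the patch-to-patch form factor.
// Every term is explicit, so the error behaviour is predictable:
// no image-space aliasing, unlike a hemicube render.
float formFactorKernel(const Sample& a, const Sample& b,
                       bool (*visible)(const Vec3&, const Vec3&))
{
    Vec3  d  = sub(b.pos, a.pos);
    float r2 = dot(d, d);
    float ca =  dot(a.normal, d);   // cos(theta_a) * |d|
    float cb = -dot(b.normal, d);   // cos(theta_b) * |d|
    if (ca <= 0.0f || cb <= 0.0f) return 0.0f;   // facing away
    if (!visible(a.pos, b.pos))    return 0.0f;  // occluded
    return (ca * cb) / (3.14159265f * r2 * r2);  // extra r2 removes |d|^2
}
```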

Now, running tests and using hacked algorithms might be OK for small-scale scenes, but not for the type of scenes we use. Our current project uses over 580 million faces. And we need a predictable radiosity solution that will work. We rent processing time on a massively parallel cluster to compute it, and it has to work without tricks and hacks. If the resulting radiosity solution isn't exactly as expected, we lose several $100K. And since our goal is maximum photorealism (with exact lightflow simulation), there is no way around radiosity.

And BTW: Lightscape is used by lots of high-profile architectural/design firms to get photorealistic renderings. Its quality is absolutely top-notch; I'm not aware of any other software delivering renderings of that quality. The only (big) drawback is the computation time. That's why we use in-house software instead.

/ Yann

[edited by - Yann L on August 1, 2002 10:22:01 PM]

quote:
Original post by Yann L
And since our goal is maximum photorealism (with exact lightflow simulation), there is no way around radiosity.


I don't see any point in using radiosity then; it is by no means an exact or correct lightflow simulation. I'd suggest going with photons. Since they are in fact simply a particle system that acts like light particles, you can get the complete, correct lighting simulation from them quite simply. Just store per patch a function of direction over the hemisphere: how much light comes in from which direction. You can use this (a) to do simple diffuse lighting in the end, so you get all diffuse-specular, specular-specular, etc. interreflections except the last one (refractions and such still work, only static), or (b) use this hemispherical function of direction directly as input for your BRDF materials, resulting in fully correct global illumination. This can even be drawn quite fast with today's pixel shaders, BTW.
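In code, the per-patch 'function of direction' could look as simple as this. A naive binned version, purely to illustrate the idea (the compression I talk about below would replace the raw bins):

```cpp
#include <cmath>

// Per-patch record of incoming light as a function of direction. Here
// the hemisphere is simply binned in theta/phi; a real system would
// compress this (see the spherical harmonics discussion below).
const int THETA_BINS = 8, PHI_BINS = 16;

struct IncidentLight {
    float bin[THETA_BINS][PHI_BINS];   // incoming power per direction cell

    // Photon tracing pass: deposit a photon arriving from direction
    // (theta, phi) in the patch's local frame.
    void deposit(float theta, float phi, float power) {
        int t = (int)(theta / (1.5707963f / THETA_BINS));   // pi/2 range
        int p = (int)(phi   / (6.2831853f / PHI_BINS));     // 2*pi range
        if (t >= 0 && t < THETA_BINS && p >= 0 && p < PHI_BINS)
            bin[t][p] += power;
    }

    // Option (a): collapse to plain diffuse lighting by cosine-weighting
    // the incoming directions. Option (b) would instead feed bin[][]
    // through the material's BRDF for the current view direction.
    float diffuseIrradiance() const {
        float sum = 0.0f;
        for (int t = 0; t < THETA_BINS; ++t)
            for (int p = 0; p < PHI_BINS; ++p)
                sum += bin[t][p]
                     * std::cos((t + 0.5f) * 1.5707963f / THETA_BINS);
        return sum;
    }
};
```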

Radiosity is cool, but by no means realistic.

"take a look around" - limp bizkit
www.google.com

quote:

Just store per patch a function of direction over the hemisphere: how much light comes in from which direction.


We have around 10 billion patches in that scene. I currently range-compress the final diffuse lighting of each patch to 2 bytes, and that's over 20 GB of data (imagine the swapping). If I stored a direction-dependent photon map per patch (keep in mind that to be realistic, especially with speculars, it has to be well sampled over the hemisphere), I'd easily go into the terabytes.
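To put numbers on that (back-of-the-envelope; the 16x16 hemisphere binning is my assumption, and even that is coarse for speculars):

```cpp
#include <cstdio>

int main()
{
    const double patches = 10e9;       // ~10 billion patches
    const double bytes   = 2.0;        // range-compressed value per patch
    const double dirBins = 16 * 16;    // a coarse 16x16 hemisphere

    std::printf("diffuse only      : %4.0f GB\n", patches * bytes / 1e9);
    std::printf("with a hemisphere : %4.1f TB\n",
                patches * bytes * dirBins / 1e12);
    return 0;   // prints 20 GB and 5.1 TB
}
```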

quote:

or use this hemispherical function of direction directly as input for your BRDF materials, resulting in fully correct global illumination.


Yes, that would of course be optimal. But again, it would require a dedicated BRDF per patch, since it has to take into account occlusion/visibility from other patches. You could store a general BRDF for the material and a separate directional visibility function per patch (and combine them on the fly), but that doesn't change the main problem: memory.

quote:

Radiosity is cool, but by no means realistic.


Well, it is realistic in diffuse-only environments. For the exact lightflow, all we need is diffuse lighting (for now). Of course, as I said in an earlier post above, radiosity isn't the optimal system; it's just a diffuse-only hack. But for large-scale scenes, it's the only method of computing global illumination without blowing up memory/CPUs. Photon tracing is cool (and I've already done some interesting experiments with it), but consider the number of photons and bounces you require for a precise solution. A good radiosity implementation will beat a photon tracer any day in terms of memory requirements and computation speed.

On the other hand: I don't see why we are fighting over terminology. Radiosity and photon tracing are more or less the same thing. Radiosity is just a special subset of a photon tracer that simply uses a uniform distribution instead of a precise BRDF. It's a special-case approximation, that's all. Wasn't there someone above who suggested using 'dirty' approximations, if they look good and are fast?

/ Yann

[edited by - Yann L on August 2, 2002 8:37:21 AM]

You can use material BRDFs without problems. You get the visibility component automatically from the function of direction, which defines how much light comes from where.

This function of direction can be compressed with spherical harmonics and doesn't need to be stored on each patch either... You can store accurate lighting with less than one bit per patch; you just have to know what you need to drop.

You say radiosity is a plain hack. Yes, it is. So why not hack again, with hemicubes? You can even get specular interreflections of the BRDFs with it, no problem. => it's a hack of the REAL lighting, not a hack of radiosity.

Go for Metropolis.

But I think that, for radiosity, the wavelet approach claims to be the fastest; at least, in the documents I've read it always performed best.

"take a look around" - limp bizkit
www.google.com

quote:

You can use material BRDFs without problems. You get the visibility component automatically from the function of direction, which defines how much light comes from where.


But then we're hacking again. And dropping directional photon bounces that might be blocked for a certain patch over a certain part of the hemisphere...

[Edit] Oh, you mean in the tracing (precalc) pass? Yes, that would work, but as soon as you include BRDFs, you'll no longer be view-independent. That means you'll have to include BRDFs on the realtime side as well, with other problems (see below).

quote:

This function of direction can be compressed with spherical harmonics and doesn't need to be stored on each patch either... You can store accurate lighting with less than one bit per patch; you just have to know what you need to drop.


Well, yes, you need to compress a BRDF anyway if you want to render it on current consumer-level hardware, since it is essentially a 4D function, but the GF4 unfortunately has no 4D texturing support. But from what I've seen until now, the attempts to render full diffuse/specular light through a single constant BRDF have always looked very fake to me. If you have a paper describing a better approach, let me know.

quote:

You say radiosity is a plain hack. Yes, it is. So why not hack again, with hemicubes? You can even get specular interreflections of the BRDFs with it, no problem. => it's a hack of the REAL lighting, not a hack of radiosity.


Hemicubes are just plain horrible. They are not only a mathematical or physical hack (like radiosity, and I could live with that), but they introduce unpredictable imagespace errors: all kinds of aliasing.

quote:

Go for Metropolis.


Yep, Metropolis light transport is a highly interesting idea. But it will be rather hard to implement, I guess (and hard to make stable on a complex scene).

quote:

But I think that, for radiosity, the wavelet approach claims to be the fastest; at least, in the documents I've read it always performed best.


Actually, I dropped the idea. It's just not worth the trouble implementing, and there are some serious (visual) flaws in the method. I think I'll stay with my current Monte Carlo system and try to accelerate it further using some form of occlusion mapping on the rays. That way, I could bounce more rays and get a better-quality result in the same computation time. I might consider adding material BRDFs (since that would be rather easy to integrate), just to see what it does to the quality.

/ Yann

[edited by - Yann L on August 2, 2002 9:48:15 AM]

BRDFs have to be defined per material; how you do that is your problem. They simply describe material reflection properties. What I am talking about is: instead of storing all (infinite) incoming light strengths from each direction, compress them. In fact, that's a cubemap, or a hemicube, or a hemisphere, or whatever; it will always be unimportant how you do it (even if you raytrace, it's just a statistical approximation of the real scene). So better compress that function f(longitude, latitude) in some good way. Spherical harmonics are great for diffuse lighting, and for the specular part, a short array of the most important rays is good.
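As a sketch of the spherical harmonics part: project the sampled directional function onto a few basis functions and keep only the coefficients. Here just the first 4 (bands 0 and 1), which is enough for smooth diffuse lighting; 'incoming' and the sample directions stand in for whatever the tracer produced:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };   // assumed unit-length directions

// Real spherical harmonic basis, bands 0 and 1.
static float shBasis(int i, const Vec3& d)
{
    switch (i) {
        case 0:  return 0.282095f;            // Y_0,0
        case 1:  return 0.488603f * d.y;      // Y_1,-1
        case 2:  return 0.488603f * d.z;      // Y_1,0
        default: return 0.488603f * d.x;      // Y_1,1
    }
}

// Monte Carlo projection: average f(d) * Y_i(d) over n uniformly
// distributed directions, times the sphere's solid angle 4*pi. The
// whole directional function collapses to 4 floats per patch.
void projectSH(float (*incoming)(const Vec3&), const Vec3* dirs, int n,
               float coeff[4])
{
    for (int i = 0; i < 4; ++i) coeff[i] = 0.0f;
    for (int s = 0; s < n; ++s)
        for (int i = 0; i < 4; ++i)
            coeff[i] += incoming(dirs[s]) * shBasis(i, dirs[s]);
    for (int i = 0; i < 4; ++i)
        coeff[i] *= 4.0f * 3.14159265f / n;
}

// Reconstruction is just the dot product of the coefficients with the
// basis evaluated in the query direction.
float evalSH(const float coeff[4], const Vec3& d)
{
    float v = 0.0f;
    for (int i = 0; i < 4; ++i) v += coeff[i] * shBasis(i, d);
    return v;
}
```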

There are no hacks, only good and bad approximations... And the light distribution over a surface is very good to compress, as it only changes at, for example, shadows or some other places.

You need VERY LITTLE data over the surface. Not data per patch, but patches per data is the way to go, I think.

"take a look around" - limp bizkit
www.google.com

Oh OK, now I see what you mean. You want to approximate the incoming light over the surface by a spherical lookup. Little misunderstanding above; we were talking about different kinds of hemicubes...

You're right, for the realtime part it could be interesting.

But it won't change the precalc part; you still need to bounce around huge amounts of rays/photons. So it won't really help speed up that process (in fact, it will even slow it down, since you need to detect shadow boundaries for the spherical compression).

And I'm somewhat sceptical when it comes to quality. Yes, a smooth surface without shadows will most likely compress very well, and the results will be pretty accurate. But a complex environment can create very complex shadow discontinuities. You could of course detect them and create multiple hemispheres to approximate the surface, but to get really good quality you'd need lots of them.

Have you implemented the method? I'd be interested in seeing some screenshots and compression statistics.

/ Yann

[edited by - Yann L on August 2, 2002 3:14:34 PM]

I haven't implemented anything; I'm just talking off the top of my head.

But what I do know is that I've seen a lot of Stanford and co. papers and implementations of different lighting approximation techniques, and it's impressive how much you can compress, and how low the precision you calculate with can be, if you do it in the right place.

"take a look around" - limp bizkit
www.google.com

OK

But the idea is definitely interesting. I guess you could get rather good diffuse and specular global illumination in realtime using such a system.

But I still see a lot of question marks hanging around.

A problem, for example, is the hardware scalability. Such a system would only be interesting if it was implemented 100% in hardware. Now, on a GF3/4, you'd need two texture units for the BRDF (2 cubemaps approximating the 4D BRDF function). Then another one, perhaps even two, for the spherical incidence map. Not much room left for the base texture, bumpmap, etc. So, by definition, it would be a multipass algorithm. Not necessarily a problem, but fillrate /= 2, polycount /= 2.

Other thing: if you approximate surface lighting by hemispheres (or cubes), and say you split the faces along discontinuities, you'd get a really big number of such cubemaps. Imagine the state-changing hell...

But yes, the idea is definitely worth investigating. Do you have any good research papers on that?

Project update: I'll definitely go with Monte Carlo for this one. The deadline is in 3 weeks, and the cluster is rented in 10 days. I guess I can whip up a good Monte Carlo tracer in that time, but a Metropolis photon pathtracer in 10 days? You are a god if you can do that.

/ Yann
