Wavelet radiosity

Started by
17 comments, last by Yann L 21 years, 8 months ago
quote:
Most professionals (that I know) get away with solutions that researchers might consider 'hacky' but give far more controllable results.

Radiosity itself is already a huge hack; no need to make it even worse by using hemicubes. Hemicubes are totally uncontrollable, and that's exactly the problem. If you use a radiosity solution that computes form factors by tracing rays, you can be pretty certain of a good result. You can't if you use hemicubes: you'll always have to run tests first. Are there any aliasing problems, resolution issues, etc.?
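As an aside, the "traced form factor" idea is easy to sanity-check numerically. Below is a minimal sketch, not the poster's actual implementation; the geometry, function name, and sample count are invented for illustration. It estimates the form factor from a differential element to a parallel coaxial disk by Monte Carlo sampling, a configuration where the analytic answer R²/(R²+h²) is known:

```python
import math
import random

def formfactor_mc(h, R, n=100_000, seed=1):
    """Monte Carlo form factor from a differential element at the origin
    (normal +z) to a parallel coaxial disk of radius R at height h.
    Hypothetical example; analytic answer is R^2 / (R^2 + h^2)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        # uniform sample on the receiving disk
        rho = R * math.sqrt(rng.random())
        phi = 2 * math.pi * rng.random()
        x, y = rho * math.cos(phi), rho * math.sin(phi)
        d2 = x * x + y * y + h * h          # squared distance to sample
        cos_i = cos_j = h / math.sqrt(d2)   # the two surfaces face each other
        acc += cos_i * cos_j / (math.pi * d2)
    area = math.pi * R * R
    return area * acc / n

print(formfactor_mc(1.0, 1.0))   # analytic value is 0.5
```

With enough samples the estimate converges to the analytic value, which is exactly the kind of predictability the post is arguing for: no image-space aliasing, just controllable variance.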

Now, running tests and using hacked algorithms might be OK for small-scale scenes, but it is not for the type of scenes we use. Our current project uses over 580 million faces, and we need a predictable radiosity solution that will work. We rent processing time on a massively parallel cluster to compute it, and it has to work without tricks and hacks. If the resulting radiosity solution isn't exactly as expected, we lose several $100K. And since our goal is maximum photorealism (with exact lightflow simulation), there is no way around radiosity.

And BTW: Lightscape is used by lots of high-profile architectural/design firms to get photorealistic renderings. Its quality is absolutely top-notch; I'm not aware of any other software delivering renderings of that quality. The only (big) drawback is the computation time. That's why we use in-house software instead.

/ Yann

[edited by - Yann L on August 1, 2002 10:22:01 PM]
quote:Original post by Yann L
And since our goal is maximum photorealism (with exact lightflow simulation), there is no way around radiosity.


I don't see any point in using radiosity then; it is by no means an exact or correct lightflow simulation. I'd suggest going with photons. Since they are essentially a particle system that behaves like light particles, you can get the complete, correct lighting simulation from them quite simply. Just store per patch a function of direction over the hemisphere: how much light comes in from which direction. You can use this (a) to do simple diffuse lighting at the end, so you get all the diffuse-specular, specular-specular, etc. interreflections except the last bounce (refractions and such still work, only statically), or (b) you use this hemispherical function of direction directly as input to your BRDF materials, resulting in fully correct global illumination. This can even be drawn quite fast with today's pixel shaders, by the way.

Radiosity is cool, but by no means realistic.

"take a look around" - limp bizkit
www.google.com
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

quote:
just store per patch a function of direction over the hemisphere: how much light comes in from which direction

We have around 10 billion patches in that scene. I currently range-compress the final diffuse lighting of each patch to 2 bytes, and that's over 20 GB of data (imagine the swapping). If I stored a direction-dependent photon map per patch (keep in mind that, to be realistic especially with speculars, it has to be well sampled over the hemisphere), I'd easily go into the terabytes.
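The back-of-the-envelope arithmetic behind that claim can be sketched as follows. The 16x8 hemisphere grid is an invented example resolution, not a figure from the thread:

```python
patches = 10_000_000_000           # ~10 billion patches, per the post
diffuse = patches * 2              # 2 bytes of range-compressed diffuse each
print(diffuse / 2**30, "GiB")      # about 18.6 GiB, i.e. roughly 20 GB decimal

# hypothetical directional storage: even a coarse 16x8 hemisphere grid
samples = 16 * 8
directional = patches * samples * 2
print(directional / 2**40, "TiB")  # about 2.3 TiB -- "into the terabytes"
```

So even a very coarse directional sampling multiplies the storage by the sample count, which is why the per-patch photon map is dismissed on memory grounds.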

quote:
or you use this hemispherical function of direction directly as input to your BRDF materials, resulting in fully correct global illumination.

Yes, that would of course be optimal. But again, it would require a dedicated BRDF per patch, since it has to take into account occlusion/visibility from other patches. You could store a general BRDF for the material and a separate directional visibility function per patch (and combine them on the fly), but that doesn't change the main problem: memory.

quote:
Radiosity is cool, but by no means realistic.

Well, it is realistic in diffuse-only environments. For the exact lightflow, all we need is diffuse lighting (for now). Of course, as I said in an earlier post above, radiosity isn't the optimal system, it's just a diffuse-only hack. But for large-scale scenes, it's the only method of computing global illumination without blowing up memory/CPUs. Photon tracing is cool (and I've already done some interesting experiments with it), but consider the number of photons and bounces you require for a precise solution. A good radiosity implementation will beat a photon tracer any day in terms of memory requirements and computation speed.
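For context, the diffuse-only system being defended here is the classic radiosity balance B_i = E_i + rho_i * sum_j F_ij B_j, which can be solved by iterative gathering. A minimal sketch with invented two-patch form factors (not scene data from the thread):

```python
def solve_radiosity(emission, reflectance, F, iters=100):
    """Jacobi-style gathering: B_i = E_i + rho_i * sum_j F_ij * B_j.
    F[i][j] is the form factor from patch i to patch j (toy values here)."""
    n = len(emission)
    B = emission[:]                  # start from the emitted light only
    for _ in range(iters):
        B = [emission[i] +
             reflectance[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# two facing patches: patch 0 emits, both reflect half the incoming light
E = [1.0, 0.0]
rho = [0.5, 0.5]
F = [[0.0, 0.2], [0.2, 0.0]]       # made-up form factors for illustration
print(solve_radiosity(E, rho, F))  # converges to [1/0.99, 0.1/0.99]
```

Because reflectances are below one, the iteration is a contraction and converges regardless of starting point, which is part of why radiosity is considered predictable on large scenes.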

On the other hand, I don't see why we are fighting over terminology. Radiosity and photon tracing are more or less the same thing. Radiosity is just a special subset of a photon tracer, one that simply uses a uniform distribution instead of a precise BRDF. It's a special-case approximation, that's all. Wasn't there someone above who suggested using 'dirty' approximations, if they look good and are fast?

/ Yann

[edited by - Yann L on August 2, 2002 8:37:21 AM]
You can use material BRDFs without problems. You gain the visibility component automatically from the function of direction, which defines how much light comes from where.

This function of direction can be compressed with spherical harmonics and doesn't need to be stored on each patch either. You can store accurate lighting with less than one bit per patch; you just have to know what you can afford to drop.
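The spherical harmonics compression being proposed amounts to projecting the incoming-light function onto a few basis functions and keeping only the coefficients. A minimal sketch with just the first two real SH basis functions (the function being projected, a cosine lobe around +z, is an invented example):

```python
import math
import random

# first two real spherical harmonics: constant term and the z lobe
Y = [lambda d: 0.282095,            # l=0, m=0
     lambda d: 0.488603 * d[2]]     # l=1, m=0

def project_sh(f, n=200_000, seed=1):
    """Project a spherical function f(direction) onto the basis in Y
    by uniform Monte Carlo integration over the sphere."""
    rng = random.Random(seed)
    coeffs = [0.0] * len(Y)
    for _ in range(n):
        z = 2 * rng.random() - 1            # uniform direction on the sphere
        phi = 2 * math.pi * rng.random()
        s = math.sqrt(1 - z * z)
        d = (s * math.cos(phi), s * math.sin(phi), z)
        fv = f(d)
        for i, y in enumerate(Y):
            coeffs[i] += fv * y(d)
    dw = 4 * math.pi / n                    # solid-angle weight per sample
    return [c * dw for c in coeffs]

# incoming-light function: a simple cosine lobe around +z
c = project_sh(lambda d: max(0.0, d[2]))
print(c)   # roughly [0.886, 1.023]
```

Two floats now stand in for the whole directional function, which is the sense in which "less than one bit per patch" becomes plausible once coefficients are also shared across patches.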

You say radiosity is a plain hack. Yes, it is. So why not hack again, with hemicubes? You can even get specular interreflections from the BRDFs with it, no problem. It's a hack of the REAL lighting, not a hack of radiosity.

Go for Metropolis.

But I think that, for radiosity, the wavelet approach claims to be the fastest; at least in the documents I've read, it always performed best.


quote:
You can use material BRDFs without problems. You gain the visibility component automatically from the function of direction, which defines how much light comes from where.

But then we're hacking again, and dropping directional photon bounces that might be blocked for a certain patch over a certain part of the hemisphere...

[Edit] Oh, you mean in the tracing (precalc) pass? Yes, that would work, but as soon as you include BRDFs, you'll no longer be view-independent. That means you'll have to include BRDFs on the realtime side as well, which brings other problems (see below).

quote:
This function of direction can be compressed with spherical harmonics and doesn't need to be stored on each patch either. You can store accurate lighting with less than one bit per patch; you just have to know what you can afford to drop.

Well, yes, you need to compress a BRDF anyway if you want to render it on current consumer-level hardware, since it is essentially a 4D function, and the GF4 unfortunately has no 4D texturing support. But from what I've seen until now, attempts to render full diffuse/specular light through a single constant BRDF have always looked very fake to me. If you have a paper describing a better approach, let me know.

quote:
You say radiosity is a plain hack. Yes, it is. So why not hack again, with hemicubes? You can even get specular interreflections from the BRDFs with it, no problem. It's a hack of the REAL lighting, not a hack of radiosity.

Hemicubes are just plain horrible. They are not only a mathematical or physical hack (like radiosity, and I could live with that), but they also introduce unpredictable image-space errors: all kinds of aliasing.

quote:
Go for Metropolis.

Yep, Metropolis light transport is a highly interesting idea. But it will be rather hard to implement, I guess (and hard to make stable on a complex scene).

quote:
But I think that, for radiosity, the wavelet approach claims to be the fastest; at least in the documents I've read, it always performed best.

Actually, I dropped the idea. It's just not worth the trouble of implementing, and there are some serious (visual) flaws in the method. I think I'll stay with my current Monte Carlo system and try to accelerate it further using some form of occlusion mapping on the rays. That way, I could bounce more rays and get a better-quality result in the same computation time. I might consider adding material BRDFs (since that would be rather easy to integrate), just to see what it might do to the quality.
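The "bounce more rays" part of a Monte Carlo radiosity tracer usually relies on cosine-weighted hemisphere sampling for diffuse surfaces, so that rays are concentrated where they carry the most energy. A minimal sketch of that sampling step (the mapping used here is a standard one, but the function name and sample count are invented for illustration):

```python
import math
import random

def cosine_sample_hemisphere(rng):
    """Draw a direction about +z with pdf cos(theta)/pi, the usual
    importance distribution for diffuse bounces in a Monte Carlo tracer."""
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(u1)                 # sample a disk, then project up
    phi = 2 * math.pi * u2
    z = math.sqrt(max(0.0, 1.0 - u1))
    return (r * math.cos(phi), r * math.sin(phi), z)

rng = random.Random(1)
n = 100_000
mean_z = sum(cosine_sample_hemisphere(rng)[2] for _ in range(n)) / n
print(mean_z)   # expected value of cos(theta) under this pdf is 2/3
```

Because the pdf matches the cosine term in the diffuse transport integrand, each bounce needs far fewer rays for the same variance than uniform hemisphere sampling would.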

/ Yann

[edited by - Yann L on August 2, 2002 9:48:15 AM]
BRDFs have to be defined per material; how you do that is your problem. They simply describe material reflection properties. What I am talking about is this: instead of storing all the (infinite) incoming light strengths from each direction, compress that function. In effect it's a cubemap, or a hemicube, or a hemisphere, or whatever; it never matters how you represent it (even if you raytrace, it's just a statistical approximation of the real scene). So better to compress that function f(longitude, latitude) in some good way. Spherical harmonics are great for diffuse lighting, and for the specular part, a short array of the most important rays works well.

There are no hacks, only good and bad approximations. And the light distribution over a surface compresses very well, since it only changes in a few places, at shadows for example.

You need VERY LITTLE data over the surface. Not data per patch, but patches per data is the way to go, I think.


Oh OK, now I see what you mean. You want to approximate the incoming light over the surface by a spherical lookup. Little misunderstanding above: we were talking about different kinds of hemicubes...

You're right, for the realtime part it could be interesting.

But it won't change the precalc part; you still need to bounce around huge amounts of rays/photons. So it won't really help speed up this process (in fact, it will even slow it down, since you need to detect shadow boundaries for the spherical compression).

And I'm somewhat sceptical when it comes to quality. Yes, a smooth surface without shadows will most likely compress very well, and the results will be pretty accurate. But a complex environment can create very complex shadow discontinuities. You could of course detect them and create multiple hemispheres to approximate the surface, but to get really good quality you'd need lots of them.

Have you implemented the method? I'd be interested in seeing some screenshots and compression statistics.

/ Yann

[edited by - Yann L on August 2, 2002 3:14:34 PM]
I haven't implemented anything, just talking off the top of my head.

But what I do know is that I've seen a lot of Stanford-and-co. papers and implementations of various lighting approximation techniques, and it's impressive how much you can compress, and how low the precision you calculate with can be, if you do it in the right place.


OK

But the idea is definitely interesting. I guess you could get rather good diffuse and specular global illumination in realtime using such a system.

But I still see a lot of question marks hanging around.

A problem, for example, is the hardware scalability of that. Such a system would only be interesting if it were implemented 100% in hardware. Now, on a GF3/4, you'd need two texture units for the BRDF (2 cubemaps approximating the 4D BRDF function). Then another one, perhaps even two, for the spherical incidence map. Not much room left for the base texture, bumpmap, etc. So by definition, it would be a multipass algorithm. Not necessarily a problem, but fillrate /= 2, polycount /= 2.

Other thing: if you approximate surface lighting by hemispheres (or cubes), and say you split the faces along discontinuities, you'd get a really large number of such cubemaps. Imagine the state-changing hell...

But yes, the idea is definitely worth investigating. Do you have any good research papers on that ?

Project update: I'll definitely go with Monte Carlo for this one. Deadline is in 3 weeks, and the cluster is rented in 10 days. I guess I can whip up a good Monte Carlo tracer in that time, but a Metropolis photon path tracer in 10 days? You are a god if you can do that.

/ Yann

This topic is closed to new replies.
