From Classical Radiosity to ...

quote:Original post by davepermen
i'm all for photon-mapping:D not that radiosity is not a great thing by itself. somehow, for me, it's just not real lighting. not that the images don't look real, though:D photon-mapping fits my heart more. in the end, lighting IS just a huge particle-system..:D


In case you haven't done so already, you should take a look at the "The Light of Mies van der Rohe" movie on this page. The other movies are cool too. It's not easy to get these results though.
PS: It can be used to generate lightmaps (like precalculated radiosity), and that's exactly what Bungie is going to do for Halo 2.

[edited by - Koen on June 10, 2003 11:14:53 AM]
quote:
i'm all for photon-mapping:D not that radiosity is not a great thing by itself. somehow, for me, it's just not real lighting. not that the images don't look real, though:D photon-mapping fits my heart more. in the end, lighting IS just a huge particle-system..:D

Hehe, you're right (or maybe not, if we take particle/wave duality into account). Photon mapping is great. We just need better hardware support for it, so that fully dynamic, high-quality realtime implementations on complex geometry become possible.


[edited by - Yann L on June 10, 2003 11:20:06 AM]
yann, no, sh is not a gi solution. an approximation, but it cannot solve it. you can generate them with some gi solutions. but you cannot say a lightmap is global illumination. the way it's generated possibly is, though.

sh are another way to store what we already know as an extension to bumpmapping: horizon-mapping. and yes, there are better solutions for sh. their power is the relatively fast combination (multiplying the individual components), so you can combine an incoming-light sh with a "horizon-map" sh with just, say, 16 mults or so. that's quite cheap..
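roughly like this in code (just a sketch, names made up, one channel only):

// integrating the product of two sh-projected functions over the sphere
// collapses to a dot product of their coefficient vectors (4 bands = 16 coeffs)
float shDot(const float lightSH[16], const float horizonSH[16])
{
    float result = 0.0f;
    for (int i = 0; i < 16; ++i)
        result += lightSH[i] * horizonSH[i];   // the "16 mults" from above
    return result;
}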


the only issue with photon-mapping currently, for me, is that you should only use it for determining the indirect light, meaning not in the first trace, but in the second one. the first trace can be rendered hardware-accelerated. the second cannot. the photonmap lookup could be done hw-accelerated if it were done in the first pass.

that's the only thing missing, because tracing several tens of thousands of photons or more (and you can carry them along for some time by adding a lifetime to them) can be done in realtime for most scenes without bigger problems..
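the photon pass itself is nothing fancy, something along these lines (rough sketch, all types and helpers made up):

// emit 'count' photons from a light and scatter them through the scene
void emitPhotons(const Light& light, int count, Scene& scene, PhotonMap& map)
{
    const int MAX_BOUNCES = 8;                       // the "lifetime" from above
    for (int i = 0; i < count; ++i)
    {
        Photon p;
        p.power     = light.power / float(count);    // each photon carries a slice of the light's power
        p.position  = light.position;
        p.direction = randomDirection();             // e.g. cosine-weighted for a diffuse emitter

        for (int bounce = 0; bounce < MAX_BOUNCES; ++bounce)
        {
            Hit hit;
            if (!scene.trace(p.position, p.direction, hit))
                break;
            if (hit.material.isDiffuse())
                map.store(p, hit.position);          // only diffuse hits go into the map
            if (!russianRoulette(hit.material, p))   // kill the photon or scale its power and go on
                break;
            p.position  = hit.position;
            p.direction = scatterDirection(hit);
        }
    }
}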

if i get around the problem with the second trace, i think it'll be quite doable for at least q1-style levels. which would already be cool:D we'll see.....:D

"take a look around" - limp bizkit
www.google.com
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

quote:Original post by Koen

In case you haven't done so already, you should take a look at the "The Light of Mies van der Rohe" movie on this page. The other movies are cool too. It's not easy to get these results though.
PS: It can be used to generate lightmaps (like precalculated radiosity), and that's exactly what Bungie is going to do for Halo 2.


that thing is AMAZING! wow, thanks for the link. haven't checked Jensen's stuff for a while.. amazing movie.. wow..

i'll get that realtime!
well.. partially.. i hope:D

"take a look around" - limp bizkit
www.google.com
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

quote:Original post by davepermen
yann, no, sh is not a gi solution. an approximation, but it cannot solve it. you can generate them with some gi solutions. but you cannot say a lightmap is global illumination. the way it's generated possibly is, though.

Terminology: Yes, you can see a lightmap, as well as a set of SH coefficients, as an approximated GI solution for a specific set of static parameters. The fact that it doesn't take dynamic or view-dependent parameters into account is irrelevant. It is a solution for a specific camera position, a specific light and material set. This solution can be statically encoded, as diffuse radiosity or even photon maps, as you already mentioned above. The way the solution is initially acquired does not really matter either (several algorithms are interchangeable).

So just to make my point clear: a precomputed radiosity/SH map is a specific solution to the GI equation. Both lightmaps and SH maps are means to hold that solution. They are not solvers, though. With radiosity or SH, we can make the solution view-independent, which is nice (fast). With photon maps, it's more difficult (hello specularity).

quote:
sh are another way to store what we already know as an extension to bumpmapping: horizon-mapping.

Bump mapping is a hack, and has no physical counterpart. If we really want total physical realism, we need displacement mapping. And we have to see radiosity maps not as lightmaps, but as diffuse illumination distributions over a constant cosine BRDF.
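In code, that view is nothing more than this (minimal sketch, assuming the map stores irradiance and using a made-up Color type):

const float PI = 3.14159265f;

// treat the radiosity map as stored irradiance E and apply the constant
// Lambertian BRDF (albedo / pi) at shading time: L_o = (rho / pi) * E
Color shade(const Color& albedo, const Color& irradianceFromMap)
{
    return (albedo / PI) * irradianceFromMap;
}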

quote:
if i get around the problem with the second trace, i think it'll be quite doable for at least q1-style levels. which would already be cool:D we'll see.....:D

If you can do that, then by all means do it. I would be curious to see some shots. The question is, how does it scale with more up-to-date geometry?
quote:Original post by Yann L
So just to make my point clear: a precomputed radiosity/SH map is a specific solution to the GI equation.

well, they can store a GI solution (and be dynamically changeable in some parameters of the gi solution), but they don't need to. i can just as well store local illumination solutions in the lightmap, or the sh. say i just convert the normals of the bumpmap to sh's. they aren't gi solutions then, just li solutions for the bumpmap. (still, you could then simulate arealights quite simply, which is still cool:D).

quote:Bump mapping is a hack, and has no physical counterpart. If we really want total physical realism, we need displacement mapping. And we have to see radiosity maps not as lightmaps, but as diffuse illumination distributions over a constant cosine BRDF.

call it a compression? :D (displacement-maps compressed in eyespace:D blah, they ARE a hack:D).
well, not every lightmap is a radiosity map, yes. neither is an sh a global-illumination solution:D

quote:
If you can do that, then by all means do it. I would be curious to see some shots. The question is, how does it scale with more up-to-date geometry?

heh, isn't q1 up-to-date?:D:D:D well, i'll try my best. no shots yet, just collecting info....:| but you already know my final target: The Light of Mies van der Rohe :D:D:D



"take a look around" - limp bizkit
www.google.com
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

davepermen:
quote:
monte carlo has nothing to do with radiosity. call it "perfect raytracing" if you want.

I don't understand it. I mean, if raytracing has nothing to do with radiosity, how am I supposed to calculate light transfer and visibility between patches? And don't say that Elias' tutorial is the solution, because using hemicubes (which is what he does) isn't how radiosity is supposed to work. It's just a speed-up of the whole process by using hardware acceleration! And, as Yann said, it's not the best solution.

And about Monte Carlo: you said that it is not easy to compute because of the 7 integrals. But isn't solving those integrals exactly what Monte Carlo integration is supposed to do? I mean, the whole algorithm is about how to compute those integrals (I don't know if the initial goal was to compute 7 integrals or 1). I think Monte Carlo isn't "perfect raytracing". In fact, it's just a method for solving integrals, applied to cg. It can be used anywhere other than cg too.
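For example, estimating a plain 1D integral looks something like this (rough sketch, nothing cg-specific):

#include <cstdlib>   // rand(), RAND_MAX

// estimate the integral of f over [a, b] as the average of f at n random
// samples, scaled by the interval length (uniform pdf p(x) = 1 / (b - a))
double monteCarlo(double (*f)(double), double a, double b, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
    {
        double x = a + (b - a) * (rand() / (double)RAND_MAX);
        sum += f(x);
    }
    return (b - a) * sum / n;
}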

On BRDFs: this is what I meant when I said that I need to learn many things. The only things I can find on that advanced stuff mention only the things you said. I'm not saying that your explanation is not good, but it is not enough. I'm searching for examples, comparisons etc.

Finally, photon mapping. All these days I've been searching the net for global illumination algorithms and tutorials, and I found many papers on photon mapping. I tried to read them, but you know... papers. Nothing for the beginner, only advanced techniques and how to speed up a slower algorithm. So I thought (because I didn't understand what I was reading) that photon mapping is the same as radiosity, from the point of view that both compute light intensities on surfaces and store those values in lightmaps (lightmap = photon map, I thought). Now I see that I'm wrong. I want to know what it is, if you can explain!!! Does it require a huge amount of background knowledge to get started? Are there any "getting started with photon mapping" tutorials? Are there any comparisons with radiosity? I mean, for example, the same scene rendered with both of them.
From what you said, photon mapping treats light as little constant quantities (photons) which travel, bounce etc. in the environment.

Yann:
quote:
We're talking about 60 fps.


No we aren't. We're talking about 100 fps or greater, if you want it to be a completely alive environment! If you just want to walk smoothly through an incredibly cool, well-lit environment, then 60 fps is fine. But what if you want full interactivity with it, and AI for everything from the enemy down to the fly (the bug, I mean!!!!!)?????

quote:
Ok, sorry, we got a little carried away there.

Don't apologize. As I said, maybe I'll learn something from this discussion. So continue...

Finally, about dynamic intensities. The first approach seems ok, but as you said it takes a huge amount of memory. About the second approach: if I understood right, you suggest making a map that is 32-bit, but the texture you send to OpenGL will be 8-bit (or made 24/32-bit again). I thought that because you said 4 light contributions in one map. And what about colored lightmaps? And do I have to recalculate the texture object (OpenGL) every time an intensity changes? In an editor that's fine, but in-game, where you want lights to break or be switched on and off???
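This is how I imagine the update would work (rough sketch of my own understanding, made-up names, probably not exactly what you meant): keep one 8-bit map per light, blend them on the CPU with the current intensities, and re-upload only when an intensity actually changes.

#include <vector>
#include <GL/gl.h>

// combine up to four per-light 8-bit contributions with their current
// intensities and upload the result into the existing lightmap texture
void updateLightmap(GLuint tex, const unsigned char* perLight[4],
                    const float intensity[4], int w, int h)
{
    std::vector<unsigned char> combined(w * h);
    for (int i = 0; i < w * h; ++i)
    {
        float sum = 0.0f;
        for (int l = 0; l < 4; ++l)
            sum += perLight[l][i] * intensity[l];
        if (sum > 255.0f) sum = 255.0f;
        combined[i] = (unsigned char)sum;
    }
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, &combined[0]);
}

Is that roughly it?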

Thanks

HellRaiZer
quote:Original post by HellRaiZer
On BRDFs: this is what I meant when I said that I need to learn many things. The only things I can find on that advanced stuff mention only the things you said. I'm not saying that your explanation is not good, but it is not enough. I'm searching for examples, comparisons etc.

Bidirectional Reflectance Distribution Function, if I'm correct. This function takes the incoming and outgoing directions at a surface point as parameters, and tells us what fraction of the incoming light gets reflected towards the outgoing direction (if memory serves me well).
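Written out, it's roughly f_r(x, w_i, w_o) = dL_o(x, w_o) / (L_i(x, w_i) * cos(theta_i) * dw_i), i.e. outgoing radiance per unit of irradiance arriving from direction w_i (again, if memory serves).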

quote:
So I thought (because I didn't understand what I was reading) that photon mapping is the same as radiosity, from the point of view that both compute light intensities on surfaces and store those values in lightmaps (lightmap = photon map, I thought). Now I see that I'm wrong. I want to know what it is, if you can explain!!! Does it require a huge amount of background knowledge to get started? Are there any "getting started with photon mapping" tutorials? Are there any comparisons with radiosity? I mean, for example, the same scene rendered with both of them.
From what you said, photon mapping treats light as little constant quantities (photons) which travel, bounce etc. in the environment.


Photon mapping sends photons (as you said, little quantities of light) through the scene (the number of photons a lightsource emits is determined by its power). When they hit specular or refractive surfaces, they continue their path; when they hit diffuse surfaces they are stored in a photon map (which is independent of the geometry) before continuing their path. At a certain point, the photon has to be stopped, of course. There are several techniques for this (maximum path depth, Russian roulette,...). This is the first step. In the second step, Monte Carlo ray tracing is applied, except that a single path will have a maximum depth of two diffuse bounces. At that point the photon map will be used to estimate the radiance for that ray. That's what photon mapping does (kinda).
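The radiance estimate at the end looks roughly like this (a sketch with made-up types and helpers, not Jensen's actual code):

// gather the k nearest photons around x and divide their summed power by the
// area of the disc they cover, then apply the diffuse BRDF (albedo / pi)
Color estimateRadiance(const PhotonMap& map, const Vec3& x,
                       const Color& albedo, int k)
{
    float radius = 0.0f;
    std::vector<const Photon*> nearest = map.findNearest(x, k, &radius);

    Color flux(0, 0, 0);
    for (size_t i = 0; i < nearest.size(); ++i)
        flux += nearest[i]->power;

    float area = PI * radius * radius;        // disc spanned by the k photons
    return (albedo / PI) * flux / area;       // L ~ f_r * sum(flux) / (pi * r^2)
}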
If you really want to know, you should read Jensen's book (see the link I posted earlier). But don't trust his kd-tree implementation: it didn't work for me, and it didn't work for someone else on this forum. So I'm thinking there's something wrong with his code. Since it's not well documented at all, implementing the kd-tree yourself will be faster than debugging. It was for me :-)

[edited by - Koen on June 10, 2003 2:53:52 PM]
davepermen:
quote:
radiosity for the world, where lights don't move.
sh for the objects, where lights can move around them (or the objects move around the lights)

while it's not perfect (only directional lights actually), it works very well. namely as well as environment maps.


I wouldn't consider this for an outdoor game. SH lighting is perfect for a landscape engine that gets lit by an HDR skydome, casting dynamic low-frequency terrain shadows. Baked radiosity cannot do this. You say mixing the two works very well, though. Can you show examples of this method in action?

One more thing. Your analogy of normal maps and SH maps is very misleading. In their most abstract form, normal maps have nothing to do with lighting; they just store geometry. I've used tiling normal maps before for micro-collision detection in a racing game without actually having any bump mapping in the game. SH maps store *precomputed radiance transfer*, which is calculated directly from material colours and normals, so you could effectively write a renderer that uses nothing but SH maps. Of course, if you wanted to do any form of local lighting you'd need to keep them.

This thread seems to have degraded into arguments about subjective interpretations of technical terms. The way I see it, an ambient term simulates GI, lightmaps can simulate GI (whether generated by radiosity, path tracing, Monte Carlo or whatever), and spherical harmonics can simulate GI. The only differences are in the maths: which parts of the physically based rendering equation they approximate/store/compress/whatever, and how they combine with your runtime lighting model to produce your simulation.
AP: lightmaps are normally for indoor-levels, yes. sh for outdoor is nice.

combining them? the same way q3 combined gl-lights with lightmaps: lightmaps for the world, gl-lights with precomputed light values in a lightgrid. instead of gl-light light values, you store per lightgrid cell a spherical harmonic or whatever as incoming light. you have your models with the sh on them and voilà, you can light the model with the global illumination input.
to then cast shadows into the world, i think you need the neighbourhood transfer; that _could_ work, i think..
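the lookup for a model would be something like this (just a sketch, names made up, nearest cell only, no trilinear blend):

// fetch the incoming-light sh coefficients from the lightgrid cell the model stands in
void getGridSH(const Grid& grid, const Vec3& pos, float outSH[16])
{
    int cx = int((pos.x - grid.origin.x) / grid.cellSize);
    int cy = int((pos.y - grid.origin.y) / grid.cellSize);
    int cz = int((pos.z - grid.origin.z) / grid.cellSize);
    const Cell& cell = grid.cell(cx, cy, cz);
    for (int i = 0; i < 16; ++i)
        outSH[i] = cell.incomingLightSH[i];   // then combine with the model's own sh, like q3 did with its lightgrid values
}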


an sh is just a cubic envmap, compressed to some values, showing how the scene around a patch looks. in the case of meshes (like in the case of unreal3), they show the amount of light that gets reflected in all directions for any given incoming light direction.. it's not at all gi, just mesh-i. and it is a similar thing to the bumpmapping, at least the way it gets used in unreal3, as far as i can see.. for lighting the mesh per-pixel:D

you agree they have similarities with horizonmapping?

sh can store some values, which can be used for gi calculations. they cannot by themselves simulate gi, they are not an algorithm. radiosity is. phong-lighting is. sh-lighting is (and that's no gi algo, it's a replacement/addition for phong-lighting!).

an sh is just a cubemap.. in another domain, and at a small resolution..
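projecting such a cubemap (or any environment) into sh coefficients is just a sum over directions (sketch, helpers made up, one channel):

// monte-carlo projection: coeff_i = integral of env(dir) * Y_i(dir) over the sphere
void projectToSH(const CubeMap& env, float coeffs[16], int numSamples)
{
    for (int i = 0; i < 16; ++i)
        coeffs[i] = 0.0f;

    for (int s = 0; s < numSamples; ++s)
    {
        Vec3  dir    = uniformSphereSample(s);                 // direction on the sphere
        float sample = env.lookup(dir);                        // one channel for simplicity
        float weight = 4.0f * 3.14159265f / numSamples;        // solid angle per sample
        for (int i = 0; i < 16; ++i)
            coeffs[i] += sample * shBasis(i, dir) * weight;    // shBasis = real sh basis function Y_i
    }
}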

"take a look around" - limp bizkit
www.google.com
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

This topic is closed to new replies.
