Archived

This topic is now archived and is closed to further replies.

DaBit

Radiosity energy value to color mapping

Recommended Posts

Hi, currently I am using radiosity for the static lighting in my 3D engine. This works fine, but I still have problems converting the resulting patch energy values to RGB color triplets. I tried both linear and logarithmic scaling, mapping either the maximum energy value to the maximum color (1.0, 1.0, 1.0) or the average energy to the average intensity, but I still haven't found a suitable mapping function/algorithm. Can anyone help? DaBit.
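For reference, the two mappings described above could look roughly like this. This is only a minimal sketch; the function and variable names are illustrative, not taken from DaBit's engine:

```python
import math

# Hypothetical sketch of the two mappings tried above: linear and
# logarithmic scaling of patch energies into [0, 1].

def linear_map(energy, max_energy):
    """Scale linearly so the brightest patch maps to 1.0, clamping above."""
    return min(energy / max_energy, 1.0)

def log_map(energy, max_energy):
    """Compress the dynamic range logarithmically before normalizing."""
    return math.log(1.0 + energy) / math.log(1.0 + max_energy)
```

Both map the maximum energy to 1.0; the logarithmic variant lifts dim patches at the expense of contrast near the top of the range.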

Hi!

Try mapping the maximum INITIAL radiance to 1.0. At light sources, you can get a bit more (like 1.1) if they are reflective (because you add the initial radiance), but just clip it.
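A sketch of that scheme, assuming the maximum initial radiance of the emitters is known up front (the names here are hypothetical):

```python
def tone_map(radiance, max_initial_radiance):
    # Normalize by the maximum *initial* radiance of the light sources.
    # Reflective emitters can slightly exceed 1.0 (initial plus received
    # radiance), so just clip as suggested above.
    return min(radiance / max_initial_radiance, 1.0)
```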

Let me know how this works for you (I've only read it in a book and haven't had the time to implement it yet).

You might also adjust the scaling dynamically, since it doesn't affect the radiosity calculation itself. Just let the artist choose a suitable scaling as the solution progresses (to prevent scenes that are too dark or too bright). The maximum initial radiance should provide a good first guess, though.

BTW, did you use quad or triangular patches? Just curious...

MK42

When I map the maximum initial radiance to 1.0, the scene gets WAY too dark. This is because a small, bright light emits a lot of energy per steradian of viewing. I tried it. The best results so far are obtained by not taking the initial energy emitters into account when calculating the scene colors, and clipping the light colors afterwards (they get intensities like RGB = [120.0, 120.0, 120.0] on a scale where 0.0 is black and 1.0 is white).
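That exclusion scheme could be sketched like this, assuming patches are stored as (energy, is_emitter) pairs (a hypothetical layout, not DaBit's actual code):

```python
def scale_excluding_emitters(patches):
    # patches: list of (energy, is_emitter) pairs -- hypothetical layout.
    # Normalize by the brightest *non-emitting* patch; emitters then
    # overshoot 1.0 and get clipped afterwards.
    max_reflected = max(energy for energy, is_emitter in patches
                        if not is_emitter)
    return [min(energy / max_reflected, 1.0) for energy, _ in patches]
```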

Letting the artist decide the scaling and offset factors is another option, and maybe the only viable one. But I would prefer an automated mapping that delivers good results, so that only fine-tuning is required.

Oh, I used triangular patches. They are much easier to implement with the data delivered by modelling tools (they usually spit out triangles). This generates more patches initially, since a square consists of two triangles, but less subdivision is required since the patches are already smaller. So I lose some speed by using triangular patches, but not as much as you would expect.

DaBit.

How about this? Add 1 to your initial values so they range from 1 to some larger number, then take the sqrt() of this number. Maybe even sqrt(sqrt()). Then subtract 1, scale linearly, and clamp to [0, 1].
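That recipe, written out as a small sketch (names are mine, purely illustrative):

```python
import math

def sqrt_compress(value, max_value):
    # Add 1, take the square root, subtract 1, scale linearly by the
    # compressed maximum, then clamp to [0, 1] -- the recipe above.
    compressed = math.sqrt(value + 1.0) - 1.0
    scale = math.sqrt(max_value + 1.0) - 1.0
    return max(0.0, min(compressed / scale, 1.0))
```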

Hi!

Sorry for the late answer. Hmmm, I see where the problem is with the method I described above. But couldn't you try to 'normalize' the initial energies by the patch area? I mean, how about comparing initial and final (energy * area)? It sounds like it might work: this would weigh smaller lights less than larger lights. I don't know how the calculated values fit into this scheme, but it sounds reasonable. What do you think?
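The area-weighted comparison could be sketched like this (the patch layout here is hypothetical):

```python
def total_flux(patches):
    # patches: list of (energy_per_area, area) pairs -- hypothetical layout.
    # Weighing by area means a small bright light counts for less than a
    # large dim one of equal total output.
    return sum(energy_per_area * area for energy_per_area, area in patches)
```

The ratio of final to initial total flux could then serve as an exposure hint for the color mapping.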

Ciao,

MK42

bishop_pass:

Currently I am doing something like that (single sqrt, though), but I leave out the light sources.

MK42:

Of course, the energy a patch sends into space is determined by an 'energy per square meter' value and the area itself. But that does not change the fact that a light is many times brighter than a lit surface; the intensity falls off with the square of the distance.

All of you, take a look at this sample app. Buggy, but it should work a bit.

Win32 + sample scenes only:
http://www.arcobel.nl/~dabit/sb_test/sb_win32.zip

Linux + sample scenes only:
http://www.arcobel.nl/~dabit/sb_test/sb_linux.zip

Win32 + Linux + sample scenes + original .MAX scenes: http://www.arcobel.nl/~dabit/sb_test/sb_all.zip

DaBit.

DaBit,

Are you using hierarchical radiosity, progressive refinement, or some other algorithm?

I implemented HR and wasted two weeks trying to find out why I'm getting amplified light in the corners and along edges. What I mean is: the corners are brighter. I think I know why, and it seems to be the algorithm. But that can't be right, because the algorithm obviously works when implemented correctly, so there must be something I'm doing wrong. I even downloaded Pat Hanrahan's code (the guy who invented it) and compared my implementation to his, and it appears he does it about like I do. I finally gave up.

I then took the mesh that HR creates and did a straight progressive refinement (shooting) on it, and it basically looks fine. So I think I'll rewrite my code to just do PR.

But it pisses me off, because it seems HR has such potential and is more efficient, and I basically did all the work of coding it.

bishop_pass:

The program I linked to is based on progressive refinement, but it is not yet complete. Adaptive patch subdivision is not implemented yet, so there are always as many patches as there are elements. Both refinements of the PR method can easily be added, but first I want to have a good-quality radiosity solution.
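For readers unfamiliar with PR, the basic shooting loop looks roughly like this. This is a minimal sketch that works in total energy rather than energy per unit area; the data layout and names are hypothetical, not DaBit's code:

```python
def progressive_refinement(patches, form_factor, steps=100):
    # patches: list of dicts with 'unshot', 'total', and 'reflectance' keys
    # (hypothetical layout). form_factor(i, j) is the fraction of energy
    # leaving patch i that arrives at patch j.
    for _ in range(steps):
        # Pick the patch with the most unshot energy and shoot it.
        shooter = max(range(len(patches)),
                      key=lambda i: patches[i]['unshot'])
        shot = patches[shooter]['unshot']
        if shot <= 1e-6:
            break  # solution has converged
        patches[shooter]['unshot'] = 0.0
        for j, p in enumerate(patches):
            if j == shooter:
                continue
            received = p['reflectance'] * shot * form_factor(shooter, j)
            p['total'] += received
            p['unshot'] += received
    return patches
```

Adaptive subdivision would refine receiving patches into elements between shooting steps; the loop itself stays the same.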

How do you do your form factor calculation? Hemicube? Raycasting?

What would be the speedup gained from a HR method?

DaBit.

The bigger your scene gets, the better HR is likely to be compared to PR. That's because PR has to shoot to all the patches (elements?) in the scene, whereas HR shoots to coarse representations at a distance, and the children then inherit their parents' received energy and interact only with nearby elements.

Form factors get my brain so tied in knots. Fij is the fraction of energy leaving i which arrives at j; but no wait, this algorithm gathers, but is inverting the form factor; but no wait, it's using a differential area, so don't multiply by the area; but no wait, this algorithm uses point-to-disk; and so on.

HR according to Hanrahan just uses point-to-disk, so I initially tried that... and had problems. I tried supersampling the patch and still had problems, but I wrote a test program to compare, and I know supersampling improves the result for close patches.

I found an algorithm that goes from a differential area to a polygon (it computes the form factor exactly and is fairly simple, but I didn't implement it).

Right now I'm just using point-to-disk.
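For reference, the point-to-disk approximation mentioned here is usually written F = (cos θi · cos θj · Aj) / (π·r² + Aj). A sketch (names are illustrative):

```python
import math

def point_to_disk_form_factor(r, cos_i, cos_j, area_j):
    # Point-to-disk approximation: form factor from a differential area
    # toward a disk of area area_j at distance r, where cos_i and cos_j
    # are the cosines of the angles between the line connecting the
    # patches and each surface normal.
    return (cos_i * cos_j * area_j) / (math.pi * r * r + area_j)
```

The extra Aj in the denominator keeps the factor bounded as r approaches zero, which is why it behaves better than the bare differential form for close patches.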
