Radiosity in Practice

I have a few questions about implementing certain parts of radiosity. I hope somebody can give me some insight:

1) All the papers around the internet seem to use a single value for the light. How about colored lights? How do the formulas change with these?

2) If I do get an implementation of light color, how do I make sure my radiosity renderer produces color bleeding?

3) Is there a nice way of calculating the reflectivity of a patch from the color at that point? I would have guessed that a dark point tends to absorb more light; that's why it's dark. In that case, the color map should be usable as a "reflectivity map". A reflectivity of [RR, RG, RB] would mean that this surface reflects RR of the red component, RG of the green and RB of the blue. Therefore, the final "color" at this point would be [RR * L_R, RG * L_G, RB * L_B], where [L_R, L_G, L_B] is the total light received at this point. Tell me if this is wrong, thanks.

Quote:
Original post by David Hart
1) All the papers around the internet seem to use a single value for the light. How about colored lights? How do the formulas change with these?

I believe the articles use a single value to simplify the explanation of the radiosity process. However, for colored lights you just perform the same steps on each channel (RGB) in parallel.
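A minimal sketch of what that looks like in C# (the RgbEnergy type and its members are just illustrative, not from any particular engine):

struct RgbEnergy
{
    public float R, G, B;

    public static RgbEnergy operator +(RgbEnergy a, RgbEnergy b)
    {
        return new RgbEnergy { R = a.R + b.R, G = a.G + b.G, B = a.B + b.B };
    }

    public static RgbEnergy operator *(RgbEnergy a, float s)
    {
        return new RgbEnergy { R = a.R * s, G = a.G * s, B = a.B * s };
    }

    // Channel-wise product: how a colored reflectivity is applied to
    // incoming light, one channel at a time.
    public static RgbEnergy Modulate(RgbEnergy light, RgbEnergy reflectivity)
    {
        return new RgbEnergy
        {
            R = light.R * reflectivity.R,
            G = light.G * reflectivity.G,
            B = light.B * reflectivity.B
        };
    }
}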

Quote:
Original post by David Hart
3) Is there a nice way of calculating the reflectivity of a patch from the color at that point? I would have guessed that a dark point tends to absorb more light; that's why it's dark. In that case, the color map should be usable as a "reflectivity map". A reflectivity of [RR, RG, RB] would mean that this surface reflects RR of the red component, RG of the green and RB of the blue. Therefore, the final "color" at this point would be [RR * L_R, RG * L_G, RB * L_B], where [L_R, L_G, L_B] is the total light received at this point. Tell me if this is wrong, thanks.

As it turns out, the color map (texture) in this context is often referred to as the albedo, which is the measure of the reflectivity of a surface. So you are 100% correct.

Quote:
Original post by David Hart
2) If I do get an implementation of light color, how do I make sure my radiosity renderer produces color bleeding?

As long as you use the equation you presented in your third question to reflect the light, color bleeding should be a natural result of the process. However, this means you'll need to have the textures available at the time you perform radiosity.

David: You're worrying about color bleeding while it seems you don't yet have basic simple rooms lit with radiosity. Don't worry about color bleeding yet; it's just a byproduct of the whole radiosity implementation. You don't turn it on separately. You just have to distribute most of the energy inside the scene so that it becomes visible to the human eye. Theoretically, you could need just a few passes, if a colored object had high reflectivity and was very near some other white wall. But generally, you need to distribute most of the energy inside the scene so that it becomes visible. (Technically, once all patches have shot their energy just once, the color bleeding is there; it's just not visible until you set your exposure value high enough to see the bleeding effect, though the rest of the scene will become totally saturated.)

I'd like to recommend a few ideas so that you actually get to finishing the radiosity implementation. Many people start it but later drop it, because it's just so much work and most of the papers are not a very good read either.

1. Start with boxes. It'll make your initial testing and experimenting much easier than general scenes. Make a box room and place a light in the middle of the box. That way you can check the results manually (calculate the form factor by hand) and spot any bugs right away.

2. Do not engage in texturing right away. There are several issues with filtering and correct calculation of texture coordinates that would make you scratch your head needlessly while you're still struggling with basic FF calculation.
Instead, consider dividing your walls (simple quads) into many square patches, like a regular grid. Make those patches polygonal (2 triangles) and assign each a color after you calculate the radiosity. This way you'll be able to check your results immediately after you go through a few passes.
To explain a bit more: if a wall of the box has a real-world size of, say, 10.0f (regular coordinates of your vertices) and you decide that your patch will have a size of 1.0f (just make sure all patches in your whole scene have the same area), you divide the wall into 10x10 quads (i.e. patches) and let the program distribute the light. Right after the first pass you'll be able to see the results immediately.
So leave texturing (with all its problems at triangle edges) as a final issue for when you're sure your radiosity processor works correctly.

3. Start with only one single pass. You might not see anything meaningful (i.e. it'll be just black). After your calculations you must convert the received energy values of each patch into RGB values to be displayed on your monitor. You could assign your lights a value in the range (0-255) so it's easier to debug when you're just starting, but remember that the light source can have any range of energy (0-1000000). In the end it just has to be remapped to the range (0-255) to be displayed on your monitor. An exposure function is easiest here, since it's just 2 lines of code; see the sketch right below. You'll need to experiment with the exposure parameters a little to get good results.
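For example, an exposure mapping of this shape (just a sketch; the exposure constant is the thing you tune by hand):

// Map an unbounded energy value into the displayable 0-255 range.
// A larger exposure constant brightens the scene; tune it per scene.
static byte ToDisplayValue(float energy, float exposure)
{
    float v = 1.0f - (float)Math.Exp(-energy * exposure); // always in [0, 1)
    return (byte)(255.0f * v);
}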

4. Progressive refinement. At the start of each pass, just browse all patches in the scene and choose the one with the highest unshot energy (Patch.UnShot.R + Patch.UnShot.G + Patch.UnShot.B). Make that one the shooter and shoot its energy at all other patches in the scene; a sketch of one such pass follows below. With multiple lights, the first few passes will shoot the energy from the lights, and once all the lights have shot, indirect illumination finally takes place: the energy is shot from the scene patches that already received energy from the lights. That's how you get that color bleeding (if the objects are near enough).
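In code, one such pass might look like this (a sketch only; Patch, RgbEnergy, Reflectivity and GetFormFactor stand in for whatever your own types and routines are):

using System.Collections.Generic;

static void DoOnePass(List<Patch> patches)
{
    // 1. Pick the patch with the highest unshot energy.
    Patch shooter = null;
    float best = 0.0f;
    foreach (Patch p in patches)
    {
        float unshot = p.UnShot.R + p.UnShot.G + p.UnShot.B;
        if (unshot > best)
        {
            best = unshot;
            shooter = p;
        }
    }
    if (shooter == null)
        return; // all energy has been distributed

    // 2. Shoot its unshot energy at every other patch.
    foreach (Patch receiver in patches)
    {
        if (receiver == shooter)
            continue;
        float ff = GetFormFactor(shooter, receiver);
        // The receiver keeps only what it reflects; the rest is absorbed.
        RgbEnergy delta = RgbEnergy.Modulate(shooter.UnShot, receiver.Reflectivity) * ff;
        receiver.Total += delta;   // what ends up on screen
        receiver.UnShot += delta;  // what it will re-shoot in a later pass
    }

    // 3. Clear the shooter so it isn't picked again until it re-gathers energy.
    shooter.UnShot = new RgbEnergy();
}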

5. FF calculation. At the very start, forget visibility; just let it be 1.0f. Besides, it's the visibility that eats most of the time needed to compute the full radiosity distribution. You won't have shadows at the very start, but hey, it's a good start!

6. If you haven't yet, read the radiosity threads by me (VladR) and HellRaiZer at these forums. Everything you might need regarding implementing a full radiosity solution is there. It just awaits implementation.

7. You can get some pretty "Disney was here" scenes with multiple lights:
Clicky!

Thank you both for your precise and informative answers!

To Zipster:

Quote:
As it turns out, the color map (texture) in this context is often referred to as the albedo, which is the measure of the reflectivity of a surface. So you are 100% correct.


Now that I read my explanation again, I wonder if I got something wrong after all. Let's imagine I get my reflectivity value from a color map [RR, RG, RB], and that the incoming light is [IR, IG, IB]. Following the standard formulae, the "reflected" light is:

[RR * IR, RG * IG, RB * IB] * FF

And the "absorbed" light is:

[(1-RR) * IR, (1-RG) * IG, (1-RB) * IB] * FF

But then there's a problem: the absorbed light, which I took to be the final illumination of the patch, would have a color "inverted" compared with the original color/reflectivity map.

To VladR:

Thanks for your tips VladR.

Quote:
If you haven't yet, read the radiosity threads by me (VladR) and HellRaiZer at these forums. Everything you might need regarding implementing a full radiosity solution is there. It just awaits implementation.


Yep, I've already made good use of the search tool and read most of your posts. Nice "Disney" screenshot :)

Guest Anonymous Poster
Quote:
Original post by David Hart
But then there's a problem: the absorbed light, which I took to be the final illumination of the patch ...


I might be misunderstanding you here, but just to clarify: the absorbed light is effectively discarded, while the reflected light represents the visible colour of the patch.


Quote:
Original post by David Hart
Now that I read my explanation again, I wonder if I got something wrong after all. Let's imagine I get my reflectivity value from a color map [RR, RG, RB], and that the incoming light is [IR, IG, IB].

Why do you have the reflectivity value as a special map in the first place? Isn't it enough to specify "rho" per object (e.g. wood/metal/concrete...)?

Also, after you shoot the unsent radiance from the shooter to all receivers, you have to clear the unsent radiance on the shooter so that it doesn't become the shooter again in the next pass (where you'll traverse all patches to find the patch with the highest unsent radiance, which becomes that pass's shooter).

Quote:
Original post by David Hart
Now that I read my explanation again, I wonder if I got something wrong after all. Let's imagine I get my reflectivity value from a color map [RR, RG, RB], and that the incoming light is [IR, IG, IB]. Following the standard formulae, the "reflected" light is:

[RR * IR, RG * IG, RB * IB] * FF

And the "absorbed" light is:

[(1-RR) * IR, (1-RG) * IG, (1-RB) * IB] * FF

But then there's a problem: the absorbed light, which I took to be the final illumination of the patch, would have a color "inverted" compared with the original color/reflectivity map.

As the AP mentioned, the absorbed light is just discarded. It's the reflected light that becomes the final illumination. After all, you can only see the light that's reflected off a surface. For instance, a surface appears black because it absorbs a large amount of light and you only see a little. Likewise, a white surface reflects a large amount of light and only absorbs a little. The absorbed light contributes to non-visual attributes such as temperature or conductivity, and other things you aren't interested in.
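In code terms that's just one line; reusing the illustrative RgbEnergy type from above, the absorbed part never even gets computed:

// Keep the reflected fraction; the absorbed (1 - reflectivity) part of
// the incoming light is simply never stored anywhere.
RgbEnergy reflected = RgbEnergy.Modulate(incoming, patch.Reflectivity);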

Thanks for the clarification! I understand everything much better now.

Quote:
Why do you have the reflectivity value as a special map in the first place? Isn't it enough to specify "rho" per object (e.g. wood/metal/concrete...)?

If I understood Zipster correctly, this helps to get color bleeding working correctly. Secondly, it allows you to specify a color map, an emissive map, and nothing else, as both the emissive and reflectivity values can be found by a simple texture lookup.

Quote:
Original post by David Hart
If I understood Zipster correctly, this helps to get color bleeding working correctly. Secondly, it allows you to specify a color map, an emissive map, and nothing else, as both the emissive and reflectivity values can be found by a simple texture lookup.

1. Color bleeding is just a "free" byproduct of the whole radiosity lighting technique (it just takes a lot of time). You do not have to do anything special to "enable" it, other than letting most of the energy distribute over all the patches, which is done by as many passes as a given scene needs. Of course, you won't necessarily notice the color bleeding in every scene; it all depends on the scene. As in real life, color bleeding is almost impossible to see unless you have a specific setup, like a blank white paper on your desk touching some brightly red object, with a nearby lamp shining directly on that red object, so that the first, direct reflections go straight to the white paper and become visible. Now move the light source and/or the red object further away and you'll notice that at a certain distance the color bleeding disappears. It's still there physically; our eyes just aren't good enough at spotting such small amounts of energy (they have a certain threshold below which you won't see it anymore, although machines can still measure it).

2. How does your emissive map work? Do you have a special texture for each object that specifies the reflectivity at a given texel? Why is this even needed? For a start, until you have some first correct, nice results, just use e.g. rho = 0.4f for all patches. This is usually enough.

3. Are you using progressive refinement shooting to speed up your results?

4. I don't know how long you've been trying to grasp this whole radiosity lighting technique, but don't expect all the pieces to come together at once. It might take a few days or weeks until everything makes sense.

Hi guys! With your help I was able to write a simple radiosity renderer today. Thanks VladR for convincing me to start simple; I found more practical problems along the way than I expected. The renderer "seems" to work fine, except for the fact that it's damn slow. Optimizations will come later.

Screenshot

But for now I have a problem. Basically, my form factors end up very small (about 0.0006 for a HIGH form factor), which means I have to use a crazy light value just to get a semi-lit scene. But that's not the most annoying problem. It also means that after one reflection, the light value is so small that no patch is visibly affected by second-order reflections (i.e., I get no ambient-like effect).

Light intensity has to be pushed high for the top to be lit (lower-rez patches).

Here is my C# code for calculating Form Factors. Can you check if anything is wrong?


private static float GetFormFactor(Patch transmitter, Patch receiver)
{
    // Vector from the shooting patch to the receiving patch.
    DX.Vector3 ray = receiver.Position - transmitter.Position;
    float fDistanceSquared = ray.LengthSquared();

    ray.Normalize();

    // Cosines of the angles between each patch normal and the ray,
    // clamped to zero so back-facing patches contribute nothing.
    float fCosTransmitter = Math.Max(DX.Vector3.DotProduct(transmitter.Normal, ray), 0);
    float fCosReceiver = Math.Max(DX.Vector3.DotProduct(receiver.Normal, -ray), 0);

    // Ratio of patch areas; equals 1 when all patches have the same size.
    float fDifferentialArea = transmitter.Area / receiver.Area;

    return fDifferentialArea * fCosTransmitter * fCosReceiver / (float)(fDistanceSquared * Math.PI);
}
The cos values are always in the range [0,1], and the differential area ends up being 1 since all patches are the same size. So my form factor is effectively dividing 1 by a huge value (the squared distance).

1) I thought color bleeding was color bleeding off non-emissive patches, due to reflection. In that case, you would need to multiply the reflected light by the color (or reflectivity, same thing) to get that effect.

2) My emissive map works so that a surface can emit different colors over its surface. I saw a paper that used that to give a nice effect to colored windows.

3) I am using progressive sorted shooting. Refinement is where you refine patches if needed, right? I haven't got up to that level yet.

I finally figured it out. I had problems in two places. The whole time I thought you had to calculate the amount of energy received and transmitted by each patch. But that amount does not translate well to light values at the end. No, what you have to calculate is the amount of energy received and transmitted PER UNIT AREA.

So for people who are making this mistake, two points to remember:

1) When dividing surfaces into patches, don't divide the emission value: no matter how many subdivisions you make, the amount of emitted energy per unit area does not change.

2) Since the energy received at j from patch i is Rj = Ei * FF(i->j), and we know that Rj and Ei both have units of energy per area, we can deduce that the form factor is exactly what its name implies: a unitless factor. Therefore the differential-area term of the FF formula must be an area, so my formula was wrong. After some googling I found out that the differential area is nothing more than the receiver's area, which makes sense.
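For reference, here is the fixed routine - the same code as before, but with the receiver's area as the differential area (plus a guard against coincident patches):

private static float GetFormFactor(Patch transmitter, Patch receiver)
{
    DX.Vector3 ray = receiver.Position - transmitter.Position;
    float fDistanceSquared = ray.LengthSquared();
    if (fDistanceSquared < 1e-6f)
        return 0.0f; // patches on top of each other: avoid division by zero

    ray.Normalize();

    float fCosTransmitter = Math.Max(DX.Vector3.DotProduct(transmitter.Normal, ray), 0);
    float fCosReceiver = Math.Max(DX.Vector3.DotProduct(receiver.Normal, -ray), 0);

    // F(i->j) ~= cos(theta_i) * cos(theta_j) * A_j / (pi * r^2)
    // Multiplying by the receiver's area keeps the form factor unitless.
    return receiver.Area * fCosTransmitter * fCosReceiver / (float)(fDistanceSquared * Math.PI);
}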

Congratulations on that first screenshot! It looks great and there seem to be no artifacts! How many patches did you use for each wall?

Quote:
Basically, my form factors end up very small (about 0.0006 for a HIGH form factor),
This is perfectly normal. Form factors have very small values.

Quote:
which means I have to use a crazy light value just to get a semi-lit scene
That's right.

Quote:
It also means that after one reflection, the light value is so small that no patch is visibly affected by second-order reflections (i.e., I get no ambient-like effect).

Light intensity has to be pushed high for the top to be lit (lower-rez patches).
Well, while progressive shooting gives you great immediate results after just one pass per light, if you want realistic ambient lighting in the whole scene you'll need many more passes, since ambient lighting is really just indirect lighting (i.e. gathered from all other patches). This is especially true in cases like your cube, where the top side is lit only by indirect lighting. For it to be lit correctly, all four sides of your box should shoot light at it. Try doing 1, 5, 10, 50, 100, 250 or 500 passes and compare the differences on captured screenshots. You'll notice that the more passes, the better the ambient lighting.
Take a look at this screenshot where I compared the same scene under different numbers of passes (1, 2, 3, 4, 5, 10, 25, 50, 100, 250, 500, 1000).
As you can see, it took 1000 passes (and about 2 hours of computation) to get realistic ambient lighting. I didn't include 1500 passes in that screenshot since the improvement over 1000 passes was negligible. Notice that after 1000 passes, only 5% of the initial energy remained undistributed. Even after 100 passes, as much as 32% of the initial energy was undistributed, so even 100 passes wasn't enough for realistic ambient lighting!
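As a sketch, the stopping rule is nothing more than this (TotalUnshot being a helper that sums the unshot energy over all patches, and DoOnePass a single progressive-refinement pass):

float initialEnergy = TotalUnshot(patches);
int pass = 0;
while (TotalUnshot(patches) > 0.05f * initialEnergy && pass < maxPasses)
{
    DoOnePass(patches); // shoot from the patch with the highest unshot energy
    pass++;
}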

As for your form-factor calculation, at first glance it seems to be alright, though the Area seems to be missing from the formula for the FF to be dimensionless.

Quote:
1) I thought color bleeding was color bleeding off non-emissive patches, due to reflection. In that case, you would need to multiply the reflected light by the color (or reflectivity, same thing) to get that effect.
As an example, color bleeding occurs when you have some bright red object near a white wall. It doesn't have to be a light source. This red object is tessellated into patches the same way the white wall is. After some number of passes in your scene, even this red object becomes a shooter and shoots red light (combined with any other light it managed to gather in previous passes). Since the white wall is nearest, it gathers most of this red light (because of the form-factor calculation, where the shot energy falls off with the square of the distance) and thus has a reddish glow near the red object.

Quote:
2) My emissive map works so that a surface can emit different colors over its surface. I saw a paper that used that to give a nice effect to colored windows.
Yes, it's a cool idea, but isn't it a bit of an overkill for a start?

Quote:
3) I am using progressive sorted shooting. Refinement is where you refine patches if needed, right? I haven't got up to that level yet.
I'm not completely sure what the theory says about the term "refinement", but as far as I understand it, it simply means the scene gets gradually better after each pass - i.e. it is refined.

Now you could try putting in some other small box and computing visibility - you'll get nice shadows this way. To have soft shadows, the patches have to be small enough, though.

BTW, how long did it take your PC to compute the solution in your first screenshot, and how many patches were in that scene?

EDIT: Link and typos

I've got no experience with radiosity lighting, just a couple of quick questions about your discussion...
Are you doing all calculations in software? Like, as part of a raytracer? If so, how well does this translate to realtime rendering? Are programmable GPUs capable of doing all this in hardware now? And what kind of trade-offs do you have to make in order to get it fast?

Thanks,
William

I want to make sure we have the same definition of the term "pass". For you, one pass seems to be the emitting from one "surface". I used the term pass for the emitting from one "patch".

Ok, first of all, the code is not optimized, and I compute light maps every 100 passes to have a progressive view of the computation. Both of these factors slow down the calculation substantially.

All the following images are with my fixed form factor and formulae, based on the energy per unit area. The stats: each wall is divided into roughly 10'000 patches, for a total of 61'327 patches in the scene.

A calculation that ran for about 30 minutes in the background:
24'400 passes (a bit more than a third of the total patch count) in 760 seconds.

Edit:

Quote:
Are you doing all calculations in software?

Yes.

Quote:
If so, how well does this translate to realtime rendering?

True radiosity in realtime is not really possible right now.

Quote:
Are programmable GPUs capable of doing all this in hardware now?

Some parts of the calculation, mainly the computation of the form factor, can be sped up on the graphics card using a hemicube (google it), a geometric approximation of the form-factor integral.

Quote:
And what kind of trade-offs do you have to make in order to get it fast?

The hemicube is an approximation, so if you lower its resolution, you get coarser results for a faster computation. You can also speed things up by decreasing the patch resolution, but then the lighting looks VERY pixellated. See my second screenshot.

Quote:
Original post by wall
Are you doing all calculations in software? Like, as part of a raytracer?
Yes, although now I could also do one pass on the GPU with my method, which approaches the problem from a different angle than other current methods.

Quote:
Original post by wall
If so, how well does this translate to realtime rendering? Are programmable GPUs capable of doing all this in hardware now? And what kind of trade-offs do you have to make in order to get it fast?

There are several methods for GPU-based radiosity. Google them.
One was even presented in Game Developer Magazine. But their main problem is the slow readback of results through textures.

Some time ago, I was involved in a discussion about real-time radiosity on these forums. I posted some thoughts about my proposed method there, so search these forums if you want to find out more. You could achieve lots of realistic effects this way that are currently just hacked, and that's how it looks in current games. Off the top of my head, some of these effects are:
1. Slowly opening a door into a dark room, where the light coming through the doorway starts to light the new room gradually.
2. Flashing colored lights (RPGs, dungeons) - I tried it, and it's 10 times more realistic if your interacting lights are computed by radiosity rather than just interpolated among a few precomputed positions.
3. Turning the light off and on slowly (under a second) - not just instantly or by interpolation as it's done currently. It feels completely different when the light softly interacts with all walls and objects. You've got to try it to understand what I mean.
As for trade-offs, with the current memory requirements of PC games (this is impossible on old-gen consoles with 32 MB of RAM), you could easily just load precomputed form factors for each room on the fly, because form factors take most of the computation time. Then just let the vertex shader compute the rest (several instructions) and you're done.
You could write your radiosity renderer so that it computes the solution gradually each frame - i.e. let it compute 10000 patches each frame so that it gradually converges to a stable, good-looking solution.
Dynamic moving lights could mean bigger memory consumption, but really, current games require 1 GB of RAM to play fluently (or even more). So what is some 100 MB for form factors? They can be easily unpacked in a shader (2 bytes per form factor can be enough if you think about it carefully and make a few assumptions). It can definitely be real-time for walls and static objects, where you know the position of the light and so the visibility remains constant and can be precomputed. Objects that can be moved (barrels, crates) have few patches anyway, so it's not a problem to recompute them on demand (be it in a given frame or distributed over the next few frames).
Shadows from moving characters can be done by other methods, and the character itself can be lit by a single pass of radiosity through the vertex shader. It'll still look 10 times better than conventional methods.

Basically, if you precompute just the visibility of wall patches (among each other), which takes most of the form-factor computation time, the final form factor can be computed easily in a shader (just a few multiplications and a sincos). And you need just 1 bit for each patch pair (bits are easily decompressed in a shader). So if your scene needs, say, 100000 patches and 10 passes to look good (it would be the artist's job to handle this with a low number of passes, of course), it needs just 122 KB (100000*10/8/1024) of visibility data! Or it would be just 2 MB of already precomputed form factors. Is 2 MB per room really that much? Even if you wanted 50 passes, it would be just 10 MB per room!
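A sketch of that one-bit-per-pair visibility table (CPU side; a shader would do the same bit arithmetic on the unpacked data):

using System.Collections;

class VisibilityTable
{
    private readonly BitArray bits;
    private readonly int patchCount;

    public VisibilityTable(int patchCount)
    {
        this.patchCount = patchCount;
        // patchCount^2 must fit in an int here; e.g. 30,000 patches
        // means 900 million bits, i.e. roughly 107 MB.
        bits = new BitArray(patchCount * patchCount);
    }

    public bool IsVisible(int shooter, int receiver)
    {
        return bits[shooter * patchCount + receiver];
    }

    public void SetVisible(int shooter, int receiver, bool visible)
    {
        bits[shooter * patchCount + receiver] = visible;
    }
}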

Option 2: Multi-core CPUs are entering the mass market. I bet that within 2 years a dual-core CPU will become a minimum requirement. You could just let that second (third/fourth) core compute form factors, or gradually refine the solution through more passes, or handle the dynamic objects.

Quote:
Original post by David Hart
I want to make sure we have the same definition of the term "pass". For you, one pass seems to be the emitting from one "surface". I used the term pass for the emitting from one "patch".
No, I also refer to one pass as a patch's pass.

Quote:
Original post by David Hart
All the following images are with my fixed form factor and formulae, based on the energy per unit area. The stats: each wall is divided into roughly 10'000 patches, for a total of 61'327 patches in the scene.
Quite a lot of patches! But the result is worth it, isn't it?

Quote:
Original post by David Hart
24'400 passes (a bit more than a third of the total patch count) in 760 seconds.
That doesn't seem unreasonably slow for 24k passes in 10 minutes. What is your CPU? I must say that after the first few optimizations, my computation time dropped to 25% of the original version (mainly because my first implementation recomputed the form factors separately for R, G and B, but there were other places worth optimizing too, like not using any functions and keeping all the code inside the main loop - this alone brought about a 15% improvement).

Now try putting a few colored lights into your scene!

I come back after implementing reflectivity maps, emissive maps, and some code optimizations.

In the following screenshot I didn't use any emissive map, only reflectivity maps. I also increased the patch resolution to a total of 240'400 patches, running for two hours: 44'460 patch runs. My processor: Intel 2.8 GHz.

Screenshot

It gives really nice light and texture resolution, BUT it becomes so slow that not many second-order reflections are observable. I will have to find a way to improve the algorithm. If only I could store the form factors, it would speed everything up a lot, but with 240'400 patches that's about 57.79 GB of information!!!

Any ideas?

Storing form factors does indeed take an extreme amount of memory. VladR and I had a discussion a few months ago on these boards about real-time radiosity, and I believe I also did some storage calculations.

But to be honest, your radiosity solution looks excellent at the moment! You'd really need to test it with a larger scene to know whether those extra passes are making a visible improvement.

Quote:
Original post by David Hart
It gives really nice light and texture resolution, BUT it becomes so slow that not many second-order reflections are observable.
Exactly! That's also the basic point of real-time radiosity. Obviously you can't make 45000 passes each frame (the screenshot looks gorgeous, though!). But if you experimented with your current scene, you would find that for common in-game use you need just a fraction of your current patch count. You'd be able to see the difference in the quality of the lightmaps themselves, but there's a threshold, once they're blended with the original textures, beyond which you can't spot the difference. Your current texture resolution (matching the patch resolution) is about 256x256 per side of the cube (are there just 4 sides, without the top?). 48x48 could be enough - that makes 9216 patches in the whole scene (48*48*4), i.e. less than 4% of the original patch count. Try posting a same-size screenshot with that many patches.
Also, for real-time radiosity in a game, only a few passes are needed to get some basic indirect lighting. Then, to simulate ambient lighting in dark places, you would multiply all light intensities below some threshold by some multiplier (found by experimenting), so that dark walls don't just suddenly become white at all texels but gradually lighter (i.e. you would still notice differences in light intensity even in the ambient area); see the sketch below.
As always, if you compared pure lightmaps (full radiosity vs real-time radiosity), the difference would be noticeable. But when basic material textures come into play (e.g. brick), this difference disappears, so it's entirely possible to get comparable results in real-time.
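Something like this, say (a sketch; the threshold and multiplier are the values you find by experimenting):

// Boost only the dark texels, capped at the threshold so that relative
// differences in intensity survive instead of everything dark becoming
// one flat ambient value.
static float BoostAmbient(float intensity, float threshold, float multiplier)
{
    if (intensity >= threshold)
        return intensity;
    return Math.Min(intensity * multiplier, threshold);
}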

Quote:
Original post by David Hart
If only I could store the form factors, it would speed everything up a lot, but with 240'400 patches that's about 57.79 GB of information!!!

Any ideas?
Well actually, 240400*240400*4 = 215 GB (you forgot to multiply by the size of a float), so obviously this is not an option. But with those 9216 patches it would still eat about 324 MB. With some compression (arithmetic coder, anyone?), this could be half the size or less, for the price of a few multiplications (and huge memory latency, of course).
If you were experimenting with the light settings for some given static scene for several days (as an artist setting the lighting for a level), it would make sense to precompute all form factors and then just keep them cached in memory while experimenting with the light settings. At least that's what I have in my radiosity editor. You know that some scene needs adjustment and that you'll spend a few days just on it, so you let the computer precompute all the form factors and then you can see the effect of a changed light almost in real-time (a few seconds of computation).

Zipster: Was it you who downrated me by 5 points during that real-time radiosity thread? Because it was an instant drop of 5 points (noticed within 5 minutes while checking replies at the time) and not many people have a rating high enough to make such an effect (especially not those anonymous posters).

Quote:
Original post by VladR
Quote:
Original post by David Hart
If only I could store the form factors, it would speed everything up a lot, but with 240'400 patches that's about 57.79 GB of information!!!

Any ideas?
Well actually, 240400*240400*4 = 215 GB (you forgot to multiply by the size of a float), so obviously this is not an option. But with those 9216 patches it would still eat about 324 MB. With some compression (arithmetic coder, anyone?), this could be half the size or less, for the price of a few multiplications (and huge memory latency, of course).

Depending on the geometry of the scene, the form-factor matrix can often be very sparse. I found that scenes with lots of tight corridors between connecting rooms could have their form-factor matrix represented quite simply in compressed STL vectors, with a matching index stored alongside the form factors. Maintaining a similar persistent vector of relative visibility between renderable polygons also has benefits in rendering speed as well as storage requirements.

Note that none of the above really helps with the simple cube scene used by the OP, as its polygon visibility matrix is fully populated. I found that a simple check for patches occupying the same polygon (and therefore invisible to each other) can improve calculation speed and storage requirements tremendously. I haven't read the previous threads, so this may be old news :)

I did some more optimizations.

Quote:
I found that a simple check for patches occupying the same polygon (and therefore invisible to each other) can improve calculation speed and storage requirements tremendously. I haven't read the previous threads, so this may be old news :)

I didn't implement exactly this, even though it's a great idea. But I already check whether a surface is visible from the emitting patch; if it isn't, I don't do any calculations for its patches.

I tried to implement the algorithm described here, where only every 4th patch is calculated and the intermediate patches' form factors can be estimated with linear interpolation in certain cases. It does run much faster, but I still haven't got it working and I get artifacts:

With the problematic linear-interpolation approximation.
Same scene WITHOUT the approximation.

While I was out tonight I left the renderer running without the linear-interpolation approximation, for a total of 3 hours, at a resolution of 61'700 patches. As you can see, the light is strange at the corners, but I think that's only a by-product of the relatively low resolution:

Here.

Quote:
Original post by BlackSheep
Maintaining a similar persistent vector of relative visibility between renderable polygons also has benefits in rendering speed as well as storage requirements.

IMO, the best trade-off (memory vs speed) for a full radiosity solution (shadows and multiple passes) is just having precomputed visibility information (you need just 1 bit per patch pair), because that's what takes most of the time. Thus, with a scene of 30.000 patches you need 30.000 * 30.000 / 8 = 107 MB of visibility information. Then you're just calculating form factors, which isn't that time-consuming (especially compared to the visibility calculation).
I'm gonna try it today and post results here.

EDIT: I just did a test scene with 30079 patches. With precomputed visibility (actually, it was a room where all patches are always visible, so each form-factor calculation was processed in full), I made 30079 passes.
It took 586 seconds (i.e. less than 10 minutes) for the full radiosity solution. Not bad for a 1.8 GHz CPU, if you consider there were 904 million form factors calculated ;-)
The point here is that making, say, 250 passes (very good quality) takes just 5 seconds, which is what I would call real-time usage for lighting rooms. If the number of passes is adjustable (a text file plus reload at run-time, in my case), you've got a pretty versatile lighting utility.

Quote:
Original post by BlackSheep
I found that a simple check for patches occupying the same polygon (and therefore invisible to each other) can improve calculation speed and storage requirements tremendously. I haven't read the previous threads, so this may be old news :)

Well, it should only help if you're just blindly testing the visibility of every patch against every other patch, right? Such patches (on the same polygon) would get rejected during the form-factor calculation anyway (because of the phi_p, phi_r angles). So you would save 1 (maybe 2) dot products for the price of a condition for each patch (possibly hundreds of thousands of condition executions throughout the complete radiosity solution). Otherwise you're testing the visibility of a given patch before testing the angles of the normals, which would be inefficient.
Personally, I have a text configuration file where you can specify self-shadowing. If you set it to false for a given object, it automatically shoots only at the patches of all other objects. So if I have a wall with, say, 5000 patches, any shooter on that wall shoots energy directly (without any per-patch checking) only at all the other objects, thus saving 5000 partial form-factor calculations per pass. Of course, this is good mainly for walls. If you had some more complicated object like a statue, the results would be better with self-shadowing on. But the majority of patches are usually just walls, so why not make use of this?

David: I can't think what's wrong with your last picture, where you get the artifacts at the corners. This might be a precision problem (accumulation of very small errors due to the very small FF values), or some recent code update.

Quote:
Original post by David Hart
I tried to implement the algorithm described here, where only every 4th patch is calculated and the intermediate patches' form factors can be estimated with linear interpolation in certain cases. It does run much faster, but I still haven't got it working and I get artifacts:

If you're calculating your form factors on the CPU, then interpolation is a great way to decrease the total calculation time. Still, I can't think of a reason why you wouldn't just temporarily decrease the patch resolution and, once you have the lights set up, let it calculate at the final (higher) resolution.
As for the artifacts: are you calculating every fourth patch only along each row of patches (horizontally), or every fourth patch vertically as well? In the latter case, you've got 4 corners of a 4x4 quad, and each corner has to be interpolated with all the neighbouring corners of the adjacent 4x4 quads, so it's not just a simple linear interpolation - you've got to weight it with regard to the neighbouring corners; see the sketch below. Also, shadow boundaries are more likely to show artifacts.
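As a sketch, with form factors computed only at grid points every 4 patches (edge handling omitted; gridFF is assumed to hold computed values at coordinates that are multiples of 4):

// Bilinearly interpolate the form factor for patch (x, y) from the
// four computed corners of the 4x4 cell that contains it.
static float InterpolatedFF(float[,] gridFF, int x, int y)
{
    int x0 = (x / 4) * 4;
    int y0 = (y / 4) * 4;
    float fx = (x - x0) / 4.0f;
    float fy = (y - y0) / 4.0f;

    float top = Lerp(gridFF[x0, y0], gridFF[x0 + 4, y0], fx);
    float bottom = Lerp(gridFF[x0, y0 + 4], gridFF[x0 + 4, y0 + 4], fx);
    return Lerp(top, bottom, fy);
}

static float Lerp(float a, float b, float t)
{
    return a + (b - a) * t;
}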

EDIT2: I had dropped the idea of interpolation when I was starting with radiosity and somehow forgot about it. But now that you've reminded me of it, I did some calculations:
A scene with 60k patches can be approximated with 4146 patches (each of them acting as the corner of a 4x4 patch quad). Since wall patches in a game are stationary, we can precompute their visibility (2 MB). In 1 second I managed to compute 401 passes (full FF calculation). This means it's possible to do it in real-time!
Now, I don't know how much time it would take to update the texture each frame (or maybe every 10th frame) - I'd have to try it. But the calculations themselves are definitely real-time! Besides, once you reached a certain number of passes (say, 1000), you could just keep the last textures and stop doing any more passes to raise the framerate. So, for stationary lights, real-time radiosity is entirely possible for common scenes.
Moving lights (like in Doom 3) couldn't be precalculated, but still, 401 passes per second leaves plenty of room for many other in-game activities (AI, navigation, controls).

What do you think?

[Edited by - VladR on March 31, 2006 9:12:53 PM]

Quote:
Original post by VladR
Well, it should only help if you're just blindly testing the visibility of every patch against every other patch, right? Such patches (on the same polygon) would get rejected during the form-factor calculation anyway (because of the phi_p, phi_r angles). So you would save 1 (maybe 2) dot products for the price of a condition for each patch (possibly hundreds of thousands of condition executions throughout the complete radiosity solution). Otherwise you're testing the visibility of a given patch before testing the angles of the normals, which would be inefficient.
Personally, I have a text configuration file where you can specify self-shadowing. If you set it to false for a given object, it automatically shoots only at the patches of all other objects. So if I have a wall with, say, 5000 patches, any shooter on that wall shoots energy directly (without any per-patch checking) only at all the other objects, thus saving 5000 partial form-factor calculations per pass. Of course, this is good mainly for walls. If you had some more complicated object like a statue, the results would be better with self-shadowing on. But the majority of patches are usually just walls, so why not make use of this?

I realise that logically a patch-by-patch test should be slower than simple per-polygon testing, or than just letting it drop out of the FF equation and saving the conditionals, but my test scenes were showing savings of up to 30 seconds on a 5-minute scene, so I kept it - every little helps, and that's quite a big help :) This may have been a geometry-dependent saving, though; I'll need to test the code with different scenes to reach a more reliable conclusion.

Quote:
EDIT2: I had dropped the idea of interpolation when I was starting with radiosity and somehow forgot about it. But now that you've reminded me of it, I did some calculations:
A scene with 60k patches can be approximated with 4146 patches (each of them acting as the corner of a 4x4 patch quad). Since wall patches in a game are stationary, we can precompute their visibility (2 MB). In 1 second I managed to compute 401 passes (full FF calculation). This means it's possible to do it in real-time!
Now, I don't know how much time it would take to update the texture each frame (or maybe every 10th frame) - I'd have to try it. But the calculations themselves are definitely real-time! Besides, once you reached a certain number of passes (say, 1000), you could just keep the last textures and stop doing any more passes to raise the framerate. So, for stationary lights, real-time radiosity is entirely possible for common scenes.
Moving lights (like in Doom 3) couldn't be precalculated, but still, 401 passes per second leaves plenty of room for many other in-game activities (AI, navigation, controls).

What do you think?

I'm not sure why you think you could only do static lighting in real-time. Form factors are constant and independent of lighting variables, so it should be possible to do whatever you like to the lights. You don't explicitly mention what data you're storing - visibility, complete FFs, or more?

Quote:
EDIT2: I had dropped the idea of interpolation when I was starting with radiosity and somehow forgot about it. But now that you've reminded me of it, I did some calculations:
A scene with 60k patches can be approximated with 4146 patches (each of them acting as the corner of a 4x4 patch quad). Since wall patches in a game are stationary, we can precompute their visibility (2 MB). In 1 second I managed to compute 401 passes (full FF calculation).

Could you please post a screenshot of the end result? Do you mean that you ONLY calculate the form factor for every 4th patch, and do linear interpolation on all the others? I'm curious to see whether the result is "visually" close to a full form-factor calculation.

Quote:
I'm not sure why you think you could only do static lighting in real-time. Form factors are constant and independent of lighting variables, so it should be possible to do whatever you like to the lights.

Well, not really. Form factors between two stationary objects are constant. But once we start talking about moving objects, just moving a light means recalculating all the form factors between each patch on the light and all the other patches. And then I could start moving an object in front of the light, and there are even more form factors to calculate...

[Edited by - David Hart on April 1, 2006 6:20:58 AM]

Quote:
Original post by David Hart
Well, not really. Form factors between two stationary objects are constant. But once we start talking about moving objects, just moving a light means recalculating all the form factors between each patch on the light and all the other patches. And then I could start moving an object in front of the light, and there are even more form factors to calculate...

Exactly. VladR mentioned precomputing static walls, without reference to moving objects, which would indeed require a complete recalculation of the form factors. However, for static geometry there is no restriction on the position, intensity or colour of the lights in the scene. Even the number of lights is relatively easy to play with, as only the first passes (the initial shooting of the lights' energy) will show an increase in calculation time, although there may be a corresponding increase in the number of passes required to reach an acceptable solution.
