## Recommended Posts

jeremie009    696

Hi,

I'm building a Lightmapper for my small engine and I'm running into a bit of a problem using radiosity. I'm splitting my scene into small patches and propagating the light using a simplified version of the form factor.

```csharp
private float FormFactor(Vector3 v, float d2, Vector3 receiverNormal, Vector3 emitterNormal,
                         float emitterArea)
{
    // v = vector from emitter to receiver, d2 = squared distance
    return emitterArea * -Vector3.Dot(emitterNormal, v) / (Pi * d2 + emitterArea);
}
```


The problem I'm having is with the bounce light: it never converges and keeps adding up energy. I could stop after some number of iterations, but the code is probably incorrect since the energy never goes down.

```csharp
if (Vector3.Dot(ne, lightdir) < 0)
{
    var form = FormFactor(lightdir, distance, nr, ne, emitter.Area);
}
```


This is the function where I add the bounce light.

lightdir is the vector from the emitter patch to the receiver.

ne is the normalized normal of the emitter patch.

nr is the normalized normal of the receiver patch.

I tried scaling my scene to see if maybe it was an energy or scaling problem, but it didn't work.

The only thing that actually worked was to divide the bounce light by 4, but that seems incorrect: in some scenes the light ended up converging, while in others it just kept adding more energy.

So I'm wondering, is there some kind of rule I'm missing? Should I add attenuation to the bounce light, or is the form factor enough? I spent the last week trying to piece it together, but most sources on the internet didn't give me clues on how to balance the bounce energy.

BTW, I chose the form factor because it's easy to run on the CPU.
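For context, here is a minimal sketch of the kind of gather loop involved (hypothetical `Patch` fields and a precomputed form-factor matrix, not the code above): the receiver scales everything it gathers by its own albedo, and that per-bounce absorption step is the only place where energy can leave the system.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Patch {
    float area;
    float albedo;     // reflectance in [0,1); below 1 so energy is absorbed
    float emission;   // self-emitted energy (> 0 for light sources)
    float received;   // energy gathered during the current pass
    float radiosity;  // energy the patch sends out on the next pass
};

// Disc-approximation form factor, same shape as the one in the post:
// cosE is the emitter-side cosine, d2 the squared distance.
float FormFactor(float cosE, float d2, float emitterArea) {
    if (cosE <= 0.0f) return 0.0f;
    return emitterArea * cosE / (3.14159265f * d2 + emitterArea);
}

// One gather pass: every receiver sums form-factor-weighted radiosity
// from every emitter, then scales the sum by its own albedo. Without
// that albedo factor (or with albedo == 1) nothing is absorbed and the
// bounces keep adding energy.
void BouncePass(std::vector<Patch>& patches,
                const std::vector<std::vector<float>>& ff) // precomputed form factors
{
    for (std::size_t r = 0; r < patches.size(); ++r) {
        float sum = 0.0f;
        for (std::size_t e = 0; e < patches.size(); ++e)
            if (e != r) sum += ff[r][e] * patches[e].radiosity;
        patches[r].received = patches[r].albedo * sum;
    }
    // double-buffered update: all received values were computed from the
    // old radiosity before any radiosity is overwritten
    for (auto& p : patches)
        p.radiosity = p.emission + p.received;
}
```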

##### Share on other sites
JoeJ    2587
d2 usually means squared distance, but you seem to give distance.
Maybe that's the only thing wrong...

##### Share on other sites
jeremie009    696

Thanks for noticing, but the distance I pass in is already squared.

##### Share on other sites
alvaro    21266
Try to compute the amount of power entering a patch and the amount of power coming out of it. If some power is being absorbed by each patch, the propagation should converge.

I thought the way people solve radiosity was by setting up a large sparse linear system of equations and solving it. There are methods that should be fast for that situation, notably the conjugate gradient method [EDIT: Never mind, your matrix is probably not symmetric]. But I've never done it myself. Edited by Álvaro
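To make the linear-system view concrete, here is a small Jacobi-iteration sketch (hypothetical signature, dense matrices for brevity): radiosity B solves B = E + R·F·B with R the diagonal of reflectances and F the form-factor matrix, and the iteration converges as long as reflectances stay below 1.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Jacobi iteration for B = E + R*F*B. Conjugate gradient would need a
// symmetric matrix, which R*F generally is not, so plain fixed-point
// iteration (equivalent to simulating bounces) is used instead.
std::vector<float> SolveRadiosityJacobi(const std::vector<float>& E,
                                        const std::vector<float>& rho,
                                        const std::vector<std::vector<float>>& F,
                                        int maxIters, float tol)
{
    std::size_t n = E.size();
    std::vector<float> B = E, next(n);
    for (int it = 0; it < maxIters; ++it) {
        float maxDelta = 0.0f;
        for (std::size_t i = 0; i < n; ++i) {
            float gathered = 0.0f;
            for (std::size_t j = 0; j < n; ++j)
                gathered += F[i][j] * B[j];          // power arriving at patch i
            next[i] = E[i] + rho[i] * gathered;      // reflect rho, absorb the rest
            maxDelta = std::max(maxDelta, std::fabs(next[i] - B[i]));
        }
        B.swap(next);
        if (maxDelta < tol) break; // converged
    }
    return B;
}
```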

##### Share on other sites
JoeJ    2587
I've had similar problems when doing this years ago; reading didn't help, but after some trial and error I got it working and found my own way to understand it. You can try my algorithm below.

I think there are two ways of doing it:

1. Send emitter light to receivers until only a small amount below a given threshold is left.

2. Reflect light around for a given number of bounces.

That's what I'm doing; the equations I came up with are slightly different from the stuff I've read about option 1.

```cpp
// r = receiver, e = emitter, lacc = summed light the receiver gets from all visible emitters
void InterreflectSamples (qVec3 &Rpos, qVec3 &Rdir, qVec3 &Epos, qVec3 &Edir, SampleNode *e, qVec3 &lacc)
{
    qVec3 diff = Epos - Rpos;

    float cosR = dot(Rdir, diff); // not really a cosine because diff is not normalized, but that's just an optimization and it works
    if (cosR > 0)
    {
        float cosE = -dot(Edir, diff);
        float d2 = diff.SqL() + 1.0e-11f; // SqL means squared length

        float area = (float)Edir[3]; // area is stored in the fourth component, because qVec3 is internally float[4]
        float ff = (cosR * cosE) / (d2 * (PI * d2 + area)) * area; // form factor of a disc

        if (cosE > 0)
        {
            qVec3 light = e->owncol[3] * e->owncol; // owncol = emitter diffuse color, owncol[3] = emission (> 0 for light sources)
            light += cmul (e->lrec, e->owncol); // lrec = light received from other samples, cmul = component-wise vector multiplication
            light[3] = 0;
            lacc += ff * light;
        }
    }
}
```
It should be used in a way like this:

```cpp
while (numBounces--)
{
    foreach (samples as receiver) // outer loop over receivers, implied by the original braces
    {
        vec accumulatedLight (0,0,0);

        // the visibility test also accumulates the emitter's contribution into accumulatedLight
        foreach (samples as emitter) if (EmitterIsVisibleFromReceiver (emitter, accumulatedLight))
        {
        }

        receiver->lrec = accumulatedLight; // <- bug fix here
    }
}

for all samples
{
    vec finalColorToProofResult = cmul (lrec, owncol) + owncol * owncol[3];
}
```

Hope this helps (and hopefully there is no bug in the second pseudocode).
To compare this with your code you would need to undo the optimizations (it seems I've removed that older code from my project).
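To ease that comparison, here is a sketch of the same disc form factor with the optimizations undone (a hypothetical minimal `Vec3`, not the qVec3 above). In the optimized version the two unnormalized "cosines" each carry a factor of the distance, so their product carries d2, which is divided back out; both functions return identical values.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Unoptimized disc form factor: normalized direction, true cosines.
float FormFactorDisc(const Vec3& rPos, const Vec3& rNrm,
                     const Vec3& ePos, const Vec3& eNrm, float eArea)
{
    Vec3 diff = { ePos.x - rPos.x, ePos.y - rPos.y, ePos.z - rPos.z };
    float d2 = dot(diff, diff) + 1.0e-11f;
    float invD = 1.0f / std::sqrt(d2);
    Vec3 dir = { diff.x * invD, diff.y * invD, diff.z * invD };
    float cosR =  dot(rNrm, dir);  // receiver cosine
    float cosE = -dot(eNrm, dir);  // emitter cosine
    if (cosR <= 0.0f || cosE <= 0.0f) return 0.0f;
    return cosR * cosE * eArea / (3.14159265f * d2 + eArea);
}

// Optimized version as in the post: the normalization is folded into an
// extra 1/d2 factor instead of normalizing diff.
float FormFactorDiscOpt(const Vec3& rPos, const Vec3& rNrm,
                        const Vec3& ePos, const Vec3& eNrm, float eArea)
{
    Vec3 diff = { ePos.x - rPos.x, ePos.y - rPos.y, ePos.z - rPos.z };
    float d2 = dot(diff, diff) + 1.0e-11f;
    float cosR =  dot(rNrm, diff);
    float cosE = -dot(eNrm, diff);
    if (cosR <= 0.0f || cosE <= 0.0f) return 0.0f;
    return (cosR * cosE) / (d2 * (3.14159265f * d2 + eArea)) * eArea;
}
```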

EDIT:
Fixed a bug noticed by JSandusky below Edited by JoeJ

##### Share on other sites
JoeJ    2587
Seems I've failed again to format the code nicely, and again I can't find the edit button.

##### Share on other sites
JSandusky    237

All I see as input is emitter color. What happens to the accumulated color once it's been entirely accumulated before the next pass? Are you averaging? Adding them to the emitter color? etc?

The emitter should have a "reflectance" value if you're doing more than a single bounce (assuming you aren't just averaging passes). Without absorption it'll never converge.

If you're using PBR you could use something like:

smoothness * (sqrt(smoothness) + roughness);


as a really rough approximation of absorption (that's Lagarde's specular dominant direction IIRC).

Personally, I really prefer rendering the scene from each lumel's position along the normal and multiplying by a weight map (in fisheye for draft, and hemicube for quality) for radiosity (on top of a brute force direct lighting map) and calling it a day.

Link to some source in my lightmapper for an example of how incredibly simple that approach can be:

https://github.com/JSandusky/Urho3D/blob/Lightmapping/Source/Tools/LightmapGenerator/FisheyeSceneSampler.cpp
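As an illustration of the weight-map idea (not code from the linked lightmapper; the face layout and resolution are assumptions), the per-pixel multipliers for the top face of a hemicube are the classic delta form factors: dF = dA / (pi * (x^2 + y^2 + 1)^2) for a face at unit distance, so summing weighted pixels of the rendered face approximates the cosine-weighted irradiance integral.

```cpp
#include <cmath>
#include <vector>

// Per-pixel weights ("multiplier map") for the TOP face of a hemicube.
// (x, y) are face coordinates in [-1, 1]; the face sits one unit above
// the lumel. The side faces would use the analogous side-face formula.
std::vector<float> HemicubeTopFaceWeights(int res)
{
    std::vector<float> w(res * res);
    float dA = (2.0f / res) * (2.0f / res); // pixel area on the face
    for (int j = 0; j < res; ++j) {
        for (int i = 0; i < res; ++i) {
            float x = -1.0f + (i + 0.5f) * 2.0f / res; // pixel-center coords
            float y = -1.0f + (j + 0.5f) * 2.0f / res;
            float r2 = x * x + y * y + 1.0f;
            w[j * res + i] = dA / (3.14159265f * r2 * r2); // delta form factor
        }
    }
    return w;
}
```

The top-face weights sum to the analytic form factor of that face (about 0.55); the four side faces account for the remainder, so the whole hemicube sums to 1.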

##### Share on other sites
JoeJ    2587

> What happens to the accumulated color once it's been entirely accumulated before the next pass?

You're right, there was a bug. I've fixed the original post.

There's no need for reflectance because the material is assumed perfectly diffuse.

Rendering environment maps for each texel may be a good idea for performance, but for accuracy you need high resolution.
The number of pixels describes both the distance and the area of an object, so it makes a big difference whether a small object with high emission
ends up taking 1 or 2 pixels in the environment map, resulting in banding or noise. I found 256 x 256 env maps still not good enough for my needs.
For sure that's not an issue if you handle light sources separately.

##### Share on other sites
alvaro    21266

> You're right, there was a bug. I've fixed the original post.

Please, don't do that. It makes it hard to understand the thread as a conversation. You should revert your edit and have a new post with the fixed code (or only the relevant parts, if that's more clear).

##### Share on other sites
jeremie009    696

I did try your code against mine just in case I was missing something, but the result is the same: the light keeps adding up instead of converging. It's fine to just do a couple of passes, but the problem arises when you need more precision and more passes. The values are supposed to settle after a few passes, but that is not happening in my case. Did you manage to do more than 5 passes without blowing up the light? I can't in my implementation, so I have to assume it is incorrect.

The reflection value is supposed to be the albedo color, but if your albedo is pure white, it'll reflect as much energy as it receives, which is incorrect.

The form factor seems to give away too much energy, so the bounces are really strong.

I could implement some sort of energy conservation, but I thought radiosity was more correct than other approximations.

##### Share on other sites
alvaro    21266
If your scene is a closed box with walls that have albedo 1 and some light source inside, then of course the energy doesn't have any way of leaving and your formulas won't converge. But as long as there is something dark in the scene (outer space will do, if the scene is not completely closed), it will converge. You can also make albedo less than 1, as it should be.
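The convergence argument is just a geometric series; a tiny sketch (hypothetical function, scalar energy only): with reflectance rho < 1 the total delivered energy approaches E / (1 - rho), while with rho == 1 the partial sums grow without bound, which is exactly the "light keeps adding up" symptom.

```cpp
#include <cmath>

// Total energy delivered after a number of bounces when every surface
// reflects the fraction rho of what it receives and absorbs the rest:
// E * (1 + rho + rho^2 + ...) -> E / (1 - rho) for rho < 1.
float TotalBouncedEnergy(float emission, float rho, int bounces)
{
    float total = 0.0f, carried = emission;
    for (int i = 0; i <= bounces; ++i) {
        total += carried; // energy delivered this bounce
        carried *= rho;   // the rest is reflected onward
    }
    return total;
}
```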

##### Share on other sites
jeremie009    696

The albedo was the issue. I had an outer-space scene but I never tried it. Anyway, thanks for the input.

##### Share on other sites
JoeJ    2587
(storing the results elsewhere and updating all lrec values at once before the next iteration). In this case a bit of damping helps (receiver->lrec = accumulatedLight * 0.98).

After porting to the GPU it's unavoidable that many values update at the same time, but I never got explosions, just fluctuations when there are heavy changes in the scene.
The highest albedo value I use is 0.9.

##### Share on other sites
jeremie009    696

When you ported your radiosity to the GPU, did you use some sort of hierarchy or just brute force?

##### Share on other sites
JSandusky    237
> When you ported your radiosity to the GPU, did you use some sort of hierarchy or just brute force?

In my case (rendering a view from the lumel), I started with brute force. Later I moved to an iterative lattice distribution pretty much exactly matching Hugo Elias' description of such an approach (render every fourth lumel, then find those in between and, if close enough, lerp, otherwise render; then find those in between that subset and lerp if close enough), just adding a "dominant plane" check where I dot-product the lumel's normal against the 6 cardinal planes and classify the lumel by the best-fitting plane. So far that's been sufficient.

For CPU/GPU-side real-time, I cluster "emitters" by dominant plane and distance, and then calculate a limited number of form factors (3 usually) for each lumel against those clusters. I then create a vertex buffer for the clusters; on the CPU I just do a simple directional light, but on the GPU I create a point cloud of multiple samples for each cluster (random distribution) and use transform feedback to calculate the results at the end of the frame (shadow maps already exist at this point). I then propagate that to the lumels and apply the data back to the lightmap, averaging the sample values. Quite fast; sending the updated lightmap to the GPU is slower than everything else combined.

> Rendering environment maps for each texel may be a good idea for performance, but for accuracy you need high resolution. The number of pixels describes both distance and area of an object, so it's a big difference if a small object with high emission ends up taking 1 or 2 pixels in the environment map, resulting in banding or noise. I found 256 x 256 env maps still not good enough for my needs. For sure that's not an issue if you handle light sources separately.

Pretty sure you have that backwards. Banding and noise don't present themselves until you start bumping up the resolution, in my experience so far. I normally use 32x32 for each hemicube face.

As you increase the resolution you also decrease the coarseness of the multiplier "map" (whether that map is a real map you precompute or you just calculate the value on the fly). Even though that map is normalized, the number of values that end up becoming significant contributing factors is substantially higher as the resolution increases. That coarseness at low resolution also really saves you when it comes to situations where the hemicube penetrates corner geometry; most methods of dealing with that result in poor occlusion at edges, because offsetting the sampling point to avoid interpenetration means not capturing the foreign exitance seen by the actual point (requiring an ambient occlusion pass to reacquire it).

Penetration will generally be quite severe in one or maybe two faces, at low res that coarseness will make those penetrated values practically meaningless to the overall contribution.

I've never seen banding at 128x128 hemicube faces or lower. It does appear to be a problem for some, but I'd imagine it has more to do with a faulty lumel-to-world-unit ratio and misguided attempts at irradiance caching (do it in surface space, not world space) than with the actual process of rendering from a lumel. The other villain of the approach is specular: GPU hardware texture filtering creates far more undesirable artifacts, as far as I've seen.

The big problem of that approach is the specular response. It's really easy to end up with an implementation that cannot converge (as in, it'll bleach out to white). By rendering from the lumel via your normal means of rendering, you're subject to the specular response of your materials, so you have to account for that; and when doing multiple passes against a lightmapping technique that also includes a specular response (ambient + directional, for example) you have to account for that as well.

Almost all radiosity research and study focuses on diffuse transfer. The original theory of rendering from the lumel also assumed a purely diffuse environment.
Edited by JSandusky