Avoiding dark areas in ray tracing

Hi everyone,

I'm getting some nice pictures from my ray tracer but I've noticed a problem with some geometry setups. For example, this pile of spheres:

[attachment=20292:pyramid_1.3h.png]

Between the lower and the middle level, the inner areas are completely dark. I'm not sure if this is realistic or not (I don't have any spheres to test it with) but it doesn't look right. I think I know why it's happening - my maximum ray depth is 5, and the rays probably get stuck in the cavities and never collect any colour. Is this realistic, and can I fix it without using infinite ray depth?

Thanks!

It doesn't look like your ray tracer has any diffuse reflection component (i.e. global illumination). Without the scattering induced by surface roughness, it may be impossible for certain areas to reflect back any light.

I think I know why it's happening - my maximum ray depth is 5, and the rays probably get stuck in the cavities and never collect any colour

You are right, that sounds like the reason.

The way to fix it is not to limit your rays to a particular depth, but to reduce the amount of contribution on every recursion.

Every surface in the real world (and I mean also mirrors, glass, air, chrome...) has some loss in transmission of light. The easiest way to simulate this is to reduce the contribution by some amount per recursion level. A mirror ball would reduce it to e.g. 98% (you could set this value per material); it would take a while, but with increasing recursion the contribution would eventually fall to e.g. 1%.

And once a ray reaches 1%, you can assume it won't contribute anything. (You can estimate the worst-case recursion depth by dividing the log of the cancellation value by the log of the reduction value, e.g. log(0.01)/log(0.98) ≈ 228, and then tweak both values to restrict the time consumption instead of looking up realistic values.)

That way you won't waste time on rays that have barely any contribution, and you will trace the important rays instead of canceling them due to some recursion limit. The reduction in contribution might also lead to a more realistic look.
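
A quick sketch of that worst-case estimate in C++ (the 0.98 reduction and 0.01 cutoff are just the example values from above, tunable per scene rather than measured constants):

#include <cmath>
#include <cstdio>

// Worst-case recursion depth: the number of bounces after which a ray's
// contribution, reduced by 'reduction' per bounce, drops below 'cutoff'.
int maxRecursionDepth(double reduction, double cutoff)
{
    return static_cast<int>(std::ceil(std::log(cutoff) / std::log(reduction)));
}

int main()
{
    std::printf("%d\n", maxRecursionDepth(0.98, 0.01)); // prints 228
}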

The keyword for this is "Russian roulette" (google it!), and it is indeed a way to go as deep as necessary (in the sense that the final image will be unbiased, exactly as if you had used infinite depth for all your rays). Usually, though, it is only turned on after two bounces, because it tends to increase variance (being a form of rejection sampling).

I should mention that it is only unbiased if it is properly implemented, and there are a bunch of variants floating around the internet (as usual when it comes to things like path tracing) which may or may not be equivalent, so tread carefully.
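
A minimal sketch of the unbiased form, assuming a path tracer that carries a scalar throughput per ray (the names and the 0.95 survival cap are illustrative choices, not anything prescribed in this thread):

#include <algorithm>
#include <random>

std::mt19937 rng;
std::uniform_real_distribution<float> uniform01(0.0f, 1.0f);

// Returns false if the path should be terminated. 'throughput' is the
// path's accumulated attenuation so far; dividing the survivors' throughput
// by the survival probability compensates for the paths that were killed,
// which is what keeps the estimator unbiased.
bool russianRoulette(float& throughput, int depth)
{
    if (depth < 2)                           // only start after two bounces
        return true;
    float p = std::min(throughput, 0.95f);  // survival probability
    if (uniform01(rng) >= p)
        return false;                        // path terminated
    throughput /= p;                         // compensate surviving paths
    return true;
}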

OK, so you mean something like this:

Colour trace(Ray ray, float attenuation)
{
    Colour reflectionColour(0, 0, 0), refractionColour(0, 0, 0);
    // Only spawn secondary rays whose accumulated attenuation is still above the cutoff.
    if (intersectedShape.reflectivity * attenuation > cutoff)
        reflectionColour = trace(reflectedRay, intersectedShape.reflectivity * attenuation);
    if (intersectedShape.transparency * attenuation > cutoff)
        refractionColour = trace(refractedRay, intersectedShape.transparency * attenuation);
    return surfaceColour + reflectionColour + refractionColour;
}

i.e. I'd start my initial rays with an attenuation value of 1, and then every time a recursive ray hits a new object, if the attenuation of the spawned ray would be less than the cutoff, I wouldn't spawn it? I think that makes sense! It could also lead to better efficiency.

Here's how it looks after the new algorithm:

[attachment=20310:fin_1109_large_adaptive.png]

Still not sure if it's right, but there's some more light in there! The spheres don't reflect any ambient light by the way, just specular.

Also, another question: when I multiply colours together, for example when I multiply the colour of a reflection by the colour of the surface it reflects off, is it standard to normalise the colour vectors?

Also, another question: when I multiply colours together, for example when I multiply the colour of a reflection by the colour of the surface it reflects off, is it standard to normalise the colour vectors?

No, that is usually a hack to try and keep your color values in [0..1] before you get around to implementing HDR rendering (with tonemapping). There is no limit to how bright a pixel can appear.

As an example, perceived intensity from the sun can easily be over seven orders of magnitude higher than from a candle. Normally you would just handle that by multiplying the color vectors directly, and at the very end of your rendering pipeline you would apply a tonemapping algorithm to bring those values back into the [0..1] range while trying to preserve the large differences in brightness, so that the image can be displayed on a monitor. There might be a few other post-processing steps you would want to do as well, either before or after tonemapping, but that's the gist of it.
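
For illustration only, one of the simplest tonemapping operators (Reinhard), applied per channel; this is just a common choice, not the specific algorithm anyone above had in mind:

#include <cmath>

// Reinhard tonemapping followed by gamma correction.
// 'c' is a linear HDR colour channel in [0, inf); the result lands in [0, 1).
float tonemapChannel(float c)
{
    float mapped = c / (1.0f + c);           // Reinhard: compresses highlights
    return std::pow(mapped, 1.0f / 2.2f);    // gamma 2.2 for display
}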

I recommend outputting to an existing HDRI format rather than rolling your own tonemapping algorithm if you're not targeting realtime, so that you can use existing tools to tonemap it, but it's up to you of course.
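
If you want a concrete starting point, PFM (Portable Float Map) is about the simplest existing HDR format to write. A sketch, assuming a row-major float RGB buffer with the top row first (the buffer layout is an assumption, not from the thread):

#include <cstdio>
#include <vector>

// Writes a PFM file: raw 32-bit float RGB triples. The negative scale in
// the header marks the data as little-endian, and PFM stores scanlines
// bottom-to-top, hence the reversed row loop.
void writePFM(const char* path, const std::vector<float>& rgb, int w, int h)
{
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return;
    std::fprintf(f, "PF\n%d %d\n-1.0\n", w, h);
    for (int y = h - 1; y >= 0; --y)  // bottom row of the image first
        std::fwrite(&rgb[static_cast<size_t>(y) * w * 3],
                    sizeof(float), static_cast<size_t>(w) * 3, f);
    std::fclose(f);
}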

Have some fake color multiplier even in the dark locations, otherwise you will get this. I use a 0.6 or 0.8 ambient light multiplier for shadowed places; those are ideal for my taste, but the particular number basically varies with every scene.
