
Avoiding dark areas in ray tracing


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

8 replies to this topic

#1 george7378   Members   -  Reputation: 1185


Posted 08 March 2014 - 08:48 AM

Hi everyone,

 

I'm getting some nice pictures from my ray tracer but I've noticed a problem with some geometry setups. For example, this pile of spheres:

 

[attached image: pyramid_1.3h.png]

 

Between the lower and the middle level, the inner areas are completely dark. I'm not sure if this is realistic or not (I don't have any spheres to test it with) but it doesn't look right. I think I know why it's happening - my maximum ray depth is 5, and the rays probably get stuck in the cavities and never collect any colour. Is this realistic, and can I fix it without using infinite ray depth?

 

Thanks!




#2 Aressera   Members   -  Reputation: 1383


Posted 08 March 2014 - 12:55 PM

It doesn't look like your ray tracer has any diffuse reflection component (i.e. global illumination). Without the scattering induced by surface roughness, it may be impossible for certain areas to reflect back any light.
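For anyone following along, the diffuse-bounce idea can be sketched roughly like this (a hypothetical snippet, not from the original poster's tracer; all names are invented): instead of following the single mirror direction, a diffuse bounce picks a random direction over the hemisphere above the surface, weighted by Lambert's cosine law.

```cpp
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

// Cosine-weighted hemisphere sampling around a unit normal n: pick a point
// on the unit disc, then lift it onto the hemisphere. Directions near the
// normal are sampled more often, matching diffuse (Lambertian) scattering.
Vec3 cosineSampleHemisphere(const Vec3& n, std::mt19937& rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double r = std::sqrt(u(rng));                  // disc radius
    double phi = 2.0 * 3.14159265358979 * u(rng);  // disc angle
    // Build an orthonormal basis (t, b, n) around the normal.
    Vec3 t = std::fabs(n.x) > 0.5 ? Vec3{n.y, -n.x, 0.0} : Vec3{0.0, n.z, -n.y};
    double len = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
    t = {t.x / len, t.y / len, t.z / len};
    Vec3 b = {n.y * t.z - n.z * t.y, n.z * t.x - n.x * t.z, n.x * t.y - n.y * t.x};
    double x = r * std::cos(phi), y = r * std::sin(phi);
    double z = std::sqrt(1.0 - r * r);             // height above the surface
    return {x * t.x + y * b.x + z * n.x,
            x * t.y + y * b.y + z * n.y,
            x * t.z + y * b.z + z * n.z};
}
```

Averaging the radiance returned along many such directions is what lets light reach concave areas that no single mirror path can escape from.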



#3 Krypt0n   Crossbones+   -  Reputation: 2481


Posted 08 March 2014 - 01:35 PM

I think I know why it's happening - my maximum ray depth is 5, and the rays probably get stuck in the cavities and never collect any colour

You are right, that sounds like the reason.

 

The way to fix it is to not limit your rays to a particular depth, but to reduce each ray's contribution on every recursion.

Every surface in the real world (and I mean mirrors, glass, air, chrome too...) loses some of the light it transmits. The easiest way to simulate this is to reduce a ray's contribution by some amount per recursion level. A mirror ball might reduce it to e.g. 98% (you could set this value per material); it takes a while, but with increasing recursion the contribution eventually drops to e.g. 1%.

Once a ray reaches 1%, you can assume it won't contribute anything. (You can estimate the worst-case recursion depth by dividing the log of the cutoff value by the log of the reduction value, e.g. log(0.01)/log(0.98), and tweak both values to limit the time spent rather than looking up realistic values.)

That way you won't trace rays that have barely any contribution, and you'll trace important rays instead of cancelling them at an arbitrary recursion limit. The per-bounce reduction might also lead to a more realistic look.
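As a quick sanity check on that formula, the worst-case depth for a given per-bounce factor and cutoff works out like this (a throwaway helper for illustration, not code from any post in the thread):

```cpp
#include <cmath>

// Smallest recursion depth n at which factor^n drops below cutoff,
// i.e. n = ceil(log(cutoff) / log(factor)).
int maxDepth(double factor, double cutoff)
{
    return static_cast<int>(std::ceil(std::log(cutoff) / std::log(factor)));
}
```

With the values above, log(0.01)/log(0.98) gives a worst case of 228 bounces for a chain of 98%-reflective surfaces; most materials attenuate far faster, so typical paths terminate much sooner.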



#4 Bacterius   Crossbones+   -  Reputation: 8478


Posted 08 March 2014 - 04:23 PM

 

Krypt0n, on 08 March 2014 - 01:35 PM, said:

The way to fix it is to not limit your rays to a particular depth, but to reduce the amount of contribution on every recursion. [...]

 

The keyword for this is "russian roulette" (google it!), and it is indeed a way to go as deep as necessary (in the sense that the final image will be unbiased, exactly as if you had used infinite depth for all your rays). Usually, though, it is only turned on after two bounces, because it tends to increase variance (being a form of rejection sampling).

 

I should mention that it is only unbiased if properly implemented, and there are a bunch of variants floating around the internet (as usual when it comes to things like path tracing) which may or may not be equivalent, so tread carefully.
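In sketch form, the roulette step might look like this (hypothetical names; a real path tracer would usually derive the survival probability from the path throughput and clamp it):

```cpp
#include <algorithm>
#include <random>

struct RouletteResult { bool survives; double weight; };

// Russian roulette: terminate a path at random with probability 1 - p, and
// divide the survivors' contribution by p. On average this cancels out, so
// the image converges to the same result as tracing to unlimited depth.
RouletteResult russianRoulette(double throughput, int depth, std::mt19937& rng)
{
    if (depth < 2)
        return {true, 1.0};                     // skip the first couple of bounces
    double p = std::min(throughput, 0.95);      // survival probability
    std::uniform_real_distribution<double> u(0.0, 1.0);
    if (u(rng) >= p)
        return {false, 0.0};                    // path killed, stop tracing
    return {true, 1.0 / p};                     // reweight so the estimate stays unbiased
}
```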


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#5 george7378   Members   -  Reputation: 1185


Posted 08 March 2014 - 04:46 PM

OK, so you mean something like this:

 

trace(Ray ray, float attenuation)
{
    reflectionColour = refractionColour = (0, 0, 0);
    // Only spawn a secondary ray if its accumulated attenuation stays above the cutoff
    if (intersectedShape.reflectivity*attenuation > cutoff){reflectionColour = trace(reflectedRay, intersectedShape.reflectivity*attenuation);}
    if (intersectedShape.transparency*attenuation > cutoff){refractionColour = trace(refractedRay, intersectedShape.transparency*attenuation);}
    return localColour + reflectionColour + refractionColour; // localColour = direct shading at the hit point
}

 

i.e. I'd start my initial rays with an attenuation value of 1 and then every time the recursive rays hit a new object, if the attenuation value of the spawned ray would be less than the cutoff, I wouldn't spawn a new ray? I think that makes sense! It could also lead to better efficiency.



#6 george7378   Members   -  Reputation: 1185


Posted 08 March 2014 - 06:20 PM

Here's how it looks after the new algorithm:

 

[attached image: fin_1109_large_adaptive.png]

 

Still not sure if it's right, but there's some more light in there! The spheres don't reflect any ambient light by the way, just specular.



#7 george7378   Members   -  Reputation: 1185


Posted 09 March 2014 - 09:20 AM

Also, another question: when I multiply colours together, for example when I multiply the colour of a reflection by the colour of the surface it reflects off, is it standard to normalise the colour vectors?



#8 Bacterius   Crossbones+   -  Reputation: 8478


Posted 09 March 2014 - 01:24 PM

Also, another question: when I multiply colours together, for example when I multiply the colour of a reflection by the colour of the surface it reflects off, is it standard to normalise the colour vectors?

 

No, that is usually a hack to try and keep your color values in [0..1] before you get around to implementing HDR rendering (with tonemapping). There is no limit to how bright a pixel can appear.

 

As an example, the perceived intensity of the sun can easily be over seven orders of magnitude higher than that of a candle. Normally you would just handle that by multiplying the colour vectors directly, and at the very end of your rendering pipeline apply a tonemapping algorithm to bring those values back into the [0..1] range while trying to preserve the large differences in brightness, so the image can be displayed on a monitor. There might be a few other post-processing steps you'd want to do as well, either before or after tonemapping, but that's the gist of it.
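For instance, one of the simplest such curves is Reinhard's global operator (just one common choice, not something the thread prescribes):

```cpp
// Reinhard tonemapping: L / (1 + L) maps any non-negative luminance into
// [0, 1), preserving the ordering of brightness values. Zero stays zero,
// and arbitrarily bright values approach but never exceed 1.
double reinhard(double luminance)
{
    return luminance / (1.0 + luminance);
}
```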

 

I recommend outputting to an existing HDRI format rather than rolling your own tonemapping algorithm if you're not targeting realtime, so that you can use existing tools to tonemap it, but it's up to you of course.




#9 Geri   Members   -  Reputation: 177


Posted 13 March 2014 - 04:04 PM

Have some fake colour multiplier even in the dark areas, otherwise you will get this. I use a 0.6 or 0.8 ambient light multiplier for shadowed places; those are ideal for my taste, but the particular number basically varies with every scene.


Create your 3D RPG: Maker3D

 




