vonengel

Photon Tracing


Hi guys, at the moment I am implementing the photon mapping algorithm and have encountered a few problems in the first pass of the algorithm (photon tracing). A light source (1,1,1 | r,g,b) emits a photon; this photon hits a blue sphere (0,0,1), is transmitted, and afterwards hits a diffuse white surface (1,1,1). How do I have to scale the power of the photon after the transmission? And with how much energy will the photon be stored in the photon map? Thanks in advance for your help. Greetings, David

The photon's energy is scaled by the color and the diffuse reflectance coefficient of each surface it reflects off. In your example, the final photon would be stored as (0, 0, 1 * (photon incidence direction 1 . surface normal 1) * (photon incidence direction 2 . surface normal 2) * scaled photon power). Note that a photon emitted from a pure white light won't have a pure white storage color - it has to be scaled down by 1/num_emitted_photons to ensure the energy is distributed evenly in the scene.
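A minimal sketch of this per-bounce scaling in Python (the names are illustrative, and the cosine factors mentioned above are omitted for brevity):

```python
def bounce_photon_power(power, surface_color):
    """Scale a photon's RGB power channel-wise by the surface's diffuse
    reflectance (its color). Hypothetical helper, not from any renderer."""
    return tuple(p * c for p, c in zip(power, surface_color))

# A white light emitting N photons gives each one power 1/N per channel.
num_emitted = 100000
start_power = (1.0 / num_emitted,) * 3

# Bounce via the blue sphere (0,0,1): red and green are filtered out,
# so the photon stored on the white wall carries only blue energy.
after_blue = bounce_photon_power(start_power, (0.0, 0.0, 1.0))
```

Because the blue surface kills the red and green channels, the stored photon ends up as (0, 0, 1/N), which matches the (0, 0, ...) storage color described above.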

Quote:
Original post by ApochPiQ
The photon's energy is scaled by the color and the diffuse reflectance coefficient of each surface it reflects off. In your example, the final photon would be stored as (0, 0, 1 * (photon incidence direction 1 . surface normal 1) * (photon incidence direction 2 . surface normal 2) * scaled photon power). Note that a photon emitted from a pure white light won't have a pure white storage color - it has to be scaled down by 1/num_emitted_photons to ensure the energy is distributed evenly in the scene.


Thanks for the quick response.
What do you mean by "(photon incidence direction 1 . surface normal 1)" and "(photon incidence direction 2 . surface normal 2)"?
Do you mean the incident direction of the photon and the normal of the surface at the intersection point?

In my render toolkit the reflectance coefficient is given by a constant in [0,1] (Phong material).

If the photon hits a non-diffuse surface (for example a specular one), will the specular coefficient be used to scale the power of the photon?

Quote:
What do you mean by "(photon incidence direction 1 . surface normal 1)" and "(photon incidence direction 2 . surface normal 2)"?


He means doing a dot product between the incident light direction and normal. It's exactly like N dot L.

Quote:
Original post by vonengel
Thanks for the quick response.
What do you mean by "(photon incidence direction 1 . surface normal 1)" and "(photon incidence direction 2 . surface normal 2)"?
Do you mean the incident direction of the photon and the normal of the surface at the intersection point?

In my render toolkit the reflectance coefficient is given by a constant in [0,1] (Phong material).

If the photon hits a non-diffuse surface (for example a specular one), will the specular coefficient be used to scale the power of the photon?



Yes, that's exactly right. Sorry, I wasn't very clear on that. Note that you actually need to invert the photon's incident direction or the angle will not be acute and the dot product will give you ugly results. This also only applies for Lambertian diffuse surfaces.

In general, the photon weight will be scaled by the value of the surface's BRDF given the incident direction and surface normal. If you have a texturing system you obviously need to evaluate the surface's actual attributes at the point of intersection. This coefficient should always be on the interval [0,1] as you said to ensure the light transport works correctly.

In Phong you can also have surfaces which are partly specular and partly diffuse, or partly specular and partly transmissive, etc. In these cases, the coefficients must all add up to 1. You then choose a random number on [0,1], and use it to select which attribute of the surface will be obeyed (transmission, specular/diffuse reflection, etc.) There is also a chance the photon is absorbed (if the Phong coefficients sum to less than 1, which they should for all physically accurate materials) in which case the light of the photon is deposited (if you are using the photon map for irradiance) or discarded (if you are using it for radiance). This entire process is generally referred to as "Russian Roulette."
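The Russian roulette selection described here can be sketched like this (Python; the coefficient names are illustrative):

```python
import random

def russian_roulette(kd, ks, kt, xi=None):
    """Pick a photon interaction from Phong coefficients kd (diffuse),
    ks (specular) and kt (transmission). They should sum to <= 1;
    the remainder is the absorption probability."""
    if xi is None:
        xi = random.random()  # uniform random number on [0, 1)
    if xi < kd:
        return "diffuse"
    if xi < kd + ks:
        return "specular"
    if xi < kd + ks + kt:
        return "transmit"
    return "absorb"
```

For example, with kd=0.5, ks=0.3, kt=0.0 a photon is absorbed 20% of the time, which is exactly the energy loss a physically plausible material should exhibit.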

Yes, I am aware of that. I have already implemented the photon tracing step using Russian roulette. The only thing I was uncertain about in that step was the scaling of the photon power.

Another question, concerning the visualisation step of the algorithm.
The rendering equation is split up into 4 components:
-Direct illumination (shadow feelers)
-Specular and glossy reflection (ordinary ray tracing)
-Caustics (density estimate in the caustic photon map)
-Multiple diffuse reflections (distribution ray tracer; for the secondary rays the global photon map is evaluated)

When I simply add all 4 terms together I do not get a reasonable image, because the contribution from the direct illumination is so strong and the multiple-diffuse and caustic parts are so small.
Is there any other method than simply adding the terms together? Or do I have an error in my direct illumination calculation?

Quote:
Original post by vonengel
Yes, I am aware of that. I have already implemented the photon tracing step using Russian roulette. The only thing I was uncertain about in that step was the scaling of the photon power.

Another question, concerning the visualisation step of the algorithm.
The rendering equation is split up into 4 components:
-Direct illumination (shadow feelers)
-Specular and glossy reflection (ordinary ray tracing)
-Caustics (density estimate in the caustic photon map)
-Multiple diffuse reflections (distribution ray tracer; for the secondary rays the global photon map is evaluated)

When I simply add all 4 terms together I do not get a reasonable image, because the contribution from the direct illumination is so strong and the multiple-diffuse and caustic parts are so small.
Is there any other method than simply adding the terms together? Or do I have an error in my direct illumination calculation?


From my understanding, you simply add the terms together. You most likely have the direct illumination wrong. I was dealing with the same problem yesterday: I simply didn't take the quadratic attenuation into account. It made the rendered image look like it was shot beside a nuclear explosion.
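As a sanity check, the four-pass split really is just a sum. A toy sketch in Python, with stub passes returning made-up scalar radiance values (a real renderer would return RGB and take a surface point plus viewing direction):

```python
# Stub passes standing in for the four components listed above;
# the values are arbitrary, just to show how they combine.
def direct_illumination(p):
    return 0.60   # shadow rays towards the lights
def specular_reflection(p):
    return 0.10   # ordinary recursive ray tracing
def caustics(p):
    return 0.05   # density estimate in the caustic photon map
def indirect_diffuse(p):
    return 0.15   # global photon map via the secondary rays

def shade(p):
    # The outgoing radiance is simply the sum of the four decoupled terms.
    return (direct_illumination(p) + specular_reflection(p)
            + caustics(p) + indirect_diffuse(p))
```

If the direct term dwarfs everything else, the bug is almost always inside one of the components (for example missing attenuation), not in the summation itself.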

Quote:

From my understanding, you simply add the terms together. You most likely have the direct illumination wrong. I was dealing with the same problem yesterday: I simply didn't take the quadratic attenuation into account. It made the rendered image look like it was shot beside a nuclear explosion.


Oh, how did you incorporate the quadratic attenuation (from the intersection point to the light sources)?
At the end I perform a tone mapping step: I select the highest color value of the whole image and divide all other values by this maximum. Do you also perform such a step?
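The max-value normalization described here might look like this in Python (a simple linear tone map; the flat pixel-list layout is illustrative):

```python
def tonemap_by_max(image):
    """Divide every channel by the brightest channel in the image.
    `image` is a flat list of (r, g, b) tuples."""
    peak = max(channel for pixel in image for channel in pixel)
    if peak <= 0.0:
        return image  # avoid division by zero on an all-black image
    return [tuple(channel / peak for channel in pixel) for pixel in image]

pixels = [(0.2, 0.4, 0.8), (1.6, 0.0, 0.4)]
mapped = tonemap_by_max(pixels)  # brightest channel (1.6) maps to 1.0
```

Note that a single very bright pixel will crush everything else towards black with this scheme, which is one reason more elaborate tone mapping operators exist.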

Quote:
Original post by vonengel
Quote:

From my understanding, you simply add the terms together. You most likely have the direct illumination wrong. I was dealing with the same problem yesterday: I simply didn't take the quadratic attenuation into account. It made the rendered image look like it was shot beside a nuclear explosion.


Oh, how did you incorporate the quadratic attenuation (from the intersection point to the light sources)?
At the end I perform a tone mapping step: I select the highest color value of the whole image and divide all other values by this maximum. Do you also perform such a step?


The quadratic attenuation is based on the fact that the density of photons falls off quadratically. For a point light source, as you move away, the surface area of the sphere surrounding it is 4pi*r^2... so you have to divide the light power by that as the distance increases.

Myself, I don't have point light sources, for the simple reason that no such thing exists in nature. In order for light to be emitted, it has to come out of something. Instead, I have an emissive power value for all 3D objects (any object can emit light). I just sample the surface points of light-emitting objects with shadow rays. And for each light point, I assume a hemispheric energy distribution, so I weight both by the cosine of the angle at the light source and by hemispheric attenuation (divide by 2*pi*r^2). It might not be entirely accurate, but it gives good results.

If I ever want something that produces the same effect as a point light source, I can simply create a very small spherical light source. That being said, I also don't have infinite planes. I have discs (planes with a size-limiting radius).
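The inverse-square falloff for a point light is a one-liner. A sketch in Python (the function name is made up for illustration):

```python
import math

def point_light_irradiance(power, distance):
    """Spread an isotropic point light's power over the sphere of
    radius `distance`, whose surface area is 4*pi*r^2."""
    return power / (4.0 * math.pi * distance * distance)

e_near = point_light_irradiance(100.0, 1.0)
e_far = point_light_irradiance(100.0, 2.0)  # doubling distance quarters it
```

The hemispheric variant described above is the same idea with 2*pi*r^2 in the denominator, since the light only leaves into one half-space.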

Quote:

The quadratic attenuation is based on the fact that the density of photons falls off quadratically. For a point light source, as you move away, the surface area of the sphere surrounding it is 4pi*r^2... so you have to divide the light power by that as the distance increases.


Thanks, I will implement this and hope it will all fit well together. :-) I'll let you know when it is done.

Quote:

Myself, I don't have point light sources, for the simple reason that no such thing exists in nature. In order for light to be emitted, it has to come out of something. Instead, I have an emissive power value for all 3D objects (any object can emit light). I just sample the surface points of light-emitting objects with shadow rays. And for each light point, I assume a hemispheric energy distribution, so I weight both by the cosine of the angle at the light source and by hemispheric attenuation (divide by 2*pi*r^2). It might not be entirely accurate, but it gives good results.

If I ever want something that produces the same effect as a point light source, I can simply create a very small spherical light source. That being said, I also don't have infinite planes. I have discs (planes with a size-limiting radius).

Hmm, that's right. There are no point lights in nature. Most of the time I use area lights, but unfortunately they are very costly to evaluate. The old Monte Carlo caveat: you need many sample rays to get a good estimate. Perhaps I will implement the shadow photon technique, so I don't need the shadow feelers so often.

