roos

Photon mapping question


Hi, I've been working on wrapping my brain around Henrik Wann Jensen's book on photon mapping. For the most part I understand it, though at the moment I'm avoiding subsurface scattering and participating media since the volume rendering equation looks pretty scary, lol... Anyway, one thing I don't understand: he says in his book that after you emit photons from a given light source, THEN you scale the power of each photon by the number of photons emitted. But why not scale each photon's power from the start? Like, say you know you're going to emit 100,000 photons from a given light source; then when you emit each photon, simply scale its power by 1/100,000 to begin with. Maybe I'm missing something? The only reason I can think of why you would delay the scaling is if the number of emitted photons is adaptively determined by the code (say, if there is some metric the program can use to figure out when enough photons have been emitted), but afaik he doesn't mention any proper way of doing that in the book. Thanks! roos

If you are always emitting a constant number, you don't necessarily have to scale at all. But yeah, I'd scale them before firing them, unless there's some reason you can't, like if you want to keep emitting photons for a fixed amount of time instead of a fixed count.
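Something like this rough sketch is what I mean by scaling before firing; all the names here are made up, and the actual tracing of each photon through the scene is assumed to happen somewhere else:

#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Photon { float pos[3], dir[3], power; };   // grayscale power, for brevity

// Hypothetical tracer, assumed to exist elsewhere: follows one photon through
// the scene and appends any stored surface hits to the photon map.
void tracePhoton(const float origin[3], const float dir[3], float power,
                 std::vector<Photon>& photonMap);

void emitFromPointLight(const float lightPos[3], float lightPower,
                        int nPhotons, std::vector<Photon>& photonMap)
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> u(0.0f, 1.0f);

    // Scale up front: each photon carries lightPower / nPhotons from the start,
    // instead of scaling all stored photons by 1/nPhotons after emission.
    const float photonPower = lightPower / float(nPhotons);

    for (int i = 0; i < nPhotons; ++i) {
        // Uniform random direction on the unit sphere.
        const float z   = 1.0f - 2.0f * u(rng);
        const float phi = 6.2831853f * u(rng);
        const float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
        const float dir[3] = { r * std::cos(phi), r * std::sin(phi), z };

        tracePhoton(lightPos, dir, photonPower, photonMap);
    }
}

The point is just that the 1/nPhotons factor is known up front here, so it can be folded into each photon's power at emission time.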

But there are things I don't understand about photon mapping, having never read the book. Why would you need to take the normal of the surface into consideration? Isn't that just an approximation anyway? Fewer photons will hit a surface at a steep angle.

I don't know, I've never worked with photon tracing, but IIRC you can shoot as many photons as you want, so perhaps calculating the energy of each photon later lets you shoot photons until you think the quality is good enough. But you cannot know this when you shoot them, because when you shoot a photon you don't know how many others you will use later.
Could this be the reason?

Edit: Eelco was faster :-)

I just finished reading Jensen's book, apart from the participating media chapter...

You can choose whatever emission strategy you want, but with what he advocates in his book, I wondered about the same thing as well. He specifies a fixed number of photons to be emitted beforehand, and then uses a "projection map" so that the lights emit only in the directions where there is geometry (because wasting photons on empty space is useless).
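To picture the projection map part, here is a rough sketch of the idea as I understand it (made-up names; flagging the cells against the scene geometry is assumed to be done in a preprocess):

#include <vector>

// A grid of cells over the sphere of emission directions; cells that "see"
// geometry from the light are flagged in a preprocess (not shown here).
struct ProjectionMap {
    int resTheta = 0, resPhi = 0;
    std::vector<bool> hasGeometry;            // resTheta * resPhi cells

    float activeFraction() const {
        if (hasGeometry.empty()) return 0.0f;
        int active = 0;
        for (bool b : hasGeometry) if (b) ++active;
        return float(active) / float(hasGeometry.size());
    }
};

// Photons are emitted only toward flagged cells, so each photon's power gets
// scaled by the fraction of active cells to keep the estimate unbiased.
float projectedPhotonPower(float lightPower, int nPhotons, const ProjectionMap& pm)
{
    return (lightPower / float(nPhotons)) * pm.activeFraction();
}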

As for adapting the number of photons on the fly... it doesn't seem very useful. I suppose you could give the more important lights more photons, but in the strategy Jensen describes it's better to have photons of as nearly equal power as possible (that is the motivation behind "Russian roulette"), so it would be better to distribute the photons among the lights in proportion to their power, so that more powerful lights emit more photons.

So, my conclusion is that with a strategy like this it is indeed pointless to scale the photon power afterwards. You can just divide the total power of the light by the number of photons you plan to emit from that light, and let each photon carry that much power. The way I would do it, all emitted photons carry the same power: say you have a total light power of 500 watts and you want to emit 500,000 photons, then you let each photon carry 0.001 watts of power.
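As a rough sketch of how I would split the budget so that every photon ends up with the same power (again, made-up names, just to illustrate):

#include <vector>

struct Light { float power; };   // grayscale power, for brevity

// Split a total photon budget among the lights in proportion to their power,
// so every emitted photon carries (roughly) the same flux.
std::vector<int> photonsPerLight(const std::vector<Light>& lights, int totalPhotons)
{
    float totalPower = 0.0f;
    for (const Light& l : lights) totalPower += l.power;

    std::vector<int> counts;
    for (const Light& l : lights)
        counts.push_back(int(totalPhotons * (l.power / totalPower) + 0.5f));
    return counts;
}

// Example: lights of 400 W and 100 W with a 500,000 photon budget get
// 400,000 and 100,000 photons respectively, each carrying 0.001 W.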

Quote:
Original post by roos


Maybe I'm missing something? The only reason I can think of why you would delay the scaling is if the number of emitted photons is adaptively determined by the code (say, if there is some metric the program can use to figure out when enough photons have been emitted), but afaik he doesn't mention any proper way of doing that in the book.


That's exactly what most photon mappers do. The user specifies that they want a certain number of photons stored in each photon map, and then the raytracer fires photons until enough photons have been stored. However, not all photons get stored (some photons won't hit anything, some implementations of Russian roulette will terminate photons without storing them, etc.). Therefore, you can't predict in advance how many photons will be shot to fill up the photon maps to the specified level.
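A minimal sketch of that loop (my own names, not code from the book):

#include <vector>

struct Photon { float pos[3], dir[3], power; };

// Hypothetical helper, assumed to exist elsewhere: emits one photon from the
// light, traces it through the scene (including Russian roulette), and appends
// any stored hits to 'map', each carrying the photon's unscaled power.
void emitAndTraceOnePhoton(float lightPower, std::vector<Photon>& map);

// Keep emitting until the requested number of photons has been *stored*, then
// scale every stored photon by 1 / nEmitted, since that count is only known
// once the loop finishes.
void fillPhotonMap(float lightPower, int targetStored, std::vector<Photon>& map)
{
    int nEmitted = 0;
    while (static_cast<int>(map.size()) < targetStored) {
        emitAndTraceOnePhoton(lightPower, map);
        ++nEmitted;
    }
    const float scale = 1.0f / float(nEmitted);
    for (Photon& p : map)
        p.power *= scale;
}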

Quote:
Original post by cwhite
Quote:
Original post by roos


Maybe I'm missing something? The only reason I can think of why you would delay the scaling is if the number of emitted photons is adaptively determined by the code (say, if there is some metric the program can use to figure out when enough photons have been emitted), but afaik he doesn't mention any proper way of doing that in the book.


That's exactly what most photon mappers do. The user specifies that they want a certain number of photons stored in each photon map, and then the raytracer fires photons until enough photons have been stored. However, not all photons get stored (some photons won't hit anything, some implementations of Russian roulette will terminate photons without storing them, etc.). Therefore, you can't predict in advance how many photons will be shot to fill up the photon maps to the specified level.


Jensen says in his book that no matter how many photons are stored, you can't just divide the light's power among only the photons that get stored.

Quote:
Original post by Max_Payne
Quote:
Original post by cwhite
Quote:
Original post by roos


Maybe I'm missing something? The only reason I can think of why you would delay the scaling is if the number of emitted photons is adaptively determined by the code (say, if there is some metric the program can use to figure out when enough photons have been emitted), but afaik he doesn't mention any proper way of doing that in the book.


That's exactly what most photon mappers do. The user specifies that they want a certain number of photons stored in each photon map, and then the raytracer fires photons until enough photons have been stored. However, not all photons get stored (some photons won't hit anything, some implementations of Russian roulette will terminate photons without storing them, etc.). Therefore, you can't predict in advance how many photons will be shot to fill up the photon maps to the specified level.


Jensen says in his book that no matter how many photons are stored, you can't just divide the light's power among only the photons that get stored.


Of course. You're dividing the light's power by the total number of photons shot, not just by the ones that got stored. With most implementations, nShot > nStored.

Hi,

My cable modem died a while ago, so... wow, I log in today and there are so many replies to my post :) Thanks guys, I get it now! I always assumed you just told each light to emit X photons and didn't worry about how many actually got stored, but specifying the number of photons you want stored makes more sense, since that's what determines the output quality.

roos

