Distributed Photon Mapping: I Was Right

Started by Max_Payne
31 comments, last by ApochPiQ 18 years, 10 months ago
Well, my renderer is far from done, but I have done some experiments that seem to prove I was right. I made a thread about distributed photon mapping recently, asking whether, by averaging multiple passes at lower photon densities, we could obtain higher image quality with reduced noise that converges towards the right solution. Well, I programmed it, and here is what I obtained. This is direct photon map visualisation; I will eventually improve the system by using the photon map only for indirect illumination.

After 1 pass: [image]

After 17 passes: [image]

ApochPiQ said that by averaging, the image would end up losing its features and diverging. However, as you can see, that isn't the case. In fact, some image features are becoming clearer (there is a noticeable shadow over the sphere after 17 passes).
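The accumulation itself is nothing fancy, just a running average of full radiance buffers after each pass; a minimal sketch of that step (the buffer layout and names below are illustrative, not the exact code of my renderer):

```cpp
#include <cstddef>
#include <vector>

// One full-resolution radiance buffer: width * height * 3 floats (R, G, B).
using Buffer = std::vector<float>;

// Fold pass number 'passIndex' (1-based) into the running average:
//   avg_k = avg_{k-1} + (x_k - avg_{k-1}) / k
void accumulatePass(Buffer& average, const Buffer& pass, std::size_t passIndex)
{
    for (std::size_t i = 0; i < average.size(); ++i)
        average[i] += (pass[i] - average[i]) / static_cast<float>(passIndex);
}
```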

Looking for a serious game project?
www.xgameproject.com
It looks quite nice but you didn't say how long each pass took, and what it would look like doing a single pass with 17X as many photons...
Quote: Original post by d000hg
It looks quite nice but you didn't say how long each pass took


Too long. About 30 minutes per pass. The quality is still unsatisfying since it's direct visualisation of the photon map.

Quote: and what it would look like doing a single pass with 17X as many photons...


Each pass is rendered using a little over 3 million photons. It takes 40 MB of RAM per million photons, times two, since I need twice that to build the kd-tree (photon heap), so 80 MB per million photons. The thing is, I have 2 GB of RAM, and that's not enough for 17 times as many photons (I would need over 4 GB). That's the whole point of this multi-pass approach.
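Just to spell out the arithmetic on those figures (the constants below are simply the numbers quoted above, nothing more precise than that):

```cpp
#include <cstdio>

int main()
{
    const double mbPerMillionStored = 40.0; // stored photon data, as quoted above
    const double buildFactor        = 2.0;  // temporary copy needed while building the kd-tree
    const double millionsPerPass    = 3.0;  // photons traced per pass
    const int    passes             = 17;

    const double perPassMB    = millionsPerPass * mbPerMillionStored * buildFactor;
    const double singleShotMB = perPassMB * passes; // one pass with 17x the photons

    std::printf("per pass:        ~%.0f MB\n", perPassMB);                                // ~240 MB
    std::printf("17x in one pass: ~%.0f MB (~%.1f GB)\n", singleShotMB, singleShotMB / 1024.0);
    return 0;
}
```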

Looking for a serious game project?
www.xgameproject.com
Wow, looks really good! Out of curiosity, is it plausible to do the distributed rendering over the internet?

(note: I have about zero experience with photon mapping/raytracers)
You'll remember that I said it will look like it works in certain cases. This very simple case is one of them. Try it with a high-frequency color bleeding pattern like the checkered-wall variant of the Cornell box, or with a high-detail caustic. Also, run a side-by-side comparison with a high-photon render. They'll be different.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Quote: Original post by Cypher19
Wow, looks really good! Out of curiosity, is it plausible to do the distributed rendering over the internet?

(note: I have about zero experience with photon mapping/raytracers)


The whole point is to do it over the internet ;) Each pass can be performed on a different node, allowing the load of the rendering process to be distributed.

Quote: You'll remember that I said it will look like it works in certain cases. This very simple case is one of them. Try it with a high-frequency color bleeding pattern like the checkered-wall variant of the Cornell box, or with a high-detail caustic. Also, run a side-by-side comparison with a high-photon render. They'll be different.


Different? Possibly. The point is, it's converging. If it converges here, then I would say that applying the same averaging technique to any scene will work, as long as each rendering node uses the same rendering process.

Right now my concern is to eliminate those nasty borders at wall/wall and wall/ceiling intersections... Then I will work on separating the direct illumination, and use the photon map for indirect illumination only.

Looking for a serious game project?
www.xgameproject.com
The fact that it converges for a Cornell box with one sphere in it has absolutely nothing to do with whether it will converge for other situations.

ApochPiQ is right. This example has a very low-frequency irradiance function. Because it is low frequency, averaging several samplings will give a good approximation to the true function.

But try the examples that ApochPiQ suggested and you will find that your method obliterates all the high-frequency detail.
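Put in standard terms (treating each pass as an independent, identically distributed estimate): write the radiance estimate from one pass with $n$ photons as $\hat{L}_n = L + b_n + \varepsilon$, where $L$ is the true radiance, $b_n$ is the bias of the density estimate (large when the photon count is low and the search radius is wide), and $\varepsilon$ is zero-mean noise. Averaging $M$ independent passes gives

\[
\frac{1}{M}\sum_{i=1}^{M}\hat{L}_n^{(i)} \;\longrightarrow\; L + b_n,
\qquad
\operatorname{Var}\!\left[\frac{1}{M}\sum_{i=1}^{M}\hat{L}_n^{(i)}\right] = \frac{\operatorname{Var}\!\left[\hat{L}_n\right]}{M}.
\]

So the averaging removes the noise (variance falls as $1/M$) but converges to the biased value of a single low-density pass, not to $L$ itself; that residual bias is exactly the blurring of high-frequency detail that the suggested test scenes would expose.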
Quote: Original post by Max_Payne
Right now my concern is to eliminate those nasty borders at wall/wall and wall/ceiling intersections... Then I will work on separating the direct illumination, and use the photon map for indirect illumination only.


Those "nasty" borders? They are actually correct. What you are seeing is diffuse interreflection as color from one wall bleeds onto the adjacent wall/ceiling at the edge. You WANT that behavior, if you need convincing, just look at any corner in a real room, and you'll see the real life effect.

Quote: The whole point is to do it over the internet ;) Each pass can be performed on a different node, allowing the load of the rendering process to be distributed.


Sorry, I should have clarified. I know the whole point behind distributed rendering is to do it over an intranet/internet, but I was asking whether the amount of data that needs to be sent allows for a benefit from doing it over the internet (i.e., you upload a client exe, we all let it run, and then you remotely tell the exe to compute the algorithm for a certain clump of photons).
VERY nice. Your image is really impressive, but even better is the way you're experimenting to try to find better ways to do things.

As some have already said, your experiments haven't really proved that all cases are convergent, but rather demonstrate that cases exist in which convergence does occur.

But I'm strongly inclined to believe that you're right. Why not try the technique on some of the difficult cases, as ApochPiQ suggests? If it works, then your technique introduces a host of really interesting possibilities.

