
Photon mapping


irreversible    2860
Before I delve into explaining precisely what I'm doing, I'll just ask a simple question. I think I have the photon map set up properly - the photon tracing stage seems to be going smoothly and the distribution is good. However, when estimating the radiance at the points required for the frame buffer, I'm getting some really uncool results. Here are screenshots:

search radius r = 1, max photons in the search radius np = 50 (up to 50 samples are averaged to get the queried value): link
r = 1, np = 400: link
r = 1, np = 1500: link

As far as I've read, 200-400 photons are more than adequate to get a good estimate of the radiance at the required points, even in more complex scenes. Also note that there are more than enough photons in the search radius for np = 50 and np = 400 (for np = 1500 the count varies from 40 to 1500 photons in the search radius).

Looking at the second image, you'll notice that all of the surfaces are "checkered", which implies bad or uneven distribution (which is nevertheless structured). I'm pretty sure this isn't a matter of distribution so much as of proper radiance estimation. I mean, am I supposed to use 1500+ samples to get a good image? Check out this place - scroll down to "2. Area Lights and Soft Shadows" and take notice of the third image (hotlinking for your convenience): no such checkeredness.

Also notice how my third image has circles that have formed around the bright spot (which I can't really explain either, since the light source is a point and the photon distribution is guaranteed to be uniform) and how the floor is still checkered when viewed close up. This leads me to suspect that I'm either missing a final phase in my renderer or (since most things seem to be working quite nicely, including the generation of good results with large parameters) that there's some fundamental flaw in my code.

Can anyone suggest off the top of their head what could be wrong (e.g. how many samples should be enough)?

[Edited by - irreversible on December 12, 2006 10:05:31 PM]

irreversible    2860
Hm - I thought I made sure hotlinking was working. Oh well - quick-fixed the problem.

I have been using the following method: since a packed photon direction is commonly represented by two 8-bit variables (theta and phi, which is the method I'm using right now), these map rather naturally to a 256x256 grid of intensity values that can be exported as a bitmap (to visualize the relative spherical distribution in all directions).
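
For anyone wanting to try the same trick, a rough sketch of the packing and binning described above (my own illustrative names, not the actual code; the quantization follows the usual Jensen-style theta/phi bytes):

#include <cmath>
#include <cstdint>
#include <vector>

const double kPi = 3.14159265358979323846;

struct PackedDir { uint8_t theta, phi; };

PackedDir packDirection(double x, double y, double z) // (x, y, z) assumed normalized
{
    PackedDir p;
    int t = static_cast<int>(256.0 * std::acos(z) / kPi);             // theta: [0, pi]  -> [0, 255]
    p.theta = static_cast<uint8_t>(t > 255 ? 255 : t);
    int f = static_cast<int>(256.0 * std::atan2(y, x) / (2.0 * kPi)); // phi: [-pi, pi] -> [0, 255]
    p.phi = static_cast<uint8_t>(f < 0 ? f + 256 : f);
    return p;
}

// 65536-entry histogram: one counter per (theta, phi) cell. Brighter cells in
// the exported bitmap mean more photons travelling in that direction.
void binDirection(std::vector<uint32_t>& histogram, const PackedDir& p)
{
    histogram[p.theta * 256 + p.phi] += 1;
}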

Also, by looking at the images produced after the render, it's easy to deduce that the photons have to be present in the correct pattern to form the quite evidently correct lighting pattern. Knowing these two things, I can tell that there's a linear distribution of photons throughout the scene and that the distribution itself is correct.

What is lacking is the finesse (the smooth transitions) that should be produced in the render pass by radiance estimation. I hope you can open the images now (sadly, each in a separate window) to see what I have in mind. By clicking on Original Size on the eSnips page, you can view the images in their native 512x512 size.

As stated above, I can get decent results, but only by using an exorbitant number of photons for the estimation, which - for lack of a better term - is way too damn slow.

I'm using Jensen's code, so hopefully there shouldn't be anything too unexpected. The estimation code looks like this:


// Sum the power of all photons returned by the nearest-neighbour search.
for (i = 1; i <= np.found; i++)
{
    TPhoton *p = np.index[i];
    // Direct visualization: no BRDF and no normal-based culling of photons
    // that arrived from behind the surface.
    irrad[0] += p->power[0];
    irrad[1] += p->power[1];
    irrad[2] += p->power[2];
}

// Density estimate: divide by the area of the disc the photons cover.
// np.dist2[0] holds the squared distance to the farthest photon found.
double tmp = (1.0 / PI) / np.dist2[0];
irrad[0] *= tmp;
irrad[1] *= tmp;
irrad[2] *= tmp;
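
(For reference, and just restating what the snippet above computes: this is Jensen's standard density estimate - the flux of the N nearest photons summed and divided by the area of the disc they cover,

    E(x) ~= (1 / (pi * r^2)) * sum over p of dPhi_p,   with r^2 = np.dist2[0]

i.e. an irradiance estimate; for a diffuse surface you'd still multiply by the reflectance over pi to get the outgoing radiance.)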

ApochPiQ    23005
Direct visualization of the photon map is extremely inefficient, as Jensen notes repeatedly. The real wins come from combining a direct lighting pass (Whitted style) with partial irradiance data from the photon map. Jensen details several techniques for this in the photon mapping book as well as freely available papers all over his site.

In short, what you're getting is precisely what you should be getting.
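
For what it's worth, the split usually looks roughly like this in code. This is only a hedged sketch of the idea: the Vec3/Color types and the shadeDirect/irradianceEstimate/causticsEstimate helpers are placeholders, not anyone's actual API.

struct Vec3  { double x, y, z; };
struct Color { double r, g, b; };

// Placeholders for whatever the renderer already provides:
Color shadeDirect(const Vec3& p, const Vec3& n);        // Whitted-style: shadow rays to the lights
Color irradianceEstimate(const Vec3& p, const Vec3& n); // lookup in the global photon map
Color causticsEstimate(const Vec3& p, const Vec3& n);   // lookup in a denser caustics map

Color shade(const Vec3& p, const Vec3& n, const Color& diffuse)
{
    const double kInvPi = 1.0 / 3.14159265358979323846;

    Color direct   = shadeDirect(p, n);
    Color indirect = irradianceEstimate(p, n); // soft and low-frequency; errors are far less visible here
    Color caustics = causticsEstimate(p, n);

    // Lambertian BRDF = diffuse/pi, applied to the irradiance estimates.
    Color out;
    out.r = direct.r + diffuse.r * kInvPi * (indirect.r + caustics.r);
    out.g = direct.g + diffuse.g * kInvPi * (indirect.g + caustics.g);
    out.b = direct.b + diffuse.b * kInvPi * (indirect.b + caustics.b);
    return out;
}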


Also, note that it is generally advisable to use a stratified stochastic distribution of emitted photons, not a purely uniform one.
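
One way to do that (my own example, not from the book): drive the uniform-sphere mapping with an N x N grid of jittered random numbers instead of fully independent ones, so the photons stay stochastic but can't clump as badly.

#include <cmath>
#include <cstdlib>
#include <vector>

struct Dir { double x, y, z; };

// Map two numbers in [0,1) to a direction distributed uniformly over the sphere.
Dir uniformSphere(double u1, double u2)
{
    double z = 1.0 - 2.0 * u1;                 // cos(theta) uniform in [-1, 1]
    double s = 1.0 - z * z;
    double r = std::sqrt(s > 0.0 ? s : 0.0);
    double phi = 2.0 * 3.14159265358979 * u2;
    Dir d = { r * std::cos(phi), r * std::sin(phi), z };
    return d;
}

std::vector<Dir> stratifiedEmissionDirections(int n) // emits n*n photons from a point light
{
    std::vector<Dir> dirs;
    dirs.reserve(n * n);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
        {
            // Jitter within cell (i, j) of the unit square instead of
            // sampling the whole square independently each time.
            double u1 = (i + std::rand() / (RAND_MAX + 1.0)) / n;
            double u2 = (j + std::rand() / (RAND_MAX + 1.0)) / n;
            dirs.push_back(uniformSphere(u1, u2));
        }
    return dirs;
}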

phantomus    734
I once implemented photon mapping (it's been a while, I admit), and what I remember is that I got poor results from the built-in VS2005 random generator. Use the Mersenne Twister instead - it's faster and much better. That solved similar problems for me.

EDIT: Didn't read the last post above mine; looking at your images it indeed looks like you use an even distribution. Keep it, but add randomness (using the Mersenne Twister).
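
The swap is basically just a handful of lines (a minimal sketch: modern C++ ships MT19937 in <random>, while a 2005-era project would drop in one of the freely available MT19937 implementations - the usage pattern is the same):

#include <random>

std::mt19937 rng(1234u);                                    // fixed seed keeps renders reproducible
std::uniform_real_distribution<double> uniform01(0.0, 1.0);

// Drop-in replacement for rand() / (double)RAND_MAX:
inline double frand() { return uniform01(rng); }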

irreversible    2860
Quite counter-intuitively, after a little bit of pondering, it seems best to use a smaller number of emitted photons, a relatively small number of photons in the estimate and a larger search radius to achieve a greater sense of smoothness. This, of course, is only suitable for ambient diffuse light - directional lights and caustics require just the opposite.

In any case, if the pixellation is normal, then that's that, I suppose :). I added the Mersenne Twister - I'm not sure of the benefits just yet; the visual quality didn't improve much. However, I haven't done any speed benchmarking against rand() - on large scenes this could make a world of difference.

Thanks for the feedback, guys!

cwhite    586
If you plan on using the photon map for indirect or direct lighting, then you should also look into the "final gather" operation, which is computationally expensive (instead of querying the photon map at the shading point, it shoots out hundreds of rays over the hemisphere and queries the photon map at the points those rays hit). This keeps the error in the photon map from being visualized directly, and generally produces decent results for indirect illumination, though I'm a bit too rusty to remember whether it also produces good results for direct illumination.
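
To make that a bit more concrete, a final gather for a diffuse surface looks roughly like this. Hedged sketch only: traceRay, cosineSampleHemisphere and radianceEstimateFromPhotonMap are placeholders for whatever the renderer provides.

struct Vec3  { double x, y, z; };
struct Color { double r, g, b; };

// Placeholders for existing renderer functionality:
bool  traceRay(const Vec3& origin, const Vec3& dir, Vec3& hitPos, Vec3& hitNormal);
Vec3  cosineSampleHemisphere(const Vec3& normal);
Color radianceEstimateFromPhotonMap(const Vec3& pos, const Vec3& normal);

Color finalGather(const Vec3& p, const Vec3& n, int numRays) // typically a few hundred rays
{
    Color sum = { 0.0, 0.0, 0.0 };
    for (int i = 0; i < numRays; ++i)
    {
        Vec3 dir = cosineSampleHemisphere(n);
        Vec3 hitPos, hitNormal;
        if (!traceRay(p, dir, hitPos, hitNormal))
            continue;
        // The cheap photon-map estimate is only evaluated one bounce away, so
        // its blotchiness gets averaged out before it reaches the visible point.
        Color li = radianceEstimateFromPhotonMap(hitPos, hitNormal);
        sum.r += li.r;  sum.g += li.g;  sum.b += li.b;
    }
    // With cosine-weighted sampling, the cosine term and the 1/pi of a diffuse
    // BRDF cancel against the pdf, leaving a plain average; multiply by the
    // surface's diffuse reflectance afterwards.
    Color out = { sum.r / numRays, sum.g / numRays, sum.b / numRays };
    return out;
}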

talas    217
I don't think direct illumination is a good goal for photon mapping, unless you truly have static scenes. Occlusion problems will become readily apparent in dynamic scenes, but the eye will be more forgiving for indirect lighting. It is my understanding that this is generally the adopted view with radiosity solutions as well.

Along the same lines, real-time raytracing engines are accepting the fact that primary rays are better processed through a rasterizer than through actual path tracing - which should be done for secondary rays (including global illumination algorithms if implemented). The raster engine is just so much faster...

phantomus    734
Quote:
primary rays are better processed through a rasterizer than through actual path tracing


Which realtime ray tracer are you referring to? I know of several fast ray tracers (including one I wrote), but none of them uses rasterization to optimize the 'first hit' for primary rays.

Using rasterization would make it rather hard to have adaptive supersampling, by the way.

talas    217
Quote:
Original post by phantomus
Quote:
primary rays are better processed through a rasterizer than through actual path tracing


Which realtime ray tracer are you referring to? I know of several fast ray tracers (including one I wrote), but none of them uses rasterization to optimize the 'first hit' for primary rays.

Using rasterization would make it rather hard to have adaptive supersampling, by the way.


Sorry to kind of hijack the thread..

There are a couple of papers from IEEE RT '06 that I saw combining rasterization and ray tracing. Pixar used this method in Cars, though that method is not realtime. The coherency of primary rays is really well-suited to a raster engine. A few posters at the conference did similar things on Cell processors as well.

Don't get me wrong, ray-object intersection w/ Kd-trees is very fast, but hardware rasterization is still winning out for primary rays. I think if raytracing is going to be adopted in realtime, it's going to at least be merged with rasterization in a similar method first. Then if special purpose hardware comes out for raytracing, maybe we'll see a full switch.

I agree with you on adaptive supersampling. I don't think this issue was addressed in any implementation I saw. Pixar uses sub-pixel geometry with many passes, so I think they get around it that way - but that won't be the case for real-time. I may be misremembering where I read this, but I think I recall 16 samples per pixel with a jittered camera.

phantomus    734
Quote:
I think if raytracing is going to be adopted in realtime, it's going to at least be merged with rasterization in a similar method first.


I fully agree. I recently gave some lectures on ray tracing and its role in future games, and came to the same conclusion with the students: ray tracing is not going to take over in one revolutionary switch; it needs to be introduced gently and preferably as an 'optional feature'. Perhaps that will convince NVidia to add hardware support for it. :)

I also suppose you're right that combining rasterization and ray tracing for the first hit is probably faster than full kd-tree traversal, even for full-screen ray tracing (as opposed to ray tracing some objects), especially for the kind of scene complexity we are looking at at the moment (<50k visible triangles). I guess the reason it isn't used by many tracers is that it only speeds up primary rays, while everyone is shooting for recursive ray tracing. Goals contradict practice, though: real-time ray tracers mostly seem to use simple scenes with few recursive effects.

I believe one speaker at RT06 complained about this: everybody is speeding up the 'core process', but nobody is exploiting the benefits of ray tracing, which in the end makes real-time ray tracing a mere academic exercise instead of the visual breakthrough that everyone hopes for.

By the way, I just started a project with some students to build a real-time ray tracing benchmark, to give the 'general public' a better idea of what ray tracing can do and what kind of performance can be expected. We ordered an 8-core machine to test on (Moore's law will help us out once the benchmark is released), and on this machine we will have real-time performance (~30 fps). Our course has both programmers and visual artists, so it's an interesting team.

Actually, in my tests, computing the first hit is usually faster with ray tracing than with rasterization, at least compared with Reyes renderers (i.e. what Pixar uses, and what the Cars paper mentioned above is about).

The benefits of Reyes rendering come when you want to do antialiasing, motion blur and depth of field. Because the Reyes algorithm separates sampling from shading, you can achieve very high-quality antialiasing (we typically use up to 400 samples per pixel for rendering fur) with minuscule overhead. Turning on motion blur is essentially free. That's something you'll never get out of a ray tracer.

The point made in the Cars paper is that a hybrid rasterization/raytracing algorithm works well for film production, but is a nightmare to maintain as a code base, because you've essentially got two complete renderers stitched together.

greenhybrid    108
I think "hardware-triangles" may beat ray-tracing-primaries in the triangle count regions phantom mentioned (<50k).

But looking at CAD systems or terrain visualisation with >50M triangles, I think ray tracers are unbeatable. I've just written a ray tracing terrain renderer (some of you might have seen my IOTD over the last few days) which can render 134M triangles on a 512MB machine (the terrain is stored in <400MB; I expect 1G triangles on a 4GB machine) at a semi-interactive framerate (current results on my AthlonXP 1800+ [traversal not yet fully optimized]: 0.5-3 fps @ 800x600, afair).
Doubling both the width and the height of the heightmap (so 4x the triangles in total) just adds one traversal step in my ray tracer (in a tree of recursion depth 13, that costs only about 1/13 in performance), while a rasterizer may see twice the overdraw and has to draw 4x the triangles.

In conclusion, I'd say that a ray tracer (even a pure software/CPU one) is able to beat rasterizer hardware.
