Max_Payne

Photon Mapping, Need Explanations


If someone has worked with photon mapping, I could use some explanations. Jensen says he stores the photons in a balanced kd-tree... I would like to know:

1. What exactly is a balanced kd-tree, and how are the photons stored in it? Is there only one per node, and what do the data structures for the nodes look like?

2. What is the most efficient technique to find the n nearest photons to a point (x, y, z) in space, assuming the photons are stored in a balanced kd-tree?

3. What do we do with the n nearest photons? Do we simply calculate the lighting they would produce at the point (x, y, z), sum the lighting all the photons would produce, and divide that by the circular area in which they were found?

4. Suppose I have a frame buffer of a render, containing color values for all pixels in the form of 32-bit floating-point RGB triplets. To produce "HDR", would I simply find the maximum R, maximum G, and maximum B, and map the values for all pixels linearly from 0.0 to 1.0 between 0 and those maxima?

There is a wide variety of ways to approach kd-trees, and a large number of strategies for implementing the basic theory. Jensen uses a set of tactics geared specifically to the needs of photon mapping.

A kd-tree is a variant of a binary tree. Each node has zero, one, or two child nodes. The "left" branch holds points whose decision value is less than the parent node's; the "right" branch holds points with a greater or equal value. Jensen chooses the decision axis by finding the coordinate axis with the largest spread over all the points being arranged into the tree.

For example, if the set of points being stored is very spread out on the x axis but compact on y and z, Jensen's method will use the x axis as the decision variable for the first node. The points are sorted by their x coordinates, and the median (middle point by index, not necessarily the point closest to the spatial center) of the resulting set is used as the root node. All of the points to the "left" (x < root x) are then passed into a recursive function which does the same thing over again; the remaining points are processed similarly. For Jensen's purposes, each node in the tree corresponds to precisely one photon.

A balanced kd-tree attempts to store the same number of nodes in the left branch as in the right branch. This holds no matter which node you examine (i.e. for any node, the number of descendants on the left and on the right should be close to equal). This gives a better representation of the spatial division and improves the performance of lookups in the tree later on.
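To make that construction concrete, here is a minimal sketch of the recursive median-split build (my own illustration, not Jensen's code; it assumes a simple Photon struct with a pos[3] array and stores each subrange's node at the subrange's middle index):

#include <algorithm>
#include <vector>

struct Photon
{
    float pos[3];    // position of the photon hit
    float power[3];  // RGB flux carried by the photon
    int   axis;      // splitting axis chosen for this node (0 = x, 1 = y, 2 = z)
};

// Recursively build a balanced kd-tree over photons[first, last).
// Each call picks the axis with the largest spatial extent, puts the median
// photon (by index) at the middle of the range to act as the node, and
// recurses into the two halves.
void BuildKdTree(std::vector<Photon>& photons, int first, int last)
{
    if (first >= last)
        return;

    // Find the axis with the largest extent over this subset of photons.
    float lo[3] = {  1e30f,  1e30f,  1e30f };
    float hi[3] = { -1e30f, -1e30f, -1e30f };
    for (int i = first; i < last; ++i)
    {
        for (int a = 0; a < 3; ++a)
        {
            lo[a] = std::min(lo[a], photons[i].pos[a]);
            hi[a] = std::max(hi[a], photons[i].pos[a]);
        }
    }
    int axis = 0;
    if (hi[1] - lo[1] > hi[axis] - lo[axis]) axis = 1;
    if (hi[2] - lo[2] > hi[axis] - lo[axis]) axis = 2;

    // Median split: nth_element partially sorts so that the median photon
    // (by that axis) lands in the middle slot; everything left of it is
    // smaller, everything right of it is greater or equal.
    int median = (first + last) / 2;
    std::nth_element(photons.begin() + first,
                     photons.begin() + median,
                     photons.begin() + last,
                     [axis](const Photon& a, const Photon& b)
                     { return a.pos[axis] < b.pos[axis]; });
    photons[median].axis = axis;

    BuildKdTree(photons, first, median);     // "left" subtree (smaller coordinates)
    BuildKdTree(photons, median + 1, last);  // "right" subtree (greater or equal)
}

Calling BuildKdTree(photons, 0, (int)photons.size()) leaves the array arranged so that the middle element of any subrange is the node of that subrange, which is what the lookup sketch further down relies on.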

Jensen uses a flat-heap storage method for his trees, as opposed to a linked-node method. The linked-node method works like a linked list, but each node has two links - one for the left child and one for the right child. This leads to horrible memory fragmentation and cache thrashing problems, though, so for performance the flat-heap method is almost always preferred. In the flat-heap method, all of the tree nodes are stored in one big array, and a formula gives the index of a node's left or right child from the node's own index. It can be hard to visualize and isn't very intuitive, but it makes sense once you get used to it and is very fast compared to a linked-node model. Unfortunately I can't recall the formulas used off the top of my head, but Jensen details them properly in several of his papers.
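The usual convention for a 1-indexed flat heap, which is likely (though not certainly) what Jensen uses, is:

// Conventional 1-indexed flat-heap layout (array slot 0 left unused):
inline int LeftChild (int i) { return 2*i;     }
inline int RightChild(int i) { return 2*i + 1; }
inline int Parent    (int i) { return i/2;     }

A node at index i therefore finds its children and parent with simple arithmetic; whether Jensen uses exactly this layout or the 0-indexed variant (children at 2i+1 and 2i+2) is worth checking against his papers.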


Photon lookup is fairly straightforward. Basically you use the kd-tree like a binary search tree. Start with the root node, and compare the node's x value (or whatever the decision value for that node was) with the query point's corresponding coordinate. Use this comparison to decide whether to descend into the left or the right child branch first, and repeat recursively. At each node you visit, check the distance between the node's photon and the query point, and use a simple priority queue (a max-heap keyed on distance) to keep track of the n closest photons found so far. The one detail that's easy to miss: on the way back up, if the distance from the query point to a node's splitting plane is smaller than the distance to the farthest photon currently in the queue, you also have to search the other child branch, or you can miss closer photons sitting just across the plane. Really the only unusual part is the kd-tree traversal; the n-closest bookkeeping should be very familiar to you and very easy to handle efficiently.
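Here is a sketch of that lookup written against the median-subrange layout from the build sketch above (again just an illustration, not Jensen's code; std::priority_queue plays the role of the fixed-size max-heap he describes):

#include <cmath>
#include <queue>
#include <vector>

// Same Photon struct and median-subrange layout as in the build sketch above.
struct Photon
{
    float pos[3];
    float power[3];
    int   axis;
};

// Entry kept in the priority queue while searching.
struct Found
{
    float dist2;            // squared distance from the query point
    const Photon* photon;
    bool operator<(const Found& o) const { return dist2 < o.dist2; }  // max-heap on distance
};

// Gather the n photons nearest to 'query' from photons[first, last), where the
// middle element of every subrange is that subrange's node.
void FindNearest(const std::vector<Photon>& photons, int first, int last,
                 const float query[3], size_t n, std::priority_queue<Found>& heap)
{
    if (first >= last)
        return;

    int median = (first + last) / 2;
    const Photon& p = photons[median];

    // Squared distance from the query point to this node's photon.
    float d2 = 0.0f;
    for (int a = 0; a < 3; ++a)
    {
        float d = query[a] - p.pos[a];
        d2 += d*d;
    }
    if (heap.size() < n)
    {
        Found f = { d2, &p };
        heap.push(f);
    }
    else if (d2 < heap.top().dist2)
    {
        heap.pop();
        Found f = { d2, &p };
        heap.push(f);
    }

    // Signed distance from the query point to this node's splitting plane.
    float planeDist = query[p.axis] - p.pos[p.axis];

    // Descend into the child on the query point's side of the plane first...
    if (planeDist < 0.0f)
        FindNearest(photons, first, median, query, n, heap);
    else
        FindNearest(photons, median + 1, last, query, n, heap);

    // ...then check the far child, but only if the plane is closer than the
    // worst photon currently in the queue (otherwise it can't hold anything better).
    if (heap.size() < n || planeDist*planeDist < heap.top().dist2)
    {
        if (planeDist < 0.0f)
            FindNearest(photons, median + 1, last, query, n, heap);
        else
            FindNearest(photons, first, median, query, n, heap);
    }
}

After the call, the top of the queue is the farthest of the n photons found, so its squared distance is the r^2 you need for the density estimate below.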


Once the n nearest photons are found, the irradiance estimate has to be computed. The easiest way is just a naive average: add up the intensities and colors of all the photons, then divide by the surface area they represent. Jensen's method is to use the distance from the query point to the most distant of the n photons as the radius of a circle; a simple pi*r^2 then gives the area covered by the sampled photons. However, this approach requires millions of photons to look good. A better method is to run the n photons through a filtering function that determines their relative contributions to the sample and intelligently "reconstructs" the irradiance from their values. Several different suggestions have been made; Jensen favors a simple cone filter, which provides better visual results but is still highly limited and requires a fairly large number of photons. Other filters such as Gaussian filters have been tested with good results but poor performance.
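For reference, the naive (unfiltered) estimate described above looks roughly like this as a sketch (not Jensen's listing), reusing the Found queue from the lookup sketch; multiplying the result by the surface's diffuse BRDF afterwards gives the reflected radiance:

#include <algorithm>
#include <queue>

// Naive density estimate: sum the power of the n nearest photons and divide
// by the area of the disc that contains them (pi * r^2, where r is the
// distance to the farthest of the n photons).
void EstimateIrradiance(std::priority_queue<Found>& heap, float outIrradiance[3])
{
    outIrradiance[0] = outIrradiance[1] = outIrradiance[2] = 0.0f;
    if (heap.empty())
        return;

    const float r2 = heap.top().dist2;   // squared radius of the gather disc
    while (!heap.empty())
    {
        const Photon* p = heap.top().photon;
        heap.pop();
        for (int c = 0; c < 3; ++c)
            outIrradiance[c] += p->power[c];
    }

    // Guard against a degenerate zero radius, then divide flux by area.
    const float invArea = 1.0f / (3.14159265f * std::max(r2, 1e-8f));
    for (int c = 0; c < 3; ++c)
        outIrradiance[c] *= invArea;
}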

Actually, reconstructing a value from nearby sample points (kernel density estimation, to be technical) is still a largely unsolved problem in statistics. Solutions do exist (filtering is one of the most common), but they require large numbers of sample points to deliver good quality results. Research is being done to find better density estimation techniques, but it will probably be some time before it reaches photon mapping.


I've done a sort of "hacked" HDR in my research. Basically I store all colors as floating-point RGB triplets, and do some lazy filtering during the render process. Simply put, values on the interval [0.0, 1.0] are treated as standard RGB values (i.e. multiplied by 255, packed into a DWORD, and sent to the video buffer). Values > 1.0 are treated as exponents.

First I use a messy (and only half-effective) formula to convert these exponents down to a normal RGB triplet; that triplet is then DWORD-packed and stored in the video buffer. Next, I add 1.0 to each color value and exponentiate it by the original triplet value in each color channel. This value is clamped back into a specific range, scaled back onto the [0.0, 1.0] interval, and DWORD-packed into a secondary buffer. A post-process filter then blurs the secondary buffer (brighter values blur farther) and blends the result (again, brighter values dominate during blending) onto the original frame buffer, which is then rendered.

The result is a subtle but very effective HDR-style "glare" effect and some interesting color bleeding in brightly lit environments. Unfortunately the technique requires a lot more fine tuning and produces a lot of visual errors - and it is incredibly slow, which makes it a low priority for my current research, meaning I probably won't get the chance to fix it up properly for quite some time.

Well thanks for answering question 1, but that still leaves the 3 other questions mostly unanswered.

Some time ago, someone provided me with a very efficient and simple strategy for finding the n nearest photons, but I forgot about it.

As for HDR, what you suggest sounds awfully overcomplicated.

HDR is quite simple: you've got to use *some* tone-mapping function to map the HDR values (perhaps spectral, perhaps RGB) into values displayable on physical hardware. Your method would work (although it would look terrible in most scenes), and Apoch's hack works too, but it isn't physically based. You could try a simple exponential exposure function, or just output your renders into a packed RGBE format (like .hdr) and let some other program worry about the tone mapping (that route would also let you add physically based glare effects).

Quote:
Original post by JuNC
HDR is quite simple: you've got to use *some* tone-mapping function to map the HDR values (perhaps spectral, perhaps RGB) into values displayable on physical hardware. Your method would work (although it would look terrible in most scenes), and Apoch's hack works too, but it isn't physically based. You could try a simple exponential exposure function, or just output your renders into a packed RGBE format (like .hdr) and let some other program worry about the tone mapping (that route would also let you add physically based glare effects).


I'd like to convert them myself, so I can output displayable TGA files. I have the ideal data set for it, too...

I have pixels all stored as 3x32 bit floating point RGB(0.0-"infinity"), and I want to convert it all to 3x8 bit RGB (0-255).

Could try:

http://www.cs.nott.ac.uk/~qiu/hdri/fastmaphdri.html
A New Real Time Tone Mapping Model

But it really depends on whether you want to do it properly, or just quickly. If quickly then I strongly recommend exporting to .hdr and using HDRView which allows you to adjust the exposure while you're viewing (useful for debugging).

Failing that then try an exp exposure function:

(c is the input component, k is an exposure constant, try 100 to start, and c' is the output)

c' = 1.0 - exp ( -c * k )

Then scale/clamp the result to 255 (or whatever your output range is).
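As a quick sketch, that mapping for one float color component could look like this (my code, not from any particular library; k is the same exposure constant as above):

#include <cmath>

// Map one HDR color component on [0, infinity) to an 8-bit value using the
// exposure function c' = 1 - exp(-c * k).
unsigned char ToneMapComponent(float c, float k)
{
    float mapped = 1.0f - std::exp(-c * k);            // always lands in [0, 1)
    int byteValue = static_cast<int>(mapped * 255.0f + 0.5f);
    if (byteValue < 0)   byteValue = 0;                 // guard against negative input
    if (byteValue > 255) byteValue = 255;
    return static_cast<unsigned char>(byteValue);
}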

For tone mapping, I've found that the photographic tone reproduction method developed by Reinhard, Stark and Shirley works rather well. The paper describing that algorithm is available from the following URL:

Photographic Tone Reproduction for Digital Images.

Look into section 3.1 titled "Initial luminance mapping".

To obtain the world luminance for a pixel, I first convert the pixel values to Yxy; the world luminance is then the Y value. The operator computes a displayable luminance Ld from the scaled luminance L; Y is replaced by Ld, and the pixel is converted back to RGB.

Here is my implementation (lwhite and lblack are the maximum and minimum world luminances in the image, respectively).


real averageLum, lwhite, lblack;

// Convert every pixel to Yxy; this also returns the maximum (lwhite) and
// minimum (lblack) world luminances and the average of the log luminances.
image->ConvertToYxy(lwhite, lblack, averageLum);
averageLum = exp(averageLum);             // log-average luminance of the scene

// "Key value" a divided by the log-average luminance.
real scaleFactor = m_keyValue/averageLum;

// lwhite now holds Lwhite^2: the smallest scaled luminance that will be
// mapped to pure white (see the operator in section 3.1 of the paper).
lwhite *= scaleFactor;
lwhite *= lwhite;

// Tone map the image
for (uint i = 0; i < width; i++)
{
    for (uint j = 0; j < height; j++)
    {
        Color luminance = image->GetPixel(i, j);
        real L = scaleFactor*luminance[0];    // scaled luminance

        // Ld = L * (1 + L/Lwhite^2) / (1 + L)
        luminance[0] = L * (lwhite + L) / (lwhite + lwhite*L);

        image->SetPixel(i, j, luminance);
    }
}

image->ConvertFromYxy();
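For completeness: ConvertToYxy/ConvertFromYxy above hide the color-space conversion. If you have to write that part yourself, the per-pixel forward step is just the linear-RGB-to-CIE-XYZ matrix followed by the chromaticity normalization. This is a sketch of my own (not the code behind the calls above), and the coefficients assume sRGB/Rec. 709 primaries with a D65 white point:

// Linear RGB -> CIE Yxy for one pixel (sRGB / Rec. 709 primaries, D65 white point).
// The inverse recovers X = Y*x/y and Z = Y*(1 - x - y)/y before going back to RGB.
void RgbToYxy(const float rgb[3], float yxy[3])
{
    const float X = 0.4124f*rgb[0] + 0.3576f*rgb[1] + 0.1805f*rgb[2];
    const float Y = 0.2126f*rgb[0] + 0.7152f*rgb[1] + 0.0722f*rgb[2];
    const float Z = 0.0193f*rgb[0] + 0.1192f*rgb[1] + 0.9505f*rgb[2];

    const float sum = X + Y + Z;
    yxy[0] = Y;                                  // luminance
    yxy[1] = (sum > 0.0f) ? X / sum : 0.0f;      // chromaticity x
    yxy[2] = (sum > 0.0f) ? Y / sum : 0.0f;      // chromaticity y
}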


Thank you JuNC, I might just try that :)

I'm still looking for a quick way to find the n nearest photons to a point in the photon map, though, so if someone has ideas for that, I'd appreciate it.

Buy Henrik Wann Jensen's book (it's excellent and even includes source code that does exactly what you want).
If you don't want to get the book, just check out the photon mapping papers here:
http://graphics.ucsd.edu/~henrik/papers/

I haven't got much experience with photon mapping - never tried it - but why would you absolutely have to use a kd-tree?
Having a kd-tree with one single photon per leaf sounds quite huge, knowing that you can have a few hundred thousand photons stored, or even more.
Maybe some other data structure would be more appropriate?
Like a sphere tree: all you need to do is a sphere-to-sphere collision test, gather all the leaf spheres that touch your original sphere (center: the 3D point where you want to gather the luminance, radius: the maximum search radius), then get the n nearest photons with brute force (having something like 10 or 20 photons per leaf sphere?), or maybe just use all the photons in the colliding spheres, weighting their influence according to their distance to the original point of interest.
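For what it's worth, the overlap test that gather would hinge on is about as cheap as spatial tests get (a trivial sketch):

// Two spheres overlap when the distance between their centers is no larger
// than the sum of their radii (compare squared values to avoid the sqrt).
bool SpheresOverlap(const float centerA[3], float radiusA,
                    const float centerB[3], float radiusB)
{
    float dist2 = 0.0f;
    for (int i = 0; i < 3; ++i)
    {
        float d = centerA[i] - centerB[i];
        dist2 += d*d;
    }
    float r = radiusA + radiusB;
    return dist2 <= r*r;
}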

Dunno, just some random ideas... a Voronoi diagram might also be useful if you're really concerned about speed...
