Geometrian Posted July 28, 2012 (edited)

Hi,

I've written a standard path tracer, GPU accelerated with OpenCL. It fails for small area lights (and of course point light sources fail entirely). Using direct light sampling doesn't work, because that can't handle caustics correctly, if at all.

I want to write a bidirectional path tracer, but I'm not sure how. My understanding:

1: Two paths are generated--one from the light and one from the camera.
2: The vertices of the paths are connected in all possible ways to form a number of complete paths from the light to the camera (paths that pass through opaque objects are discarded). I.e., all paths that use some light path vertices, jump somewhere onto the camera path, and then use the rest of the camera path vertices.
3: Each path is evaluated by weighting the probability of each interaction occurring, producing a sample.
4: The sample is averaged into the final pixel's color.

I would like to make sure that the above is correct. Continuing on the assumption that it is:

In standard path tracing, I use importance sampling (e.g., for a diffuse surface I generate random rays with a cosine-weighted distribution, then just weight all samples equally). For bidirectional path tracing, I speculate that both the light subpath and the camera subpath could be generated in this way, but the interactions at both ends of the connecting step would need to be weighted manually by the interactions' PDFs (because you can't just choose the endpoints randomly).

Does this all sound good?

Thanks,
-G

Edited July 28, 2012 by Geometrian
Bacterius Posted July 29, 2012

[quote]I want to write a bidirectional path tracer, but I'm not sure how. My understanding:
1: Two paths are generated--one from the light and one from the camera.
2: The vertices of the paths are connected in all possible ways to form a number of complete paths from the light to the camera (paths that pass through opaque objects are discarded). I.e., all paths that use some light path vertices, jump somewhere on the camera path, and then use the rest of the camera path vertices.
3: Each path is evaluated by weighting the probability of each interaction occurring, producing a sample.
4: The sample is averaged into the final pixel's color[/quote]
That is essentially correct; just remember that you cannot use the camera position as a node for obvious reasons (that'd be one weird camera), so make sure you start at the first intersection on the camera path.

[quote]In standard path tracing, I use importance sampling (e.g., for a diffuse surface I generate random rays with a cosine weighted distribution, then just weight all samples equally). For bidirectional path tracing, I speculate that both the light subpath and the camera subpath could be generated in this way, but the interactions at both ends of the connecting step would need to be weighted manually (because you can't just choose the endpoints randomly) by the interactions' PDF.[/quote]
Exactly. This is also why you can't just use any intersection point on your light and camera paths as nodes: many materials simply cannot be handled in that way. For instance, perfectly specular and dielectric refractive materials essentially have no PDF (it's a Dirac function, i.e. zero everywhere except at a single infinitesimal point), so they cannot be importance sampled, since the probability of a ray randomly bouncing specularly to a given point is by definition zero.
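To make that concrete, here's a minimal sketch of how a renderer might flag which vertices are usable as connection endpoints (the `Vertex` struct and `can_connect` names are just illustrative, not from any particular renderer):

```c
#include <stdbool.h>

/* Hypothetical record for one subpath vertex. A perfectly specular or
   dielectric interaction has a Dirac delta PDF, so it cannot serve as the
   endpoint of a deterministic connection ray. */
typedef struct {
    bool is_specular; /* true for mirror/dielectric interactions */
} Vertex;

/* A light/camera vertex pair may be joined by a connection ray only if
   neither endpoint's BSDF is a delta distribution (a real renderer would
   also test the segment for occlusion separately). */
bool can_connect(const Vertex *a, const Vertex *b) {
    return !a->is_specular && !b->is_specular;
}
```

The connection loop then simply skips pairs for which `can_connect` is false, rather than trying to evaluate a zero-probability PDF.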
Geometrian Posted July 30, 2012 (edited)

Awesome, thanks!

I've implemented the bidirectional path tracer as described. It works in the sense that an image develops. However, the results are visually inaccurate:

[img]http://img819.imageshack.us/img819/6457/image1iu.png[/img]

Based on some thought and what I'm seeing, I think the problem is in the way the algorithm weights samples. Because the camera and light subpaths are importance sampled, they are not attenuated each bounce. Because there are far more paths of longer length than of shorter length, and the longer paths are not attenuated, the longer paths end up contributing more to the image.

EDIT: Actually, they should be attenuated by 1/pi for each bounce for diffuse surfaces. Hmmmm, that might do it.

EDIT-EDIT: After adding in an attenuation of 1/pi at each diffuse bounce, the image improves, but still looks similar. Ideas?

Thanks,
Ian

Edited July 30, 2012 by Geometrian
Bacterius Posted July 30, 2012 (edited)

You should weight your contributions so that the weights over all the subpaths sum to 1. The easy way out is to weight them uniformly by multiplying them by 1/n, where n is the number of subpaths sampled; the improved way is to give more weight to more important subpaths (another form of importance sampling). As long as the subpath weights sum to 1 and all weights are nonzero, the image will converge correctly (if the rest of the code is good), but a good sampling scheme will improve the convergence rate.

Edited July 30, 2012 by Bacterius
Geometrian Posted July 30, 2012

Yes, so that's my question: what's a working sampling scheme for bidirectional path tracing?

My current method tries to importance-sample the paths by simply choosing more likely paths, and then weighting all samples by 1.0. Then, for subpath connections, I compute the PDF of each surface and weight the paths overall by this. Pseudocode:

[source lang="c"]//All surfaces are 100% diffuse. No specular or refractions anywhere.
//Path from the light is NOT a random walk; directions are cosine weighted.
//Path from the camera is NOT a random walk; directions are cosine weighted.

function trace()
    accumulated = (0,0,0)
    for each possible subpath in the path from the light
        for each possible subpath in the path from the camera
            sample = (0,0,0)

            //Walk along the light subpath
            for each vertex of the subpath from the light (including the light)
                sample *= vertex.diffuse * (1/pi)
                sample += vertex.emission

            //Switch from the light subpath to the camera subpath
            sample *= PDF(connecting ray, last light subpath vertex)  //For diffuse surfaces, this is PDF(theta)=cos(theta)/pi
            sample *= PDF(connecting ray, last camera subpath vertex) //Likewise

            //Walk along the camera subpath (going backwards, because the
            //camera path was generated from the camera outward)
            for each vertex of the (reversed) subpath from the camera (excluding the camera)
                sample *= vertex.diffuse * (1/pi)
                sample += vertex.emission

            accumulated += sample
    return accumulated[/source]
This is clearly wrong in its weighting. How can it be fixed?

Thanks,
Ian
Geometrian Posted July 31, 2012

Most of the problem turned out to be a bug in the middle part of my code, where it switched from the light subpath to the camera subpath. The results looked much more plausible.

I added the weighting scheme suggested on [url="http://graphics.ucsd.edu/~iman/BDPT/"]this page[/url]: "The only criteria here is that all the weights for paths of a given length sum up to one". I take this to mean:

-Generate light and camera subpaths with importance sampling (i.e., generate more of the more probable paths).
-Walk through the light subpath. For each (diffuse) importance-sampled step, multiply by (1/pi), so as to normalize the interaction (I choose the rays with a cosine probability, but one needs to divide by pi to ensure that the probabilities sum to 1.0).
-For the traversal between paths, find the connecting ray, and multiply by the PDF that that ray was chosen from the light subpath point.
-By the principle of reciprocity, we can do the same thing in reverse: multiply by the PDF that the reversed ray was chosen from the camera subpath point.
-Walk through the camera subpath backwards, proceeding identically to walking the light subpath.
-This process comprises a sample.
-To weight the sample, the weights for all paths of a given length must add up to 1.0. To average, find out how many paths of a given length there are, then divide each path of that length by that total. I derive that if the light subpath and camera subpath are each N long, then the weighting for a sample from a path of length path_length (NOT including the final step from the surface to the camera) is 1/(N - abs(path_length - N)).
-Average many weighted samples together to derive a value for the pixel.

Does this seem like a valid weighting scheme? The results look correct to me. However, I did try comparing them with naïve path tracing, and unfortunately it seems that the algorithms give totally different results.
The OpenCL kernel chooses which algorithm to use for each pixel in the image below (path tracing on the left, bidirectional path tracing on the right):

[img]http://img641.imageshack.us/img641/648/image2gg.png[/img]

The lighting is fundamentally different in the path tracer. I think the bidirectional path tracer's looks more realistic, but I don't know. They both look too dark; the light emits (99.0,99.0,99.0).

The path tracer uses the importance-sampled camera subpath, and walks along it in just the same way as the latter part of the bidirectional path tracing algorithm (i.e., weighting by (1/pi) each diffuse bounce). Because all the paths are the same length, no extra weighting is happening.

So, is the bidirectional path tracer's weighting correct? Is the path tracer's weighting correct? If so, why don't they match?

Thanks,
Ian
Bacterius Posted July 31, 2012

Can you use some more samples for the unidirectional path tracing part to let it converge properly? It's hard to see through the speckles.

[quote]I added the weighting scheme suggested on this page: "The only criteria here is that all the weights for paths of a given length sum up to one".[/quote]
Yes, this is what I meant by "As long as the subpath weights sum to 1 and all weights are nonzero, the image will converge correctly (if the rest of the code is good), but a good sampling scheme will improve the convergence rate." [i]I recommend you get the naive weighting scheme up and running first against reference renders; then you can worry about using a better weighting scheme.[/i]

[quote]-Generate light and camera subpaths with importance sampling (i.e., generate more of the more probable paths).
-Walk through the light subpath. For each (diffuse) importance-sampled step, multiply by (1/pi), so as to normalize the interaction (I choose the rays with a cosine probability, but one needs to divide by pi to ensure that the probabilities sum to 1.0).[/quote]
Sounds good.

[quote]-For the traversal between paths, find the connecting ray, and multiply by the PDF that ray was chosen from the light subpath point.
-By the principle of reciprocity, we can do the same thing in reverse: multiply by the PDF that reversed ray was chosen from the camera subpath point.[/quote]
That is correct (be careful with the connecting ray's direction; sometimes you can get the sign wrong and it screws up your PDF computation).

[quote]-This process comprises a sample.
-To weight the sample, the weights for all paths of a given length must add up to 1.0. To average, find out how many paths of a given length there are, then divide each path of that length by that total.
I derive that if the light subpath and camera subpath are each N long, then the weighting for a sample from a path of length path_length (NOT including the final step from the surface to the camera) is 1/(N - abs(path_length - N)).[/quote]
Wait, no: if your subpath is N long, then the naive uniform weighting for each subpath sample should just be 1/N. You simply average each subpath contribution uniformly; the actual physical radiance weighting has already been done via the different PDF interactions at each bounce. From the same website:

[quote]For the Naive Bi Directional Path Tracing I used equal weights (1/n) for all the Wij of the same length.[/quote]

If your light source has a brightness of 99, then BDPT is definitely incorrect here; the ceiling should be way more illuminated (but I could be wrong: we would need an actual reference render, like smallpt's, as it's difficult to decide which render is correct if both are plausible).
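If it helps settle the 1/N versus 1/(N - abs(path_length - N)) question, the per-length counts are easy to sanity-check by brute force. This sketch (illustrative names, nothing renderer-specific) enumerates every way a full path of a given length can be split into s light vertices and t camera vertices, with each subpath capped at N vertices, and gives each split the uniform weight 1/count so the weights for that length sum to one:

```c
/* Number of (s, t) splits with s + t == path_length, where s vertices come
   from the light subpath and t from the camera subpath, and each subpath
   holds at most N vertices. Each split is one way of sampling the same
   full path. */
int count_splits(int path_length, int N) {
    int count = 0;
    for (int s = 1; s <= N; ++s) {
        int t = path_length - s;
        if (t >= 1 && t <= N)
            ++count;
    }
    return count;
}

/* Naive uniform weight for one sample of this path length: all splits of
   the same length get equal weight, and the weights sum to 1. */
double uniform_weight(int path_length, int N) {
    return 1.0 / (double)count_splits(path_length, N);
}
```

Whether the closed form for this count comes out as N - abs(path_length - N) or off by one depends on whether the camera vertex is included in path_length, so it's worth running something like this against your own length convention.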
Geometrian Posted July 31, 2012 (edited)

I had been comparing the renderer to SmallPT's scene (as best as I am able; double precision support is not yet implemented and my PT attempts to be more general). My path tracer does not give correct results. I suspect that I will not understand bidirectional path tracing weighting fully until I understand path tracing weighting. Let me try to get a simple path tracer's weighting scheme up and running. Let's go for the simplest possible scheme--this would be naïve sampling, yes?

The most basic path tracer ever would work like this, right?--with ALL the weightings:

1. Do a random walk from the camera. Just generate random samples on a hemisphere, and walk around hitting things. This produces an ordered list of collisions.
2. Set the sample color to (0,0,0).
3. Starting from the last point, walk backwards. At each point:
....1. Multiply the sample color by the diffuse color.
....2. Multiply the sample color by the PDF of the surface and the next ray (i.e., cos(theta)/pi).
....3. Add the emissive color.
4. Repeat 1-3. Average a bunch of sample colors at each pixel to get a final color.

Thanks so much for the help!
-G

Edited July 31, 2012 by Geometrian
Bacterius Posted July 31, 2012

[quote name='Geometrian' timestamp='1343722937' post='4964759']
The most basic path tracer ever would work like this, right?--with ALL the weightings:
1. Do a random walk from the camera. Just generate random samples on a hemisphere, and walk around hitting things. This produces an ordered list of collisions.
2. Set the sample color to (0,0,0).
3. Starting from the last point, walk backwards. At each point:
....1. Multiply the sample color by the diffuse color.
....2. Multiply the sample color by the PDF of the surface and the next ray (i.e., cos(theta)/pi).
....3. Add the emissive color.
4. Repeat 1-3. Average a bunch of sample colors at each pixel to get a final color
[/quote]
No. There is no PDF calculation in naive path tracing, as your rays are already randomly bouncing around (which is equivalent to a diffuse BRDF), so the PDF is inherently taken into account through that random sampling. In BDPT, you instead [b]force[/b] the connecting ray to go from one point to another, [color=#ff0000]so you need to normalize the consequences of that event to its probability of actually happening[/color] -- which is what the PDF does. This is assuming a random walk, of course; if you use importance sampling, you do need PDFs.

Does that make sense?
Geometrian Posted July 31, 2012 (edited)

[quote name='Bacterius' timestamp='1343723622' post='4964763']There is no PDF calculation in naive path tracing as your rays are already randomly bouncing around (which is equivalent to a diffuse BRDF), so the PDF is inherently taken into account through that random sampling.[/quote]
Are you sure? Diffuse [i]surfaces[/i] scatter light equally in all directions, but a single point scatters light with a cosine-weighted BRDF. When viewed from above, each point reflects strongly. When viewed at an angle, the reflectance is weaker by a factor of cos(theta), but there are more points per unit area by the same factor, so the reflectance appears the same. You get diagrams like: [url="http://upload.wikimedia.org/wikipedia/commons/b/bd/Lambert2.gif"]http://upload.wikime...bd/Lambert2.gif[/url]. Am I missing something?

[quote name='Bacterius' timestamp='1343723622' post='4964763']In BDPT, you instead [b]force[/b] the connecting ray to go from one point to another, [color=#ff0000]so you need to normalize the consequences of that event to its probability of actually happening[/color] - which is what the PDF does. This is assuming a random walk of course[/quote]
Yes, I believe this is what I did for the BDPT. I think in retrospect, though, that I should understand ordinary PT fully first.

Thanks,
-G

Edited July 31, 2012 by Geometrian
Bacterius Posted July 31, 2012

[quote name='Geometrian' timestamp='1343724483' post='4964768']
Are you sure? Diffuse surfaces scatter light equally in all directions, but a single point scatters light with a cosine-weighted BRDF. When viewed from above, each point reflects strongly. When viewed at an angle, the reflectance is weaker by a factor of cos(theta), but there are more points per unit area by the same factor, so the reflectance appears the same. You get diagrams like: [url="http://upload.wikime...bd/Lambert2.gif."]http://upload.wikime...d/Lambert2.gif.[/url] Am I missing something?
[/quote]
Actually, the diffuse BRDF is constant at 1/pi; the cos(theta) term is in fact universal. But you are right, I've been spending too much time around diffuse-only algorithms haha (where the diffuse PDF cancels out that term, so I forgot about it).

But I think I've found out where you went wrong: you should be dividing by the PDF, not multiplying (obviously, if an event has a 0.01 probability of occurring, its contribution should be scaled by 1/0.01 = 100, not 0.01). Did you read [url="https://docs.google.com/viewer?a=v&q=cache:GhHostE91rAJ:www.cs.dartmouth.edu/~fabio/teaching/graphics08/lectures/18_PathTracing_Web.pdf+path+tracing+pseudocode&hl=en&gl=nz&pid=bl&srcid=ADGEESjb-lVxfT156PyKViCB15yza5bDyywleDI7KmLZinI1YlYsFQ-dvDLu5V3tQHG4jDGe0zgYoNknp_wX-SwXS8dyV7AU2QoRJoH9GO9CeFcsIiAZL2dvu03XDXo4OmoOBpkGIhmS&sig=AHIEtbRSS_pvGnJiol_mgQoYKDTzOtZGDg"]this[/url]? (slide 9)

To be fair, I'm now confused. I had a lot of trouble understanding this in the past and it seems I've forgotten how it works, once again [img]http://public.gamedev.net//public/style_emoticons/default/sad.png[/img] I will need to read over it later. Sorry...
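One sanity check that may help in the meantime: for a Lambertian surface sampled with a cosine-weighted distribution, the per-bounce estimator factor brdf * cos(theta) / pdf collapses to exactly the albedo, which is why the PDF seems to "vanish" in diffuse-only tracers. A quick numeric sketch (function names are just illustrative):

```c
#include <math.h>

#define PI 3.14159265358979323846

/* Generic per-bounce estimator weight: the BRDF value times the geometric
   cosine, DIVIDED by the PDF of the sampled direction. */
double bounce_weight(double brdf, double cos_theta, double pdf) {
    return brdf * cos_theta / pdf;
}

/* For a Lambertian surface sampled with a cosine-weighted distribution,
   brdf = albedo/pi and pdf = cos(theta)/pi, so the weight is just the
   albedo regardless of the sampled angle. */
double diffuse_cosine_weight(double albedo, double cos_theta) {
    return bounce_weight(albedo / PI, cos_theta, cos_theta / PI);
}
```

The same cancellation is what makes the naive cosine-sampled walk look PDF-free; the division is still there, it just reduces to 1/albedo-free form analytically.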