I've written a standard path tracer, GPU accelerated with OpenCL. It fails for small area lights (extreme variance), and point light sources fail entirely. Adding direct light sampling doesn't solve it either, because next-event estimation can't handle caustics correctly, if at all.
I want to write a bidirectional path tracer, but I'm not sure how. My understanding:
1: Two subpaths are generated, one starting from a light and one from the camera.
2: The vertices of the two subpaths are connected in all possible ways to form a set of complete paths from the light to the camera (connections blocked by opaque objects are discarded). I.e., every path that uses some prefix of the light subpath's vertices, jumps to some vertex on the camera subpath, and then follows the rest of the camera subpath back to the camera.
3: Each complete path is evaluated, weighted by the probability of the interactions along it occurring, producing a sample.
4: The samples are averaged into the final pixel's color.
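To make sure I have the control flow right, here's a toy scalar sketch in Python of steps 1-4. Everything here is made up for illustration (Vertex, trace_subpath, the "throughput" arithmetic, the connection factor): it's not my OpenCL code and not a real renderer, just the loop structure I think BDPT has.

```python
import random
from dataclasses import dataclass

@dataclass
class Vertex:
    throughput: float  # toy scalar standing in for accumulated BSDF * cos / pdf

def trace_subpath(rng, max_depth):
    # Toy subpath tracer: each bounce multiplies the throughput by a random
    # "albedo". A real tracer would intersect scene geometry and sample a BSDF.
    verts, tp = [], 1.0
    for _ in range(max_depth):
        tp *= 0.5 + 0.4 * rng.random()   # stand-in for BSDF * cos / pdf
        verts.append(Vertex(tp))
        if rng.random() < 0.3:           # Russian-roulette-style termination
            break
    return verts

def connect(lv, cv):
    # Toy connection term: product of the two endpoint throughputs times a
    # stand-in constant. Real BDPT evaluates the BSDFs at both endpoints,
    # a geometry term, and a visibility (shadow ray) test here.
    return lv.throughput * cv.throughput * 0.1

def bdpt_sample(rng, max_depth=4):
    light_verts  = trace_subpath(rng, max_depth)   # step 1: light subpath
    camera_verts = trace_subpath(rng, max_depth)   # step 1: camera subpath
    total, n_paths = 0.0, 0
    for lv in light_verts:                         # step 2: all connections
        for cv in camera_verts:
            total += connect(lv, cv)               # step 3: evaluate each path
            n_paths += 1
    return total, n_paths

def render_pixel(seed, n_samples=100):
    rng = random.Random(seed)
    # step 4: average the samples into the pixel
    return sum(bdpt_sample(rng)[0] for _ in range(n_samples)) / n_samples
```

The nested loop is the part I most want confirmed: with s light vertices and t camera vertices, it evaluates all s*t connection strategies per iteration.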
I would like to make sure that the above is correct. Continuing on the assumption that it is:
In standard path tracing, I use importance sampling (e.g., for a diffuse surface I generate random rays with a cosine-weighted distribution, so the cosine and the PDF cancel and all samples end up weighted equally). For bidirectional path tracing, I speculate that both the light subpath and the camera subpath can be generated the same way, but that the scattering at the two endpoints of the connecting edge would need to be weighted manually, by evaluating the BSDF for the fixed connection direction, because the connection direction isn't randomly sampled and so there's no sampling PDF to cancel against.
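Concretely, here's the distinction I mean, again as a hedged Python sketch. The only real math here is the Lambertian BRDF (albedo/pi) and the cosine-weighted PDF (cos/pi); the function names are mine and the geometry term and visibility test are omitted.

```python
import math
import random

def cosine_sample_hemisphere(rng):
    # Cosine-weighted direction about the +z normal; pdf(w) = cos(theta) / pi.
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1)))

def sampled_bounce_weight(albedo):
    # When the bounce direction IS importance sampled, the weight is
    #   f * cos / pdf = (albedo/pi) * cos / (cos/pi) = albedo,
    # i.e., the cosine and the pi cancel, which is why every sample can be
    # "weighted equally" in ordinary path tracing.
    return albedo

def connection_weight(albedo_a, cos_a, albedo_b, cos_b):
    # At the connecting edge the direction is NOT sampled, so nothing
    # cancels: evaluate f * cos explicitly at both endpoints.
    # (Real code would also multiply a 1/r^2 geometry term and shoot a
    # shadow ray for visibility.)
    return (albedo_a / math.pi) * cos_a * (albedo_b / math.pi) * cos_b
```

So the subpath vertices carry their usual importance-sampled weights, and only the single connecting edge gets this explicit BSDF-times-cosine evaluation. Is that the right mental model?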
Does this all sound right? Thanks,
Edited by Geometrian, 28 July 2012 - 03:01 AM.