Posted by Arjan B, 12 February 2014 · 622 views
Two of the subjects I'm following at the university at the moment are Additional Component Computer Graphics and Algorithms for Massive Data. The first pretty much lets you propose a graphics-related project, while the second focuses on processing large amounts of data or computations efficiently. So, of course, my proposal was to implement a path tracer with CUDA. This proposal was accepted for both subjects! So now I'll be working on something I would be working on anyway, but now with a friend, and I get credit for two courses while I'm at it. Yay!
With the knowledge gained from implementing a simple path tracer, surfing the internet and watching lectures about global illumination on YouTube (http://www.youtube.com/playlist?list=PLslgisHe5tBPckSYyKoU3jEA4bqiFmNBJ), I decided to start over with an empty project and set up a clean code structure. So far, the new version has:
- Working indirect illumination, taking things such as the material's BRDF and the probability of sampling a given ray into account
- Cosine-weighted sampling for diffuse materials
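For the curious, cosine-weighted hemisphere sampling can be sketched roughly like this (a CPU-side sketch, not our actual code; directions are expressed in a local frame with the surface normal along +Z, and all names are mine):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Cosine-weighted sample on the unit hemisphere around +Z.
// pdf(direction) = cos(theta) / pi, so for a diffuse BRDF (albedo / pi)
// the cosine term and the pdf cancel in the estimator, leaving just albedo.
Vec3 cosineSampleHemisphere(double u1, double u2) {
    const double PI = 3.14159265358979323846;
    double r = std::sqrt(u1);    // radius on the unit disk
    double phi = 2.0 * PI * u2;  // angle on the disk
    // Sample the unit disk uniformly, then project up onto the
    // hemisphere (Malley's method).
    return {r * std::cos(phi), r * std::sin(phi),
            std::sqrt(std::max(0.0, 1.0 - u1))};
}
```

The nice part is exactly that cancellation: with this pdf, the diffuse throughput update needs no cosine or division at all.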
I'm currently trying to implement a direct lighting approach. At every ray-object intersection point p, I sample a random point on each of the lights and calculate their contribution to the lighting of point p. Given that I have not implemented any kind of fall-off of a light's intensity over distance, an image generated with just direct lighting appears far too bright. As a result, the combination of direct and indirect lighting at every intersection is dominated by the (incorrect) direct lighting. This should be clearly visible in the gallery album included with this post. Or at least, I hope so; I've never uploaded one before.
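For reference, a one-sample direct-lighting estimate for an area light can be sketched as follows (all names, types, and parameters are mine, not our actual code; the BRDF is assumed diffuse and the light is sampled uniformly over its area, so the pdf is 1/area). The 1/d² factor in the geometry term is the distance fall-off that my version currently leaves out:

```cpp
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& b) const { return {x - b.x, y - b.y, z - b.z}; }
};

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
double length(const Vec3& v) { return std::sqrt(dot(v, v)); }

// One-sample estimate of direct light at p from a point q sampled
// uniformly on an area light (pdf = 1/area). Without the 1/d^2 term,
// nearby and distant lights contribute equally, which over-brightens
// the image.
double directLight(const Vec3& p, const Vec3& n,       // shading point + normal
                   const Vec3& q, const Vec3& lightN,  // light sample + its normal
                   double emitted, double area,        // light radiance and area
                   double brdf,                        // diffuse BRDF: albedo / pi
                   bool visible)                       // shadow-ray result
{
    if (!visible) return 0.0;
    Vec3 toLight = q - p;
    double d = length(toLight);
    Vec3 wi = {toLight.x / d, toLight.y / d, toLight.z / d};
    double cosP = std::max(0.0, dot(n, wi));
    double cosQ = std::max(0.0, -dot(lightN, wi));
    double G = cosP * cosQ / (d * d);   // geometry term with 1/d^2 fall-off
    return brdf * emitted * G * area;   // dividing by pdf = multiplying by area
}
```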
However, I'm wondering whether implementing fall-off of the light's intensity will "fix" this, since it will also make the indirect lighting image even darker, so it contributes even less compared to the direct lighting image. But I guess we will see how it all works out.
An optimization to reduce noise might be to sample the light's shape as projected onto the unit hemisphere around the normal, instead of sampling just some random point on the light's surface. For a given point p, a large fraction of the points on, for example, a sphere light are occluded by the sphere itself, so those samples contribute nothing. Projected sampling would lead to far fewer wasted samples.
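For a sphere light, that projection amounts to sampling uniformly inside the cone of directions the sphere subtends from p. A sketch of what I have in mind (all names are mine, and the pdf bookkeeping is noted in a comment rather than returned):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(const Vec3& v) {
    double l = std::sqrt(dot(v, v));
    return {v.x / l, v.y / l, v.z / l};
}

// Sample a direction uniformly inside the cone that a sphere light
// subtends as seen from the shading point. Every sample points at the
// visible side of the sphere, so none are wasted on the occluded back.
// axis:      unit vector from the shading point toward the sphere center
// sinMaxSq:  (radius / distance-to-center)^2
// u1, u2:    uniform randoms in [0, 1)
Vec3 sampleCone(const Vec3& axis, double sinMaxSq, double u1, double u2) {
    const double PI = 3.14159265358979323846;
    double cosMax = std::sqrt(std::max(0.0, 1.0 - sinMaxSq));
    double cosTheta = 1.0 - u1 * (1.0 - cosMax);  // uniform in [cosMax, 1]
    double sinTheta = std::sqrt(std::max(0.0, 1.0 - cosTheta * cosTheta));
    double phi = 2.0 * PI * u2;
    // Build an orthonormal frame around the cone axis.
    Vec3 up = std::fabs(axis.z) < 0.999 ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
    Vec3 t = normalize(cross(up, axis));
    Vec3 b = cross(axis, t);
    double x = sinTheta * std::cos(phi), y = sinTheta * std::sin(phi);
    return {t.x * x + b.x * y + axis.x * cosTheta,
            t.y * x + b.y * y + axis.y * cosTheta,
            t.z * x + b.z * y + axis.z * cosTheta};
    // pdf = 1 / (2 * pi * (1 - cosMax)); divide the contribution by this.
}
```

Whether the variance win justifies the extra math per sample is something I'd want to measure rather than guess.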
After direct lighting is fixed, the next thing on the agenda is... CUDA! In the next entry, I'm hoping to show off some slick videos of how awesomely fast and interactive our path tracer will be. Stay tuned!