
Lambda Spectral Path Tracer by Bacterius


Time Spent: 1 month
Date Added: Oct 18 2012 07:26 PM

A physically based spectral path tracer written in C++. It is still largely a work in progress, but quite functional, as illustrated above: the image features physically correct Fresnel-weighted refractions and reflections, as well as diffuse interreflection ("color bleeding").

The path tracer uses an optimized bounding volume hierarchy to accelerate ray tracing, allowing it to handle scenes well into the tens of millions of objects with only a logarithmic performance cost. It also scales easily across cores through OpenMP multithreading.

The code is freely available on GitHub under the MIT license.

Feedback and/or contributions are very welcome!  
Code::Blocks w/ GCC 4.6.
Delphi 6.




Looks great! How long did it take to render this image?
I second: this looks impressive! What's the render time for this image?
Being spectral, I suppose it supports dispersion? It would be great if you put up a screenshot of a prism or diamond exhibiting this feature. Any plans to port it to the GPU? (CUDA, OpenCL, or DirectCompute)
Yeah, that's pretty epic.
Hello, thanks guys! The render time for this one was 12h34m5s, mainly because of the amount of glass involved (the BRDFs aren't very optimized as of now). Images without glass take a more reasonable 2-4 hours to render.

But that's with unidirectional path tracing; I expect to need far fewer samples once I get around to implementing bidirectional path tracing.

Florent: indeed, the tracer supports dispersion. However, it is only visible with very small light sources (otherwise dispersion occurs all over the place and the result is a net white), which standard path tracing has trouble sampling. I really want to make such an image as soon as BDPT is up.

As for GPU support, it would be difficult, mainly because of the amount of abstraction in the code, which doesn't map well to the GPU. It might be doable in CUDA thanks to its extra features, but I don't have an nVidia graphics card. An interesting idea to keep in mind, though.

I've added a new render (the previous, low-resolution version took 2h27m49s).
Ahh ffs. That -1 was an accident.
Impressive. How about making a 5 minute fully rendered video in 3D?

Just kidding :))
Very nice!
Great work.

Looking forward to seeing more :)
Good luck!
So realistic.

So, could you get a rainbow by shooting a ray of light through a prism?
@slayemin: Essentially, yes. But there are some problems: in particular, the light beam must be very small to clearly see the rainbow, and since right now the path tracer only supports forward rendering (from the eye), it is too difficult to sample such small light sources. With bidirectional path tracing it will work (when I get time to implement it).
The other fun part is decomposing light into its components so that you can apply Snell's law of refraction per wavelength. It would increase your rendering time by a lot, but that can be alleviated by running multiple threads or distributing the render jobs across multiple computers.

When I wrote my proof of concept, I defined light as a series of wavelengths, then looped through each wavelength and matched it to a color table. That way you could have pure white light, use chemical light sources like argon, neon, or sodium lamps, and simulate the spectral diffusion. Cool stuff! I can send you details if you're interested.
@slayemin: that's what the code is doing, actually! The color-matching curve is in cie.cpp, and for each sample I iterate over every wavelength in the visible range at 5nm intervals.
Looks very good. :) How many samples did you use for those images? Hours seems a tad long, even for those BRDFs; are you using explicit light paths? How many rays/s?
@Guest: 512/1024 samples were used for each of those, and each sample includes 81 wavelength samples. I don't use explicit light paths yet (which explains the high sample count), only implicit ones. As for rays per second, I never really measured it; it only makes sense when you're targeting realtime, I guess, but you can derive it from the time taken. For the cup picture, for instance, I averaged approximately 1.15M rays/s (i.e. ~300k per thread). Not very optimized yet...