Ray Tracing Article Series

Posted by Bacterius, 18 January 2014 · 849 views

Hello,
I'm planning to write a series of ray tracing articles, focusing on both theory and implementation. Each will be accompanied by a working implementation showing the topics covered, some pictures, and references to consult, though the main goal is to convey various ray tracing ideas and techniques intuitively, so that readers can implement them on their own in parallel. It won't be "here, dump this code in this method and compile": the approach will be hands-on, with some code when discussing renderer design, but mostly theory and discussion.

The code itself will be in C#/Mono. Why C#? First, because it's an elegant language that I found well-suited to a variety of tasks, including most of what a ray tracer has to do. It is also much less complex than C++, both syntactically and in terms of low-level features, and tends to be more readable on average, especially for algorithms that need not be cutting-edge fast for this particular project but should be easy to grasp from the code. It will also probably appeal a bit more to the non-C++ crowd. In any case, I do not plan on spending too much time on acceleration structures for ray-scene intersection (which probably deserve a series of their own), so I'll briefly go over how they work in part 4 and then defer to the Embree library, which means the code will remain plenty fast enough for ray tracing.

This is the planned roadmap, though it will probably change over time:

Part 1: Introduction, basic Whitted ray tracing theory (draw little spheres and output a pixel map to a file)
Part 2: Start working on materials and lay the groundwork for introducing the BRDF later on, also add triangles
Part 3: Extend the program to have an actual, graphical window, and use the mouse/keyboard to move around the world, talk a little bit about cameras, what they are (in a typical renderer) and how they can be implemented
Part 4: Consolidate our budding renderer to handle large triangulated meshes, add a model/mesh system, talk about BVHs/kd-trees/octrees and pull in Embree to start rendering millions of triangles, work out a better design to represent our geometry primitives (spheres? triangles? both? we'll see), and add texture support as well as the ability to build scenes from a list of models and load them at runtime
Part 5: Introduce BRDFs and abstract our material system to handle arbitrary BRDFs
Part 6: Generalize the previously discussed BRDFs to transparent materials, because we can, and start hinting at a more advanced multi-bounce rendering algorithm
Part 7: Introduce the path tracing algorithm and Russian roulette, discuss the weaknesses of a naive implementation, and add direct lighting
Part 8: Interlude on atmospheric scattering: compare it with the BRDF, render some pretty fog pictures, cover the Beer-Lambert law, etc., and connect this with subsurface scattering and scattering in general
Part 9: Introduce photon mapping, discuss how we can abstract photon mapping, path tracing, and ray tracing under a common interface for our renderer to use, and implement a photon mapping renderer
Part 10: Compare the photon mapping and path tracing algorithms to come up with the bidirectional path tracing algorithm, implement a version of it, and compare the results
Part 11: Talk about color, color systems, gamuts, spectral rendering, implement dispersion in our renderer
Part 12: Tidy up the renderer, finish the article series and conclude on everything we've covered + extra stuff to look at if you want to keep going
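To give a feel for where Part 1 starts, here is a minimal sketch of the core step of a Whitted-style tracer, the ray-sphere intersection test. It's in plain Python rather than the C# the series will use, the function name `intersect_sphere` is made up for illustration, and it assumes the ray direction is normalized:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t, which is
    a quadratic in t (direction is assumed to be normalized, so a = 1).
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # try the nearer root first
    if t > 1e-6:
        return t
    t = (-b + math.sqrt(disc)) / 2.0  # origin may be inside the sphere
    return t if t > 1e-6 else None
```

Casting one such ray per pixel and shading the nearest hit is essentially all "draw little spheres and output a pixel map" amounts to.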

Please do voice your opinions about how you would prefer the articles to be. I've tried to make them small enough that no single article is overwhelming to read, while not ending up with a 100-part series, but if you feel some are too sparse or too condensed I'd be glad to rearrange. If you want to see anything that I haven't covered above, tell me in the comments, so I know to incorporate it into the series if it's reasonable to do so.

I'll probably write the articles at a rate of about one every 1-2 months, depending on how long it takes (the first few will probably be quick to write, grinding down to a slow crawl around the end). Posting too many at once would be counterproductive and annoying anyway.




Sounds great, looking forward to it! :)

Is the idea to focus on a classical ray tracer, CPU only? Or will you cover GPGPU techniques too?

 

Out of interest, where do modern PCs stand on CPU-ray tracing performance? I always believed it was too slow for realtime, but that was probably 10 years ago when I last looked into it. PCs are far faster, but on the other hand people expect to render at much higher resolutions and AFAIK ray-tracing performance is closely linked to the number of pixels, which is bad since doubling your resolution quadruples the pixel count!

I am not sure whether to cover GPGPU as well. I've written a GPU path tracer before, but I can't say it was particularly fast or elegant. If there's time I might slip in an article about GPGPU, its strengths and weaknesses, and tips on how to optimize a GPU implementation compared to a CPU one, but I don't think there will be any GPGPU implementation this time (besides, I can't use CUDA, and OpenCL isn't exactly nice to code against).

 

CPU ray tracers are actually quite competitive with even the best GPU ray tracers: SIMD instructions can be used to cast multiple rays simultaneously, and extra cores help as well. I'd say a well-optimized CPU renderer is only 4 to 5 times slower than a GPU ray tracer, and it is considerably easier to write and extend. As for performance, that's true: CPUs can render a basic Phong environment in realtime at decent resolutions (a big issue being the CPU-GPU bottleneck in displaying the rendered image, which puts a hard limit on the framerate achievable at a given resolution). Once you look into the photorealistic algorithms things start getting somewhat slow, but it's still largely fast enough that you can move the camera around without waiting minutes for something to come up.
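The "multiple rays simultaneously" idea is just intersecting a whole packet of rays in one pass over the arithmetic, which is what SIMD lanes do in hardware. As a loose analogy (in NumPy rather than actual SIMD intrinsics or Embree's packet API, with a made-up name `intersect_spheres_batch`), here is one sphere tested against N rays at once:

```python
import numpy as np

def intersect_spheres_batch(origins, directions, center, radius):
    """Intersect N rays against one sphere in a single vectorized pass.

    origins, directions: (N, 3) arrays, directions normalized.
    Returns an (N,) array of hit distances, np.inf for misses --
    roughly how SIMD lanes each carry one ray of a packet.
    """
    oc = origins - center                           # (N, 3)
    b = 2.0 * np.einsum('ij,ij->i', directions, oc) # per-ray dot products
    c = np.einsum('ij,ij->i', oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    hit = disc >= 0.0
    sqrt_disc = np.sqrt(np.where(hit, disc, 0.0))   # avoid sqrt of negatives
    t_near = (-b - sqrt_disc) / 2.0
    t = np.full(len(origins), np.inf)
    valid = hit & (t_near > 1e-6)
    t[valid] = t_near[valid]
    return t
```

This is not how Embree works internally, of course, but it shows why batching rays amortizes the intersection math so well.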

 

Generally these algorithms are designed so that you can tell them to stop (or pause/resume) at any time, so you can display the progress so far as often as you need and keep rendering in the background.
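That pause/display pattern boils down to accumulating samples and exposing the running average after each pass. A minimal sketch (Python for brevity rather than the series' C#; `sample_pixel` is a hypothetical callback returning one noisy estimate of a pixel's brightness):

```python
def render_progressive(width, height, sample_pixel, passes):
    """Accumulate one sample per pixel per pass, yielding the running
    average after every pass so the caller can display the image so
    far, or simply stop iterating, while rendering continues."""
    accum = [[0.0] * width for _ in range(height)]
    for n in range(1, passes + 1):
        for y in range(height):
            for x in range(width):
                accum[y][x] += sample_pixel(x, y)
        # running average = current best estimate of the final image
        yield [[v / n for v in row] for row in accum]
```

Because it's a generator, the caller decides when to draw a frame and when to abandon the render; the estimate just keeps sharpening as long as you keep pulling passes.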

Teach me how to do this

 

http://youtu.be/5U1EcI_1HiU?t=1m12s


Sure, that's scattering and perhaps some lens flare effects. I might add some notes on lens flares even though they're not strictly ray tracing, or more generally on the geometry of a camera's optical system.

Make it really easy with code I can use. Thanks :)

It will be weird to receive explanations from a cat, but it sounds interesting :D
