Hello,

I'm planning to write a series of ray tracing articles, focusing both on theory and on implementation. Each will be accompanied by a working implementation showing the topics covered, some pictures, and references to consult, though the main goal is to convey various ray tracing ideas and techniques intuitively enough for readers to implement them on their own in parallel. Therefore it won't be "here, dump this code in this method and compile"; the approach will be hands-on, with some code when discussing renderer design, but mostly theory and discussion.

The code itself will be in C#/Mono. Why C#? First, because it's an elegant language, and I've found it well-suited to a variety of tasks, including most of what a ray tracer has to do. It is also much less complex than C++, both syntactically and in terms of low-level features, and tends to be more readable on average, especially for algorithms that need not be cutting-edge fast for this particular project but should be easy to grasp from the code. It will also probably appeal a bit more to the non-C++ crowd. In any case, I do not plan on spending much time on acceleration structures for ray-scene intersection (they probably deserve a series of their own), so I'll briefly go over how they work in Part 4 and then defer to the Embree library, which means the code will remain plenty fast enough for ray tracing.

This is the planned roadmap, though it will probably change over time:

Part 1: Introduction, basic Whitted ray tracing theory (draw little spheres and output a pixel map to a file)

Part 2: Start working on materials and lay the groundwork for introducing the BRDF later on, also add triangles

Part 3: Extend the program to have an actual graphical window, use the mouse/keyboard to move around the world, and talk a little bit about cameras: what they are (in a typical renderer) and how they can be implemented

Part 4: Consolidate our budding renderer to handle large triangulated meshes: add a model/mesh system, talk about BVHs/kd-trees/octrees and pull in Embree to start rendering millions of triangles, work out a better design for our geometry primitives (spheres? triangles? both? we'll see), and add texture support, as well as the ability to build scenes from a list of models and load them at runtime

Part 5: Introduce BRDFs and abstract our material system to handle arbitrary BRDFs

Part 6: Generalize the previously discussed BRDFs to transparent materials, because we can, and start hinting at a more advanced multi-bounce rendering algorithm

Part 7: Introduce the path tracing algorithm and Russian roulette, discuss the weaknesses of a naive implementation, and add direct lighting

Part 8: Interlude on atmospheric scattering: compare it with the BRDF, render some pretty fog pictures, cover the Beer-Lambert law, and connect this with subsurface scattering and scattering in general

Part 9: Introduce photon mapping, discuss how we can abstract photon mapping, path tracing, and ray tracing under a common interface for our renderer to use, and implement a photon mapping renderer

Part 10: Compare the photon mapping and path tracing algorithms to arrive at the bidirectional path tracing algorithm, implement a version of it, and compare the results

Part 11: Talk about color, color systems, gamuts, and spectral rendering, and implement dispersion in our renderer

Part 12: Tidy up the renderer, finish the article series, and conclude with everything we've covered, plus extra stuff to look at if you want to keep going
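As a small taste of what Part 1 will cover, here's a rough sketch of ray-sphere intersection in C#, which boils down to solving a quadratic in the ray parameter t. Every name below is a placeholder for illustration, not the renderer's eventual design:

```csharp
using System;

struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
    public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
    public static double Dot(Vec3 a, Vec3 b) => a.X * b.X + a.Y * b.Y + a.Z * b.Z;
}

static class Sphere
{
    // Returns the distance along the ray to the nearest hit, or null on a miss.
    // Solves |origin + t * dir - center|^2 = radius^2 for t, assuming dir is normalized.
    public static double? Intersect(Vec3 origin, Vec3 dir, Vec3 center, double radius)
    {
        Vec3 oc = origin - center;
        double b = 2 * Vec3.Dot(oc, dir);
        double c = Vec3.Dot(oc, oc) - radius * radius;
        double disc = b * b - 4 * c;
        if (disc < 0) return null;                 // no real roots: the ray misses
        double t = (-b - Math.Sqrt(disc)) / 2;     // nearest of the two roots
        return t > 0 ? t : (double?)null;          // the hit must be in front of the origin
    }
}
```

Loop this over every pixel's ray and you already have a picture of a sphere; the series will build everything else on top of that idea.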

Please do voice opinions about how you would prefer the articles to be. I've tried to make each one small enough that no single article is overwhelming to read, while not ending up with a 100-part series, but if you feel some are too sparse or too condensed, I'd be glad to rearrange. If you want to see anything that I haven't covered above, tell me in the comments, so I know to incorporate it into the series if it's reasonable to do so.

I'll probably write the articles at a rate of about one every 1-2 months, depending on how long it takes (the first few will probably be quick to write, grinding down to a slow crawl around the end). Posting too many at once would be counterproductive and annoying anyway.