
Rendering Is Fun

Ray Tracing Article Series

Posted 18 January 2014 · 1,948 views

Hello,
I'm planning to write a series of ray tracing articles, focusing both on theory and on implementation. Each will be accompanied by a working implementation demonstrating the topics covered, some pictures, and references to consult, though the main goal is to convey various ray tracing ideas and techniques intuitively enough for readers to implement them on their own in parallel. So it won't be "here, dump this code in this method and compile"; the approach will be hands-on, with some code when discussing renderer design, but mostly theory and discussion.

The code itself will be in C#/Mono. Why C#? First, because it's an elegant language, and I have found it well-suited to a variety of tasks, including most of what a ray tracer has to do. It is also much less complex than C++ syntactically and in terms of low-level features, and tends to be more readable on average, especially for algorithms that need not be cutting-edge fast for this particular project but should be easy to grasp from the code. It will also probably appeal a bit more to the non-C++ crowd. In any case, I do not plan on spending too much time on acceleration structures for ray-scene intersection (which probably deserve a series of their own), so I'll briefly go over how they work in part 4 and then defer to the Embree library, which means the code will remain plenty fast enough for ray tracing.

This is the planned roadmap, though it will probably change over time:

Part 1: Introduction, basic Whitted ray tracing theory (draw little spheres and output a pixel map to a file)
Part 2: Start working on materials and lay the groundwork for introducing the BRDF later on, also add triangles
Part 3: Extend the program to have an actual, graphical window, and use the mouse/keyboard to move around the world, talk a little bit about cameras, what they are (in a typical renderer) and how they can be implemented
Part 4: Consolidate our budding renderer to handle large triangulated meshes; add a model/mesh system; talk about BVH's/kd-trees/octrees and pull in Embree to start rendering millions of triangles; work out a better design to represent our geometry primitives (spheres? triangles? both? we'll see); and add texture support, as well as the ability to build scenes from a list of models and load them at runtime
Part 5: Introduce BRDF's and abstract our material system to handle arbitrary BRDF's
Part 6: Generalize the previously discussed BRDF's to transparent materials, because we can, and start hinting at a more advanced multi-bounce rendering algorithm
Part 7: Introduce the path tracing algorithm, Russian roulette, discuss the weaknesses of a naive implementation and add direct lighting
Part 8: Interlude on the topic of atmospheric scattering, compare with the BRDF and render some pretty fog pictures, Beer-Lambert law, etc.; connect this with subsurface scattering and scattering in general
Part 9: Introduce photon mapping, and discuss how we can abstract both photon mapping, path tracing, and ray tracing under a common interface for our renderer to use, and implement a photon mapping renderer
Part 10: Compare the photon mapping and the path tracing algorithm to come up with the bidirectional path tracing algorithm, and implement a version of it, compare the results
Part 11: Talk about color, color systems, gamuts, spectral rendering, implement dispersion in our renderer
Part 12: Tidy up the renderer, finish the article series and conclude on everything we've covered + extra stuff to look at if you want to keep going

Please do voice your opinions about how you would prefer the articles to be. I've tried to make each one small enough that it isn't overwhelming to read, while not ending up with a 100-part series, but if you feel some are too sparse or too condensed I'd be glad to rearrange them. If you want to see anything that I haven't covered above, tell me in the comments, so I know to incorporate it into the series if it's reasonable to do so.

I'll probably write the articles at a rate of about one every 1-2 months, depending on how long it takes (the first few will probably be quick to write, grinding down to a slow crawl around the end). Posting too many at once would be counterproductive and annoying anyway.


Lens Flares: We're in business [demo included]

Posted 06 October 2013 · 3,386 views

Hello everyone,
it's been a long time since my last update on lens flare rendering. Too long, arguably, since most of the work I am about to unveil in this journal entry was completed in the past two weeks or so, once everything clicked into place. First of all, here is the demo (File->Download), which requires Windows along with a DX11-capable graphics card, as the demo makes extensive use of the DX11 implementation of the Fast Fourier Transform (ID3DX11FFT). It comes bundled with the latest build of SlimDX straight from the SVN repository, which you will need to run the demo (you cannot use the Jan 2012 runtime, as it has a critical bug that renders the FFT implementation unusable; the bug was fixed in late 2012). It also comes with a set of hand-drawn apertures which you can play around with. Please let me know if the program does not work for you when it should, so I can fix it.

How to use the demo

The tech demo is fairly self-contained and simple to use. Upon starting the program, you'll be asked to select the aperture you want to use; pick whichever one you prefer from the apertures folder. At this point the aperture has been preprocessed and is ready to use. You are given four possible views (types of display):

Aperture Transmission Function: simply displays the aperture you are using, and formally represents the amount of light transmitted through each point of the aperture (white = all light passes through, black = light is blocked). The Load Aperture button lets you choose another aperture (other settings will be maintained).

Aperture Convolution Filter: shows the "convolution filter", which is essentially the distribution of diffracted light around a central beam of light. For most apertures, most of the light diffracts near the center, with the bulk of it remaining unperturbed exactly at the central pixel (this display is tonemapped and the brightness scale is not linear). This is an RGB image, with each channel representing a different diffraction distribution; red tends to be diffracted farther away than green or blue because it has a longer wavelength.

Original Frame Animation: shows a simple synthetic scene, which is what you would observe if light did not diffract (note I did not bother with anti-aliasing).

Convolved Frame: shows the same scene as above, but with diffraction effects added in.

There are a few configuration options:

Observation plane distance: this represents, in some sense, the distance between the aperture and the sensor which collects the diffracted light, on an inverse scale: the smaller it is, the farther the diffracted waves travel, and so the larger the lens flare appears (up to some limit).

Exposure level: this is self-explanatory and controls the exposure setting of the tonemapped displays (extreme values may lead to unrealistic and/or glitchy results).

Animation speed: controls the speed at which the synthetic scene plays out (it can be set to zero to pause all movement).

Animation Selection: lets you select among a few (hardcoded) scenes.

The aperture definition settings are a bit more involved, but basically let you select at which wavelengths to sample the diffraction distribution of the aperture (other wavelengths are interpolated) and also let you associate a custom color to each wavelength if you so desire. The default settings are fairly close to reality, but are not perfectly calibrated. Right click the list to play around with these settings.

The demo should run at 60 fps on most mainstream cards; it may be a bit slow for those of you with weaker cards, but it should hopefully still be interactive. Let me know if it is unacceptably slow for you: I believe the bottleneck is the FFT convolution stage, which is essentially compute-bound, so I'd be interested to know where to focus optimization efforts, backed by some hard statistics.

The code is not yet mature enough to be released, and there are still a few bugs in my implementation (in particular, a nasty graphical corruption of the central horizontal and vertical lines of the diffraction distribution at high exposures, which probably comes from a subtle off-by-one bug in one of the shaders), but overall the algorithm is rather robust. The cornerstone of the approach is of course the convolution step, which uses the FFT and the Convolution Theorem to efficiently convolve the diffraction distribution with the image, achieving very convincing occlusion and diffraction effects.
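
For the curious, here is a minimal CPU-side sketch of that convolution step. This is not the demo's GPU code (which goes through ID3DX11FFT); the fft2D/inverseFft2D delegates below are stand-in helpers.

```csharp
using System;
using System.Numerics;

static class FftConvolution
{
    // Convolution via the Convolution Theorem: multiply the two spectra pointwise,
    // then transform back. fft2D / inverseFft2D are stand-in FFT helpers.
    public static Complex[,] Convolve(Complex[,] image, Complex[,] kernel,
                                      Func<Complex[,], Complex[,]> fft2D,
                                      Func<Complex[,], Complex[,]> inverseFft2D)
    {
        // Both inputs must already be zero-padded to identical dimensions of at least
        // (image size + kernel size - 1) per axis, otherwise the result wraps around.
        var imageSpectrum  = fft2D(image);
        var kernelSpectrum = fft2D(kernel);

        int w = image.GetLength(0), h = image.GetLength(1);
        var product = new Complex[w, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                product[x, y] = imageSpectrum[x, y] * kernelSpectrum[x, y];

        return inverseFft2D(product); // the convolved frame
    }
}
```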



For those of you who cannot run the demo, I have also compiled a short video to illustrate:


Closing notes

Note that this algorithm almost certainly exists in high-profile renderers - I haven't checked, but while I essentially derived the theory and implementation on my own, I am confident it is already implemented in one form or another somewhere - and it is not quite ready for video games in its current form. The preprocessing step done for each aperture is fine: while I opted for accuracy here, it can be significantly accelerated, and apertures could feasibly be updated dynamically every frame. The real killer is the convolution step, which involves at least 6 large Fourier Transforms with dimensions at least equal to the dimensions of the target image plus the dimensions of the aperture (minus 1). To give you an order of magnitude, we're looking at 2500 × 2500 transforms for a 1080p game with a 512 × 512 aperture, which is not happening today (but will tomorrow). So hacks are required to approximate the convolution, such as heavily blurring the diffraction distribution and pasting it on top of prominent light sources in the player's field of view, which is good enough for games (and happens to be a near-perfect approximation for unoccluded spherical light sources).

One note on the Fourier Transform dimensions. Because all general purpose FFT algorithms are extremely sensitive to the size of their inputs, some dimensions work better than others (for instance, power of two dimensions are the fastest). Fortunately, the convolution step can accept dimensions larger than the minimum required without any loss of accuracy, which gives us some leeway in choosing transform dimensions which will give good performance. For instance, in the case of my demo, both the aperture and image were 600 pixels by 600 pixels, giving a minimum convolution transform dimension of (600 + 600 - 1) = 1199 pixels by 1199 pixels. However, such a transform is not efficient (and gave me around 12 fps). But by simply padding the transform to 1280 pixels by 1280 pixels, framerate shot up to a steady 60 fps. In practice, you want to select dimensions with many small prime factors, such as 2 or 3. As you can see, 1199 = 11 × 109 while 1280 = 2^8 × 5.
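
If you want something a bit tighter than plain power-of-two padding, a small helper along these lines (a sketch, not code from the demo) finds the next size whose prime factors are all small:

```csharp
static class FftSizes
{
    // Returns the smallest dimension >= minSize whose only prime factors are 2, 3 and 5.
    // Such "smooth" sizes are generally cheap for FFT implementations; plain power-of-two
    // padding, as used in the demo, is always a safe choice too.
    public static int NextSmoothSize(int minSize)
    {
        for (int n = minSize; ; n++)
        {
            int m = n;
            foreach (int p in new[] { 2, 3, 5 })
                while (m % p == 0) m /= p;
            if (m == 1) return n; // n factors completely into 2s, 3s and 5s
        }
    }
}

// Example: NextSmoothSize(1199) == 1200 (= 2^4 * 3 * 5^2), versus 1199 = 11 * 109.
```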

If you are wondering what happens if you use smaller dimensions than the minimum required by the convolution, the answer is that because what we are doing is essentially a circular convolution, and the Fourier Transform is periodic, the convolution would "wrap around" from the left edge to the right edge and from the top edge to the bottom edge, and vice versa. This is not what we want. So if you can guarantee that no lens flare will ever come close enough to the edges to bleed offscreen, you can get away with a smaller convolution, but this is of course very situational.


Updates in lens diffraction rendering

Posted 22 August 2013 · 984 views

Hey guys,
so I've been working on and off on my lens diffraction project for the past few weeks, and I think I finally reached a milestone yesterday. I've got something that looks correct, is relatively efficient (read: not quite 60 fps yet, but certainly interactive), and is simple to implement. However, I am still having problems getting the diffraction pattern to look right, which leads me to believe I am either not using plausible aperture functions or am still missing a parameter somewhere. We are getting close, however.

Here is a proof-of-concept render: the left image is a stock HDRI picture of a high-contrast scene without any lens flare artifacts (here a forest with a bright sky) and the right image is the same picture "augmented" with a lens diffraction pattern according to a roughly circular aperture with some noise/dust (click to enlarge):



The pattern itself is still a bit noisy, because this is really the superposition of three different diffraction patterns at wavelengths 450, 525, and 650 nanometres (blue, green, and red), which leaves a bit of ringing due to aliasing, but I have an idea for dealing with that. Either way, I want this implementation to be realtime; as soon as I iron out all the bugs I'll write a post-processing shader for it in DirectX and play around with modifying the aperture interactively to see what works and what doesn't.

I will unveil the tech later once I've got it all sorted out (perhaps in a GameDev article?) but for now it looks promising. I also need to develop methods to tweak it to video game HDR exposure levels, since at the moment, being physically based and all, it really only works well for real-life dynamic ranges (the effect tends to be either too weak or too noticeable otherwise). Ideally I'd like to have a few artist-friendly parameters to rescale and intensify the effect as desired. I understand how rescaling would work, but I am not sure about the other one just yet.

Thanks to openfootage.net for the test photo.


Some progress!

Posted 26 March 2013 · 2,068 views

Hello Gamedev!

So you probably thought my ideas had been swept under the carpet since I posted my last blog entry almost four months ago. Well, not at all! I'm actually making some good progress on my latest renderer, which you can already check out at my GitHub page. There aren't too many features and it's not optimized yet, but I've been trying to get the underlying architecture right before going into feature creep, just so I can last as long as possible before the whole thing inevitably collapses under its own weight. But there are already some juicy renders to check out, and it's certainly a colossal improvement (engineering-wise) over my previous renderer.

But this isn't what this entry is about. I recently came across a nice paper (which you can download here for free) explaining how to render physically based lens flares in real time on conventional hardware. One section of the paper caught my eye, and I'll share the details with you in this entry. The concept is simple. You have probably noticed that when you look at bright objects, they tend to display intricate patterns of alternating light and darkness, called lens flares:



There are two features in this image - the bright star-like object, which I will refer to as the "starburst", and the round thingies which I call "ghosts" (following the paper's terminology). The ghosts didn't interest me, so we will just ignore them. But look at the starburst: does its shape evoke anything? No? Here's a little hint:



That's right, it's a Fourier power spectrum. But why? The answer lies in the concept of far-field diffraction (also called Fraunhofer diffraction), which states that when a beam of parallel waves is partially blocked by an aperture, the waves will diffract, and every point across the aperture becomes a wave emitter. This is a special case of the more general diffraction phenomenon:



And since light is a wave, this applies to our starburst. So what happens is that all those diffracted waves now interfere, that is, they add constructively and destructively. It turns out that this is just what the Fourier transform describes, and therefore our observer (behind the aperture) doesn't see the parallel waves, but instead sees the Fourier transform of the aperture (or rather its power spectrum, since we observe intensity rather than amplitude)! And this is pretty cool, because we know an algorithm to efficiently compute Fourier transforms: the FFT.

Now, you know that each wavelength of light corresponds to a different color, and so we don't just have one Fourier transform - we have infinitely many, one for each wavelength. Fortunately, it turns out that the relationship between two such transforms is a simple rescaling: the Fourier transform for, say, 400nm (blue) is equivalent to the Fourier transform for 550nm (green), just at a different size. So we only need to calculate the transform once, and then superpose scaled copies of this transform over the visible spectrum. The result is called the "chromatic Fourier transform", and guess what? It looks just like our starburst, and all we need to do is slap it on a point light or area light, and... that's it.
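
To make that superposition concrete, here is a rough sketch of the accumulation step. The scaleBilinear and wavelengthToRgb delegates are stand-ins, and the 575 nm reference wavelength and 5 nm step are arbitrary choices for the sketch, not values from any particular implementation.

```csharp
using System;

static class Starburst
{
    // Builds an RGB starburst by summing scaled, tinted copies of one aperture power
    // spectrum over the visible range. scaleBilinear resamples an image by a given
    // factor about its centre; wavelengthToRgb maps a wavelength to a display tint.
    public static float[,,] Chromatic(float[,] powerSpectrum, int size,
                                      Func<float[,], double, float[,]> scaleBilinear,
                                      Func<double, (float r, float g, float b)> wavelengthToRgb)
    {
        var rgb = new float[size, size, 3];
        const double referenceWavelength = 575.0; // wavelength the base spectrum is taken to represent

        for (double lambda = 380.0; lambda <= 780.0; lambda += 5.0)
        {
            // Longer wavelengths diffract farther out, so the pattern scales up with lambda.
            var scaled = scaleBilinear(powerSpectrum, lambda / referenceWavelength);
            var tint = wavelengthToRgb(lambda);

            for (int x = 0; x < size; x++)
                for (int y = 0; y < size; y++)
                {
                    rgb[x, y, 0] += scaled[x, y] * tint.r;
                    rgb[x, y, 1] += scaled[x, y] * tint.g;
                    rgb[x, y, 2] += scaled[x, y] * tint.b;
                }
        }
        return rgb; // still needs normalization and tone-mapping before display
    }
}
```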

There are a few extra things to consider, but I've written up a little prototype. Here are a couple of computer-generated raw starbursts (the starbursts you would see in practice are less colorful, because they necessarily exclude all wavelengths not emitted by the light source, and so often appear just yellow; what you see below is the "baseline" starburst, with all visible wavelengths):





There is some visible ringing in the center, which is probably due to my hasty implementation and poor choice of parameters, but it was a proof-of-concept. Here is a monochromatic one:



As you can see there is no ringing anymore, so it's definitely coming from the way I'm handling colors in my prototype. But it's getting there!

The following is the result of using a circular aperture, with a bit of dirt (simulated by some random dots and lines) on a section of the disk:



You might have expected an Airy diffraction pattern. It is actually there, but because the starburst is so bright you can't discern the fringes - a normalized (not tone-mapped) version of this image clearly shows the pattern (in the center):



Of course, the renders above assume the entire lens is fully illuminated, which is not true in general: only specific parts of the lens will be illuminated, which means the starburst will appear quite different depending on what you are looking at (and how you are looking at it).

Just to give you an idea of how efficient this process is, each of the starbursts above was rendered in about 4 seconds, on one CPU core, with a poorly optimized FFT algorithm, at a resolution of 1024x1024 pixels. Of course the starbursts need not be that large in, say, a game; this was for demonstration purposes. On any decent GPU, the FFT can be completed in a matter of milliseconds, and so the aperture could be updated each frame (with, for instance, particles of dust, rain, or even blood stains) for improved realism.

If you have ever wanted to have cool lens flare effects in your game, I'd recommend checking out the paper. It even comes with code examples to generate your own apertures based on optical lens configurations, and while the paper itself doesn't go into too much detail, it's definitely worth taking a look at.

Who thinks an animated version of this would make a killer screensaver? I certainly do.

EDIT: after tweaking my prototype, I think I've come a bit closer to "realistic" starbursts, though I'm really uncertain - you can't just tweak the code randomly until it works; I need to sit down and slowly work through the theory to get it right - but the results do seem more consistent with reality:

http://i.imgur.com/o23HQeB.jpg




Scattering, Transmission, Absorption, Reflection

Posted 03 December 2012 · 2,682 views

Hello Gamedev!

So my last entry, almost two months ago, left off on a note about light scattering, and this entry, which I'm sure you've all been waiting for with bated breath, will address that question at last, among others. I haven't been writing much code lately, due to time issues and - mostly - procrastination; however, I have had a lot of time to think.

I think I've succeeded in creating a sound conceptual framework based on relatively simple physical theories which should be able to model essentially everything non-magnetic/quantum/relativistic, from basic reflection to complicated cloud scattering. The basic idea is that any light-matter interaction can be modelled through only four essential processes:

- Scattering
- Transmission
- Absorption
- Reflection

Or STAR for short (yes, I rearranged them just to make this pun). However, that isn't all - each of these interactions can be studied independently.

If you've been doing computer graphics, you probably know reflection well, through the use of BRDF's, which are reflectance distributions used to describe where light is going to get reflected from a surface. In the STAR framework, none of this exists. Reflection is described purely by the Fresnel equations, modulated by a surface distribution. In essence, a variant of Cook-Torrance.

This is because this model is really the only physically correct one, as Fresnel reflection must occur any time a light ray crosses a medium boundary (so all we need is to make this medium boundary more or less smooth, depending on how rough you want your surface to be, which can be done statistically with a microfacet distribution coupled with a self-shadowing term - or you can add microscopic triangles to your geometry for complete control, but that's not very practical).

Transmission is easy, as the Fresnel equations handle transmission at the same time, so no extra complexity there. This includes total internal reflection, by the way.
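
For reference, here is a minimal sketch of the unpolarized Fresnel reflectance at a dielectric boundary; the transmitted fraction is simply 1 - R, and total internal reflection falls out of it naturally:

```csharp
using System;

static class Fresnel
{
    // Unpolarized Fresnel reflectance for a ray hitting a dielectric boundary, going
    // from a medium with index n1 into one with index n2.
    public static double Reflectance(double cosThetaI, double n1, double n2)
    {
        cosThetaI = Math.Abs(cosThetaI);
        double sinThetaT = (n1 / n2) * Math.Sqrt(Math.Max(0.0, 1.0 - cosThetaI * cosThetaI));
        if (sinThetaT >= 1.0) return 1.0; // total internal reflection: everything is reflected

        double cosThetaT = Math.Sqrt(1.0 - sinThetaT * sinThetaT);
        double rs = (n1 * cosThetaI - n2 * cosThetaT) / (n1 * cosThetaI + n2 * cosThetaT);
        double rp = (n1 * cosThetaT - n2 * cosThetaI) / (n1 * cosThetaT + n2 * cosThetaI);
        return 0.5 * (rs * rs + rp * rp); // average of the two polarizations
    }
}
```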

Absorption is described by the Beer-Lambert law, which states that the intensity of radiation through some medium decreases exponentially with distance travelled, depending on the medium's absorption (or extinction) coefficient, and density (this assumes a homogeneous medium, but you can use numerical integration to handle local density fluctuations). The simplest real-life example of this is red ink - inside its bottle, it appears black, because light must travel through many cm of ink before reaching your eyes and most of it gets absorbed. But as soon as you put the ink on paper, it appears deep red, as now light only travels half a mm or so inside.
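
In code, the homogeneous case is a one-liner (a sketch; sigmaA here lumps the absorption coefficient and density together):

```csharp
using System;

static class BeerLambert
{
    // Transmittance over a straight path of length d through a homogeneous medium with
    // absorption coefficient sigmaA (in inverse length units). For a heterogeneous
    // medium, sigmaA * d becomes the integral of sigmaA along the path.
    public static double Transmittance(double sigmaA, double d)
    {
        return Math.Exp(-sigmaA * d);
    }
}
```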

Finally, scattering can occur in any impure medium, whenever a light ray encounters a particle different from the medium surrounding it (for instance, a dust particle in the atmosphere, or an air bubble in milk). When this happens, the light ray is scattered in some direction (potentially back where it came from) depending on the medium's phase function, which is really just like a BRDF, but for scattering. Those phase functions are typically much more complicated than BRDF's, depending on the size of the particle, as they must often account for interference effects. For instance, the phase function for a 10μm silver particle in air is (for red light, on a logarithmic scale):



As you can see, most of the incident light (at 0 degrees) is scattered backwards, but some is randomly scattered in every direction. The wavy nature of the phase function is due to interference between light rays (which are getting reflected back and forth inside the silver particle). This plot was made with the MiePlot software.

Scattering and absorption are generally simulated together, just as reflection and transmission are. So really, there are only two different interactions to consider: surface interactions (reflection & transmission -> Fresnel Equations), and volume interactions (scattering & absorption -> Phase Function). That's it.

When it comes to light rays, we really don't need the ray formulation of light anymore and we can consider them waves for all intents and purposes. That means that our light waves have a frequency (which is constant), a wavelength, and a phase (which varies with time and with interactions with matter). Therefore, we can model any physical phenomenon which uses those quantities, in particular diffraction, and interference, which is responsible for a wide range of effects, such as... mirrors! Your mirror uses a reflective coating based on the concept of interference:


A similar principle applies to anti-reflective coatings (the idea is to make the reflected rays interfere destructively with each other such that only the transmitted rays remain). Let us see how STAR applies to some common materials found in the real world:

Hard wood:
- rough surface
- very high absorption (which stops transmitted rays)
- no visible scattering since the absorption coefficient is so high

Marble:
- smooth surface
- low absorption, enough to make a thin marble layer translucent (i.e. not opaque, but not transparent either)
- high backscattering (this is the famous subsurface scattering phenomenon)

Clear atmosphere:
- no surface
- extremely low absorption (although this depends on height)
- generally forward scattering, but mostly for blue wavelengths (this is why mountains in the distance appear blue/gray)

Pure glass:
- ordinary dielectric surface (reflection/transmission as usual)
- absorption zero, or very very low
- no scattering

Reflective object:
- multiple surfaces to enhance reflection, as shown in the mirror diagram above
- extremely high absorption (the object is opaque)
- no visible scattering

Soap bubble:
- multiple surfaces (the layers go air || soap || water || soap || air)
- no absorption, as the soap/water layers are extremely thin
- no visible scattering for the same reason
- strong interference effects (thin film interference) responsible for the colorful aspect of soap bubbles

And so on... which is a much larger array of materials than can be represented with traditional approaches.

At this point, you're probably thinking, "cool, but does it blend? how long does it take to render something like that?", which is a fair remark, and I suspect it will take quite a while to render complicated stuff like multiple scattering. On the other hand, the simplicity of the framework makes implementation easier, especially on limited platforms such as the GPU: there are few "moving parts", and the only hard thing is implementing diffraction, which isn't even important for most scenes, in contrast to interference. Even complex phase functions can be conveniently fitted by analytical models, for instance the mean cosine distribution, which is kind of like Blinn-Phong, but for scattering.
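
For reference, a very common single-parameter analytical phase function built around the mean cosine is the Henyey-Greenstein one; a minimal sketch:

```csharp
using System;

static class PhaseFunctions
{
    // Henyey-Greenstein phase function: its parameter g is the mean cosine of the
    // scattering angle (g > 0 forward, g < 0 backward, g = 0 isotropic).
    // Normalized so it integrates to 1 over the sphere.
    public static double HenyeyGreenstein(double cosTheta, double g)
    {
        double denom = 1.0 + g * g - 2.0 * g * cosTheta;
        return (1.0 - g * g) / (4.0 * Math.PI * Math.Pow(denom, 1.5));
    }
}
```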

Sure, you get a few more intersection tests with your geometry when it comes to multiple surface layers, but you're going to be doing that a lot regardless if you want to accurately model scattering (approximations only get you so far), so you might as well get correct reflections too. One important thing to note is that adding extra layers to a mesh does not add any memory overhead, since the layer has the exact same geometry as the mesh, except slightly enlarged, so all you need to do is keep track of how many layers you have, and their distances from each other.

In conclusion, the main selling point of the STAR framework is that it completely eliminates the notion of the BRDF, using instead a phase function for scattering (for which a general-purpose analytical model exists, by the way: Mie scattering - though it's hell to compute, so approximations are required). This increases the array of materials that can be represented and generalizes volume rendering nicely. Of course, BRDF's can still be used, but that defeats the point, as many BRDF's are actually already approximations to some form of scattering, or are just glorified surface distributions.

I am confident I am not the first person to come up with this, and a lot of high-end rendering systems probably already have all of this implemented in one way or another - minus perhaps interference - but I look forward to implementing this for myself to see how it would work in practice. After all, this is all interesting new territory for me, and the point of all this is to have fun discovering new stuff, not necessarily to produce the most efficient and comprehensive algorithm ever. It's pretty likely that in a few years, nobody except me will remember this article I wrote today, but that's all right.

In any case, I think I am nearing the state of the art in rendering at this point, so once I finish exploring this whole scattering thing, I'll probably move on to something different, perhaps game design, who knows (this is GameDev, after all). I will have spent over a year on ray tracing and lighting theory, but I feel it has been worth it, and I learned a lot of good physics and mathematics. I would definitely recommend that anyone interested in this do the same.


Rough Glass Microfacet Model

Posted 07 October 2012 · 1,445 views

Transmission models are a bit more involved than reflection models. The basic ideal dielectric refractive model seen in many renders all over the net is nice and all, but it just doesn't cut it when you want rough transmission, such as in frosted glass.

The one displayed here is based on microfacet theory, which means we assume that the surface is not in fact planar but has many microscopic imperfections - facets - facing in more or less random directions, each of which reflects and transmits light ideally. Of course, we don't actually model all those facets; instead, we use statistical methods to evaluate them:


We get to choose the distribution of the facets ourselves - that is, the probability that any one facet is the one hit by the view ray - and use it to find the surface normal of that microfacet. For this render, I chose the Beckmann distribution, but there are many more. The Beckmann distribution has a "roughness" parameter, which in simple terms defines the amount of variance in the facets' directions.

A roughness of zero means there are no facets - they are all pointing in the same direction, forming a uniform surface, so in this case the model degenerates into a long-winded form of ideal dielectric refraction. On the other hand, a very high roughness corresponds to diffuse transmission, where facets are all over the place, which looks rather odd (as evidenced by the lower-right sphere in the render).
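
For reference, sampling a facet normal from the Beckmann distribution boils down to a couple of lines; this sketch follows the inversion formula from the paper cited below:

```csharp
using System;

static class Beckmann
{
    // Samples a microfacet normal from the Beckmann distribution with roughness alpha.
    // Returns the normal in tangent space, where z is the macroscopic surface normal;
    // u1 and u2 are uniform random numbers in [0, 1).
    public static (double x, double y, double z) SampleNormal(double alpha, double u1, double u2)
    {
        double tan2Theta = -alpha * alpha * Math.Log(1.0 - u1); // theta_m = arctan(sqrt(-a^2 ln(1 - u1)))
        double cosTheta  = 1.0 / Math.Sqrt(1.0 + tan2Theta);
        double sinTheta  = Math.Sqrt(Math.Max(0.0, 1.0 - cosTheta * cosTheta));
        double phi       = 2.0 * Math.PI * u2;                  // phi_m = 2 pi u2

        return (sinTheta * Math.Cos(phi), sinTheta * Math.Sin(phi), cosTheta);
    }
}
```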

The next step, after having selected a normal vector for our facet, is simply to find the direction the new ray will travel in, and its intensity. Easier said than done. The direction is straightforward: you calculate the Fresnel reflection/transmission coefficients and apply the law of reflection or Snell's Law accordingly to find the reflected/transmitted direction (remember to check for total internal reflection).
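
Here is a minimal sketch of that direction computation; picking between reflection and refraction according to the Fresnel coefficient is left out, but total internal reflection is handled:

```csharp
using System;
using System.Numerics;

static class Directions
{
    // Reflect with the law of reflection, or refract with Snell's law, falling back to
    // reflection on total internal reflection. All vectors are unit length; 'incident'
    // points towards the surface, 'normal' points against it, and eta = n1 / n2.
    public static Vector3 ReflectOrRefract(Vector3 incident, Vector3 normal, float eta,
                                           out bool totalInternalReflection)
    {
        float cosThetaI  = -Vector3.Dot(normal, incident);
        float sin2ThetaT = eta * eta * (1.0f - cosThetaI * cosThetaI);

        totalInternalReflection = sin2ThetaT > 1.0f;
        if (totalInternalReflection)
            return incident + 2.0f * cosThetaI * normal;                // law of reflection

        float cosThetaT = (float)Math.Sqrt(1.0f - sin2ThetaT);
        return eta * incident + (eta * cosThetaI - cosThetaT) * normal; // Snell's law
    }
}
```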

The actual intensity of the light ray is - of course - unchanged because there is no energy loss at the interface (well, there is, because light is a wave, but we are ignoring it here). But we're not talking about the intensity of individual rays, but of all of them on average, because we select the microfacet (and hence, the direction of reflection/transmission) probabilistically. This intensity is dependent on many things, but in particular we need to take into account self-shadowing: microfacets hiding each other. This is harder to take into account because we can't just go in there and test for intersection ourselves, so we must introduce a self-shadowing term, which is the probability of a microfacet being blocked from our view by another.

The exact specifics of the math are too elaborate to go into detail here, but the reference I used is "Microfacet Models For Refraction Through Rough Surfaces" (it's free!). These models are more difficult to use and understand than standard reflection BRDF's, but they are worth it!

Another cool thing is that if you use this model on a material that is incapable of refraction then you will only get Fresnel reflection, and the model will simplify down to Cook-Torrance. One material to rule them all! (almost, anyway, we're still missing scattering - more on that later).


Spectral Path Tracing

Posted 23 September 2012 · 2,554 views

I haven't posted in a long time, but I have been quite busy with other things. I have recently turned my attention to spectral path tracing in an attempt to investigate the "purest" form of rendering. The results are... interesting.

Spectral path tracing is essentially the same as normal path tracing, except that instead of working with RGB colors, you work with wavelengths in the visible spectrum (380nm to 780nm). By doing this, you will obtain a spectral power distribution for each pixel, which describes the amount of incoming radiance per wavelength. To obtain a color from this, you first convert it to a CIE XYZ color - this is done by integrating the distribution against a color-matching curve (this curve varies among people and species; an average "human" one is available for free on the CIE website at 1nm intervals). I am currently sampling wavelengths at 5nm intervals, which seems good enough.

CIE XYZ is a "perceptual color", meaning that it is independent of the device being used to display it. As such, you need to convert it to a device color before being able to get actual colors out of it. There are many different ways to do this; the simplest one is to assume a given "color system", which encodes a particular RGB gamut (the range of colors and intensities that can be displayed). The simplest color system is the CIE one, but there are others, like the HDTV color system, NTSC, PAL, etc...
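
To make the pipeline concrete, here is a condensed sketch of the spectrum -> XYZ -> linear RGB conversion. It assumes the color-matching functions are tabulated at the same spacing as the radiance spectrum, and uses the common sRGB/D65 matrix; swap in the matrix of whichever color system you prefer.

```csharp
using System;

static class SpectralColor
{
    // Integrates a per-wavelength radiance spectrum against the CIE color-matching
    // functions (xBar, yBar, zBar), then converts XYZ to linear RGB. The result is
    // still linear: tone-map and gamma-correct afterwards.
    public static (double r, double g, double b) ToLinearRgb(
        double[] radiance, double[] xBar, double[] yBar, double[] zBar, double stepNm = 5.0)
    {
        double X = 0, Y = 0, Z = 0;
        for (int i = 0; i < radiance.Length; i++) // Riemann sum over the visible spectrum
        {
            X += radiance[i] * xBar[i] * stepNm;
            Y += radiance[i] * yBar[i] * stepNm;
            Z += radiance[i] * zBar[i] * stepNm;
        }

        // XYZ -> linear RGB using the sRGB (D65) primaries.
        double r =  3.2406 * X - 1.5372 * Y - 0.4986 * Z;
        double g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z;
        double b =  0.0557 * X - 0.2040 * Y + 1.0570 * Z;
        return (r, g, b);
    }
}
```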

Remember that normalized CIE XYZ colors do not encode any absolute brightness information, but this can be obtained rather easily. Indeed, you can obtain the pixel's total radiance by summing up the per-wavelength radiance over all wavelengths, and multiplying your resulting RGB color by that. This will yield a large luminance range. At this point, you will probably need to tone-map it, as even in simple scenes the per-pixel brightness can vary enormously. Then, you can gamma-correct the render (using the gamma-correction settings of your chosen color system), and you are finished!

There are no real advantages to using spectral rendering when your materials do not depend on wavelength beyond what can be emulated by standard RGB rendering, the only difference being slightly more realistic color interaction, so the attached renders are more of a proof of concept than anything else. However, for materials that change depending on the wavelength, spectral rendering is an absolute must - this includes dispersion, fluorescence, iridescence, etc... It also tends to simplify material tweaking, since you no longer have to rely on "magic numbers" for your light brightnesses, for instance - just use a physically based emission spectrum, such as a black-body curve.

The light source used in the middle render was a black-body radiator with a temperature of 3500K (so it appears yellowish; if the temperature increased, it would tend to white, then to light blue). The left and right renders had a light source with a temperature of 5500K. The white materials were just a flat spectral power distribution - which results in white under the CIE color system - and the other materials were colored using an empirical distribution following a bell curve with the following equation:

R(λ) = a · exp( -((λ - λ₀) / 22.63)² )

Where λ₀ is the dominant wavelength in nanometers (so setting that to 650 would give a red object), R(λ) is the fraction of incoming radiance reflected at this wavelength (as you can see, it never exceeds 1), a is some global albedo constant between 0 and 1, and 22.63 is an empirical constant of mine. Choosing different constants would either make the distribution too spiky, which is no good as the color-matching curve has a limited precision, or it would smear the peak across far too many wavelengths, resulting in bizarre spectral interaction. Of course, you can combine more than one peak by averaging multiple such equations, and you can even work out better, possibly physically based, distributions.

The left and right renders have a glass refractive material (with Sellmeier-based refractive indices) and a Cook-Torrance specular material (not sure I got Cook-Torrance quite right, especially regarding energy conservation). Dispersion is not visible here because, with area lights, the dispersed spectra tend to sum up to white, with only barely visible, very thin blue/red fringes at the edges. A point light would work significantly better, but cannot be done efficiently just yet - I need to implement direct light sampling.


More GPU Path Tracing, honored guest HDR

Posted 07 July 2012 · 2,414 views

Hello,
in my last entry (almost a month ago) I said I was working on implementing HDR to make renders look more... plausible. Here you go. These renders (a few glass spheres with Fresnel reflection and caustics) are lit completely by an HDR environment map (quite low resolution), providing realistic direction-based lighting. The floor in the left render is also normal-mapped (but not in the right render, as I did not find a corresponding normal map texture).

The separation between the ground and the surrounding environment map is still far too sharp; I believe an interpolation-based approach to simulate the gradient effect that occurs when sky and earth blend together at the horizon (if that makes sense) could work. Or perhaps the glass spheres are simply too perfect to be physically plausible and could use some random noise to make them look more familiar.

Each render didn't take very long, but then again the geometry is very simple; the only reason the scene looks so complex is the environment map.

The two (separated) renders are available in my Gallery as well (but no high-resolution version this time as I forgot to render them and changed the code too much since then).


GPU-powered path tracing, take two

Posted 14 June 2012 · 2,028 views

This image was generated with my GPU. I added minimal texture support, for the floor plane and as an environment map (basically, when a ray does not intersect any object, its direction vector is used to sample the environment map, and that is used as an approximate lighting value). I was impressed by the convergence rate: the noise disappeared in literally less than ten seconds, thanks to the radiance contribution from the entire environment map.
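
For reference, the lookup itself is tiny if you assume an equirectangular (latitude/longitude) environment map; a sketch:

```csharp
using System;
using System.Numerics;

static class EnvironmentMap
{
    // Maps a ray direction to texture coordinates in [0, 1], assuming an equirectangular
    // (latitude/longitude) environment map with +Y up. The exact axis convention is a
    // choice; flip or rotate to match your image.
    public static (float u, float v) DirectionToUv(Vector3 dir)
    {
        dir = Vector3.Normalize(dir);
        double azimuth = Math.Atan2(dir.X, -dir.Z);                       // [-pi, pi]
        double polar   = Math.Acos(Math.Max(-1.0, Math.Min(1.0, dir.Y))); // [0, pi]

        float u = (float)(0.5 + azimuth / (2.0 * Math.PI));
        float v = (float)(polar / Math.PI);
        return (u, v);
    }
}
```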

If you are wondering why the sky looks overexposed, it's because I was using a low dynamic range environment map (so it was necessary to sort of fake the high dynamic range by raising the color values to some exponent, making the sky look weird). With a proper HDR image this would not occur as the sun in the environment map would naturally have a high radiance contribution whereas the rest of the image would have "standard" contributions, which would work better.

But high-resolution HDR images are hard to find and I need to find a way to parse them as well. Perhaps for the next entry!

An HQ (2048x2048) version of this render is available in my Gallery album. Beware - it's 10 MB!


GPU-powered path tracing

Posted 25 May 2012 · 1,694 views

I haven't posted in a long time - busy at uni. I have nevertheless been spending some time attempting to use OpenCL for path tracing. I made a quick, throwaway program, mostly used to check out how a GPU architecture would translate to simple, hardcoded ray tracing. The results are promising, to say the least... this little Cornell box scene attached has - brace yourselves - 1.6 million samples per pixel, for ultimate crispiness. It is also very high resolution (3840x2160, i.e. twice the width and height of a 1080p image). And it only took 20 hours of work on a mainstream GPU (I shudder to think how long it would have taken on the processor instead).

I am going to spend some time writing a basic ray tracing wrapper around OpenCL (without acceleration structures, or any bells and whistles), just to test the limits of the GPU. I have to say it is refreshing to be able to obtain a result so quickly. I don't intend to use it for real-time display, as even powerful GPU's are not yet able to do that (they can get away with perhaps at best 64 samples per pixel per frame, which still results in a lot of visual noise).

It will also give me an opportunity to rehearse those shader coding skills I've neglected for so long...

A low-resolution version of the final render is attached to this entry (the full-resolution image is available in my Gallery, and is quite big ~ 5MB in PNG format). Feel free to use it in any way you wish!
