Blurring shadows in ray tracing

4 comments, last by Luke Hutchinson 18 years, 3 months ago
Hi, if an area light doesn't use enough samples, the resulting shadow is noisy. But I once saw that Lightwave offers a switch enabling a kind of 'filter' that blurs the shadows, so that a few samples are enough in many cases (with some quality loss, of course, but without annoying artifacts). Does anyone know how to achieve this effect? Is it done in the ray-tracing phase, or are all the shadows written to a separate buffer, then blurred, and then applied to the rendered scene using the z-buffer as a mask? Would the latter work?
Not exactly a filtering of the shadow, but Single Sample Soft Shadows might interest you.
Interesting paper. Not exactly what I was looking for, but definitely something I could try to implement in my next RT. ++
I don't know how Lightwave would do it, but I'd personally favor something like this:

- From the shadowed point P to the light source L, sample L the required number of times. For this method a regular or highly stratified sampling strategy will probably work best.

- Take groups of three samples and project them onto a hypothetical plane. On this plane, demarcate a triangle with the three sample points as vertices. Assign each vertex a value equal to the light occlusion at that sample point.

- Interpolate the values across the triangle and integrate them to determine a "loose" occlusion value on the interval [0, 1].

- Repeat for the remaining samples and combine the results.


The interpolation process should be fairly easy to mathematically reduce to a few simple operations, and will definitely be cheaper than repeatedly sampling the scene. This should give you relatively good and smooth results without needing a huge number of samples, but it won't be precisely accurate; it should still preserve singularities correctly though. An adaptive approach might also pay off very well, but I'd have to think more about how to parameterize the error value to set up a proper terminating condition.
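The steps above can be sketched roughly as follows. This is a hypothetical illustration, not anyone's actual implementation: it assumes 2D sample points on the light's plane and an occlusion value in [0, 1] per sample, and it uses the fact that the integral of a linearly interpolated function over a triangle is just the triangle's area times the mean of its vertex values, which is the "few simple operations" reduction mentioned above.

```python
def tri_area(a, b, c):
    # Unsigned area of triangle abc via the 2D cross product.
    return abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])) / 2.0

def loose_occlusion(triangles):
    # triangles: list of ((p1, p2, p3), (v1, v2, v3)) where p* are 2D
    # points on the light plane and v* are occlusion values in [0, 1].
    # Integrating the linear interpolation over each triangle collapses
    # to an area-weighted average of the vertex occlusion values.
    total_area = 0.0
    total = 0.0
    for (p1, p2, p3), (v1, v2, v3) in triangles:
        a = tri_area(p1, p2, p3)
        total_area += a
        total += a * (v1 + v2 + v3) / 3.0
    return total / total_area if total_area > 0 else 0.0

# Example: a unit-square light split into two triangles, with one
# shared corner fully occluded.
tris = [
    (((0, 0), (1, 0), (1, 1)), (1.0, 0.0, 0.0)),
    (((0, 0), (1, 1), (0, 1)), (1.0, 0.0, 0.0)),
]
print(loose_occlusion(tris))  # prints 0.3333... (area-weighted average)
```

In a real tracer the occlusion values would come from shadow rays, and the grouping into triangles would follow whatever stratification the sampler uses.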


Storing shadow data in a secondary buffer might work, but it would be extremely difficult to blur/blend that buffer without it bleeding into edges where it shouldn't. For instance, if you look down at your feet in the sunlight and see your shadow, the shadow shouldn't blur onto your shoes [wink] There would have to be some kind of visual occlusion buffer recording which pixels were camera-visible at which depth, to "clip" the shadow blurring. It might be possible to get a good result that way, but it would be a lot of work, and the performance may or may not be acceptable.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Uh, thank you. It sounds reasonably doable, but it's still a lot of work, so I think I'll come back to it in the future.

Quote:
Storing shadow data in a secondary buffer might work, but it would be extremely difficult to blur/blend that buffer without it bleeding into edges where it shouldn't.

Yep, I thought that too. I wonder if playing a bit with the z-buffer could lead to a working solution.
Another approach that I've used before is to adaptively cast more rays from the light source in the penumbra areas of the shadow. If you've already implemented area light sources by casting multiple rays, then this should be a reasonably straightforward improvement.

The idea is very similar to Whitted's adaptive supersampling. Basically, for a rectangular area light source, you always cast a minimum of four rays, one from each corner. If some are blocked, but not all, then you recursively subdivide the light source area and cast more rays. The results from my implementation turned out pretty well.

I've described the algorithm in more detail (including an important optimisation) here, and my source code is also on that page.
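A minimal sketch of that adaptive scheme might look like the following. This is my own illustration, not the poster's source code: `blocked(p)` stands in for a real shadow-ray test from the shaded point toward the light-sample point `p`, and the recursion depth limit and the leaf estimate are arbitrary choices.

```python
def visibility(x0, y0, x1, y1, blocked, depth=0, max_depth=4):
    # Shadow-test the four corners of this rectangular light patch.
    corners = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
    hits = [blocked(p) for p in corners]
    if all(hits):
        return 0.0              # patch fully occluded
    if not any(hits):
        return 1.0              # patch fully visible
    if depth >= max_depth:
        # Penumbra leaf: estimate visibility from the corner samples.
        return 1.0 - sum(hits) / 4.0
    # Mixed result: subdivide into four quadrants and average them.
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return (visibility(x0, y0, mx, my, blocked, depth+1, max_depth) +
            visibility(mx, y0, x1, my, blocked, depth+1, max_depth) +
            visibility(x0, my, mx, y1, blocked, depth+1, max_depth) +
            visibility(mx, my, x1, y1, blocked, depth+1, max_depth)) / 4.0

# Toy occluder: the left half of a unit-square light is blocked.
print(visibility(0.0, 0.0, 1.0, 1.0, lambda p: p[0] < 0.5))
# prints 0.53125 (true visible fraction is 0.5); deeper recursion
# converges on the exact value.
```

Note how rays are only added along the occlusion boundary; fully lit and fully shadowed patches terminate after the first four corner tests.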

