Visually Correct Soft Shadows Using Shadowmapping

11 comments, last by Icebraker 17 years, 6 months ago
I figured out how to generate pseudo-correct soft shadows using shadowmapping. The basic idea is to increase the blurring of the shadow based on distance from the shadow caster. The basic algorithm I came up with (which seems to be one of many possible similar ways) is this:

1) Use the post-projected w value (which is the view-transformed z value) instead of the projected z value. This gives an even distribution of depths rather than the hyperbolic distribution you get from the projected z values.

2) Sample the shadowmap using a box or Gaussian square filter rather than lerping between 4 samples.

3) When determining the shadowmap coords of the filter locations, scale each based upon the pixel's distance from the depth stored in the shadowmap. This widens the sampling range of the filter and results in a larger blur as the shadow gets further from the caster.

I use a 5x5 box filter right now, but I hope to tweak it into some kind of circular Gaussian filter this weekend. The numbers I use for the scaling are still up in the air, but multiplying the sample locations (in other words, "spreading them out") by up to about 2.0 seems to work well.

There are a lot of things that need to be tweaked and fixed; for instance, there should be some kind of cutoff so that at long distances the shadow doesn't just become multiple dimmer copies of itself. I haven't gotten to that yet but will this weekend.

As far as performance goes, the pixel shader takes a very long time to compile but hardly takes any longer during rendering. Texture sampling operations and shadowmap comparisons using step() seem to be extremely cheap; I get comparable render times with the 5x5 box filter and with the 2x2 lerping method.

Here is a pic using the virus model that comes with the DirectX SDK.

[Edited by - JakeOfFury on September 8, 2006 8:59:15 AM]
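For anyone who wants to experiment with the three steps above, here is a minimal CPU-side Python sketch of the distance-scaled box filter (steps 2 and 3). The shadow map is just a 2D list of linear depths; names like `scale_factor` and the bias constant are my own illustrative choices, not from the original post:

```python
# CPU-side sketch of the distance-scaled 5x5 box filter described above.
# The shadow map is a plain 2D list of linear (view-space) depths.

def sample_depth(shadow_map, u, v):
    """Nearest-neighbour lookup with clamped texel coordinates."""
    h, w = len(shadow_map), len(shadow_map[0])
    x = min(max(int(u * w), 0), w - 1)
    y = min(max(int(v * h), 0), h - 1)
    return shadow_map[y][x]

def soft_shadow(shadow_map, u, v, pixel_depth, texel=1.0 / 512, scale_factor=2.0):
    """Return light visibility in [0, 1] for a receiver at depth pixel_depth.

    The filter footprint grows with the receiver's distance behind the
    occluder stored at the centre tap, widening the blur as the shadow
    falls further from its caster (step 3 of the algorithm)."""
    center = sample_depth(shadow_map, u, v)
    # Distance behind the stored occluder; clamped so lit pixels use spread 0.
    spread = max(pixel_depth - center, 0.0) * scale_factor
    lit = 0
    for j in range(-2, 3):
        for i in range(-2, 3):
            d = sample_depth(shadow_map,
                             u + i * texel * (1.0 + spread),
                             v + j * texel * (1.0 + spread))
            # Equivalent of the HLSL step() comparison: 1 if not occluded.
            lit += 1 if pixel_depth <= d + 1e-4 else 0
    return lit / 25.0
```

A fully lit pixel returns 1.0 and a fully occluded one returns 0.0; values in between form the penumbra. In a real shader the inner loop maps onto 25 texture samples plus step() comparisons, as described above.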
Quote:the pixels distance from the depth stored in the shadowmap


Could you please tell us how you calculate that distance?
I tried this once, but it didn't work because simply calculating "shadow_map_dist - pixel_dist_to_light" discards the penumbra regions and you get an ugly edge inside them.

The image looks very cool BTW.
Hey,
that sounds interesting! I understand No. 3 very well. I do not get No. 1. If the reason behind this is the even distribution of depth you might just use the techniques that are commonly used to get a linear depth distribution.

I am looking forward to hearing more.

- Wolf
That's basically the same idea as Percentage-Closer Soft Shadows, published by Randima Fernando (NVIDIA) a few years ago.

One thing that you're missing though is that it's necessary to do some sort of area search for blockers as well, since all of the blockers in your chosen filter region will affect the attenuation of the light source. PCSS pretty much brute-forces both of these steps (blocker search and filtering), but there are potentially more clever ways of doing it.
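For concreteness, here is a small Python sketch of the kind of blocker search PCSS does. This is not code from the thread; the window radius and `light_size` parameter are illustrative assumptions:

```python
# Sketch of a PCSS-style blocker search: average the depths of all
# occluding texels in a search window, then derive a penumbra width
# from similar triangles.

def find_avg_blocker_depth(shadow_map, cx, cy, receiver_depth, radius=2):
    """Average depth of texels in the window that occlude the receiver.
    Returns None when no blocker is found (pixel fully lit)."""
    h, w = len(shadow_map), len(shadow_map[0])
    total, count = 0.0, 0
    for j in range(-radius, radius + 1):
        for i in range(-radius, radius + 1):
            x = min(max(cx + i, 0), w - 1)
            y = min(max(cy + j, 0), h - 1)
            d = shadow_map[y][x]
            if d < receiver_depth:   # this texel blocks the receiver
                total += d
                count += 1
    return total / count if count else None

def penumbra_width(receiver_depth, blocker_depth, light_size=1.0):
    """PCSS similar-triangles estimate: w = (d_r - d_b) * light / d_b."""
    return (receiver_depth - blocker_depth) * light_size / blocker_depth
```

The penumbra width then drives the filter spread instead of the raw depth difference alone, which avoids the hard edge inside the penumbra that was mentioned earlier in the thread.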

In particular, one might note that a mipmap chain gives you the sort of thing that you're looking for: i.e. the shadow map, filtered to various resolutions. Mipmapping of course is incompatible with standard shadow mapping, but works fine with variance shadow maps... I'll leave it at that, since I'm still working on some ideas for the latter and wouldn't want to lead people down a potentially dead end.

Note that you still probably want to use the frac of the initial projected texture coordinates to lerp your shadow attenuation, otherwise you're gonna get a blocky image since you are restricted to only N attenuation levels, where N is the number of filter samples that you take.
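That frac-based lerp can be sketched like this in Python, where `atten(x, y)` is a hypothetical helper standing in for the filtered attenuation at an integer texel:

```python
import math

def bilerp_attenuation(atten, u, v):
    """Blend the four nearest texels' attenuation values using the
    fractional part of the projected coordinates (HLSL frac), turning N
    discrete filter levels into a smooth gradient."""
    x0, y0 = math.floor(u), math.floor(v)
    fx, fy = u - x0, v - y0
    top = atten(x0, y0) * (1 - fx) + atten(x0 + 1, y0) * fx
    bot = atten(x0, y0 + 1) * (1 - fx) + atten(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy
```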
To address some of your posts:

1) I calculate the distance via a simple subtraction -- yes, this discards the penumbra region and will result in a nasty artifact if the scaling is too high. I haven't implemented the "blocker search" mentioned but will this weekend. As is, the procedure gives sort of strange results, as you can see from the picture (although better than a sharp shadow, to be sure).

2) I just read that nvidia article, yes this is exactly my idea. This is like the 100th time I have had a great idea only to find out that someone else beat me to it. Arrrrrgh!!
Also, to AndyTX, I tried the mipmap thing but it doesn't really work. Initially my idea was to simply bias the sampling according to the shadow depth using the HLSL tex2Dlod() instruction (available with shader model 3.0 finally) and use the standard 2x2 PCF lerping technique from there -- this would be much faster than using a large filter like I do now. It *does* work, but unfortunately gives very bad results, because to generate a mip chain on the shadowmap you need to use nearest neighbor filtering and that produces huge "shifts" in the shadow edges. As you can imagine, the location of shadow pixels in a 1024x1024 shadowmap might be very different from those in a 128x128 shadowmap.

I could get the mipmap idea to work, but I would have to generate the mip chain myself using a custom filter that simply keeps the highest or lowest pixel value of the sampling rather than using nearest neighbor. This loses any hardware support for mip chain generation however, not to mention the pain in the ass it is to get access to render target textures in D3D.
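The custom mip chain described above (keep the extreme depth of each 2x2 block instead of averaging or picking a nearest neighbor) can be sketched on the CPU like this; the function names are my own, and I've picked the minimum as the kept value:

```python
def min_mip_level(depths):
    """Halve resolution by keeping the minimum (closest-to-light) depth of
    each 2x2 block. Coarser levels are then conservative: they never
    report as lit a pixel that the finer level shadows. Assumes even,
    power-of-two dimensions."""
    h, w = len(depths), len(depths[0])
    return [[min(depths[y][x], depths[y][x + 1],
                 depths[y + 1][x], depths[y + 1][x + 1])
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def build_mip_chain(depths):
    """Full chain from the base level down to 1x1."""
    chain = [depths]
    while len(chain[-1]) > 1:
        chain.append(min_mip_level(chain[-1]))
    return chain
```

On the GPU the same thing would have to be done by rendering each level from the previous one with a small min-filter shader, which is exactly the loss of hardware mip generation mentioned above.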

I downloaded a variance mapping paper and I will see what's up with that this weekend.
Check out the 2 most recent soft shadow mapping techniques in EG and CGF; one of them can approximate supersampled quality at zoomed-out ranges.
Quote:2) I just read that nvidia article, yes this is exactly my idea. This is like the 100th time I have had a great idea only to find out that someone else beat me to it. Arrrrrgh!!

Yeah that tends to happen quite a bit. It can be minimized by keeping up-to-date on all the new papers that are published though, at least by skimming all of the abstracts.

Quote:Original post by JakeOfFury
It *does* work, but unfortunately gives very bad results

Right: as I mentioned it is totally improper to interpolate depth values in this manner.

Quote:Original post by JakeOfFury
I downloaded a variance mapping paper and I will see whats up with that this weekend.

The entire point of variance shadow maps is to represent the depth distribution in a manner that *can* be linearly interpolated, so mipmapping (and LOD biasing), bilinear filtering, anisotropic filtering, MSAA, or whatever linear filtering you desire works properly.
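To make that concrete, here's a tiny Python sketch of the variance shadow map math (Chebyshev's inequality over the stored moments E[z] and E[z^2]); this is my own illustration, not code from the paper:

```python
def filter_moments(samples):
    """Linearly average (depth, depth^2) moments, standing in for any
    linear filter: a bilinear tap, a mip level, an MSAA resolve."""
    n = len(samples)
    mean = sum(z for z in samples) / n
    mean_sq = sum(z * z for z in samples) / n
    return mean, mean_sq

def vsm_visibility(mean, mean_sq, receiver_depth, min_variance=1e-4):
    """Chebyshev upper bound: P(z >= t) <= var / (var + (t - mean)^2).
    Because the two moments filter linearly, this estimate stays
    meaningful after any amount of prefiltering."""
    if receiver_depth <= mean:
        return 1.0  # receiver in front of the mean occluder: fully lit
    variance = max(mean_sq - mean * mean, min_variance)
    d = receiver_depth - mean
    return variance / (variance + d * d)
```

Note how a spread of blocker depths (nonzero variance) automatically yields a soft falloff instead of a hard step.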
Quote:Check out the 2 most recent soft shadow mapping techniques in EG and CGF.
What are EG and CGF? Could you provide a bit more description here?

Thanks in advance,
- Wolf
hi wolf!

EG - eurographics
CGF - computer graphics forum

Soft Shadow Maps: Efficient Sampling of Light Source Visibility

Real-time soft shadow mapping by backprojection

Edwin

