PCSS - unusable?

myers    143
It seems to me there's a fundamental problem with Percentage Closer Soft Shadows, which makes them unsuited to all but the simplest scenes. For those who are unfamiliar with the algorithm, it's described in this paper. Basically you average the blocker depths (the shadow-map samples nearer the light than the receiver) over a search region, and then perform standard PCF with a kernel size derived from that average. In theory, this makes shadows appear "softer" the further the receiver is from its occluder.

The problem occurs when a shadow receiver is occluded by more than one shadow caster. Because the shadow map contains only the depth of the occluder nearest the light, any more distant occluders are effectively ignored. This means that shadows' softness will only be "correct" for receivers that are occluded by one caster: a pretty major drawback. This is similar, I think, to the "light bleeding" artifact in Variance Shadow Maps.

The only ideas I can think of to solve this are to use very small penumbrae so you don't notice any inaccuracies (which pretty much negates the whole point of PCSS); or somehow use a "deep" shadow map, with the depth of each occluder stored in a different component (which would be costly, horrible to implement, and would *still* break past a certain number of overlapping occluders); or have a shadow map per caster. But I haven't seen this problem mentioned in any discussion of PCSS, which makes me wonder if I'm missing something obvious. Am I? Is there a simple solution? Otherwise, it seems to me that PCSS is unusable.
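
To make this concrete, here's a rough CPU-side sketch of the three PCSS steps as I read them from the paper - blocker search, penumbra estimation, then PCF. The ShadowMap helper, the square kernels and the lightSizeTexels parameter are my own simplifications (and depth bias is omitted entirely), so treat it as pseudocode that happens to compile rather than as NVIDIA's implementation.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// A depth map in light space: depths[y * width + x], in [0, 1], 0 = at the light.
struct ShadowMap {
    int width = 0, height = 0;
    std::vector<float> depths;
    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return depths[static_cast<std::size_t>(y) * width + x];
    }
};

// Step 1: average the depths of the samples that actually block the receiver
// inside a search region. Returns false if no blocker was found.
bool averageBlockerDepth(const ShadowMap& sm, int cx, int cy, int searchRadius,
                         float receiverDepth, float* avgBlocker) {
    float sum = 0.0f;
    int count = 0;
    for (int dy = -searchRadius; dy <= searchRadius; ++dy)
        for (int dx = -searchRadius; dx <= searchRadius; ++dx) {
            float d = sm.at(cx + dx, cy + dy);
            if (d < receiverDepth) { sum += d; ++count; }  // nearer the light -> blocker
        }
    if (count == 0) return false;
    *avgBlocker = sum / count;
    return true;
}

// Steps 2 and 3: estimate the penumbra width from the average blocker depth,
// then do plain PCF with a kernel of that size.
float pcssShadow(const ShadowMap& sm, int cx, int cy, float receiverDepth,
                 float lightSizeTexels, int searchRadius) {
    float avgBlocker = 0.0f;
    if (!averageBlockerDepth(sm, cx, cy, searchRadius, receiverDepth, &avgBlocker))
        return 1.0f;  // no blockers -> fully lit

    // Similar-triangles estimate from the paper:
    //   wPenumbra = (dReceiver - dBlocker) * wLight / dBlocker
    float penumbraTexels = (receiverDepth - avgBlocker) * lightSizeTexels / avgBlocker;
    int kernel = std::max(1, static_cast<int>(std::ceil(penumbraTexels)));

    int lit = 0, total = 0;
    for (int dy = -kernel; dy <= kernel; ++dy)
        for (int dx = -kernel; dx <= kernel; ++dx) {
            ++total;
            if (sm.at(cx + dx, cy + dy) >= receiverDepth) ++lit;
        }
    return static_cast<float>(lit) / total;  // 1 = fully lit, 0 = fully shadowed
}

The point of my complaint is the avgBlocker value in step 1: it is computed from whatever the shadow map stores, which is always the occluder nearest the light.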

myers    143
"Each shadow map"? How do you mean? There is only one shadow map.

Do you mean, have one per caster? That stops PCSS being a general solution, and certainly rubbishes Nvidia's claim, in the paper I linked to, that it "seamlessly replaces a traditional shadow map query" and is "independent of scene complexity". If there's no straightforward way around this, PCSS is in fact highly dependent on scene complexity. Are Nvidia just plain wrong?

osmanb    2082
I haven't read the paper, but I think I get the idea from what you're describing. And I don't think it matters. Pretend that I'm a light, and I'm looking at an occluder and a receiver (behind the occluder). To the limits of what we're trying to simulate here, the size of the penumbra is entirely determined by the distance to the closest occluder and the receiver. The fact that there might (or might not) be another 100 objects between those two seems (to me) irrelevant. Put another way, how would you expect the shadows to change due to a second occluder which is, itself, occluded?

myers    143
Quote:
Original post by osmanb
I haven't read the paper, but I think I get the idea from what you're describing. And I don't think it matters. Pretend that I'm a light, and I'm looking at an occluder and a receiver (behind the occluder). To the limits of what we're trying to simulate here, the size of the penumbra is entirely determined by the distance to the closest occluder and the receiver. The fact that there might (or might not) be another 100 objects between those two seems (to me) irrelevant. Put another way, how would you expect the shadows to change due to a second occluder which is, itself, occluded?


I would expect the penumbra to be correct for the nearest occluder to the receiver, not the nearest occluder to the light. If a second occluder is occluded, then to be visually correct, the penumbra should "restart" at that occluder.

Imagine a big tower casting a shadow on a small hut, which is within the penumbral region of the tower's shadow. Any shadows cast by the hut will look very soft, because they're far away from the nearest occluder to the light (the tower) - but they should in fact be hard, because they're near to the hut.
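
To put rough numbers on it (the distances are invented, and I'm using the paper's similar-triangles estimate wPenumbra = (dReceiver - dBlocker) * wLight / dBlocker):

    wLight = 1, tower at depth 10 from the light, ground at depth 100:
        wPenumbra = (100 - 10) * 1 / 10 = 9
    hut roof at depth 99, same ground at depth 100 (what the hut's own shadow should use):
        wPenumbra = (100 - 99) * 1 / 99 ≈ 0.01

So wherever the hut sits inside the tower's shadow-map footprint, its shadow gets filtered with a kernel that is close to three orders of magnitude too wide.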

Here's a visual example - the sun shining on a large rectangle, behind which is a small triangle:



The shadow cast by the triangle should be hard, because the receiver (the ground) is so close. But because the shadow map contains the depth values of the large rectangle (which is far from the receiver), not the triangle, the shadow is soft - until the point at which the occluders are no longer overlapping, and only the triangle is between the light and the ground. At this point, the triangle's shadow suddenly becomes hard again, resulting in a discontinuity. Clearly this is not a minor inaccuracy that we can just live with - it looks pretty terrible, and far worse cases could be constructed.

As I say, Nvidia are so insistent that PCSS maps seamlessly onto a vanilla shadow map implementation that I must, surely, have misunderstood something. But I can't see what!

[Edited by - myers on August 3, 2008 2:09:52 PM]

Matt Aufderheide
The screen shot you've posted is a pathological case, and frankly does look like a "minor inaccuracy". I'm surprised it handles it that well... Your reaction seems oddly extreme, because the benefits of the method appear to outweigh the small problem in the extreme cases.

That said, it's almost inconceivable that any of these soft shadow mapping methods are going to be perfect; all have flaws... PCSS seems to have fewer than VSM and EXPSM.

nullsquared    126
Quote:
Original post by Matt Aufderheide
The screen shot you've posted is a pathological case, and frankly does look like a "minor inaccuracy". I'm surprised it handles it that well... Your reaction seems oddly extreme, because the benefits of the method appear to outweigh the small problem in the extreme cases.

It barely handles these cases 'well'. It fails like this even for small occluders that sit behind each other with a slight offset: the penumbra estimation fails to produce correct (or at least visually pleasing) results.
Quote:

That said, it's almost inconceivable that any of these soft shadow mapping methods are going to be perfect; all have flaws... PCSS seems to have fewer than VSM and EXPSM.


You can't compare PCSS to VSM, since PCSS is a different usage method for PCF, and can in much the same way be applied to VSM (VSMSS).

myers    143
Quote:
Original post by Matt Aufderheide
The screen shot you've posted is a pathological case, and frankly does look like a "minor inaccuracy". I'm surprised it handles it that well... Your reaction seems oddly extreme, because the benefits of the method appear to outweigh the small problem in the extreme cases.


I don't think this is a pathological example at all. In fact, I'm kind of regretting posting it, because it's probably about as minor as the artifact gets and makes the problem look less severe than it actually is. In a scene as simple as that, it's not a big deal, but in an even slightly more complex one it would be.

Consider my example of a hut which falls within the penumbra of a shadow cast by a distant tower. All shadows within that hut will use such a large PCF kernel that they'll be effectively invisible. If you have a detailed room inside that hut, it'll look like none of the objects in it are casting shadows. The same goes for shadows cast by the hut itself: the whole room will be lit as if it was outside, because the shadow cast by the hut's ceiling and walls will be basically invisible. Remove that distant tower, and suddenly the room goes dark.

Or what if the room is half inside the tower's penumbra and half outside? Half of the room will be illuminated by the sun while the other half will be dark.

Or forget the tower, and just imagine an enclosed room, with a light in the street outside. The room is dark because it's enshadowed by its walls and ceiling. Suddenly, someone walks past the building, between the room and the light. Inside the room, a person-shaped "patch" of light moves across the darkness of the room.

It's easy to see the major issues that could result in a city scene, with lots of buildings casting shadows on each other. The only way to reduce it to a "minor inaccuracy" is to make all penumbrae very small, but then the visual benefit of PCSS is almost insignificant.

Quote:
Original post by rozz666
myers, could you add some jitter to your shadows? It should reduce banding.


I suppose it's a matter of taste: I've always thought jittered shadows look too grainy and actually worse than banding, which is pretty unnoticeable when shadows are cast on a high-res textured surface (unlike the extremely low-res texture I used in the pic). Anyway, I think it's much less of an issue (and solvable by brute force) than the PCSS artifacts!

rozz666    896
Quote:
Original post by myers
Quote:
Original post by Matt Aufderheide
The screen shot you've posted is a pathological case, and frankly does look like a "minor inaccuracy". I'm surprised it handles it that well... Your reaction seems oddly extreme, because the benefits of the method appear to outweigh the small problem in the extreme cases.


I don't think this is a pathological example at all. In fact, I'm kind of regretting posting it, because it's probably about as minor as the artifact gets and makes the problem look less severe than it actually is. In a scene as simple as that, it's not a big deal, but in an even slightly more complex one it would be.

Consider my example of a hut which falls within the penumbra of a shadow cast by a distant tower. All shadows within that hut will use such a large PCF kernel that they'll be effectively invisible. If you have a detailed room inside that hut, it'll look like none of the objects in it are casting shadows. The same goes for shadows cast by the hut itself: the whole room will be lit as if it was outside, because the shadow cast by the hut's ceiling and walls will be basically invisible. Remove that distant tower, and suddenly the room goes dark.

Or what if the room is half inside the tower's penumbra and half outside? Half of the room will be illuminated by the sun while the other half will be dark.

Or forget the tower, and just imagine an enclosed room, with a light in the street outside. The room is dark because it's enshadowed by its walls and ceiling. Suddenly, someone walks past the building, between the room and the light. Inside the room, a person-shaped "patch" of light moves across the darkness of the room.

It's easy to see the major issues that could result in a city scene, with lots of buildings casting shadows on each other. The only way to reduce it to a "minor inaccuracy" is to make all penumbrae very small, but then the visual benefit of PCSS is almost insignificant.


Maybe use some kind of feedback. It may be very slow, but you could try it: when you take samples from the shadow map, adjust the kernel size as you go. Start with a size determined by your first sample, and if you then find a sample that is closer, reduce the size accordingly.
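
Roughly something like this - just a sketch, reading "closer" as "closer to the receiver" (which is what makes the kernel shrink), and using the same similar-triangles penumbra estimate as PCSS itself; the sample() callback and the ring-by-ring walk are only there to make "adjust the size as you go" concrete:

#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <functional>

// sample(x, y) returns the shadow-map depth at a texel; how the map is stored
// doesn't matter for the idea.
float adaptiveKernelShadow(const std::function<float(int, int)>& sample,
                           int cx, int cy, float receiverDepth,
                           float lightSizeTexels, int maxRadius) {
    // Kernel radius implied by a blocker at depth d (same similar-triangles
    // estimate as plain PCSS, just re-evaluated for every blocker we meet).
    auto radiusFor = [&](float d) {
        float w = (receiverDepth - d) * lightSizeTexels / d;
        return std::clamp(static_cast<int>(std::ceil(w)), 1, maxRadius);
    };

    int radius = maxRadius;  // until the first blocker, search the full region
    int lit = 0, total = 0;

    // Walk outward ring by ring, so shrinking `radius` part-way simply stops
    // the walk early. Samples already taken outside the final radius stay in
    // the average, which biases the result a little - it's only a sketch.
    for (int r = 0; r <= radius; ++r)
        for (int dy = -r; dy <= r; ++dy)
            for (int dx = -r; dx <= r; ++dx) {
                if (std::max(std::abs(dx), std::abs(dy)) != r) continue;  // this ring only
                float d = sample(cx + dx, cy + dy);
                ++total;
                if (d >= receiverDepth) { ++lit; continue; }  // not a blocker (bias omitted)
                radius = std::min(radius, radiusFor(d));      // feedback: closer blocker -> smaller kernel
            }
    return static_cast<float>(lit) / total;  // total >= 1, since r = 0 always samples once
}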



[Edited by - rozz666 on August 3, 2008 5:59:07 PM]

AndyTX    806
I went over this a bit in my chapter in GPU Gems 3, but indeed you cannot get "correct" soft shadows from a single shadow map. You actually do need the information from different projections across the light's area to do it correctly. And it gets even worse... PCSS generally sparsely samples the filter region which also introduces artifacts. This is just a fact of life... the integral required to correctly compute soft shadows is extremely expensive. Theoretically you could do it properly with a "deep" shadow map that stored multiple layers (depth peeled or similar), but that would almost certainly cut your framerate by some large factor.
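
To make the "deep" variant concrete (my own sketch, not something from a paper): with a handful of depth layers per texel - however they're produced, depth peeling or otherwise - the blocker search can pick, per texel, the occluder nearest the receiver rather than the one nearest the light, which is exactly the "restart the penumbra" behaviour asked for above. Of course it still breaks once more occluders overlap than you have layers.

#include <array>

constexpr int kLayers = 4;  // arbitrary layer count for the sketch

// One texel of a hypothetical layered ("deep") shadow map: up to kLayers
// occluder depths sorted front to back, with 1.0f meaning "no occluder".
using DeepTexel = std::array<float, kLayers>;

// For the penumbra estimate we want the occluder nearest the *receiver* that
// still lies in front of it - not the occluder nearest the light. Returns a
// negative value if nothing in this texel blocks the receiver.
float nearestBlockerToReceiver(const DeepTexel& texel, float receiverDepth) {
    float best = -1.0f;
    for (float d : texel)
        if (d < receiverDepth && d > best) best = d;
    return best;
}

// The blocker-search stage then averages these per-texel values over the
// search region (skipping texels that return a negative value) and feeds the
// result into the usual penumbra estimate.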

That said, PCSS and pretty much every other real-time soft shadows method aims for plausibility, not correctness. Whether or not it is usable for your scene really does depend on your scene.

Quote:

You can't compare PCSS to VSM, since PCSS is a different usage method for PCF, and can in much the same way be applied to VSM (VSMSS).

Yes I was going to mention the same thing. VSM/ESM/CSM/PCF/etc. are all filtering methods. PCSS is a way to choose a filter width, the result of which can be applied using any of those filtering methods. It's interesting to note that you can also more cheaply choose a filter width/average occluder depth in constant time by simply computing mu - sigma in VSM (ideally summed-area for constant time accesses) as discussed in an earlier thread here. You can do a similar thing with CSMs as per a recent paper from the authors of CSM. This is all discussed in depth in this presentation from GDC 2008.
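
For anyone wondering what that mu - sigma trick looks like in practice, here's my reading of it as a sketch (the moments are assumed to come from a summed-area table or mip chain queried over the blocker-search region; the clamping details are mine):

#include <algorithm>
#include <cmath>

// Moments of the depth distribution over the blocker-search region, fetched
// in constant time from a summed-area table or mip chain:
//   m1 = E[z],  m2 = E[z^2]
struct Moments { float m1, m2; };

// Constant-time stand-in for PCSS's blocker search: take mu - sigma as a
// proxy for the average blocker depth, then apply the usual similar-triangles
// penumbra estimate. Returns 0 when there is effectively no blocker.
float estimatePenumbraWidth(Moments m, float receiverDepth, float lightSize) {
    float mu = m.m1;
    float variance = std::max(m.m2 - m.m1 * m.m1, 0.0f);  // clamp numerical noise
    float avgBlocker = mu - std::sqrt(variance);
    if (avgBlocker <= 0.0f || avgBlocker >= receiverDepth)
        return 0.0f;
    return (receiverDepth - avgBlocker) * lightSize / avgBlocker;
}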

myers    143
Quote:
Original post by AndyTX
I went over this a bit in my chapter in GPU Gems 3, but indeed you cannot get "correct" soft shadows from a single shadow map. You actually do need the information from different projections across the light's area to do it correctly. And it gets even worse... PCSS generally sparsely samples the filter region which also introduces artifacts.


Yeah, I think you can see that artifact in the screenshot too (there are "holes" in the overlapping part of the penumbra). But it can always be improved by taking more samples - so if that were the main issue, PCSS would at least have a future. I abandoned VSM for similar reasons, but using more moments would fix a lot of the problems with it. With PCSS, there seems to be no such prospect of technology making it better.

Quote:
This is just a fact of life... the integral required to correctly compute soft shadows is extremely expensive. Theoretically you could do it properly with a "deep" shadow map that stored multiple layers (depth peeled or similar), but that would almost certainly cut your framerate by some large factor.


I considered this upthread, but expense aside, we'd still have some limit of overlapping occluders, above which the artifact would return.

Quote:
That said, PCSS and pretty much every other real-time soft shadows method aims for plausibility, not correctness. Whether or not it is usable for your scene really does depend on your scene.


Unless you use very small penumbrae, the technique seems unviable for anything where overlapping occluders can be a significant distance apart: which is to say, just about any indoor environments, and pretty much all outdoor ones too. It'd work quite well for a bunch of characters standing around with very little scene geometry, I guess - but for anything more than that, nah.

It's strange that Nvidia describe PCSS as "independent of scene complexity" - it just isn't. But oh well. I've come to terms with the fact that I can't really use it. At least dropping it will speed up my pixel shaders.

coderchris    304
Just thought I would point this recent paper out:
http://www.mpi-inf.mpg.de/~dong/SIG08_CSSM.html

I don't know nearly enough math to comprehend how they are doing it, but it seems as though they have done a good job hiding most of the inaccuracies (including the cases you are talking about).

What do you guys think?

wolf    852
Quote:
As I say, Nvidia are so insistent that PCSS maps seamlessly onto a vanilla shadow map implementation that I must, surely, have misunderstood something. But I can't see what!
You are talking about a company that sells graphics hardware. It is nice that their dev team tries to help game developers get started on their hardware, but you can't expect any game development wisdom from them ... they are not game developers. If you want to know if anything works you have to implement it in a game and see how it works ... usually you will find out that there are a gazillion things you haven't thought about before that suddenly start showing up.
My main advice for colleagues is: never trust anything that did not ship in a game. If you have the choice between a simple but efficient technique that has shipped in a game and a different technique, always pick the first one. At least you have something running that works in a game. You can still get back to more complex stuff later.

That being said: it is well known that this does not work well ... nevertheless a lot of people can live with it. Games have shipped with much rougher approximations :-)
We are running filter kernels in screen-space :-) ... how is that?

What I did for shadows is hit the hardware until you find a sweet spot ... in my case it was on a 360 and a PS3 :-)

osmanb    2082
Okay, I understand what you're talking about now, but I still think it's not as bad as you claim. Remember that "softer" does not equal "lighter". Consider your example scenarios (like a walled room with a light outside). Just because something gets in the way of the light outside doesn't automatically make the shadows inside lighter; it just makes them softer. But softer just means a larger filter kernel. If the room is entirely walled off, then ALL of the samples (even in the larger kernel) are STILL going to be in shadow, so it isn't actually going to affect the shadow at all. There are scenarios where the two concepts will intermix and create artifacts, but I think those are much less common than you're assuming.

wolf    852
I read the paper AndyTX was referring to. It shows why most of the approaches so far have failed. If you go through the whole paper you can see that nothing works fully ... the last technique proposed by the author is much too expensive for real-world usage.
Other than that, it is an interesting idea.

That being said: with enough resolution you can get over most of those challenges easily. Of all the approaches that won't work in a game at the moment, I like the Virtual Shadow Map approaches in the ShaderX4 - 6 books quite a lot. If there were hardware powerful enough to run those, many problems would be solved :-)
I also like Marco Salvi's approach because it is cheap enough to be used in a game ... having only 4 or at most 8 texture fetches available in a game is quite a limiting factor :-) ... most low-end cards can't even do that (the ones with 128-bit or 64-bit buses).

Matt Aufderheide
Quote:
Original post by agi_shi
You can't compare PCSS to VSM, since PCSS is a different usage method for PCF, and can in much the same way be applied to VSM (VSMSS).


I'm not comparing the specific algorithms but the fact that all soft shadowing methods using a single shadow map involve some kind of approximations based on filtering (VSM uses bilinear filtering and pre-blurring, PCSS uses multiple samples and a convolution kernel), which is inherently fake. You have to hope that it looks pretty good in most cases, not all cases. People seem to expect too much from image-based methods.

The subject of the original post was "PCSS - unusable?" ... I think the answer is obviously "no". It's already been used in a commercial game (Hellgate: London), although I admit the shadowing in that game is not very demanding.


AndyTX    806
Quote:
Original post by myers
But it can always be improved by taking more samples - so if that were the main issue, PCSS would at least have a future.

Well like I said, accept the fact that a single z-buffer does not have enough information to do correct soft shadows :) Thus you're never going to be able to build a robust, perfect soft shadows method on a single shadow map.

Quote:
Original post by myers
I abandoned VSM for similar reasons, but using more moments would fix a lot of the problems with it.

As an aside, check out the Layered VSM paper (and Exponential VSM in that paper) for a way to generalize and expand VSM to improve quality at the cost of storage. But there again as I described in my GPU Gems 3 chapter, you'll always run into the fact that visibility is a step function and thus you can always construct cases where *any* "optimized" algorithm that does insufficient sampling will fail.

Quote:
Original post by myers
I considered this upthread, but expense aside, we'd still have some limit of overlapping occluders, above which the artifact would return.

Yup, but it's analogous to building a projected uniform grid accelerator over your scene using the rasterizer and querying it with shadow rays on the fly - i.e. it'll be at least as fast/good as soft shadows via ray tracing :)

Quote:

Unless you use very small penumbrae, the technique seems unviable for anything where overlapping occluders can be a significant distance apart: which is to say, just about any indoor environments, and pretty much all outdoor ones too.

I agree with you, but people's needs are very different. Hell, HL2 got away with projected shadows for a long, long time... people are *still* using screen-space blurred shadows, which absolutely blows my mind... etc. etc.

Quote:
Original post by myers
It's strange that Nvidia describe PCSS as "independent of scene complexity" - it just isn't.

They mean that the *performance* is independent of scene complexity since it is an image space algorithm. Certainly the quality is not, but that's true of most image-space algorithms too.

Quote:
Original post by Matt Aufderheide
I'm not comparing the specific algorithms but the fact that all soft shadowing methods using a single shadow map involve some kind of approximations based on filtering (VSM uses bilinear filtering and pre-blurring, PCSS uses multiple samples and a convolution kernel), which is inherently fake.

Yes agreed, but VSM is not a soft shadows algorithm (nor is PCF)! That's the only point I/we were making. PCSS + VSM is a soft shadows algorithm, but VSM by itself deals with *filtering* which is a very different beast.

Quote:
Original post by wolf
If you want to know if anything works you have to implement it in a game and see how it works ... usually you will find out that there are a gazillion things you haven't thought about before that suddenly start showing up.

Sure, but conversely it's baffling how many nonsensical shaders and "algorithms" are hacked into games that work in pretty much one case... just realize the train goes both ways :)

Quote:
Original post by wolf
My main advice for colleagues is: never trust anything that did not ship in a game.

Well, that's not entirely fair to PCSS... it has been used in several games, and NVIDIA did not release that white paper until Hellgate: London (which uses it) had shipped. This is the case even though they've been sitting on the idea for ages.

Quote:
Original post by wolf
That being said: it is well known that this does not work well ... nevertheless a lot of people can live with it. Games have shipped with much rougher approximations :-)
We are running filter kernels in screen-space :-) ... how is that?

Ahaha, okay we're definitely on the same page here - I agree :)

Quote:
Original post by coderchris
Just thought I would point this recent paper out:
http://www.mpi-inf.mpg.de/~dong/SIG08_CSSM.html

That's precisely the paper that I mentioned above, and a very similar technique was actually covered slightly earlier in the presentation that I linked above. It will have similar artifacts to PCSS, as will any image-space, single-shadow-map soft-shadows algorithm.

But in any case, the end result is YES - PCSS and similar are very wrong and have some significant artifacts in some/many scenes. That said, there are very few alternatives within several orders of magnitude of performance... Thus you have to decide how much you need plausible soft shadows.

Matt Aufderheide
Quote:
Original post by AndyTX
Yes agreed, but VSM is not a soft shadows algorithm (nor is PCF)! That's the only point I/we were making. PCSS + VSM is a soft shadows algorithm, but VSM by itself deals with *filtering* which is a very different beast.


A filtering algorithm and a penumbra approximation via PCF are certainly inherently different... but I don't think it's wrong to term them all "soft shadow" methods, as opposed to vanilla shadow mapping or stencil buffer methods... "soft" just implies non-hard edges, that's all. All soft shadows are meant to somehow simulate a penumbra with more or less physical accuracy...

This includes screen-space methods as well, so I can't see how there is any less viability there... screen space is a remarkably efficient place to do filtering. A rasterizer is all a big fake anyway.


hplus0603    11347
I think that, to get good results, you'll have to pre-calculate "distance to occluder" for different directions for each point. Such a function could be stored per pixel, perhaps as a second-order spherical harmonic. Then the value of that function in the direction of the light, for the texel being shadowed (or not), can be used to drive the shadow softness - and, in fact, a lot of other things, like ambient occlusion and even very simple indirect lighting.
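
As a rough illustration of the precomputation side (my own sketch - actually measuring the occluder distance per direction, e.g. by offline ray casting, is the scene-dependent part I'm waving away): project the per-direction occluder distances into a second-order real SH basis offline, then evaluate the SH towards the light at run time and let that distance drive the filter width.

#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

using Sh9 = std::array<float, 9>;

// Real spherical harmonics basis up to l = 2 (9 coefficients) for a unit
// direction (x, y, z), using the standard constants.
Sh9 shBasis(float x, float y, float z) {
    return {
        0.282095f,
        0.488603f * y, 0.488603f * z, 0.488603f * x,
        1.092548f * x * y, 1.092548f * y * z,
        0.315392f * (3.0f * z * z - 1.0f),
        1.092548f * x * z,
        0.546274f * (x * x - y * y),
    };
}

// One precomputation sample: a unit direction and the distance to the nearest
// occluder along it (measured offline, e.g. by ray casting).
struct DirSample { float x, y, z, occluderDistance; };

// Offline, per texel/point: Monte Carlo projection of the occluder-distance
// function onto the SH basis, assuming uniformly distributed directions.
Sh9 projectOccluderDistance(const std::vector<DirSample>& samples) {
    Sh9 coeffs{};  // zero-initialized
    if (samples.empty()) return coeffs;
    for (const DirSample& s : samples) {
        Sh9 b = shBasis(s.x, s.y, s.z);
        for (int i = 0; i < 9; ++i) coeffs[i] += s.occluderDistance * b[i];
    }
    const float fourPiOverN = 4.0f * 3.14159265f / static_cast<float>(samples.size());
    for (float& c : coeffs) c *= fourPiOverN;  // uniform-sphere Monte Carlo weight
    return coeffs;
}

// At run time: reconstruct the occluder distance towards the light and use it
// to pick the shadow filter width (larger distance -> softer shadow).
float occluderDistanceTowards(const Sh9& coeffs, float lx, float ly, float lz) {
    Sh9 b = shBasis(lx, ly, lz);
    float d = 0.0f;
    for (int i = 0; i < 9; ++i) d += coeffs[i] * b[i];
    return std::max(d, 0.0f);  // second-order SH can ring slightly negative
}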

Unfortunately, that approach dies when you want objects that move (although it works for moving lights and moving viewers).

AndyTX    806
Quote:
Original post by Matt Aufderheide
"soft" just implies non-hard-edges that's all. All soft-shadows are meant to somehow simulate a penumbra with more or less physical accuracy...

That's entirely my point - VSM is not primarily targeted at softening edges at all! Being able to cheaply blur the shadow map is a useful side-effect, but the main goal is to be able to properly mipmap and aniso filter shadow maps.
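
To spell out the distinction with a minimal sketch of the standard VSM math (nothing new here): the map stores (z, z^2), those moments can be blurred, mipmapped and anisotropically filtered like any ordinary texture, and visibility is reconstructed afterwards from the filtered moments with a Chebyshev bound.

#include <algorithm>

// Visibility from linearly filtered VSM moments (m1 = E[z], m2 = E[z^2]).
// Because the moments filter linearly, the texture can be blurred, mipmapped
// or anisotropically filtered first; this reconstruction runs afterwards.
float vsmVisibility(float m1, float m2, float receiverDepth) {
    if (receiverDepth <= m1) return 1.0f;            // in front of the mean depth: lit
    float variance = std::max(m2 - m1 * m1, 1e-6f);  // clamp to avoid divide-by-zero
    float d = receiverDepth - m1;
    return variance / (variance + d * d);            // one-sided Chebyshev upper bound
}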

Quote:
Original post by Matt Aufderheide
screen space is a remarkably efficient place to do filtering.

No, not at all! Screen space is a convenient place to do blurring and other such fun, but *not* filtering. Indeed you *can't* do filtering in screen space, which is why I always bring up the point that VSM is a shadow filtering method, not an edge-softening/blurring whatever... it is not comparable in the slightest to screen-space blurring.

Quote:
Original post by Matt Aufderheide
A rasterizer is all a big fake anyway.

Nonsense - it produces 100% physically accurate results for a specific set of ray queries. It's just an efficient way of building a "perfect" projected uniform grid accelerator for primary ray queries. In terms of shadows it's a bit more of a "hack" in that you're querying a lossy accelerator in typical implementations, but "deep" shadow maps or irregular rasterization can solve that. There's nothing inherently wrong with rasterization - indeed it is particularly useful for primary and shadow rays. But sometimes you need more complicated queries, which just means you often need more complicated data structures and query mechanisms.

