issue with PSVSM


Hi, I have implemented PSVSM (parallel-split + variance shadow maps). I have one little issue: some shadow corruption occurring depending on the position of the camera. What is strange is that the issue doesn't appear if I don't blur the shadow map. Have a look at these two screenshots; the problem appears in the foreground of the second one (blur kernel width = 6): (screenshots: no blur / blur). I am using the same blur shader that the VSM author published in GPU Gems. I have also checked my sampler states (tried clamp, wrap, mirror) but it didn't have any influence. Please note that the projection matrix is expanded in order to blur correctly. So I am a bit lost; any feedback would really help me.
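For reference, my blur pass is just a separable box filter over the two moments, along these lines (only a sketch, the texture/constant names are made up, not the exact GPU Gems code):

Texture2D    g_txMoments;        // RG: depth and depth*depth
SamplerState g_samPointClamp;    // point sampling, clamp addressing

cbuffer BlurConstants
{
    float2 g_vTexelStep;         // (1/width, 0) for the horizontal pass, (0, 1/height) for the vertical pass
    int    g_iKernelWidth;       // e.g. 6
};

float4 BoxBlurPS(float4 vPos : SV_Position, float2 vTexCoord : TEXCOORD0) : SV_Target
{
    float2 vSum = 0.0f;
    // Average g_iKernelWidth samples centred on the current texel
    for (int i = 0; i < g_iKernelWidth; ++i)
    {
        float fOffset = i - 0.5f * (g_iKernelWidth - 1);
        vSum += g_txMoments.SampleLevel(g_samPointClamp, vTexCoord + fOffset * g_vTexelStep, 0).xy;
    }
    return float4(vSum / g_iKernelWidth, 0.0f, 0.0f);
}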

I am scaling the blur filter kernel size according to the area covered by each tile.

However, the problem occurs in a single slice. See the colors, red being the first tile, green the second, blue the third, etc...

See my response @ B3D, but there's definitely something odd going on in the last shot: the unblurred part of the shadow right near the viewer looks to be at a much higher resolution than the blurred part, which seems to contradict your "cascade" visualization colours.

Are the visualizations at the top the shadow maps themselves? I don't see the plane in them anywhere...

Hi,

First, thanks for taking some time to answer my thread. Andy, I am going to continue on gamedev, as it seems there are more people here (and the guy I was secretly trying to get involved answered on both fora ;))

OK, first, wolf: by expanding my projection matrix I mean expanding the clip region covered, in order to get correct blurring at the edges, as suggested by Andy in his chapter.

Then, the colors indicate which split is currently used, red being the first one. So the problem is not related to any slice boundary change.

I think I have found the problem. It is due to the fact that my Z-far was set using the frustum points only; I thought I didn't need to render objects that are neither in the view frustum nor between the light and the frustum. However, this is why the blur fails: the rendered area is being blurred with a part of the shadow map that hasn't been rendered into (I guess).



See the drawing above. Initially I was setting the Z-Far plane using the frustum points only.

I use a directional light. What I do now to create the matrices (a rough sketch follows the list):
- first, put the light position at the frustum center
- with all frustum points and all caster bounding boxes that are visible (inside the frustum, or found with a sweep test along the light direction), project them onto the light axis and get min and max Zlight
- move the light backward (using the ZMinLight computed in the previous step) so that all casters will be rendered
- then compute the projection matrix and crop matrix
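In code, the second step boils down to something like this (just a sketch with made-up names, not my actual source):

// Sketch: find the light-space depth range covered by the frustum corners
// and by the bounding boxes of the potential shadow casters.
void ComputeLightZRange(float3 vLightDir,        // normalized light direction
                        float3 avPoints[64],     // frustum corners + caster AABB corners
                        uint   uNumPoints,
                        out float fZMinLight,
                        out float fZMaxLight)
{
    fZMinLight =  1e30f;
    fZMaxLight = -1e30f;
    for (uint i = 0; i < uNumPoints; ++i)
    {
        // Signed distance of the point along the light axis
        float fZ = dot(avPoints[i], vLightDir);
        fZMinLight = min(fZMinLight, fZ);
        fZMaxLight = max(fZMaxLight, fZ);
    }
}

The light position is then moved backward along the light direction using fZMinLight, so every caster ends up in front of the light's near plane.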

In the first method (the one that showed the issue), I was using far plane number 1 in the drawing. So some regions (near the near plane of the camera frustum in the drawing) were blurred with (0,0) texels that had never been rendered to, producing a wrong result. Is this theory possible?

However, in my configuration, where I have one big low-res terrain mesh, I get huge ZFar and ZNear values (when computing the dot product between the bounding box corners and the light position/direction). In the example, I would get plane 2 as Z-far (in my scene, ZFar can be 44000.0 with this method; same problem with ZNear).

Right now I am clamping my values to some constant, to reduce the ZFar-Znear range. Otherwise precision would be very bad. However this is a bit of a hack.

A solution would be to decompose the big mesh into smaller chunks.

Do you see any other solution available ?



Another problem I now have is this. Look at the screenshot below: I am using SH to render the scene and get some issues at the shadow boundaries, as I don't use NdotL but rather an SH representation of the sky. Using NdotL would result in a scene that is too dark...

See the red arrows (only first texture slice is debug-displayed):




First, I don't understand why I get a hard shadow on the tail of the aircraft. I know that theoretically the tail should be all dark, because the sun is on the other side. But because of the SH lighting I have no way of differentiating ambient and diffuse lighting; I simply compute the SH light factor and scale it according to the shadow result.
If I scale using NdotL, the shadows are nice, but the scene gets too dark (because the ground's NdotL is nearly 0, so the ground goes black). Is there a solution to my problem?

Thanks a lot guys for your help, I appreciate it. And to be honest I am quite happy to get help from 2 famous guys ;)

Once I have solved this, I have 1-2 other questions for you Andy regarding the article... going to bother you ;)

PS: if you want to see some source code just ask..

[Edited by - gjaegy on February 8, 2008 3:01:13 AM]

Quote:
Original post by gjaegy
In the first method (the one that showed the issue), I was using far plane number 1 in the drawing. So some regions (near the near plane of the camera frustum in the drawing) were blurred with (0,0) texels that had never been rendered to, producing a wrong result. Is this theory possible?

Yeah that's definitely possible. Often it works to simply clear your shadow map to (1, 1) or similar (or even higher if you're using a fp32 shadow buffer) before rendering so that pixels that are "not rendered to" are considered to be at the far plane. You may still have some trouble if you're aggressively clamping the depth range near the camera: the problem being that the "support" of the pixels onscreen covers a *range* in the shadow map, particularly when you blur. The same is actually true of mipmapping but those issues will be harder to notice.

Thus what you want to try to ensure is that everything "around" each of the shadow map pixels that you may sample is rendered and valid in the shadow map. That's potentially easier said than done, but I suspect you can play with a reasonable heuristic to get this to work in the vast majority of cases. If you put a bound on the "z" component of the face normal of any surfaces rendered into the shadow map you can determine precisely how far to extend the depth range, but that may be overkill.

Probably a "fuzzy boundary" where you just increase the depth ranges near the camera, etc. by some % would be sufficient for your purposes.

Quote:
Original post by gjaegy
If I scale using NDotL, the shadows are nice, but the scene gets too dark (because ground NDotL is nearly 0, so ground gets black). Is there a solution to my problem ?

Yeah shadows/shadow maps don't really make much sense unless you're using NdotL or similar in addition to the shadow attenuation. I'd suggest that if you want to use them together treat the shadow casting light (the "sun") as a standard directional light with NdotL and shadow attenuation, and then treat the SH part as an ambient term that you *add* to that lighting. You can play with the multipliers appropriately to get the desired effect. Note that this will brighten your shadows a bit (due to the ambient term) but that's fairly realistic to what SH is trying to model anyways.
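In shader terms that combination might look roughly like this (just a sketch; the SH evaluation and all names are placeholders, not code from the chapter):

// Sketch: shadowed directional "sun" term plus an unshadowed SH ambient term.
cbuffer Lighting
{
    float3 g_vSunDir;       // normalized, pointing from the sun towards the scene
    float3 g_vSunColor;
    float4 g_vSkySH[3];     // first 4 SH coefficients per colour channel (R, G, B)
};

float3 EvaluateSkySH(float3 vNormal)
{
    // Linear (first-order) SH evaluation: one dot product per colour channel
    float4 vBasis = float4(0.2821f,
                           0.4886f * vNormal.y,
                           0.4886f * vNormal.z,
                           0.4886f * vNormal.x);
    return float3(dot(g_vSkySH[0], vBasis),
                  dot(g_vSkySH[1], vBasis),
                  dot(g_vSkySH[2], vBasis));
}

float3 ShadeSurface(float3 vAlbedo, float3 vNormal, float fShadow)
{
    float  fNdotL   = saturate(dot(vNormal, -g_vSunDir));
    float3 vDirect  = g_vSunColor * fNdotL * fShadow;   // the shadow only attenuates the sun
    float3 vAmbient = EvaluateSkySH(vNormal);            // SH sky term, added on top
    return vAlbedo * (vDirect + vAmbient);
}

Scaling g_vSunColor up or down against the SH term is where you'd play with the multipliers.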

Quote:
Original post by gjaegy
Once I have solved this, I have 1-2 other questions for you Andy regarding the article... going to bother you ;)

No problem - feel free to post here or send me an e-mail. I'll get back to you as soon as I can.

Hi Andy,

Thanks a lot for your answer. I am on my way to solving the blur issue properly.

Concerning the NdotL issue, I am not sure about modeling the sunlight as a simple directional light; this would remove any indirect lighting effect produced by light bounces.
An option would be to create two SH light sources, splitting the original one into two (ambient and diffuse). This would keep the benefit of SH concerning indirect lighting, but would require some additional VS instructions ;(
I will give it a try.

One question on another subject, as to be honest I am not sure about this: what happens if fDepth is < 0? From what I have seen it seems to work, but as your mathematical background is better than mine, your point of view would interest me. I could of course avoid this situation, but if negative distances work, the source code remains simpler...

Another question: in the book, the following snippet, found on the CD, is different. To be honest I know it works, but as I am a bit curious, I would like to understand why you commented out the last part of the function, which I didn't fully understand:


float2 ComputeMoments(float Depth)
{
// Compute first few moments of depth
float2 Moments;
Moments.x = Depth;
Moments.y = Depth * Depth;

// Adjust the variance distribution to include the whole pixel if requested
// NOTE: Disabled right now as a min variance clamp takes care of all problems
// and doesn't risk screwy hardware derivatives.
//float dx = ddx(Depth);
//float dy = ddy(Depth);
//float Delta = 0.25 * (dx*dx + dy*dy);
// Perhaps clamp maximum Delta here
//Moments.y += Delta;

return Moments;
}


And I forgot to say: your algo simply rocks. I now use a 4 x 1024 x 1024 texture array, using geometry instancing and the GS to render to the 4 slices in one pass, without any warping technique. And the result is much more realistic and actually better-looking than with my previous method (CSM with 4 slices, all packed into one big 4k x 4k shadow map texture, using a modified TSM warping method). The old method looked nice, but the texture needed to be 4x bigger (2x2) to get a decent result (for a 2000m / 3000m shadowed area)!
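The single-pass rendering into the texture array looks roughly like this (a sketch with made-up names, not my exact shaders): each object is drawn with 4 instances, the instance index selects the split, and a pass-through geometry shader routes the triangle to the matching array slice.

cbuffer PerFrame
{
    float4x4 g_mCropViewProj[4];   // per-split cropped light view-projection matrices
};

struct VS_IN  { float3 vPos : POSITION; uint uInstance : SV_InstanceID; };
struct VS_OUT { float4 vPos : SV_Position; uint uSlice : SLICE; };
struct GS_OUT { float4 vPos : SV_Position; uint uSlice : SV_RenderTargetArrayIndex; };

VS_OUT ShadowVS(VS_IN input)
{
    VS_OUT output;
    output.uSlice = input.uInstance;    // one instance per split
    output.vPos   = mul(float4(input.vPos, 1.0f), g_mCropViewProj[input.uInstance]);
    return output;
}

[maxvertexcount(3)]
void ShadowGS(triangle VS_OUT input[3], inout TriangleStream<GS_OUT> stream)
{
    // The GS only forwards the triangle, tagging it with the destination slice
    [unroll]
    for (int v = 0; v < 3; ++v)
    {
        GS_OUT output;
        output.vPos   = input[v].vPos;
        output.uSlice = input[v].uSlice;
        stream.Append(output);
    }
    stream.RestartStrip();
}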

Quote:
Original post by gjaegy
One question on another subject, as to be honest I am not sure about this: what happens if fDepth is < 0?

It shouldn't be a problem as the mean (Moment 0) that you read back from the VSM will always be >= 0. Thus your fDepth <= Mean will always be true and the shadow attenuation will always be 1 (i.e. fully lit). Optionally you can clamp fDepth to [0, 1] but it doesn't really matter either way.

Quote:
Original post by gjaegy
Another question I have, in the book the following snippet, found on the CD, is different.

Yeah that's an interesting section of code and related to the discussion about biasing and VSMs. Basically the concept is that unlike with standard shadow maps we can actually represent the fact that a pixel in a shadow map is not actually a single, discrete depth but rather covers a "range" of depths over the relevant portion in the triangle. That formula there uses the derivative instructions to approximate this range of depths and adjust the "pixel variance" accordingly.

The problem is that the hardware-provided derivatives are sometimes a bit flaky as they are achieved simply by forward differencing. Optionally one can use the surface normal to compute the pixel depth coverage but that requires passing it into the shaders while rendering the shadow map.

The larger problem is that representing the range of a pixel depth works well when the triangle covers the whole pixel, but breaks down a bit at triangle edges. At these edges - when the triangle is almost aligned with the light rays - it's possible for the derivatives to get extremely high, even though the triangle doesn't actually span that whole range (it might if it didn't stop). There's no way to know this with a typical rasterizer though, so in practice you just have to try and clamp out these huge values. Otherwise, you can get some weird light splotches "randomly" appearing in shadowed regions depending on the participating geometry.

The final nail in the coffin for that section of code is that simply clamping the minimum variance before computing Chebyshev's Inequality effectively eliminates all biasing artifacts in practice. In the really extreme cases of long polygons parallel to the light rays, NdotL ~ 0 anyways, so a little bit of improper shadow "darkening" is going to be totally unnoticeable.

Thus it's a mathematically interesting result, but in practice I recommend just clamping the minimum variance.
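For completeness, the min variance clamp is just one extra max() before evaluating the inequality, roughly like this (a paraphrased sketch, not copied verbatim from the CD):

float ChebyshevUpperBound(float2 Moments, float t, float MinVariance)
{
    // One-tailed inequality is only valid if t > Moments.x (otherwise fully lit)
    float p = (t <= Moments.x);
    // Compute the variance and clamp it to a minimum to remove biasing artifacts
    float Variance = Moments.y - (Moments.x * Moments.x);
    Variance = max(Variance, MinVariance);
    // Probabilistic upper bound on how much of the filter region is lit
    float d    = t - Moments.x;
    float pMax = Variance / (Variance + d * d);
    return max(p, pMax);
}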

Quote:
Original post by gjaegy
And I forgot to say: your algo simply rocks. I now use a 4 x 1024 x 1024 texture array, using geometry instancing and the GS to render to the 4 slices in one pass, without any warping technique. And the result is much more realistic and actually better-looking than with my previous method (CSM with 4 slices, all packed into one big 4k x 4k shadow map texture, using a modified TSM warping method). The old method looked nice, but the texture needed to be 4x bigger (2x2) to get a decent result (for a 2000m / 3000m shadowed area)!

Awesome I'm glad it's working well for you :) For really kick-ass shadows, try enabling MSAA when rendering the shadow map if you aren't already. Then you effectively get a shadow map of 4x the resolution (with 4x MSAA) with very little additional cost :)

Hi Andy,

Sorry for this late answer, I have been quite busy with other things...

I finally managed to make the shadows look good. I tried to split the SH light into two lights (ambient and diffuse), but the result wasn't convincing.

The solution I used is to multiply the shadow value by pow(max(0.0, -fNDotL), 0.3f), which gives a nice result without darkening the scene too much (using plain NdotL makes my terrain too dark at dusk and dawn).
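In the shader this is simply (a sketch with made-up names; in my convention fNDotL is negative on surfaces facing the sun, hence the minus sign):

float ComputeShadowFactor(float fVSMShadow, float fNDotL)
{
    // Soften the NdotL term with a small exponent so grazing-angle ground
    // doesn't go completely black, then use it to scale the VSM result
    float fFacing = pow(max(0.0f, -fNDotL), 0.3f);
    return fVSMShadow * fFacing;
}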

Anyway, I have another question regarding the minimum variance value (g_VSMMinVariance). It seems that when I increase the depth range (light Z min / Z max) I need to decrease this value so that the shadow doesn't get too bright. I wonder, what happens visually if this value is too low (i.e. the clamping is not aggressive enough)? You mention "improper darkening"; do you mean some areas would be shadowed where they shouldn't be? Could that cause the shadow to "flicker" at the edges, or is that just because my shadow texture resolution is low (I had that impression when I decreased the min variance)?

Other question, have you tried SAT in combination with PSVSM ?

Finally, as I am curious, what will be the next improvement to VSM ?

Thanks !

[Edited by - gjaegy on February 18, 2008 4:09:20 AM]

Quote:
Original post by gjaegy
It seems that when I increase the depth range (light Z min / Z max) I need to decrease this value so that the shadow doesn't get too bright. I wonder, what happens visually if this value is too low (i.e. clamping is not aggressive enough)?

The min variance clamp only prevents improper shadow acne, etc. so if you're not getting that, decrease it! You can actually compute it so that it's independent of the depth range as well, but usually you don't need to mess with that much.

Quote:
Original post by gjaegy
Other question, have you tried SAT in combination with PSVSM ?

No, but there's no reason why it wouldn't work.

Quote:
Original post by gjaegy
Finally, as I am curious, what will be the next improvement to VSM ?

I just had a Layered VSM poster @ I3D 2008 two days ago and a paper on the way on that topic. I've also been doing some cool work with a colleague on some really exciting stuff, but that's pretty preliminary right now. Stay tuned :)


OK, thanks for all this useful information.

Any link available for this new Layered VSM? I couldn't find any... Not sure what it could be! Looking forward to reading it!

For information, I managed to use instancing in order to render to all the slices with a single draw call per object. It is kind of a mix between chapter 3 and chapter 1 of GPU Gems ;) It seems to be very efficient and avoids the additional render passes. However, I didn't use their cube map approach as I don't need the HW depth compare. You could update your demo with this as well ;)

Thanks for your very useful help, and keep on finding amazing new techniques ;)

Quote:
Original post by gjaegy
Any link available for this new Layered VSM ? I couldn't find any... Not sure what this could be ! Looking forward to read it !

Not yet, but stay tuned :)

Quote:
Original post by gjaegy
You could update your demo with this as well ;)

Hehe yeah, as I mentioned my PSVSM implementation is more of a "proof of concept" than completely optimal or even partially optimal ;) Definitely there's room for improvement, so thanks for letting me know about that one!

Uh oh...

Andy I have a big problem :( It seems the light bleeding issue I get is crazy.

Please have a look at the following screenshots. The first two use LBRAmount=0.3. In order to reduce the light bleeding I need to set LBRAmount=0.95 (last screenshot), but in that case the shadows are nearly hard shadows!

I am using a blur filter width of 6.

Why is the effect so intense in my scene ? What can I do ? Is there a solution ?

(screenshots: two with LBRAmount=0.3, one with LBRAmount=0.95)
Thanks !!
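For reference, the light bleeding reduction I am using is the simple remap applied to the Chebyshev result, something like this (a sketch; LBRAmount is my constant name):

float LinStep(float fMin, float fMax, float fValue)
{
    return saturate((fValue - fMin) / (fMax - fMin));
}

// Everything below LBRAmount is clamped to full shadow, the rest is rescaled to [0, 1]
float ReduceLightBleeding(float fPMax, float fLBRAmount)
{
    return LinStep(fLBRAmount, 1.0f, fPMax);
}

So with LBRAmount=0.95 nearly the whole penumbra gets clipped away, which is why the result looks like hard shadows.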

Here is one solution (well, I think there are more ways to solve it): summed-area variance shadow maps. Look at the NVIDIA developer site, where there is a presentation about it... You could try Google too ;)
http://developer.nvidia.com/object/all_documents.html (presented at ACM SIGGRAPH 2007)

And here's the demo http://forum.beyond3d.com/showthread.php?t=38165

Hi Vilem,

I have the book, but the part of the article that concerns Summed-Area VSM doesn't mention any improvement regarding the light bleeding problem, does it?

Also, if you try the demo, the light bleeding issue is still there when switching to SAVSM...

Have you tried the demo at the link I posted? There really is some kind of light bleeding solution (but there's no source). I haven't got the book (I'm thinking about buying it).

What about this: light bleeding never reaches full scale (that is, 1.0), so what about testing whether the surrounding pixels reach 1.0? If they do, apply no light bleeding reduction, because we're in a real shadow penumbra; but if the surrounding pixels don't reach full scale, then it is light bleeding ;). This should work, but it has a huge disadvantage: performance, since you have to test a surrounding area as big as the VSM filter (though you only need to test pixels whose value is less than 1.0 and greater than 0.0). I haven't tested it, so it just SHOULD work; it's just my crazy idea...

I would like to share the solution I found to resolve my issue.

First of all, I noticed this light bleeding was only visible where shadows cast by aircraft and shadows cast by buildings/trees meet. There is no visible artifact where shadows cast by buildings meet each other. This is because the difference between the aircraft's Z and the buildings' Z is high, so when blurred, the artifacts get worse.

This is because the aircraft are quite high in the sky. Then I re-read the part of the article that concerns light bleeding, and this time tried to understand the math (a little bit ;). The article says that the light bleeding issue is proportional to Δa/Δb (figure 8-7).

And this is exactly what happens in my case ! Well done Sherlock (hmm, actually everything was written in the article, so it wasn't a big victory ;)

Anyway, the only solution I can see to decrease the light bleeding is to reduce the Zlight difference between the aircraft and the buildings/ground.

In my case, aircraft casting their shadow on the ground are often out of the frustum (not visible in the main view). This is why I get a high Zlight range.

I came up with a solution that involves Zlight value remapping, as shown on the drawing below:



It seems that I can now use a reasonable light bleeding reduction amount (0.3) to solve all the problems.
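To give an idea of the remapping (purely illustrative numbers, not my real values): the mostly-empty depth band between the high casters (aircraft) and the ground gets compressed, so the stored Zlight differences stay small.

float RemapLightDepth(float fZ)   // fZ = normalized light-space depth in [0, 1]
{
    const float fGapStart = 0.1f; // hypothetical: below this, only aircraft
    const float fGapEnd   = 0.9f; // hypothetical: above this, buildings/ground
    if (fZ < fGapStart)                                                  // aircraft band expanded
        return fZ * (0.45f / fGapStart);
    if (fZ > fGapEnd)                                                    // ground band expanded
        return 0.55f + (fZ - fGapEnd) * (0.45f / (1.0f - fGapEnd));
    return 0.45f + (fZ - fGapStart) * (0.10f / (fGapEnd - fGapStart));   // empty gap squashed
}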

Maybe this will help somebody.

Anyway, thanks to everybody for your help. And I would be happy to get any feedback/suggestion !

Hmmm, I think I spoke too quickly.

I still have this light bleeding problem. The only way of solving it is to set the light bleeding reduction amount to a very high number (0.8), which reduces the quality of the shadow transition...

Any idea ?

thanks !

