gjaegy

issue with PSVSM


Hi, I have implemented PSVSM (parallel-split + variance shadow maps). I have one small issue: some shadow corruption occurs depending on the position of the camera. What is strange is that the issue doesn't appear if I don't blur the shadow map. Have a look at these two screenshots ("no blur" and "blur"); the problem appears in the foreground of the second one (blur kernel width = 6). I am using the same blur shader that the VSM author published in GPU Gems. I have also checked my sampler states (tried clamp, wrap, and mirror), but that had no influence. Please note that the projection matrix is expanded in order to blur correctly. So I am a bit lost; any feedback would really help me.
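For reference, the kind of separable blur involved can be sketched as follows. This is a minimal Python illustration of one horizontal box-blur pass over the two VSM moments with clamped edge addressing — my own sketch, not the actual GPU Gems shader:

```python
# Illustrative sketch (not the GPU Gems shader): one horizontal box-blur
# pass over a row of VSM texels. Each texel stores the two moments
# (depth, depth^2), and both moments are filtered with the same kernel.

def box_blur_row(row, radius):
    """Box-blur a row of (m1, m2) moment pairs, clamping at the edges."""
    n = len(row)
    width = 2 * radius + 1
    out = []
    for i in range(n):
        m1 = m2 = 0.0
        for k in range(i - radius, i + radius + 1):
            j = min(max(k, 0), n - 1)   # clamp addressing at the borders
            m1 += row[j][0]
            m2 += row[j][1]
        out.append((m1 / width, m2 / width))
    return out
```

Note how texels near the row boundary pull in their clamped neighbors: if those neighbors were never rendered to, garbage moments get mixed into the result, which is exactly the failure mode discussed below.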

I am scaling the blur filter kernel size according to the area covered by each tile.

However, the problem occurs within a single slice. See the colors: red is the first tile, green the second, blue the third, and so on.

Looks to me as if the normal of that triangle in your terrain is flipped and therefore not lit, or something like that.

See my response at B3D, but there's definitely something odd going on in the last shot: the unblurred part of the shadow right near the viewer looks to be at a much higher resolution than the blurred part, which seems to contradict your "cascade" visualization colours.

Are the visualizations at the top the shadow maps themselves? I don't see the plane in them anywhere...

Hi,

First, thanks for taking the time to answer my thread. Andy, I am going to continue on GameDev, as it seems there are more people here (and the guy I was secretly trying to get involved answered on both forums ;)

OK, first, wolf: by expanding my projection matrix I mean expanding the clip region covered, in order to get correct blurring at the edges, as suggested by Andy in his chapter.

Then, the colors indicate which split is currently used, red being the first one. So the problem is not related to any slice boundary change.

I think I have found the problem. It is due to the fact that my Z-far was set using the frustum points only; I thought I didn't need to render objects that are neither in the view frustum nor between the light and the frustum. However, this is why the blur then fails: the rendered area is being blurred with a part of the shadow map that hasn't been rendered into (I guess).



See the drawing above. Initially I was setting the Z-Far plane using the frustum points only.

I use a directional light. What I do now to create the matrices:
- first, put the light position at the frustum center;
- project all frustum points and the bounding boxes of visible casters (in the frustum, or found with a sweep test along the light direction) onto the light axis, and get the minimum and maximum light-space Z;
- move the light backward (using the minimum light-space Z computed in the previous step) to set the light at the right position, so that all objects will be rendered;
- then compute the projection matrix and the crop matrix.
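The steps above can be sketched roughly like this. This is a Python illustration under my own assumptions (points given as 3D tuples, a normalized light direction, the frustum centroid among the projected points) — not the poster's actual code:

```python
# Illustrative sketch of the light-placement step: project the frustum
# corners and caster bounding-box corners onto the light axis, then move
# the light back far enough that every caster lands in the shadow map.

def place_directional_light(points, light_dir, frustum_center):
    """Return a light position covering all points, plus the light-space
    depth range [z_min, z_max] measured from the frustum center."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    # Signed distance of each point along the light axis, relative to the
    # frustum center (which projects to 0 by construction).
    depths = [dot((p[0] - frustum_center[0],
                   p[1] - frustum_center[1],
                   p[2] - frustum_center[2]), light_dir) for p in points]
    z_min, z_max = min(depths), max(depths)

    # z_min <= 0 because the frustum center lies inside the point cloud, so
    # stepping by z_min along light_dir moves the light behind every caster.
    light_pos = tuple(frustum_center[i] + light_dir[i] * z_min
                      for i in range(3))
    return light_pos, z_min, z_max
```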

In the first method (the one that showed the issue), I was using far plane number 1 in the drawing. So some regions (near the near plane of the camera frustum in the drawing) were blurred with (0,0) (texels not being rendered to), producing a wrong result. Is this theory possible?

However, in my configuration, where I have one big low-resolution terrain mesh, I get huge Z-far and Z-near values (when computing the dot product between the bounding box and the light position/direction). In the example, I would get plane 2 as the Z-far (in my scene, Z-far can be 44000.0 with this method; same problem with Z-near).

Right now I am clamping my values to some constant to reduce the Z-far minus Z-near range; otherwise precision would be very bad. However, this is a bit of a hack.

A solution would be to decompose the big mesh into smaller chunks.

Do you see any other solution?



Another problem I now have is this. Look at the screenshot below: I am using SH to render the scene, and I get some issues at the shadow boundaries, as I don't use NdotL but rather an SH representation of the sky. Using NdotL would result in a scene that is too dark...

See the red arrows (only first texture slice is debug-displayed):




First, I don't understand why I get a hard shadow on the tail of the aircraft. I know that theoretically the tail should be all dark, because the sun is on the other side. But because of the SH lighting I have no way of differentiating ambient and diffuse lighting; I simply compute the SH light factor and scale it according to the shadow result.
If I scale using NdotL, the shadows are nice, but the scene gets too dark (because the ground's NdotL is nearly 0, so the ground goes black). Is there a solution to my problem?

Thanks a lot, guys, for your help; I appreciate it. And to be honest, I am quite happy to get help from two famous guys ;)

Once I have solved this, I have 1-2 other questions for you Andy regarding the article... going to bother you ;)

PS: if you want to see some source code, just ask.

[Edited by - gjaegy on February 8, 2008 3:01:13 AM]

Quote:
Original post by gjaegy
In the first method (the one that showed the issue), I was using far plane number 1 in the drawing. So some regions (near the near plane of the camera frustum in the drawing) were blurred with (0,0) (texels not being rendered to), producing a wrong result. Is this theory possible?

Yeah, that's definitely possible. Often it works to simply clear your shadow map to (1, 1) or similar (or even higher if you're using an fp32 shadow buffer) before rendering, so that pixels that are "not rendered to" are considered to be at the far plane. You may still have some trouble if you're aggressively clamping the depth range near the camera: the problem is that the "support" of the pixels on screen covers a *range* in the shadow map, particularly when you blur. The same is actually true of mipmapping, but those issues will be harder to notice.

Thus what you want to try to ensure is that everything "around" each of the shadow map pixels that you may sample is rendered and valid in the shadow map. That's potentially easier said than done, but I suspect you can play with a reasonable heuristic to get this to work in the vast majority of cases. If you put a bound on the "z" component of the face normal of any surfaces rendered into the shadow map you can determine precisely how far to extend the depth range, but that may be overkill.

Probably a "fuzzy boundary", where you just increase the depth ranges near the camera and so on by some percentage, would be sufficient for your purposes.
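That suggestion might look something like this minimal sketch (my own illustration; `pad_fraction` is an assumed tuning parameter, not something from the thread):

```python
# Illustrative "fuzzy boundary": pad the tight light-space depth range by a
# fraction of its extent so geometry just outside the range still renders
# into the shadow map instead of leaving unrendered texels to blur against.

def pad_depth_range(z_near, z_far, pad_fraction=0.1):
    """Expand [z_near, z_far] by pad_fraction of its extent on both ends."""
    pad = (z_far - z_near) * pad_fraction
    return z_near - pad, z_far + pad
```

The trade-off is the one described earlier in the thread: a larger range costs depth precision, so the fraction should stay small.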

Quote:
Original post by gjaegy
If I scale using NdotL, the shadows are nice, but the scene gets too dark (because the ground's NdotL is nearly 0, so the ground goes black). Is there a solution to my problem?

Yeah, shadows/shadow maps don't really make much sense unless you're using NdotL or similar in addition to the shadow attenuation. I'd suggest that if you want to use them together, treat the shadow-casting light (the "sun") as a standard directional light with NdotL and shadow attenuation, and then treat the SH part as an ambient term that you *add* to that lighting. You can play with the multipliers appropriately to get the desired effect. Note that this will brighten your shadows a bit (due to the ambient term), but that's fairly close to what SH is trying to model anyway.
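A minimal sketch of that combination (my own formulation; `sun_intensity` and `ambient_scale` are hypothetical multipliers standing in for the "play with the multipliers" step):

```python
# Illustrative per-pixel lighting combination: the sun is a directional
# light whose NdotL diffuse term is modulated by the shadow attenuation,
# and the SH result is added as an unshadowed ambient term.

def shade(n_dot_l, shadow, sh_ambient, sun_intensity=1.0, ambient_scale=1.0):
    """Combine a shadowed directional sun term with an SH ambient term.

    shadow is the shadow attenuation in [0, 1] (1 = fully lit)."""
    diffuse = max(n_dot_l, 0.0) * shadow * sun_intensity
    return diffuse + sh_ambient * ambient_scale
```

In a fully shadowed pixel only the ambient term survives, which is why shadows brighten a bit, as noted above.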

Quote:
Original post by gjaegy
Once I have solved this, I have 1-2 other questions for you Andy regarding the article... going to bother you ;)

No problem - feel free to post here or send me an e-mail. I'll get back to you as soon as I can.

Hi Andy,

Thanks a lot for your answer. I am on the way to solving the blur issue properly.

Concerning the NdotL issue, I am not sure I want to model the sun light as a simple directional light. This would remove any indirect-lighting effect produced by light bounces.
An option would be to create two SH light sources, splitting the original one into two (ambient and diffuse). This would keep the benefit of SH for indirect lighting, but would require some additional VS instructions ;(
I will give it a try.

One question on another subject, as to be honest I am not sure about this: what happens if fDepth is < 0? From what I have seen it seems to work, but as your mathematical background is better than mine, your point of view would interest me. I could of course avoid this situation, but if negative distances work, the source code stays simpler...

Another question: in the book, the following snippet, found on the CD, is different. To be honest, I know it works, but as I am a bit curious, I would like to understand why you commented out the last part of the function; I didn't really understand it.


float2 ComputeMoments(float Depth)
{
// Compute first few moments of depth
float2 Moments;
Moments.x = Depth;
Moments.y = Depth * Depth;

// Adjust the variance distribution to include the whole pixel if requested
// NOTE: Disabled right now as a min variance clamp takes care of all problems
// and doesn't risk screwy hardware derivatives.
//float dx = ddx(Depth);
//float dy = ddy(Depth);
//float Delta = 0.25 * (dx*dx + dy*dy);
// Perhaps clamp maximum Delta here
//Moments.y += Delta;

return Moments;
}


And I forgot to say: your algorithm simply rocks. I now use a 4 x 1024 x 1024 texture array, using geometry instancing and the GS to render to the 4 slices in one pass, without any warping technique. And the result is much more realistic and better-looking than with my previous method (CSM with 4 slices, all packed into one big 4k x 4k shadow map texture, using a modified TSM warping method). The old method looked nice, but the texture needed to be 4x bigger (2x2) to get a decent result (for a 2000 m / 3000 m shadowed area)!

Quote:
Original post by gjaegy
One question on another subject, as to be honest I am not sure about this: what happens if fDepth is < 0?

It shouldn't be a problem, as the mean (the first moment) that you read back from the VSM will always be >= 0. Thus your fDepth <= mean test will always be true, and the shadow attenuation will always be 1 (i.e. fully lit). Optionally you can clamp fDepth to [0, 1], but it doesn't really matter either way.
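To illustrate, here is a minimal sketch of the standard Chebyshev upper-bound test used by VSM (my own Python formulation, not code from the thread; the `min_variance` parameter is the minimum-variance clamp discussed elsewhere in this thread):

```python
# Illustrative VSM shadow test via Chebyshev's inequality. moments holds
# (E[z], E[z^2]) read back from the (blurred) shadow map; depth is the
# receiver's light-space depth. A negative depth is always <= the
# non-negative mean, so the early-out returns fully lit.

def chebyshev_upper_bound(moments, depth, min_variance=1e-4):
    """Return the VSM shadow attenuation in [0, 1] for a receiver depth."""
    mean, mean_sq = moments
    if depth <= mean:                       # receiver in front of the mean
        return 1.0                          # fully lit
    variance = max(mean_sq - mean * mean, min_variance)
    d = depth - mean
    return variance / (variance + d * d)    # Chebyshev upper bound
```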

Quote:
Original post by gjaegy
Another question: in the book, the following snippet, found on the CD, is different.

Yeah that's an interesting section of code and related to the discussion about biasing and VSMs. Basically the concept is that unlike with standard shadow maps we can actually represent the fact that a pixel in a shadow map is not actually a single, discrete depth but rather covers a "range" of depths over the relevant portion in the triangle. That formula there uses the derivative instructions to approximate this range of depths and adjust the "pixel variance" accordingly.

The problem is that the hardware-provided derivatives are sometimes a bit flaky as they are achieved simply by forward differencing. Optionally one can use the surface normal to compute the pixel depth coverage but that requires passing it into the shaders while rendering the shadow map.

The larger problem is that representing the range of a pixel's depth works well when the triangle covers the whole pixel, but breaks down a bit at triangle edges. At these edges, when the triangle is almost aligned with the light rays, it's possible for the derivatives to get extremely high, even though the triangle doesn't actually span that whole range (it would, if it continued). There's no way to know this with a typical rasterizer, though, so in practice you just have to try to clamp out these huge values. Otherwise, you can get some weird light splotches "randomly" appearing in shadowed regions, depending on the participating geometry.

The final nail in the coffin for that section of code is that simply clamping the minimum variance before computing Chebyshev's inequality effectively eliminates all biasing artifacts in practice. In the really extreme cases of long polygons parallel to the light rays, NdotL ~ 0 anyway, so a little improper shadow "darkening" is going to be totally unnoticeable.

Thus it's a mathematically interesting result, but in practice I recommend just clamping the minimum variance.

Quote:
Original post by gjaegy
And I forgot to say: your algorithm simply rocks. I now use a 4 x 1024 x 1024 texture array, using geometry instancing and the GS to render to the 4 slices in one pass, without any warping technique. And the result is much more realistic and better-looking than with my previous method (CSM with 4 slices, all packed into one big 4k x 4k shadow map texture, using a modified TSM warping method). The old method looked nice, but the texture needed to be 4x bigger (2x2) to get a decent result (for a 2000 m / 3000 m shadowed area)!

Awesome, I'm glad it's working well for you :) For really kick-ass shadows, try enabling MSAA when rendering the shadow map, if you aren't already. Then you effectively get a shadow map at 4x the resolution (with 4x MSAA) with very little additional cost :)
