

Variance Shadow Map fades when shadow is close to occluder


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

8 replies to this topic

#1 sobeit   Members   -  Reputation: 440


Posted 01 July 2014 - 01:56 AM

I've implemented variance shadow maps, but I still have a problem: the shadow fades as it approaches the occluder. The same issue was posted here

 

It seems to be a weakness of the algorithm, because p_max ( = variance / [variance + d * d]) inevitably approaches 1 as d becomes smaller. But I don't see the same problem in the authors' paper or demo, nor do they mention it, so I guess I probably did something wrong. Do you have the same issue? How do you address it? Thanks.

vsm_shadow_fades.PNG
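The p_max term under discussion is the Chebyshev upper bound from the VSM paper. A minimal Python sketch (an editor's illustration, not the poster's actual shader code; names are chosen for clarity) shows why the bound approaches 1 as the receiver nears the occluder:

```python
def p_max(variance, d):
    """Chebyshev upper bound on the fraction of light received.

    variance: E[z^2] - E[z]^2 taken from the filtered shadow map
    d:        receiver_depth - mean_occluder_depth (> 0 when occluded)
    """
    return variance / (variance + d * d)

# With any fixed minimum variance, the bound tends to 1 as d shrinks,
# i.e. the shadow fades out right next to the occluder:
for d in (1.0, 0.1, 0.01, 0.001):
    print(d, p_max(0.002, d))
```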




#2 imoogiBG   Members   -  Reputation: 1222


Posted 01 July 2014 - 06:58 AM

Guessing that it might be light bleeding or a small shadow-map size?
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch08.html

 

Ignore that comment


Edited by imoogiBG, 01 July 2014 - 07:01 AM.


#3 Hodgman   Moderators   -  Reputation: 31096


Posted 01 July 2014 - 07:10 AM

I'd have to read it again, but I thought this was mentioned by the author...
IIRC, the solution is to always render front-faces into the shadow map (some other algorithms recommend drawing the back-faces of the casters), and there was a 'bias' parameter added to the VSM algorithm, usually set to 0.5, which results in less-soft edges but also reduces this "light bleeding" problem. The bias parameter can be tweaked to trade more/less bleeding against softer/harder edges.
I'll check my code and see if I've remembered this bias param correctly... Maybe it was added after the initial paper was published?

[edit] with the above, I was thinking of the "second receiver" issue that's mentioned in the nVidia link above...

Are you rendering the ground plane into the shadow map too?

And are you clamping variance so that it never goes below some small value (and have tweaked that small value)?

#4 Promit   Moderators   -  Reputation: 7341


Posted 01 July 2014 - 11:28 AM

This is a natural and obnoxious part of VSM that can be addressed with the light bleeding fix, to some extent. It's soured me on VSM considerably in recent years. You might find this presentation helpful: http://developer.download.nvidia.com/presentations/2008/GDC/GDC08_SoftShadowMapping.pdf

ESM, or VSM+ESM, may be a useful approach.



#5 sobeit   Members   -  Reputation: 440


Posted 01 July 2014 - 01:10 PM

Are you rendering the ground plane into the shadow map too?

And are you clamping variance so that it never goes below some small value (and have tweaked that small value)?

thanks for the reply.

Yes, I rendered the ground plane into the shadow map. I tried leaving it out, but the artifact I mentioned got worse.

Yes, I clamped the variance to a minimum of 0.002. But I think it doesn't matter, because p_max = variance / (variance + d * d).

 

 

This is a natural and obnoxious part of VSM that can be addressed with the light bleeding fix, to some extent. It's soured me on VSM considerably in recent years. You might find this presentation helpful: http://developer.download.nvidia.com/presentations/2008/GDC/GDC08_SoftShadowMapping.pdf

ESM, or VSM+ESM, may be a useful approach.

But the cause of that light bleeding is different, right? And it was mentioned in the paper. That's why I think I probably did something wrong.

Thanks for the resource; I'll try implementing that and compare the results.


Edited by sobeit, 01 July 2014 - 01:10 PM.


#6 Hodgman   Moderators   -  Reputation: 31096


Posted 01 July 2014 - 06:55 PM

Yes, I clamped the variance to 0.002. But I think it doesn't matter, because p_max = variance / (variance + d * d)

Depending on the units you're using, that may be a large value.
In the example of a flat caster, variance will be zero, so you get p=0/(0+d*d), so in theory, there is no fading/bleeding. When you clamp with a minimum variance you're avoiding the practical issues that occur when using very small numbers, but you're also deliberately introducing light bleeding.

With v=0.002, and d=sqrt(v)~=0.045, we end up with p=1/2.
So if your units are in meters, then at 4.5cm behind the caster the shadow will only be at 50% intensity. It reaches 99% intensity at about 44.5cm behind the caster (d=sqrt(99*v)~=0.445).
Reducing your minimum variance value will reduce these distances.

Also, if your units are larger than meters, things can be much worse. Often you use normalized depth, where 1.0 = the distance to the far plane; imagine the case where the far plane is 10000m away! In that case, the 50% shadow value is reached at ~450m, and the 99% value at ~4.5km... :(

Ignoring the artificial case where v=0, pmax is never actually going to reach 0, so the shadow intensity never reaches 100%... which isn't ideal; it means that an extremely bright light will always *somewhat* shine through a brick wall...
Usually we'd want the shadow to reach 100% intensity at some point, so I'd recommend remapping pmax for this reason alone.
E.g. with p=saturate(p*2-1), at the point where the shadow would originally have reached only 50% intensity (~4.5cm), it now reaches 100%.
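The distances quoted above follow from solving p_max = v/(v + d*d) for d. A quick Python check (an illustration only, assuming depth measured in meters and a minimum variance of 0.002):

```python
import math

def distance_for_pmax(variance, p):
    # Distance behind the caster at which the Chebyshev bound
    # falls to a given light fraction p: d = sqrt(v * (1 - p) / p)
    return math.sqrt(variance * (1.0 - p) / p)

d50 = distance_for_pmax(0.002, 0.5)   # ~0.045 m: shadow at only 50%
d99 = distance_for_pmax(0.002, 0.01)  # ~0.445 m: shadow reaches 99%

def remap(p):
    # The suggested remap, p = saturate(p * 2 - 1), pushes the old
    # 50% point down to full shadow.
    return min(max(p * 2.0 - 1.0, 0.0), 1.0)

print(round(d50, 3), round(d99, 3), remap(0.5))
```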

#7 sobeit   Members   -  Reputation: 440


Posted 03 July 2014 - 11:34 AM


With v=0.002, and d=sqrt(v)~=0.045, we end up with p=1/2.
So if your units are in meters, then at 4.5cm behind the caster the shadow will only be at 50% intensity. It reaches 99% intensity at about 44.5cm behind the caster (d=sqrt(99*v)~=0.445).
Reducing your minimum variance value will reduce these distances.

 

Sorry, I'm not sure I get what you mean.

So you're suggesting using normalized distance? Divide d by (light_far - light_near), like this?

thanks.


Edited by sobeit, 03 July 2014 - 11:34 AM.


#8 Hodgman   Moderators   -  Reputation: 31096


Posted 03 July 2014 - 07:05 PM

No. It's common to store normalized units in the buffer, but I was just saying that your magic number 0.002 is measured in the same units as your z units.
So with normalized depth units it's "0.002 far planes", or if you're working in meters it's "2mm".

Also, if you are working in meters, the math shows that your value of 0.002 will result in 50% light leaking at a distance of ~4.5cm, and 1% light leaking at ~45cm.

To reduce the leaking, you can either replace 0.002 with a smaller value, or you can scale&bias the pmax result (pmax = (pmax-f)/(1-f), where f is from 0..1), or both.
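The scale-and-bias step described above can be sketched in Python (an illustration; the function names are the editor's, not from the thread or the VSM paper):

```python
def linstep(lo, hi, v):
    # Clamped linear remap of v from [lo, hi] to [0, 1].
    return min(max((v - lo) / (hi - lo), 0.0), 1.0)

def reduce_light_bleeding(p_max, f):
    # f in [0, 1): values of p_max below f become full shadow, and the
    # remaining range is stretched back out to 0..1. f = 0 is a no-op.
    return linstep(f, 1.0, p_max)
```

With f = 0.5 this is exactly the saturate(p*2-1) remap mentioned earlier in the thread.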

#9 Brejlounek   Members   -  Reputation: 169


Posted 06 August 2014 - 11:19 AM

I used a smaller minimum variance value (something like 0.0000001 instead of 0.002) and changed the texture format from 16-bit float to 32-bit float or 16-bit unsigned int (better accuracy because there is no exponent, but your values must not overflow); that solved it for me.


Edited by Brejlounek, 06 August 2014 - 11:29 AM.




