lonesock

Extending VSM - cheap PCSS & alternate representation



Hi, All. I've just started playing with shadow maps, and I began by implementing vanilla VSM because I found the concept to be quite elegant. I've found two simple ways to extend the concept:

1) Since we already have the sigma values, I can use them to estimate the blocker position (a la PCSS). If you were going to do an 8x8 search for a blocker, instead get the average depth & depth^2 from the 3rd mipmap level, and:

d_blocker ~= avg_depth - 1.5*sigma
where: sigma = sqrt(avg_depth2 - avg_depth*avg_depth)

You then need to clamp the blocker distance between 0 and the fragment distance, or you could get negative/undefined numbers. Using the standard PCSS calculation I get a penumbra width estimate; take the log2() of that, and use it as the mipmap level for the regular VSM calculation!

2) Storing the depth^2 values gets tricky, so instead of storing depth & depth^2, I first scale all my depth values so that they are in the range [0..1] (by knowing something about the scene in question and using a uniform float inverse_max_depth). Then I use the fact that if X is in the range [0..1], then X - X^2 is always in the range [0..1/4]. Also, X - X^2 is nearly linear near X=0 and X=1, and flat near X=0.5 - but X itself is linear there, so the two values complement each other. So I store depth and encoded = 4.0*(depth - depth^2). I think this gives me better precision over the entire range, but I'm not entirely sure; if any math gurus wish to verify this I would be most grateful! (It seems to work, so I just thought I'd share.) Then to recover the variance:

sigma2 = avg_depth - 0.25*avg_encoded - avg_depth*avg_depth

The mandatory simple screenshot, and the demo (with source code):

[M] to toggle mouse control
to toggle mouse invert Y-Axis
[Esc] to quit
[Arrow Keys] movement
[PageUp/Down] change the light diameter
[Space] toggle light rotation (and will show fps info)

Sorry, I haven't gotten it working on ATI cards yet. Also, this is using RGBA16F because I couldn't get my 7300 to use 16-bit integers or LA16F or GR16F! 16-bit integer formats work great on my 8600 at home, and I hope they will also be supported on ATI HW, because I need mipmaps and trilinear filtering.

Future:
* I want to do a simple Gaussian blur on the mipmaps of the shadowmap (as Mintmaster has mentioned). Right now I do a little 4-sample fakery in the final shader.
* I want to reduce the light bleeding (I've seen AndyTX say there is a simple way, but haven't found the code yet).
* I want this to work on as many platforms/HW as possible.

Please hit me with any criticism, ideas, comments, questions, etc.!
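To make the two tricks concrete, here is a minimal GLSL sketch of how they fit together. This is my own illustration, not code from the demo: the uniform names, the epsilons, and the use of textureLod are assumptions, and mip level 3 stands in for the 8x8 blocker search.

uniform sampler2D u_shadow_map; // rg = (depth, 4.0*(depth - depth*depth))
uniform float u_light_size;     // light diameter, in shadow-map texels

// encoding used when writing the shadow map (depth pre-scaled to [0..1])
vec2 encode_moments(float depth)
{
    return vec2(depth, 4.0 * (depth - depth * depth));
}

// recover sigma^2 from the filtered moments, using
// E[d^2] = E[d] - 0.25*E[encoded]
float decode_variance(vec2 m)
{
    return max(m.x - 0.25 * m.y - m.x * m.x, 0.0);
}

float vsm_pcss(vec2 uv, float frag_depth)
{
    // 1) blocker estimate from the 3rd mip level (~ an 8x8 average)
    vec2 avg = textureLod(u_shadow_map, uv, 3.0).xy;
    float sigma = sqrt(decode_variance(avg));
    float d_blocker = clamp(avg.x - 1.5 * sigma, 0.0, frag_depth);

    // 2) standard PCSS penumbra estimate, mapped to a mip level
    float penumbra = u_light_size * (frag_depth - d_blocker) / max(d_blocker, 1e-4);
    float lod = log2(max(penumbra, 1.0));

    // 3) regular VSM (Chebyshev) test at that mip level
    vec2 m = textureLod(u_shadow_map, uv, lod).xy;
    float variance = max(decode_variance(m), 1e-5);
    float m_d = frag_depth - m.x;
    return (frag_depth <= m.x) ? 1.0 : variance / (variance + m_d * m_d);
}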

Quote:
Original post by lonesock

* I want to reduce the light bleeding (I've seen AndyTX say there is a simple way, but haven't found the code yet)


The "AndyTX" way is to simply clip off the tail end of Chebyshev's Inequality. In HLSL it looks something like this:



// compute the variance from the two moments; clamp to avoid
// numerical problems when the variance is tiny
float variance = moments[1] - moments[0] * moments[0];
variance = max(variance, 1e-5f);

// calc p_max using Chebyshev's inequality
float m_d = moments[0] - depth;
float p_max = variance / (variance + m_d * m_d);

// clip off the tail end of the inequality to reduce light bleeding
p_max = smoothstep(0.3f, 1.0f, p_max);


...where smoothstep is defined like so:


float smoothstep(float min, float max, float input)
{
    return clamp((input - min) / (max - min), 0.0f, 1.0f);
}

Quote:
Original post by MJP
awesomely helpful stuff

Thanks! I will work that into the next revision!

Related note: I found out why my code wasn't working on ATI cards: glCreateProgramObjectARB() was returning a negative number on success. I had assumed that > 0 meant success, so even though the shaders compiled fine, I was never calling glUseProgramObjectARB because the ID was < 1.

One thing I forgot to mention: the demo will load .OBJ files; just drag them onto the exe to see what this looks like on a random scene. Don't expect greatness on large scenes, though: this is using only a single 512x512 map and no true blurring.

Very cool! I actually use a similar approximation of blocker depth: mu - sigma. I chose this because it's exact at the "center" of the filter (when 50% of the filter is covered by the blocker and 50% by the receiver). Curiously, is there a reason why you chose 1.5*sigma? It shouldn't matter too much anyway, but I'm interested.

I particularly use this approximation with the summed-area variance shadow maps stuff (see the GPU Gems 3 chapter), where you can get very nice results that don't suffer from the blocky artifacts of using mipmapping for blurring. Of course, Mintmaster's custom mipmap generation stuff (Gaussian blur, etc.) may work quite well also, but I haven't had the time to try it.
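(For the record, the 50/50 case works out exactly: with half the filter at blocker depth d1 and half at receiver depth d2, mu = (d1 + d2)/2 and sigma = sqrt((d1^2 + d2^2)/2 - mu^2) = (d2 - d1)/2, so mu - sigma = d1.)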

That's an interesting depth/variance encoding. I'll have to try that out and run the math on it. Thanks for posting it!

MJP already posted the simple "light bleeding reduction by over-darkening" (although what he calls "smoothstep" should actually be called "linstep" there) that I discussed in GPU Gems 3. Further work in Layered Variance Shadow Maps (to be published very soon - just cleaning it up) and Exponential Variance Shadow Maps (as well as Exponential Shadow Maps themselves - see ShaderX6) provide more ways to generalize and approximate the depth distribution, each with associated trade-offs. There's a recent thread at Beyond3D in the console section about the GDC presentations wherein a couple of us are discussing some possibilities in more depth.
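For anyone following along in the OP's OpenGL setup, the same remap might look like this in GLSL, under the more accurate name (the function names here are mine; `amount` plays the role of the 0.3 threshold above):

// linstep: plain linear remap of x from [lo..hi] into [0..1]
// (unlike the built-in smoothstep, there is no Hermite curve applied)
float linstep(float lo, float hi, float x)
{
    return clamp((x - lo) / (hi - lo), 0.0, 1.0);
}

// light-bleeding reduction: everything below `amount` becomes fully
// shadowed, and the remainder of [amount..1] is rescaled to [0..1]
float reduce_light_bleeding(float p_max, float amount)
{
    return linstep(amount, 1.0, p_max);
}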

Quote:
Original post by AndyTX
Very cool! I actually use a similar approximation of blocker depth: mu - sigma. I chose this because it's exact at the "center" of the filter (when 50% of the filter is covered by the blocker, and 50% receiver). Curiously is there a reason why you chose 1.5*sigma? It shouldn't matter too much anyways, but I'm interested...

Well, the 1.5 came about because I used Excel's solver to minimize the error of this approximation on a step function, using a few different smoothing widths. The solver almost always came back with a value around 1.5. Ironically, in my demo I just use 1.0, because it didn't really make any difference that I could see. [8^)

Quote:
Original post by AndyTX
That's an interesting depth/variance encoding. I'll have to try that out and run the math on it. Thanks for posting it!

You are welcome! (BTW, I forgot to mention: when initializing the depth/variance map I need to clear it to 1.0/0.0 instead of 1.0/1.0, since at depth = 1.0 the encoded value 4.0*(depth - depth^2) is 0.)

I don't have any of the GPU Gems or ShaderX series, though they look cool. I just do this stuff as a hobby, and none of my local bookstores carry them, so I can't even "impulse buy" them [8^). Thank you for the feedback; I will check out the resources you mentioned and get caught up on the more recent work.

Quote:
Original post by lonesock
Related note: I found out why my code wasn't working on ATI cards: glCreateProgramObjectARB() was returning a negative number on success. I had assumed that > 0 meant success, so even though the shaders compiled fine, I was never calling glUseProgramObjectARB because the ID was < 1.

Which is the reason why a program handle is an unsigned integer (GLuint). ATI returns large numbers for texture handles, which become negative if interpreted as signed.

Quote:
Original post by swiftcoder
Which is the reason why a program handle is an unsigned integer (GLuint). ATI returns large numbers for texture handles, which become negative if interpreted as signed.

I see. I am using GLee 5.21, and it defines GLhandleARB as type "int". Thank you for the response.

Quote:
Original post by lonesock
Quote:
Original post by swiftcoder
Which is the reason why a program handle is an unsigned integer (GLuint). ATI returns large numbers for texture handles, which become negative if interpreted as signed.

I see. I am using GLee 5.21, and it defines GLhandleARB as type "int". Thank you for the response.

Hmm, I just looked this up, and it seems the ARB did define GLhandleARB as a signed integer. However, when the extension was approved into the core library, the handle type was dropped and handles became GLuint.

Since you are using GLee, there is really no point warting your code with *ARB all over the place - use the standard GL 2.0 functions and types, and GLee will deal with it for you ;)

OK, I've updated the original zip file with new code and demo, and updated the original screenshot.

DONE
* Reduced light bleeding
* Should run on ATI HW too
* Uses RGBA16 textures for semi-cross-HW compatibility (some older NV cards drop this to RGBA8; you'll know if this happens)
* Added a simple 3x3 Gaussian blur on the shadow map just after rendering it

STILL TO DO
* larger Gaussian blurs (separable - see the sketch below)
* get the RGBA8 encoding working (it has some weird artifacts)
* enable MSAA (which I've never done before; just need to Google it, I guess [8^)
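For the separable blur, one axis of the pass might look something like this in GLSL. Just a sketch, not code from the demo: the 5-tap binomial weights and the uniform/varying names are my own placeholders. Run it once with u_axis = (1/w, 0) and once with (0, 1/h).

#version 130

uniform sampler2D u_source; // the shadow map (or the previous blur pass)
uniform vec2 u_axis;        // one texel step along the blur axis

in vec2 v_uv;
out vec4 frag_color;

void main()
{
    // 5-tap binomial weights [1 4 6 4 1] / 16 (they sum to 1)
    const float w[3] = float[3](0.375, 0.25, 0.0625);

    vec4 sum = texture(u_source, v_uv) * w[0];
    for (int i = 1; i < 3; ++i)
    {
        sum += texture(u_source, v_uv + float(i) * u_axis) * w[i];
        sum += texture(u_source, v_uv - float(i) * u_axis) * w[i];
    }
    frag_color = sum;
}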

and here is a new screenshot using the OBJ loading feature:


@swiftcoder: Thanks for all your help and info!

Lookin' good! I'm definitely interested in your low-precision encodings especially, and in how well they scale to larger depth ranges. Any details you're willing to provide would be appreciated :)
