Opinions on "Pixel-Correct Shadow Maps with Temporal Reprojection .."

Started by
16 comments, last by coderchris 16 years, 6 months ago
Hey, lots of posts (sorry I've been a little busy).

Quote:Original post by coderchris...

From what I understand, the conf(x,y) term basically just says that eye-space points which, when projected into light space, land closer to the respective 2D point in shadow map space have a higher confidence of being the correct result (either in shadow or not). It seems a little bit of a hack, IMO, in the case where the points aren't equal (i.e., center_x != x or center_y != y).

The main reason I don't use pre-baked shadows is I have a sizeable world, so they wouldn't all fit in texture memory (especially when you consider all the other textures, render targets, etc). I realize I could use a mega-texturing technique or something similar, but this is just for a student/class project I'm working on, so I'd rather not spend hours baking shadows either (with some GI method or something) whenever I change one of my meshes.

Yeah, so since this is just a little "advanced real-time rendering" class project (we get the entire semester to make something mildly impressive), I'm not really bound by the same restrictions as a normal game (I have lots of memory, I'm not doing much other than culling on the CPU, no animations to skin, etc.). Talking to some people, it seems PSVSM is going to be a popular choice for shadows, so I thought I might try to make something a little more impressive with regards to shadows in addition to PSVSM.

After looking at the GH paper more, I do agree with AndyTX that it seems to be a more general solution, applicable to more problems than simply shadows.

Anyways, if I do implement this technique or anything similar, I probably won't post any demo until late December / January (when my class ends). So I'm just saying not to expect any demo soon or anything like that.
The thing that's confusing me a bit is the video ( http://www.cg.tuwien.ac.at/research/publications/2007/Scherzer-2007-PCS/ ). When the camera doesn't move at all, I was expecting the shadow edges not to flicker / update at all.

Y.
Quote:Original post by Christer Ericson
There is no such thing. I don't mean anything rude here, but in my experience only people who don't actually do games for a living talk about "extra GPU power." Another 100% mythical beast would be "enough memory."

Oh not at all, I completely agree with you :) Indeed that's the reason why IMHO these iterative techniques don't end up getting used much: a "modern" game is often running potentially far below 100fps, in which range they make very little difference and add unnecessary overhead. That said if you make - say - a puzzle game (or something fairly static) targeting 60fps on GeForce 6's, you could certainly do some supersampling on GeForce 8800's :)

Quote:Original post by coderchris
However, in this paper, these coordinates are used as "x" and "y" (is that right?). Then "centerx" and "centery" are the x and y clamped onto the nearest texel (correct?)

I believe that's correct.

Quote:Original post by Ysaneya
The thing that's confusing me a bit is the video ( http://www.cg.tuwien.ac.at/research/publications/2007/Scherzer-2007-PCS/ ). When the camera doesn't move at all, I was expecting the shadow edges not to flicker / update at all.

The samples are still being "faded out" with an exponential falloff, so there will continually be new samples introduced (every frame) and old ones eliminated. Unless convergence is very good, or there is a very slow falloff (which would produce artifacts when the camera moves quickly), there will always be a little bit of "flicker". Probably not a huge issue in practice.

And as I missed something in the original post:
Quote:Original post by wyrzy
My PSSM implementation is pretty basic (I don't do any scene analysis), but even the GPU Gems 3 demo that does has very noticeable artifacts.

Did you check out the PSVSM demo in Gems 3 (in the demo provided with the SAVSM chapter)? It's not covered in the actual chapter text, but the implementation is straightforward. I don't claim that it's a perfect implementation (I threw it together fairly quickly), but it produces pretty good results in practice IMHO and eliminates many of the artifacts associated with PSSM. In particular once you get to something like 3x 1024 VSM's with 4x shadow MSAA things are getting pretty near sub-pixel accuracy even for large framebuffers.

Now there are some bugs and inefficiencies in the split computation code, but generally they're explained in the source pretty well with "TODO"'s :D
Quote:Original post by AndyTX
Did you check out the PSVSM demo in Gems 3 (in the demo provided with the SAVSM chapter)? It's not covered in the actual chapter text, but the implementation is straightforward. I don't claim that it's a perfect implementation (I threw it together fairly quickly), but it produces pretty good results in practice IMHO and eliminates many of the artifacts associated with PSSM. In particular once you get to something like 3x 1024 VSM's with 4x shadow MSAA things are getting pretty near sub-pixel accuracy even for large framebuffers.


Yeah, I checked it out briefly. I have an 8-series card, but I'd really like to have it run on 7-series cards under D3D9 too. In that case, I can't really use MSAA (using RGBA32F) and the manual bilinear filtering of RGBA32F slows me down a little too.

I have coded up an initial PSVSM implementation, but I don't claim it's optimal or perfect, to say the least. I have had trouble with shallow angles with respect to the light and VSM (the shadow seems to become unnaturally soft), though shallow angles are a difficult case in general. I'll probably try improving my PSVSM before attempting other techniques, or do something like VSM on the terrain and then jittered PCF on objects close to the camera (otherwise their shadows appear unnaturally soft). I'm just looking to plan ahead so I have a rough schedule, as my deadline can't be moved.

The 3x 1024x1024 blur (with manual filtering on RGBA32F targets) pushes the 7-series somewhat as well. However, I haven't really taken any time to optimize my implementation yet.

Anyways, thanks for the comments, I think I have enough to work with.
OK, so I have it all implemented as described in the paper, and I'm almost getting results. My shadow edges are soft and "swimming" (it almost looks like the shadow is "boiling"), with no movement going on in the scene. I've also noticed that if I increase the amount of jitter, I get jittery penumbra shadows, which looks cool, but I was under the impression that it's supposed to converge towards crisp shadows... I'm almost positive it has something to do with my confidence function; does anybody see anything wrong with this:

    // projected is the texture coordinates of the shadow map projected
    // onto the current pixel in the history buffer
    float getConfidence(float2 projected)
    {
        float2 realPos = projected * shadowMapSize;
        // compute the 'centerx' and 'centery' by clamping to the texel center
        float2 clampedPos = floor(realPos) + float2(0.5, 0.5);
        float2 posAbs = abs(realPos - clampedPos);
        return 1 - max(posAbs.x, posAbs.y) * 2.0;
    }


Perhaps I shouldn't be using the projected shadow map coords for these calculations?

Here's a pic of what's going on in case you're curious... As you can see, the shadow edges aren't what they should be, and there are some weird artifacts near where the boxes touch the floor:

http://img502.imageshack.us/my.php?image=shadowpicta2.jpg
Quote:Original post by coderchris
which looks cool, but I was under the impression that it's supposed to converge towards crisp shadows... I'm almost positive it has something to do with my confidence function; does anybody see anything wrong with this:

I haven't had a chance to look at your confidence function in detail, but one thing to note is that you shouldn't be jittering your samples more than a single pixel. Since you're effectively using a box filter to reconstruct the multisampling, you definitely don't want to make this too "wide" a filter, or you'll get over-blurring like you're seeing.

I believe in the paper they also used a power function to control some of these things (the video shows C^2, C^15, etc., which I assume is the confidence function). I may be wrong, as I didn't read it over extensively, but that's my recollection... Higher powers will tend to disfavor samples that are a long way from the texel centers, although conversely they won't converge as fast to a nice solution.

Quote:Original post by coderchris
I also have noticed that if I increase the amount of jitter, I get jittery penumbra shadows

Do you jitter using the Halton random number sequence as suggested in the paper? I don't know if just using rand()/float(RAND_MAX) would be an acceptable random jitter sequence or not, but there might be a reason they specifically recommend Halton.


Regarding getConfidence(), it looks fine to me, but I've been busy lately and haven't had time to even start on this.

Also, as AndyTX mentioned, they use exponential smoothing (see page 3).
I am using the Halton sequence; I got the code outline from the very site you linked to. What range of values should I be passing into it? At the moment, I just have a counter variable that gets incremented every frame, and I pass this value along with a dimension to get the Halton value. When my counter gets to 1000, I reset it to 1; is this a good idea? Also, for the x jitter I use dimension 2, for the y jitter I use 3, and for the rotational jitter I use 5... It seems pretty random and well distributed, so I don't think my jittering is too far off, except I guess I could scale it a little.

I also raise my confidence by a power as they suggest; I'll just have to mess around with different configurations of the confidence computation until it actually converges, because at the moment it doesn't really converge at all; it just stays random and jittery.

This topic is closed to new replies.
