
Extended PSM (research paper, demo, code)



Extended PSM (XPSM) provides a practical solution for real-time shadow rendering. Distinguishing features of this method compared with other shadow map reparameterization techniques are:
* high quality shadows, based on a suboptimal projective space parameterization
* complete artifact reduction with correct post-perspective space Z bias
* stability, free from singularities
* easy integration into any shadow map based engine
* patent clearance

Research paper, demo, and code at http://xpsm.org

[XPSM screenshots]

[Edited by - the_xmvlad on August 12, 2007 2:31:42 AM]

Wow, I've only checked out the demo but that looks REALLY great. Especially the long stretched shadows with the light source being very low at the start of the demo. Could this be the holy grail of shadow mapping?

Just one small question: maybe I overlooked it, but is there no fps counter?

wolfganw:
Thanks :) Yes, for any reasonable case XPSM is suboptimal; you can check the paper for the complete analysis. In terms of shadow resolution XPSM is very close to TSM, but it doesn't have TSM's usual heavy artifacts (z acne, z bias non-linearity, etc.) or its patent issues. Another very good XPSM feature is its absolutely correct (per-pixel) Z bias, which completely removes the "visual crap" associated with the high perspective-space z bias non-linearity of other methods.

Sorry, no fps counter; XPSM is just a parameterization technique and has exactly the same performance as standard shadow maps.

After trying the demo, I have to say this is not particularly impressive: there is a lot of distortion at certain camera angles, and the resolution changes are very apparent - not any better than regular perspective shadow mapping. I prefer lower-res but static shadows that don't pop and distort, especially up close.

Matt Aufderheide:
It is the "miner's lamp" case, in which any parameterization method (LiSPSM/TSM/PSM/etc.) reduces to standard "low res" focused shadow maps. And of course, any dynamic shadows are simply incomparable with perfect static light maps.

Quote:
Original post by the_xmvlad
Matt Aufderheide:
It is the "miner's lamp" case, in which any parameterization method (LiSPSM/TSM/PSM/etc.) reduces to standard "low res" focused shadow maps. And of course, any dynamic shadows are simply incomparable with perfect static light maps.


What he means by "static" is that you don't see the shadow texels flicker when the camera moves. Not static as in lightmaps.

I did a lot of research on shadowing techniques recently (particularly TSM and LiSPSM), so I had a look at your demo and paper. It looks pretty nice, especially as it fixes a few of the problems remaining with TSM (which I already found to be a nice improvement over LiSPSM). But it's far from being the "Holy Grail" of shadowing IMO. Just yet another little improvement.

The worst case (light direction parallel to the view direction), aka the "miner's lamp" case as you call it (first time I've heard the term), is what prevents all those techniques from being practical IMO. They're fine in specific cases when you have a fixed viewpoint (like in real-time strategy games), or when the range at which you want to apply shadows is at most a few hundred meters in your typical scene. Unfortunately, I'm still looking for a solution that can apply dynamic shadows to an entire scene, which in my case can have a horizon of up to tens of kilometers, with constant, good quality.

I think I will continue my research by falling back to simple uniform shadow maps, adapted to the view frustum, and use a series of them (like in cascaded shadow maps, or parallel-split shadow maps). It's easy to implement, and I think it will give the best trade-off between quality (lack of artifacts) and performance.
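For reference, the split positions for such a cascade are commonly chosen with the "practical split scheme" from the parallel-split shadow maps literature, blending logarithmic and uniform split positions. A minimal sketch (the function name and parameters here are my own, not from this thread):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// "Practical split scheme": blend logarithmic and uniform split positions
// with a weight lambda in [0,1] (lambda = 1 is purely logarithmic).
std::vector<float> computeSplitDistances(float nearZ, float farZ,
                                         int numSplits, float lambda)
{
    std::vector<float> splits(numSplits + 1);
    for (int i = 0; i <= numSplits; ++i) {
        float t = static_cast<float>(i) / numSplits;
        float logSplit = nearZ * std::pow(farZ / nearZ, t); // logarithmic term
        float uniSplit = nearZ + (farZ - nearZ) * t;        // uniform term
        splits[i] = lambda * logSplit + (1.0f - lambda) * uniSplit;
    }
    return splits;
}
```

The logarithmic term packs resolution near the viewer, which is exactly what helps with the very long view ranges described above.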

Ysaneya:
Oh, I was really talking about the suboptimality of the XPSM parameterization, which is more like a "holy grail of parameterization"; sorry for the misinterpretation. CSM and PSSM are great techniques, but they have their problems too: a different resolution in each split, which produces artifacts where different splits join. And parameterization methods can be applied to CSM/PSSM splits for resolution improvement, just as to standard shadow maps.

I know at least one game where TSM was applied to shadow a large-scale environment - S.T.A.L.K.E.R. - and with blurring (soft shadows) it looks really good.

Hi,

This paper looks quite interesting to me. I decided to try to integrate this algorithm into my engine, but I'm facing some issues.

First, I am not sure which ViewMatrix you use: is it the main scene view matrix (normal view) or the view matrix from the position of the light?

The other issue I have is that my engine uses a scene graph approach, so I don't have access to individual objects. So my approach is to find the intersection of the bounding box of the casters with the view frustum (and extend it a little to include casters outside the frustum that cast shadows inside it).
Once this is done, I transform the 8 points of this box into view space (which view space exactly, I am not sure - see above). This replaces step 4 of your algorithm.
Steps 11 to 14 also had to be modified because I only have this single bounding box.

However, I don't manage to make it work, and I have no idea what the problem is.

What I don't understand is: if the view matrix is the main camera view matrix, is the scene rendered from the main camera point of view and not the light point of view? Is that what happens? Usually, with directional lights, I render the scene from the light point of view (which is basically "bounding box center - bounding sphere radius * lightdir").
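For illustration, the light-camera placement described in the sentence above could be sketched like this (the helper name and `Vec3` type are hypothetical, not from the demo):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical helper mirroring the description above: place the light
// camera at the scene's bounding-sphere center, pulled back along the
// (normalized) light direction by the sphere radius, so the whole scene
// lies in front of the light camera.
Vec3 lightCameraPosition(Vec3 center, float radius, Vec3 lightDir)
{
    return { center.x - radius * lightDir.x,
             center.y - radius * lightDir.y,
             center.z - radius * lightDir.z };
}
```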

Also, there is one code line I don't understand:

viewLightDir = -viewLightDir;

Why do you negate the light dir? I haven't found any mention of that in the paper...

thanks for your help!

[Edited by - gjaegy on August 17, 2007 4:47:05 AM]

This is the light space definition from the paper: "In the light space the light direction is parallel to the Z axis and the viewer origin is translated to (0, 0, 0)". The simplest way to get into light space is:
1. Take view space (the usual camera space).
2. Transform the light vector into view space: Lview.
3. Build LightSpace = LookAt((0,0,0), Lview), a matrix that aligns the light direction with the Z axis (see Figure 2 in the paper). The (View * LightSpace) matrix then transforms from some space (usually world space) into light space.
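Step 3 might be sketched as follows (this is my own reading of the post, not the demo source; the helper names are made up, and a row-vector convention is assumed, so the basis axes form the matrix columns):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };  // row-vector convention: v' = v * M

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(dot(v, v));
    return { v.x / l, v.y / l, v.z / l };
}

// Look-at rotation with the eye at the origin: the columns are an
// orthonormal basis whose Z axis is the view-space light direction,
// so v * M expresses v in that basis and Lview lands on the +Z axis.
static Mat3 buildLightSpace(Vec3 Lview)
{
    Vec3 zaxis = normalize(Lview);
    // Any up vector not parallel to the light direction works here.
    Vec3 up = std::fabs(zaxis.y) < 0.99f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 xaxis = normalize(cross(up, zaxis));
    Vec3 yaxis = cross(zaxis, xaxis);
    Mat3 M;
    M.m[0][0] = xaxis.x; M.m[1][0] = xaxis.y; M.m[2][0] = xaxis.z; // col 0
    M.m[0][1] = yaxis.x; M.m[1][1] = yaxis.y; M.m[2][1] = yaxis.z; // col 1
    M.m[0][2] = zaxis.x; M.m[1][2] = zaxis.y; M.m[2][2] = zaxis.z; // col 2
    return M;
}

static Vec3 mul(Vec3 v, const Mat3& M)
{
    return { v.x*M.m[0][0] + v.y*M.m[1][0] + v.z*M.m[2][0],
             v.x*M.m[0][1] + v.y*M.m[1][1] + v.z*M.m[2][1],
             v.x*M.m[0][2] + v.y*M.m[1][2] + v.z*M.m[2][2] };
}
```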

Next, you need to transform all your bounding volume points into LIGHT SPACE (using a box-frustum intersection doesn't seem optimal, but it should at least work); let's refer to the result as the LS Bounding Volume.

To proceed, you need to determine the "warping effect" (i.e. fix the projection vector's direction and length). This is done in two steps:
a) Find the direction of the projection vector:
1. Transform the camera view vector into light space (the camera direction in view space): V = (0, 0, 1) * LightSpace.
2. Project V onto the XY plane and normalize; this completely determines the projection vector direction: unitP = V.xy / |V.xy|.

b) Find the length of the projection vector:
1. Project all points of the LS Bounding Volume (already in light space) onto the unitP vector and find the minimum: minProj = min over all Points in the LS Bounding Volume of (unitP.x * Point.x + unitP.y * Point.y).
2. Compute the maximal bound on the projection vector length: maxLengthP = (epsilonW - 1) / minProj. (If you have separated receivers/casters, a more optimal version can be used; see the paper.)
3. Compute the optimal projection vector length: lengthP = coef / cos(gamma), where cos(gamma) = unitP.x * V.x + unitP.y * V.y (Figure 4).
4. Clip the optimal length against the maximal length to avoid singularities (Section 3.5 in the paper).
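Steps (a) and (b) can be sketched together like this (an illustrative reading of the post, not the demo code; the function name and the guard on maxLengthP are my assumptions):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

struct ProjectionVector { Vec2 dir; float length; };

// V is the camera view direction expressed in light space; lsPoints are the
// LS Bounding Volume points; coef and epsilonW are the tuning constants
// named in the post.
ProjectionVector computeProjectionVector(Vec3 V,
                                         const std::vector<Vec3>& lsPoints,
                                         float coef, float epsilonW)
{
    // (a) direction: project V onto the XY plane and normalize.
    float lenXY = std::sqrt(V.x * V.x + V.y * V.y);
    Vec2 unitP = { V.x / lenXY, V.y / lenXY };

    // (b1) minimum projection of all light-space points onto unitP.
    float minProj = std::numeric_limits<float>::max();
    for (const Vec3& p : lsPoints)
        minProj = std::min(minProj, unitP.x * p.x + unitP.y * p.y);

    // (b2) maximal admissible length: maxLengthP = (epsilonW - 1) / minProj.
    float maxLengthP = (epsilonW - 1.0f) / minProj;

    // (b3) optimal length from the angle between unitP and V.
    float cosGamma = unitP.x * V.x + unitP.y * V.y;
    float lengthP = coef / cosGamma;

    // (b4) clip against the maximal length to avoid singularities
    // (the bound is only meaningful when positive, i.e. minProj < 0).
    if (maxLengthP > 0.0f)
        lengthP = std::min(lengthP, maxLengthP);

    return { unitP, lengthP };
}
```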

Having the projection vector, it is possible to transform into this suboptimal "warped space" (Sections 3.4/3.5), the post-projective light space (PPLS for short). The PPLS Transform = Projection * ZRotation is the transformation from light space into this warped space. So we take all points of the LS Bounding Volume, apply the PPLS Transform to them, and divide by the w component to put the points back into real space after the transformation. Let's refer to the result as the PPLS Bounding Volume (the bounding volume points transformed into warped space).
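As a sketch of the warp itself (my reading of the post: the projection vector feeds the homogeneous w, giving w = 1 + P.x*x + P.y*y, which is consistent with the maxLengthP bound above; this is not the demo's code and it omits the ZRotation part):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Warp a light-space point: the projection vector P = lengthP * unitP
// contributes to the homogeneous w, and the divide by w produces the
// warped ("PPLS") point in real space.
Vec3 applyWarp(Vec3 pt, Vec2 unitP, float lengthP)
{
    float Px = lengthP * unitP.x;
    float Py = lengthP * unitP.y;
    float w = 1.0f + Px * pt.x + Py * pt.y;    // w = 1 + P . pt.xy
    return { pt.x / w, pt.y / w, pt.z / w };   // divide by w -> real space
}
```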

Next, we need to transform part of this warped space into the unit cube (normalized device coordinates). So we build an AABB in PPLS over the PPLS Bounding Volume points. From this AABB a linear basis can easily be constructed, and the inverse of this basis is the transformation into the unit cube (UnitCubeTransform).
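The AABB-to-unit-cube step is just an affine rescale; a minimal sketch (hypothetical helper, assuming a [0,1]^3 unit cube):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Map a point inside the PPLS AABB [mn, mx] onto the unit cube [0,1]^3.
// This is the inverse of the linear basis built from the AABB.
Vec3 aabbToUnitCube(Vec3 p, Vec3 mn, Vec3 mx)
{
    return { (p.x - mn.x) / (mx.x - mn.x),
             (p.y - mn.y) / (mx.y - mn.y),
             (p.z - mn.z) / (mx.z - mn.z) };
}
```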

In the last step, a simple transformation is applied to map the unit cube into normalized device coordinates. Finally, you need to combine all these transformations: XPSM Transform = View * LightSpace * PPLS Transform * UnitCubeTransform * NormalizedSpace. You also need to watch for left/right-handed coordinate system and row-vector/column-vector math issues if you are using OpenGL.

The negation of viewLightDir is applied just because in the demo the light vector points away from the light source, while the paper assumes the light vector points toward the light source. I hope this helps resolve the issues.

Hi Vladislav,

first thanks for your quick answer.
I guess I might be stupid, as despite your very clear explanation I don't manage to make it work.

Actually I am using a combination of cascaded shadow maps and XPSM; maybe the problem comes from that (I had this combination working with TSM before)? I split the frustum into 4 parts, each one being rendered into one quarter of the shadow map texture.

First, I am still not sure whether I have to negate the light dir or not. My light direction is like in your code ((0,-1,0) when the sun is vertical), so I should have to negate it. However, when I do that I get this (sun vertical, no bias for the moment):

http://g.jaegy.free.fr/temp/xpsm01.jpg

Look at the top left corner of the bounding box: the aircraft's landing gear is visible, which means 1/Z is rendered to the shadow map instead of Z...

When I don't negate, I get this:

http://g.jaegy.free.fr/temp/xpsm02.jpg

Also, I found out that I get a good result only if I reduce the XPSMCoef to a very low value (0.0001), which is, in my opinion, not normal. Any idea? When the value gets higher, the shadow moves to the back until it disappears...

Finally, I found out that using the frustum/caster bounding box intersection instead of what you explain in your paper causes issues, as some parts don't get rendered into the shadow map, so I will have to check this part. I am also not sure what should be rendered into each quarter of the cascaded shadow map.

Thanks again for your great support!
