Filtering (dual) paraboloid shadow maps

After some time away I'm revisiting my lighting system and in trying to shape up my point light shadow mapping, came across the paper Practical Implementation of Dual Paraboloid Shadow Maps by Osman, Bukowski & McEvoy.
The paper itself doesn't go into very much depth, but the following passage caught my eye:

The standard approach to DPSM suggests always doing the paraboloid transform in the vertex shader. This causes texture coordinates (in paraboloid space) to be interpolated linearly, giving incorrect results, as seen in figure 3. Our approach is to instead send the world-space position of the vertex in question. With this approach, the hardware’s linear interpolation gives us the worldspace location of the pixel to be shaded, and the interpolation is correct. Then, we transform the position into the light’s paraboloid space for each pixel.

This sounds good, but I cannot quite figure out what they mean by "doing the paraboloid projection in the pixel shader instead". Surely this cannot be done during shadow map generation (the pixel's position is determined by the vertex shader's output). The other interpretation is that it is done when sampling the shadow map during scene rendering, but if you just render the depth map via a normal orthographic projection, won't you lose data that doesn't "fit" the projection but that would be captured through the paraboloid projection?
Also, if this really worked, you wouldn't need tessellated shadow casters either, would you?
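For reference, the transform in question maps a unit light-space direction to 2D map coordinates like so (a quick Python sketch of the standard formula from the DPSM papers, assuming the front paraboloid faces +z):

```python
def paraboloid_project(d):
    """Map a normalized light-space direction to front-paraboloid
    texture coordinates, each in [-1, 1]."""
    x, y, z = d
    # Dividing by (1 + z) is the paraboloid reflection; directions with
    # z < 0 belong to the back map (same formula with z negated).
    return (x / (1.0 + z), y / (1.0 + z))

print(paraboloid_project((0.6, 0.0, 0.8)))  # -> (~0.333, 0.0)
```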

Finally a bonus question; it would seem that the most recent papers / articles I can find on the DPSM technique are from 2008, with most dating as far back as 2002. Has this approach generally been made obsolete by modern hardware being more readily capable of cubemapping or is there yet another (superior) technique that I'm unaware of?

Edit: I sort of messed up the title here; I was originally going to state that I'm investigating this because applying VSM and blurring to DPSMs exaggerates their inherent problems, so it would be very nice to be able to do this in linear space indeed.

Edited by Husbjörn


The other interpretation is that it is done when sampling the shadow map during scene rendering, but if you just render the depth map via a normal orthographic projection, won't you lose data that doesn't "fit" the projection but that would be captured through the paraboloid projection?

Since you're rendering the depth map in a normal orthographic projection, you're not doing the dual paraboloid projection anymore. And of course the whole depth map is invalidated if you use different projections for constructing the depth map and the lighting pass respectively. The whole point of any projection in a shadow mapping paper is to use the projection consistently across the two steps.


Finally a bonus question; it would seem that the most recent papers / articles I can find on the DPSM technique are from 2008, with most dating as far back as 2002. Has this approach generally been made obsolete by modern hardware being more readily capable of cubemapping or is there yet another (superior) technique that I'm unaware of?

Paraboloid mapping never really took off because it's a curved projection, and all our rasterization hardware is built around linear projections... Therefore you either get artifacts from the incorrect interpolation, or expend significant computational time to correct for the curvature and lose the hypothetical speed benefits. 
 
One way to fix the artefacts would be to render a cube-shadow-map (six 90º projections) and then transform that into the two parabola maps (or one or two sphere maps). This could make your shadow sampling cheaper, but obviously has the same rasterization cost of cube-shadows, plus the intermediate/conversion step.
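The conversion pass would just invert the paraboloid mapping per texel and fetch the source cubemap in the recovered direction. A sketch of that inversion (my own derivation, assuming the front paraboloid faces +z):

```python
def paraboloid_texel_to_direction(u, v, front=True):
    """Invert the paraboloid mapping: take a texel (u, v) in [-1, 1]^2
    back to the unit direction it represents, for fetching from a
    source cubemap during the conversion pass."""
    r2 = u * u + v * v            # texels with r2 > 1 fall outside the paraboloid
    z = (1.0 - r2) / (1.0 + r2)   # inverse of uv = xy / (1 + z) on the unit sphere
    x = u * (1.0 + z)
    y = v * (1.0 + z)
    return (x, y, z if front else -z)
```

The forward mapping divides xy by (1 + z), so u² + v² = (1 − z)/(1 + z) on the unit sphere, which is what the `z` line above solves for.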


Unprofessional opinion, based on my experience after implementing exactly THIS: It's not worth it. I wasn't able to get rid of the artifacts, even though some papers claim it should be possible. I got the impression that DPMs depend too much on the tessellation of your geometry. If you have gl_Layer available as a vertex shader output, use layered rendering to a cubemap target, perhaps at a lower cubemap resolution. Otherwise, do six-pass rendering to each cubemap face with per-face culling. Still better results, I bet :)

Edited by hannesp


Thanks guys. I was starting to think cubemaps may be the way to go after all, and I think that I will indeed switch to that route. The cube-to-paraboloid map approach brought up by Hodgman sounds like it could be worth a look though; it could probably be combined with the last step of my variance map post-processing.

Unprofessional opinion, based on my experience after implementing exactly THIS: It's not worth it.

Thanks for your input. I'm leaning that way as well now. However, out of interest, could you perhaps explain in more detail how you went about implementing this? I'm still having trouble understanding exactly what the referenced paper is suggesting and how / where it would be applied.


Sorry for the long wait.

The implementation is not that difficult. The old method: render your geometry with the paraboloid projection, that is, do the transformation in your vertex shader. As you might know, (linear) interpolation is done for all pixels covered by a triangle, based on their position within the triangle. That's fine, but it is incorrect if your projection is not linear, and the paraboloid projection is not linear. Imagine you have two vertices of a triangle and project them onto a sphere somehow. If you want the attributes for a non-existent vertex midway between the two, you can't linearly interpolate between them, because you won't end up with a position on the sphere's surface, but somewhere inside the sphere.
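You can see this numerically for the paraboloid case: project two vertices, interpolate the projected coordinates (what the rasterizer does), and compare against projecting the interpolated position. A small Python check using the standard front-paraboloid formula:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def project(d):
    # front-paraboloid projection of a unit direction
    x, y, z = d
    return (x / (1.0 + z), y / (1.0 + z))

# Two triangle vertices, as unit directions from the light
a = (0.8, 0.0, 0.6)
b = (0.0, 0.0, 1.0)

# What the rasterizer does with vertex-shader projection:
# linearly interpolate the already-projected coordinates
interp_uv = tuple((pa + pb) / 2.0 for pa, pb in zip(project(a), project(b)))

# What is actually correct: interpolate the position first, then project
midpoint = normalize(tuple((ca + cb) / 2.0 for ca, cb in zip(a, b)))
correct_uv = project(midpoint)

print(interp_uv[0], correct_uv[0])  # 0.25 vs ~0.236: the two disagree
```

The gap grows with triangle size, which is why coarse geometry shows the worst artifacts.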

The problematic step (for simple meshes with insufficient tessellation) is the interpolation of your positions. This step can be deferred to the fragment shader: you send world-space coordinates to your pixel shader and do the paraboloid transformation there, per pixel, during your lighting phase. Take a look at http://gamedevelop.eu/en/tutorials/dual-paraboloid-shadow-mapping.htm and http://advancedgraphics.marries.nl/presentationslides/10_dual-paraboloid_shadow_mapping.pdf and http://diaryofagraphicsprogrammer.blogspot.de/2008/12/dual-paraboloid-shadow-maps.html , to get more details :)
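A CPU-side sketch of that deferred lookup (Python for illustration; the light sitting at the origin of an axis-aligned light space, the linear depth normalization, and the `shadow_maps` pair of sample callables are my assumptions, not from the paper):

```python
import math

def sample_dpsm(world_pos, light_pos, shadow_maps, near, far):
    """Shadow test done per pixel: world_pos was interpolated linearly
    by the rasterizer (which is correct for positions), and the
    paraboloid transform happens here. shadow_maps = (front, back),
    each a sample(u, v) callable returning the stored depth."""
    # Direction and distance from the light (light space assumed axis-aligned)
    d = tuple(p - l for p, l in zip(world_pos, light_pos))
    dist = math.sqrt(sum(c * c for c in d))
    d = tuple(c / dist for c in d)

    # Pick the hemisphere: front paraboloid faces +z, back faces -z
    front = d[2] >= 0.0
    z = d[2] if front else -d[2]
    u, v = d[0] / (1.0 + z), d[1] / (1.0 + z)

    stored = (shadow_maps[0] if front else shadow_maps[1])(u, v)
    depth = (dist - near) / (far - near)  # must match how the map was written
    return depth <= stored + 1e-3         # small bias against shadow acne
```

On the GPU this is the same few ALU ops in the pixel shader, plus whatever u/v remapping your texture addressing needs.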

