Shadow wedge algorithm

Hello all, having thought and read quite a lot about soft shadow algorithms (PCF, single-sample soft shadows, smoothies, penumbra maps, shadow wedges, ...), I decided to implement the shadow wedge algorithm. Since there are quite different ways to implement it, I'd be happy to get some answers from someone actually using it. So here are my questions:

1. How to implement the Li buffer: None of the papers/implementations I looked at use floating-point render targets, only 32-bit RGBA pbuffers (floating-point targets probably weren't available at the time). The original Assarsson paper claims to use a 12-bit coverage value split across the four channels. The Peroxide thesis uses 3.5 fixed point in an 8-bit channel, as also mentioned in a presentation by Eric Lengyel. I'm thinking of using a floating-point render target for the Li buffer, since I have to deal with a lot of overlapping shadow-casting geometry. I'm aware that I DO need floating-point blending, but I don't really have to support cards older than a GeForce 6800. What do you think?

2. How to render the hard shadows: There are some alternatives: render the hard shadows into the stencil buffer and use NV_copy_depth_to_color to access the stencil information (to benefit from GL_STENCIL_TEST_TWO_SIDE_EXT), or just use floating-point blending (with glBlendFunc(GL_ONE, GL_ONE)) to simulate the stencil buffer. Is ATI planning to implement the copy_depth_to_color functionality? Can you guys think of some other technique?

3. How to classify the pixels covered by the wedges: Should I use the stencil and depth buffers to mask out the pixels inside the wedges, or use Eric Lengyel's approach of explicitly testing (using the wedge-plane equations and KIL in a fragment program) whether a pixel lies inside the wedge? I guess I'll just have to try both to see what's faster...

4. Working in view space or world space to get the fragment position for projecting the silhouette edge onto the light: Almost all of the techniques I looked at use view-space coordinates; Assarsson uses world-space coordinates. I, too, want to use view-space coordinates, so I'm just wondering why one would use world-space coordinates (apart from not having to store a depth texture, which I have anyway when using NV_copy_depth_to_color)?

Anyway, I appreciate any comments,
GuentherKrass
