Shadow wedge algorithm


Hello all, having thought and read quite a lot about soft shadow algorithms (PCF, single-sample soft shadows, smoothies, penumbra maps, shadow wedges, ...), I decided to implement the shadow wedge algorithm. Since there are quite different ways to implement it, I'd be happy to get some answers from someone actually using it. So here are my questions:

1. How to implement the Li-buffer: None of the papers/implementations I looked at use floating-point render targets; they use 32-bit RGBA pbuffers (floating-point targets probably weren't available at the time). The original Assarsson paper (http://www.cs.lth.se/home/Tomas_Akenine_Moller/pubs/soft_sig2003.pdf) describes a 12-bit coverage value split across the 4 channels. The Peroxide thesis (http://www.peroxide.dk/papers/realtimesoftshadows.pdf) uses 3.5-bit fixed point in an 8-bit channel, as also mentioned in this presentation (http://www.terathon.com/gdc_lengyel.ppt) by Eric Lengyel. I'm thinking of using a floating-point render target for the Li-buffer, since I have to deal with a lot of overlapping shadow-casting geometry. I'm aware that I DO need floating-point blending, but I don't really have to support cards older than a GeForce 6800. What do you think?

2. How to render the hard shadows: There are some alternatives: render the hard shadows into the stencil buffer and use NV_copy_depth_to_color to access the stencil information (to benefit from GL_STENCIL_TEST_TWO_SIDE_EXT), or just use floating-point blending (with glBlendFunc(GL_ONE, GL_ONE)) to simulate the stencil buffer. Is ATI planning to implement the copy_depth_to_color functionality? Can you guys think of some other technique?

3. How to classify the pixels covered by the wedges: Should I use the stencil and depth buffers to mask out the pixels inside the wedges, or use Eric Lengyel's approach of explicitly testing (using the wedge plane equations and KIL in a fragment program) whether a pixel lies inside the wedge?
I guess I'll just have to try it out to see what's faster...

4. Working in view space or world space to get the fragment position for projecting the silhouette edge onto the light: Almost all of the techniques I looked at use view-space coordinates; Assarsson uses world-space coordinates. I also want to use view-space coordinates, so I'm just wondering why one would use world-space coordinates (apart from not having to store a depth texture, which I have anyway when using NV_copy_depth_to_color)?

Anyway, I appreciate any comments,
GuentherKrass
