mv348

Modified stenciled shadow volumes algorithm - any good for gpu implementation?


Hello all,

 

Since I am not working with an artist, I get my models from sites like TurboSquid. This being the case, I can't really place constraints on my models, so I need to use fairly robust techniques.

 

In my last thread, someone suggested the algorithm proposed in this paper for creating stencil shadow volumes without placing any constraints on the model.

 

The algorithm essentially comes down to first creating a unique set of triangle edges, where each edge has a signed counter initialized to 0. You then create the shadow volume in two steps. First you render the front-facing triangles, and while doing so you either increment or decrement the counter of each of the triangle's edges based on a certain condition (specifically, based on whether the edge is directed the same way as the rendered triangle).

 

After all triangles have been processed, the silhouette edges are rendered based on the value of the counter in each edge object.
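For concreteness, here is a minimal CPU-side sketch of those two passes as I understand them (the `Tri`/`Edge`/`silhouetteEdges` names are my own, not from the paper): each front-facing triangle adjusts a signed counter on its edges, and edges whose counter ends up non-zero are kept as silhouette edges.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Hypothetical sketch of the per-edge counter passes described above.
// The front-facing test is assumed to have happened already; only
// light-facing triangles are passed in.
struct Tri { uint32_t v[3]; };

using Edge = std::pair<uint32_t, uint32_t>;   // undirected key: (min, max)

std::vector<Edge> silhouetteEdges(const std::vector<Tri>& frontFacing) {
    std::map<Edge, int> counter;              // signed counter per unique edge
    // Pass 1: every front-facing triangle bumps its three edge counters.
    for (const Tri& t : frontFacing) {
        for (int i = 0; i < 3; ++i) {
            uint32_t a = t.v[i], b = t.v[(i + 1) % 3];
            // Increment if the triangle traverses the edge in its canonical
            // (min -> max) direction, decrement otherwise; interior edges
            // shared by two front-facing triangles cancel to zero.
            counter[{std::min(a, b), std::max(a, b)}] += (a < b) ? +1 : -1;
        }
    }
    // Pass 2: edges whose final counter is non-zero form the silhouette.
    std::vector<Edge> result;
    for (const auto& [edge, c] : counter)
        if (c != 0) result.push_back(edge);
    return result;
}
```

For a quad split into two front-facing triangles, the shared diagonal cancels out and only the four boundary edges survive, which matches the silhouette you would expect.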

 

The exact details are not so important; my concern is a good GPU implementation. The shader would need to keep a list of counters, one per edge, and update them as each triangle is rendered, then process only those edges whose final counter value is non-zero. Since multiple triangles potentially update the same counter variable, I think there could be some bottleneck.

 

Is this technique just not practical for GPU optimization?

 

 


I'm not sure how such an algorithm would work using classic GLSL, since primitive assembly doesn't really share data across invocations; that is, edges are generated anew for each polygon that uses them, so there's no place for a single shared value to go.

 

That said, I can think of a number of workarounds that make this amenable to GPU computation.

 

The easiest and most obvious thing is to try OpenCL. An OpenCL kernel operating on vertex indices, for example, could atomically update a counter variable for each edge (and you only need to process each edge once, too!). The counters can then be stored in an attribute VBO passed to the shader you use to render the data, on a per-vertex basis. It would be easiest to handle edges at the geometry stage.
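To illustrate why atomics make the shared-counter problem tractable, here is a CPU-side sketch that models such a kernel with `std::atomic` and a couple of threads (all names are illustrative; a real OpenCL kernel would use `atomic_add` on a global buffer instead). It assumes each triangle's three edges have been resolved offline to a unique-edge slot plus a +1/-1 sign, so each "work item" is just one atomic add, and concurrent updates to a shared edge cannot lose increments:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// CPU model of the proposed counter kernel. edgeSlot/edgeSign hold three
// entries per front-facing triangle: the unique-edge index each triangle
// edge maps to, and whether the triangle traverses it forward (+1) or
// backward (-1). Contention on shared edges is resolved by the atomic add.
void edgeCounterKernel(const std::vector<int>& edgeSlot,
                       const std::vector<int>& edgeSign,
                       std::vector<std::atomic<int>>& counters,
                       std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        counters[edgeSlot[i]].fetch_add(edgeSign[i],
                                        std::memory_order_relaxed);
}

// Launch two threads over disjoint halves of the work, mimicking
// parallel work groups each processing a batch of triangles.
void runKernel(const std::vector<int>& edgeSlot,
               const std::vector<int>& edgeSign,
               std::vector<std::atomic<int>>& counters) {
    const std::size_t n = edgeSlot.size(), half = n / 2;
    std::thread a(edgeCounterKernel, std::cref(edgeSlot), std::cref(edgeSign),
                  std::ref(counters), 0, half);
    std::thread b(edgeCounterKernel, std::cref(edgeSlot), std::cref(edgeSign),
                  std::ref(counters), half, n);
    a.join();
    b.join();
}
```

After the kernel finishes, a second pass (or a geometry shader reading the counter attribute, as suggested above) would emit volume geometry only for edges whose counter is non-zero.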


Thanks for your response, Geometrian!

 

I haven't really had time to dig into OpenCL just yet. So you think this isn't suitable for GLSL?
