
# Modified stenciled shadow volumes algorithm - any good for gpu implementation?


3 replies to this topic

### #1 mv348 (Members)

Posted 17 January 2013 - 04:26 PM

Hello all,

Since I am not working with an artist, I get my models from sites like turbosquid. This being the case, I can't really place constraints on my models, so I need to use fairly robust techniques.

In my last thread, someone suggested an algorithm proposed in this paper for how to create stencil shadow volumes with no constraints placed on the model.

The algorithm essentially comes down to first creating a unique set of triangle edges, where each edge has a signed counter initialized to 0. You then create the shadow volume in two steps. First, you render the front-facing triangles, and while doing so you either increment or decrement the counter of each of the triangle's edges based on a certain condition (specifically, whether the edge is directed the same way as the rendered triangle).

After all triangles have been processed, the silhouette edges are rendered based on the value of the counter in each edge object.
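To make the counting step concrete, here is a minimal CPU sketch of the scheme just described. The function name and the canonical (min, max) edge ordering are my own illustrative choices, not from the paper; the input is assumed to be the triangles already determined to face the light.

```python
from collections import defaultdict

def silhouette_edges(front_facing_tris):
    """Signed per-edge counters for light-facing triangles.

    Each edge is stored once, in canonical (min, max) index order.
    A triangle that traverses an edge in canonical order adds +1,
    otherwise -1. Shared interior edges cancel to 0; edges whose
    counters survive are silhouette candidates.
    """
    counter = defaultdict(int)
    for tri in front_facing_tris:
        for i in range(3):
            a, b = tri[i], tri[(i + 1) % 3]
            key = (min(a, b), max(a, b))
            counter[key] += 1 if a < b else -1
    # Keep only the edges whose counters did not cancel.
    return {e: c for e, c in counter.items() if c != 0}

# Two light-facing triangles sharing edge (1, 2): the shared edge
# cancels to zero and drops out; the four boundary edges remain.
print(silhouette_edges([(0, 1, 2), (2, 1, 3)]))
# → {(0, 1): 1, (0, 2): -1, (1, 3): 1, (2, 3): -1}
```

The cancellation on the shared edge is the whole point: only edges on the boundary between light-facing and non-light-facing geometry end up with a non-zero count.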

The exact details are not so important; my concern is a good GPU implementation. The shader would need to keep a list of counters, one per edge, update them as each triangle is processed, and then handle only those edges whose final counter value is non-zero. Since multiple triangles may update the same counter variable, I think there could be some bottleneck.

Is this technique just not practical for GPU optimization?

### #2 Geometrian (Members)

Posted 17 January 2013 - 09:12 PM

I'm not sure how such an algorithm would work using classic GLSL, since primitive assembly doesn't really share data across invocations: edges are generated separately for each polygon that uses them, so there's no single place for a shared value to go.

That said, I can think of a number of workarounds that make this nice for GPU computation.

The easiest and most obvious thing is to try OpenCL. An OpenCL kernel operating on the vertex indices, for example, could atomically update a counter variable for each edge (and you only need to process each edge once, too!). The counters can then be stored in an attribute VBO passed, per vertex, to the shader you use to render the data. It would be easiest to handle the edges at the geometry stage.
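As a CPU analogue of that idea, the sketch below runs one "work-item" per triangle, with a lock standing in for the kernel's atomic add on a shared per-edge counter buffer. Everything here (names, the lock-as-atomic substitution, the toy mesh) is illustrative, not actual OpenCL.

```python
import threading
from collections import defaultdict

# Shared per-edge counters; in a real OpenCL kernel this would be a
# buffer indexed by edge ID, updated with atomic_add.
counters = defaultdict(int)
lock = threading.Lock()

def process_triangle(tri):
    """One 'work-item': update the counters for one triangle's edges."""
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        key = (min(a, b), max(a, b))
        with lock:  # stands in for the kernel's atomic update
            counters[key] += 1 if a < b else -1

# Two light-facing triangles sharing edge (1, 2), processed concurrently.
tris = [(0, 1, 2), (2, 1, 3)]
threads = [threading.Thread(target=process_triangle, args=(t,)) for t in tris]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The surviving non-zero counters would feed the attribute VBO for the
# later render pass.
print({e: c for e, c in counters.items() if c != 0})
```

The atomics serialize only the updates that actually collide on the same edge, which is why the OP's feared bottleneck is usually modest: a given edge is shared by at most a handful of triangles.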


### #3 mv348 (Members)

Posted 18 January 2013 - 12:41 AM

I haven't really had time to dig into OpenCL just yet. So you think this isn't suitable for GLSL?

### #4 PolyVox (Members)

Posted 18 January 2013 - 04:51 AM

I'm not sure if you've seen it (or if it's useful), but GPU Gems 3 had a chapter on doing traditional stencil shadows in the geometry shader. Maybe you can modify that?

See it here: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch11.html

Edited by PolyVox, 18 January 2013 - 04:52 AM.
