Metus

hardware vertex skinning

Recommended Posts

For a long time now, I have been discouraged from continuing the work on my game engine due to the implementation of [insert favourite animation technology here], but yesterday I managed to transform 3 vertices separately by 2 matrices using a vertex shader. I just realized that this is the first step to implementing hardware skeletal animation. My first basic steps were:

1. calculate the joint transformations
2. pass the joint transformations into the vertex shader
3. extend my vertex format to include a transformation index

and I suppose the chain of animating a character is roughly this simple:

1. calculate the entire skeletal hierarchy transformation
2. pass the non-identity transformations to the vertex shader
3. determine if the current vertex is supposed to be transformed
4. implement some kind of blending / transformation-weighting system

But before I start the next step, I just want to know if this is "the real way" of doing it, or if I'll be restricted in some way.
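The blending / weighting step above can be sketched on the CPU like this. This is a minimal linear-blend-skinning sketch in C++; all type and function names here are illustrative, not from any particular engine, and a real implementation would do this work in the vertex shader instead:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Illustrative types -- not from the engine discussed above.
struct Vec3 { float x, y, z; };

// 3x4 affine bone matrix: rotation/scale in the left 3x3, translation in column 3.
struct Mat3x4 {
    float m[3][4];
    Vec3 transformPoint(const Vec3& p) const {
        return {
            m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
            m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
            m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3],
        };
    }
};

// Vertex format extended with bone indices and blend weights.
struct SkinnedVertex {
    Vec3 position;
    std::array<std::uint8_t, 4> boneIndex; // which palette entries influence this vertex
    std::array<float, 4> weight;           // convex weights, should sum to 1
};

// Linear blend skinning: weighted sum of each influencing bone's transform.
Vec3 skinVertex(const SkinnedVertex& v, const Mat3x4* palette) {
    Vec3 out{0, 0, 0};
    for (std::size_t i = 0; i < 4; ++i) {
        if (v.weight[i] == 0.0f) continue;
        Vec3 t = palette[v.boneIndex[i]].transformPoint(v.position);
        out.x += v.weight[i] * t.x;
        out.y += v.weight[i] * t.y;
        out.z += v.weight[i] * t.z;
    }
    return out;
}
```

A vertex with a single unit weight is the "transformation index only" case from the first list; multiple non-zero weights give the blending from step 4.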

Yeah, it'd work fine that way. But depending on the models you use, you'll run out of vertex shader constants for bone transformation storage. A couple of solutions:
1) Cut your model into independently-skinned parts that can be rendered separately
2) Convert your matrices to quaternions, send those to the shader, and convert them back to matrices in the shader before doing the transformation (I think Dave Eberly wrote a sample that did this at some point)
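The point of option 2 is that a rotation fits in 4 quaternion components instead of 9 matrix entries, so with translation you upload 7 floats per bone instead of 12. A rough C++ sketch of both conversions follows (the second function is the work the shader would do; the extraction handles only the trace > 0 branch for brevity, and all names are illustrative):

```cpp
#include <cmath>

// Illustrative quaternion type -- not from any particular engine or sample.
struct Quat { float w, x, y, z; };

// CPU side: extract a unit quaternion from a pure-rotation 3x3 matrix.
// Only the trace > 0 branch is shown; a robust implementation needs the
// other three branches for matrices whose trace is near -1.
Quat quatFromRotation(const float r[3][3]) {
    float trace = r[0][0] + r[1][1] + r[2][2];
    float s = std::sqrt(trace + 1.0f) * 2.0f; // s == 4 * w
    return { 0.25f * s,
             (r[2][1] - r[1][2]) / s,
             (r[0][2] - r[2][0]) / s,
             (r[1][0] - r[0][1]) / s };
}

// "Shader side" (written in C++ here for illustration): rebuild the 3x3
// rotation from the quaternion before transforming the vertex.
void rotationFromQuat(const Quat& q, float r[3][3]) {
    float w = q.w, x = q.x, y = q.y, z = q.z;
    r[0][0] = 1 - 2*(y*y + z*z); r[0][1] = 2*(x*y - w*z);     r[0][2] = 2*(x*z + w*y);
    r[1][0] = 2*(x*y + w*z);     r[1][1] = 1 - 2*(x*x + z*z); r[1][2] = 2*(y*z - w*x);
    r[2][0] = 2*(x*z - w*y);     r[2][1] = 2*(y*z + w*x);     r[2][2] = 1 - 2*(x*x + y*y);
}
```

The trade-off is extra per-vertex ALU work in the shader in exchange for the saved constant registers.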

Of course. I mean, each mesh needs to be independently textured / shaded / transformed anyway, so that is only logical.

But in, say, shader 1.1, you can only fit so many bone transforms into constant registers. We reserve some registers for light, some for fog, some for texture transforms, some for material constants, some for view and projection matrices, etc. This leaves us with room for 20 bone matrices, if we use 4x3 matrices (only projection really needs to be 4x4). If we transformed the normals the "correct" way, using inverse-transpose matrices, we'd only fit 10 bones' worth of transforms. Note that we have characters with 80+ bones, so they can't be rendered all at once.

How do we solve this? We programmatically break the model into sections during pre-processing. First we break by material, as each material will need separate rendering anyway. Next, if the faces used by that material use over 20 bones, we break it into sub-sections. The algorithm I use is a bit long-winded to describe.
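A naive version of that kind of splitter might look like this. This is a hypothetical C++ pre-processing sketch, not the actual algorithm described above: it just greedily starts a new batch whenever a triangle would push the current batch over the bone budget, without reordering triangles to share bones between neighbours:

```cpp
#include <cstddef>
#include <set>
#include <utility>
#include <vector>

// A triangle, reduced to the set of bone indices it references.
using Triangle = std::set<int>;

// Greedily group triangles so no batch references more than maxBones
// distinct bones. A triangle that alone exceeds the budget still gets
// its own batch, since it cannot be split further here.
std::vector<std::vector<Triangle>>
splitByBoneBudget(const std::vector<Triangle>& tris, std::size_t maxBones) {
    std::vector<std::vector<Triangle>> batches;
    std::set<int> batchBones; // bones used by the current batch
    for (const Triangle& tri : tris) {
        // Bones the batch would use if this triangle joined it.
        std::set<int> merged = batchBones;
        merged.insert(tri.begin(), tri.end());
        if (batches.empty() || merged.size() > maxBones) {
            batches.emplace_back();   // start a fresh batch
            batchBones = tri;         // it begins with this triangle's bones
        } else {
            batchBones = std::move(merged);
        }
        batches.back().push_back(tri);
    }
    return batches;
}
```

A smarter splitter would sort or cluster triangles by bone usage first, so adjacent triangles land in the same batch and fewer batches are produced overall.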

Even in shader 2.0 you run out of constants easily enough.

