Skinned Mesh Questions

This topic is 3722 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Hi all,

I'm writing an MD5 loader for my engine. I can convert an MD5 model to a keyframed mesh, but that obviously defeats the point of it being skinned in the first place. I'm not using D3DXMesh; I'm managing the VB and IB myself.

The problem I have is that the DX sample I've looked at (SkinnedMesh) assumes that there's only going to be up to 4 weights per vertex (I think, it's entirely possible I'm not understanding it correctly), whereas some of the vertices I'm looking at (the imp model) have 5 weights, and there are probably some which have more. Is there some way to nicely and efficiently pass more weights (I'd like up to 8, I think) to the vertex shader? Does it sound like I'm doing something wrong here (I'm new to all this skinned mesh malarky)? Any hints are appreciated.

For reference, MD5 models come as an array of weights (position and rotation quaternion) and an array of vertices (texture coordinates, start index into the weights array, and number of consecutive weights). My test model (the imp, again) has 891 vertices and 1401 weights.

Here's the vertex shader from the SkinnedMesh sample:
struct VS_INPUT
{
    float4  Pos             : POSITION;
    float4  BlendWeights    : BLENDWEIGHT;
    float4  BlendIndices    : BLENDINDICES;
    float3  Normal          : NORMAL;
    float3  Tex0            : TEXCOORD0;
};

struct VS_OUTPUT
{
    float4  Pos     : POSITION;
    float4  Diffuse : COLOR;
    float2  Tex0    : TEXCOORD0;
};

VS_OUTPUT VShade(VS_INPUT i, uniform int NumBones)
{
    VS_OUTPUT   o;
    float3      Pos = 0.0f;
    float3      Normal = 0.0f;    
    float       LastWeight = 0.0f;
     
    // Compensate for lack of UBYTE4 on Geforce3
    int4 IndexVector = D3DCOLORtoUBYTE4(i.BlendIndices);

    // cast the vectors to arrays for use in the for loop below
    float BlendWeightsArray[4] = (float[4])i.BlendWeights;
    int   IndexArray[4]        = (int[4])IndexVector;
    
    // calculate the pos/normal using the "normal" weights 
    //        and accumulate the weights to calculate the last weight
    for (int iBone = 0; iBone < NumBones-1; iBone++)
    {
        LastWeight = LastWeight + BlendWeightsArray[iBone];
        
        Pos += mul(i.Pos, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone];
        Normal += mul(i.Normal, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone];
    }
    LastWeight = 1.0f - LastWeight; 

    // Now that we have the calculated weight, add in the final influence
    Pos += (mul(i.Pos, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight);
    Normal += (mul(i.Normal, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); 
    
    // transform position from world space into view and then projection space
    o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj);

    // normalize normals
    Normal = normalize(Normal);

    // Shade (Ambient + etc.)
    o.Diffuse.xyz = MaterialAmbient.xyz + Diffuse(Normal) * MaterialDiffuse.xyz;
    o.Diffuse.w = 1.0f;

    // copy the input texture coordinate through
    o.Tex0  = i.Tex0.xy;

    return o;
}

I understand everything after the o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); line fine, and everything before the float BlendWeightsArray[4] = (float[4])i.BlendWeights; line. A few questions I have about the section in between:
  • Because of the for loop there, does that not mean that each vertex has to have the same number of weights? If so, is there any way around this?
  • Could I pass in more than one BLENDWEIGHT and BLENDINDICES type? That would let me use up to 8 weights.
  • Am I going about this all wrong? [smile]

This is in C++, D3D9, just using a vertex shader, not through effects.

Cheers,
Steve

---
    Quote:
    Original post by Evil Steve
    The problem I have is that the DX sample I've looked at (SkinnedMesh) assumes that there's only going to be up to 4 weights per vertex (I think, it's entirely possible I'm not understanding it correctly), whereas some of the vertices I'm looking at (the imp model) have 5 weights, and there are probably some which have more.


    Fixed-function vertex blending supports up to 4 weights, with the 4th weight implied as 1-w0-w1-w2. With a vertex shader, you can have an arbitrary number of weights, limited only by the number of matrices you can define in the constant array. If you have 5 weights per vertex and you need to squish that down to 4 weights per vertex, you can drop the weight with the smallest magnitude (and therefore the smallest contribution to the weighted average) and renormalize the remaining 4 weights. Or you can write a vertex shader, which can even be run in software vertex processing, depending on your scene.

    Quote:

    Is there some way to nicely and efficiently pass in more weights (I'd like up to 8 I think) to the vertex shader?


    You can declare multiple blend weight and blend index elements with a vertex declaration.

    Quote:
    Does it sound like I'm doing something wrong here (I'm new to all this skinned mesh malarky)? Any hints are appreciated.


    Nope, doesn't sound wrong to me.

    Quote:
    • Because of the for loop there, does that not mean that each vertex has to have the same number of weights? If so, is there any way around this?



    Yes, each vertex has to have the same number of blend weights. If you need to mix chunks of a mesh that have fewer weights with chunks of a mesh that have more weights, you can simply set the unused weights to 0.0f. Alternatively, you can write a variation of this shader that accepts fewer weights and split the mesh into batches based on weight count, but it's probably easier to just set the unused weights to 0.0f.
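Padding out to a fixed influence count can be as simple as the following sketch (illustrative struct and helper, assuming a 4-influence vertex layout):

```cpp
#include <array>
#include <cstddef>
#include <utility>
#include <vector>

// Fixed-size influence set as it would go into the vertex buffer.
struct Influences4
{
    std::array<float, 4> weights;
    std::array<int, 4>   indices;
};

// Pad a variable-length influence list out to exactly 4 entries.
// Unused slots get weight 0 and bone index 0; a zero weight means that
// matrix contributes nothing, so the index value is irrelevant.
Influences4 Pad(const std::vector<std::pair<int, float>>& src)
{
    Influences4 out{}; // value-initialized: all weights 0, all indices 0
    for (std::size_t i = 0; i < 4 && i < src.size(); ++i)
    {
        out.indices[i] = src[i].first;
        out.weights[i] = src[i].second;
    }
    return out;
}
```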

    Quote:
    • Could I pass in more than one BLENDWEIGHT and BLENDINDICES type? That would let me use up to 8 weights.


    With the fixed-function pipeline: no; with a vertex shader and declaration: yes.

    Quote:
    • Am I going about this all wrong? [smile]



    Nope :-)

    I cover all the gory details of fixed-function vertex blending and tweening in Chapter 6, "Vertex Transformations", of my book. Doing it in HLSL is relatively straightforward; if you want to see VS.1.x assembly implementing vertex blending, take a look at Chapter 9, "Vertex Shaders", from my book.

---
    Generally, I simply sort the weights high to low, drop all but the top four, and renormalize them. It rarely causes anything noticeable, and if it does, you take the bat to the artist.

    While you're at it, you'll probably need to tell him he doesn't need 1,000 polys for the trigger of your rifleman's rifle, since he's obviously that kind of artist.

    :D

---
    Awesome, thanks legalize. I'll take a look at this tomorrow. I'm staying clear of fixed function now I've started to get to grips with vertex shaders (It took me long enough [smile]). I never thought of just setting unused weights to 0.0f (d'oh).

    At the moment, I'm just using Quake 4 / Doom 3 models for testing my converter / loader, so no artist to beat. Well, not one I could get away with anyway.

---
    Quote:
    Original post by Evil Steve
    I'm staying clear of fixed function now I've started to get to grips with vertex shaders (It took me long enough [smile]).


    Even though that chapter talks about fixed-function vertex blending, you may want to look it over. One thing I tried to do with my book is cover the conceptual framework first, and then show how the API exposes that concept. So there is lots of discussion and diagrams (and math, sorry mathophobes) of how vertex blending works.

    The sample code for that chapter is a modified version of the SDK vertex blending sample to have more weight distributions and matrix variation to make visualizing the blending process easier.

---
    Ok, I'm lost again [smile]

    I have the following structs, as read from my file (obviously my vertex struct will be different):

    struct ModelWeightedVertex
    {
        u32 nWeightIndex; // Index into weights array of start weight
        u32 nWeights;     // Number of weights
        float tu;
        float tv;
    };

    struct ModelWeight
    {
        float fWeight;
        EVector3 vPos;
        EVector3 qOrient; // XYZ components of unit quaternion
    };

    struct ModelFrameSkinned
    {
        char szName[64]; // Null-terminated
        EVector3 vBBoxMin;
        EVector3 vBBoxMax;
        ModelWeight pWeights[1]; // Array of ModelHeader::nNumWeights entries
    };
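Since qOrient above stores only the X/Y/Z components of a unit quaternion, W has to be rebuilt at load time. A sketch, assuming the negative-root convention commonly used for MD5/Doom 3 data:

```cpp
#include <cmath>

// Rebuild the W component of a unit quaternion from its stored X/Y/Z.
// MD5 files omit W; by the usual Doom 3 convention the negative root is
// taken. The clamp guards against floating-point drift in the stored
// components pushing the radicand slightly below zero.
float ComputeQuatW(float x, float y, float z)
{
    float t = 1.0f - (x * x + y * y + z * z);
    return (t < 0.0f) ? 0.0f : -std::sqrt(t);
}
```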


    So, each vertex refers to a number of weights, which are blended together. (I'm also not sure if I have my terminology correct; I seem to be getting confused between weights, bones and joints.)

    Some questions:
  • How do I get the vertex weights to the vertex shader? Do they go in constant registers, or do they go in another vertex stream or something? This is the main part I don't understand, the SDK sample uses ID3DXMesh which confuses me...
  • This model has 1401 weights, which means I need to use D3DDECLTYPE_SHORT4 rather than D3DDECLTYPE_UBYTE4 (Or D3DDECLTYPE_D3DCOLOR) for the weight index - is that correct?
  • Does anyone have some sample code for indexed skinning I could borrow, and/or does anyone know of any good links I should look at? (I've already had a look at the two chapters Richard linked, thanks.)


    EDIT: I think I want to bin support for MD5 in my converter program; this all looks absolutely horrible and counter-intuitive, e.g. weights with positions...


    Thanks again,
    Steve

    [Edited by - Evil Steve on October 2, 2007 8:44:16 AM]

---
    Quote:
    Original post by Evil Steve
    How do I get the vertex weights to the vertex shader? Do they go in constant registers, or do they go in another vertex stream or something? This is the main part I don't understand, the SDK sample uses ID3DXMesh which confuses me...


    ID3DXMesh just has vertices and indices. The weights are stored per-vertex in the vertex data. Since the weights are different for each vertex, it's the only place that makes sense. Constant registers can only be changed at most once per primitive.
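A per-vertex layout along these lines might look like the following (hypothetical field order; in practice it must match the vertex declaration you create):

```cpp
#include <cstdint>

// One possible vertex layout for 4-influence indexed skinning.
// Each field would map to an element in the D3D9 vertex declaration.
struct SkinnedVertex
{
    float   pos[3];      // POSITION     (D3DDECLTYPE_FLOAT3)
    float   weights[4];  // BLENDWEIGHT  (D3DDECLTYPE_FLOAT4)
    uint8_t indices[4];  // BLENDINDICES (D3DDECLTYPE_UBYTE4)
    float   normal[3];   // NORMAL       (D3DDECLTYPE_FLOAT3)
    float   uv[2];       // TEXCOORD0    (D3DDECLTYPE_FLOAT2)
};
```

The weights and indices travel in the vertex buffer like any other attribute; only the bone matrices go in constant registers.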

    Quote:
    This model has 1401 weights, which means I need to use D3DDECLTYPE_SHORT4 rather than D3DDECLTYPE_UBYTE4 (Or D3DDECLTYPE_D3DCOLOR) for the weight index - is that correct?


    Nope. You aren't referencing 1401 weights per vertex. Earlier I believe you said you had at most 5 weights per vertex. That's 4 floats per vertex, with the 5th float implied as 1-(sum of other weights). So, that's the weights part.

    For the indices part, what you're doing is associating a transformation matrix with each weight. The idea of indexed vertex blending is that you have a large array of transformation matrices (the fixed function pipeline provides 256; you could have more with the constant register file of a vertex shader). Each vertex weight is associated with an index that identifies the transformation matrix that goes with the weight. By using indices this way you can batch together larger groups of primitives, because while each vertex uses at most 5 weights and therefore at most 5 matrices, any triangle can pick any 5 matrices from the array. If you didn't use indexed vertex blending, you'd have to batch the primitives based on the matrices that they used. This would lead to lots of very small batches.

    Quote:
    Does anyone have some sample code for indexed skinning I could borrow, and/or does anyone know of any good links I should look at (I've already had a look at the two chapters Richard linked, thanks)


    I thought the SkinnedMesh sample in the SDK had all the variations.

    In my sample I generate the weights programmatically and don't load them from a .x file; I add them to an existing mesh. Perhaps that will help.

---
    Hmm. The main thing that's confusing me here, is that the MD5 format seems to be different from a traditional skinned mesh. In a MD5 model, vertices don't have positions at all, just a reference to a weight, and a weight has position and orientation.
    That would be fine, except vertices are affected by multiple weights, so the position of the vertex depends on up to 5 weight positions and orientations, and I can't figure out a way of translating that into a vertex struct and code I can run in a shader.
    MD5 animations just move the position of joints (which is the same as a normal skinned mesh), and that drags the weights around, which drag the vertices around.

    From what I can tell, the only way to do the interpolation with the data in this format is on the CPU rather than in a shader...

    EDIT: Err, wait... I screwed up - the weights don't have orientation, only position. For some reason I copied the joint orientation into the weight. That means I can use the weights as vertices, I think...

    EDIT 2: Ok, how does this sound...

    I could store 5 positions, 5 weights, 5 indices and 1 set of texture coordinates in each vertex (I know that seems like way too many). For any unused slots, I set the weight to 0. That's basically the weight data from the file, with the texture coordinate for each vertex. That'll give me a pretty big vertex struct, but it'll be static.
    Then, each frame, I set the matrices for the joints (bones), or maybe two frames worth and interpolate in the shader.

    That seems way over the top, but I can't see how else to do it, since vertices don't have a position, just a list of weights, and the weights have positions.
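The per-weight accumulation described here, done on the CPU, might look like this sketch (field names are illustrative, not the exact MD5 spec names):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };

// Rotate v by unit quaternion q, using the expanded form of q * (v,0) * q^-1:
// v' = 2(u.v)u + (s^2 - u.u)v + 2s(u x v), where u = (q.x,q.y,q.z), s = q.w.
Vec3 Rotate(const Quat& q, const Vec3& v)
{
    Vec3 u{q.x, q.y, q.z};
    float s  = q.w;
    float uv = u.x * v.x + u.y * v.y + u.z * v.z;  // dot(u, v)
    float uu = u.x * u.x + u.y * u.y + u.z * u.z;  // dot(u, u)
    Vec3 c{u.y * v.z - u.z * v.y,                   // cross(u, v)
           u.z * v.x - u.x * v.z,
           u.x * v.y - u.y * v.x};
    return {2.0f * uv * u.x + (s * s - uu) * v.x + 2.0f * s * c.x,
            2.0f * uv * u.y + (s * s - uu) * v.y + 2.0f * s * c.y,
            2.0f * uv * u.z + (s * s - uu) * v.z + 2.0f * s * c.z};
}

struct Joint  { Vec3 pos; Quat orient; };
struct Weight { int joint; float bias; Vec3 pos; };

// CPU-side MD5 skinning: a vertex's position is the bias-weighted sum of its
// weights, each weight position transformed from joint space into model space
// by its joint's orientation and position.
Vec3 SkinVertex(const std::vector<Weight>& weights,
                const std::vector<Joint>& joints,
                int firstWeight, int numWeights)
{
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < numWeights; ++i)
    {
        const Weight& w = weights[firstWeight + i];
        const Joint&  j = joints[w.joint];
        Vec3 p = Rotate(j.orient, w.pos); // weight position in joint space
        out.x += (j.pos.x + p.x) * w.bias;
        out.y += (j.pos.y + p.y) * w.bias;
        out.z += (j.pos.z + p.z) * w.bias;
    }
    return out;
}
```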

    [Edited by - Evil Steve on October 3, 2007 8:43:01 AM]

---
    Quote:

    The main thing that's confusing me here, is that the MD5 format seems to be different from a traditional skinned mesh.

    It is. It's optimized for software skinning; despite the names of particular fields in the format being the same, their values and semantics aren't exactly the same as you'd expect for hardware skinning.

    You have to do a bit of mucking around with the data to massage it into the format you'll need for hardware skinning. In particular I remember having to do something to rebuild the bind pose, or compute the inverse bind pose matrices, or some such... but unfortunately I abandoned MD5 loading a while ago to homogenize my asset pipeline using Collada and my own format.
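Assuming rigid joint transforms (rotation followed by translation), the inverse-bind-pose step mentioned here might be sketched as follows: applying the inverse of a bind-pose joint takes a bind-pose-space vertex into the joint's local space, so that multiplying by an animated joint transform afterwards yields the animated position.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };

Quat Conjugate(const Quat& q) { return {-q.x, -q.y, -q.z, q.w}; }

// Rotate v by unit quaternion q (expanded form of q * (v,0) * q^-1).
Vec3 Rotate(const Quat& q, const Vec3& v)
{
    Vec3 u{q.x, q.y, q.z};
    float s  = q.w;
    float uv = u.x * v.x + u.y * v.y + u.z * v.z;
    float uu = u.x * u.x + u.y * u.y + u.z * u.z;
    Vec3 c{u.y * v.z - u.z * v.y,
           u.z * v.x - u.x * v.z,
           u.x * v.y - u.y * v.x};
    return {2.0f * uv * u.x + (s * s - uu) * v.x + 2.0f * s * c.x,
            2.0f * uv * u.y + (s * s - uu) * v.y + 2.0f * s * c.y,
            2.0f * uv * u.z + (s * s - uu) * v.z + 2.0f * s * c.z};
}

// Inverse of the rigid transform "rotate by q, then translate by t":
// p' = conj(q) * (p - t) * q. This is the inverse-bind-pose transform for
// a bind-pose joint with orientation q and position t.
Vec3 InverseBindTransform(const Quat& q, const Vec3& t, const Vec3& p)
{
    Vec3 d{p.x - t.x, p.y - t.y, p.z - t.z};
    return Rotate(Conjugate(q), d);
}
```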

    You might want to poke on IRC and ask mittens or PfhorSlayer (in #graphicsdev) about it, as they've both done it more recently I think. If you show up when I'm on I can try to dig up my old code and remember what exactly I had to do to make everything click.
