dpadam450

Skeletal Animation (skinning..?)


Right now I have an infantry model with 9 parts/bones. I think what I'm trying to do is "skinning". The picture explains it all: the bones' vertices don't meet after they are rotated. Each part is stored as its own VBO. I have no idea what code I need, or where to put it, to have the parts draw correctly. I'm wondering if the VBOs are going to make it harder to do?

The picture doesn't really explain much. It looks like you aren't actually skinning, but rather just transforming the vertices individually. What file format are you using? How are you transforming each vertex? With a matrix? With glTranslate & glRotate, etc.?

A bit of code would go a long way too [smile]

OK, there's the upper leg and the lower leg. They are stored in 2 separate VBOs. I put them through their own separate matrices each frame (as well as any parent matrices).

Basically, where the two parts meet, they share something like 8 vertices (around the knee) when they are exported. So where they are separated, there are 16 vertices that should really be joined as 8.

I'm pretty sure I need to do "skinning"... maybe? I understand how I would do it in software, taking all the vertices, transforming them on the CPU and then drawing, but since I want to transform on the GPU I know there must be a way to do it there.

I think I may just use a shader and use the glColorPointer data to indicate which bone transforms the current vertex. Something like:

if(color.r < 0.001)
{
    // transform by bone one
}
if(color.r > 0.98)
{
    // transform by bone two
}

Then I only use one VBO per model; plus, I don't ever use the glColorPointer data anyway.
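For reference, a minimal GLSL vertex shader along those lines might look something like this (just a sketch; the uniform names boneOne/boneTwo are made up, and it only handles a single bone per vertex):

uniform mat4 boneOne; // hypothetical names - one matrix per part
uniform mat4 boneTwo;

void main()
{
    // pick a bone based on the red channel fed in through glColorPointer
    mat4 bone;
    if (gl_Color.r < 0.001)
        bone = boneOne;
    else
        bone = boneTwo;

    // transform the vertex by that bone, then by the usual matrices
    gl_Position    = gl_ModelViewProjectionMatrix * (bone * gl_Vertex);
    gl_TexCoord[0] = gl_MultiTexCoord0;
}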

Right - that is not skinning. Skinning works by giving each vertex a set of weights and indices: the indices say which bones transform it, and the weights say how much influence each of those bones has on the final vertex.

If you use, say, a second set of texture coordinates to store the bone indices and, say, the colour as the weights, you get a maximum of 4 bones per vertex - which is fine for most purposes. You don't really want/need more.

You pass the bone matrices in as uniforms to the vertex shader, and use the second set of texture coordinates (cast to int) to pick out which matrices you are using. Multiply the vertex by each of those matrices, then multiply each result by the colour component associated with it: r for the first influence, g for the second, b for the third and a for the fourth. Add these all up to get your final vertex.
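In GLSL, the core of that comes out to roughly the following (just a sketch, assuming the weights arrive through the colour and the indices through a spare texture coordinate set, here gl_MultiTexCoord1; the array size of 12 is arbitrary):

uniform mat4 boneMatrices[12]; // arbitrary size - one entry per bone

void main()
{
    ivec4 index  = ivec4(gl_MultiTexCoord1); // bone indices, cast to int
    vec4  weight = gl_Color;                 // bone weights, should sum to 1

    // blend the bind-pose vertex by each influencing bone
    vec4 skinned = (boneMatrices[index.x] * gl_Vertex) * weight.x
                 + (boneMatrices[index.y] * gl_Vertex) * weight.y
                 + (boneMatrices[index.z] * gl_Vertex) * weight.z
                 + (boneMatrices[index.w] * gl_Vertex) * weight.w;

    gl_Position    = gl_ModelViewProjectionMatrix * skinned;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}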

Sorry if my explanation is a bit, well, crap. I am useless at explanations, so I am hoping that someone else here might be able to explain it a little better, but I shall post my skinning vertex shader ( Cg, although easily turned into GLSL ) for reference.

Another thing to note - each bone matrix you pass in needs to be the bone's animated matrix combined with its inverse bind pose: the inverse bind pose transforms the vertex back about the origin, ready to be transformed by the real bone matrix.
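On the CPU side that composition happens once per bone, per frame. A sketch, assuming a placeholder Matrix4 type with an overloaded * and column-vector style matrices (matrix * vertex, matching the mul() calls in the shader below):

void BuildSkinningMatrices( const Matrix4* animatedBone,    // animated transform, parents included
                            const Matrix4* inverseBindPose, // inverse of the bone's bind-pose transform
                            Matrix4*       boneMatrices,    // what gets uploaded as the uniform array
                            int            numBones )
{
    for( int i = 0; i < numBones; ++i )
    {
        // the inverse bind pose runs first, bringing the vertex back to the
        // bone's local space; the animated transform then moves it into place
        boneMatrices[i] = animatedBone[i] * inverseBindPose[i];
    }
}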

Anyway, here's some maybe-helpful code. It's got some half-written lighting in there, but you can safely ignore that:


/**************************************************************************
*
* File: RsVsSkin.cgfx
* Author: Neil Richardson
* Ver/Date:
* Description:
* Vertex Skinning + Per-pixel lighting
*
*
*
*
**************************************************************************/


//////////////////////////////////////////////////
// Input
float4x4 ModelView;
float4x4 ModelViewProj;
float4x4 BoneMatrices[12];

// Lighting
float4 LightPos[4];
float4 LightDiffCol[4];
float4 LightSpecCol[4];

float LightAttnCons[4];
float LightAttnLine[4];
float LightAttnQuad[4];

// Material
float MatSpecular;

// Diffuse Map
sampler2D TexMap0 = sampler_state
{
    minFilter = Linear;
    magFilter = Linear;
};

// Specular Map
sampler2D TexMap1 = sampler_state
{
    minFilter = Linear;
    magFilter = Linear;
};

// Normal Map
sampler2D TexMap2 = sampler_state
{
    minFilter = Linear;
    magFilter = Linear;
};

//////////////////////////////////////////////////
// Structures
struct appdata
{
    float4 TexCoord : TEXCOORD0;
    float4 Normal   : NORMAL;
    float4 Tangent  : TEXCOORD1;
    float4 Position : POSITION;
    float4 Weights  : TEXCOORD2;
    float4 Indices  : TEXCOORD3;
};

struct vfconn
{
    // Basic Params.
    float4 Position : POSITION;
    float4 TexCoord : TEXCOORD0;
    float4 FragPos;
    float4 LightDir[4];
};

//////////////////////////////////////////////////
// Vertex
void RsVsSkinVert( appdata IN,
                   out vfconn OUT,
                   uniform float4x4 ModelView,
                   uniform float4x4 ModelViewProj,
                   uniform float4x4 BoneMatrices[12] )
{
    // Calculate vertex position, normal and tangent
    float4 Position = float4( 0.0, 0.0, 0.0, 0.0 );

    float3 Normal  = float3( 0.0, 0.0, 0.0 );
    float3 Tangent = float3( 0.0, 0.0, 0.0 );

    Position = Position + ( mul( BoneMatrices[ IN.Indices.x ], IN.Position ) * IN.Weights.x );
    Position = Position + ( mul( BoneMatrices[ IN.Indices.y ], IN.Position ) * IN.Weights.y );
    Position = Position + ( mul( BoneMatrices[ IN.Indices.z ], IN.Position ) * IN.Weights.z );
    Position = Position + ( mul( BoneMatrices[ IN.Indices.w ], IN.Position ) * IN.Weights.w );

    Normal = Normal + mul( (float3x3)BoneMatrices[ IN.Indices.x ], IN.Normal.xyz ) * IN.Weights.x;
    Normal = Normal + mul( (float3x3)BoneMatrices[ IN.Indices.y ], IN.Normal.xyz ) * IN.Weights.y;
    Normal = Normal + mul( (float3x3)BoneMatrices[ IN.Indices.z ], IN.Normal.xyz ) * IN.Weights.z;
    Normal = Normal + mul( (float3x3)BoneMatrices[ IN.Indices.w ], IN.Normal.xyz ) * IN.Weights.w;
    Normal = normalize( Normal );

    Tangent = Tangent + mul( (float3x3)BoneMatrices[ IN.Indices.x ], IN.Tangent.xyz ) * IN.Weights.x;
    Tangent = Tangent + mul( (float3x3)BoneMatrices[ IN.Indices.y ], IN.Tangent.xyz ) * IN.Weights.y;
    Tangent = Tangent + mul( (float3x3)BoneMatrices[ IN.Indices.z ], IN.Tangent.xyz ) * IN.Weights.z;
    Tangent = Tangent + mul( (float3x3)BoneMatrices[ IN.Indices.w ], IN.Tangent.xyz ) * IN.Weights.w;
    Tangent = normalize( Tangent );

    // Transform the normal and tangent.
    Normal  = mul( (float3x3)ModelView, Normal );
    Tangent = mul( (float3x3)ModelView, Tangent );

    // Build TBN Matrix
    float3 BiTangent = cross( Normal.xyz, Tangent.xyz );
    float3x3 TBNMatrix = float3x3( Tangent, BiTangent, Normal );

    OUT.FragPos  = mul( ModelView, Position );
    OUT.Position = mul( ModelViewProj, Position );
    OUT.TexCoord = IN.TexCoord;

    // Calculate light directions
    // NOTE: Light positions should be in world space already.
    //for( int i = 0; i < 4; ++i )
    //{
    //    float3 LightDir = OUT.FragPos - LightPos[ i ].xyz;
    //    LightDir = normalize( LightDir );
    //
    //    OUT.LightDir[ i ] = float4( mul( TBNMatrix, LightDir.xyz ), 1.0 );
    //}
}

//////////////////////////////////////////////////
// Fragment
float3 expandNormal( float3 N )
{
    return ( N - 0.5 ) * 2.0;
}

float4 RsVsSkinFrag( vfconn IN,
                     uniform sampler2D TexMap0,
                     uniform sampler2D TexMap1,
                     uniform sampler2D TexMap2 ) : COLOR
{
    float4 DiffMap = tex2D( TexMap0, IN.TexCoord.xy );
    float3 SpecMap = tex2D( TexMap1, IN.TexCoord.xy ).xyz;
    float3 NormMap = expandNormal( tex2D( TexMap2, IN.TexCoord.xy ).xyz );

    // Diffuse Lighting
    float3 TotalDiffuse = float3( 0.0, 0.0, 0.0 );

    for( int i = 0; i < 4; ++i )
    {
        float3 LightDir = normalize( IN.LightDir[ i ].xyz );
        float4 LightVec = IN.FragPos - LightPos[ i ];

        float Distance = length( LightVec );
        float Attenuation = 1.0 / ( ( LightAttnCons[ i ] ) +
                                    ( LightAttnLine[ i ] * Distance ) +
                                    ( LightAttnQuad[ i ] * Distance * Distance ) );

        float3 DiffuseColour = LightDiffCol[ i ] * Attenuation * ( ( dot( LightDir, NormMap ) + 1.0 ) * 0.5 );

        TotalDiffuse = TotalDiffuse + DiffuseColour;
    }

    return float4( DiffMap.xyz * TotalDiffuse, DiffMap.w );
}

//////////////////////////////////////////////////
// Techniques

technique RsVsSkin_nv40
{
    pass
    {
        FragmentProgram =
            compile fp40 RsVsSkinFrag( TexMap0, TexMap1, TexMap2 );
        VertexProgram =
            compile vp40 RsVsSkinVert( ModelView, ModelViewProj, BoneMatrices );
    }
}

technique RsVsSkin_nv30
{
    pass
    {
        FragmentProgram =
            compile fp30 RsVsSkinFrag( TexMap0, TexMap1, TexMap2 );
        VertexProgram =
            compile vp30 RsVsSkinVert( ModelView, ModelViewProj, BoneMatrices );
    }
}

Quote:
Original post by dpadam450
Basically, where the two parts meet, they share something like 8 vertices (around the knee) when they are exported. So where they are separated, there are 16 vertices that should really be joined as 8.



You just explained the solution right there. Those 8 vertices on both mesh pieces will need to end up in the same location (and ideally each pair should be a single vertex).

Having all the different parts in a *static* VBO is not going to give you the correct results, since this is a dynamic situation. You should process all the vertices on the CPU to calculate their final locations and submit them all with one draw call - not a matrix switch and a draw call for each bone.

Can't be bothered reading all the jargon, so here it is: the points that separate when the knee is bent need to be part of one section, not two sections as you currently have it (one for the upper leg, one for the lower leg). If you are trying to join together two completely separate models, then you are making things more difficult on top of what can already be a pain to start with.

Basically what you have done is the equivalent of getting a hacksaw and cutting through your kneecap, and then wondering why you have a big open gash in your leg when you walk. Your leg isn't two separate things that meet at certain angles; it is one continuous thing with parts that rotate around different points: your upper leg rotates around the hip, and the lower leg rotates around the knee and then the hip, etc.

Yeah, I understand the explanation, as well as the one about putting 2 separate parts together. That's why I was asking what I could do to join them.

As for computing on the CPU: what is the point of using VBOs if you do everything on the CPU? I mean, the CPU could process it fast enough, but if you have loads of skeletons being drawn at once, especially depending on the poly counts, then your CPU power diminishes greatly. When computing on the CPU, you are also sending the data across the bus to the GPU for every model every frame, and if you have 10 infantry models, you're eating up 10x more bandwidth than you should. That's why I/everyone uses VBOs in the first place.

hey

Well, I made some pseudo code for you...


// firstly we reset every point/normal of the model
for(lp1 = 0; lp1 < model.vertexNum; lp1++) {

    // reset vertex point/normal
    model.vertexPoint[lp1]  = 0;
    model.vertexNormal[lp1] = 0;
}

// now we loop through each joint of the model
for(lp1 = 0; lp1 < model.jointNum; lp1++) {

    // and loop through each vertex index of the joint
    for(lp2 = 0; lp2 < model.joint[lp1].indexNum; lp2++) {

        // transform the vertex point/normal from bind space into bone space
        // note, the normal only needs the 3x3 rotation part of offsetMatrix
        // note, we are multiplying a vector by a matrix
        tempPoint  = model.vertexBindPoint[ model.joint[lp1].vertexIndex[lp2] ] * model.joint[lp1].offsetMatrix;
        tempNormal = model.vertexBindNormal[ model.joint[lp1].vertexIndex[lp2] ] * model.joint[lp1].offsetMatrix;

        // now we transform the vertex point/normal into character space
        // note, we are multiplying a vector by a matrix
        tempPoint  = tempPoint  * model.joint[lp1].combineMatrix;
        tempNormal = tempNormal * model.joint[lp1].combineMatrix;

        // now we accumulate the vertex point/normal according to the weight
        // note, we are multiplying a vector by a scalar
        model.vertexPoint[ model.joint[lp1].vertexIndex[lp2] ]  += (tempPoint  * model.joint[lp1].weight[lp2]);
        model.vertexNormal[ model.joint[lp1].vertexIndex[lp2] ] += (tempNormal * model.joint[lp1].weight[lp2]);
    }
}

// and that's it...

// almost forgot, once every joint has added its contribution you should
// also normalise each vertex normal
for(lp1 = 0; lp1 < model.vertexNum; lp1++) {
    normalise(model.vertexNormal[lp1]);
}







BTW, I think you should store all the vertex point/normal data for the whole model together; it should not be split up at each joint.

I hope this makes sense :).

cyas

dpadam450,

You need to export the model as one continuous mesh. If you are generating the model sections by hand, don't. If you are exporting a different mesh object for each section, don't. There should be one geometry file for the whole mesh. Each vertex will have a flag associated with it indicating which bone(s) affect it. This flag would have to be created in some sort of modelling package - either an existing one, or your own custom content-creation pipeline.
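In terms of data, every vertex of that single mesh ends up carrying something like this (a sketch; the field names and the four-influence limit are just an assumption):

// Hypothetical skinned-vertex layout for the single, unsplit mesh.
struct SkinnedVertex
{
    float position[3];    // bind-pose position
    float normal[3];      // bind-pose normal
    float texCoord[2];
    float boneIndex[4];   // which bones affect this vertex
    float boneWeight[4];  // how strongly each of those bones affects it (sums to 1)
};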

I highly recommend you check out this tutorial for Milkshape, a popular modelling format. RSN

hey

@honayboyz: There are many ways you can associate each vertex with a joint/bone. If you look at the pseudo code in my previous post, you can see I have a JOINT structure which, for each joint, holds an array of weights and vertex indices. I think the way I have described is similar to how the skeletal data is represented in the DirectX *.X format, which might be worth checking out for dpadam450. I prefer this way as there is no limit on the number of weights per vertex.
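To make that concrete, the JOINT structure I mean holds something along these lines (a sketch; the names just mirror my pseudo code above, and Matrix4 is a placeholder for whatever matrix type you use):

struct Joint
{
    Matrix4 offsetMatrix;   // bind space -> bone space
    Matrix4 combineMatrix;  // bone space -> character space (animated each frame)
    int     indexNum;       // number of vertices this joint influences
    int*    vertexIndex;    // indices of the influenced vertices
    float*  weight;         // one weight per influenced vertex
};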

cyas

