Skinned animation of meshes from FBX files - How to calculate the matrices?

4 comments, last by Endgegner85 7 years, 8 months ago

Dear community,

I started developing my own video game and my very own game engine some months ago. I made some good progress and I was able to realize everything I wanted so far. Sometimes I needed some online research, but I figured everything out after some time. Now, I am totally stuck. I do not know whether I did something wrong in my concept or if I am on the right path and I just have a minor bug. I am thankful for any help!

I wrote a C++ application that is able to read FBX files (using Autodesk's FBX SDK) and write all the information I need into a custom file format I designed for my game (engine). I am able to convert meshes, material and texture information, and some more data into my file format, and I can display all meshes properly in my game. I bought quite some assets for the game and everything renders fine! Now, I want to extend my file format, the FBX file converter, and my game engine to support skinned animation.

I read several online articles, like http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/how-to-work-with-fbx-sdk-r3582 and http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/skinned-mesh-animation-using-matrices-r3577, book chapters, and many more, and I think I understand the concept.

Data structures

I have added a new data structure for joints. Each joint has a name, stores its local transformation (translation, rotation, scaling), and has a set of child joints. Each of my mesh's vertices has a set of weighted links to joints.

So, every vertex knows the joints which affect its position, including their weights. And each joint knows its place in the skeleton hierarchy (parent joint / child joints) and its local transformation.
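To make the description concrete, here is a minimal sketch of these data structures in plain Python. All names and the example skeleton are illustrative, not taken from the actual engine code:

```python
# Illustrative sketch of the described data structures: a joint hierarchy
# plus per-vertex weighted links to joints. Names are made up for the example.
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str
    translation: tuple                 # local translation (x, y, z)
    rotation: tuple                    # local rotation, e.g. Euler angles
    scaling: tuple                     # local scale factors
    children: list = field(default_factory=list)
    parent: "Joint" = None

@dataclass
class JointLink:
    joint: Joint
    weight: float                      # influence of this joint on the vertex

@dataclass
class SkinnedVertex:
    position: tuple                    # bind-pose position read from the file
    joint_links: list = field(default_factory=list)

# Example: a two-joint chain influencing one vertex.
root = Joint("Bip001", (0, 0, 0), (0, 0, 0), (1, 1, 1))
head = Joint("Bip001 Head", (0, 2, 0), (0, 0, 0), (1, 1, 1), parent=root)
root.children.append(head)
vertex = SkinnedVertex((0.0, 2.5, 0.0),
                       joint_links=[JointLink(head, 0.7), JointLink(root, 0.3)])
```

Note that the weights of a vertex's links are normally expected to sum to 1, which matters later when the weighted joint matrices are blended.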

What I wanted to do

I have already read and converted the animation stack, including all key frames in the FBX file. However, before starting with the animation itself, I wanted to be able to show the model in its initial pose. I imported the FBX file into Unity first to make sure that the FBX file is valid and to see what the initial pose looks like.

How I tried to do it

Every vertex in my mesh has a set of weighted links to joints. I wanted to calculate each vertex's initial-pose position using a matrix I called JointPosition. Each vertex has its own JointPosition, which is calculated using the weighted links (see below). Finally, I calculate the position of the vertex like this in my vertex shader:


struct VS_IN
{
	float4 pos : POSITION;
	float4 col : COLOR;
	float2 TextureUV : TEXCOORD0;
	matrix anim : INSTANCE;
};

PS_IN VS(VS_IN input)
{
	PS_IN output = (PS_IN)0;

	output.col = input.col;
	output.TextureUV = input.TextureUV;

	// Calculate position.
	matrix w = transpose(mul(input.anim, World));
	matrix v = transpose(View);
	matrix p = transpose(Projection);

	matrix mat = mul(w, v);
	mat = mul(mat, p);

	output.pos = mul(input.pos, mat);

	return output;
}

anim is the calculated JointPosition and pos is the vertex's position which was read from the FBX file.

As far as I understood, the Transformation Link Matrix contains the local position information for each joint (with FBX SDK: currCluster->GetTransformLinkMatrix(transformLinkMatrix);). I store the translation, rotation and scaling vector of this Transformation Link Matrix in every joint.

In order to calculate the JointPosition of a vertex, I use:


// Sum up the weighted matrices of all joints linked to this vertex.
SharpDX.Matrix matrix = new SharpDX.Matrix(0.0f);
foreach (var jointLink in JointLinks)
    matrix += jointLink.CalculateMatrix();

JointPosition = matrix;

Where each joint link multiplies its weight with the joint's world matrix:


public SharpDX.Matrix CalculateMatrix()
{
    return Weight * Joint.CalculateMatrix();
}

I use the following code to calculate a joint's world matrix. I create the world matrix by multiplying the scaling, rotation, and translation matrices using the vectors I got from the FBX file. The if-statement is required since some nodes have an invalid scaling vector. If a joint has a parent, I multiply its world matrix with its parent's world matrix, because the movement of a joint affects the position of all sub-joints.


SharpDX.Vector4 t = LinkTranslation;
SharpDX.Vector4 r = LinkRotation;
SharpDX.Vector4 s = LinkScaling;

// Some nodes carry an invalid (all-zero) scaling vector; skip them.
if (s.X == 0.0f && s.Y == 0.0f && s.Z == 0.0f)
    return SharpDX.Matrix.Identity;

// Compose scaling, rotation and translation into the joint's local matrix.
SharpDX.Matrix result = Helper.Converters.ToMatrix(t, r, s);

// Accumulate the parent's world matrix up the hierarchy.
if (Parent != null)
    result = SharpDX.Matrix.Multiply(result, Parent.CalculateMatrix());

return result;
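The recursive accumulation above (local matrix times parent world matrix, in SharpDX's row-vector convention) can be sketched in plain Python with translation-only matrices. All names and numbers here are made up for illustration:

```python
# Sketch of world = local * parentWorld in a row-vector (row-major) convention,
# using translation-only 4x4 matrices to keep the example short.
def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """Row-vector convention: the translation lives in the last row."""
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [x, y, z, 1]]

def world_matrix(local, parent_world=None):
    """world = local * parentWorld, applied recursively up the chain."""
    return local if parent_world is None else mat_mul(local, parent_world)

# Example: a child joint offset by (0, 1, 0) under a parent offset by (0, 2, 0)
# ends up at (0, 3, 0) in world space.
parent = world_matrix(translation(0, 2, 0))
child = world_matrix(translation(0, 1, 0), parent)
print(child[3])  # → [0, 3, 0, 1]
```

The multiplication order matters: with row vectors, the child's local matrix goes on the left and the parent's world matrix on the right, which matches the `Matrix.Multiply(result, Parent.CalculateMatrix())` call above.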

What I got

Well, the result looks pretty messed-up. :'-(

If I replace the last source code fragment with the Identity matrix, I see my mesh with the default pose, i.e. in a pose which is not affected by the joints of the mesh.

I thought I understood the concept of skinned animation, but I must have messed something up, because the result looks like a large blurry junk pile.


As far as I can see, this thread has over 420 views but no answer so far. Since I do not have a clue where my problem is, I'd be happy about any help. It would help me, for example, if someone could assure me that my overall idea is correct. Or if anyone can confirm my vertex shader, that would also help. Anything that helps me reduce the possible error sources is more than welcome! :)

Hello,

I worked on some other topics in the last two weeks to clear my mind, and now I am working on my animation system again. Until now, I tried to update all joints at once, e.g. to show a key frame of the animation or to show the initial pose of the model.

Now, I took another approach by setting the world matrices of just a few selected joints. My CalculateMatrix function was changed to:


SharpDX.Matrix result = SharpDX.Matrix.Identity;

if (Name.Equals("Bip001 Head"))
    result = SharpDX.Matrix.Translation(new SharpDX.Vector3(0.0f, 2.0f, 0.0f));
else if (Name.Equals("Bip001 R Forearm"))
    result = SharpDX.Matrix.RotationX(Helper.Math.DegreeToRadian(90.0f));
else if (Name.Equals("Bip001 L Foot"))
    result = SharpDX.Matrix.Scaling(2.0f);

if (Parent != null)
    result = SharpDX.Matrix.Multiply(result, Parent.CalculateMatrix());

return result;

That means the head is translated, the right forearm is rotated, and the left foot is scaled. I also updated my vertex shader to this:


PS_IN VS(VS_IN input)
{
	PS_IN output = (PS_IN)0;

	output.col = input.col;
	output.TextureUV = input.TextureUV;

	// Calculate position.
	matrix w = transpose(input.anim);
	matrix v = transpose(View);
	matrix p = transpose(Projection);

	matrix mat = mul(w, v);
	mat = mul(mat, p);

	output.pos = mul(input.pos, mat);

	return output;
}

That means the normal world matrix, which contains the position information of my model, is no longer used. Instead of the model's world matrix, I use the calculated JointPosition of my vertices. As I explained in my initial post, this property is calculated for each vertex by summing up the weighted local matrices of all linked joints.

The result of my changes can be seen in the image at the bottom of this post. The shape of each part (head, foot and forearm) is correct. The inheritance of the parent joints' information also works. Looking at the image, I would say that the rotation and scaling are not done around the origin. What do you guys think? What can be the source of this strange appearance?

At the moment, I am only multiplying the JointPosition with the view and projection matrix.

mesh.png

I'm surprised I'm going to be the first one to respond, since animation is not really my forte.

Anyway, about your last post: the picture is exactly what you should expect. The foot is the only thing scaled, so it looks like it's stretching away from the body. That's probably because the point of origin is the center of the body, and since only the foot is being scaled, it looks stretched away from the origin and larger. As for the head translation, I saw you already said it was correct.

The arm took a minute to figure out, because at first I was not sure which directions the X and Z axes are. But since the forearm is still pointing in the same direction after the rotation around the X axis, I'm assuming the direction the arms point in is the X axis. That would make sense, because you are rotating the vertices of the forearm around the point of origin on the X axis. Since the arms were probably around 0 on the Y axis, rotating them 90 degrees on the X axis would put them pretty close to 0 on the Z axis, so now you can see that the left forearm is pretty much directly under the point of origin on the Z axis. (I can't see how you get a right forearm and a left foot out of that, but it doesn't really matter; they look like the left forearm and right foot to me.)

So, what you're doing there is working just fine. Now on to the pose calculations. When calculating the final vertex position using weights and joint positions and orientations, you have to make sure your vertex is in joint (bone) space before anything else, so that when you apply the first joint's matrix (the joint the vertex is attached to), the point is transformed in that joint's space. Then you apply the parent joint's matrix, which moves the vertex into the parent's space, and so on until you reach the top joint in the hierarchy.

You could either pre-calculate the vertex position in joint space, or do it at runtime. Since translation, rotation and scale together only take up 3 rows of a matrix, you could use the 4th row to store the joint position. Then, in the shader, instead of multiplying the vertex position with the WVP matrix directly, you would first subtract the joint position from the vertex position to get it into joint space, then multiply the result with the WVP matrix.
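The "into joint space, transform, back out" idea can be illustrated with a tiny plain-Python sketch for a single joint and a rotation around the X axis. All names and numbers are made up for the example:

```python
# Illustration of transforming a vertex in joint space: subtract the joint's
# position, rotate there, then translate back. Single joint, full weight.
import math

def rotate_x(point, degrees):
    """Rotate a 3D point around the X axis by the given angle."""
    rad = math.radians(degrees)
    x, y, z = point
    return (x,
            y * math.cos(rad) - z * math.sin(rad),
            y * math.sin(rad) + z * math.cos(rad))

def skin_single_joint(vertex, joint_pos, degrees):
    """Move the vertex into joint space, rotate it there, move it back."""
    local = tuple(v - j for v, j in zip(vertex, joint_pos))   # into joint space
    rotated = rotate_x(local, degrees)
    return tuple(r + j for r, j in zip(rotated, joint_pos))   # back to model space

# A vertex one unit above an "elbow" joint, joint rotated 90° around X:
elbow = (0.0, 5.0, 0.0)
vertex = (0.0, 6.0, 0.0)
print(skin_single_joint(vertex, elbow, 90.0))  # ≈ (0.0, 5.0, 1.0)
```

Without the subtract/add pair, the same rotation would swing the vertex around the model's origin instead of around the joint, which matches the stretched-away-from-origin artifacts described above.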

Again, I'm not an expert at animation, so I can't tell you the best way to do this when your vertex has multiple weights. I'd have to do some research on animation to figure out an efficient way.

EDIT: Totally beside the point, but I see where you got the right forearm and left foot. It's because you modeled the guy facing you, and named them according to your right and left, not the guy's right and left, haha.

Thanks, iedoc! I wondered where I missed something or did something wrong. Your answer helped me very much, but I still have trouble.

I have updated my Vertex Shader to:


PS_IN VS(VS_IN input)
{
	PS_IN output = (PS_IN)0;

	output.col = input.col;
	output.TextureUV = input.TextureUV;

	// Calculate position.
	matrix worldViewProjection = mul(transpose(World), mul(View, Projection));

	float4 test = mul(input.anim, /* TODO */ input.pos);
	output.pos = mul(test, worldViewProjection);

	return output;
}

This vertex shader produces the result shown in the image at the bottom of this post. The world-view-projection matrix is applied correctly, and my joint matrices work as well. As far as I understood from your post and from reading this guide http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/how-to-work-with-fbx-sdk-r3582 again, I am missing the projection of my vertex into joint space before applying the joints' matrices.

The author of the guide above wrote the equation:


VertexAtTimeT = TransformationOfPoseAtTimeT * InverseOfGlobalBindPoseMatrix * VertexAtBindingTime

I have the TransformationOfPoseAtTimeT, which is the matrix I calculated for my joints. I also have VertexAtBindingTime; this is the vertex I read from the FBX file. So, the missing part is the InverseOfGlobalBindPoseMatrix. This part should be inserted into my vertex shader at the position marked "TODO" (see code above), if I understood everything correctly. Following the guide, I read this matrix from the FBX file for each joint, and I made sure that my converter obtained exactly the matrix that was written in the FBX file. I tried to apply this matrix in my vertex shader at the "TODO" location, but it led to strange results. The result basically looked like a big chunk of something weird.

In my updated code, I calculated the InverseOfGlobalBindPoseMatrix for each vertex by weighting the InverseOfGlobalBindPoseMatrix of each linked joint, the same way as for the TransformationOfPoseAtTimeT. However, I am not sure if I also have to consider the InverseOfGlobalBindPoseMatrix of a joint's parent. Is this information final for each joint, or do I need to accumulate the InverseOfGlobalBindPoseMatrix from the parents?

I feel like I am really close to the solution, but it seems I am still stuck. :-/
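For what it's worth, the guide's equation with per-vertex weight blending can be sketched in plain Python like this. Translation-only matrices and made-up numbers keep it short; the inverse bind pose used here is the inverse of a joint's global bind transform, so it is one matrix per joint:

```python
# Sketch of: finalVertex = sum_i weight_i * (inverseBindPose_i * pose_i)
# applied to the bind-pose vertex, in a row-vector (v * M) convention.
def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    """Apply a row-major 4x4 matrix to a row vector (v * M)."""
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [x, y, z, 1]]

def skin(bind_vertex, links):
    """links: list of (weight, pose_matrix, inverse_bind_pose_matrix)."""
    out = [0.0, 0.0, 0.0, 0.0]
    for weight, pose, inv_bind in links:
        # Into joint space (inverse bind pose), then to the current pose.
        p = mat_vec(mat_mul(inv_bind, pose), bind_vertex)
        out = [o + weight * c for o, c in zip(out, p)]
    return out

# Joint A: globally bound at y=2, posed at y=3 (moves its vertices up by 1).
# Joint B: bound at y=2, posed at y=2 (no motion). Equal weights of 0.5.
links = [(0.5, translation(0, 3, 0), translation(0, -2, 0)),
         (0.5, translation(0, 2, 0), translation(0, -2, 0))]
print(skin([0.0, 2.5, 0.0, 1.0], links))  # → [0.0, 3.0, 0.0, 1.0]
```

Note that in this sketch the weighted blend is applied to the combined (inverse bind × pose) matrices, not to the inverse bind poses alone.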

I did not make this model myself, iedoc. I got it from an online store. I write an extensive log file for each model I convert from FBX to my own file format, so I can easily find the names of the joints. :-)

mesh2.png

Hey guys, I just wanted to let you know that I was able to solve my problems with the animation system. I am now able to load FBX files, extract the animation information, and animate the imported mesh with almost no CPU or memory load. It was quite a tough journey, but I am very happy with the result. Now I am working on re-targeting animations, i.e. making it possible to use animations I import for one mesh on other meshes. It will take some time to make it fully work, but I think I can finish it by the end of next week, if everything goes well.

It does not make much sense to post any code here, since I did not make one big mistake, but had to touch my source code in many places. But if any of you need help with animations, I'd be happy to help :)

This topic is closed to new replies.
