
# DX11 How To Apply Skinning

## 11 posts in this topic

I'm attempting to add animation to my game using skinning; however, I do not know how to actually apply it to my model's vertices.

Currently, I have written my own exporter for Blender that (correctly) outputs the vertex weights for each vertex, and I have loaded that into my DX11 app. But how do I then apply the various bone transforms to the vertices? What information do I need? I intend to write my own exporter for the animation data as well; looking at the .x format, it appears you export the position, rotation and scale of each bone relative to its parent for each frame (which I imagine you then turn into a matrix for that frame).

But I then go on to read about how I have to transform each vertex into "bone space", then "armature space" (what I imagine the .x file exports), and then finally "world space" (using your world matrix). How do I calculate this "bone space" matrix? I've tried using the inverse matrix of the bone in its "rest position" and then multiplying this by the pose matrix for the bone, but this doesn't appear to work.


##### Share on other sites

A couple of articles may give you a start on the concept and the process - one on skinned meshes, the other on an animation controller for a skinned mesh. Although the discussion in those articles is based on some DX9 work, I think you'll find the principles applicable.

In particular, the skinned mesh article discusses why and how the vertices are transformed into bone space, and the animation transforms applied to get the vertices back into world space in animated position.

With regard to exporting animations, you needn't export every frame if the rotation motion between key frames is consistent with SLERP (Spherical Linear intERPolation). I think that's what Blender uses between keyframes, in any case.

Tips: ensure the blend weights for any vertex sum to 1, as you can then use more efficient blending shaders. Also, things will work out better in the long run if you keep the bone influences per-vertex to 4 or less. That allows packing blend indices and weights into the vertex format more easily.
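As a sketch of that packing advice (the struct and function names below are hypothetical, not from any particular engine): keep the four largest influences per vertex and renormalize so the remaining weights sum to 1.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical per-vertex skinning data: 4 bone indices + 4 weights,
// the usual layout for packing into a vertex buffer.
struct SkinVertexData {
    std::array<uint8_t, 4> boneIndices{};
    std::array<float, 4>   boneWeights{};
};

SkinVertexData PackInfluences(std::vector<std::pair<uint8_t, float>> influences)
{
    // Sort by weight, largest first, and drop everything past the 4th.
    std::sort(influences.begin(), influences.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });
    if (influences.size() > 4) influences.resize(4);

    float total = 0.0f;
    for (const auto& inf : influences) total += inf.second;

    SkinVertexData v{};
    for (size_t i = 0; i < influences.size(); ++i) {
        v.boneIndices[i] = influences[i].first;
        v.boneWeights[i] = influences[i].second / total; // renormalize to sum to 1
    }
    return v; // unused slots stay index 0, weight 0
}
```

Unused slots keep a weight of exactly 0, which is what the shader relies on when it blends all four influences unconditionally.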

If you search for Frank Luna's site (EDIT: it's d3dcoder.net), he's got some D3D11 examples** for download. You'll have to search through it a bit to find the skinning example but the effect file (something-or-other.fx) will give you an idea how a skinned animation shader works.

EDIT:

> "...exporter for Blender that (correctly) outputs the vertex weights for each vertex.."

And the indices?

**In d3d11CodeSet3, chapter 25, Skinned Mesh - take a look at SsaoNormalDepth.fx. Note: his files, etc., are (C) 2011, Frank Luna, All Rights Reserved. FYI, I have found his books invaluable for learning the ins and outs. I've had trouble getting his D3D11 samples to compile, but it's not difficult to rework them.

Edited by Buckeye

##### Share on other sites

Thanks for the links, I shall definitely be having a look at them!

In answer to your question about whether I export the indices correctly, my exporter is based on the OBJ exporter but with vertex weights added, so everything else is exported/loaded fine.


##### Share on other sites

> Thanks for the links, I shall definitely be having a look at them!
>
> In answer to your question about whether I export the indices correctly, my exporter is based on the OBJ exporter but with vertex weights added, so everything else is exported/loaded fine.

Just to be clear, I meant bone indices, not vertex indices. Your vertex format will eventually contain both the bone indices and weights. Just checkin'.


##### Share on other sites

To perform skinning you have to do the following steps:

1.) Construct hierarchy (BIND POSE)

2.) Compute local matrices for each joint (BIND POSE)

3.) Compute world matrices for each joint (BIND POSE)

4.) Compute inverse world matrices for each joint (BIND POSE)

5.) Transform all vertices and normals via inverse world matrices

6.) Compute animation local matrices

7.) Compute animation world matrices

8.) Transform all inverse vertices and normals (from point 5.) over animation world matrices (point 7)
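The steps above can be sketched in deliberately minimal code. To keep the sketch short, each joint's transform is a pure translation (so composing transforms is vector addition and inverting is negation); `Joint`, `SkinVertex`, and the field names are hypothetical, and a real implementation uses full 4x4 matrices.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Translation-only "matrices" keep the sketch short: composing two
// transforms is adding offsets, and inverting is negating.
struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a)         { return {-a.x, -a.y, -a.z}; }

struct Joint {
    int  parent;     // -1 for the root (step 1: hierarchy)
    Vec3 bindLocal;  // step 2: local transform in the bind pose
    Vec3 animLocal;  // step 6: local transform for the current frame
};

Vec3 SkinVertex(const std::vector<Joint>& joints, int boneIndex, Vec3 v)
{
    // Steps 3 and 7: walk up the hierarchy to accumulate the bind-pose
    // and animated world transforms for this bone.
    Vec3 bindWorld{0, 0, 0}, animWorld{0, 0, 0};
    for (int j = boneIndex; j != -1; j = joints[j].parent) {
        bindWorld = bindWorld + joints[j].bindLocal;
        animWorld = animWorld + joints[j].animLocal;
    }
    // Step 4: invert the bind-pose world transform (the "offset" matrix).
    // Steps 5 and 8: move the vertex into bone space, then out again
    // with the animated world transform.
    return animWorld + (-bindWorld) + v;
}
```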


##### Share on other sites

> To perform skinning you have to do the following steps:
>
> 1.) Construct hierarchy (BIND POSE)
>
> 2.) Compute local matrices for each joint (BIND POSE)
>
> 3.) Compute world matrices for each joint (BIND POSE)
>
> 4.) Compute inverse world matrices for each joint (BIND POSE)
>
> 5.) Transform all vertices and normals via inverse world matrices
>
> 6.) Compute animation local matrices
>
> 7.) Compute animation world matrices
>
> 8.) Transform all inverse vertices and normals (from point 5.) over animation world matrices (point 7)

As a very general description of the process, that's correct. However, for the sake of clarity: for a skinned mesh with multiple influence bones per vertex, steps 5 and 8 are done at frame-render time. For each bone, the vertex is transformed by the product of the bone's offset matrix (inverse bind pose) and the bone's animation matrix, then multiplied by the bone's weighting factor; the resulting weighted vertex positions are summed to yield the vertex's animated world position.
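That weighted sum can be sketched as follows. For brevity each bone's final (offset times animation) matrix is reduced to a pure translation; `BlendVertex` and the names are hypothetical, and the same accumulation is what the vertex shader does with real matrices.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Each bone's final transform (offset * animation) reduced to a
// translation for brevity; the real thing is a 4x4 matrix multiply.
struct Vec3 { float x, y, z; };

Vec3 BlendVertex(Vec3 v,
                 const std::array<Vec3, 4>& boneFinal,
                 const std::array<float, 4>& w)
{
    Vec3 out{0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        // Transform the vertex by bone i's final transform, scale by its
        // weight, and accumulate. Unused influences have weight 0, so
        // they contribute nothing.
        Vec3 p{v.x + boneFinal[i].x, v.y + boneFinal[i].y, v.z + boneFinal[i].z};
        out.x += p.x * w[i];
        out.y += p.y * w[i];
        out.z += p.z * w[i];
    }
    return out;
}
```

Because the weights sum to 1, a vertex influenced only by bones whose final transform is the identity comes out unchanged.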

Edited by Buckeye

##### Share on other sites

To update, I have managed to display the model in bind position correctly (at last!), but when I try to apply any actual animation, it deforms with the correct weights/indices etc. but the model is simply not in the right final position.

My code for generating the "finalMatrix" that is sent to the GPU to display the model in bind position is as follows (based upon the links Buckeye gave):

```cpp
void CalcBindFinalMatrix(Bone* bone, Matrix& parentMatrix)
{
    bone->combinedMatrix = bone->localMatrix * parentMatrix;
    bone->finalMatrix = bone->offsetMatrix * bone->combinedMatrix;

    for (auto child = bone->children.begin(); child != bone->children.end(); ++child)
    {
        CalcBindFinalMatrix(*child, bone->combinedMatrix);
    }
}
```


This displays correctly, as expected. However, the following function, which is run for every frame of animation and is almost identical, gives weird transformations, almost as if the whole model should be rotated 90 degrees.

The bone-space transformations for each frame are originally stored in bone->keyFrames[frameNo], and once multiplied etc. they are also the matrices sent to the GPU.

```cpp
void CalcPoseFinalMatrix(Bone* bone, Matrix& parentMatrix, UINT frameNo)
{
    Matrix com = bone->keyFrames[frameNo] * parentMatrix;
    bone->keyFrames[frameNo] = bone->offsetMatrix * com;

    for (auto child = bone->children.begin(); child != bone->children.end(); ++child)
    {
        CalcPoseFinalMatrix(*child, com, frameNo);
    }
}
```


Also, my HLSL code for performing the skinning is as follows:

```hlsl
// b = bone indices
// w = bone weights

float4 Pos1 = mul(Pos, xBones[b[0]]);
float4 Pos2 = mul(Pos, xBones[b[1]]);
float4 Pos3 = mul(Pos, xBones[b[2]]);
float4 Pos4 = mul(Pos, xBones[b[3]]);

if (w[0] != 0.0)
{
    Pos = (Pos1 * w[0]) + (Pos2 * w[1]) + (Pos3 * w[2]) + (Pos4 * w[3]);
}

outputPos = mulWorldViewProj(Pos, xWorld, xView, xProj);
```


Is there anything inherently wrong with my code to cause these issues? I'll post some pictures of what's going wrong if needed!

Many thanks.

Edited by Mrfence97

##### Share on other sites

> this function (which is run for every frame of animation) which is almost identical, gives weird transformations, almost like the whole model should be rotated 90 degrees.

How do you call the two functions? That is, you start the iteration with something like CalcBindFinalMatrix( root, identityMatrix ) ? What do those calls look like?

Also, are keyframe matrices calculated for every bone? Sometimes the animation set may not include (for instance) the root frame or other bones that aren't animated. If that's the case, sometimes the assumption is that for any bone which does not appear in the animation set, the bind pose matrix will be used. You can test that by initializing all the keyFrame matrices with the bind pose matrices.

With regard to the shader code, what's the intent of "if (w[0] != 0.0 )"? That looks kind of suspicious. If w[0] is 0, what's Pos supposed to be?

Also, do all vertices have 4 influence bones? If they do not, have you checked that the unused weights and indices are 0?


##### Share on other sites
Yes, I call both functions exactly as you've written - the "Bind" one immediately after I have loaded in the localMatrix, and the other one after I have loaded in the animation data.

I'm pretty sure there is a keyframe matrix for each bone; as I said, I've written the exporter myself, and for debugging it exports every frame with every bone, exporting its position, rotation and scale (all values are exactly the same as in a .x file I have also exported as a comparison, so the exporter isn't at fault). I will initialize the keyframe matrices with the bind pose matrix, however, just to make sure!

Well, in my shader Pos is initially set to the vertex position, so if the weight is zero, I still want it to draw the vertex in its regular position (again, this is for debugging purposes).

All unused weights are 0 I'm afraid!

##### Share on other sites

Does the mesh have a parent frame? If so, do you take that frame's transformation matrix into account?

Does either the scene frame or the root frame have a (non-identity) transformation matrix? If so, and you call the Calc functions with the root bone and an identity matrix, how do you take that transformation matrix into account?

When you render the bind pose (which you say appears correctly), do you use the same shader calls, setting the bone matrix array from the final matrix array?

> All unused weights are 0 I'm afraid!

That's good, and as it should be.

Edited by Buckeye

##### Share on other sites

The mesh does have a parent frame (when exported as a .x file that is), but it is a simple rotation to get it from Blender to DirectX's coordinate system, which I already account for.

The root frame again does have a transformation matrix, when exported as a .x, but it again looks like a rotation, and if I use that instead of the identity matrix with the Calc functions, it just rotates the bone affected vertices and makes it even more wrong!

I use exactly the same shader calls the only difference being:

```cpp
xBones[i] = _boneArray[i]->finalMatrix;        // Bind pose
// vs
xBones[i] = _boneArray[i]->keyFrames[frameNo]; // "Animated" pose
```


I have also decided to attach a picture showing what exactly is wrong with the model, on the left is Blender at ~frame 0 and 90, compared to in my app on the right. It appears that the forehead's movement in my app has been rotated 90 degrees downwards so it is going into the Monkey's face, rather than forward and out. This makes me think it is something to do with the rotation keys I'm exporting, or maybe the order I'm using to create the initial bone space matrices per frame; is Scale * Rotation * Translation correct?

http://imgur.com/Fz9Ma1Z
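On the Scale * Rotation * Translation question: with DirectX's row-major, row-vector convention (v' = v * M), that product applies the scale first, then the rotation, then the translation, which is the usual order for composing a bone's local transform. A minimal sketch, where the `Mat4` helpers are hypothetical stand-ins for the D3DX matrix functions:

```cpp
#include <cassert>
#include <cmath>

// Minimal 4x4 row-major matrices with D3D's row-vector convention:
// v' = v * M, so in the product S * R * T the scale applies first.
struct Mat4 { float m[4][4]; };

Mat4 Mul(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat4 Identity() { Mat4 r{}; for (int i = 0; i < 4; ++i) r.m[i][i] = 1; return r; }

Mat4 Scale(float s)
{ Mat4 r = Identity(); r.m[0][0] = r.m[1][1] = r.m[2][2] = s; return r; }

Mat4 Translate(float x, float y, float z)
{ Mat4 r = Identity(); r.m[3][0] = x; r.m[3][1] = y; r.m[3][2] = z; return r; }

Mat4 RotateZ(float a) // row-vector rotation about Z
{
    Mat4 r = Identity();
    r.m[0][0] =  std::cos(a); r.m[0][1] = std::sin(a);
    r.m[1][0] = -std::sin(a); r.m[1][1] = std::cos(a);
    return r;
}

struct Vec3 { float x, y, z; };

Vec3 Transform(Vec3 v, const Mat4& m) // v treated as a row vector, w = 1
{
    return { v.x * m.m[0][0] + v.y * m.m[1][0] + v.z * m.m[2][0] + m.m[3][0],
             v.x * m.m[0][1] + v.y * m.m[1][1] + v.z * m.m[2][1] + m.m[3][1],
             v.x * m.m[0][2] + v.y * m.m[1][2] + v.z * m.m[2][2] + m.m[3][2] };
}
```

With (1, 0, 0), a scale of 2, a 90-degree Z rotation, and a translation of (0, 0, 5), the point is scaled to (2, 0, 0), rotated to (0, 2, 0), then translated: reversing the product order gives a different result, which is one way a consistent 90-degree error can creep in.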

Thanks again for any insight you can give


##### Share on other sites

> The mesh does have a parent frame (when exported as a .x file that is), but it is a simple rotation to get it from Blender to DirectX's coordinate system, which I already account for.
>
> The root frame again does have a transformation matrix, when exported as a .x, but it again looks like a rotation, and if I use that instead of the identity matrix with the Calc functions, it just rotates the bone affected vertices and makes it even more wrong!

What reference frame (axes) are you using in your app? Right-handed or left-handed? Which axis is "up" in your system? How do you create your view and projection matrices?

If you're using a left-handed reference frame, the root frame transformation from the x-file is more than "a simple rotation," it's a transformation between two different reference frames. It's likely a scale and a rotation. It would be suspicious for that frame transformation to be applied to a mesh within the frame hierarchy. Normally, DirectX exporters for Blender export a root frame with the right-to-left-handed transformation (Blender to DirectX), and everything (including the frame containing the mesh) as children of that root frame. In a couple of exporters I've used, the transformation is a rotation about the axis (0, 1, 1) followed by a scale of (1, 1, -1).
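One common way to state that handedness conversion (Blender is right-handed with Z up, DirectX is conventionally left-handed with Y up) is a per-vertex swap of Y and Z. This is only one convention and, as noted above, the exact transform depends on the exporter; the function name here is hypothetical.

```cpp
#include <cassert>

// Blender: right-handed, Z-up. DirectX: left-handed, Y-up.
// Swapping Y and Z converts between the two conventions; note the
// swap is a reflection (determinant -1), which is exactly the
// handedness change.
struct Vec3 { float x, y, z; };

Vec3 BlenderToDirectX(Vec3 v) { return { v.x, v.z, v.y }; }
```

Applying the same conversion consistently to vertices, bone matrices, and keyframes matters; converting only some of the data is a classic source of "everything is rotated 90 degrees" symptoms.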

What version of Blender are you using and what DirectX exporter are you using in your comparisons?

Also,

> I'm pretty sure there is a keyframe matrix for each bone

If you're having trouble rendering correctly, you should know what data you have. Those are things you have to check in your debugging efforts. If you have garbage coming in, you'll have garbage going out.

Edited by Buckeye
