# DX11 GPU Skinning Problem

## Recommended Posts

Hello,

I have a problem with GPU skinning. I load my object from a COLLADA file: the vertices with their weights and bone indices, and the bones with their matrices.

For every vertex I pick 4 weights and 4 bone indices.

For every non-skinned vertex I use the weights (1, 0, 0, 0) and the bone indices (0, 0, 0, 0); index 0 of the bone matrices array holds an identity matrix.

I also check that the weights always sum to 1, and normalize them if they don't.

So far so good. My shader looks like this:

bool HasBones;
matrix BoneMatrices[256];

struct Vertex
{
    float3 Position  : POSITION;
    float3 Normal    : NORMAL;
    float2 UV        : TEXCOORD0;
    float3 Tangent   : TANGENT;
    float4 Weights   : WEIGHTS;
    int4 BoneIndices : BONEINDICES;
};

float4 ApplyBoneTransform(Vertex input, float4 value)
{
    if (HasBones)
    {
        float4x4 skinTransform = (float4x4)0;
        skinTransform += BoneMatrices[input.BoneIndices.x] * input.Weights.x;
        skinTransform += BoneMatrices[input.BoneIndices.y] * input.Weights.y;
        skinTransform += BoneMatrices[input.BoneIndices.z] * input.Weights.z;
        skinTransform += BoneMatrices[input.BoneIndices.w] * input.Weights.w;

        return mul(value, skinTransform);
    }
    else
        return value;
}

Pixel VS(Vertex input) // (entry point name assumed)
{
    Pixel result = (Pixel)0;

    float4 posWorld = mul(ApplyBoneTransform(input, float4(input.Position.xyz, 1.0f)), World);
    result.Position = mul(mul(posWorld, View), Projection);
    result.Normal = normalize(mul(ApplyBoneTransform(input, float4(input.Normal.xyz, 1.0f)), WorldIT));
    result.UV = input.UV;
    result.View = ViewInverse[3] - mul(float4(input.Position.xyz, 1.0f), World);
    result.Tangent = normalize(mul(ApplyBoneTransform(input, float4(input.Tangent.xyz, 1.0f)), WorldIT).xyz);
    result.Binormal = normalize(cross(input.Normal, input.Tangent));

    return result;
}

If I set HasBones to true, my object no longer draws correctly; I only see two dark triangles.

I believe the problem is in the bone matrices I load from the controller_lib of the COLLADA file and send to BoneMatrices in the shader, plus the identity matrix at index 0.

Does anyone have an idea what I'm doing wrong, and could you help me and explain it?

I'm attaching the COLLADA file to this post, plus images of the object drawn with HasBones = false and HasBones = true.

Greets

Benjamin

Model.dae

##### Share on other sites

You do linear interpolation of multiple matrices, which makes no sense here.

You should transform the vertex by each matrix and lerp the resulting vectors instead.

But to do so, you first need to transform the vertex into the local bone space, so you typically store 2 matrices for each bone: one (static) that transforms to bone space, and another (animated) that transforms the result back to world space.

Something like this:

EDIT: Hey, who upvoted this nonsense? Maybe crossing it out helps.

vec os = currentVertexPosInObjectSpace;
vec ws(0); // result
for each affecting bone {
    int i = bone.matrixIndex;
    vec v = ObjectSpaceToBoneMatrices[i].Transform(os); // sadly this is necessary so the vertex knows its position relative to each bone
    v = AnimatedBoneToWorldSpaceMatrices[i].Transform(v); // but knowing that, we can now transform to world space for this bone
    ws += v * bone.weight; // sum up the weighted results
}

Edited by JoeJ

##### Share on other sites

Unfortunately this kind of problem needs to be debugged. What helped me was loading a very simple model (I see you are already using a box, so that should be fine) with a very simple bone structure. Verify that your bone matrices are correct by drawing simple lines with each transform matrix; the lines should form a skeleton when everything is right.
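For example, a rough sketch of such a debug draw (Joint, GetBoneWorldMatrix and DrawLine stand in for whatever your engine provides; a SharpDX-style row-major Matrix is assumed):

// Draw a line from every bone to its parent; together the lines form the skeleton.
void DrawSkeleton(IEnumerable<Joint> bones)
{
    foreach (Joint joint in bones)
    {
        if (joint.Parent == null)
            continue; // the root has no parent to connect to

        // The translation part of the world matrix is the bone's position.
        Vector3 bonePos   = GetBoneWorldMatrix(joint).TranslationVector;
        Vector3 parentPos = GetBoneWorldMatrix(joint.Parent).TranslationVector;

        DrawLine(bonePos, parentPos); // e.g. batched into a line-list vertex buffer
    }
}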

The most common mistakes are messing up the order of the Scale * Rotate * Translate matrix multiplication, as well as the bone hierarchy order, and there is also the need to multiply by a t-pose-relative matrix (the inverse t-pose); all of these need to be combined in the correct order. Also, HLSL expects matrices in column-major order by default, while your application-side math library might produce row-major matrices, so they should either be pre-transposed, or you reverse the multiplication order.
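For instance, with a row-major math library (SharpDX assumed here) and the default column-major HLSL 'matrix', the upload side would roughly look like this (UploadBoneMatrices is a placeholder for your constant buffer update):

// HLSL 'matrix' defaults to column-major; a row-major C# matrix must be
// transposed before the upload (or you reverse the mul() order in the shader).
Matrix[] gpuMatrices = new Matrix[boneMatrices.Count];
for (int i = 0; i < boneMatrices.Count; i++)
    gpuMatrices[i] = Matrix.Transpose(boneMatrices[i]);

UploadBoneMatrices(gpuMatrices); // e.g. UpdateSubresource on the cbuffer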

The shader you posted doesn't look too bad at first glance, though the normal and tangent vectors probably shouldn't have their w component set to 1, to avoid applying translation to them. I also noticed that you are adding your matrices together after multiplying them by the bone weights. That should produce correct results, but fewer operations are needed if you transform the vector by each bone matrix first, multiply each result by its weight, and then add the vectors together. The result should be about the same (floating-point accuracy aside) but computed faster. You also don't have to use a 4x4 matrix; a 3x4 is perfectly enough, with the implicit 4th row being (0, 0, 0, 1).
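For illustration, the vector-first variant could look roughly like this (same inputs as your shader; pass w = 1 for positions and w = 0 for normals/tangents so they don't pick up translation):

float4 ApplyBoneTransform(Vertex input, float4 value)
{
    if (!HasBones)
        return value;

    // Transform first, weight afterwards: no 4x4 accumulator matrix needed.
    float4 result = mul(value, BoneMatrices[input.BoneIndices.x]) * input.Weights.x;
    result += mul(value, BoneMatrices[input.BoneIndices.y]) * input.Weights.y;
    result += mul(value, BoneMatrices[input.BoneIndices.z]) * input.Weights.z;
    result += mul(value, BoneMatrices[input.BoneIndices.w]) * input.Weights.w;
    return result;
}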

You can check out my skinning shader for reference: https://github.com/turanszkij/WickedEngine/blob/master/WickedEngine/skinningCS.hlsl

Good luck, it is a very rewarding experience once you manage to correct it!

17 minutes ago, JoeJ said:

You do linear interpolation of multiple matrices, which makes no sense here.

You should transform the vertex by each matrix and lerp the resulting vectors instead.

But to do so, you first need to transform the vertex into the local bone space, so you typically store 2 matrices for each bone: one (static) that transforms to bone space, and another (animated) that transforms the result back to world space.

Something like this:


vec os = currentVertexPosInObjectSpace;
vec ws(0); // result
for each affecting bone {
    int i = bone.matrixIndex;
    vec v = ObjectSpaceToBoneMatrices[i].Transform(os); // sadly this is necessary so the vertex knows its position relative to each bone
    v = AnimatedBoneToWorldSpaceMatrices[i].Transform(v); // but knowing that, we can now transform to world space for this bone
    ws += v * bone.weight; // sum up the weighted results
}


You can do the linear interpolation of matrices just fine; the result is the same, it just takes more operations. Also, you can premultiply the bone matrices with the relative "objectSpaceToBoneMatrix" on the application side and send only one bone matrix per bone to the shader.
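Roughly like this on the application side (a sketch only; a row-vector convention is assumed, so the inverse bind pose is applied first, and names like BindPoseWorldMatrix and GetAnimatedWorldMatrix are placeholders):

// One combined matrix per bone: the static inverse bind pose folded into the
// animated world transform, so the shader only ever needs BoneMatrices[i].
List<Matrix> skinMatrices = new List<Matrix>();

foreach (Joint joint in bones)
{
    Matrix inverseBindPose = Matrix.Invert(joint.BindPoseWorldMatrix); // static, compute once
    Matrix animatedWorld   = GetAnimatedWorldMatrix(joint);            // changes every frame

    // With row vectors (v * M) the matrix applied first stands on the left.
    skinMatrices.Add(inverseBindPose * animatedWorld);
}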

Edited by turanszkij

##### Share on other sites
1 minute ago, turanszkij said:

there is also the need to multiply by a t-pose-relative matrix (the inverse t-pose)

This is what I mean by ObjectSpaceToBoneMatrices (to avoid any confusion).

You calculate it as the inverse of the bone's rest pose transform.

4 minutes ago, turanszkij said:

You can do the linear interpolation of matrices just fine; the result is the same, it just takes more operations. Also, you can premultiply the bone matrices with the relative "objectSpaceToBoneMatrix" on the application side and send only one bone matrix per bone to the shader.

Oops, you're right. And I'd forgotten about the 'trick' of premultiplying... embarrassing.

##### Share on other sites

@turanszkij Drawing the bones to visualize the skeleton: I did this already, to check whether I load the bone matrices correctly; see the image.

I take the position of each bone and draw a line to the next child's position.

But I only get a correct-looking skeleton if I invert every single bone matrix.

And do you mean the bind pose matrix when you say t-pose?

And before I send the boneMatrices array to the shader, here is the code for how I collect them:

public Matrix CalculateMatrixFromParents(Joint joint, Matrix world)
{
    if (joint.Parent != null)
    {
        world *= CalculateMatrixFromParents(joint.Parent, joint.Parent.Matrix);
        return world;
    }
    else
        return joint.Matrix;
}

public List<Matrix> GetBoneMatrices()
{
    List<Matrix> boneMatrices = new List<Matrix>();

    foreach (Joint joint in this.bones)
    {
        Matrix m = this.CalculateMatrixFromParents(joint, Matrix.Identity);
        boneMatrices.Add(m);
    }

    return boneMatrices;
}

I am not really a math genius, but I hope I will understand your explanation.

Greets

Benjamin

Edited by B. /

##### Share on other sites

You could try inspecting the shader bone data in a graphics debugger to see whether its contents match the bone data on the application side. Nvidia Nsight and the Visual Studio graphics debugger are good choices.

##### Share on other sites
11 hours ago, B. / said:

foreach (Joint joint in this.bones) { Matrix m = this.CalculateMatrixFromParents(joint, Matrix.Identity); boneMatrices.Add(m); }

To me it looks like you do not take the rest/t-pose/bind pose (whatever we call it) into account here.

It might look somehow like this (assuming the matrices in your current code are the animated ones):

foreach (Joint joint in this.bones)
{
    Matrix m = this.CalculateMatrixFromParents(joint, Matrix.Identity * joint.bindPoseWorldSpaceMatrix.Inversed());
    boneMatrices.Add(m);
}

You should usually get this right after some trial and error.

Your shader, however, is potentially inefficient, because each thread may have to keep 4 matrices in registers; that's 4 * 16 = 64 registers for that alone, which is a lot.

To get 100% occupancy on AMD you should only use 24 IIRC; NV varies but is similar.

To achieve this, you should transform the position by each matrix in turn (as already said), so the compiler has the option of keeping just 1 or 2 matrices in registers at a time. And you should transform the position and the normal in one go, of course; otherwise the compiler will likely decide to keep all the matrices around to have them available for the normal. (Also, using subfunctions almost always had a cost the last time I checked; it seems compilers are too stupid to inline the code, so better do it yourself.)

That said, this is just nit-picking: the matrices likely end up in fast constant RAM, so keeping all 4 in registers might not be necessary, and my proposed optimization might have no effect in practice. But you never really know how different GPUs/compilers handle this, and thinking about it is often no extra work. (Optimizing for low register usage is usually also better than optimizing for fewer instructions.)
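A rough HLSL sketch of what I mean, inlined into the vertex shader (w = 0 for the normal so it ignores translation; this assumes the blended-matrix path is replaced entirely):

// Position and normal skinned in one pass per bone, without a subfunction,
// so only one bone matrix needs to be live in registers at a time.
float4 pos = float4(input.Position, 1.0f);
float4 nrm = float4(input.Normal, 0.0f); // w = 0: normals ignore translation
float4 skinnedPos = 0;
float4 skinnedNrm = 0;

[unroll]
for (int k = 0; k < 4; ++k)
{
    float4x4 bone = BoneMatrices[input.BoneIndices[k]];
    float weight  = input.Weights[k];
    skinnedPos += mul(pos, bone) * weight;
    skinnedNrm += mul(nrm, bone) * weight;
}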

Edited by JoeJ

##### Share on other sites

Hi Guys,

thank you for your answers. Today I also tested loading the weights/matrices per vertex with Assimp, to check whether I was making a mistake loading the data. The matrices were right, but the weights per vertex were different, and the drawn result was still wrong, though much better than mine.

The strange thing was: in the COLLADA file, vcount says the first vertex has 2 weights and the second has 3 weights...

But Assimp loads 3 weights for the first vertex and 2 for the second. So I thought maybe I don't understand the COLLADA documentation correctly and assign the weights and bone indices to the wrong vertices?

<source id="pCube1Controller-Weights">
  <float_array id="pCube1Controller-Weights-array" count="33">
    1.000000 0.989534 0.009756 0.708022 0.289010 0.002968 0.989518 0.009771 0.708398 0.288669 0.002933 0.989525 0.009771 0.708446 0.288689 0.002866
    0.989540 0.009756 0.708070 0.289030 0.002900 0.004697 0.497651 0.497651 0.003326 0.498337 0.498337 0.004704 0.497648 0.497648 0.003331 0.498334
    0.498334
  </float_array>
  <technique_common>
    <accessor source="#pCube1Controller-Weights-array" count="33">
      <param type="float"/>
    </accessor>
  </technique_common>
</source>

<vertex_weights count="12">
  <input semantic="JOINT" offset="0" source="#pCube1Controller-Joints"/>
  <input semantic="WEIGHT" offset="1" source="#pCube1Controller-Weights"/>
  <vcount>2 3 2 3 2 3 2 3 3 3 3 3</vcount>
  <v>0 1 1 2 0 3 1 4 2 5 0 6 1 7 0 8 1 9 2 10 0 11 1 12 0 13 1 14 2 15 0 16 1 17 0 18 1 19 2 20 0 21 1 22 2 23 0 24 1 25 2 26 0 27 1 28 2 29 0 30 1 31 2 32</v>
</vertex_weights>
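As far as I understand the docs: vcount says how many influences each skin vertex has, and v holds pairs of (joint index, weight index), where the weight index points into the weights array above. A little sketch of how I think it has to be decoded (VCount, VArray and WeightsArray are the parsed arrays from above):

// Decode <vcount>/<v>: for skin vertex n, vcount[n] pairs follow in <v>.
// Each pair is (joint index, index into the WEIGHT source), so the weights
// are looked up through an indirection, not read sequentially.
int vPos = 0;
for (int n = 0; n < VCount.Length; n++)
{
    for (int k = 0; k < VCount[n]; k++)
    {
        int jointIndex  = VArray[vPos++];
        int weightIndex = VArray[vPos++];
        float weight = WeightsArray[weightIndex];
        // assign (jointIndex, weight) to skin vertex n here
    }
}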

Here is the code for how I set the weights and bone indices on the vertices:

InfluencesWeightsPerVertex = the vcount list

1 + wCounterIndex, because the first weight in the weights array is an extra weight

RemoveRange, because I create each vertex with default weights (1, 0, 0, 0) and bone indices (0, 0, 0, 0) (index 0 = identity matrix)

InsertRange, to insert the loaded values into the list

int vertexIndex = 0;
int wCounterIndex = 0;

// Set vertex weights and bone indices
foreach (int item in InfluencesWeightsPerVertex)
{
    List<float> weights = skinClusterWeights.GetRange(1 + wCounterIndex, item);
    List<int> boneIndices = new List<int>();

    for (int i = 0; i < (item * 2); i += 2)
        boneIndices.Add(skinClusterIndices[(wCounterIndex * 2) + i]); // joint index of each <v> (joint, weight-index) pair; 'skinClusterIndices' (the parsed <v> list) is an assumed name

    if (weights.Count > 4)
        weights.RemoveRange(4, weights.Count - 4);

    if (boneIndices.Count > 4)
        boneIndices.RemoveRange(4, boneIndices.Count - 4);

    // Normalize all weights so they sum to 1
    float factor = 0;

    foreach (float weight in weights)
        factor += weight;

    factor = 1.0f / factor;

    for (int i = 0; i < weights.Count; i++)
        weights[i] = factor * weights[i];

    Vertex vertex = geometry.Vertices[vertexIndex];
    vertex.Weights.RemoveRange(0, weights.Count);
    vertex.Weights.InsertRange(0, weights);
    vertex.BoneIndices.RemoveRange(0, boneIndices.Count);
    vertex.BoneIndices.InsertRange(0, boneIndices);

    geometry.Vertices[vertexIndex] = vertex;

    vertexIndex++;
    wCounterIndex += item;
}

Can you check my code to see whether my way of loading the skinning data is right, so we can rule that out?

Greets

Benjamin

16 hours ago, turanszkij said:

You could try inspecting the shader bone data in a graphics debugger to see whether its contents match the bone data on the application side. Nvidia Nsight and the Visual Studio graphics debugger are good choices.

Hi,

if the bone data/matrices were wrong, then the skeleton lines would be drawn wrong too, right?

Edited by B. /

##### Share on other sites

Maybe Assimp is set up to optimize for better vertex caching and so changes the order of the vertices?

When I came across COLLADA, I noticed each app seems to have its own interpretation of the standard, and it was not really a good format for exchanging things, especially when skinning is involved.

I would create a simple model like the box procedurally, purely from code, to be sure the data is right. You could also do CPU skinning for reference. Some work, but such code may be reusable for the next issue, and in the long run I prefer this over frustrating GPU debugging.
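For the CPU reference path, something like this per vertex (a sketch; SharpDX-style math assumed, 'skinMatrices' being the same combined matrices you would upload to the GPU):

// CPU skinning reference: skin each bind-pose vertex on the CPU and compare
// against (or upload instead of) the GPU result via a dynamic vertex buffer.
Vector3 SkinPosition(Vertex v, IList<Matrix> skinMatrices)
{
    Vector3 result = Vector3.Zero;

    for (int k = 0; k < 4; k++)
    {
        Matrix m = skinMatrices[v.BoneIndices[k]];
        // TransformCoordinate applies rotation + translation (w = 1).
        result += Vector3.TransformCoordinate(v.Position, m) * v.Weights[k];
    }

    return result;
}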

##### Share on other sites
3 hours ago, JoeJ said:

Maybe Assimp is set up to optimize for better vertex caching and so changes the order of the vertices?

When I came across COLLADA, I noticed each app seems to have its own interpretation of the standard, and it was not really a good format for exchanging things, especially when skinning is involved.

I would create a simple model like the box procedurally, purely from code, to be sure the data is right. You could also do CPU skinning for reference. Some work, but such code may be reusable for the next issue, and in the long run I prefer this over frustrating GPU debugging.

Hi Joe,

which format would be the best for GPU skinning, maybe FBX?

And I believe I know where my mistake is: in the COLLADA file the vertex_weights count is 12, but my elbow cube has 60 vertices in total.

The other 48 vertices only get weights (1, 0, 0, 0) and bone indices (0, 0, 0, 0) (index 0 = identity matrix).

But that's wrong, because I skinned every single vertex to the 3 bones, and Assimp also imports weights for all vertices. But how do I calculate this from only the 12 given vcounts, which only say how many weights/bones a vertex has?
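Or do the 12 entries belong to the 12 positions in the <vertices> element, so I would have to copy the skin data over to my 60 render vertices through their position index? Maybe something like this (the names are just how I would call it):

// Skin data is stored per position (12); each render vertex (60) references
// one of those positions through the triangle indices, so copy it through.
for (int i = 0; i < geometry.Vertices.Count; i++)
{
    int posIndex = geometry.PositionIndices[i]; // 0..11, from the <p> index stream
    Vertex vertex = geometry.Vertices[i];

    vertex.Weights     = new List<float>(weightsPerPosition[posIndex]);
    vertex.BoneIndices = new List<int>(indicesPerPosition[posIndex]);

    geometry.Vertices[i] = vertex;
}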

And for CPU skinning I would need the original (bind pose) vertex list, transform it, and write the result into the vertex buffer every frame?

Greets

Benjamin

Edited by B. /
