Skeletal Animation with ASSIMP

I'm trying to play a skeletal animation with ASSIMP and I can't quite get it to work. I've been stuck on this for about a week, and I've got it looking pretty close to what it should be. It's a big improvement over a tangled-up mess. I've attached a video of what it looks like, as well as a screenshot in bind pose. It's strange how the bind pose is higher up. This is what it should look like.


Translation seems to be the problem. Could I be multiplying the matrices in the wrong order? Is the skinning shader incorrect?

Here's the vertex shader:

#version 410

const uint MAX_BONES_PER_VERTEX = 4;
const uint MAX_BONES = 128;

uniform mat4 model;
//transpose(inverse(model))
uniform mat4 transInvModel;
uniform mat4 mvp;
uniform mat4 bones[MAX_BONES];

in vec3 pos;
in vec3 normal;
in vec2 texturePos;
in uint boneID[MAX_BONES_PER_VERTEX];
in float boneWeight[MAX_BONES_PER_VERTEX];

out vec3 fragPos;
out vec3 fragNormal;
out vec2 fragTexturePos;

void main() {
  mat4 boneTransform;
  if (boneWeight[0] == 0.0) {
    boneTransform = mat4(1.0);
  } else {
    boneTransform = mat4(0.0);
    for (uint i = 0; i < MAX_BONES_PER_VERTEX; i++) {
      boneTransform += bones[boneID[i]] * boneWeight[i];
    }
  }

  gl_Position = mvp * boneTransform * vec4(pos, 1.0);
  fragPos = (model * boneTransform * vec4(pos, 1.0)).xyz;
  fragNormal = normalize((transInvModel * boneTransform * vec4(normal, 0.0)).xyz);
  fragTexturePos = texturePos;
}
Here's the code that multiplies the matrices. I'm not really sure about it; I think the problem is here.

I've broken it up into three stages.

The first stage interpolates the translation, rotation and scaling keys of the animation channel (aiNodeAnim) of each bone node (aiNode). An animation (aiAnimation) doesn't necessarily have a channel for every bone node (aiNode), so "dummy" channels fill in the gaps (I could use a std::unordered_map, but most of the time there won't be any "dummy" channels). The arrays can be indexed by their channel ID (aiString name).

std::vector<glm::mat4> getBoneNodeTransforms(const BoneNodes &boneNodes, 
                                             const Animation &anim) {
  std::vector<glm::mat4> boneNodeTransforms(boneNodes.size());
  
  for (ChannelID c = 0; c < boneNodes.size(); c++) {
    const Channel &channel = anim.channels[c];
    if (channel.dummy) {
      //transform is the mTransformation member of aiNode
      boneNodeTransforms[c] = boneNodes[c].transform;
    } else {
      //getKeyTransform does the interpolation and constructs the matrix
      //I've tested it and it works as expected
      boneNodeTransforms[c] = getKeyTransform(channel);
    }
  }
  
  return boneNodeTransforms;
}
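In case it matters, the interpolation inside getKeyTransform boils down to something like this (a simplified sketch: the Key struct here is illustrative, and the search for the two keys bracketing the current animation time is omitted):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

//illustrative key sample; the real channel stores separate key arrays
struct Key {
  glm::vec3 translation;
  glm::quat rotation;
  glm::vec3 scaling;
};

//prevKey and nextKey bracket the current animation time and
//t is the 0-1 blend factor between them
glm::mat4 interpolateKeys(const Key &prevKey, const Key &nextKey, float t) {
  const glm::vec3 translation = glm::mix(prevKey.translation, nextKey.translation, t);
  //slerp keeps the interpolated rotation on the unit quaternion sphere
  const glm::quat rotation = glm::slerp(prevKey.rotation, nextKey.rotation, t);
  const glm::vec3 scaling = glm::mix(prevKey.scaling, nextKey.scaling, t);
  return makeMat(translation, rotation, scaling);
}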
I'm not completely sure I'm multiplying the translation, rotation and scaling matrices in the right order, so here's the function that does that. It's called by getKeyTransform. I'm pretty sure it's multiplying rotation, then scaling, then translation, but is that the right order? Should it be scaling, rotation, translation?

glm::mat4 makeMat(const glm::vec3 &translation,
                  const glm::quat &rotation,
                  const glm::vec3 &scaling) {
  //rotation scaling translation
  return glm::translate(
    glm::scale(
      glm::mat4_cast(rotation),
      scaling
    ),
    translation
  );
}
The second stage traverses down the tree of bone nodes (aiNode) and multiplies each node's transformation by that of its parent. This is the code I'm least certain about. transforms is the return value from getBoneNodeTransforms. I've seen that most people traverse up the tree. Isn't that less efficient, because you're multiplying the same matrices more than once?

void relativeTransforms(std::vector<glm::mat4> &transforms, 
                        const BoneNodes &boneNodes, 
                        ChannelID parent) {
  const BoneNode &parentNode = boneNodes[parent];
  for (size_t n = 0; n < parentNode.children.size(); n++) {
    transforms[parentNode.children[n]] = transforms[parent] * 
                                         transforms[parentNode.children[n]];
    relativeTransforms(transforms, boneNodes, parentNode.children[n]);
  }
}
The third stage multiplies each transformation by the offset matrix. I'm not entirely sure what the offset matrix is (the ASSIMP docs describe aiBone::mOffsetMatrix as transforming from mesh space to bone space in the bind pose, so it seems to be the inverse bind-pose matrix). There are more bones (aiBone) than there are bone nodes (aiNode), so each bone holds the channel ID so it can find its corresponding bone node.

std::vector<glm::mat4> finalTransform(const std::vector<glm::mat4> &transforms,
                                      const Bones &bones) {
  std::vector<glm::mat4> boneTransforms(bones.size());
  for (size_t b = 0; b < bones.size(); b++) {
    const Bone &bone = bones[b];
    //offset takes the vertex from mesh space into bone space, then the
    //node transform brings it back into mesh space with the pose applied
    boneTransforms[b] = transforms[bone.channel] * bone.offset;
  }
  return boneTransforms;
}
This code calls the above three functions, and the final vector of matrices gets sent to the vertex shader:

std::vector<glm::mat4> boneNodeTransforms = getBoneNodeTransforms(boneNodes, anim);
//0 is the channel ID of the root bone node
relativeTransforms(boneNodeTransforms, boneNodes, 0);
return finalTransform(boneNodeTransforms, mesh->getBones());
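Sending them to the shader is a single upload, something like this (a sketch: boneTransforms is the vector returned above, program is my linked shader program):

#include <glm/gtc/type_ptr.hpp>

//with the program bound, upload the whole palette to the bones[] array;
//GL_FALSE because glm matrices are already column-major
glUniformMatrix4fv(
  glGetUniformLocation(program, "bones"),
  static_cast<GLsizei>(boneTransforms.size()),
  GL_FALSE,
  glm::value_ptr(boneTransforms[0])
);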
Another place the problem might be is this function, which converts an aiMatrix4x4 to a glm::mat4:

glm::mat4 castMat4(const aiMatrix4x4 &aiMat) {
  //ASSIMP matrices are row-major, glm::mat4 is column-major, so transpose
  return {
    aiMat.a1, aiMat.b1, aiMat.c1, aiMat.d1,
    aiMat.a2, aiMat.b2, aiMat.c2, aiMat.d2,
    aiMat.a3, aiMat.b3, aiMat.c3, aiMat.d3,
    aiMat.a4, aiMat.b4, aiMat.c4, aiMat.d4
  };
}
Could the problem be somewhere else? Maybe in the mesh loader?

Thank you to anyone who read through the whole post and thank you to anyone who can offer some help.

I struggled a bit with this one as well. The difficult part is knowing where the issue is, since a lot of things have to work in unison.

FYI, here's my shader code:

float4x4 BuildBoneTransform(const uint4 boneIndices, const float4 boneWeights)
{
	uint boneIndex = boneIndices[0];
	float boneWeight = boneWeights[0];
	float4x4 boneTransform = gBones[boneIndex] * boneWeight;

	for (uint boneNum = 1; boneNum < NUM_BONES_PER_VERTEX; ++boneNum)
	{
		boneIndex = boneIndices[boneNum];
		boneWeight = boneWeights[boneNum];

		boneTransform += gBones[boneIndex] * boneWeight;
	}

	return boneTransform;
}

Your assimp mat4 --> glm mat4 conversion looks correct.

Can't comment on the rest at the moment, until I get home :)

EDIT: Since you are using the Bob model, I assume you have read the ogldev tutorial. Try to emulate it as closely as possible. It also helps to download the graphics debugging tool RenderDoc, then run both the ogldev sample and yours and inspect/compare the shader execution results.

IT WORKS!

The problem was in the makeMat function. I was multiplying the matrices in the wrong order. Here's the new, working makeMat function:

glm::mat4 makeMat(const glm::vec3 &translation,
                  const glm::quat &rotation,
                  const glm::vec3 &scaling) {
  //scaling, then rotation, then translation: T * (R * S)
  //(explicit identity, since glm::mat4{} isn't guaranteed to be one)
  return glm::translate(glm::mat4(1.0f), translation) *
         glm::scale(
           glm::mat4_cast(rotation),
           scaling
         );
}
I'm pretty sure this is multiplying scaling, then rotation, then translation (but I was pretty sure about the last one as well!). Am I right?
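
A quick sanity check (assuming GLM's usual right-to-left, column-vector convention; the origin should land exactly on the translation when translation is applied last):

//with T * R * S, the origin maps to the translation itself
const glm::mat4 m = makeMat(
  glm::vec3(1.0f, 2.0f, 3.0f),
  glm::angleAxis(glm::radians(90.0f), glm::vec3(0.0f, 1.0f, 0.0f)),
  glm::vec3(2.0f)
);
const glm::vec4 p = m * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);
//p comes out as (1, 2, 3, 1); the old R * S * T order gives
//(6, 4, -2, 1), the translation scaled and rotated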

It's not over yet.

I noticed that Bob's arms were a bit jagged. At first I thought it was just the mesh, but I looked closely and saw right angles where there shouldn't be right angles. I loaded up Nightwing doing Gangnam Style and it had the same problem. There must be something wrong with the bone weights, or maybe the vertex shader.

(Attached screenshot: ScreenShot2017-01-28at15.23.18.png)

When I try that, the shader program fails to link. The info log doesn't tell me why. That's strange! Maybe my Intel HD Graphics 5000 has a limit on the size of the vertex shader input?

Thanks for your suggestion.

I just ran KaiserJohan's shader code and it produced the same results as mine did, but then I ran this:

mat4 boneTransform = bones[boneID[0]] * boneWeight[0];
And it produced the same result. So then I ran this:

mat4 boneTransform = bones[boneID[0]];
And it also produced the same result. So finally I ran this:

mat4 boneTransform = bones[boneID[0]] * boneWeight[1];
And the screen was red, so I've come to the conclusion that the boneWeight array is a 1 followed by three 0s. This explains the jaggedness near the joints: the bones aren't blending.

IT WORKS!

(For real this time)

The reason only the first boneID/boneWeight pair was being sent to the shader is that I was using an array attribute.

I looked at the man page for glVertexAttribPointer, and the size parameter can only be 1, 2, 3 or 4, so it's only supposed to be used with a vector, and I was using an array. The elements of an array attribute have separate locations, so when I called glGetAttribLocation I was getting the location of only the first element of the array. The other elements never got enabled by glEnableVertexAttribArray, so they were being read as 0.

So a quick and easy solution is to replace the arrays with vectors, and everything works fine, but the proper solution is to keep the array and make additional calls to glVertexAttribPointer and glEnableVertexAttribArray, one per element.
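
For anyone who finds this later, the array version looks roughly like this (a sketch: Vertex is an illustrative interleaved vertex struct, prog is the linked program, and I'm assuming the location lookups succeed):

#include <cstddef> //offsetof

struct Vertex {
  //...position, normal, texturePos...
  GLuint boneID[4];
  GLfloat boneWeight[4];
};

//array elements of a vertex shader input occupy consecutive locations,
//so query the first element and offset from it
const GLuint idLoc = glGetAttribLocation(prog, "boneID[0]");
const GLuint weightLoc = glGetAttribLocation(prog, "boneWeight[0]");
for (GLuint i = 0; i < 4; i++) {
  glEnableVertexAttribArray(idLoc + i);
  //uint attributes need the I variant or the values get converted to floats
  glVertexAttribIPointer(idLoc + i, 1, GL_UNSIGNED_INT, sizeof(Vertex),
    reinterpret_cast<const void *>(offsetof(Vertex, boneID) + i * sizeof(GLuint)));
  glEnableVertexAttribArray(weightLoc + i);
  glVertexAttribPointer(weightLoc + i, 1, GL_FLOAT, GL_FALSE, sizeof(Vertex),
    reinterpret_cast<const void *>(offsetof(Vertex, boneWeight) + i * sizeof(GLfloat)));
}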

I'm glad I struggled through this problem because I learnt a lot about OpenGL.
