About skeletal animation (retargeting and other stuff)

Started by
3 comments, last by Ruggostein 8 years, 9 months ago

I've been trying to do this for probably 3 years, and still haven't had any success, and sadly there is almost no information about this online.

Basically, I want to reuse animations from one skeleton on another.

Both skeletons have exactly the same hierarchy of joints, except that each joint has a different offset matrix (position + rotation).

The best result I can get is by just copying the offset matrices from the original skeleton to the target skeleton (animation below).

Of course, this completely ignores the original skeleton, so this solution can't work.

If I get the idea right, ideally I just have to calculate a matrix for each joint that converts the animation data from the original bone's orientation to the target bone's orientation.

I thought this would be done like this:

transformed_keyframe_mat = inverted_original_offset_mat * original_keyframe_mat * target_offset_mat

This however does not work, and in frustration I've tried all possible permutations of these multiplications; the animation always appears all wrong (head twisted inside body, paws twisted and stretched, etc).
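For what it's worth, here is a minimal sketch in Python/NumPy of the per-joint conversion I would try first. The key assumptions (which may not match your data) are a column-vector convention and that the conversion is applied to each joint's local, parent-relative bind matrix rather than its offset matrix: the keyframe is re-expressed as a delta from the source bind pose, and that delta is reapplied to the target bind pose. All names here are hypothetical.

```python
import numpy as np

def trs(rot_z_deg, tx, ty, tz):
    """4x4 local matrix: translation * rotation about Z (column-vector convention)."""
    a = np.radians(rot_z_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c,  -s,  0.0, tx],
                     [s,   c,  0.0, ty],
                     [0.0, 0.0, 1.0, tz],
                     [0.0, 0.0, 0.0, 1.0]])

def retarget_local(src_keyframe, src_bind_local, tgt_bind_local):
    # Delta of the keyframe relative to the source bind pose, expressed in the
    # source joint's bind-local frame, then reapplied to the target bind pose.
    return src_keyframe @ np.linalg.inv(src_bind_local) @ tgt_bind_local
```

A sanity check for any variant of this formula: when the source keyframe equals the source bind pose, the retargeted keyframe must equal the target bind pose, so each skeleton sits in its own rest pose on that frame.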

[animated GIF: OgbufCs.gif]

My current animation algorithm works like this (with many parts I don't understand!):

1 - Each joint has two precalculated matrices (a relative and an absolute offset matrix). The absolute offset matrix can be used to bring vertices from joint space to world space. As for the relative matrix, I have no clue what transformation it actually performs if I apply it to vertices.

2 - After selecting the keyframes for the current time and interpolating them, I get a keyframe/animation matrix for each joint. I call this the 'relative frame matrix'. I then concatenate them to obtain an 'absolute frame matrix' per joint.

3 - I calculate the final matrix for each joint by multiplying the inverse of the 'absolute offset matrix' by the 'absolute frame matrix'. This supposedly brings the vertices to 'joint space', applies the current animation, and then somehow puts them back in world space (why?? I'm not sure how this even works now... I wrote this code 4 or 5 years ago).

I'm not sure if there is even another way to do this. For example, is it possible to avoid concatenating all 'relative frame matrices' into 'absolute frame matrices' every frame? I've tried that, but I did not have any success; the animation always appears wrong.
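On the concatenation question: a joint's absolute transform depends on all of its ancestors, so some per-frame hierarchy walk (or an equivalent cached one) is generally unavoidable. The three steps above can be sketched in Python/NumPy, using a hypothetical two-joint chain and a column-vector convention; the naming mirrors the post's terminology, not any standard API.

```python
import numpy as np

def translate(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Hypothetical two-joint chain; parent index -1 means root.
parents    = [-1, 0]
bind_local = [translate(0, 1, 0), translate(0, 2, 0)]  # relative bind (offset) matrices
anim_local = [m.copy() for m in bind_local]            # 'relative frame matrices' (here: rest pose)

def concatenate(locals_, parents):
    """Walk the hierarchy, turning relative matrices into absolute ones."""
    out = []
    for i, m in enumerate(locals_):
        out.append(m if parents[i] < 0 else out[parents[i]] @ m)
    return out

bind_global = concatenate(bind_local, parents)  # the 'absolute offset matrices'
anim_global = concatenate(anim_local, parents)  # the 'absolute frame matrices'

# Final skinning matrix per joint: model space -> joint space (inverse bind),
# then joint space -> animated model space.
skin = [anim_global[i] @ np.linalg.inv(bind_global[i]) for i in range(len(parents))]
```

With the animation sitting at the rest pose, every skin matrix comes out as the identity, which makes a handy unit test for a skinning pipeline.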

Finally, where would the retargeting step happen?


Assuming you have a working animation/skinning system, retargeting comes from the fact that you are using an animation created for another model's skeleton. Assuming you are not using an animation from a wildly different skeleton (which can be done, but is a complex task, yikes: http://chrishecker.com/Real-time_Motion_Retargeting_to_Highly_Varied_User-Created_Morphologies), you just ignore animation channel data for joints that can't be found in the target skeleton.

The animation matrices you describe appear, in general, to be pretty standard, though I think your terminology is non-standard.

More commonly, an "offset matrix" is understood to be a matrix that transforms a vertex in pose position from model space to joint space. A joint's animation matrix, when applied to a vertex in joint space, transforms the vertex to an animated position in the joint's space.

Concatenating the animation matrices into a "combined" matrix (assuming the order is joint-anim-matrix X joint-parent-anim-matrix X joint-parent-parent-anim-matrix X ... X rootFrame-matrix) can then be understood as animate-vertex-in-joint-space-and-transform-to-model-space.

As a result, offset-matrix X combined-matrix transforms the vertex from pose position to joint space, moves it to the animated position in joint space, and transforms the animated vertex position back into model space.

If your animations are as I assume, you can't arbitrarily apply animations from one skeleton to another. To apply animations from one skeleton to another, the two skeletons have to have the same child-to-parent relationships, and (more importantly) have to have the same distance from joint-to-parent. That joint-to-parent distance is part of each joint's animation matrix. I.e., that's what makes concatenating (or combining) work - it moves the animated vertex position in joint space to model space - which involves rotation and translation (distance.)

Again, if your animations are as I assume, the offset matrices for each bone (joint) are most commonly simple translations. E.g., a vertex is 6 units above the "ground" and a joint which is to influence it is 7 units above the "ground" in pose position; the offset matrix merely translates the vertex -7 units, positioning the vertex at -1 units in joint space. Then the vertex (for example) is rotated in joint space and translated +7 units, putting it back in model space.

If that animation is applied to a vertex in another mesh that, for example, is 8 units above the "ground," the offset matrix still translates it -7 units. Now the rotation is applied in joint space to a vertex at +1 in joint space. That rotation may be in a direction opposite to what is desired. That MAY be why you see "jittering" in the animation on the left in the pix you posted. Several of the bones on the left are longer than the corresponding bones on the right. By bone length, I mean the distance between joints.
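The 6-and-7-unit example above can be checked numerically. This sketch assumes a column-vector convention, a single joint 7 units above the "ground", and an animation that is a 90-degree rotation about Z performed in joint space; the numbers, not the conventions, come from the post.

```python
import numpy as np

def translate_y(y):
    m = np.eye(4)
    m[1, 3] = y
    return m

def rot_z(deg):
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

offset = translate_y(-7.0)             # model space -> joint space
anim   = translate_y(7.0) @ rot_z(90)  # rotate about the joint, move back up

v = np.array([0.0, 6.0, 0.0, 1.0])     # vertex 6 units above the ground
v_animated = anim @ (offset @ v)       # joint space (0, -1, 0) -> rotated -> model space

w = np.array([0.0, 8.0, 0.0, 1.0])     # same animation on a vertex from a taller mesh
w_animated = anim @ (offset @ w)       # joint space (0, +1, 0): swings the opposite way
```

The two vertices end up on opposite sides of the joint, which is exactly the rotation-in-the-opposite-direction effect described above when bone lengths differ between the two meshes.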

Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.

You don't forget how to play when you grow old; you grow old when you forget how to play.

I wouldn't suggest doing the retargeting yourself. Use an animation/modelling tool like Blender for it. Automatic retargeting is really hard to get right (if it works at all). Chris Hecker used a procedural animation system with retargeting in Spore, but this will be a lot of work to get right.

To retarget a skeleton more effectively, you should not animate the bones directly. Instead, you use a second set of bones and many constraints. Basically it works like a puppeteer: you have bones which deform your model (the ones which will be used in your game later) and you have bones which control these deformation bones. So instead of animating the whole leg, you have only one or two foot control bones which manipulate the whole leg. Here's a good tutorial about using a second layer of control bones.

Buckeye, very useful post, it helped me understand better the transformations.

However my joints are not represented just as translations from the parent (sadly...). The joints are represented as a rotation + translation pair, which is one of the reasons everything was so confusing.

I tried finding a way to eliminate the rotation and keep only translations, by trying to apply an inverted rotation to the animation data, but I had no success; I'm doing something wrong.
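One way to eliminate the rotations and keep only translations is to fold each joint's accumulated bind rotation into the local matrices. This is a sketch under my own assumptions (column-vector 4x4 matrices, each local bind matrix factored as translation @ rotation, a hypothetical three-joint chain); applying the same `bake` conjugation to every animation keyframe moves the animation onto the simplified, translation-only skeleton.

```python
import numpy as np

def rot_z(deg):
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

def translate(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Hypothetical three-joint chain; each local bind matrix is translation @ rotation.
parents = [-1, 0, 1]
T = [translate(0, 1, 0), translate(0, 2, 0), translate(0, 1.5, 0)]
R = [rot_z(30), rot_z(-20), rot_z(45)]
bind_local = [T[i] @ R[i] for i in range(3)]

# Accumulated bind rotation from the root down to each joint.
R_acc = []
for i in range(3):
    R_acc.append(R[i] if parents[i] < 0 else R_acc[parents[i]] @ R[i])

def bake(local, i):
    """Conjugate a local matrix (bind or keyframe) so the bind rotation disappears."""
    r_parent = np.eye(4) if parents[i] < 0 else R_acc[parents[i]]
    return r_parent @ local @ np.linalg.inv(R_acc[i])

new_bind = [bake(bind_local[i], i) for i in range(3)]
```

After baking, each new local bind matrix is a pure translation, and the world-space joint positions of the bind pose are unchanged; what gets discarded is only each joint's rest orientation, which a translation-only skeleton no longer stores.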

Ashaman73, I actually need it because I want to automate retargeting for more than 300 models (which all have skeletons with the same hierarchy, but different joint positions), along with being able to do some other cool stuff at run time (e.g. creating characters procedurally by patching together different body parts).

And I don't think it is that difficult if the skeletons have exactly the same hierarchy. For example, the devs from Overgrowth did it; jMonkeyEngine also supports this and the code is open source. I've checked it, and it is no more than 300 lines of code (although the code uses some naming conventions for matrices that completely confuse me).

Check this link for an example of what I want to do:

http://hub.jmonkeyengine.org/t/animation-retargeting-mocap-bvh/29985/2

