Mapping animation skeletons

0 comments, last by Ashaman73 13 years, 5 months ago
Suppose I have a high-res skeleton (say, the display mesh) and a low-res skeleton (say, a ragdoll), and I want to map between them. I basically want to build up a bi-directional mapping of which bones in the low res skeleton correspond to which bones in the high res skeleton.

It's not a technically difficult problem, but I'm having problems deciding how best to build an API to support it (no UI tools yet, so this would all be in code still).

I could just brute force it and have the user create a long list of "A maps to B" entries, but if the skeletons are largely similar that's really redundant and fragile to changes. There's also a related case: since skinning is handled by the graphics system (this is a 2D vector graphics engine), primitives like a 'finger' might exist in both the animation system and the graphics system. So the graphics system would have its own complete skeleton, describing the transforms for all the quads to be drawn, which is quite similar to the animation skeleton. Maybe 90% of the tree structure is really, really similar between the two.
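For reference, the brute-force version is at least simple to state in code. Here's a minimal sketch of a bidirectional bone map built from explicit "A maps to B" pairs; the bone names are made up for illustration, and it assumes a one-to-one mapping (a real version would need lists on the low-res side, since several hi-res bones may map to one low-res bone):

```python
class BoneMap:
    """Bidirectional lookup built from explicit (hi, lo) bone-name pairs."""

    def __init__(self, pairs):
        self.hi_to_lo = dict(pairs)
        self.lo_to_hi = {lo: hi for hi, lo in pairs}

    def to_low(self, hi_bone):
        # Returns None for unmapped bones rather than raising.
        return self.hi_to_lo.get(hi_bone)

    def to_high(self, lo_bone):
        return self.lo_to_hi.get(lo_bone)

# Hypothetical bone names, purely for illustration:
bone_map = BoneMap([
    ("hi_spine", "lo_spine"),
    ("hi_upper_arm_l", "lo_arm_l"),
])
```

The redundancy complaint is visible right away: every shared bone needs its own entry even when the names are identical in both skeletons.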

I'd like whatever I do to be able to handle this case as well as the hi res/low res mapping.

I could try to be smart about it: have the user specify the root bone of each skeleton, plus a few terminal bones that also map, and have the system auto-magically figure out the best mapping. But there are weird cases where the inherent ambiguity would screw things up. If one skeleton has an elbow joint and the other has two for some reason, it's very difficult to decide on the best behavior for an auto-mapping (just pick one? Average them somehow?). So this seems like a case of the system trying to be too smart and actually making things more difficult.
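One way to keep the "smart" approach honest is to only auto-match bone chains between user-specified anchors when the structure is unambiguous, and push everything else back to the caller. A hedged sketch, with hypothetical bone names and a caller-supplied ambiguity callback:

```python
def match_chains(chain_a, chain_b, on_ambiguity=None):
    """Pair up two bone chains positionally when their lengths agree;
    defer mismatched lengths (e.g. one elbow vs. two) to a callback
    instead of guessing."""
    if len(chain_a) == len(chain_b):
        return list(zip(chain_a, chain_b))
    if on_ambiguity is None:
        raise ValueError("ambiguous chains and no resolver given")
    return on_ambiguity(chain_a, chain_b)

# Unambiguous case: same chain length, positional match is safe.
pairs = match_chains(
    ["shoulder", "elbow", "wrist"],
    ["shoulder", "elbow", "wrist"],
)

# Ambiguous case: the second skeleton has two elbow bones, so the
# caller decides (here, a deliberately naive truncating zip):
odd_pairs = match_chains(
    ["shoulder", "elbow", "wrist"],
    ["shoulder", "elbow_a", "elbow_b", "wrist"],
    on_ambiguity=lambda a, b: list(zip(a, b)),
)
```

The callback is the point: the system never silently picks one of several plausible mappings, which is exactly the "too smart" failure mode described above.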

On the other hand, for the user to meaningfully interact with the bones and choose mappings manually, the bones all need unique names that make sense, which is extra overhead for the user to maintain. If it's a quick throw-away effect, they might not want to go through the whole hassle.

So I'm mostly curious whether anyone has come up with, or seen, a good system for this. I haven't played with many skeletal animation systems in practice, and I don't want to reinvent the wheel if there's already a good solution to this problem. It's also an interesting problem in a strictly computer-science sense, where you're trying to mostly match two similar trees.

Maybe some sort of linear least squares way of matching up skeletons with callbacks if there's an ambiguity or something along those lines? Or have users specify "T Pose"s for either skeleton maybe, and map bones based on how close they are to other bones?
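The T-pose idea can be sketched very cheaply: pose both skeletons in the same reference pose and greedily match each low-res bone to the nearest hi-res bone. This is an assumption-laden toy (2D positions, hypothetical bone names, no handling of two low-res bones grabbing the same hi-res bone), not a robust matcher:

```python
import math

def match_by_pose(hi_bones, lo_bones):
    """Greedy nearest-neighbour match of bone positions in a shared
    reference pose (e.g. a T-pose). Positions are 2D (x, y) tuples."""
    mapping = {}
    for lo_name, lo_pos in lo_bones.items():
        # Pick the hi-res bone whose reference position is closest.
        best_name, _ = min(hi_bones.items(),
                           key=lambda kv: math.dist(kv[1], lo_pos))
        mapping[lo_name] = best_name
    return mapping

# Hypothetical reference-pose positions:
hi = {"shoulder": (0.0, 1.0), "elbow": (1.0, 1.0), "wrist": (2.0, 1.0)}
lo = {"arm_root": (0.1, 1.0), "arm_end": (1.9, 1.0)}
# match_by_pose(hi, lo) -> {"arm_root": "shoulder", "arm_end": "wrist"}
```

A least-squares or assignment-problem formulation would be the principled upgrade, but even this greedy pass could seed the mapping and leave only the ambiguous leftovers for a callback or manual fix-up.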
Darwinbots - Artificial life simulation
You need the low-res skeleton for ragdoll physics, right?
My idea would be to generate the low-res skeleton from the high-res version by simply marking bones as low-res. Example:

hi-res:
spine(X) -> spine 2 -> spine 3 -> neck(X) -> head -> eyes
                               -> shoulder_l(X) -> upper_arm_l -> elbow_l(X) -> ..
                               -> shoulder_r(X) -> upper_arm_r -> elbow_r(X) -> ..

Start at the root and create new low-res bones between X-marked bones.

low-res:
spine ------> neck
      ------> shoulder_l -----> elbow_l
      ------> shoulder_r -----> elbow_r
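Deriving the low-res skeleton from the marks is a single tree walk: keep only the marked bones, and parent each kept bone to its nearest marked ancestor. A minimal sketch of that walk, using the bone names from the example above (the dict-of-children representation is an assumption):

```python
def build_low_res(children, marked, root):
    """Derive low-res parent links from a hi-res tree: each marked bone's
    low-res parent is its nearest marked ancestor (None for the root)."""
    low_parent = {}

    def walk(bone, last_marked):
        if bone in marked:
            low_parent[bone] = last_marked
            last_marked = bone
        for child in children.get(bone, []):
            walk(child, last_marked)

    walk(root, None)
    return low_parent

# Hi-res tree from the example, as a child-list dict:
children = {
    "spine": ["spine2"], "spine2": ["spine3"],
    "spine3": ["neck", "shoulder_l", "shoulder_r"],
    "neck": ["head"],
    "shoulder_l": ["upper_arm_l"], "upper_arm_l": ["elbow_l"],
    "shoulder_r": ["upper_arm_r"], "upper_arm_r": ["elbow_r"],
}
marked = {"spine", "neck", "shoulder_l", "elbow_l", "shoulder_r", "elbow_r"}
low = build_low_res(children, marked, "spine")
```

This reproduces the low-res diagram: neck and both shoulders hang off spine, each elbow off its shoulder, and the intermediate spine/arm bones vanish.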


Now, execute the ragdoll physics on the low-res version and sample the result back up. Upsampling could be done by using simple particle physics to rearrange the hi-res bones that are not marked for low-res use. Each particle chain is very short (only 2-4 moving particles), which should be fast enough.
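Before any particle relaxation, the unmarked hi-res bones need an initial placement between their simulated low-res endpoints. A hedged sketch of that first step, simple linear interpolation along the segment (2D positions assumed; a verlet/particle pass would then relax these):

```python
def upsample_chain(lo_start, lo_end, n_inner):
    """Place the n_inner unmarked hi-res bones of one chain by linear
    interpolation between the two simulated low-res endpoint positions."""
    (x0, y0), (x1, y1) = lo_start, lo_end
    return [(x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
            for t in (i / (n_inner + 1) for i in range(1, n_inner + 1))]

# Two inner bones between a shoulder at (0, 0) and an elbow at (3, 0):
# upsample_chain((0, 0), (3, 0), 2) -> [(1.0, 0.0), (2.0, 0.0)]
```

Since the chains are only 2-4 bones long, even a few relaxation iterations on top of this placement should stay well within a per-frame budget.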

