Animation Blending

Started by
170 comments, last by haegarr 15 years, 5 months ago
Question #1: Can I not use AnimTrackType? My Keyframe class is:

class CKeyframe {
public:
    CVector3 vPos;
    CQuat    qRotate;
};


Question #2: The int someNodeID; is the bone ID if I use a bone system, right?

Question #3: Do I need Scale or Color? If yes, where will I use them?

Thanks,
Kasya
Quote:Original post by haegarr
How do you prevent lower priority animations from being added if there is no criteria like "the sum is full"?


Depends on how the rest of your system works, I guess. In our case, we don't need to do that, and in the majority of situations our clients normally want the weights to sum past 1. A very simple example is a spine-rotated-back pose and a spine-rotated-forward pose. Given a basic walk, if the character is going uphill, add a bit of the spine-forward pose to your walk animation, then normalise. Obviously the translation tracks should not be present in the two poses (which is normally the case for most bones other than the character root). Mind you, our system has about 20 different ways to blend animations, mainly because there really is no 'one size fits all' blending scheme you can use.
Quote:Original post by Kasya
question #1: Can i not use AnimTrackType. my Keyframe class is


Which is probably not ideal. IMHO, storing the keyframe as a quat + translation will probably give you a very large amount of redundant data. It's very, very rare that an animator will modify the translation on anything other than the character root. Also, by storing the keyframe as an explicit keyframe type, you lose the generalisation that's trivial to add into the system at the start - e.g. what happens when you want to animate the parameters of a particle system (say, particle colour, emission rate, etc.)?

Quote:Original post by Kasya
question #2: int someNodeID; is Bone ID if i use Bone System right?


When I say 'some node' I mean it quite literally - it could be the ID of a bone, the ID of a material, the ID of a particle system, etc. By explicitly using a 'bone' ID you are restricting your system to animating bones only. By taking a small step towards generalisation, you can make the system store animated colours, blend shape weights, even point animation. The easiest way of doing this is to simply store all keyframes as floats, and use some enum identifier to let you know what data each relates to.

Ultimately, there is only one thing that distinguishes the handling of rotations from anything else you can animate - i.e. the quats need to be normalised before converting to matrices. So you might as well store everything as independent float tracks, instead of separate float/vec2/vec3/vec4/quat track types.

Quote:Original post by Kasya
question #3: Do i need Scale or Color? If yes where i'll use it


Nope, but there's a good chance that at some point in the future you'll want to expand the system to handle something other than just rotation and translation. I used scale and color as an example, but you might have a need to animate ambient/diffuse/specular/emission/shininess/uv scrolling etc etc. All of those can be described as a number of floats.

Thanks!
Quote:Original post by Kasya
...
What is Animation Track?
...

I use animation tracks similarly to your assumption, but a bit more complicated. I'll explain it coarsely in the following, but please notice that every system has its own constraints, so you may find another or simply less complex solution better suited to your needs. In the following, words with a capital first letter normally denote a C++ class.

The global Timeline is a construct to process time-related Timeline::Task instances. It does so by providing Timeline::Track instances, onto which Timeline::Task instances can be strung. A Track is instantiated with a specific target in mind. A Task on that Track hence affects that target during the time period the Task is active. The active phase of the Task is defined by the task's Task::start moment in time and its Task::duration. Notice that a Task can be added with a Task::start in the future.

The global Timeline is controlled by the Runtime::tick, which reflects the current moment in time but is frozen during the processing of a video frame. I've mentioned this several times before. The Runtime::tick (indirectly) denotes the position of the virtual playhead of the Timeline. In fact a Timebase sits in between, but that should not play a role here.

An Animation::Resource is a kind of prototype for animations and as such counts among the assets. It consists of one or more channels if well defined, where each channel modifies a targeted value. Targeting is done by a naming scheme, allowing me to bind the animation instance later to any target in principle. You have specified a BoneID in your suggestion; that is somewhat equivalent. But notice that the said channels are _not_ bound to actual targets, since we are still speaking of a prototype resource. One subtype of channel is implemented as a key-frame interpolation. Other kinds exist as well, but are not of interest here.

Now, when the system decides to animate a skeleton, it gets a Track from the global Timeline. The Track is specifically an AnimationTrack, since it is also used to handle the animation grouping for blending and layering. Then a new Animation instance is created and backed by the selected Animation::Resource. The Animation instance is a Timeline::Task, and as such can and will be added to the previously obtained AnimationTrack. The Animation instance is bound to the skeleton instance, which means that the names available from the Animation::Resource channels are resolved w.r.t. the skeleton instance. Names that cannot be resolved are simply ignored (over-determined animation), and names for which no channel exists are simply not hit (under-determined animation; this is where my "an animation should not need to control _all_ bones" happens).

BTW: Please notice that an Animation instance is added to the Timeline similarly to how a skeleton instance is added to the Scene graph. Skeletons, too, are defined two-fold in my engine, namely as a Linkage::Resource and a Linkage instance.

What have we now? There is an Animation instance on the Timeline, reflecting the current state of the animation and its bindings. The instance is backed by an Animation::Resource which holds the static definition of the animation. There is further a Linkage instance in the Scene graph, reflecting the current state (i.e. all current bone positions, rotations, and so on) of the skeleton. That instance is backed by a Linkage::Resource which defines the static parts of the skeleton, e.g. its principal structure and the rest pose.

When time elapses and the Runtime::tick moves the Timeline's playhead over the Animation instance (remember it is a kind of Timeline::Task), the Animation instance processes all channels as described in a previous post. That means that the targeted positions and orientations are set to what the channels compute as the current value, based on the given playhead position, Animation::start, and the inner definition of the channel.


EDIT: Don't forget that I'm describing one system. I don't force you to do the same. Please feel free to pick out whatever ideas seem suitable to you.
Uhh, writing my previous post has de-synchronized me from the thread's progression ... :)

Quote:Original post by RobTheBloke
Quote:Original post by haegarr
How do you prevent lower priority animations from being added if there is no criteria like "the sum is full"?


Depends on how the rest of your system works, I guess. In our case, we don't need to do that, and in the majority of situations our clients normally want the weights to sum past 1. A very simple example is a spine-rotated-back pose and a spine-rotated-forward pose. Given a basic walk, if the character is going uphill, add a bit of the spine-forward pose to your walk animation, then normalise. Obviously the translation tracks should not be present in the two poses (which is normally the case for most bones other than the character root). Mind you, our system has about 20 different ways to blend animations, mainly because there really is no 'one size fits all' blending scheme you can use.

It seems to me that we have different understandings of layering. I mean a kind of blending that is able to suppress another animation from being added at all, simply because the higher-prioritized animation is processed earlier and "fills up the sum", so that "no more space" is left for the subsequent animation.
Quote:Original post by haegarr
I mean a kind of blending that is able to suppress another animation from being added at all, simply because the higher-prioritized animation is processed earlier and "fills up the sum", so that "no more space" is left for the subsequent animation.


Do you have a use case for that? I'm assuming you mean targeting an upper-body anim on top of a lower-body anim (or something like that)? But then, why not use a mask for that? i.e. eval tracks 1, 2, 8, 9 only... Since the system (when using nlerp) is basically a simple sum, that shouldn't really cause a problem in those cases. It seems that:

if (mask_used)
{
    foreach track to eval
}
else
{
    foreach track
}


is better than
foreach track
{
    if (weight < 1)
    {
    }
}


The former is easier to multithread, not to mention it avoids the additional memory cost of storing an extra weighting for each track.
This makes me wonder, haegarr: how exactly do you supply these extra weights per animation? For example, I use Milkshape 3D to import my animation data, but there's no direct support for any kind of parameters like that. The best I can do is supply these extra weights in the supported joint "comments", but that feels like a hack. Using a simple mask could alleviate my problem, I think. I could then predefine a couple of masks like "upper-body animation" and "lower-body animation" and use these to blend the animations. Of course, this wouldn't give me a lot of fine-grained control...
@ RobTheBloke

An example would be aiming with a gun while walking. Walking alone means animation of the entire body, including the swinging of the arms. When layering the aiming animation on top of it, the swinging of the arm that holds the gun should be suppressed.

Masking is a binary decision, while weighting is (more or less) continuous. The additional memory costs are negligible. I don't weight individual channels, i.e. the modifier of an individual position or orientation, but the entirety of channels of a particular _active_ animation (I use "animation" to mean the modification of a couple of bones).

And masking would need more information about the active animation tracks than weighting needs. This is because weighting works without knowledge of other animations, while masks must be built over all (or at least all active) animations.

I don't say that masking is bad. It just seems not to fit well into the animation system I described in the posts above. I think we have totally different approaches to animations at the top abstraction level, don't we?

[Edited by - haegarr on August 18, 2008 10:07:58 AM]
Quote:Original post by godmodder
This makes me wonder, haegarr, how exactly do you supply these extra weights per animation? For example: I use Milkshape 3D to import my animation data, but there's no direct support for any kind of parameters like that. The best I can do is supply these extra weights in the supported joint "comments", but that feels kind of like a hack.

If we still speak of the weights of layers: There is simply no sense in pre-defining them, and hence no need for DCC package support. It is a decision of the runtime system. The runtime system decides from the gamer's input and the state of the game object that pulling out a gun and aiming at a target is to be animated.

If we speak of the weights of blending: The same holds here. If the avatar is walking, and the user input causes the avatar to accelerate up to running, then the blending factor is driven by the input system logic, not by the DCC package. The latter has been used to make a 100% walking animation as well as a 100% running animation: no need for weights there.

Notice please that I have absolutely no weights associated with joints directly. All weights are associated with animation tracks. Please refer back to my post where I answered Kasya on what I understand on an animation track.

Quote:Original post by godmodder
Using a simple mask could alleviate my problem, I think. I could then predefine a couple of masks like "upper-body animation" and "lower-body animation" and use these to blend the animations. Of course, this wouldn't give me a lot of fine-grained control...

I don't disagree. As said several times, there is no single "the animation system", and I never claimed that my system is superior.

[Edited by - haegarr on August 18, 2008 10:26:15 AM]
