Animation Blending

@ godmodder

Addendum

Now I think I got it :) You speak of weights to disable joints because your DCC package can only export animations of full-featured skeletons. That is entirely separate from weights for blending and/or layering. In such a case I would use flags at the joints and destroy the flagged joints on import. You can of course use externally generated masks, too. You could also keep the joints in question and ignore them later during processing, but that seems like a waste to me.
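For illustration, the import-side pruning could look something like this (the struct and field names are invented for the example):

#include <algorithm>
#include <string>
#include <vector>

// Hypothetical imported joint record; 'disabled' is the flag set by the artist in the DCC package.
struct ImportedJoint {
    std::string name;
    bool disabled;
    // ... key data, parent index, etc. ...
};

// Destroy the flagged joints right at import time, so the runtime never sees them.
void PruneFlaggedJoints(std::vector<ImportedJoint>& joints) {
    joints.erase(std::remove_if(joints.begin(), joints.end(),
                                [](const ImportedJoint& j) { return j.disabled; }),
                 joints.end());
}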
Quote:Original post by haegarr
An example would be aiming with a gun while walking. The walking animation, if done alone, animates the entire body, including the swinging of the arms. When layering the aiming animation on top of it, the swinging of the arm that holds the gun should be suppressed.


Yeah, I think we are talking at cross purposes here. We use a node-based system (google "morpheme connect" to find out more), where an anim is a node and so is a blend operation. We can then provide bone masks to limit evaluation on a given connection. That allows us to reuse entire blend networks - not just limit the result of a single anim - we can limit the result of an entire set of blended anims.
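For readers who haven't seen such a setup, a minimal sketch of the idea (this is not morpheme's actual API; the types, the euler-angle pose and the 32-bone limit are assumptions made for the example):

#include <cstdint>
#include <vector>

// A pose is just one transform per bone, kept minimal for the sketch.
struct Transform { float t[3]; float r[3]; };   // translation + euler rotation
using Pose = std::vector<Transform>;

// Every node in the network produces a pose, restricted to the bones enabled in 'mask'.
struct Node {
    virtual ~Node() = default;
    virtual void Evaluate(float time, uint32_t mask, Pose& out) const = 0;
};

// Leaf node: for the sketch it returns a fixed pose instead of sampling a real clip.
struct ClipNode : Node {
    Pose pose;
    void Evaluate(float, uint32_t mask, Pose& out) const override {
        for (size_t b = 0; b < out.size() && b < pose.size() && b < 32; ++b)
            if (mask & (1u << b)) out[b] = pose[b];
    }
};

// Interior node: blends two whole sub-networks, limited by the same mask.
struct BlendNode : Node {
    const Node* a = nullptr; const Node* b = nullptr; float weight = 0.5f;
    void Evaluate(float time, uint32_t mask, Pose& out) const override {
        Pose pa = out, pb = out;
        a->Evaluate(time, mask, pa);
        b->Evaluate(time, mask, pb);
        for (size_t i = 0; i < out.size() && i < 32; ++i)
            if (mask & (1u << i))
                for (int k = 0; k < 3; ++k) {
                    out[i].t[k] = pa[i].t[k] * (1.0f - weight) + pb[i].t[k] * weight;
                    out[i].r[k] = pa[i].r[k] * (1.0f - weight) + pb[i].r[k] * weight;
                }
    }
};

Because the mask sits on the connection rather than on the anim, the same sub-network can be evaluated for the upper body in one place and for the lower body in another.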

Quote:Original post by haegarr
Masking is a binary decision, while weighting is (more or less) continuous.


I never said it was binary - there's no reason why you can't blend the resulting transforms via a weight - or a set of weights for each track if needed. I.e. think about an anim as a single node. We can ask (via a mask) for a set of TMs to be evaluated, and the result can pass on to the next operation. We can also set a second mask to get the rest of the transforms. The result is exactly the same amount of work as evaluating the entire anim - but the results can be fed into separate networks and blended with IK or physics as needed. So if you request the results of the upper body, you can blend that with an aim anim - or an aim IK modifier. As I said before, we have over 20 ways of blending anims because there is no single blending op that does everything you need.

Quote:I don't weight individual channels, i.e. the modifier of an individual position or orientation, but the entirety of channels of a particular _active_ animation (I use "animation" as the modification of a couple of bones).


What happens when you have a locomotion blend system set up for basic movement, but you wish to make modifications to specific bones? For example, the character is walking (via blended anims), aiming his gun (via IK or an anim), and has just been shot in the arm (with the reaction handled in physics)? You can't simply blend the physics with the animation data, for a pretty obvious reason - the physics has to modify the result of the animations.

So in your case, the aim would be done first as a layer, then what? You need to get the current walk pose from a set of blended anims, and then add in the physics? But then you can't do that, because the physics needs higher priority? But then the physics can't have a higher priority since it needs the results of the blended animations?
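To make the ordering issue concrete, the evaluation order being argued about is roughly this (a sketch with invented function names, not any particular engine's API):

#include <vector>

struct Transform { float t[3]; float r[3]; };
using Pose = std::vector<Transform>;

// Stubs standing in for the real systems; the names are made up for the example.
Pose EvaluateLocomotionBlend(float /*time*/)        { return Pose(32); }
void ApplyAimLayer(Pose& /*pose*/, float /*time*/)  { /* aim anim or aim IK on the upper body */ }
void ApplyPhysicsHitReaction(Pose& /*pose*/)        { /* impulse/ragdoll response on the arm */ }

// One frame of pose evaluation: the physics response needs the blended
// animation result as its input, so it cannot simply be another layer below it.
Pose EvaluateCharacter(float time) {
    Pose pose = EvaluateLocomotionBlend(time);  // 1. walk pose from the blended anims
    ApplyAimLayer(pose, time);                  // 2. aim layered on top of the walk
    ApplyPhysicsHitReaction(pose);              // 3. physics modifies the result of 1 and 2
    return pose;
}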

Quote:And masking would need more information on the active animation tracks than weighting needs. This is because weighting works without knowledge of the other animations, while masks must be built over all (or at least all active) animations.


Who says the masks have to be associated with *each* animation? There are N masks for a given network - you determine how big N is - and then you reuse them when and where needed. Given that a 32-bit unsigned can store masks for 32 anim tracks, the size of a mask is not that big. Ultimately you can also provide a blend-channels node if needed.
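To make the 32-bit mask idea concrete, a tiny sketch (the bone indices and mask values are of course invented):

#include <cstdint>

// One bit per track/bone, so a whole mask fits in a 32-bit unsigned.
const uint32_t kUpperBodyMask = 0x0000FFF0u;        // hypothetical spine/arm/head bones
const uint32_t kLowerBodyMask = ~kUpperBodyMask;    // the complement: pelvis/legs

inline bool BoneEnabled(uint32_t mask, int boneIndex) {
    return (mask & (1u << boneIndex)) != 0;
}

Evaluating once with kUpperBodyMask and once with kLowerBodyMask costs the same as one full evaluation, which is the point made above: the two halves can then feed different downstream operations.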

Quote:For example: I use Milkshape 3D to import my animation data, but there's no direct support for any kind of parameters like that.


Using DCC file formats for game anims isn't a good idea - for the reasons that you've noticed. I'd recommend using your own custom file format that does have the data you need - add in some form of asset converter/compiler and be as hacky as you want there. If you do happen to change DCC packages, it's trivial to extend the asset converter.
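As an illustration of "your own custom format" - nothing more than a sketch, the fields are just what such a converter might emit:

#include <cstdint>

// Hypothetical header for a converted animation file. The asset converter
// reads the DCC export and writes this, followed by the track and key data.
struct AnimFileHeader {
    char     magic[4];      // e.g. "ANIM"
    uint32_t version;       // bump when the layout changes
    uint32_t numTracks;     // tracks kept after pruning/masking
    uint32_t numKeys;       // total key count across all tracks
    float    duration;      // clip length in seconds
};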

Quote:If we still speak of the weights of layers: There is simply no sense in pre-defining them, and hence no need for DCC package support. It is a decision of the runtime system. The runtime system decides from the gamer's input and the state of the game object that pulling out a gun and aiming at a target is to be animated.


I disagree - you just need a DCC package and ingame format that's better suited to the job ;)

Quote:You can of course use externally generated masks, too. You could also keep the joints in question and ignore them later during processing, but that seems like a waste to me.


Not at all. In some cases it would be a waste, but if you have a way of describing all possible blend combinations, you can determine which tracks on which anims can be discarded as a pre-processing step (i.e. at the asset converter stage). Then again, the system I'm describing is 100% data driven, so part of the asset is the description of the blending system for a character.
@ RobTheBloke

Due to the very different animation systems, we've talked at cross purposes for sure. Even the understanding of masking wasn't the same.

I don't doubt that my animation system, whichever approach I choose, can never compete with a professional solution... but that shouldn't keep me from trying ;) So you got me curious about morpheme, but unfortunately the information available at naturalmotion.com seems rather sparse. I'll have to try to find free tutorials or something similar...
Quote:Original post by RobTheBloke
What happens when you have a locomotion blend system set up for basic movement, but you wish to make modifications to specific bones? For example, the character is walking (via blended anims), aiming his gun (via IK or an anim), and has just been shot in the arm (with the reaction handled in physics)? You can't simply blend the physics with the animation data, for a pretty obvious reason - the physics has to modify the result of the animations.

So in your case, the aim would be done first as a layer, then what? You need to get the current walk pose from a set of blended anims, and then add in the physics? But then you can't do that, because the physics needs higher priority? But then the physics can't have a higher priority since it needs the results of the blended animations?

Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as the goal.
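A minimal sketch of what I mean by "interpolates with the IK pose as the goal" - the solver itself is left out, only the time-based blend toward its result is shown, and the euler-angle pose is just an assumption to keep it short:

#include <vector>

struct JointRotation { float x, y, z; };        // euler angles, for brevity
using Pose = std::vector<JointRotation>;

// Blend the current pose toward the pose an IK solver produced, over blendTime.
// 'elapsed' is the time since the IK animation became active.
void BlendTowardIKGoal(Pose& pose, const Pose& ikGoal, float elapsed, float blendTime) {
    float w = blendTime > 0.0f ? elapsed / blendTime : 1.0f;
    if (w > 1.0f) w = 1.0f;
    for (size_t i = 0; i < pose.size() && i < ikGoal.size(); ++i) {
        pose[i].x += (ikGoal[i].x - pose[i].x) * w;
        pose[i].y += (ikGoal[i].y - pose[i].y) * w;
        pose[i].z += (ikGoal[i].z - pose[i].z) * w;
    }
}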

However, IK is no problem since it is an "active" animation, and as such can be foreseen by the artist. The physics-based influence you've described above is, say, "passive" instead. I think you're right when you say that it cannot be implemented as an animation with an animation system like mine (at least not without modification). Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.

But the weakness you've pointed out is perhaps a flaw in the integration of the systems. The morpheme solution seems to allow an intertwining (hopefully the correct word) where I have a strict separation. Intertwining obviously allows for better control by the designer. That is certainly worth thinking about...
Quote:Original post by haegarr

Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as the goal.



I want to create an animation system and a physical ragdoll. But I've read that a ragdoll needs to be FK. If I use an animation system like this:

#include <vector>

class CSkeleton;   // defined elsewhere

class CKeyframe {
public:
    float time;
    float value;
};

enum AnimationTrackType {
    TRANSLATEX = 0,
    TRANSLATEY,
    TRANSLATEZ,
    ROTATEX,
    ROTATEY,
    ROTATEZ,
    // And lots of other stuff
};

class CAnimationTrack {
public:
    int trackID;                       // To find a track by ID
    int boneID;
    AnimationTrackType trackType;
    std::vector<CKeyframe *> keys;
};

class CAnimation {
public:
    std::vector<CAnimationTrack *> tracks;
    char animation_name[32];
    float startTime, endTime;

    CAnimation();
    ~CAnimation();
    void Animate(CSkeleton *skel);
    void AddTrack(int trackId, AnimationTrackType trackType);
    void AddKeyframeToTrack(int trackId, float time, float value);
    void SetName(const char name[32]);
    CAnimationTrack *FindTrackById(int trackID);
    CKeyframe *FindKeyframeInTrack(int trackID, float time);
    static void Blend(CKeyframe *outKeyframe, CKeyframe *keyPrev,
                      CKeyframe *keyCurrent, float blend_factor);
};


Can I use FK? And can I create a ragdoll? If yes, I'll write my animation system.
If no, how do I need to write it?

Thanks,
Kasya

[Edited by - Kasya on August 18, 2008 10:37:15 PM]
All structures with local co-ordinate frames are inherently FK-enabled. AFAIK skeletons are always implemented with local frames. Even if more than a single bone chain can be used, they are still bone _chains_, i.e. a joint's position and orientation denote the origin and axes of a local frame w.r.t. its parental bone. So, as long as you don't set skeleton-global or even world-global joint parameters from the animations, you automatically have FK working.
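In code, the FK step is just walking the hierarchy and concatenating the local joint transforms. A sketch (the matrix layout and the Joint struct are assumptions for the example, not Kasya's classes):

#include <vector>

// 4x4 column-major matrix, just enough for the example.
struct Mat4 {
    float m[16];
    static Mat4 Identity() {
        Mat4 r = {};
        r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
        return r;
    }
};

// r = a * b
Mat4 Mul(const Mat4& a, const Mat4& b) {
    Mat4 r = {};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c * 4 + row] += a.m[k * 4 + row] * b.m[c * 4 + k];
    return r;
}

struct Joint {
    int  parent;      // -1 for the root
    Mat4 local;       // joint transform relative to its parent (what the anims write)
};

// Forward kinematics: concatenate local transforms down the chains.
// Joints must be ordered so that a parent always precedes its children.
std::vector<Mat4> ComputeWorldTransforms(const std::vector<Joint>& skeleton) {
    std::vector<Mat4> world(skeleton.size(), Mat4::Identity());
    for (size_t i = 0; i < skeleton.size(); ++i)
        world[i] = skeleton[i].parent < 0
                 ? skeleton[i].local
                 : Mul(world[skeleton[i].parent], skeleton[i].local);
    return world;
}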
What does that mean? If it's in local coordinates, how can I transform it? If I transform it, that means I'm making world coordinates. That means I first need to use FK and then transform.

I forgot to ask one question: what is FK for? For physics, or something else?

Thanks,
Kasya
Any co-ordinate is meaningful only if you also know the co-ordinate system (or frame) to which it is related. Meshes are normally defined w.r.t. a local frame, i.e. their vertex positions, normals and so on are to be interpreted in that frame.

When you transform the co-ordinates but still interpret the result relative to the same frame, you've actually changed the positions and normals and so on. When you look at the mesh in the world, you'll see it translated or rotated or whatever.

On the other hand, if you interpret the frame as a transformation, apply it to the mesh, and (important!) interpret the result w.r.t. the _parental_ frame, then you've changed the co-ordinates but in the world the mesh is at the same location and shows also the same orientation!

Hence you have to distinguish for what purpose you apply a transformation: Do you want to re-locate and/or re-orientate the mesh, or do you want to change the reference system only? A local-to-global transformation is for changing the reference system only, nothing else.


FK means that co-ordinate frames are related to each other, and transforming a parental frame has an effect on the child frames, but changing a child frame has no effect on the parental frame. IK, on the other hand, means that changing a child frame _has_ an effect on the parental frame.
Quote:So you got me curious about morpheme, but unfortunately the information available at naturalmotion.com seems rather sparse. I'll have to try to find free tutorials or something similar...

There isn't much info about it on the internet. Unlike endorphin, there's no learning edition, so no tutorials are available.

Quote:Original post by haegarr
Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as the goal.


IK gives you a rigid fixed solution - by blending that result with animation you can improve the visual look - i.e. overlay small motions caused by breathing etc.
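In the simplest case that overlay is just an additive offset per affected joint, something like this (a sketch; euler angles are used only to keep it short, quaternions are the usual choice):

// Per joint: take the IK solution and add a small animated delta on top of it,
// e.g. a breathing/idle clip authored as an offset from the bind pose.
struct Euler { float x, y, z; };

Euler OverlayOnIK(const Euler& ikResult, const Euler& animDelta, float weight) {
    return { ikResult.x + animDelta.x * weight,
             ikResult.y + animDelta.y * weight,
             ikResult.z + animDelta.z * weight };
}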

Quote:Original post by haegarr
However, IK is no problem since it is an "active" animation, and as such can be foreseen by the artist.


Unless the IK solution is running in response to game input - i.e. aiming at a moving point. The only way the animator can foresee that is to have tools that interact with the engine - more or less what morpheme is designed to do, really.

Quote:Original post by haegarr
Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.


yup - but we need more fidelity and control than that would allow.

Quote:The morpheme solution seems to allow an intertwining (hopefully the correct word) where I have a strict separation.


More or less what it was designed to do. Most of the stuff we do is AI-driven animation, utilising physics and other bits and bobs (you can probably find some euphoria demos around relating to GTA and Star Wars) - it just so happens we need a good way to combine all of those things - hence morpheme.

Quote:Can I use FK? And can I create a ragdoll? If yes, I'll write my animation system.
If no, how do I need to write it?


That's a pretty good starting point. I'd probably suggest trying two small, simple demos first though - the anim system, and a basic physics ragdoll. It's then a bit easier to see how to merge the two into one all-singing, all-dancing system.
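For the anim-system demo, the Blend() you've declared could start out as plain linear interpolation between two keys - a sketch against the CKeyframe class you posted, nothing more:

// Linear interpolation between the previous and the current key of one track.
// blend_factor is expected in [0, 1]: 0 returns keyPrev, 1 returns keyCurrent.
void CAnimation::Blend(CKeyframe *outKeyframe, CKeyframe *keyPrev,
                       CKeyframe *keyCurrent, float blend_factor) {
    outKeyframe->time  = keyPrev->time  + (keyCurrent->time  - keyPrev->time)  * blend_factor;
    outKeyframe->value = keyPrev->value + (keyCurrent->value - keyPrev->value) * blend_factor;
}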

Quote:What does that mean? If it's in local coordinates, how can I transform it? If I transform it, that means I'm making world coordinates. That means I first need to use FK and then transform.

A bit old, but it might give you a few ideas. A lot of the code can be improved fairly drastically, but I'll leave that as an exercise for the reader ;)

Quote:
FK means that co-ordinate frames are related to each other, and transforming a parental frame has an effect on the child frames, but changing a child frame has no effect on the parental frame. IK, on the other hand, means that changing a child frame _has_ an effect on the parental frame.


Not quite. FK means you transform a series of parented transforms until you get an end pose. IK means you start with an end pose (the effector) and use CCD or the Jacobian to determine the positions/orientations of those bones. FK is just a standard hierarchical animation system.
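A bare-bones CCD iteration, for reference (2D and unconstrained, just to show the idea - real solvers add joint limits, damping, and work in 3D):

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Joint positions in world space: joints[0] is the chain root, joints.back() the end effector.
// One CCD pass: rotate the chain about each joint (from the effector's parent back to the
// root) so that the effector moves toward the target.
void CCDPass(std::vector<Vec2>& joints, const Vec2& target) {
    for (int i = static_cast<int>(joints.size()) - 2; i >= 0; --i) {
        const Vec2 pivot = joints[i];
        const Vec2 eff   = joints.back();
        float aEff = std::atan2(eff.y - pivot.y, eff.x - pivot.x);
        float aTgt = std::atan2(target.y - pivot.y, target.x - pivot.x);
        float d    = aTgt - aEff;                    // angle to rotate about the pivot
        float c = std::cos(d), s = std::sin(d);
        for (size_t j = i + 1; j < joints.size(); ++j) {   // rotate the rest of the chain
            float rx = joints[j].x - pivot.x, ry = joints[j].y - pivot.y;
            joints[j].x = pivot.x + rx * c - ry * s;
            joints[j].y = pivot.y + rx * s + ry * c;
        }
    }
}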
Quote:Original post by RobTheBloke
Quote:Original post by haegarr
Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as the goal.

IK gives you a rigid fixed solution - by blending that result with animation you can improve the visual look - i.e. overlay small motions caused by breathing etc.

Seconded, and since an IK controlled animation is an animation like any other, it can be blended with others.

Quote:Original post by RobTheBloke
Quote:Original post by haegarr
However, IK is no problem since it is an "active" animation, and as such can be foreseen by the artist.

Unless the IK solution is running in response to game input - i.e. aiming at a moving point. The only way the animator can forsee that is to have a tools that interact with the engine - more or less what morpheme is designed to do really.

That the artist cannot _precompute_ the animation is clear; otherwise one could use key-framing instead of IK driven animation. However, he can foresee the need for IK driven aiming, and hence integrate scripts/nodes/tracks or however that stuff is implemented, so that the system has something available that can be used for blending at all.

Quote:Original post by RobTheBloke
Quote:Original post by haegarr
Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.

yup - but we need more fidelity and control than that would allow.

I understand that. See the remaining text in the original post.

