
# Animation Blending

## 171 posts in this topic

Quote:
 Original post by godmodder: What do you mean by "fading" an animation in and out? Is it transitioning from a walking to a running animation, for example by letting the weight of the walking animation gradually decrease and the weight of the running animation increase?

Yes, that is a possible scenario.

Quote:
 Original post by godmodder: Also, this paper speaks only of blending the rotations with the SLERP function, but do translations also need to be blended?

In principle translations have to be blended too, yes, but only if they are animated. The result of blending two identical values,
vPos * b + vPos * ( 1 - b ) == vPos
is independent of the blending factor, so the blend need not be computed if you _know_ that the values are identical. In skeletons the positions are typically constant. That is one of the reasons I use tracks, so the artist _can_ animate (selected) positions.
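For illustration, here is a minimal sketch of that identity in code (the `Vec3` type and `blendPos` name are mine, not from the thread): blending two identical positions yields the same position for any blend factor, so the work can be skipped.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Blend two positions exactly as in the formula above: a*b + c*(1-b).
inline Vec3 blendPos(const Vec3& a, const Vec3& c, float b) {
    return { a.x * b + c.x * (1.0f - b),
             a.y * b + c.y * (1.0f - b),
             a.z * b + c.z * (1.0f - b) };
}
```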

Whether to use slerp or nlerp is an issue of its own; I've posted something about it somewhere earlier.

Quote:
 Original post by godmodder: As you can see, I have quite a few questions before I go and implement this technique. ;) Sorry you had to type that much, BTW!

Oh, life is hard ;) Well, my engine is definitely somewhat complex, and offering some details here gives the chance for discussion. I don't claim to have found the best way, so the discussion also helps me in perhaps revising design decisions. So don't hesitate to point out weaknesses. I'm glad about every hint that makes things better.

##### Share on other sites
Quote:
 Original post by godmodder: Also, this paper speaks only of blending the rotations with the SLERP function, but do translations also need to be blended?

Use nlerp when blending anims - it's faster and a hell of a lot easier when blending multiple anims. Normalise at the end though; then you have no reason to put constraints on the weights for the layering (additive blending, as we call it). The other advantage of using nlerp is that you can treat all anim tracks as floats, with a special-case flag for quat rotations (i.e. normalise float tracks 0,1,2,3 as a quat). It's then trivial to extend the system to handle colours / blend shape weights etc.
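As a sketch of the nlerp approach described above - a plain weighted sum of quaternion components, normalised once at the end (the `Quat` type and `nlerpBlend` name are illustrative, and it assumes all input quats already lie in the same hemisphere):

```cpp
#include <cassert>
#include <cmath>

struct Quat { float x, y, z, w; };

// Weighted sum of quaternions, normalised once at the end (nlerp-style).
// Weights may sum past 1; the final normalise takes care of it.
Quat nlerpBlend(const Quat* q, const float* weight, int count) {
    Quat sum = {0.0f, 0.0f, 0.0f, 0.0f};
    for (int i = 0; i < count; ++i) {
        sum.x += q[i].x * weight[i];
        sum.y += q[i].y * weight[i];
        sum.z += q[i].z * weight[i];
        sum.w += q[i].w * weight[i];
    }
    float len = std::sqrt(sum.x*sum.x + sum.y*sum.y + sum.z*sum.z + sum.w*sum.w);
    sum.x /= len; sum.y /= len; sum.z /= len; sum.w /= len;
    return sum;
}
```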

##### Share on other sites
Quote:
 Original post by RobTheBloke: Use nlerp when blending anims - it's faster and a hell of a lot easier when blending multiple anims. Normalise at the end though; then you have no reason to put constraints on the weights for the layering (additive blending, as we call it). The other advantage of using nlerp is that you can treat all anim tracks as floats, with a special-case flag for quat rotations (i.e. normalise float tracks 0,1,2,3 as a quat). It's then trivial to extend the system to handle colours / blend shape weights etc.

Seconded, with the exception of dropping the constraints on the weights for the layering. How do you prevent lower-priority animations from being added if there is no criterion like "the sum is full"?

[Edited by - haegarr on August 18, 2008 6:46:50 AM]

##### Share on other sites
Hello,
What is an Animation Track? Is it something like

```cpp
class AnimationTrack {
public:
    std::vector<Keyframe> keyframes;
    unsigned int BoneID;
};
```

?

Thanks,
Kasya

##### Share on other sites
More or less. That track will be fine if you only want skeletal animation, but you could simplify it to:

```cpp
struct KeyFrame
{
    float time;
    float value;
};

enum AnimTrackType
{
    kRotateX, kRotateY, kRotateZ, kRotateW,
    kTranslateX, kTranslateY, kTranslateZ,
    kScaleX, kScaleY, kScaleZ,
    kColorR, kColorG, kColorB,
    // and any others you might need....
};

struct AnimTrack
{
    int someNodeId;
    AnimTrackType type;
    std::vector<KeyFrame> keys;
};
```

Though there is a *lot* of optimisation possible in the above code...

##### Share on other sites
question #1: Can I not use AnimTrackType? My Keyframe class is

```cpp
class CKeyframe {
public:
    CVector3 vPos;
    CQuat qRotate;
};
```

question #2: is int someNodeID; the Bone ID if I use a bone system?

question #3: Do I need Scale or Color? If yes, where would I use them?

Thanks,
Kasya

##### Share on other sites
Quote:
 Original post by haegarr: How do you prevent lower-priority animations from being added if there is no criterion like "the sum is full"?

Depends on how the rest of your system works, I guess. In our case we don't need to do that, and in the majority of situations our clients actually want the weights to sum past 1. A very simple example is a spine-rotated-back and a spine-rotated-forward pose. Given a basic walk, if the character is going uphill, add a bit of the spine-forward pose to your walk animation - then normalise. Obviously the translation tracks should not be present in the two poses (which is normally the case for most bones other than the character root). Mind you, our system has about 20 different ways to blend animations - mainly because there really is no 'one size fits all' blending scheme you can use.

##### Share on other sites
Quote:
 Original post by Kasya: question #1: Can I not use AnimTrackType? My Keyframe class is

Which is probably not ideal. Imho, storing the keyframe as a quat + translation will probably give you a very large amount of redundant data. It's very, very rare that an animator will modify the translation on anything other than the character root. Also, by storing the keyframe as an explicit keyframe type you lose the generalisation that's trivial to add into the system at the start - e.g. what happens when you want to animate the parameters of a particle system (say, particle colour, emission rate etc.)?

Quote:
 Original post by Kasya: question #2: is int someNodeID; the Bone ID if I use a bone system?

When I say 'some node' I mean it quite literally - it could be the id of a bone, the id of a material, the id of a particle system etc. By explicitly using a 'bone' id you are restricting your system to animating bones only. By taking a small step towards generalisation, you can make the system store animated colours, blend shape weights, even point animation. The easiest way of doing this is to simply store all keyframes as floats, and use some enum identifier to let you know what data each track relates to.

Ultimately, there is only one thing that distinguishes the handling of rotations from anything else you can animate - i.e. the quats need to be normalised before converting to matrices. So you might as well store all tracks as independent float tracks, instead of float/vec2/vec3/vec4/quat track types.
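A sketch of that "everything is a float track" idea: per-track values are plain floats, and a flag marks where four consecutive values form a quat that gets renormalised at the end (the `TrackKind` enum and `finalisePose` name are mine, purely illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// kQuatStart marks floats i..i+3 as the components of one quaternion;
// every other value is just a float and needs no special handling.
enum TrackKind { kFloat, kQuatStart };

void finalisePose(std::vector<float>& values, const std::vector<TrackKind>& kinds) {
    for (size_t i = 0; i < kinds.size(); ++i) {
        if (kinds[i] == kQuatStart) {
            float len = std::sqrt(values[i]*values[i] + values[i+1]*values[i+1] +
                                  values[i+2]*values[i+2] + values[i+3]*values[i+3]);
            for (int j = 0; j < 4; ++j) values[i + j] /= len;
        }
    }
}
```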

Quote:
 Original post by Kasya: question #3: Do I need Scale or Color? If yes, where would I use them?

Nope, but there's a good chance that at some point in the future you'll want to expand the system to handle something other than just rotation and translation. I used scale and colour as examples, but you might need to animate ambient/diffuse/specular/emission/shininess/uv scrolling etc. All of those can be described as a number of floats.


Thanks!

##### Share on other sites
Quote:
 Original post by Kasya: ...What is an Animation Track?...

I use animation tracks similar to your assumption, but a bit more complicated. I'll explain it coarsely in the following, but please notice that every system has other edge constraints, so you may find another or simply less complex solution better suited to your needs. In the following, words with a capital first letter normally denote a C++ class.

The global Timeline is controlled by the Runtime::tick, which reflects the current moment in time but is frozen during the processing of a video frame. I've mentioned this several times before. The Runtime::tick (indirectly) denotes the position of the virtual playhead of the Timeline. In fact a Timebase sits in-between, but that should not play a role here.

An Animation::Resource is a kind of prototype for animations and as such counts among the assets. If well defined, it consists of one or more channels, where each channel modifies a targeted value. Targeting is done by a naming scheme, allowing me to bind the animation instance later to any target in principle. You have specified a BoneID in your suggestion; that is somewhat equivalent. But notice that the said channels are _not_ bound to actual targets, since we are still speaking of a prototype resource. One subtype of channel is implemented as a key-frame interpolation. Other classes exist as well, but are not of interest here.

Now, when the system decides to animate a skeleton, it gets a Track from the global Timeline. The Track is specifically an AnimationTrack, since it is also used to handle the animation grouping for blending and layering. Then a new Animation instance is created and backed with the selected Animation::Resource. The Animation instance is a Timeline::Task, and as such can and will be added to the previously obtained AnimationTrack. The Animation instance is bound to the skeleton instance, which means that the names available from the Animation::Resource channels are resolved w.r.t. the skeleton instance. Names that cannot be resolved are simply ignored (over-determined animation), and names for which no channel exists are simply not hit (under-determined animation; this is where my "an animation should not need to control _all_ bones" happens).

BTW: Please notice that an Animation instance is added to the Timeline similarly to how a skeleton instance is added to the Scene graph. Skeletons are also defined two-fold in my engine, namely as a Linkage::Resource and a Linkage instance.

What do we have now? There is an Animation instance on the Timeline, reflecting the current state of the animation and its bindings. The instance is backed by an Animation::Resource which holds the static definition of the animation. There is further a Linkage instance in the Scene graph, reflecting the current state (i.e. all current bone positions, rotations and so on) of the skeleton. The instance is backed by a Linkage::Resource which defines the static parts of the skeleton, e.g. its principal structure and the rest pose.

When time elapses and the Runtime::tick moves the Timeline's playhead over the Animation instance (remember, it is a kind of Timeline::Task), the Animation instance processes all channels as described in a previous post. That means the targeted positions and orientations are set to whatever the channels compute as the current value, given the playhead position, Animation::start, and the inner definition of the channel.
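A minimal sketch of what such a key-frame channel might compute when the playhead passes over it - find the surrounding pair of keys and interpolate linearly (the `Key` struct and `evaluateChannel` name are illustrative, not haegarr's actual classes):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Key { float time; float value; };

// Evaluate a key-framed channel at the playhead time t.
// Clamps to the first/last key outside the keyed range.
float evaluateChannel(const std::vector<Key>& keys, float t) {
    if (t <= keys.front().time) return keys.front().value;
    if (t >= keys.back().time)  return keys.back().value;
    for (size_t i = 1; i < keys.size(); ++i) {
        if (t <= keys[i].time) {
            const Key& a = keys[i - 1];
            const Key& b = keys[i];
            float f = (t - a.time) / (b.time - a.time);
            return a.value * (1.0f - f) + b.value * f;
        }
    }
    return keys.back().value;
}
```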

EDIT: Don't forget that I'm describing one system. I don't force you to do the same. Please feel free to pick out whatever idea seems suitable to you.

##### Share on other sites
Uhh, writing my previous post has de-synchronized me from the thread's progression ... :)

Quote:
Original post by RobTheBloke
Quote:
 Original post by haegarr: How do you prevent lower-priority animations from being added if there is no criterion like "the sum is full"?

Depends on how the rest of your system works, I guess. In our case we don't need to do that, and in the majority of situations our clients actually want the weights to sum past 1. A very simple example is a spine-rotated-back and a spine-rotated-forward pose. Given a basic walk, if the character is going uphill, add a bit of the spine-forward pose to your walk animation - then normalise. Obviously the translation tracks should not be present in the two poses (which is normally the case for most bones other than the character root). Mind you, our system has about 20 different ways to blend animations - mainly because there really is no 'one size fits all' blending scheme you can use.

It seems to me that we have a different understanding of layering. I mean a kind of blending that is able to suppress another animation from being added at all, simply because the higher-prioritized animation is processed earlier and "fills up the sum", so that "no more space" is left for the subsequent animation.

##### Share on other sites
Quote:
 Original post by haegarr: I mean a kind of blending that is able to suppress another animation from being added at all, simply because the higher-prioritized animation is processed earlier and "fills up the sum", so that "no more space" is left for the subsequent animation.

Do you have a use case for that? I'm assuming you mean targeting an upper-body anim on top of a lower-body anim (or something like that)? But then, why not use a mask for that? i.e. eval tracks 1,2,8,9 only... Since the system (when using nlerp) is basically a simple sum, that shouldn't really cause a problem in those cases. It seems that:

```cpp
if (mask_used) {
    foreach track to eval
} else {
    foreach track
}
```

is better than

```cpp
foreach track {
    if (weight < 1)
    {
    }
}
```

The former is easier to multithread, not to mention it avoids the additional memory cost of storing an extra weight for each track.

##### Share on other sites
This makes me wonder, haegarr: how exactly do you supply these extra weights per animation? For example, I use Milkshape 3D to import my animation data, but there's no direct support for any kind of parameters like that. The best I can do is supply these extra weights in the supported joint "comments", but that feels kind of like a hack. Using a simple mask could alleviate my problem, I think. I could then predefine a couple of masks like "upper-body animation" and "lower-body animation" and use these to blend the animations. Of course this wouldn't give me a lot of fine-grained control...

##### Share on other sites
@ RobTheBloke

An example would be aiming with a gun while walking. The walking alone means animation of the entire body, including the swinging of the arms. When layering the aiming animation on top of it, the swinging of the arm that holds the gun should be suppressed.

Masking is a binary decision, while weighting is (more or less) continuous. The additional memory costs are negligible. I don't weight individual channels, i.e. the modifiers of an individual position or orientation, but the entirety of channels of a particular _active_ animation (I use "animation" to mean the modification of a couple of bones).

Also, masking would need more information about the active animation tracks than weighting does. This is because weighting works without knowledge of the other animations, while masks must be built over all (or at least all active) animations.

I don't say that masking is bad. It is just an approach that doesn't seem to fit well into the animation system I described in the posts above. I think we have a totally different approach to animations at the top abstraction level, haven't we?

[Edited by - haegarr on August 18, 2008 10:07:58 AM]

##### Share on other sites
Quote:
 Original post by godmodder: This makes me wonder, haegarr, how exactly do you supply these extra weights per animation? For example, I use Milkshape 3D to import my animation data, but there's no direct support for any kind of parameters like that. The best I can do is supply these extra weights in the supported joint "comments", but that feels kind of like a hack.

If we're still speaking of the weights of layers: there is simply no sense in pre-defining them, and hence no need for DCC package support. It is a decision of the runtime system. The runtime system decides from the gamer's input and the state of the game object that pulling out a gun and aiming at a target is to be animated.

If we speak of the weights of blending: the same holds here. If the avatar is walking, and user input causes the avatar to accelerate up to running, then the blending factor is driven by the input system logic, not by the DCC package. The latter has been used to make a 100% walking animation as well as a 100% running animation: no need for weights there.

Please notice that I have absolutely no weights associated with joints directly. All weights are associated with animation tracks. Please refer back to my post where I answered Kasya on what I understand by an animation track.

Quote:
 Original post by godmodder: Using a simple mask could alleviate my problem, I think. I could then predefine a couple of masks like "upper-body animation" and "lower-body animation" and use these to blend the animations. Of course this wouldn't give me a lot of fine-grained control...

No contradiction from me. As said several times, there is no such thing as "the animation system", and I never claimed that my system is superior.

[Edited by - haegarr on August 18, 2008 10:26:15 AM]

##### Share on other sites
@ godmodder

Now I think I've got it :) You speak of weights to disable joints, since your DCC package is only able to export animations of fully featured skeletons. That is totally apart from weights for blending and/or layering. In such a case I would use flags at the joints, and discard the flagged joints on import. You can of course use externally generated masks, too. You could further keep the joints in question and ignore them later during processing, but that seems a waste to me.

##### Share on other sites
Quote:
 Original post by haegarr: An example would be aiming with a gun while walking. The walking alone means animation of the entire body, including the swinging of the arms. When layering the aiming animation on top of it, the swinging of the arm that holds the gun should be suppressed.

Yeah, I think we are talking at cross purposes here. We use a node-based system (google "morpheme connect" to find out more), where an anim is a node, and so is a blend operation. We can then provide bone masks to limit evaluation on a given connection. That gives us the ability to reuse entire blend networks - not just limit the result of a single anim; we can limit the result of an entire set of blended anims.

Quote:
 Original post by haegarr: Masking is a binary decision, while weighting is (more or less) continuous.

I never said it was binary - there's no reason why you can't blend the resulting transforms via a weight - or a set of weights for each track if needed. I.e. think about an anim as a single node. We can ask (via a mask) for a set of TMs to be evaluated, and the result can pass on to the next operation. We can also set a second mask to get the rest of the transforms. The result is exactly the same amount of work as evaluating the entire anim - but the results can be fed into separate networks and blended with IK or physics as needed. So if you request the results of the upper body, you can blend that with an aim anim - or an aim IK modifier. As I said before, we have over 20 ways of blending anims, because there is no single blending op that does everything you need.

Quote:
I don't weight individual channels, i.e. the modifiers of an individual position or orientation, but the entirety of channels of a particular _active_ animation (I use "animation" to mean the modification of a couple of bones).

What happens when you have a locomotion blend system set up for basic movement, but you wish to make modifications to specific bones? For example, the character is walking (via blended anims), aiming his gun (via IK or an anim), and has just been shot in the arm (with the reaction handled in physics)? You can't simply blend the physics with the animation data, for a pretty obvious reason - the physics has to modify the result of the animations.

So in your case, the aim would be done first as a layer, then what? You need to get the current walk pose from a set of blended anims, and then add in the physics. But you can't do that, because the physics needs higher priority - and the physics can't have a higher priority, since it needs the results of the blended animations.

Quote:
And masking would need more information about the active animation tracks than weighting does. This is because weighting works without knowledge of the other animations, while masks must be built over all (or at least all active) animations.

Who says the masks have to be associated with *each* animation? There are N masks for a given network - you determine how big N is - and you reuse them when and where needed. Given that a 32-bit unsigned can store masks for 32 anim tracks, the size of a mask is not that big. Ultimately you can also provide a blend-channels node if needed.
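The bitmask idea above can be sketched in a few lines (the helper names `trackEnabled` / `enableTrack` are mine): one 32-bit unsigned holds an enable bit per anim track.

```cpp
#include <cassert>
#include <cstdint>

// A 32-bit mask covers 32 anim tracks: bit i set means "evaluate track i".
inline bool trackEnabled(std::uint32_t mask, int track) {
    return ((mask >> track) & 1u) != 0u;
}

inline std::uint32_t enableTrack(std::uint32_t mask, int track) {
    return mask | (1u << track);
}
```

Masks built this way can be shared between animations, which is why they need not be stored per animation.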

Quote:
 For example: I use Milkshape 3D to import my animation data, but there's no direct support for any kind of parameters like that.

Using DCC file formats for game anims isn't a good idea - for the reasons you've noticed. I'd recommend using your own custom file format that does have the data you need - add in some form of asset converter/compiler and be as hacky as you want there. If you do happen to change DCC packages, it's trivial to extend the asset converter.

Quote:
 If we still speak of the weights of layers: There is simply no sense in pre-defining them, and hence no need for DCC package support. It is a decision of the runtime system. The runtime system decides from the gamer's input and the state of the game object that pulling out a gun and aiming at a target is to be animated.

I disagree - you just need a DCC package and ingame format that's better suited to the job ;)

Quote:
You can of course use externally generated masks, too. You could further keep the joints in question and ignore them later during processing, but that seems a waste to me.

Not at all. In some cases it would be a waste, but if you have a way of describing all possible blend combinations, you can determine which tracks on which anims you can discard as a pre-processing step (i.e. the asset converter stage). Then again, the system I'm describing is 100% data driven, so part of the asset is the description of the blending system for a character.

##### Share on other sites
@ RobTheBloke

Due to the very different animation systems, we've been talking at cross purposes for sure. Even our understanding of masking wasn't the same.

I don't doubt that my animation system, whichever approach I choose, can never compete with a professional solution ... but that doesn't mean I shouldn't try ;) So you got me curious about morpheme, but unfortunately the information available at naturalmotion.com seems rather sparse. I'll have to try to find free tutorials or something similar ...

##### Share on other sites
Quote:
 Original post by RobTheBloke: What happens when you have a locomotion blend system set up for basic movement, but you wish to make modifications to specific bones? For example, the character is walking (via blended anims), aiming his gun (via IK or an anim), and has just been shot in the arm (with the reaction handled in physics)? You can't simply blend the physics with the animation data, for a pretty obvious reason - the physics has to modify the result of the animations. So in your case, the aim would be done first as a layer, then what? You need to get the current walk pose from a set of blended anims, and then add in the physics. But you can't do that, because the physics needs higher priority - and the physics can't have a higher priority, since it needs the results of the blended animations.

Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned this somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as the goal.

However, IK is no problem because it is an "active" animation, and as such can be foreseen by the artist. The physics-based influence you've described above is, say, "passive" instead. I think you're right in saying that it cannot be implemented as an animation when using an animation system like mine (at least not without modification). Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.

But the weakness you've pointed out is perhaps a flawed integration of the systems. The morpheme solution seems to allow an intertwining (hopefully the correct word) where I have a strict separation. Intertwining obviously allows for better control by the designer. That is certainly worth thinking about ...

##### Share on other sites
Quote:
 Original post by haegarr: Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned this somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as the goal.

I want to create an animation system and a physical ragdoll. But I've read that a ragdoll needs FK. If I use an animation system like

```cpp
class CKeyframe {
public:
    float time;
    float value;
};

enum AnimationTrackType {
    TRANSLATEX = 0,
    TRANSLATEY,
    TRANSLATEZ,
    ROTATEX,
    ROTATEY,
    ROTATEZ,
    // And lots of other stuff
};

class CAnimationTrack {
public:
    int trackID;    // To find a track by ID
    int boneID;
    AnimationTrackType trackType;
    std::vector<CKeyframe *> keys;
};

class CAnimation {
public:
    std::vector<CAnimationTrack *> tracks;
    char animation_name[32];
    float startTime, endTime;

    CAnimation();
    ~CAnimation();
    void Animate(CSkeleton *skel);
    void AddTrack(int trackId, AnimationTrackType trackType);
    void AddKeyframeToTrack(int trackId, float time, float value);
    void SetName(char name[32]);
    CAnimationTrack *FindTrackById(int trackID);
    CKeyframe *FindKeyframeInTrack(int trackID, float time);
    static void Blend(CKeyframe *outKeyframe, CKeyframe *keyPrev,
                      CKeyframe *keyCurrent, float blend_factor);
};
```

Can I use FK? And can I create a ragdoll? If yes, I'll write my animation system.
If no, how should I write it?

Thanks,
Kasya

[Edited by - Kasya on August 18, 2008 10:37:15 PM]

##### Share on other sites
All structures with local co-ordinate frames are inherently FK-enabled, and AFAIK skeletons are always implemented with local frames. Even if more than a single bone chain is used, they are still bone _chains_, i.e. a joint's position and orientation denote the axes of a local frame w.r.t. its parent bone. So, as long as you don't set the skeleton-global or even world-global joint parameters directly from the animations, FK works automatically.
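A minimal FK sketch along those lines, reduced to one translation axis for brevity (the `Joint` struct and `worldPositions` name are illustrative): each joint stores only its offset relative to its parent, and the world positions fall out of walking the chain from the root.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// parent == -1 marks the root; parents are stored before their children.
struct Joint { int parent; float localX; };

std::vector<float> worldPositions(const std::vector<Joint>& joints) {
    std::vector<float> world(joints.size());
    for (size_t i = 0; i < joints.size(); ++i) {
        // FK: a joint's world position is its local offset composed
        // with its parent's already-computed world position.
        world[i] = joints[i].localX +
                   (joints[i].parent >= 0 ? world[joints[i].parent] : 0.0f);
    }
    return world;
}
```

Moving a parent's local offset moves every descendant, while moving a child leaves the parent untouched - which is exactly the FK property described above.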

##### Share on other sites
What does that mean? If it's in local co-ordinates, how can I transform it? If I transform it, that means I'm making world co-ordinates. That means I first need to use FK and then transform.

I forgot to ask one question:
What is FK for? For physics or something else?

Thanks,
Kasya

##### Share on other sites
A co-ordinate is meaningful if and only if you also know the co-ordinate system (or frame) to which the co-ordinate is related. Meshes are normally defined w.r.t. a local frame, i.e. their vertex positions, normals and so on are to be interpreted in that frame.

When you transform the co-ordinates but still interpret the result relative to the same frame, you've actually changed the positions and normals and so on. When you look at the mesh in the world, you'll see it translated or rotated or whatever.

On the other hand, if you interpret the frame as a transformation, apply it to the mesh, and (important!) interpret the result w.r.t. the _parental_ frame, then you've changed the co-ordinates but in the world the mesh is at the same location and shows also the same orientation!

Hence you have to distinguish for what purpose you apply a transformation: Do you want to re-locate and/or re-orientate the mesh, or do you want to change the reference system only? A local-to-global transformation is for changing the reference system only, nothing else.

FK means that co-ordinate frames are related together, and transforming a parental frame has an effect on the child frames, but changing a child frame has no effect on the parental frame. IK, on the other hand, means that changing a child frame _has_ an effect on the parental frame.

##### Share on other sites
Quote:
 So you got me curious on morpheme, but unfortunately the informations available there at naturalmotion.com seem somewhat concise. Have to try to find free tutorials or somewhat similar ...

There isn't much info about it on the internet. Unlike endorphin, there's no learning edition, so no tutorials are available.

Quote:
 Original post by haegarr: Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned this somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as the goal.

IK gives you a rigid fixed solution - by blending that result with animation you can improve the visual look - i.e. overlay small motions caused by breathing etc.

Quote:
 Original post by haegarr: However, IK is no problem because it is an "active" animation, and as such can be foreseen by the artist.

Unless the IK solution is running in response to game input - i.e. aiming at a moving point. The only way the animator can foresee that is to have tools that interact with the engine - more or less what morpheme is designed to do, really.

Quote:
 Original post by haegarr: Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.

yup - but we need more fidelity and control than that would allow.

Quote:
 The morpheme solution seems to allow an intertwining (hopefully the correct word) where I have a strict separation.

More or less what it was designed to do. Most of the stuff we do is AI-driven animation, utilising physics and other bits and bobs (you can probably find some euphoria demos around relating to GTA and Star Wars) - it just so happens we need a good way to combine all of those things, hence morpheme.

Quote:
 Original post by Kasya: Can I use FK? And can I create a ragdoll? If yes, I'll write my animation system. If no, how should I write it?

That's a pretty good starting point. I'd probably suggest trying two small, simple demos first though - the anim system, and a basic physics ragdoll. It's then a bit easier to see how to merge the two solutions into one all-singing, all-dancing system.

Quote:
 Original post by Kasya: What does that mean? If it's in local co-ordinates, how can I transform it? If I transform it, that means I'm making world co-ordinates. That means I first need to use FK and then transform.

A bit old, but it might give you a few ideas. A lot of the code can be improved fairly drastically, but I'll leave that as an exercise for the reader ;)

Quote:
 FK means that co-ordinate frames are related together, and transforming a parental frame has an effect on the child frames, but changing a child frame has no effect on the parental frame. IK, on the other hand, means that changing a child frame _has_ an effect on the parental frame.

Not quite. FK means you transform a series of parented transforms until you get an end pose. IK means you start with an end pose (the effector), and use CCD or the Jacobian to determine the positions/orientations of those bones. FK is just a standard hierarchical animation system.
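To make the CCD mention concrete, here is a generic textbook 2D CCD sketch (not morpheme code; the `Pt` struct and function names are mine): walk from the joint nearest the effector back to the root, each time rotating the remainder of the chain so the effector swings toward the target.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Rotate point p around pivot c by angle a (radians).
static Pt rotateAround(Pt p, Pt c, float a) {
    float s = std::sin(a), co = std::cos(a);
    float dx = p.x - c.x, dy = p.y - c.y;
    return { c.x + dx * co - dy * s, c.y + dx * s + dy * co };
}

// Plain CCD: joints[0] is the root, joints.back() is the end effector.
// Each sub-step aligns the pivot->effector direction with pivot->target.
void ccdSolve(std::vector<Pt>& joints, Pt target, int iterations) {
    for (int it = 0; it < iterations; ++it) {
        for (int j = (int)joints.size() - 2; j >= 0; --j) {
            Pt e = joints.back();
            float toEffector = std::atan2(e.y - joints[j].y, e.x - joints[j].x);
            float toTarget   = std::atan2(target.y - joints[j].y, target.x - joints[j].x);
            float delta = toTarget - toEffector;
            for (size_t k = j + 1; k < joints.size(); ++k)
                joints[k] = rotateAround(joints[k], joints[j], delta);
        }
    }
}
```

Because each sub-step is a pure rotation, bone lengths are preserved automatically - which is exactly why CCD works directly on the parented transforms an FK system already has.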

##### Share on other sites
Quote:
Original post by RobTheBloke
Quote:
 Original post by haegarr: Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned this somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as the goal.

IK gives you a rigid fixed solution - by blending that result with animation you can improve the visual look - i.e. overlay small motions caused by breathing etc.

Seconded - and since an IK-controlled animation is an Animation, it can be blended with others.

Quote:
Original post by RobTheBloke
Quote:
 Original post by haegarr: However, IK is no problem because it is an "active" animation, and as such can be foreseen by the artist.

Unless the IK solution is running in response to game input - i.e. aiming at a moving point. The only way the animator can forsee that is to have a tools that interact with the engine - more or less what morpheme is designed to do really.

That the artist cannot _precompute_ the animation is clear - otherwise one could use key-framing instead of IK-driven animation. However, he can foresee the need for IK-driven aiming, and hence integrate scripts/nodes/tracks or however that stuff is implemented, so the system has something available that can be used for blending at all.

Quote:
Original post by RobTheBloke
Quote:
 Original post by haegarr: Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.

yup - but we need more fidelity and control than that would allow.

I understand that. See the remaining text in the original post.
