
# Animation Blending


171 replies to this topic

### #1 Gasim  Members  -  Reputation: 207

Posted 15 August 2008 - 01:52 AM

Hello, what is animation blending and how do I implement it? I'm using a bone system. My keyframe and animation classes:

```cpp
struct Keyframe {
    float time;
    CQuat qRotate;
    CVector3 vPos;
};

class Animation {
public:
    Bone *root;
    float startTime, endTime;
    char *animation_name;
    void Animate(); // Not written yet. Will write it after my question.
};
```

Thanks,
Kasya

### #2 SiS-Shadowman  Members  -  Reputation: 359

Posted 15 August 2008 - 02:27 AM

Imagine a character that is running. Now you want this character to stop. After he stops, you'll probably play an idle animation or something. But if you just play the idle animation the moment he stops running, it will look jerky.

Blending is the part where you blend various (probably 2) animations together in order to achieve seamless transitions between animations.

If you want to blend 2 animations, you can do this:
- Before you stop the running animation, start the idle animation.
- You now use neither the running skeleton nor the idle skeleton to transform the vertices of the character. Instead you create a third skeleton.
- You now calculate a blend factor for each frame:

```cpp
// blending_time is how long you want the animations to be blended, probably 1 second.
// current_time is the time that has elapsed since you started the second animation (the idle, for example).
float blend_factor = current_time / blending_time;
```

Now you loop through each bone and do the following:

```cpp
// The running pose fades out and the idle pose fades in as blend_factor goes from 0 to 1.
final_skeleton[ bone ].pos = running_skeleton[ bone ].pos * (1 - blend_factor) + idle_skeleton[ bone ].pos * blend_factor;
// You do the same for the rotation of the bone.
```

Now you can use this temporary skeleton to transform the vertices of the character. You do this until blend_factor reaches 1; after that you can simply play the idle animation on its own.

This is a very basic and rudimentary blending system, and I'm sure there are a lot of other, better ones out there.
I still hope that this helps :)
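The steps above can be sketched as a small self-contained example. Note that the `Vec3` type, `lerp`, and `blend_pose` are made-up names for illustration, not code from any engine:

```cpp
#include <cstddef>

// Minimal stand-in for a bone position (hypothetical type).
struct Vec3 {
    float x, y, z;
};

// Linear interpolation: a * (1 - t) + b * t.
Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Blend two poses (arrays of bone positions) into a third, temporary pose.
// As blend_factor goes from 0 to 1, the running pose fades out and the idle pose fades in.
void blend_pose(const Vec3* running, const Vec3* idle, Vec3* out,
                std::size_t num_bones, float blend_factor) {
    for (std::size_t i = 0; i < num_bones; ++i)
        out[i] = lerp(running[i], idle[i], blend_factor);
}
```

The same loop would be repeated for the bone rotations (ideally with a normalized quaternion lerp rather than a plain component-wise one).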

### #3 Gasim  Members  -  Reputation: 207

Posted 15 August 2008 - 02:41 AM

Thanks for your post, but I have a question. You wrote:

Quote:
 You now neither use the running skeleton, nor the idle skeleton to transform the vertices of the character. Instead you create a third skeleton.

Do you mean a running animation, an idle animation, and a third animation?

```cpp
void Blend(Animation *a, Animation *b, float blendtime) {

    float blendfactor = blendtime / b.currentTime;

    Animation *final.bone->pos = a->bone.pos * blend_factor + b->bone.pos * (1-blend_factor);

}
```

Is that right?

### #4 haegarr  Crossbones+  -  Reputation: 6330

Posted 15 August 2008 - 02:57 AM

It is not skeletons/bones that are blended, but their transformations. You compute the joint transformations of the current pose of N animations in parallel, e.g. by key-frame interpolation. Then you blend the N transformations to yield the definitive pose. That pose is then used during skinning.

For the blending it plays no role which animations are blended. And you get "virtual" animations, i.e. animations that were not specifically made by an artist (and hence are not saved as instances).

### #5 Gasim  Members  -  Reputation: 207

Posted 15 August 2008 - 03:03 AM

Then that means I need to attach the mesh, do the skinning, and after all that animate a bone?

### #6 SiS-Shadowman  Members  -  Reputation: 359

Posted 15 August 2008 - 03:17 AM

We need to clarify what "running" means.

Suppose you've got a skeleton for every keyframe in an animation and you want to create a skeleton that fits in between those keyframes. You blend those 2 keyframe skeletons and store the result in a skeleton that is used to transform the vertices:

```cpp
class CAnimation
{
    UINT number_of_frames; // the number of keyframes in your animation
    float length;          // the length of the animation in seconds
    float time;            // the time elapsed from the beginning of the first frame
    CSkeleton m_Frames[ number_of_frames ];
};

void CAnimation::Blend( CSkeleton *pFinalSkeleton, CSkeleton *pLastSkeleton, CSkeleton *pNextSkeleton, float blend_factor )
{
    for( UINT i = 0; i < pLastSkeleton->num_bones(); i++ )
    {
        pFinalSkeleton[ i ].pos = pLastSkeleton[ i ].pos * (1-blend_factor) + pNextSkeleton[ i ].pos * blend_factor;
        // same for the quaternion
    }
}

void CAnimation::Animate( CSkeleton *pFinalSkeleton )
{
    // compute the blend factor and the index of the two frames being used
    Blend( pFinalSkeleton, &m_Frames[ last_frame ], &m_Frames[ next_frame ], blend_factor );
}
```

This is how an animation class will probably look. If your character is running all the time, your character class (or whatever you name it) is only going to animate this one animation. But if the character is about to stand still, you will animate both animations:

```cpp
class CCharacter
{
    CSkeleton m_FinalSkeleton;
    CSkeleton m_TempSkeleton1;
    CSkeleton m_TempSkeleton2;
    CAnimation m_Animations[ number_of_anims ];
};

void CCharacter::Animate()
{
    // In general, it will NOT work like this.
    // Your class will detect that you want to move on to another animation
    // and will just switch to this code if that is the case.
    if( running && wanting_to_stop )
    {
        m_Animations[ RUNNING ].Animate( &m_TempSkeleton1 );
        m_Animations[ IDLE ].Animate( &m_TempSkeleton2 );
        // Here we can blend those two skeletons like in the animation class
        Blend( &m_FinalSkeleton, m_TempSkeleton1, m_TempSkeleton2 );
    }
    else if( running && !wanting_to_stop )
    {
        m_Animations[ RUNNING ].Animate( &m_FinalSkeleton );
    }
    TransformMesh( &m_FinalSkeleton );
}
```

That's a pretty rough sketch of how it could be done. But keep in mind that this is just scratch code. You should really use std::string instead of char * and std::vector instead of a C-array. They really rock! :)

*edit* I used those two animations (running, idle) only for demonstration purposes, to make things clear. In the end, the character class won't care about which bone transformations are blended; it will just do so.

### #7 haegarr  Crossbones+  -  Reputation: 6330

Posted 15 August 2008 - 03:47 AM

Quote:
 Original post by Kasya
 Then that means i need to make attachning mesh, skinning and after all animating a bone?

Character animation (with blending) is roughly as follows:

You have e.g. a key-frame animation for each animated parameter of the skeleton. Animated parameters are the orientations and perhaps positions of what are called the bones. At a given moment of time, the virtual playhead of the animation is anywhere between 2 key-frames for each parameter. So you can compute the current orientations/positions by using interpolation.

What you've done so far is compute the pose on the basis of 1 animation. Now you can do the same for another animation, for a 3rd animation, and so on. So you get N sets of current parameters, where each set defines a pose from a particular animation. Then you blend the parameter sets so that parameter_a of set 1, parameter_a of set 2, parameter_a of set 3, ... yield parameter_a of the resulting set; the same for parameter_b and so on.

At this point you have a single parameter set, and hence there is no longer any difference in processing compared to using only a single animation: with the resulting parameter set and the weighting (with which vertices are attached to bones) you do the skinning.
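Blending N parameter sets as described above might be sketched like this for a single position channel. The `Vec3` type and the normalization by total weight are assumptions for illustration, not code from the thread:

```cpp
#include <cstddef>
#include <vector>

// Minimal stand-in for an animated position parameter (hypothetical type).
struct Vec3 { float x, y, z; };

// Blend the same parameter taken from N animation poses, each with a weight.
// Dividing by the total weight makes the result a true weighted average,
// so the blend behaves sensibly even if the weights don't sum to 1.
Vec3 blend_parameter(const std::vector<Vec3>& values,
                     const std::vector<float>& weights) {
    Vec3 sum{0.0f, 0.0f, 0.0f};
    float total = 0.0f;
    for (std::size_t i = 0; i < values.size(); ++i) {
        sum.x += values[i].x * weights[i];
        sum.y += values[i].y * weights[i];
        sum.z += values[i].z * weights[i];
        total += weights[i];
    }
    if (total > 0.0f) {  // avoid division by zero when all weights are 0
        sum.x /= total;
        sum.y /= total;
        sum.z /= total;
    }
    return sum;
}
```

The same weighted sum would be applied per parameter (orientation channels included, with re-normalization for quaternions) before skinning.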

### #8 Gasim  Members  -  Reputation: 207

Posted 15 August 2008 - 04:47 AM

Thanks very much. I won't have a problem understanding after that. :)

### #9 Gasim  Members  -  Reputation: 207

Posted 16 August 2008 - 02:32 AM

Hello. I started writing that Animation Blending:

```cpp
CAnimation::CAnimation() {
    m_Frames.clear();
    currentFrame = 0;
    strset(name, 0);
}

CAnimation::~CAnimation() {
    m_Frames.clear();
    currentFrame = 0;
    strset(name, 0);
}

void CAnimation::Blend(CSkeleton *pFinalSkel, CSkeleton *pLastSkel, CSkeleton *pNextSkel, float blendFactor) {
    for(unsigned int i = 0; i < pLastSkel->NumBones(); i++) {
        pFinalSkel->bones[i]->vPos = pLastSkel->bones[i]->vPos * (1.0f - blendFactor) + pNextSkel->bones[i]->vPos * blendFactor;
        pFinalSkel->bones[i]->qRotate = pLastSkel->bones[i]->qRotate * (1.0f - blendFactor) + pNextSkel->bones[i]->qRotate * blendFactor;
    }
}

void CAnimation::Update() {
    float endTime = time + length;
    float curTime = m_Timer.GetSeconds();

    if(curTime < endTime) {
        if(currentFrame < m_Frames.size() - 1) {
            currentFrame++;
        } else {
            currentFrame = 0;
        }
    }
}

void CAnimation::Animate(CSkeleton *pFinalSkel) {
    Blend(pFinalSkel, m_Frames.back(), m_Frames[currentFrame+1], 1.0f);
}

    m_Frames.push_back(skel);
}
```

Now it doesn't show anything. I think the problem is in the Animate function, but I can't write it. Should [nextFrame] be [currentFrame + 1] or something else? And please look at my Update function. Is that right?

Thanks,
Kasya

P.S. I have a CSkeleton::Draw() function; it isn't shown here. But if I don't call CAnimation::Animate(FinalBone), it shows the bones.

### #10 haegarr  Crossbones+  -  Reputation: 6330

Posted 16 August 2008 - 03:35 AM

Please wrap code snippets in [ source ] and [ /source ] tags (but without the spaces inside the brackets). It makes code much more readable, and hence helps us in helping you. Please edit your previous post accordingly, too.

The 1st problem I see is in the blending of qRotate: I assume your rotations are represented by quaternions. You need to re-normalize the blended quaternion, or else it no longer represents a pure rotation!

Another problem is that you interpolate between the previously computed skeleton and the current pose (if I interpret your code correctly). That is not the way one would normally go.

An animation of, say, a position is given by a sequence of key-frames. For each key-frame you know the (relative) moment of time, and the position value. E.g.
{ { vPos1, 0.0 }, { vPos2, 0.5 }, { vPos3, 1.2 }, ... }
denotes that at t=0s the position should be vPos1, at t=0.5s it should be vPos2, and so on. Here t is the moment of time related to the beginning of the animation playback.

Now, when rendering the next video frame, you have the current time t', and you can relate it to the playing animation by subtracting the moment T when the animation was started:
t := t' - T
With this t, you look into the sequence of key-frames. Let's say t == 0.3s. Then the virtual playhead is between the first and second key-frames. Fetch the moments of those two key-frames, in this case
leftT := 0.0
rightT := 0.5
and make the current time relative to this period:
b := ( t - leftT ) / ( rightT - leftT )
Use this factor to interpolate the two corresponding positions, i.e.
vPos := vPos1 * ( 1 - b ) + vPos2 * b
Notice that ( 1 - b ) is used for the "left" position.
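The lookup and interpolation just described can be sketched like this; the `Key` struct, the single-float channel, and the linear scan are simplifications for illustration:

```cpp
#include <cstddef>
#include <vector>

struct Key {
    float t;      // moment of this key-frame, relative to the animation's start
    float value;  // animated value (a single channel, for simplicity)
};

// Sample a key-frame track at time t, where t is already relative to the
// animation's start (t := t' - T as described above).
float sample(const std::vector<Key>& keys, float t) {
    if (t <= keys.front().t) return keys.front().value;  // clamp before the first key
    if (t >= keys.back().t)  return keys.back().value;   // clamp after the last key
    // Find the pair of key-frames surrounding t (a linear scan, for clarity).
    std::size_t i = 0;
    while (keys[i + 1].t < t) ++i;
    const Key& left  = keys[i];
    const Key& right = keys[i + 1];
    // Make t relative to this interval and interpolate.
    float b = (t - left.t) / (right.t - left.t);
    return left.value * (1.0f - b) + right.value * b;
}
```

A real track would store positions or quaternions per key and could use a binary search (or a cached last index) instead of the scan.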

The factor to be used when _blending_ is not (directly) time dependent. It depends on time only if something like fading is done. Moreover, since blending more than 2 animations should be possible, it is further not principally correct to blend with b and 1-b, or else weights will be re-weighted by subsequent blendings.

### #11 Gasim  Members  -  Reputation: 207

Posted 16 August 2008 - 04:36 AM

Is that blending a real interpolation? If yes, then I can use quaternion spherical linear interpolation.

### #12 haegarr  Crossbones+  -  Reputation: 6330

Posted 16 August 2008 - 04:59 AM

slerp has the advantage of constant angular velocity for equidistant angles, but has the disadvantage of non-commutativity. nlerp (i.e. normalized lerp) has non-constant angular velocity but is commutative; moreover, it is less expensive. IMHO, nlerp is the better alternative for blending. In particular, it allows a simpler implementation when several animations are blended, due to its commutativity.

EDIT: Yes, blending is mathematically an interpolation.
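A minimal nlerp sketch, assuming a bare-bones `Quat` stand-in type: a component-wise lerp followed by re-normalization, as recommended above, with the usual sign flip to take the shorter arc:

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };  // minimal quaternion stand-in (hypothetical)

// Normalized linear interpolation between two unit quaternions.
// Cheaper than slerp and commutative, at the cost of non-constant angular velocity.
Quat nlerp(const Quat& a, Quat b, float t) {
    // Take the shorter arc: flip b if the two quaternions point away from each other.
    float dot = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
    if (dot < 0.0f) { b.w = -b.w; b.x = -b.x; b.y = -b.y; b.z = -b.z; }

    Quat q{ a.w + (b.w - a.w) * t,
            a.x + (b.x - a.x) * t,
            a.y + (b.y - a.y) * t,
            a.z + (b.z - a.z) * t };

    // Re-normalize so the result still represents a pure rotation.
    float len = std::sqrt(q.w * q.w + q.x * q.x + q.y * q.y + q.z * q.z);
    q.w /= len; q.x /= len; q.y /= len; q.z /= len;
    return q;
}
```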

### #13 Gasim  Members  -  Reputation: 207

Posted 16 August 2008 - 06:05 AM

When you use

```cpp
void CAnimation::Blend(CSkeleton *pFinalSkel, CSkeleton *pLastSkel, CSkeleton *pNextSkel, float blendFactor);
```

inside

```cpp
void CAnimation::Animate(CSkeleton *pFinalSkel) {
    Blend(pFinalSkel, m_Frames[last] /* Last Frame */, m_Frames[next] /* Next Frame */, blend_factor);
}
```

does "Last Frame" mean: if there are 50 frames, the last is the 50th frame, or the last used frame?
Does "Next Frame" mean: if the current frame is 23, next is 24, or can next be any value?

I need to know this only inside one animation, not between animations.

Thanks,
Kasya

### #14 godmodder  Members  -  Reputation: 748

Posted 16 August 2008 - 06:30 AM

If you blend the animations as described earlier, wouldn't you get some average result of all the animations? I mean, suppose you wanted to make a character run and shoot like in Tomb Raider for example. If you followed the instructions above, you'd probably end up with some movement that isn't what you intended. Somehow, we need to give the arm movement a priority for the shooting animation and the legs movement a priority for the running animation, or else the arms and legs would always be "in between" the two animations. I think this could be done using some kind of weighting, but I have no idea how exactly.

Jeroen

### #15 haegarr  Crossbones+  -  Reputation: 6330

Posted 16 August 2008 - 06:49 AM

Quote:
 Original post by godmodder
 If you blend the animations as described earlier, wouldn't you get some average result of all the animations? I mean, suppose you wanted to make a character run and shoot like in Tomb Raider for example. If you followed the instructions above, you'd probably end up with some movement that isn't what you intended. Somehow, we need to give the arm movement a priority for the shooting animation and the legs movement a priority for the running animation, or else the arms and legs would always be "in between" the two animations. I think this could be done using some kind of weighting, but I have no idea how exactly.

First, you must differentiate between animation blending and animation layering. Blending actually "averages" animations; that's correct and intentional. Animation layering, on the other hand, sums up animations similarly, but only until a total weight of 1 is reached. So animations processed prior to others have priority. Animation layering has to be taken into account if prioritization among animations of the same joints is wanted.

Second, artists should be able to define animations only for selected parameters. In other words, animations should be defined by channels, each one for a particular position or orientation. So blending/layering 2 or more animations will not necessarily affect all or even the same joints.

However, the issue is heavy enough without considering layering at this time, and the OP asked for blending explicitly, so I haven't talked about layering. But if you're interested, I will tell you what I know, of course.

### #16 godmodder  Members  -  Reputation: 748

Posted 16 August 2008 - 06:58 AM

Quote:
 But, if you're interested in, I will tell you what I know, of course.

Thank you for giving me the correct term for this effect. I didn't know it was called "layering". I'd be very interested in your explanation, because you seem to have experience with it. I've just implemented a skeletal animation system myself, and I would love to add this kind of layering to it. This automatic combining of animations would save me a lot of work.

Jeroen

### #17 haegarr  Crossbones+  -  Reputation: 6330

Posted 16 August 2008 - 07:08 AM

Quote:
 Original post by Kasya
 Last Frame means: if there are 50 frames Last is 50th Frame or Last used Frame
 Next Frames means: if current frame is 23, next is 24 or next can be any value
 I need to know only inside one animation not with other animations

Sorry, but I don't understand. Perhaps you have an animation description that is beyond my imagination. The following is simply based on assumptions I made about your animation system.

It seems that you interpolate between the most recently computed pose and an end pose. That _can_ be correct, if you compute the blending factor accordingly. However, I have to advise against it. Even if you make a continuing correction of the blending factor, you will suffer from accumulating errors due to numerical inaccuracies. If you have no compelling reason not to, you should always interpolate between the surrounding key-frames.

The other thing is that using animations that actually depend on _frames_ will make your results dependent on the performance of the computer. This is bad practice. Make time-controlled animations instead.
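A time-controlled update might look like this sketch (the names are made up; the point is that the playhead depends on elapsed seconds, not on how many frames have been rendered):

```cpp
// Advance an animation's playhead by wall-clock time and return its new
// position within the clip, wrapping around for a looping animation.
// The result is the same whether the game renders at 30 fps or 300 fps.
float advance_playhead(float playhead, float dt_seconds, float clip_length) {
    playhead += dt_seconds;
    while (playhead >= clip_length)  // loop the clip
        playhead -= clip_length;
    return playhead;
}
```

The returned playhead is exactly the time t that the key-frame lookup described earlier consumes.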

### #18 Gasim  Members  -  Reputation: 207

Posted 16 August 2008 - 07:19 AM

Thanks! I'll make it time-based. Animation layering is a good technique. Can I use animation layering together with animation blending? And could you tell me more about animation layering, please? Or could you point me to a resource about it? I googled it but found no good results.

Thanks
Kasya

### #19 Gasim  Members  -  Reputation: 207

Posted 16 August 2008 - 07:37 AM

Everyone have a good night. See you tomorrow! Bye!

### #20 haegarr  Crossbones+  -  Reputation: 6330

Posted 16 August 2008 - 08:38 AM

Quote:
 Original post by godmodder
 Thank you for giving me the correct term for this effect. I didn't know it was called "layering".

Presumably there are also other terms out there for this kind of thing.

First I'll give a coarse overview, since it seems that e.g. Kasya's, and perhaps your, animation system works differently from mine.

The runtime system of my engine measures time in realtime seconds from the beginning of the application. The moment of time when an animation is started is used as its local temporal origin, so that the time runs from 0 (but still in realtime seconds) on from the viewpoint of the animation. The current moment in time is stored as "the tick" whenever a new video frame processing is started. All time related computations, being it animations or physics based, use this tick as reference.

An animation is a time based modifier of some elemental data. For this purpose it is integrated into the timeline. Being in the timeline guarantees that the animation is processed whenever the current tick (say the virtual playhead) is over it. One can imagine that each modified elemental data has a track in the timeline. I.e. a position of a joint or the orientation of a joint is such an elemental data, although it plays no role: For the animation track there is just a position vector or a quaternion ready for modification; nothing more need to be known by it.

An animation track may be implemented by a key-framed, vector typed interpolation. Notice please that, due to the timing mechanism described above, "key-framed" should not be understood as being frame based but time based, of course. Like written in one of the above posts, the current tick is then used to determine the 2 surrounding key-frames, the 2 belonging data values, and the interpolation factor. With these values the time interpolated data value is computed and handed over to the data storage (the position of a joint in this case).

We have 2 important things already, although they are not yet explicitly mentioned: First, there is no requirement that _every_ parameter of all joints has an animation track to modify it! That means an animation for scratching the head has tracks for the right arm, shoulder, and perhaps the head, but none of its tracks influences the hips or legs. (This is one of the issues I've hinted at already in a post above.) Second, tracks can temporarily overlap each other. This is obviously necessary for tracks that modify distinct parameters, but I also allow several tracks to modify the same parameter at the same tick. That is the basis for animation blending and layering. However, a _single_ animation is defined to have at most 1 track for a particular parameter. But who says we can have only 1 animation targeting the same skeleton?

To blend 2 or more animations, the modification doesn't happen by simply writing the current (time interpolated) value to the joint's position; instead it is weighted by a factor and summed up. (A little excursus: to make things even more interesting, this factor is a float parameter of the animation, and a float modifier can modify it. You see what I mean? Yes, another animation can be used to control it if one wants to do so. So it is even possible to fade animations in and out.) To make this work, the parameters (at the joints) are preset with identity values (i.e. null for a position). At some point we need to normalize the summed-up parameters. To control this, the animations so far are grouped into an animation group. In other words, all animations that are in the same group work by blending their values.

Now I come to the layering. After the previous animation group has been processed, the total weight that has been accumulated during its blendings may or may not be at 100%. If it is less than 100%, then there is some more "place" for another animation (group). Hence, if there actually is another animation group targeting the same skeleton, then it is processed as already described. The formerly computed joint parameters and weights must not be overwritten, so you can copy them or use the little index trick. I.e. the joints have 2 sets of parameters: one where the blending happens, and one where the values (from the blending) are "layered". The latter set is obviously the one that is used for skinning at the end. The computation of the latter set from the former one (say the "layering") is also done when the "animation group end" is processed, so there is no extra synchronization point to be implemented. If the summed weight of the current blended animation group doesn't fit into the place the layering has left (i.e. the current total weight plus the current sum of blended weights exceeds 100%), then the current animation group is weighted accordingly to fill up the 100% but not exceed it.

After all animation groups are processed, it may happen that some parameters are not touched at all, so that the total weight is still zero. In such a case the "rest pose" is asked to deliver the parameter value.

Notice please that the processing order of animations plays a role not for blending, but for layering.
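The blend-then-layer bookkeeping described above might be sketched like this for a single float channel. The `Channel` struct and `layer` function are hypothetical names; a real system would carry this state per joint parameter, with vector and quaternion values:

```cpp
// Accumulated layering state for one animated parameter across animation groups.
struct Channel {
    float value  = 0.0f;  // layered result so far
    float weight = 0.0f;  // total weight already consumed (never exceeds 1)
};

// Layer one blended animation group's result onto the channel.
// If the group's weight doesn't fit into the remaining "place",
// it is clamped so the total never exceeds 100%.
void layer(Channel& ch, float group_value, float group_weight) {
    float room = 1.0f - ch.weight;
    float w = (group_weight > room) ? room : group_weight;
    ch.value  += group_value * w;
    ch.weight += w;
}
```

If `ch.weight` is still zero after all groups are processed, the channel would fall back to the rest pose, as described above.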

The above is the soul of my skeleton animation system. There are more aspects than those, e.g. prototypes and instances of skeletons and animations, and the method by which animations are bound to the modified parameters. I can use the same animation prototype (but another instance, of course) to control another skeleton as long as it fulfills the binding requirements. And last but not least, symmetry: I can use the same animation to control either the left arm or the right arm, for example, by simply using a symmetry condition in the binding phase. Things like this make my artist happy and reduce the count of animations.

Huh, I've written a novel now ;) Time to end the work for today.
