Animation Blending

Kasya    207
Hello. What is animation blending, and how do I implement it? I'm using a bone system.

My Keyframe class:

struct Keyframe {
    float time;
    CQuat qRotate;
    CVector3 vPos;
};

My Animation class:

class Animation {
public:
    Bone *root;
    float startTime, endTime;
    char *animation_name;
    void Animate(); // Not written yet; will write it after my question.
};

Thanks,
Kasya

SiS-Shadowman    359
Imagine a character that is running. Now you want this character to stop. After he stops, you'll probably play an idle animation or something. But if you just play the idle animation the moment he stops running, it will look jerky.

Blending is the part where you blend various (probably 2) animations together in order to achieve seamless transitions between several animations.

If you want to blend 2 animations, you can do this:
- Before you stop the running animation, start the idle animation.
- You now use neither the running skeleton nor the idle skeleton to transform the vertices of the character. Instead you create a third skeleton.
- You now calculate a blend factor each frame:

float blend_factor = current_time / blending_time;

// blending_time is how long you want the animations to be blended, probably 1 second
// current_time is the time that has elapsed since you started the second animation (the idle, for example)




Now you loop through each bone and do the following:

final_skeleton[ bone ].pos = running_skeleton[ bone ].pos * (1 - blend_factor) + idle_skeleton[ bone ].pos * blend_factor;
// You do the same for the rotation of the bone.



Now you can use this temporary skeleton to transform the vertices of the character. You do this until blend_factor reaches 1; after that you can simply play the idle animation on its own.
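To make that concrete, here is a minimal C++ sketch of such a per-frame transition blend. It only illustrates the idea above: the BonePose struct, the CVector3 stand-in, and BlendTransition are hypothetical names, not code from this thread.

#include <cstddef>
#include <vector>

// Minimal stand-in for the thread's CVector3 (assumption: the real class
// supports scaling and addition like this).
struct CVector3 {
    float x = 0, y = 0, z = 0;
    CVector3 operator*(float s) const { return {x * s, y * s, z * s}; }
    CVector3 operator+(const CVector3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};

struct BonePose { CVector3 vPos; /* rotation is blended the same way, then renormalized */ };

// Called once per frame while the transition is active.
// elapsed: seconds since the idle animation was started;
// blendTime: how long the transition should take, e.g. 1 second.
void BlendTransition(std::vector<BonePose>& finalSkel,
                     const std::vector<BonePose>& runningSkel,
                     const std::vector<BonePose>& idleSkel,
                     float elapsed, float blendTime)
{
    float t = elapsed / blendTime;  // 0 at the start of the transition
    if (t > 1.0f) t = 1.0f;         // clamp: transition finished, pure idle pose

    for (std::size_t i = 0; i < finalSkel.size(); ++i)
        finalSkel[i].vPos = runningSkel[i].vPos * (1.0f - t) + idleSkel[i].vPos * t;
}

Once t reaches 1 you can drop the blend entirely and just play the idle animation.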


This is a very basic and rudimentary blending system, and I'm sure there are a lot of other, better ones out there.
I still hope that this helps :)

Kasya    207
Thanks for your post. But I have a question. You wrote:

Quote:
You now use neither the running skeleton nor the idle skeleton to transform the vertices of the character. Instead you create a third skeleton.

Do you mean a running animation, an idle animation, and a third animation?


void Blend(Animation *a, Animation *b, float blendtime) {

    float blendfactor = blendtime / b->currentTime;

    final->bone->pos = a->bone->pos * blendfactor + b->bone->pos * (1 - blendfactor);

}

Is that right?

haegarr    7372
It is not skeletons/bones that are blended, but their transformations. You compute the joint transformations of the current pose of N animations in parallel, e.g. by key-frame interpolation. Then you blend the N transformations to yield the definite pose. That pose is then used during skinning.

For the blending it plays no role which animations are blended. And you get "virtual" animations, i.e. animations that were not specially made by an artist (and hence are not saved as instances).

SiS-Shadowman    359
We need to clarify what running means.

Suppose you've got a skeleton for every keyframe in an animation and you want to create a skeleton that fits in between those keyframes: you blend those 2 keyframe skeletons and store the result in a skeleton that is used to transform the vertices:


class CAnimation
{
    UINT number_of_frames; // The number of keyframes in your animation
    float length;          // The length of the animation in seconds
    float time;            // The time elapsed since the beginning of the first frame
    CSkeleton m_Frames[ number_of_frames ]; // sketch only; use std::vector in real code

    void Blend( CSkeleton *pFinalSkeleton, CSkeleton *pLastSkeleton, CSkeleton *pNextSkeleton, float blend_factor );
    void Animate( CSkeleton *pFinalSkeleton );
};

void CAnimation::Blend( CSkeleton *pFinalSkeleton, CSkeleton *pLastSkeleton, CSkeleton *pNextSkeleton, float blend_factor )
{
    for( UINT i = 0; i < pLastSkeleton->num_bones(); i++ )
    {
        pFinalSkeleton[ i ].pos = pLastSkeleton[ i ].pos * (1 - blend_factor) + pNextSkeleton[ i ].pos * blend_factor;
        // same for the quaternion
    }
}

void CAnimation::Animate( CSkeleton *pFinalSkeleton )
{
    // compute the blend factor and the index of the two frames being used
    Blend( pFinalSkeleton, &m_Frames[ last_frame ], &m_Frames[ next_frame ], blend_factor );
}




This is roughly how an animation class will probably look. If your character is running all the time, your character class (or whatever you name it) will only animate that one animation. But if the character is about to stand still, you animate both animations:


class CCharacter
{
    CSkeleton m_FinalSkeleton;

    CSkeleton m_TempSkeleton1;
    CSkeleton m_TempSkeleton2;

    CAnimation m_Animations[ number_of_anims ];
};

void CCharacter::Animate()
{
    // In general, it will NOT work like this.
    // Your class will detect that you want to move on to another animation
    // and will just switch to this code if that is the case.
    if( running && wanting_to_stop )
    {
        m_Animations[ RUNNING ].Animate( &m_TempSkeleton1 );
        m_Animations[ IDLE ].Animate( &m_TempSkeleton2 );

        // Here we can blend those two skeletons like in the animation class
        Blend( &m_FinalSkeleton, &m_TempSkeleton1, &m_TempSkeleton2, blend_factor );
    } else if( running && !wanting_to_stop ) {
        m_Animations[ RUNNING ].Animate( &m_FinalSkeleton );
    }

    TransformMesh( &m_FinalSkeleton );
}




That's a pretty rough sketch of how it could be done. But keep in mind that this is just some scratched code. You should really use std::string instead of char * and std::vector instead of a C-array. They really rock! :)

*edit* I used those two animations (running, idle) only for demonstration purposes, to make things clear. In the end, the character class won't care about which bone transformations are blended; it will just do so.

haegarr    7372
Quote:
Original post by Kasya
Then does that mean I need to attach the mesh, do the skinning, and after all that animate the bones?

Character animation (with blending) works roughly as follows:

You have, e.g., a key-frame animation for each animated parameter of the skeleton. Animated parameters are the orientations and perhaps the positions of what are called the bones. For a given moment of time, the virtual playhead of the animation is somewhere between 2 key-frames for each parameter. So you can compute the current orientations/positions by interpolation.

What you've done so far is compute the pose on the basis of 1 animation. Now you can do the same for another animation, for a 3rd animation, and so on. So you get N sets of current parameters, where each set defines a pose from a particular animation. Then you blend the parameter sets so that parameter_a of set 1, parameter_a of set 2, parameter_a of set 3, ... yield parameter_a of the resulting set; the same for parameter_b and so on.

At this point you have a single parameter set, and hence there is no longer any difference in processing compared with using only a single animation: with the resulting parameter set and the weighting (with which vertices are attached to bones) you do the skinning.
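As a minimal sketch of that N-way blend, here is how the position parameters of N sampled poses could be averaged. Vec3, PoseSample, and BlendPoses are hypothetical names, not from the posts above; orientations would be accumulated the same way and then renormalized.

#include <cstddef>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

// One sampled parameter set per animation, plus that animation's blend weight.
struct PoseSample {
    std::vector<Vec3> jointPositions;  // one entry per joint
    float weight;
};

// Blend N sampled poses into one: a weighted average per joint parameter.
std::vector<Vec3> BlendPoses(const std::vector<PoseSample>& samples)
{
    // Assumes at least one sample and a positive total weight.
    std::vector<Vec3> result(samples.front().jointPositions.size());
    float totalWeight = 0.0f;

    for (const PoseSample& s : samples) {
        for (std::size_t j = 0; j < result.size(); ++j) {
            result[j].x += s.jointPositions[j].x * s.weight;
            result[j].y += s.jointPositions[j].y * s.weight;
            result[j].z += s.jointPositions[j].z * s.weight;
        }
        totalWeight += s.weight;
    }
    for (Vec3& v : result) {  // normalize so the weights need not sum to 1
        v.x /= totalWeight; v.y /= totalWeight; v.z /= totalWeight;
    }
    return result;
}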

Kasya    207
Hello. I started writing the animation blending:

CAnimation::CAnimation() {
    m_Frames.clear();
    currentFrame = 0;
    strset(name, 0);
}

CAnimation::~CAnimation() {
    m_Frames.clear();
    currentFrame = 0;
    strset(name, 0);
}

void CAnimation::Blend(CSkeleton *pFinalSkel, CSkeleton *pLastSkel, CSkeleton *pNextSkel, float blendFactor) {
    for(unsigned int i = 0; i < pLastSkel->NumBones(); i++) {
        pFinalSkel->bones[i]->vPos = pLastSkel->bones[i]->vPos * (1.0f - blendFactor) + pNextSkel->bones[i]->vPos * blendFactor;
        pFinalSkel->bones[i]->qRotate = pLastSkel->bones[i]->qRotate * (1.0f - blendFactor) + pNextSkel->bones[i]->qRotate * blendFactor;
    }
}

void CAnimation::Update() {
    float endTime = time + length;
    float curTime = m_Timer.GetSeconds();

    if(curTime < endTime) {
        if(currentFrame < m_Frames.size() - 1) {
            currentFrame++;
        } else {
            currentFrame = 0;
        }
    }
}

void CAnimation::Animate(CSkeleton *pFinalSkel) {
    Blend(pFinalSkel, m_Frames.back(), m_Frames[currentFrame+1], 1.0f);
}

void CAnimation::AddFrame(CSkeleton *skel) {
    m_Frames.push_back(skel);
}

Now it doesn't show anything. I think the problem is in the Animate function, but I can't figure out how to write it. [nextFrame] must be [currentFrame + 1] or something else. And please look at my Update function. Is that right?

Thanks,
Kasya

P.S. I have a CSkeleton::Draw() function; I'm not showing it here. But if I don't call CAnimation::Animate(pFinalSkel), it shows the bones.

haegarr    7372
Please wrap code snippets in [ source ] and [ /source ] tags (but without the spaces inside the brackets). It makes code much more readable, and hence helps us in helping you. Please edit your previous post accordingly, too.

The 1st problem I see is in the blending of qRotate: I assume your rotations are represented by quaternions. You need to re-normalize the blended quaternion, or else it will no longer represent a pure rotation!

Another problem is that you interpolate between the previously computed skeleton and the current pose (if I interpret your code correctly). That is not the way one would normally go.

An animation of, say, a position is given by a sequence of key-frames. For each key-frame you know the (relative) moment of time and the position value. E.g.
{ { vPos1, 0.0 }, { vPos2, 0.5 }, { vPos3, 1.2 }, ... }
denotes that at t=0s the position should be vPos1, at t=0.5s it should be vPos2, and so on. Here t is the moment of time relative to the beginning of the animation playback.

Now, when rendering the next video frame, you have the current time t', and you can relate it to the playing animation by subtracting the moment T when the animation was started:
t := t' - T
With this t, you look into the sequence of key-frames. Let's say t==0.3s. Then the virtual playhead is between the first and second key-frames. Fetch the moments of both key-frames, in this case
leftT := 0.0
rightT := 0.5
and make the current time relative to this period:
b := ( t - leftT ) / ( rightT - leftT )
Use this factor to interpolate between the two corresponding positions, i.e.
vPos := vPos1 * ( 1 - b ) + vPos2 * b
Notice that ( 1 - b ) is used for the "left" position.
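In code, that lookup and interpolation could look like the following sketch for a single scalar channel (Key and Sample are hypothetical names; a position would apply the same lerp per component):

#include <cstddef>
#include <vector>

struct Key { float t; float value; };  // keys sorted by ascending t

// Sample a key-framed channel at animation-relative time t, exactly as
// described above: find the surrounding keys, compute b, and lerp.
float Sample(const std::vector<Key>& keys, float t)
{
    if (t <= keys.front().t) return keys.front().value;  // before the first key
    if (t >= keys.back().t)  return keys.back().value;   // after the last key

    std::size_t i = 1;
    while (keys[i].t < t) ++i;  // now keys[i-1].t <= t < keys[i].t

    float leftT  = keys[i - 1].t;
    float rightT = keys[i].t;
    float b = (t - leftT) / (rightT - leftT);
    return keys[i - 1].value * (1.0f - b) + keys[i].value * b;  // (1 - b) on the "left"
}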

The factor to be used when _blending_ is not (directly) time dependent. It depends on time only if something like fading is done. Moreover, since blending more than 2 animations should be possible, it is furthermore not principally correct to blend with b and 1-b, or else the weights will be re-weighted by subsequent blendings.

haegarr    7372
slerp has the advantage of constant angular velocity for equidistant angles, but has the disadvantage of non-commutativity. nlerp (i.e. normalized lerp) has non-constant angular velocity but is commutative; moreover, it is less expensive. IMHO, nlerp is the better alternative for blending. In particular, it allows a simpler way of working when several animations are blended, due to its commutativity.

EDIT: Yes, blending is mathematically an interpolation.
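For reference, a minimal nlerp sketch (the Quat struct and Nlerp are stand-ins, not from this thread; the shorter-arc flip via the dot product is a common addition):

#include <cmath>

struct Quat { float x, y, z, w; };

// nlerp: component-wise lerp followed by renormalization.
// Cheap and commutative, at the cost of non-constant angular velocity.
Quat Nlerp(Quat a, Quat b, float t)
{
    // Take the shorter arc: flip one input if the quaternions point "away"
    // from each other (q and -q represent the same rotation).
    float dot = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
    float sign = (dot < 0.0f) ? -1.0f : 1.0f;

    Quat q;
    q.x = a.x * (1.0f - t) + b.x * sign * t;
    q.y = a.y * (1.0f - t) + b.y * sign * t;
    q.z = a.z * (1.0f - t) + b.z * sign * t;
    q.w = a.w * (1.0f - t) + b.w * sign * t;

    float len = std::sqrt(q.x * q.x + q.y * q.y + q.z * q.z + q.w * q.w);
    q.x /= len; q.y /= len; q.z /= len; q.w /= len;
    return q;
}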

Kasya    207
When you use

void CAnimation::Blend(CSkeleton *pFinalSkel, CSkeleton *pLastSkel, CSkeleton *pNextSkel, float blendFactor);

inside

void CAnimation::Animate(CSkeleton *pFinalSkel) {
    Blend(pFinalSkel, m_Frames[last] /* Last Frame */, m_Frames[next] /* Next Frame */, blend_factor);
}

does Last Frame mean: if there are 50 frames, Last is the 50th frame, or the last used frame?
And does Next Frame mean: if the current frame is 23, next is 24, or can next be any value?

I need to know this only within one animation, not across animations.

Thanks,
Kasya

godmodder    828
If you blend the animations as described earlier, wouldn't you get some average result of all the animations? I mean, suppose you wanted to make a character run and shoot like in Tomb Raider for example. If you followed the instructions above, you'd probably end up with some movement that isn't what you intended. Somehow, we need to give the arm movement a priority for the shooting animation and the legs movement a priority for the running animation, or else the arms and legs would always be "in between" the two animations. I think this could be done using some kind of weighting, but I have no idea how exactly.

Jeroen

haegarr    7372
Quote:
Original post by godmodder
If you blend the animations as described earlier, wouldn't you get some average result of all the animations? I mean, suppose you wanted to make a character run and shoot like in Tomb Raider for example. If you followed the instructions above, you'd probably end up with some movement that isn't what you intended. Somehow, we need to give the arm movement a priority for the shooting animation and the legs movement a priority for the running animation, or else the arms and legs would always be "in between" the two animations. I think this could be done using some kind of weighting, but I have no idea how exactly.

First, you must differentiate between animation blending and animation layering. Blending actually "averages" animations; that's correct and intentional. Animation layering, on the other hand, sums up animations similarly, but only until a total weight of 1 is reached. So animations processed prior to others have priority. Animation layering has to be taken into account if prioritization among animations of the same joints is wanted.

Second, artists should be enabled to define animations only for selected parameters. In other words, animations should be defined by channels, each one for a particular position or orientation. So blending/layering 2 or more animations will not necessarily affect all or even the same joints.

However, the issue is heavy enough without considering layering at this time, and the OP has asked for blending explicitly, so I haven't talked about layering. But if you're interested in it, I will tell you what I know, of course.

godmodder    828
Quote:
But if you're interested in it, I will tell you what I know, of course.


Thank you for giving me the correct term for this effect. I didn't know it was called "layering". I'd be very interested in your explanation, because you seem to have experience with it. I've just implemented a skeletal animation system myself, and I would love to add this kind of layering to it. This automatic combining of animations would save me a lot of work.

Jeroen

haegarr    7372
Quote:
Original post by Kasya
does Last Frame mean: if there are 50 frames, Last is the 50th frame, or the last used frame?
And does Next Frame mean: if the current frame is 23, next is 24, or can next be any value?

I need to know this only within one animation, not across animations.

Sorry, but I don't understand. Perhaps you have an animation description that is beyond my imagination. The following is simply based on assumptions I made about your animation system.

It seems that you interpolate between the most recently computed pose and an end pose. That _can_ be correct, if you compute the blending factor accordingly. However, I have to advise against it. Even if you make a continuing correction of the blending factor, you will suffer from accumulating errors due to numerical inaccuracies. If you have no compelling reason to do otherwise, you should always interpolate between the surrounding key-frames.

The other thing is that using animations that actually depend on _frames_ will make your results dependent on the performance of the computer. This is bad practice. Make time-controlled animations instead.

Kasya    207
Thanks! I'll make it time-based. Animation layering is a good technique. Can I use animation layering together with animation blending? And could you tell me more about animation layering, please, or give me a resource about it? I googled it but found no good results.

Thanks
Kasya

haegarr    7372
Quote:
Original post by godmodder
Thank you for giving me the correct term for this effect. I didn't know it was called "layering".

Presumably there are also other terms out there for that kind of thing.

First I'll give a coarse overview, since it seems that e.g. Kasya's and perhaps your animation systems work differently from mine.

The runtime system of my engine measures time in realtime seconds from the beginning of the application. The moment of time when an animation is started is used as its local temporal origin, so that, from the viewpoint of the animation, time runs from 0 (but still in realtime seconds). The current moment in time is stored as "the tick" whenever the processing of a new video frame is started. All time-related computations, be they animation or physics based, use this tick as reference.

An animation is a time-based modifier of some elemental data. For this purpose it is integrated into the timeline. Being in the timeline guarantees that the animation is processed whenever the current tick (say the virtual playhead) is over it. One can imagine that each modified elemental datum has a track in the timeline. I.e. the position of a joint or the orientation of a joint is such an elemental datum, although that plays no role: for the animation track there is just a position vector or a quaternion ready for modification; nothing more needs to be known by it.

An animation track may be implemented by a key-framed, vector-typed interpolation. Notice please that, due to the timing mechanism described above, "key-framed" should not be understood as frame based but as time based, of course. As written in one of the above posts, the current tick is then used to determine the 2 surrounding key-frames, the 2 corresponding data values, and the interpolation factor. With these values the time-interpolated data value is computed and handed over to the data storage (the position of a joint in this case).

We have 2 important things already, although they have not been explicitly mentioned yet: First, there is no requirement that _every_ parameter of all joints has an animation track to modify it! That means an animation for scratching the head has tracks for the right arm, the shoulder and perhaps the head, but none of its tracks influences the hips or legs. (This is one of the issues I've hinted at already in a post above.) Second, tracks can temporally overlap each other. This is obviously necessary for tracks that modify distinct parameters, but I also allow several tracks to modify the same parameter at the same tick. That is the basis for animation blending and layering. However, a _single_ animation is defined to have at most 1 track for a particular parameter. But who says we can have only 1 animation targeting the same skeleton?

To blend 2 or more animations, the modification doesn't happen by simply writing the current (time-interpolated) value to the joint's position; instead the value is weighted by a factor and summed up. (A little excursus: To make things even more interesting, this factor is a float parameter of the animation, and a float modifier can modify it. You see what I mean? Yes, another animation can be used to control it if one wants to do so. So it is even possible to fade animations in and out.) To let this work, the parameters (at the joints) are preset with identity values (i.e. null for a position). At some point we need to normalize the summed-up parameters. To control this, the animations so far are grouped into an animation group. In other words, all animations that are in the same group work by blending their values.

Now I come to the layering. After the previous animation group has been processed, the total weight that has been accumulated during its blendings may or may not be at 100%. If it is less than 100%, then there is some more "place" for another animation (group). Hence, if there actually is another animation group targeting the same skeleton, it is processed as already described. The formerly computed joint parameters and weights must not be overwritten, so you can copy them or use the little index trick. I.e. the joints have 2 sets of parameters: one where the blending happens, and one where the values (from the blending) are "layered". The latter set is obviously the one that is used for skinning at the end. The computation of the latter set from the former one (say the "layering") is also done when the "animation group end" is processed, so there is no extra synchronization point to be implemented. If the summed weight of the current blended animation group doesn't fit into the place the layering has left (i.e. the current total weight plus the current sum of blended weights exceeds 100%), then the current animation group is weighted accordingly to fill up the 100% but not exceed it.

After all animation groups are processed, it may happen that some parameters have not been touched at all, so that their total weight is still zero. In such a case the "rest pose" is asked to deliver the parameter value.

Notice please that the processing order of animations plays a role not for blending, but for layering.
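A rough sketch of that two-set bookkeeping for one joint parameter, as I read the description above (the names and exact structure are my guesses, not haegarr's actual engine code; a real version would hold a vector and a quaternion instead of one float):

#include <algorithm>

// Two accumulators per joint parameter: one for blending inside the current
// animation group, one for layering across groups (the set used for skinning).
struct ParamAccum {
    float blendValue = 0, blendWeight = 0;  // current group
    float layerValue = 0, layerWeight = 0;  // across groups
};

// Inside a group: weighted sum ("blending").
void AccumulateBlend(ParamAccum& p, float value, float weight)
{
    p.blendValue  += value * weight;
    p.blendWeight += weight;
}

// At "end of animation group": normalize the group's result, then layer it,
// clamped so the total layered weight never exceeds 100%.
void EndGroup(ParamAccum& p, float groupWeight)
{
    if (p.blendWeight > 0.0f) {
        float value = p.blendValue / p.blendWeight;             // normalize
        float w = std::min(groupWeight, 1.0f - p.layerWeight);  // remaining "place"
        p.layerValue  += value * w;
        p.layerWeight += w;
    }
    p.blendValue = p.blendWeight = 0.0f;  // ready for the next group
}

// After all groups: if layerWeight is still zero, fall back to the rest pose.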


The above is the soul of my skeleton animation system. There are more aspects than those, e.g. prototypes and instances of skeletons and animations, and the method by which animations are bound to the modified parameters. I can use the same animation prototype (but another instance, of course) to control another skeleton as long as it fulfills the binding requirements. And last but not least, symmetry: I can use the same animation to control either the left arm or the right arm, for example, by simply using a symmetry condition in the binding phase. Things like this make my artist happy and reduce the count of animations.


Huh, I've written a novel now ;) Time to end the work for today.

haegarr    7372
An addendum to the previous post: One has to distinguish between "relative" weighting during blending and "absolute" weighting during layering. I.e. in general the sums of weights during blending are allowed to not sum up to 1, since a normalization step is done while processing the "end of animation group". But the sum of weights as used for layering must not be normalized, of course.

To make the life of the artist a bit easier, one can use distinct weights for blending and layering. E.g. the blending weights are associated with the animations, while the layering weights are associated with the animation groups.

Kasya    207

void CAnimation::Blend(CSkeleton *pFinalSkel, CSkeleton *pLastSkel, CSkeleton *pNextSkel, float blendFactor) {

    for(unsigned int i = 0; i < pLastSkel->NumBones(); i++) {
        pFinalSkel->bones[i]->vPos = pLastSkel->bones[i]->vPos * (1.0f - blendFactor) + pNextSkel->bones[i]->vPos * blendFactor;
        pFinalSkel->bones[i]->qRotate.Nlerp(pLastSkel->bones[i]->qRotate, pNextSkel->bones[i]->qRotate, blendFactor);
    }

}

bool loop = true;

void CAnimation::Animate(CSkeleton *pFinalSkel) {

    unsigned int uiFrame = 0;
    CQuat qTmp;
    CVector3 vTmp;

    static float lastTime = startTime;
    float time = m_Timer.GetSeconds();

    time += lastTime;
    lastTime = time;

    if(time > (startTime + length)) {
        if(loop) {
            m_Timer.Init();
            time = startTime;
            lastTime = startTime;
            uiFrame = 0;
        } else {
            time = startTime + length;
        }
    }

    while(uiFrame < m_Frames.size() && m_Frames[uiFrame]->time < time) uiFrame++;

    if(uiFrame == 0) {

        for(unsigned int i = 0; i < m_Frames[0]->NumBones()+1; i++) {
            pFinalSkel->bones[i]->qRotate = pFinalSkel->bones[i]->qRotate * m_Frames[0]->bones[i]->qRotate;
            pFinalSkel->bones[i]->vPos += m_Frames[0]->bones[i]->vPos;
        }

    } else if(uiFrame == m_Frames.size()) {

        for(unsigned int i = 0; i < m_Frames[uiFrame-1]->NumBones()+1; i++) {
            pFinalSkel->bones[i]->qRotate = pFinalSkel->bones[i]->qRotate * m_Frames[uiFrame-1]->bones[i]->qRotate;
            pFinalSkel->bones[i]->vPos += m_Frames[uiFrame-1]->bones[i]->vPos;
        }

    } else {

        CSkeleton *curFrame = m_Frames[uiFrame];
        CSkeleton *prevFrame = m_Frames[uiFrame-1];

        float delta = curFrame->time - prevFrame->time;
        float blend_factor = (time - prevFrame->time) / delta;

        CAnimation::Blend(pFinalSkel, prevFrame, curFrame, blend_factor);

    }

}



That's my new animation code. It's working as needed, but sometimes it slows down. I think it's because of the CPU. Am I right?

Thanks
Kasya

haegarr    7372
@ Kasya: Your latest code seems mainly okay to me in principle, but I think there are some design quirks. Perhaps I'm wrong, because I don't have insight into some implementation and design details. However, here is what I mean:

(1) I wouldn't use a timer but an always-running time. When you start using animation blending, several animated objects, and perhaps physics, you would otherwise have to deal with dozens of timers, and you would introduce little slides at each timer restart. A central, always-running time unburdens you from taking care of that, and also gives all animated elements the same time base.

Let's assume there is a central time outside the animation. Whenever you begin to compute and render a new video frame, you freeze the current time (what I have named "the tick" in my large post above).

Reasoning: If you didn't do so, then animation A would be computed for time t, animation B a bit later, say at time t+0.05s, animation C another bit later, and so on. So your current snapshot of the scene would no longer be a real snapshot, since all animations would be computed at slightly different time moments.

Hold a relative time in the animation instance. I.e. an animation is instantiated and should start playing at t0. The animation is actually not playing as long as "the tick" is less than t0. Then, when the tick goes past t0, the relative time will be
t' := tick - t0
Of course, every animation has its own t0 and hence its own t'. If t' exceeds the length T of the animation, the result depends on whether the animation loops or not (you have already seen that, of course).

(2) Animations with loops have to wrap around. That means there is no "reset" when t' exceeds the length. That has 2 consequences:

First, when t' exceeds the duration, you have to correct t0 (which would be the easiest way of dealing with this) by setting
t0 := t0 + T while tick - t0 > T
Doing so does _not_ drop the time overshoot, as a simple reset would!

Second, when you look at the key-frames of a looping animation, the indices look like
0, 1, 2, 3, ... n-2, n-1, 0, 1, 2, ...
where n is the number of key-frames. Then, if t'[n-1] != t'[0], you have to interpolate between key-frames n-1 and 0 as well! You may have another scheme where t'[n-1] == t'[0], in which case you need not handle this specially.

I hint at this possibility although, or even because, I don't know how you define your key-frames.
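A small sketch of that wrap-around handling (hypothetical names; assumes tick and t0 are both in realtime seconds):

// Compute the animation-relative time t' for the current tick, shifting the
// animation's origin t0 forward by whole periods T instead of resetting a
// timer, so the overshoot is preserved.
float LocalTime(float tick, float& t0, float T, bool loops)
{
    float t = tick - t0;  // t' := tick - t0
    if (t > T) {
        if (loops) {
            while (tick - t0 > T) t0 += T;  // t0 := t0 + T while t' > T
            t = tick - t0;                  // overshoot kept, 0 <= t <= T
        } else {
            t = T;  // clamp at the end of a non-looping animation
        }
    }
    return t;
}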

godmodder    828
Quote:
Hold a relative time in the animation instance. I.e. an animation is instantiated and should start playing at t0.


I currently have a global timer in my engine that I query each frame to see how much time has passed since the previous frame. Every model in the world has a CurrentAnimTime variable that is increased by this frame interval. With the help of this CurrentAnimTime, the skeleton is animated. Is this the method you are using?

Quote:
(A little excursus: To make things even more interesting, this factor is a float parameter of the animation, and a float modifier can modify it. You see what I mean? Yes, another animation can be used to control it if one wants to do so. So it is even possible to fade animations in and out.)


What do you mean by "fading" an animation in and out? Is it transitioning from a walking to a running animation, for example, by letting the weight of the walking animation gradually decrease and the weight of the running animation increase?



I've read a thesis here about this subject. It states that animation blending could be implemented as a special case of animation layering, where the weights of all the joints of the skeletons from different animations are taken into account, as opposed to just a few joints. This seems like a good approach to me, but I'd like to know what you think of it.

Also, this paper speaks only of blending the rotations with the SLERP function, but do translations also need to be blended?

Aside from the actual implementation, is this really the technique games like Tomb Raider used in the past to combine animations? I find it a little hard to believe that these games, which didn't even use skinning, used such an advanced animation system. I ask because I'm currently replicating a Tomb Raider engine and I just need to combine a running and a shooting animation. The rest of the animations can simply be transitioned between. Maybe there's a simpler technique that I've overlooked?

As you can see, I have quite a few questions before I go and implement this technique. ;) Sorry you had to type that much, BTW!

Jeroen

[Edited by - godmodder on August 17, 2008 8:28:03 AM]

haegarr    7372
Quote:
Original post by godmodder
I currently have a global timer in my engine that I query each frame to see how much time has passed since the previous frame. Every model in the world has a CurrentAnimTime variable that is increased by this frame interval. With the help of this CurrentAnimTime, the skeleton is animated. Is this the method you are using?

I use another way of computing it, but in effect it seems the same to me, with perhaps some minor exceptions.

We both use a global time from which the video frame's tick is fetched. I hope you freeze the tick as I described in one of the posts above. From your description I see that you compute the time elapsed since the most recent video frame
dt := tick_n - tick_(n-1)
and increase the animation time by it:
t'_n := t'_(n-1) + dt , presumably with t'_0 := 0

My way is
t'_n := tick_n - t0

Now (if I'm correct in my assumptions), comparing these 2 ways shows that the animation playback is started at a video frame boundary in your case, while it is started at an arbitrary (but defined, of course) time moment t0 in my case. So I can better synchronize with already running animations. But you may also use t'_0 == t0 != 0, of course, in which case my hint is irrelevant.

The other thing is that your way accumulates small inaccuracies in t'_n over time, while my way avoids this.

But as said, both things are minor and may never cause a problem at runtime.
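Side by side, the two schemes might look like this sketch (illustrative names only):

// (a) Accumulating deltas: small floating-point errors can build up.
float RelativeTimeAccumulated(float& t, float tick, float& lastTick)
{
    t += tick - lastTick;  // t'_n := t'_(n-1) + dt
    lastTick = tick;
    return t;
}

// (b) Absolute difference from the start moment t0: no accumulation error.
float RelativeTimeAbsolute(float tick, float t0)
{
    return tick - t0;  // t'_n := tick_n - t0
}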

Quote:
Original post by godmodder
I've read a thesis here about this subject. It states that animation blending could be implemented as a special case of animation layering, where the weights of all the joints of the skeletons from different animations are taken into account, as opposed to just a few joints. This seems like a good approach to me, but I'd like to know what you think of it.

Please give me some days to read the thesis.

Quote:
Original post by godmodder
Aside from the actual implementation, is this really the technique games like Tomb Raider used in the past to combine animations? I find it a little hard to believe that these games, which didn't even use skinning, used such an advanced animation system. I ask because I'm currently replicating a Tomb Raider engine and I just need to combine a running and a shooting animation. The rest of the animations can simply be transitioned between. Maybe there's a simpler technique that I've overlooked?

Well, I don't know how animation is implemented in Tomb Raider. The described system is just the animation system of my engine. If you ignore name binding, symmetry, and instancing, nothing more than a 2-level averaging is left over. Essentially, in a kind of pseudo-code, what is done is:

for( group = 0; group < noofAnimGroups; ++group ) {
    for( anim = 0; anim < group.noofAnims; ++anim ) {
        for( track = 0; track < anim.noofTracks; ++track ) {
            value = track.interpolate( tickRelativeToAnim );
            track.targetParameter += anim.weight * value;
        }
    }
    skeleton.normalize;
    result.addIfTotalWeightIsLessThanOne( skeleton, group.weight );
}

As can be seen here, the "for( anim )" loop does the blending, while the "for( group )" loop does the layering. In fact, blending and layering are mathematically the same, with the exception of adjusting the layer's weight factor according to the weight factors already applied. No big deal.


However, I don't claim that the described method is the only one, or even the best one. It is just the method that seems appropriate to me to reach the goals I've discussed with my artist, and that fits into my overall engine design, of course. E.g. the timeline and prototype/instancing stuff result from the basic design principles of the engine, and not from a necessity of the animation system itself. And the usage of tracks is IMHO advantageous because it allows my artist to concentrate on the parameters he wants to tweak, and it lowers the number of parameters the engine has to process.
