Animation Blending

Is that blending really an interpolation? If yes, then I can use quaternion spherical linear interpolation (slerp).
slerp has the advantage of constant angular velocity over the interpolation parameter, but has the disadvantage of non-commutativity. nlerp (i.e. normalized lerp) has non-constant angular velocity but is commutative; moreover, it is less expensive. IMHO, nlerp is the better alternative for blending. In particular, its commutativity makes things simpler when several animations are blended.

EDIT: Yes, blending is mathematically an interpolation.
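
As a rough illustration (not taken from anyone's engine in this thread): a minimal nlerp sketch in C++, assuming a plain Quaternion struct with x/y/z/w members. The dot-product sign flip is an extra detail that keeps the blend on the shorter arc.

#include <cmath>

struct Quaternion { float x, y, z, w; };

// Normalized linear interpolation between two unit quaternions.
// The sign flip makes the blend take the shorter arc between the two rotations.
Quaternion Nlerp(Quaternion a, Quaternion b, float t)
{
    float dot  = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    float sign = (dot < 0.0f) ? -1.0f : 1.0f;   // avoid the "long way around"

    Quaternion r;
    r.x = a.x + t * (sign * b.x - a.x);
    r.y = a.y + t * (sign * b.y - a.y);
    r.z = a.z + t * (sign * b.z - a.z);
    r.w = a.w + t * (sign * b.w - a.w);

    float len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z + r.w*r.w);
    r.x /= len; r.y /= len; r.z /= len; r.w /= len;
    return r;
}

Called per joint with the two key orientations and the interpolation factor, this gives the cheap, commutative blend described above.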
When you use

void CAnimation::Blend(CSkeleton *pFinalSkel, CSkeleton *pLastSkel, CSkeleton *pNextSkel, float blendFactor);

inside

void CAnimation::Animate(CSkeleton *pFinalSkel)
{
    Blend(pFinalSkel, m_Frames[last] /* last frame */, m_Frames[next] /* next frame */, blend_factor);
}

Last Frame means: if there are 50 frames, Last is the 50th frame, or the last used frame.
Next Frame means: if the current frame is 23, Next is 24, or it can be any other value.

I only need to know how this works inside one animation, not between different animations.

Thanks,
Kasya
If you blend the animations as described earlier, wouldn't you get some average result of all the animations? I mean, suppose you wanted to make a character run and shoot like in Tomb Raider for example. If you followed the instructions above, you'd probably end up with some movement that isn't what you intended. Somehow, we need to give the arm movement a priority for the shooting animation and the legs movement a priority for the running animation, or else the arms and legs would always be "in between" the two animations. I think this could be done using some kind of weighting, but I have no idea how exactly.

Jeroen
Quote:Original post by godmodder
If you blend the animations as described earlier, wouldn't you get some average result of all the animations? I mean, suppose you wanted to make a character run and shoot like in Tomb Raider for example. If you followed the instructions above, you'd probably end up with some movement that isn't what you intended. Somehow, we need to give the arm movement a priority for the shooting animation and the legs movement a priority for the running animation, or else the arms and legs would always be "in between" the two animations. I think this could be done using some kind of weighting, but I have no idea how exactly.

First, you must differentiate between animation blending and animation layering. Blending actually "averages" animations; that's correct and intentional. Animation layering, on the other hand, also sums up animations, but only until a total weight of 1 is reached. So animations processed prior to others will have priority. Animation layering has to be taken into account if prioritization between animations of the same joints is wanted.

Second, artists should be able to define animations for selected parameters only. In other words, animations should be defined by channels, each one for a particular position or orientation. So blending/layering 2 or more animations will not necessarily affect all, or even the same, joints.

However, the issue is heavy enough without considering layering at this time, and the OP has asked for blending explicitly; so I haven't talked about layering. But if you're interested, I will tell you what I know, of course.
Quote:But if you're interested, I will tell you what I know, of course.


Thank you for giving me the correct term for this effect. I didn't know it was called "layering". I'd be very interested in your explanation, because you seem to have experience with it. I've just implemented a skeletal animation system myself, and I would love to add this kind of layering to it. This automatic combining of animations would save me a lot of work.

Jeroen
Quote:Original post by Kasya
Last Frame means: if there are 50 frames, Last is the 50th frame, or the last used frame.
Next Frame means: if the current frame is 23, Next is 24, or it can be any other value.

I only need to know how this works inside one animation, not between different animations.

Sorry, but I don't understand. Perhaps you have an animation set-up that is beyond my imagination. The following is simply based on assumptions I made about your animation system.

It seems that you interpolate between the most recently computed pose and an end pose. That _can_ be correct, if you compute the blending factor accordingly. However, I have to advise against it. Even if you continuously correct the blending factor, you will suffer from accumulating errors due to numerical inaccuracies. Unless you have a compelling reason not to, you should always interpolate between the surrounding key-frames.

The other thing is that animations that actually depend on _frames_ will make your results dependent on the performance of the computer. This is bad practice. Make time-controlled animations instead.
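
To make the "surrounding key-frames" idea concrete, here is a minimal time-based sampling sketch; the KeyFrame struct and SampleTrack function are my own illustrative names (reusing the Quaternion/Nlerp sketch from earlier in the thread), not code from Kasya's system.

#include <cassert>
#include <vector>

struct KeyFrame
{
    float      time;      // key time in seconds, measured from the animation start
    Quaternion rotation;  // key value for this joint's orientation
};

// Given the local animation time, find the two surrounding key-frames and
// interpolate between them; no per-frame error can accumulate this way.
Quaternion SampleTrack(const std::vector<KeyFrame>& keys, float localTime)
{
    assert(!keys.empty());
    if (localTime <= keys.front().time) return keys.front().rotation;
    if (localTime >= keys.back().time)  return keys.back().rotation;

    size_t next = 1;
    while (keys[next].time < localTime) ++next;   // linear search; a binary search also works
    const KeyFrame& a = keys[next - 1];
    const KeyFrame& b = keys[next];

    float t = (localTime - a.time) / (b.time - a.time);  // interpolation factor in [0,1]
    return Nlerp(a.rotation, b.rotation, t);
}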
Thanks! I'll make it time based. Animation layering is a good technique. Can I use animation layering together with animation blending? And could you tell me more about animation layering, please, or give me a resource about it? I googled it but found no good results.

Thanks
Kasya
Everyone have a good night. See you tomorrow! Bye!
Quote:Original post by godmodder
Thank you for giving me the correct term for this effect. I didn't know it was called "layering".

Presumably there are also other terms out there for that kind of thing.

First I'll give a coarse overview, since it seems that Kasya's, and perhaps your, animation system works differently from mine.

The runtime system of my engine measures time in real-time seconds from the beginning of the application. The moment in time when an animation is started is used as its local temporal origin, so that from the viewpoint of the animation time runs from 0 (but still in real-time seconds). The current moment in time is stored as "the tick" whenever processing of a new video frame is started. All time-related computations, be they animation or physics based, use this tick as reference.
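
A tiny sketch of how such a local temporal origin could look in code (the names are illustrative, not haegarr's actual API):

struct AnimationInstance
{
    double startTick;   // global tick (seconds since application start) at which this animation began

    // Local time of the animation: runs from 0 at its start, still in real-time seconds.
    double LocalTime(double currentTick) const { return currentTick - startTick; }
};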

An animation is a time-based modifier of some elemental data. For this purpose it is integrated into the timeline. Being in the timeline guarantees that the animation is processed whenever the current tick (say the virtual playhead) is over it. One can imagine that each modified piece of elemental data has a track in the timeline. E.g. the position of a joint or the orientation of a joint is such an elemental datum, although that plays no role for the track itself: for the animation track there is just a position vector or a quaternion ready for modification; nothing more needs to be known by it.

An animation track may be implemented by a key-framed, vector-typed interpolation. Notice please that, due to the timing mechanism described above, "key-framed" should not be understood as frame based but time based, of course. As written in one of the above posts, the current tick is then used to determine the 2 surrounding key-frames, the 2 corresponding data values, and the interpolation factor. With these values the time-interpolated data value is computed and handed over to the data storage (the position of a joint in this case).

We have 2 important things here already, although they have not been explicitly mentioned yet: First, there is no requirement that _every_ parameter of every joint has an animation track modifying it! That means an animation for scratching the head has tracks for the right arm, the shoulder and perhaps the head, but none of its tracks influences the hips or legs. (This is one of the issues I've hinted at already in a post above.) Second, tracks can temporally overlap each other. This is obviously necessary for tracks that modify distinct parameters, but I also allow several tracks to modify the same parameter at the same tick. That is the basis for animation blending and layering. However, a _single_ animation is defined to have at most 1 track for a particular parameter. But who says we can have only 1 animation targeting the same skeleton?
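
A minimal sketch of that idea, continuing the earlier KeyFrame sketch; the class and member names are illustrative assumptions, not haegarr's actual types:

#include <string>
#include <vector>

// One track drives exactly one parameter (here: the orientation of one joint).
struct OrientationTrack
{
    std::string           jointName;   // which joint parameter this track is bound to
    std::vector<KeyFrame> keys;        // time-based keys, as sketched above
};

// An animation carries only the tracks it needs; a "scratch head" clip might
// contain tracks for the arm and shoulder joints and simply leave the legs alone.
struct Animation
{
    std::vector<OrientationTrack> tracks;
};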

To blend 2 or more animations, the modification doesn't happen by simply writing the current (time-interpolated) value to the joint's position; instead the value is weighted by a factor and summed up. (A little excursus: To make things even more interesting, this factor is a float parameter of the animation, and a float modifier can modify it. You see what I mean? Yes, another animation can be used to control it if one wants to do so. So it is even possible to fade animations in and out.) To let this work, the parameters (at the joints) are preset with identity values (i.e. null for a position). At some point we need to normalize the summed-up parameters. To control this, the animations so far are grouped into an animation group. In other words, all animations that are in the same group work by blending their values.
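
As a sketch of the weighted accumulation (my own interpretation of the description above; the struct is illustrative and builds on the earlier Quaternion sketch):

// Accumulator for one joint parameter inside an animation group.
// It is preset with identity/zero values before the group's animations are processed.
struct BlendAccumulator
{
    Quaternion rotationSum { 0, 0, 0, 0 };
    float      weightSum   = 0.0f;

    // Each animation of the group adds its time-interpolated value, scaled by its weight.
    void Add(const Quaternion& q, float weight)
    {
        // Keep all contributions in the same hemisphere so they reinforce each other.
        float sign = (rotationSum.x*q.x + rotationSum.y*q.y +
                      rotationSum.z*q.z + rotationSum.w*q.w) < 0.0f ? -1.0f : 1.0f;
        rotationSum.x += sign * weight * q.x;
        rotationSum.y += sign * weight * q.y;
        rotationSum.z += sign * weight * q.z;
        rotationSum.w += sign * weight * q.w;
        weightSum     += weight;
    }
};

Normalizing the accumulated quaternion later gives the nlerp-style blend; presetting the accumulator with identity/zero values corresponds to the preset mentioned above.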

Now I come to the layering. After the previous animation group has been processed, the total weight that has been accumulated during its blending may or may not be at 100%. If it is less than 100%, then there is some more "place" for another animation (group). Hence, if there actually is another animation group targeting the same skeleton, it is processed as already described. The formerly computed joint parameters and weights must not be overwritten, so you can copy them or use the little index trick. I.e. the joints have 2 sets of parameters: one where the blending happens, and one where the values (from the blending) are "layered". The latter set is obviously the one that is used for skinning at the end. The computation of the latter set from the former one (say the "layering") is also done when the "animation group end" is processed, so there is no extra synchronization point to be implemented. If the summed weight of the currently blended animation group doesn't fit into the place the layering has left (i.e. the current total weight plus the current sum of blended weights exceeds 100%), then the current animation group is scaled down accordingly so that it fills up the 100% but does not exceed it.
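
A sketch of that "animation group end" step, under the same assumptions as the previous snippet (two parameter sets per joint, a 100% weight budget; the names are mine):

// Two sets of parameters per joint, as described above: one where the current
// group blends ("blend"), one where the groups are layered ("layered").
struct JointParameter
{
    BlendAccumulator blend;                   // reset before each animation group
    Quaternion       layered { 0, 0, 0, 0 };
    float            totalWeight = 0.0f;      // weight already consumed by earlier groups
};

// Called at "animation group end": fold the blended result into the layered set,
// never letting the total weight exceed 100%.
void LayerGroup(JointParameter& p)
{
    if (p.blend.weightSum <= 0.0f) return;     // this group did not touch the parameter

    float room  = 1.0f - p.totalWeight;        // how much "place" is left
    float scale = (p.blend.weightSum > room) ? room / p.blend.weightSum : 1.0f;

    p.layered.x   += scale * p.blend.rotationSum.x;
    p.layered.y   += scale * p.blend.rotationSum.y;
    p.layered.z   += scale * p.blend.rotationSum.z;
    p.layered.w   += scale * p.blend.rotationSum.w;
    p.totalWeight += scale * p.blend.weightSum;

    p.blend = BlendAccumulator{};              // ready for the next group
}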

After all animation groups are processed, it may happen that some parameters are not touched at all, so that the total weight is still zero. In such a case the "rest pose" is asked to deliver the parameter value.
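
Continuing the sketch, the final per-parameter step might look like this (again illustrative, assuming the JointParameter struct from above):

// Final step per parameter: untouched parameters fall back to the rest pose,
// everything else is normalized and can then be used for skinning.
Quaternion Finalize(const JointParameter& p, const Quaternion& restPose)
{
    if (p.totalWeight <= 0.0f)
        return restPose;                       // no animation touched this parameter

    Quaternion q = p.layered;
    float len = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    q.x /= len; q.y /= len; q.z /= len; q.w /= len;
    return q;
}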

Notice please that the processing order of animations plays a role not for blending but for layering.


The above is the soul of my skeleton animation system. There are more aspects than those, e.g. prototypes and instances of skeletons and animations, and the method how animations are bound to the modified parameters. I can use the same animation prototype (but another instance, of course) to control another skeleton as long as it fulfills the binding requirements. And last but not least, symmetry: I can use the same animation to control either the left arm or the right arm, for example, by simply using a symmetry condition in the binding phase. Things like this make my artist happy and reduce the count of animations.
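
Purely as an illustration of the prototype/instance and binding idea (this is guesswork at the data-structure level, not haegarr's actual design, and it reuses the OrientationTrack sketch from above):

// One shared prototype holds the immutable key data; each use on a skeleton gets
// its own binding, resolved by joint name, with an optional mirror flag for symmetry.
struct AnimationPrototype
{
    std::vector<OrientationTrack> tracks;   // shared, immutable key data
};

struct AnimationBinding
{
    const AnimationPrototype* prototype;
    bool                      mirrored;     // e.g. reuse a left-arm clip for the right arm
    std::vector<int>          jointIndices; // resolved per target skeleton at bind time
};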


Huh, I've written a novel now ;) Time to end the work for today.

