Animation Blending

Quote:
Original post by RobTheBloke
It's not done for efficiency, it's done to make the anim system re-usable and to improve build times. Doing it the other way requires the anim system to know about bones, meshes, lights, materials, blend shapes, and anything else it might want to animate. Ultimately that breaks modularity and makes the system a PITA to work with. (Btw, I have been writing commercial anim systems for the last 10 years or so, so I've seen the positives and negatives of most ways of doing it.)

Uhh, well, I'm reworking my first animation system, so your experience in this topic is by far greater. Nevertheless, there is something in between black and white: your abstraction is absolute in that you use a single data type only, floats all over. Our abstraction is less strict, but we still have an abstraction. ATM we are speaking of float scalars, float vectors, and (unit) quaternions. You have these types, too, of course, but look at them as sequences of 1 to 4 floats. This allows you to concentrate all the state variables and to perform a unified blending operation on them. Hence you can presumably perform a more efficient blending w.r.t. caching and/or SSE. On the other hand, you have the requirement of contiguous float buffers for each and every operation, or else the cache advantage will be lowered. Moreover, you are restricted to operations that can be performed on all (vector) components separately: e.g. you can do an nlerp (although only with an "external" normalization step) but no slerp. Don't misunderstand me: that is okay so far. I believe that this makes a performance difference to your benefit (although I'm not able to quantify it).
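To illustrate the slerp restriction, here is a minimal sketch with a hypothetical plain-float quaternion type (my names, not code from either system):

#include <cmath>

struct Quat4 { float x, y, z, w; };

// nlerp fits the "sequence of floats" model: a component-wise lerp
// plus one external normalization step at the end.
Quat4 nlerp(const Quat4 &a, const Quat4 &b, float t) {
    Quat4 r = { a.x + (b.x - a.x) * t,
                a.y + (b.y - a.y) * t,
                a.z + (b.z - a.z) * t,
                a.w + (b.w - a.w) * t };
    float len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z + r.w*r.w);
    r.x /= len; r.y /= len; r.z /= len; r.w /= len;
    return r;
}

// slerp, by contrast, needs acos/sin of the 4D dot product, i.e. an
// operation on the quaternion as a whole; it cannot be split into
// independent per-component operations.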

Quote:
Original post by RobTheBloke
Quote:
Original post by godmodder
My last argument is about the logic of the code. Think about what happens in real life: a bone doesn't query its next position but is forced into it. Having the tracks explicitly impose a transformation on the bones seems more realistic. I like to imagine it like controlling a puppet with strings.

But that argument fails the second you have something like an aim constraint that is applied before the final stage of the animation. With your method, the character would have to be updated, then you'd have to get the aim to pull data from those transforms, and then you'd fill the temp buffer with more data. You're adding another read where you don't need it, that or you'd always be a frame behind.

That depends on how the aiming is done. IK solutions need not necessarily be implemented the traditional way. Aiming, and any other animation given an advantage by IK, can itself be implemented by blending (perhaps static) animations. Put this animation at a higher priority and there is no need to "post-process".

Quote:
Original post by godmodder
Quote:
Original post by RobTheBloke
It's not done for efficiency, it's done to make the anim system re-usable and improve build times. Doing it the other way requires the anim system to know about bones, meshes, lights, materials, blend shapes, and anything else it might want to animate. Ultimately that breaks modularity and makes the system a PITA to work with.

Our system doesn't need to know anything about those specifics either. We just need to pass a generic scenegraph node to the animation system, which could represent almost anything.

In fact, the scenegraph node here is nothing more than a scope for the name resolution. As such, its purpose is to restrict the name resolution to a defined set of possible outcomes. This is done for both efficiency and re-usability. One can pass a specific NameResolver if wanted; it is also possible to drop it entirely if one absolutely wishes to do so.


In the meanwhile I've read some stuff on animation trees. You remember that little discussion about morpheme at the beginning of this thread? With the concept of animation trees in mind, I no longer wonder that morpheme provides 30 or so blend modes (that isn't meant in a negative sense). It also explains RobTheBloke's insistence on masking. I see 2 big differences: concentration of animated values vs. distribution of generally modifiable values, and the splitting of data and operations vs. "class" building. It's not surprising that exactly these two issues are currently the subject of discussion in this thread.
Hello,
Sorry for the late reply. You know I go to school; I'm just a 15-year-old who wants to be a programmer :)

I started reading some answers in here; I really forgot to tell you about that.

My question is: does the FourCC semantic include four things:
Position, Local Position, Orientation, Local Orientation?

Or something else? I didn't use any const FourCC &semantic in my skeleton system code. Do I need to use that?

I just did something like this:

bool Resolve(_uint boneID, CAnimQuatVariable &var);
bool Resolve(_uint boneID, CAnimVector3Variable &var);

Do I need that semantic thing here?

And what should I do now: Pre-/Post-Blending or runtime blending? Which works faster? If runtime blending works faster, how do I implement it? (I have Pre-/Post-Blending.)

Thanks,
Kasya

P.S. On Saturday I think I'll finish that Instance and Prototype for Animation. I'm really getting used to game programming; I never understood animation like that before. Thank you very much! :D
Quote:
Original post by Kasya
My question is: does the FourCC semantic include four things:
Position, Local Position, Orientation, Local Orientation?

Or something else? I didn't use any const FourCC &semantic in my skeleton system code. Do I need to use that?

Not necessarily.

The semantic is used in the resolution step of the binding process. Hence it is used if and only if a general binding is done at all. E.g. if all your animations are complete w.r.t. the targeted skeleton, and the tracks are in the same order as the skeleton's bones, then you don't need a resolution. I personally allow incomplete animations, and I don't rely on the correct order, so I need some resolution. I pay with some additional runtime costs, of course.

There is a post somewhere in this thread where I describe some variations, with and without semantics. In the minimalistic approach you don't need semantics, since each bone (as identified by the boneID) provides only a single (local) position and (local) orientation. Hence the two routines you've presented in the above post are sufficient.

On the other hand, if you want to animate the skeleton's global positions and orientations as well, then there are 2 positions and 2 orientations. Or, if you want to animate the x, y, and z channels of the position separately, you'll have 3 floats. In such cases you need an additional criterion for resolution, e.g. a semantic. You have to decide whether such cases make sense for you.

I have introduced the semantic simply because I allow animations to be bound to entities other than skeletons, too. So I need a more general resolving method.
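A hedged sketch of what such a semantic-based resolution could look like, built from the snippets in this thread (hypothetical signatures; the actual interface may differ, and a FourCC is normally a four-character code like 'LPOS', so an enum merely serves the same purpose here):

enum FourCC {
    LOCAL_POSITION = 0,
    LOCAL_ORIENTATION,
    GLOBAL_POSITION,
    GLOBAL_ORIENTATION
};

// Without a semantic, one boneID maps to exactly one variable per type:
bool Resolve(_uint boneID, CAnimQuatVariable *&var);

// With a semantic, the same boneID can expose several variables of the
// same type (e.g. a local and a global orientation):
bool Resolve(_uint boneID, const FourCC &semantic, CAnimQuatVariable *&var);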

Quote:
Original post by Kasya
And what should I do now: Pre-/Post-Blending or runtime blending? Which works faster? If runtime blending works faster, how do I implement it? (I have Pre-/Post-Blending.)

What do you mean by "Pre-/Post-Blending"? Do you mean the routines preBlending(...) and postBlending(...) I've introduced earlier? These routines are for preparing and reworking the blending. If you mean something else, please explain what you mean.

Quote:
Original post by Kasya
I never understood animation like that before.

Please notice that this approach to animation is just one of several. Other possibilities exist as well, e.g. the "animation tree" that is favored by RobTheBloke.
@haegarr

Yes, I mean preBlending and postBlending. In one post I think RobTheBloke said he's using runtime blending, not preBlending and postBlending. How do I do that? Is it necessary? And what do you think if I create an animation system like in Unreal 3:

Quote:

Animation is driven by a tree of animation objects including: Blend controllers, data-driven controllers, physics controllers, and procedural animation controllers.

?

Thanks
Kasya
Quote:
Original post by Kasya
Yes, I mean preBlending and postBlending. In one post I think RobTheBloke said he's using runtime blending, not preBlending and postBlending. How do I do that? Is it necessary?

IMHO "runtime blending" (as opposed to "precomputed blending") means that blending is done at, well, runtime. Both approaches do blending at runtime. Perhaps RobTheBloke means something special when he spoke about "Runtime Blending", but then I don't know.

About the roles of preBlending and postBlending: think of computing a mean value. A code snippet would look like this:

// preparation
sum = 0;
// main part
for( int i=0; i<length; ++i ) {
    sum += value[i];
}
// reworking
sum /= length;

This snippet can be broken down into 3 sections: the preparation sets the sum to 0, the loop sums up, and the reworking normalizes the sum. preBlending, accuBlend (or others), and postBlending are to be understood in the same way. It is just a mechanism that allows (and is used by) distributed invocations of the blending calculations.
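Mapped onto the skeleton blending discussed in this thread, the same three phases could look like this (a sketch; contributeAll is a hypothetical helper that invokes contribute(...) on every binding of an instance):

void blendFrame(SkeletonInstance *skel, AnimationInstance **playing, int count, real time) {

    skel->preBlending();                 // preparation: totalWeight := 0 on every variable

    for( int i=0; i<count; ++i ) {       // main part: each animation accumulates its
        playing[i]->contributeAll(time); // weighted contributions via its bindings
    }

    skel->postBlending();                // reworking: e.g. reworkPose() falls back to the
}                                        // bind pose for variables nobody touched

Note that the preparation and reworking wrap all blended instances, so the accumulated totalWeight spans every animation that contributes to the frame.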

Quote:
Original post by Kasya
And what do you think if I create an animation system like in Unreal 3:
Quote:

Animation is driven by a tree of animation objects including: Blend controllers, data-driven controllers, physics controllers, and procedural animation controllers.

This sounds like Unreal 3 uses animation trees. Such an animation system takes a slightly different approach, and is IMHO in fact what RobTheBloke is favouring. It has some advantages and some disadvantages. Please notice that the "controlling" part is something we have not yet (deeply) discussed in this thread at all.

Several commercial packages use animation trees, so that method is proven to work. Hence you can rely on it and choose it as your implementation, of course. There are 2 or 3 principal differences to the other animation system, so you may be required to rewrite some stuff.
Hello,
@haegarr: what does an animation tree structure need? And can I use the Skeleton Instance and Skeleton Prototype with animation trees? Or do I need to create something like skeleton trees?

Thanks,
Kasya
@haegarr

void Skeleton::reworkPose() {
    forEach( bone in bones ) {
        if( !bone->m_positionVariable.m_totalWeight ) {
            bone->m_positionVariable.m_value = bone->m_positionInBindPose;
        }
        if( !bone->m_orientationVariable.m_totalWeight ) {
            bone->m_orientationVariable.m_value = bone->m_orientationInBindPose;
        }
    }
}



In your code, what is bone->m_orientationInBindPose? Is it _backing->m_orientation (SkeletonPrototype::Bone _backing)? Or is it that bone's current orientation?

Thanks
Kasya
An animation tree is a kind of animation system that handles animation like a mathematical expression. The expression uses poses as the type of its operands and results. A pose, as ever in the sense of skeletal animation, is the state of a part of or of the entire skeleton. The operators themselves are nodes in the tree, and the literals are leaves of the tree. Operators can be (and mostly will be) parametrized.

An example:
p := dissolve( walk, run, 0.3 ) + turnLeft*0.2
is an expression composed as
p := p1 + p2
of
p1 := dissolve( walk, run, 0.3 )
p2 := turnLeft*0.2 = scale( turnLeft, 0.2 )
so that 3 nodes exist: one cross-dissolve node, one scaling node, and one addition node. Moreover, 3 leaves exist (okay, leaves are often implemented as nodes, too, but I make this distinction here for clarity), namely "walk", "run", and "turnLeft". The leaves are in fact animation players. They are like AnimationInstance with the blending stripped away.
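A minimal sketch of what such a node could look like (a hypothetical interface of my own, not morpheme's or RobTheBloke's actual API):

#include <vector>

struct BoneTransform { float v[7]; };              // x,y,z position + x,y,z,w quaternion
struct Pose { std::vector<BoneTransform> bones; }; // state of (a part of) the skeleton

class AnimTreeNode {                               // operator or leaf of the tree
public:
    virtual ~AnimTreeNode() {}
    virtual void evaluate(Pose &out, float time) const = 0;
};

class DissolveNode : public AnimTreeNode {         // models dissolve( walk, run, 0.3 )
public:
    DissolveNode(const AnimTreeNode *l, const AnimTreeNode *r, float f)
        : left(l), right(r), factor(f) {}

    void evaluate(Pose &out, float time) const {
        Pose a, b;
        left->evaluate(a, time);                   // evaluate both sub-trees first,
        right->evaluate(b, time);                  // then blend the resulting poses
        out.bones.resize(a.bones.size());
        for( size_t i=0; i<a.bones.size(); ++i ) {
            for( int c=0; c<7; ++c ) {             // component-wise lerp; a real
                out.bones[i].v[c] = a.bones[i].v[c]              // implementation would
                    + (b.bones[i].v[c] - a.bones[i].v[c]) * factor; // renormalize the
            }                                                       // quaternion part
        }
    }

private:
    const AnimTreeNode *left, *right;
    float factor;
};

// Leaves like "walk" or "run" would be player nodes that sample a keyframe
// track; scale and add operators follow the same evaluate() pattern.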

If you can read PowerPoint files, then perhaps these slides may give you an overview. If you examine the animation tree system, you will hit some differences to the other animation system, as I've already mentioned earlier.

However, the differences are not as big as they seem at first glance. As soon as I find some time, I will evaluate how both systems can be merged. You have noticed that controlling wasn't an issue under discussion so far. The controlling at the lower level is an inherent part of the animation tree (that's the reason for the many node types RobTheBloke has mentioned earlier).

Quote:
Original post by Kasya
In your code, what is bone->m_orientationInBindPose? Is it _backing->m_orientation (SkeletonPrototype::Bone _backing)?

Exactly. The purpose is to guarantee that the skeleton's bone state is set to something sensible. Since the system doesn't enforce an animation to be complete w.r.t. the entirety of bone variables, it may happen that some variables are not touched after all current animations are blended. E.g. bone positions will often not be animated and are hence typical candidates. Hence the reworking iterates over the bones, looks for untouched variables, and sets them to a nominal value (the said bind pose).
Hello again. It's been a while, and I've got blending of animations to work. I can now blend an arbitrary number of animations together. I'll add support for layering when I find the time. This leads me to the problem of animation controlling. This is what I came up with:

1) gather input from the system controls
2) set the character animation state machine in a certain state (e.g. RUNNING)
3) the state machine knows how to transition from one state to another and it sets the right blending modes etc...
4) the animations are applied in the right way to the character

Now, I don't know if this is a good way of doing things, but it seems like a reasonable approach. The state machine for a character could be constructed by the game code itself, or read from a file created by the artists. There's one thing I don't know how to solve: where do additive and layered animations fit in such an animation state machine? Here's a picture of the most important state transitions in my game; every transition is done by transitional blending of animations:

[image: animation state machine]

In almost every state it's possible to take out your guns and start shooting (but you can't shoot while crouching, for instance). Should shooting be a separate state? If so, I would have to connect almost every state with it, and even then it wouldn't be entirely correct, because the previous animation should still be running and mixed with it. The same problem exists for additive animations such as leaning left while running. What would you suggest?

Step 3 is a bit problematic too. Should a blend tree be constructed in this step? How would this tree be traversed then? I think the approach taken in this step is highly correlated with the solution to the previous question.

As you can see, I have a rough picture of how this should be done, but there are a few details which I hope you can fill in for me.

Jeroen
The following are thoughts, not purely pulled out of thin air (i.e. existing games use such techniques), but still not implemented by myself. People with more experience may contradict me...

I'm not a friend of a strict state machine approach. By definition, exactly one state is current in a state machine. So we would need a state for going forward, another one for going forward while shooting, another one for going forward while waving, another one for going forward while... That would IMHO contradict the flexible blending approach a bit. Not only the states but also the number of edges in the corresponding graph would be enormous.

Instead, I think of different animation axes. E.g. one such animation axis may be the forward motion. The controlling variable on that axis would be the (forward) velocity. The velocity is part of the object's state, and is controlled by user input (e.g. velocity is increased up to a limit by each press of the UP key, and decreased by pressing the DOWN key; things like that), physical constraints (e.g. the weight of armour), terrain (e.g. the slope in the motion direction, grip), or whatever you can think of.

Now, the axis' job is to translate the velocity into an animation. For that purpose, particular concrete animations are associated with values on the axis. E.g. the "walk" animation is made for a velocity of 1 m/s, the "jog" animation for a velocity of 4 m/s, and the "run" animation for 7 m/s. Obviously, if the current velocity is in the range [4,7] m/s, a blending between the surrounding animations "jog" and "run" has to be performed, and the blending factor is f = (v-4)/(7-4), which is well known from keyframing, of course. (As usual, one has to do a phase correction by adapting the animation playback speeds. But that has nothing to do with the controlling technique.)
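A small sketch of such a 1D axis (the names are mine, for illustration only; the axis is assumed non-empty and sorted by value):

#include <vector>

struct AxisEntry { float value; const char *clip; };   // e.g. { 4.0f, "jog" }

// Given the current velocity v, find the two surrounding clips and the
// blend factor f = (v - v_left) / (v_right - v_left), as in the text above.
void selectClips(const std::vector<AxisEntry> &axis, float v,
                 const AxisEntry *&a, const AxisEntry *&b, float &f) {
    a = b = &axis.front();
    f = 0.0f;
    for( size_t i=0; i+1<axis.size(); ++i ) {
        if( v >= axis[i].value && v <= axis[i+1].value ) {
            a = &axis[i];
            b = &axis[i+1];
            f = (v - a->value) / (b->value - a->value);
            return;
        }
    }
    if( v > axis.back().value ) { a = b = &axis.back(); }  // clamp above the range
}

// Usage: with axis = { {1,"walk"}, {4,"jog"}, {7,"run"} } a velocity of
// v = 5 m/s selects "jog" and "run" with f = (5-4)/(7-4) = 1/3.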

I have excluded v==0 in the above example for simplicity. This is because blending "stand" with any forward motion will likely not look pretty. From what I have read on the net, starting to walk is often done by an explicit animation. E.g. the animation starts by lifting the right foot and performs the first step until it is in sync with the slowest forward motion animation.

So, the axis above is a 1-dimensional, continuous (and limited) axis. It is no problem to define multi-dimensional axes as well. Let's assume we define the forward motion not only by the (strict) forward velocity but couple it with left/right motion. Then a particular animation may be made for a 3 m/s forward motion with a 5°/s curve motion, and the state will be represented by a tuple (v,d), for example.

It is also possible to have discrete axes. E.g. a boolean that controls "shooting" or not shooting.

With variables like those above, some kind of rule set and cross influences can be built. E.g. the forward velocity can be used to influence the probability of hits when shooting. Or one can jump only if the forward velocity is above a minimum, or the jump distance is determined from the velocity, or...

Nevertheless, the above mechanism is not sufficient for all purposes. E.g. whether the character is upright or crouching may be used to switch entire axes (oh well, if one really wants, one can understand this as a discrete macro axis :)). The walk/jog/run axis above is active only if the character is in its upright state. Such "macroscopic" behaviour can be modelled by a discrete state machine, of course. This may have the advantage of not getting lost in control scripting, since the state machine may be used to divide the rule set up into several smaller sets.

The total set of variables like the forward velocity, shooting-or-not, upright-or-crouching-or-falling, ... together defines the state of the character.
Hello Everyone,
I was very, very busy with school. Sorry for the late reply.

I've created the whole animation system. It can add keyframes, bindings, tracks, skeletons, bones, etc.

But I've got a problem with blending, and I don't know how to fix it.

That's the blending code:

AnimationUtils.h


class AnimationUtils {
public:

    template<class T> static T Interpolate(T &v1, T &v2, real factor) {
        return Math::Lerp<T>(v1, v2, factor);
    }

    template<> static Quat Interpolate<Quat>(Quat &q1, Quat &q2, real factor) {
        Quat q;
        q.Nlerp(q1, q2, factor);
        return q;
    }

};

class AnimQuatVariable {
public:
    Quat value;
    real totalWeight;

    AnimQuatVariable() : totalWeight(0) {}

    void Reset() { totalWeight = 0; }

    void Blend(Quat &other, real weight) {
        totalWeight += weight;
        value.Nlerp(value, other, totalWeight);
    }

};

class AnimVector3Variable {
public:
    Vector3 value;
    real totalWeight;

    AnimVector3Variable() : totalWeight(0) {}

    void Reset() { totalWeight = 0; }

    void Blend(Vector3 &other, real weight) {
        totalWeight += weight;
        value = Math::Lerp<Vector3>(value, other, totalWeight);
    }

};

enum FourCC {
    ORIENTATION = 0,
    POSITION
};



AnimationTrack.h - Interpolation Part


Keyframe * FindKey(real time) const {
    for(KeyList::const_iterator i = keys.begin(); i != keys.end(); ++i) {
        if((*i)->time == time) return *i;
    }
    return NULL;
}

Keyframe * NextKey(Keyframe * key) const {
    for(KeyList::const_iterator i = keys.begin(); i != keys.end(); ++i) {
        if((*i) == key) return *i + 1;
    }
    return NULL;
}

template<class value_g, class target_g> value_g AnimationPrototype::KeyframeAnimationTrack<value_g, target_g>::Interpolate(real time) const {

    Keyframe * cur = FindKey(time);
    Keyframe * next = NextKey(cur);

    if(!cur) cur = new Keyframe; cur->value = 0;
    if(!next) next = new Keyframe; cur->value = 0;

    return (value_g) AnimationUtils::Interpolate<value_g>(cur->value, next->value, (next->time - cur->time));
}



AnimationBinding.h - Contribute:


void contribute(real localTime, real weight) const {
    value_g value = track->Interpolate(localTime);
    target->Blend(value, weight);
}



AnimationInstance.h - Animate



void AnimationInstance::Animate(SkeletonInstance *instance, real time) {

    instance->preBlending();

    for(BindList::iterator i = binds.begin(); i != binds.end(); ++i) {
        (*i)->contribute(time, weight);
    }

    instance->postBlending();
}




Testing Code:



SkeletonInstance * skel_instance = new SkeletonInstance;
SkeletonPrototype * skel_prototype = new SkeletonPrototype;
AnimationPrototype * anim_prototype = new AnimationPrototype;
AnimationInstance * anim_instance = new AnimationInstance;

Vec3AnimationTrack * track = new Vec3AnimationTrack("RightArm", FourCC::POSITION);

void Init() {

    skel_prototype->AddBone(1, 0, "RightArm", Quat(), Vector3(0,1,0), 5);
    skel_instance = skel_prototype->newInstance();

    track->AddKey(Vector3(2,2,1), 0);
    track->AddKey(Vector3(3,12,1), 1);

    anim_prototype->AddTrack(track);

    anim_instance->SetWeight(0.5f);
    anim_instance = anim_prototype->create(skel_instance);
}

real time = 0;

void Render() {

    if(time < 2) time++;
    else time = 0;

    anim_instance->Animate(skel_instance, time);

    g_Text.PrintText(5,6, "%1.1f", skel_instance->getBoneByID("RightArm")->vLocalPosition.value.y);
}




It shows the RightArm bone's local position y as 1.0, but it doesn't change when I animate it.

What can I do?

Thanks,
Kasya
The first thing I've found is that AnimQuatVariable::Blend(...) and AnimVector3Variable::Blend(...) are not correct. You add the current weight to totalWeight and then invoke a lerp with the totalWeight. That is wrong!

Let's look at an example with weight=0.3:

variable->Reset();                 => totalWeight := 0
variable->Blend( other, weight ):
    totalWeight += weight;         => totalWeight := 0.3
    value = Math::Lerp<Vector3>( value, other, totalWeight );
                                   => value := 0.7 * value + 0.3 * other

But what we want is:

variable->Reset();                 => totalWeight := 0
variable->Blend( other, weight ):
    totalWeight += weight;         => totalWeight := 0.3
    value = Math::Lerp<Vector3>( value, other, weight/totalWeight );
                                   => value := 0.0 * value + 1.0 * other

Notice the ratio "weight/totalWeight" as parameter of the lerp. Look at the stuff on the 3rd page of this thread; there we've handled it in great detail.
Hello,
I changed the totalWeight to weight/totalWeight inside the function, but it still doesn't change the value. What can I do? Maybe I need to change


return (value_g) AnimationUtils::Interpolate<value_g>(cur->value, next->value, (next->time - cur->time));



into


return (value_g) AnimationUtils::Interpolate<value_g>(cur->value, next->value, time);



Thanks,
Kasya
Next turn ;)

AnimationPrototype::KeyframeAnimationTrack<value_g, target_g>::Interpolate(...) can be made more efficient and also contains several problems as well as an error.

1st, it is inefficient to look up the left and right keyframes independently. Inside FindKey(...) you already iterate the list of keyframes and find the requested (left) keyframe, so you could step to its follower very efficiently. Instead, you use NextKey(...) and do the entire iteration again.

You could define a routine like FindKeys(Keyframe*& left, Keyframe*& right, real time) that returns both frames at once. Even better, you could integrate the functionality into the Interpolate(...) routine itself. Moreover, you can exploit temporal coherence: store the iterator itself as a member, and use its most recent value as the start position for the next request.

2nd, in the Interpolate(...) routine itself, you allocate new Keyframe objects on the fly if either FindKey(...) or NextKey(...) doesn't return a Keyframe instance. However, those allocations are never deleted, hence they remain as memory leaks!

3rd, in the Interpolate(...) routine, the interpolation weight is not calculated correctly. The interpolation weight has to be in the range [0,1] and obviously must depend on the value of time. Neither is guaranteed by your implementation.

The correct weight would be

weight = ( time - cur->time ) / ( next->time - cur->time )

This weight will be 0 if time==cur->time, and it will be 1 if time==next->time. For weight==0 the cur->value has to be returned, and for weight==1 the next->value, of course. From this I assume that

AnimationUtils::Interpolate<value_g>(cur->value, next->value, weight);

would be the correct invocation of the interpolator, but it may be the other way around (check your interpolator implementation to clarify this).


There are other issues as well, e.g. to control what happens outside the animation interval and when looping, but that is not of primary interest now (until the other stuff works fine).
Addendum; I forgot to mention this yesterday:

4th, the FindKey(...) routine uses the identity comparison == for the time moment. You have to use >= instead.
Hello,
I made the changes you suggested:



bool FindKeys(Keyframe *& left, Keyframe *& right, real time) const {

    for(KeyList::const_iterator i = keys.begin(); i != keys.end(); ++i) {

        if((*i)->time >= time) {

            left = *i;
            right = *i + 1;

            if(left && right) return true;
        }
    }

    return false;
}

template<class value_g, class target_g> value_g AnimationPrototype::KeyframeAnimationTrack<value_g, target_g>::Interpolate(real time) const {

    Keyframe * cur = NULL;
    Keyframe * next = NULL;

    if(FindKeys(cur, next, time)) {
        real weight = (time - cur->time) / (next->time - cur->time);
        return (value_g) AnimationUtils::Interpolate<value_g>(cur->value, next->value, weight);
    }

    return (value_g) Math::EmptyValue<value_g>();
}






class AnimQuatVariable {
public:
    Quat value;
    real totalWeight;

    AnimQuatVariable() : totalWeight(0) {}

    void Reset() { totalWeight = 0; }

    void Blend(Quat &other, real weight) {
        totalWeight += weight;
        value.Nlerp(value, other, (weight/totalWeight));
    }

};

class AnimVector3Variable {
public:
    Vector3 value;
    real totalWeight;

    AnimVector3Variable() : totalWeight(0) {}

    void Reset() { totalWeight = 0; }

    void Blend(Vector3 &other, real weight) {
        totalWeight += weight;
        value = Math::Lerp<Vector3>(value, other, (weight/totalWeight));
    }

};






template<class M> static M Lerp(const M &v1, const M &v2, T factor) {
    return v1 + ( v2 - v1 ) * factor;
}

class AnimationUtils {
public:

    template<class T> static T Interpolate(T &v1, T &v2, real factor) {
        return Math::Lerp<T>(v1, v2, factor);
    }

    template<> static Quat Interpolate<Quat>(Quat &q1, Quat &q2, real factor) {
        Quat q;
        q.Nlerp(q1, q2, factor);
        return q;
    }

};




But the value still doesn't change.

Thanks,
Kasya
Are you sure that *i + 1 will be interpreted as *(i+1)? I would expect it to be interpreted as (*i) + 1, which obviously would be problematic unless the iterator happens to point into an array of keyframe pointers. Hmmm.

However, it's time to check the implementation step by step, since staring at the whole bunch of code doesn't expose any secrets anymore (at least to me). ;)

(1) After adding 3 or 4 keyframes to a track, does the invocation of FindKeys(...) for several different time values return the correct keyframes? Test at least one value before the first keyframe, one between the first two, one between the 2nd and 3rd, and one after the last keyframe.

(2) If (1) is okay, does the track's Interpolate(...) routine work correctly as well? (BTW: I would detach KeyframeAnimationTrack from AnimationPrototype, making it a class of its own; but that is mainly a matter of taste.) Check this for the time values as above, and check it for both quaternions and positions.

(3) If (2) is okay, does Variable::Blend(...) behave correctly as well?

(4) ...

You should think of "unit tests". I.e. the tests shown above should not be written for a one-shot run and then be put into waste. Instead, write the tests above in a function (perhaps class) each, write a main(...) that invokes all that tests, and let it live inside your code forever. Then, whenever you have edited the animation or skeleton classes, you can run the test's main(...) and verify that the code still behaves as expected. If it doesn't run, then check whether the code or the (old) tests were erroneous, and correct the failures accordingly. This is good practise in general. Please google for "unit test" to learn more about it. There are also support tools available if you want to do the tests in a more formal manner.
1) Variable::Blend works fine.
2) AnimationUtils::Interpolate works fine (tested by putting values into left, right, and blend_factor).
3) Track::Interpolate doesn't work. The reason:


value_g Interpolate(real time) {

    if(FindKeys(cur, next, time)) {
        real weight = (time - cur->time) / (next->time - cur->time);
        return (value_g) AnimationUtils::Interpolate<value_g>(cur->value, next->value, weight);
    }
    else {
        return Math::EmptyValue<value>(); // That returns
                                          // for Vector3: Vector3(0,0,0)
                                          // for Quat:    Quat(0,0,0)
                                          // for others:  0
    }
}




Suppose we have 2 keyframes:

Key1:
    Value: Vector3(0,1,2)
    Time:  1

Key2:
    Value: Vector3(0,-1,-2)
    Time:  2

and a local time which always runs in the interval (0,2):

if(time > 2) time = 0;
time++;

Now the interpolation formula:

real weight = (time - cur->time) / (next->time - cur->time);

1) time = 0:
    no current keyframe found

2) time = 1:
    cur->time = 1
    weight = (time - cur->time) / (...)
    time == cur->time, so weight = 0

3) time = 2:
    cur->time = 2
    weight = (time - cur->time) / (...)
    time == cur->time, so weight = 0

---------------
The function FindKeys():

bool FindKeys(Keyframe *& left, Keyframe *& right, real time) const {

    for(KeyList::const_iterator i = keys.begin(); i != keys.end(); ++i) {

        if((*i)->time >= time) {

            left = (*i);

            if( (i + 1) != keys.end() ) {
                ++i;
                right = (*i);
                return true;
            }
        }
    }

    return false;
}





When (*i)->time >= time, cur->time != time. But I think cur->time = time here too. I think the problem comes from there, because that place doesn't work.

Thanks,
Kasya

P.S. I'll test that track->Interpolate thing outside of contribute to see if it works. (Forgot about that.)

EDIT:
Tested track->Interpolate outside of contribute. The value not changing is because of the target thing; I'll solve it. The only problem I've found now is that it doesn't read the keyframe at the last time value (please look at the FindKeys and Interpolate functions). Does it need to read that keyframe?
The only thing that I have found for that is:



value_g KeyframeAnimationTrack<value_g, target_g>::Interpolate(real time) const {

    Keyframe * cur = NULL;
    Keyframe * next = NULL;

    real curT = 0;
    real nextT = 0;

    value_g curV = Math::EmptyValue<value_g>();
    value_g nextV = Math::EmptyValue<value_g>();

    if(FindKeys(cur, next, time)) {

        curT = cur->time;
        nextT = next->time;

        curV = cur->value;
        nextV = next->value;
    }
    else if(cur != NULL) {

        curT = cur->time;
        curV = cur->value;
    }

    real weight = (time - curT) / (nextT - cur->time);
    return (value_g) AnimationUtils::Interpolate<value_g>(curV, nextV, weight);
}




in case we need to read the last keyframe's cur->time.
I'm not able to follow all of your thoughts, Kasya, so I may post something that is already clear to you. Sorry for that.

(1) If the time handed over to Interpolate(..., real time) matches a time stored in a keyframe, then of course the weight is computed as 0. Due to

l + ( r - l ) * w = l for w==0

the exact value of the keyframe is returned in such a case, which is correct, isn't it?

The computation

weight = (time - cur->time) / (next->time - cur->time)

is nothing else than a transformation of the time value into the interval [0,1), provided the left and right keyframes are found (_and_ are ordered left-to-right, see below), making weight a relative time w.r.t. the two involved keyframes.

(2)
Quote:
Original post by Kasya
When (*i)->time >= time, cur->time != time. But I think cur->time = time here too.

I don't understand this.

(3) I mentioned earlier that the handling of "off-range" time moments must be defined in greater detail one day, and that integrating FindKeys(...) into Interpolate(...) would have an advantage. Now that day has arrived ;)

There are at least 2 possible ways one could offer an animator for dealing with time moments before the 1st keyframe: returning a specific constant value, or returning the value of the 1st keyframe. For a time behind the last keyframe both ways are possible, too, but cycling could also be done. Very advanced animation systems deal not only with simple cycling but with "cue points", anywhere in the track. E.g. a cue point may express "jump back to local time 10 sec for the next 5 times when passing here". (I don't say that you should implement this!)
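As a sketch, the per-track choice could be as simple as an enum plus one helper (my names, purely for illustration; the keyframe type is left generic to match the templated tracks above):

enum OffRangePolicy {
    OFFRANGE_CONSTANT,   // return an animator-specified constant value
    OFFRANGE_CLAMP,      // return the value of the 1st resp. last keyframe
    OFFRANGE_CYCLE       // wrap the local time and search again instead
};

template<class key_g, class value_g>
value_g offRangeValue(OffRangePolicy policy, const key_g * edgeKey, const value_g &constant) {
    switch(policy) {
        case OFFRANGE_CONSTANT: return constant;
        case OFFRANGE_CLAMP:    return edgeKey->value;
        case OFFRANGE_CYCLE:    break;   // handled before the key search, not here
    }
    return edgeKey->value;
}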

When you extend the Track with ins and outs like those above, you'll see that it still cannot be guaranteed that a left and right keyframe are found, but it can be guaranteed that the animator can specify the behaviour over the full temporal range of the animation. If not both the left and right keyframe could be found, then the resulting value is not computed by interpolation but from a single value. Moreover, the computation of the weight depends on the method of the active "in" or "out". Hence it is IMHO better to integrate the functionality of FindKeys(...) into Interpolate(...), as said.

(4) Now the handling of the time can be outlined as follows: the global game time defines where the "playhead" is in the timeline. At a defined moment in the timeline an animation starts. The animation's local time is then computed by subtracting its start time from the global time. If the result is negative, i.e. the playhead is _before_ the animation start, then the animation isn't active yet and processing it should terminate. (Next, the local time has to be scaled to allow smooth blending, but that comes later when dealing with controlling.) The (scaled) local time is then fed into the Interpolate(...) routine. The track then subtracts its own time reference (which is initially zero but will become different when cycling/jumping is done) and uses this track-local time to search for the next pair of keyframes. If you use cue points, then you must not skip over cue points during this search, of course.
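A sketch of that pipeline (hypothetical names; the scaling hook is left as a plain factor):

bool computeLocalTime(real globalTime, real animStartTime, real playbackScale, real &localTime) {
    localTime = globalTime - animStartTime;   // playhead relative to the animation start
    if(localTime < 0) return false;           // animation not active yet -> skip it
    localTime *= playbackScale;               // scaling hook for smooth blending
    return true;                              // feed the result into Interpolate(...)
}

// Inside the track, Interpolate(...) would additionally subtract the track's
// own time reference (initially zero; changed by cycling/jumping) before
// searching for the pair of surrounding keyframes.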

(5) In a comment in your code you initialize a quaternion with (0,0,0). A quaternion has, of course, 4 values. I mention this to make sure that the quaternion you return by default is a _unit_ quaternion. It can't be seen from the (0,0,0) whether (0,0,0,0) or (1,0,0,0) is actually meant; the former value will not work well.
Hello,
Are you still using the originally posted test code?

Although if you only have a single track it won't matter (or at least you won't see the problem), to be completely accurate your Blend method would need to know the total weight before using the current track's weight in the division. Adding it up as you go changes the ratio each time a new track's weight is added.

Also, in your test code it appears you add two key values at times 0 and 1 (or 1 and 2, depending on the post), yet you increase the time value by using time++, which increments by 1 each time, skipping the in-between float values of time (try testing with keys at times 0 and 2 if you want to keep it like this).

It's also probably best to ensure the keys returned are within some valid time range regardless of the time passed, and to disregard animations by some other means: use curr->value if time < curr->time, use next->value if time > next->time; it depends on your needs, I guess.


bool FindKeys(Keyframe *& left, Keyframe *& right, real time) const
{
    if (!keys.empty())
    {
        KeyList::const_iterator curr = keys.begin();
        KeyList::const_iterator next = curr;
        for(; next != keys.end(); ++next)
        {
            if((*next)->time > time)
            {
                break;
            }
            curr = next;
        }
        if (next == keys.end())
        {
            next = curr;
        }
        left = (*curr);
        right = (*next);
        return true;
    }

    return false;
}



Be mindful of any inaccuracies; treat this more as a pseudo-code suggestion. Depending on how you choose to handle things, you may always want the true current and previous/next keys instead of them being the same key when the time is out of range.

Other than that, from the quick glance I took I don't see a direct solution to your issue. We may need a few more clarifications on how exactly you are handling things.

Quote:
Original post by Amadeus
Although if you only have a single track it won't matter (or at least you won't see the problem), to be completely accurate your Blend method would need to know the total weight before using the current track's weight in the division. Adding it up as you go changes the ratio each time a new track's weight is added.

Interestingly, everything you've said in the above citation is correct, but your conclusion that the method is incorrect is wrong (I must assume that your intention was to hint at an incorrectness of the method). It is definitely intentional that the totalWeight increases with each additional blending. Because I'm too lazy to write the proof down again, please look at the 3rd page of this thread, at my post where I explain the method in great detail, beginning with "So far you need the a-priori knowledge of all weights to work".

Quote:
Original post by Amadeus
Also, in your test code it appears you add two key values at times 0 and 1 (or 1 and 2, depending on the post), yet you increase the time value by using time++, which increments by 1 each time, skipping the in-between float values of time (try testing with keys at times 0 and 2 if you want to keep it like this).

It's also probably best to ensure the keys returned are within some valid time range regardless of the time passed, and to disregard animations by some other means: use curr->value if time < curr->time, use next->value if time > next->time; it depends on your needs, I guess.

*** Source Snippet Removed ***

The animation system is intended to be open. One issue is to break the need for simultaneous keyframes in all tracks of an animation. Another is to allow single tracks to have their own behaviour (e.g. off-range handling). Animations and their associated tracks have a defined start, but their end may be open (i.e. the end may be determined at runtime, or may last until the end of the gameplay; notice that this animation system is not restricted to driving character animation, and mechanical devices may for sure have an endless animation w.r.t. the gameplay). If e.g. jumping/cycling is allowed, the tracks will further need to deal with their own local time. Considering all this, each track has to deal with a "valid time range" of its own, but finding a valid time range on the animation level w.r.t. track blending seems senseless to me. As stated in my post above, a settings-dependent search for the valid keyframes, integrated into the Interpolate(...) routine, will solve the problem.

Quote:
Original post by Amadeus
Be mindful of any inaccuracies; treat this more as a pseudo-code suggestion. Depending on how you choose to handle things, you may always want the true current and previous/next keys instead of them being the same key when the time is out of range.

Yes, it is sensible to distinguish the different off-range situations.
