
Animation Blending



#41 haegarr   Crossbones+   -  Reputation: 3237


Posted 18 August 2008 - 04:53 AM

@ godmodder

Addendum

Now I think I got it :) You speak of weights to disable joints, since your DCC package can only export animations of full-featured skeletons. That is entirely separate from weights for blending and/or layering. In such a case I would use flags at the joints, and destroy the flagged joints on import. You can of course use externally generated masks, too. You could further keep the joints in question and ignore them later during processing, but that seems a waste to me.


#42 RobTheBloke   Crossbones+   -  Reputation: 2225


Posted 18 August 2008 - 06:53 AM

Quote:
Original post by haegarr
An example would be aiming with a gun while walking. Walking, done alone, means animating the entire body, i.e. including the swinging of the arms. When layering the aiming animation on top of it, swinging of the arm that holds the gun should be suppressed.


Yeah, I think we are talking at cross purposes here. We use a node-based system (google "morpheme connect" to find out more), where an anim is a node, and so is a blend operation. We can then provide bone masks to limit evaluation on a given connection. That gives us the ability to reuse entire blend networks - not just limit the result of a single anim - we can limit the result of an entire set of blended anims.

Quote:
Original post by haegarr
Masking is a binary decision, while weighting is (more or less) continuous.


I never said it was binary - there's no reason why you can't blend the resulting transforms via a weight - or a set of weights for each track if needed. I.e. think of an anim as a single node. We can ask (via a mask) for a set of TMs to be evaluated, and the result can pass on to the next operation. We can also set a second mask to get the rest of the transforms. The result is exactly the same amount of work as evaluating the entire anim - but the results can be fed into separate networks and blended with IK or physics as needed. So if you request the results of the upper body, you can blend that with an aim anim - or an aim IK modifier. As I said before, we have over 20 ways of blending anims, because there is no single blending op that does everything you need.

Quote:
I don't weight individual channels, i.e. the modifier of an individual position or orientation, but the entirety of channels of a particular _active_ animation (I use "animation" as the modification of a couple of bones).


What happens when you have a locomotion blend system set up for basic movement, but you wish to make modifications to specific bones? For example, the character is walking (via blended anims), aiming his gun (via IK or an anim), and has just been shot in the arm (with the reaction handled in physics)? You can't simply blend the physics with the animation data for a pretty obvious reason - the physics has to modify the result of the animations.

So in your case, the aim would be done first as a layer, then what? You need to get the current walk pose from a set of blended anims, and then add in the physics? But then you can't do that, because the physics needs higher priority? But then the physics can't have a higher priority since it needs the results of the blended animations?

Quote:
And masking would need more information on the active animation tracks than weighting needs. This is because weighting works without knowledge of the other animations, while masks must be built over all (or at least all active) animations.


Who says the masks have to be associated with *each* animation? There are N masks for a given network - you determine how big N is - and then reuse them when and where needed. Given that a 32-bit unsigned can store masks for 32 anim tracks, the size of the mask is not that big. Ultimately you can then provide a blend-channels node if needed.
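As a rough illustration of the 32-bit mask idea (the names below are invented for this sketch, not morpheme's actual API):

```cpp
#include <cassert>
#include <cstdint>

// A single 32-bit word as a per-bone evaluation mask (assumes <= 32 bones).
using BoneMask = std::uint32_t;

constexpr BoneMask maskBone(unsigned boneIndex) { return 1u << boneIndex; }

inline bool isEnabled(BoneMask m, unsigned boneIndex) {
    return (m >> boneIndex) & 1u;
}

// Example masks shared by the whole network: say bones 0..15 are the lower
// body and bones 16..31 the upper body. Complementary masks let two
// sub-networks split the work of one full evaluation between them.
constexpr BoneMask LOWER_BODY = 0x0000FFFFu;
constexpr BoneMask UPPER_BODY = 0xFFFF0000u;
```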

Quote:
For example: I use Milkshape 3D to import my animation data, but there's no direct support for any kind of parameters like that.


Using DCC file formats for game anims isn't a good idea - for the reasons you've noticed. I'd recommend using your own custom file format that does have the data you need - add some form of asset converter/compiler and be as hacky as you want there. If you do happen to change DCC packages, it's trivial to extend the asset converter.

Quote:
If we still speak of the weights of layers: There is simply no sense in pre-defining them, and hence no need for DCC package support. It is a decision of the runtime system. The runtime system decides from the gamer's input and the state of the game object that pulling out a gun and aiming at a target is to be animated.


I disagree - you just need a DCC package and ingame format that's better suited to the job ;)

Quote:
You can of course use externally generated masks, too. You could further keep the joints in question and ignore them later during processing, but that seems a waste to me.


Not at all. In some cases it would be a waste, but if you have a way of describing all possible blend combinations, you can determine which tracks on which anims can be discarded as a pre-processing step (i.e. the asset-converter stage). But then again, the system I'm describing is 100% data driven, so part of the asset is the description of the blending system for a character.

#43 haegarr   Crossbones+   -  Reputation: 3237


Posted 18 August 2008 - 07:40 AM

@ RobTheBloke

Due to the very different animation systems, we've been talking at cross purposes for sure. Even our understanding of masking wasn't the same.

I don't doubt that my animation system, whichever approach I choose, can never compete with a professional solution ... but that doesn't mean I shouldn't try ;) So you got me curious about morpheme, but unfortunately the information available at naturalmotion.com seems rather sparse. I'll have to try to find free tutorials or something similar ...

#44 haegarr   Crossbones+   -  Reputation: 3237


Posted 18 August 2008 - 08:50 AM

Quote:
Original post by RobTheBloke
What happens when you have a locomotion blend system set up for basic movement, but you wish to make modifications to specific bones? For example, the character is walking (via blended anims), aiming his gun (via IK or an anim), and has just been shot in the arm (with the reaction handled in physics)? You can't simply blend the physics with the animation data for a pretty obvious reason - the physics has to modify the result of the animations.

So in your case, the aim would be done first as a layer, then what? You need to get the current walk pose from a set of blended anims, and then add in the physics? But then you can't do that, because the physics needs higher priority? But then the physics can't have a higher priority since it needs the results of the blended animations?

Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as goal.

However, IK is no problem since it is an "active" animation, and as such can be foreseen by the artist. The physics based influence you've described above is, say, "passive" instead. I think you're right when saying that it cannot be implemented as an animation in an animation system like mine (at least not without modification). Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.

But the weakness you've pointed out is perhaps a flawed integration of the systems. The morpheme solution seems to allow an intertwining (hopefully the correct word) where I have a strict separation. Intertwining may obviously allow better control by the designer. That is certainly worth thinking about ...

#45 Gasim   Members   -  Reputation: 207


Posted 18 August 2008 - 03:37 PM

Quote:
Original post by haegarr

Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as goal.



I want to create an animation system and a physical ragdoll. But I've read that a ragdoll needs to be FK. Suppose I use an animation system like:

 

class CKeyframe {
public:
    float time;
    float value;
};

enum AnimationTrackType {
    TRANSLATEX = 0,
    TRANSLATEY,
    TRANSLATEZ,
    ROTATEX,
    ROTATEY,
    ROTATEZ,
    //And lots of other stuff
};

class CAnimationTrack {
public:
    int trackID; //To Find Track By ID
    int boneID;
    AnimationTrackType trackType;
    std::vector<CKeyframe *> keys;
};

class CAnimation {
public:
    std::vector<CAnimationTrack *> tracks;
    char animation_name[32];
    float startTime, endTime;

    CAnimation();
    ~CAnimation();

    void Animate(CSkeleton *skel);

    void AddTrack(int trackId, AnimationTrackType trackType);
    void AddKeyframeToTrack(int trackId, float time, float value);
    void SetName(char name[32]);

    CAnimationTrack *FindTrackById(int trackID);
    CKeyframe *FindKeyframeInTrack(int trackID, float time);

    static void Blend(CKeyframe *outKeyframe, CKeyframe *keyPrev, CKeyframe *keyCurrent, float blend_factor);
};





Can I use FK? And can I create a ragdoll? If yes, I'll write my animation system.
If no, how do I need to write it?

Thanks,
Kasya

[Edited by - Kasya on August 18, 2008 10:37:15 PM]

#46 haegarr   Crossbones+   -  Reputation: 3237


Posted 18 August 2008 - 08:21 PM

All structures with local co-ordinate frames are inherently FK enabled. AFAIK skeletons are always implemented with local frames. Even if more than a single bone chain can be used, they are still bone _chains_. I.e. a joint's position and orientation denote the axes of a local frame w.r.t. its parental bone. So, as long as you don't set all joint parameters skeleton-globally or even world-globally from the animations, you automatically have FK working.
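A minimal sketch of that point, reduced to 1D offsets instead of full transforms (a real skeleton would accumulate 4x4 matrices or position/quaternion pairs the same way):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Each joint stores only a position relative to its parent's frame; the
// world position falls out of accumulating all parent offsets - that
// accumulation IS forward kinematics.
struct Joint {
    int parent;        // -1 for the root
    float localOffset; // position relative to the parent frame
};

float worldPosition(const std::vector<Joint>& skel, int joint) {
    float pos = 0.0f;
    for (int j = joint; j != -1; j = skel[j].parent)
        pos += skel[j].localOffset; // child frames inherit parent motion
    return pos;
}
```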

#47 Gasim   Members   -  Reputation: 207


Posted 18 August 2008 - 09:16 PM

What does that mean? If it's in local co-ordinates, how can I transform it? If I transform it, that means I'm making world co-ordinates. That means I first need to use FK and then transform.

I forgot to ask one question:
What is FK for? For physics or something else?

Thanks,
Kasya

#48 haegarr   Crossbones+   -  Reputation: 3237


Posted 18 August 2008 - 09:31 PM

Any co-ordinate is meaningful if and only if you also know the co-ordinate system (or frame) to which the co-ordinate is related. Meshes are normally defined w.r.t. a local frame, i.e. their vertex positions and normals and so on are to be interpreted in that frame.

When you transform the co-ordinates but still interpret the result relative to the same frame, you've actually changed the positions and normals and so on. When you look at the mesh in the world, you'll see it translated or rotated or whatever.

On the other hand, if you interpret the frame as a transformation, apply it to the mesh, and (important!) interpret the result w.r.t. the _parental_ frame, then you've changed the co-ordinates but in the world the mesh is at the same location and shows also the same orientation!

Hence you have to distinguish for what purpose you apply a transformation: Do you want to re-locate and/or re-orientate the mesh, or do you want to change the reference system only? A local-to-global transformation is for changing the reference system only, nothing else.
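That distinction can be made concrete with a deliberately tiny sketch, frames reduced to a 1D translation (the helper names are invented for illustration):

```cpp
#include <cassert>

// A frame reduced to a single translation for brevity.
struct Frame { float origin; };

// World position of a point given in a frame's local co-ordinates.
float worldOf(float pLocal, const Frame& f) { return f.origin + pLocal; }

// Re-locating: change the co-ordinate but keep interpreting it in the SAME
// frame -> the point really moves in the world.
float relocate(float pLocal, float delta) { return pLocal + delta; }

// Changing the reference system: express the same point w.r.t. the parent
// frame -> the co-ordinate changes, the world position does not.
float localToParent(float pLocal, const Frame& f) { return pLocal + f.origin; }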


FK means that co-ordinate frames are related together, and transforming a parental frame has an effect on the child frames, but changing a child frame has no effect on the parental frame. IK, on the other hand, means that changing a child frame _has_ an effect on the parental frame.

#49 RobTheBloke   Crossbones+   -  Reputation: 2225


Posted 18 August 2008 - 11:07 PM

Quote:
So you got me curious about morpheme, but unfortunately the information available at naturalmotion.com seems rather sparse. I'll have to try to find free tutorials or something similar ...

There isn't much info about it on the internet. Unlike endorphin, there's no learning edition, so no tutorials are available.

Quote:
Original post by haegarr
Well, IK by itself is no problem, since animations must not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time based modifiers. An IK based animation simply interpolates with the IK pose as goal.


IK gives you a rigid, fixed solution - by blending that result with animation you can improve the visual look - i.e. overlay small motions caused by breathing etc.

Quote:
Original post by haegarr
However, IK is no problem since it is an "active" animation, and as such can be foreseen by the artist.


Unless the IK solution is running in response to game input - i.e. aiming at a moving point. The only way the animator can foresee that is to have tools that interact with the engine - more or less what morpheme is designed to do, really.

Quote:
Original post by haegarr
Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.


yup - but we need more fidelity and control than that would allow.

Quote:
The morpheme solution seems to allow an intertwining (hopefully the correct word) where I have a strict separation.


More or less what it was designed to do. Most of the stuff we do is AI-driven animation, utilising physics and other bits and bobs (you can probably find some euphoria demos around relating to GTA and Star Wars) - it just so happens we need a good way to combine all of those things - hence morpheme.

Quote:
Can I use FK? And can I create a ragdoll? If yes, I'll write my animation system.
If no, how do I need to write it?


That's a pretty good starting point. I'd probably suggest trying two small, simple demos first though - the anim system, and a basic physics ragdoll. It's then a bit easier to see how to merge the two solutions into one all-singing, all-dancing system.

Quote:
What does that mean? If it's in local co-ordinates, how can I transform it? If I transform it, that means I'm making world co-ordinates. That means I first need to use FK and then transform.

A bit old, but it might give you a few ideas. A lot of the code can be improved fairly drastically, but I'll leave that as an exercise for the reader ;)

Quote:

FK means that co-ordinate frames are related together, and transforming a parental frame has an effect on the child frames, but changing a child frame has no effect on the parental frame. IK, on the other hand, means that changing a child frame _has_ an effect on the parental frame.


Not quite. FK means you transform a series of parented transforms until you get an end pose. IK means you start with an end pose (the effector), and use CCD or the Jacobian to determine the positions/orientations of those bones. FK is just a standard hierarchical animation system.
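To make the CCD remark concrete, here is a toy 2D CCD solver for a planar bone chain - a sketch for illustration only, not production code (real solvers add joint limits, damping, and early-out on convergence):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Planar chain: relative joint angles (radians) plus fixed bone lengths.
struct Chain {
    std::vector<float> angle;
    std::vector<float> length;
};

// Forward kinematics: world position of the end effector.
void endEffector(const Chain& c, float& x, float& y) {
    float a = 0.0f; x = 0.0f; y = 0.0f;
    for (size_t i = 0; i < c.angle.size(); ++i) {
        a += c.angle[i];
        x += c.length[i] * std::cos(a);
        y += c.length[i] * std::sin(a);
    }
}

// One CCD pass: from the last joint back to the root, rotate each joint so
// the effector swings toward the target. Iterating this converges for
// reachable targets.
void ccdStep(Chain& c, float tx, float ty) {
    for (int j = (int)c.angle.size() - 1; j >= 0; --j) {
        // world position of joint j (FK over the bones before it)
        float a = 0.0f, jx = 0.0f, jy = 0.0f;
        for (int i = 0; i < j; ++i) {
            a += c.angle[i];
            jx += c.length[i] * std::cos(a);
            jy += c.length[i] * std::sin(a);
        }
        float ex, ey;
        endEffector(c, ex, ey);
        float curA = std::atan2(ey - jy, ex - jx);  // joint -> effector
        float tgtA = std::atan2(ty - jy, tx - jx);  // joint -> target
        c.angle[j] += tgtA - curA;                  // rotate toward target
    }
}
```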

#50 haegarr   Crossbones+   -  Reputation: 3237


Posted 18 August 2008 - 11:40 PM

Quote:
Original post by RobTheBloke
Quote:
Original post by haegarr
Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as goal.

IK gives you a rigid fixed solution - by blending that result with animation you can improve the visual look - i.e. overlay small motions caused by breathing etc.

Seconded - and since IK controlled animation is an animation, it can be blended with others.

Quote:
Original post by RobTheBloke
Quote:
Original post by haegarr
However, IK is no problem since it is an "active" animation, and as such can be foreseen by the artist.

Unless the IK solution is running in response to game input - i.e. aiming at a moving point. The only way the animator can foresee that is to have tools that interact with the engine - more or less what morpheme is designed to do, really.

That the artist cannot _precompute_ the animation is clear - else one could use key-framing instead of IK driven animation. However, he can foresee the need for IK driven aiming, and hence integrate scripts/nodes/tracks or however that stuff is implemented, so the system has something available that can be used for blending at all.

Quote:
Original post by RobTheBloke
Quote:
Original post by haegarr
Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.

yup - but we need more fidelity and control than that would allow.

I understand that. See the remaining text in the original post.

#51 Gasim   Members   -  Reputation: 207


Posted 22 August 2008 - 07:21 PM

Hello,
That's my latest updated code:

Animation Components:



class CKeyframe {
public:
    float time;
    float value;
};

enum CAnimationTrackType {
    TRANSLATEX = 0,
    TRANSLATEY,
    TRANSLATEZ,
    ROTATEX,
    ROTATEY,
    ROTATEZ
};

class CAnimationTrack {
public:
    unsigned int trackID;
    unsigned int nodeID;
    CAnimationTrackType trackType;
    std::vector<CKeyframe *> keys;
};





Animation Class:

class CAnimation {
protected:
    std::vector<CAnimationTrack *> m_Tracks;
    float startTime, endTime;
    char name[32];

public:
    CAnimation();
    ~CAnimation();

    CAnimationTrack * FindTrackByID(unsigned int trackID);
    CKeyframe * FindKeyframeByTime(unsigned int trackID, float keyframe_time);

    void AddTrack(unsigned int trackID, unsigned int nodeID, CAnimationTrackType trackType);
    void AddKeyframe(unsigned int trackID, float value, float time);
    void SetTime(float start_time, float end_time);

    CAnimationTrack * operator[](unsigned int index);
    CAnimationTrack * AnimationTrack(unsigned int index);

    unsigned int NumAnimationTracks();
};





CBlend.cpp


void CBlend::Blend(CVector3 &vPos, CQuat &qRotate, CKeyframe *curKf, CKeyframe *prevKf, float blend_factor, CAnimationTrackType trackType) {

    CVector3 vTmpPrev, vTmpCur;
    CQuat qTmpPrev, qTmpCur;

    if(trackType == TRANSLATEX) {
        vTmpPrev.x = prevKf->value;
        vTmpCur.x = curKf->value;
    }

    if(trackType == TRANSLATEY) {
        vTmpPrev.y = prevKf->value;
        vTmpCur.y = curKf->value;
    }

    if(trackType == TRANSLATEZ) {
        vTmpPrev.z = prevKf->value;
        vTmpCur.z = curKf->value;
    }

    if(trackType == ROTATEZ) {
        qTmpPrev.SetAxis(prevKf->value, 1, 0, 0);
        qTmpCur.SetAxis(curKf->value, 1, 0, 0);
    }

    if(trackType == ROTATEZ) {
        qTmpPrev.SetAxis(prevKf->value, 0, 1, 0);
        qTmpCur.SetAxis(curKf->value, 0, 1, 0);
    }

    if(trackType == ROTATEZ) {
        qTmpPrev.SetAxis(prevKf->value, 0, 0, 1);
        qTmpCur.SetAxis(curKf->value, 0, 0, 1);
    }

    vPos = vTmpCur * (1.0f - blend_factor) + vTmpPrev * blend_factor;
    qRotate.Slerp(qTmpCur, qTmpPrev, blend_factor);
}

void CBlend::Blend(CAnimation *pFinalAnim, CAnimation *pCurAnimation, CAnimation *pNextAnimation, float blend_factor) {

    CVector3 vTmp;
    CQuat qTmp;

    for(unsigned int i = 0; i < pCurAnimation->NumAnimationTracks(); i++) {
        for(unsigned int n = 0; n < pNextAnimation->NumAnimationTracks(); n++) {

            if(pCurAnimation->AnimationTrack(i)->nodeID == pNextAnimation->AnimationTrack(n)->nodeID && pCurAnimation->AnimationTrack(i)->trackType == pNextAnimation->AnimationTrack(n)->trackType) {

                for(unsigned int j = 0; j < pCurAnimation->AnimationTrack(i)->keys.size(); j++) {
                    for(unsigned int k = 0; k < pNextAnimation->AnimationTrack(n)->keys.size(); k++) {

                        Blend(vTmp, qTmp, pCurAnimation->AnimationTrack(i)->keys[j], pNextAnimation->AnimationTrack(n)->keys[k], blend_factor, pNextAnimation->AnimationTrack(n)->trackType);

                        for(unsigned int v = 0; v < pFinalAnim->NumAnimationTracks(); v++) {

                            if(pFinalAnim->AnimationTrack(v)->nodeID == pCurAnimation->AnimationTrack(i)->nodeID && pFinalAnim->AnimationTrack(v)->trackType == pCurAnimation->AnimationTrack(i)->trackType) {

                                //What to write here??

                            }
                        }
                    }
                }
            }
        }
    }
}





Animate Skeleton:


void CSkeletalAnimation::Animate(CSkeleton *pFinalSkel) {

    float time = g_Timer.GetSeconds();

    //Haven't written loop yet. :)

    for(unsigned int i = 0; i < m_Tracks.size(); i++) {

        unsigned int uiFrame = 0;
        CBone * bone = pFinalSkel->FindBone(m_Tracks[i]->nodeID);
        CAnimationTrackType trackType = m_Tracks[i]->trackType;
        CVector3 vTmp;
        CQuat qTmp;

        while(uiFrame < m_Tracks[i]->keys.size() && m_Tracks[i]->keys[uiFrame]->time < time) uiFrame++;

        if(uiFrame == 0) {

            if(trackType == TRANSLATEX) {
                vTmp.x = m_Tracks[i]->keys[0]->value;
            }
            if(trackType == TRANSLATEY) {
                vTmp.y = m_Tracks[i]->keys[0]->value;
            }
            if(trackType == TRANSLATEZ) {
                vTmp.z = m_Tracks[i]->keys[0]->value;
            }
            if(trackType == ROTATEX) {
                qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 1, 0, 0);
            }
            if(trackType == ROTATEY) {
                qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 0, 1, 0);
            }
            if(trackType == ROTATEZ) {
                qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 0, 0, 1);
            }
        }

        if(uiFrame == m_Tracks[i]->keys.size()) {

            if(trackType == TRANSLATEX) {
                vTmp.x = m_Tracks[i]->keys[uiFrame-1]->value;
            }
            if(trackType == TRANSLATEY) {
                vTmp.y = m_Tracks[i]->keys[uiFrame-1]->value;
            }
            if(trackType == TRANSLATEZ) {
                vTmp.z = m_Tracks[i]->keys[uiFrame-1]->value;
            }
            if(trackType == ROTATEX) {
                qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 1, 0, 0);
            }
            if(trackType == ROTATEY) {
                qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 0, 1, 0);
            }
            if(trackType == ROTATEZ) {
                qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 0, 0, 1);
            }
        }

        else {

            CKeyframe *prevKf = m_Tracks[i]->keys[uiFrame-1];
            CKeyframe *curKf = m_Tracks[i]->keys[uiFrame];

            float delta = curKf->time - prevKf->time;
            float blend = (time - prevKf->time) / delta;

            CBlend::Blend(vTmp, qTmp, curKf, prevKf, blend, trackType);
        }

        bone->qRotate = bone->qRotate * qTmp;
        bone->vPos = bone->vPos * vTmp;
    }
}




Is that code right? And please answer my question inside CBlend::Blend.

Thanks,
Kasya

P.S. Timer is temporary i'll change it anyway. :)

#52 haegarr   Crossbones+   -  Reputation: 3237


Posted 22 August 2008 - 09:59 PM

IMHO there are several problems within the code.

(1) You test for ROTATEZ three times, but never for ROTATEX or ROTATEY, in the CBlend::Blend for 2 key-frames.

(2) The implementation of the same CBlend::Blend as above is very inefficient. My main grumble is that the trackType is exactly 1 of the possible values, but you always process all of them. Consider at least choosing a structure like

// either a rotation ...
if( trackType>=ROTATEX ) {
    CQuat qTmpPrev, qTmpCur;
    if( trackType==ROTATEX ) {
        qTmpPrev.SetAxis(prevKf->value, 1, 0, 0);
        qTmpCur.SetAxis(curKf->value, 1, 0, 0);
    }
    else if( trackType==ROTATEY ) {
        /// insert appropriate code here
    }
    else {
        /// insert appropriate code here
    }
    qRotate.Slerp(qTmpCur, qTmpPrev, blend_factor);
}
// ... or else a translation ...
else {
    /// insert appropriate code here, similarly to the part above
}

As you can see, this snippet tries to avoid senseless computations. You can use a switch statement for the inner decisions, of course. (Yes, you can say that this is a kind of premature optimization, if you like. But choosing another implementation here has an impact on the invoking code, so changing it later can IMO introduce more problems than expected so far.)

(3) I would not implement CBlend::Blend of 2 animations the way you've done. Your way is appropriate for blending 2 animations, but not necessarily for blending more than 2. Instead of blending animations pairwise, I would use the skeleton instance as an accumulator for the blending. I.e. preset the skeleton instance when entering the blending group, process each animation of the group without knowledge of the other animations, and post-process the skeleton instance when exiting the group.


I'm not sure what kind of animation system you're trying to implement, so I cannot really evaluate the code at the higher logical levels (strictly speaking this is already relevant for point (3) above). You may consider telling us exactly what your goals are, so we can discuss them.

#53 Gasim   Members   -  Reputation: 207


Posted 22 August 2008 - 10:26 PM

Hello,

Is there a problem inside
CSkeletalAnimation::Animate(CSkeleton *pFinalSkel), apart from the time?

And that's my new blending:


void CBlend::Blend(CVector3 &vPos, CQuat &qRotate, CKeyframe *curKf, CKeyframe *prevKf, float blend_factor, CAnimationTrackType trackType) {

    if(trackType >= TRANSLATEX && trackType <= TRANSLATEZ) {

        CVector3 vTmpPrev, vTmpCur;

        if(trackType == TRANSLATEX) {
            vTmpPrev.x = prevKf->value;
            vTmpCur.x = curKf->value;
        }
        else if(trackType == TRANSLATEY) {
            vTmpPrev.y = prevKf->value;
            vTmpCur.y = curKf->value;
        }
        else if(trackType == TRANSLATEZ) {
            vTmpPrev.z = prevKf->value;
            vTmpCur.z = curKf->value;
        }

        vPos = vTmpCur * (1.0f - blend_factor) + vTmpPrev * blend_factor;
    }

    else if(trackType >= ROTATEX && trackType <= ROTATEZ) {

        CQuat qTmpPrev, qTmpCur;

        if(trackType == ROTATEX) {
            qTmpPrev.SetAxis(prevKf->value, 1, 0, 0);
            qTmpCur.SetAxis(curKf->value, 1, 0, 0);
        }
        else if(trackType == ROTATEY) {
            qTmpPrev.SetAxis(prevKf->value, 0, 1, 0);
            qTmpCur.SetAxis(curKf->value, 0, 1, 0);
        }
        else if(trackType == ROTATEZ) {
            qTmpPrev.SetAxis(prevKf->value, 0, 0, 1);
            qTmpCur.SetAxis(curKf->value, 0, 0, 1);
        }

        qRotate.Nlerp(qTmpCur, qTmpPrev, blend_factor);
    }
}

void CBlend::Blend(CSkeleton *pFinalSkel, CSkeleton *pLastSkel, CSkeleton *pNextSkel, float blend_factor) {

    for(unsigned int i = 0; i < pLastSkel->NumBones(); i++) {

        pFinalSkel->bones[i]->vPos = pLastSkel->bones[i]->vPos * (1.0f - blend_factor) + pNextSkel->bones[i]->vPos * blend_factor;
        pFinalSkel->bones[i]->qRotate.Nlerp(pLastSkel->bones[i]->qRotate, pNextSkel->bones[i]->qRotate, blend_factor);
    }
}




Thanks,
Kasya

#54 haegarr   Crossbones+   -  Reputation: 3237


Posted 22 August 2008 - 11:44 PM

Okay; although the implementation of CBlend::Blend for 2 key-frames is still far from optimal, it is good enough until the animation system works as expected. I only hint here at the potential for optimization and urge you to come back to this topic at the appropriate time.

In CSkeletalAnimation::Animate there is IMO an "else" missing. The structure should probably be

if(uiFrame == 0) {
    ...
}
else if(uiFrame == m_Tracks[i]->keys.size()) { // <-- notice the else at the beginning
    ...
}
else {
    ...
}

Next, vTmp is a CVector3, and I assume vPos to be one, too. You are multiplying the two in CSkeletalAnimation::Animate
bone->vPos = bone->vPos * vTmp;
but that isn't the correct operation. If you don't want to do animation blending, then addition would be the correct operation. If, on the other hand, you want to do animation blending, then blending would be the correct operation. In the latter case bone->qRotate must be blended as well, of course.
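As a tiny sketch of the two correct options, reduced to 1D floats (the helper names are invented for illustration):

```cpp
#include <cassert>

// Plain track accumulation: the track's contribution is ADDED to the bone.
float applyAdditive(float bonePos, float trackValue) {
    return bonePos + trackValue;
}

// Cross-fade blending: the bone is BLENDED toward the track's value by a
// weight in [0,1]; the rotation would be nlerp'd/slerp'd analogously.
float applyBlend(float bonePos, float trackValue, float weight) {
    return bonePos * (1.0f - weight) + trackValue * weight;
}
```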

I can't tell you about a working structure if you don't tell us what you want to do. So please:
(a) Do you want to integrate animation blending?
(b) Are the tracks of a particular animation unique w.r.t. the affected bone attribute, or else is blending already necessary at this level? (I would suggest the former.)
(c) What about layering? What kind of layer blending do you prefer, if any?

#55 Gasim   Members   -  Reputation: 207


Posted 23 August 2008 - 12:07 AM

Hello,

(a) I want to use animation blending.
(b) The tracks are animating bones.
(c) I have no layering and no animation groups. I want to do that too, but after blending.

I changed lots of things in the CSkeletalAnimation::Animate(CSkeleton *pFinalSkel) function.



void CSkeletalAnimation::Animate(CSkeleton *pFinalSkel) {

    float time = g_Timer.GetSeconds() * 0.01f;
    static float lastTime = startTime;
    time += lastTime;
    lastTime = time;

    char t[32];
    sprintf(t, "%f", time);
    SetWindowText(GetHWND(), t);

    if(time >= endTime) {
        if(loop) {
            lastTime = startTime;
            time = startTime;
        }
    }

    for(unsigned int i = 0; i < m_Tracks.size(); i++) {

        unsigned int uiFrame = 0;
        CBone * bone = pFinalSkel->FindBone(m_Tracks[i]->nodeID);
        CAnimationTrackType trackType = m_Tracks[i]->trackType;
        CVector3 vTmp;
        CQuat qTmp;

        while(uiFrame < m_Tracks[i]->keys.size() && m_Tracks[i]->keys[uiFrame]->time < time) uiFrame++;

        if(uiFrame == 0) {

            if(trackType >= TRANSLATEX && trackType <= TRANSLATEZ) {

                if(trackType == TRANSLATEX) {
                    vTmp.x = m_Tracks[i]->keys[0]->value;
                }
                else if(trackType == TRANSLATEY) {
                    vTmp.y = m_Tracks[i]->keys[0]->value;
                }
                else if(trackType == TRANSLATEZ) {
                    vTmp.z = m_Tracks[i]->keys[0]->value;
                }
            }

            else if(trackType >= ROTATEX && trackType <= ROTATEZ) {

                if(trackType == ROTATEX) {
                    qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 1, 0, 0);
                }
                else if(trackType == ROTATEY) {
                    qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 0, 1, 0);
                }
                else if(trackType == ROTATEZ) {
                    qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 0, 0, 1);
                }
            }
        }

        else if(uiFrame == m_Tracks[i]->keys.size()) {

            if(trackType >= TRANSLATEX && trackType <= TRANSLATEZ) {

                if(trackType == TRANSLATEX) {
                    vTmp.x = m_Tracks[i]->keys[uiFrame-1]->value;
                }
                else if(trackType == TRANSLATEY) {
                    vTmp.y = m_Tracks[i]->keys[uiFrame-1]->value;
                }
                else if(trackType == TRANSLATEZ) {
                    vTmp.z = m_Tracks[i]->keys[uiFrame-1]->value;
                }
            }

            else if(trackType >= ROTATEX && trackType <= ROTATEZ) {

                if(trackType == ROTATEX) {
                    qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 1, 0, 0);
                }
                else if(trackType == ROTATEY) {
                    qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 0, 1, 0);
                }
                else if(trackType == ROTATEZ) {
                    qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 0, 0, 1);
                }
            }
        }

        else {

            CKeyframe *prevKf = m_Tracks[i]->keys[uiFrame-1];
            CKeyframe *curKf = m_Tracks[i]->keys[uiFrame];

            float delta = curKf->time - prevKf->time;
            float blend = (time - prevKf->time) / delta;

            CBlend::Blend(vTmp, qTmp, curKf, prevKf, blend, trackType);
        }

        bone->qRotate = bone->qRotate * qTmp;
        bone->vPos += vTmp;
    }
}




I only have a time problem, but I'm getting rid of it.

Thanks,
Kasya


#56 haegarr   Crossbones+   -  Reputation: 3237


Posted 23 August 2008 - 01:08 AM

Quote:
Original post by Kasya
(a) I want to use animation blending.
(b) The tracks are animating bones.
(c) I have no layering and no animation groups. I want to do that too, but after blending.

Okay. Answer (b) isn't complete w.r.t. my question, but I assume trackIDs are unique per animation until you explicitly contradict that. I further assume that more than 2 animations should be able to be blended.

Coming to the timing, I suggest the following:

How is g_Timer.GetSeconds() advanced? Is it a timer provided by the OS? As mentioned earlier, the time used should be frozen for the current video frame. Due to this behaviour, I would expect the current time to be handed over, as in
void CSkeletalAnimation::Animate( CSkeleton *pFinalSkel, float currentTime ) ...
instead of being fetched inside that routine.

Next, what is lastTime good for? Especially declaring it as static is probably a bad idea. I suggest something like this:

CSkeletalAnimation::CSkeletalAnimation( float startTime, bool loop )
: m_startTime( startTime ),
  m_loop( loop ) ...

void CSkeletalAnimation::Animate( CSkeleton *pFinalSkel, float currentTime ) {
    // relating current time to animation start
    currentTime -= m_startTime;
    // handling looping if necessary ...
    if( currentTime>=m_duration && m_loop ) {
        do {
            currentTime -= m_duration;
            m_startTime += m_duration;
        } while( currentTime>=m_duration );
    }
    // pre-processing the skeleton
    // (nothing to do yet)
    // iterating tracks
    for( unsigned int i=0; i<m_Tracks.size(); ++i ) {
        m_Tracks[i]->Contribute( pFinalSkel, currentTime );
    }
    // post-processing the skeleton
    // (nothing to do yet)
}


What do you think about that? Notice that the animation doesn't care here that tracks are built of key-frames.
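For illustration, here is a minimal, self-contained sketch of what "the animation doesn't care" means: the key-frame look-up and interpolation live entirely inside the track, behind a single sampling call. The names (Key, KeyframeTrack, sample) and the simplification to a scalar value are hypothetical, not the thread's actual classes.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A key-frame pair: time stamp and (simplified, scalar) value.
struct Key { float time; float value; };

// The animation would only ever call sample()/contribute(); it never sees
// that this particular track happens to be built of key-frames.
struct KeyframeTrack {
    std::vector<Key> keys;   // assumed non-empty and sorted by time

    float sample(float t) const {
        // clamp outside the key range
        if (t <= keys.front().time) return keys.front().value;
        if (t >= keys.back().time)  return keys.back().value;
        // find the first key at or after t
        std::size_t i = 1;
        while (keys[i].time < t) ++i;
        // linearly interpolate between the surrounding keys
        const Key &a = keys[i - 1], &b = keys[i];
        float blend = (t - a.time) / (b.time - a.time);
        return a.value * (1.0f - blend) + b.value * blend;
    }
};
```

A procedural track (e.g. a noise or physics-driven track) could offer the same sample() interface without any key-frames at all, which is exactly the OOP point being made here.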

#57 Gasim   Members   -  Reputation: 207


Posted 23 August 2008 - 07:24 AM

Hello,

What do you mean by

Quote:

Notice that the animation doesn't care here that tracks are built of key-frames.


?

Does mine care? Where?

Isn't the implementation of m_Tracks[i]->Contribute(pFinalSkel, currentTime) like mine, just moved inside the function?

And for blending more than one animation, I need to do something like this:



CSkeleton finalSkel;
CSkeleton *pFinalSkel = &finalSkel; // must point at a valid skeleton, not be uninitialized
CSkeleton tempSkel1, tempSkel2, tempSkel3;
CSkeleton tempFinal;

m_Animation[0].Animate(&tempSkel1, currentTime);
m_Animation[1].Animate(&tempSkel2, currentTime);
m_Animation[2].Animate(&tempSkel3, currentTime);

CBlend::Blend(&tempFinal, &tempSkel1, &tempSkel2, blend_factor);
CBlend::Blend(pFinalSkel, &tempFinal, &tempSkel3, blend_factor);




Thanks,
Kasya





#58 haegarr   Crossbones+   -  Reputation: 3237


Posted 24 August 2008 - 12:25 AM

Quote:
Original post by Kasya
What do you mean by

Quote:

Notice that the animation doesn't care here that tracks are built of key-frames.


?

Does mine care? Where?

Your CSkeletalAnimation::Animate method executes the entire logic of the tracks. Hence it also performs the look-up for the surrounding key-frames of each track. So yes, your solution requires the tracks to be key-frame tracks. That is not an error; it is not even a real problem if you stick with key-frame tracks only. But it is away from a clean OOP solution, IMHO, and you'll get into trouble if you later decide to allow other kinds of tracks. At the very least, externalizing that functionality slims down CSkeletalAnimation::Animate.

Quote:
Original post by Kasya
Isn't the implementation of m_Tracks[i]->Contribute(pFinalSkel, currentTime) like mine, just moved inside the function?

Mostly.

Quote:
Original post by Kasya
And for blending more than one animation, I need to do something like this:
...

Looking at your code snippet shows me a hard-coded handling of an unknown number of animations. Well, I assume you didn't really mean that but gave it as an equivalent example, did you? However, the real problems lie elsewhere.

First, look at the total weightings. What you suggest is something like
f1 := p2 * w12 + p1 * ( 1 - w12 )
f2 := p3 * w23 + f1 * ( 1 - w23 )
== p3 * w23 + ( p2 * w12 + p1 * ( 1 - w12 ) ) * ( 1 - w23 )
== p3 * w23 + p2 * w12 * ( 1 - w23 ) + p1 * ( 1 - w12 ) * ( 1 - w23 )
Do you notice the double weighting of p2 and p1? If you don't take countermeasures, animations incorporated at the beginning are more and more suppressed due to their multiplied weights.
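A tiny numeric check makes the suppression visible (illustrative values, not from the thread): cascading two pairwise blends with factor 0.5 each leaves p1 and p2 with an effective share of only 0.25 apiece, while p3 gets 0.5 instead of everyone getting 1/3.

```cpp
#include <cassert>
#include <cmath>

// Plain linear interpolation.
float lerp(float a, float b, float t) { return a * (1.0f - t) + b * t; }

// The naive cascade described above: f1 = blend(p1, p2), f2 = blend(f1, p3).
// p1 ends up weighted by (1 - w12) * (1 - w23), p2 by w12 * (1 - w23).
float cascade(float p1, float p2, float p3, float w12, float w23) {
    float f1 = lerp(p1, p2, w12);
    return lerp(f1, p3, w23);
}
```

Feeding a "probe" value of 1 into one input at a time (and 0 into the others) reads off each pose's effective weight directly: 0.25, 0.25, 0.5 for w12 = w23 = 0.5.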

The 2nd problem is that if you don't have complete animations w.r.t. the number of tracks (i.e. not all skeleton attributes are animated), then you have to set the untouched attributes to the bind pose afterwards. So, at the end of processing all animations, you need to detect whether or not an attribute has been touched.

Both problems can be handled by keeping a sum of weights for each attribute. This has an additional advantage: it allows the weights to be independent; it is sufficient that each weight is greater than 0 (an animation with weight 0 plays no role by definition, and negative weights are disallowed anyway).

Suppose an animation has a track that influences attribute f. Since this is the first animation doing so, the value returned by the track is used as is, but the weight is remembered:
s = w1
f = p1
If no other animation has a track bound to the same attribute, then nothing more happens. If, on the other hand, another animation has a track bound to that attribute, then a blending happens but with an adapted weight:
s += w2
f = blend( f, p2, w2 / s )
So what happens? The adapted weight is
w2 / s == w2 / ( w1 + w2 )
so that the blending is actually computed as
p1 * ( 1 - w2 / ( w1 + w2 ) ) + p2 * w2 / ( w1 + w2 )
== p1 * w1 / ( w1 + w2 ) + p2 * w2 / ( w1 + w2 )
Assuming a 3rd animation track comes into play, then again
s += w3
f = blend( f, p3, w3 / s )
resulting in
p1 * w1 / ( w1 + w2 + w3 ) + p2 * w2 / ( w1 + w2 + w3 ) + p3 * w3 / ( w1 + w2 + w3 )

You see that the weights get normalized automatically, and each track has an influence with just the weight defined by its animation rather than a mix of weights of various animations!

Moreover, when the animations have all been processed, you can inspect the sum of weights and determine whether _any_ track had influence; if not, set the value of the attribute to that of the bind pose.
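This running-weight-sum scheme can be sketched for a single scalar attribute as follows (a minimal sketch; BlendAccumulator and its member names are illustrative, not the thread's actual classes):

```cpp
#include <cassert>
#include <cmath>

// Folds animation contributions in one at a time, so only a single animation
// needs to be known at any moment. The final value equals
// sum(p_i * w_i) / sum(w_i) regardless of the order of contributions.
struct BlendAccumulator {
    float value = 0.0f;   // blended attribute value so far
    float sum   = 0.0f;   // sum of weights folded in so far

    void contribute(float p, float w) {
        if (w <= 0.0f) return;          // weight 0 plays no role by definition
        sum += w;
        float t = w / sum;              // adapted weight: w_i / (w_1 + ... + w_i)
        value = value * (1.0f - t) + p * t;
    }

    // after all animations: untouched attributes should fall back to the bind pose
    bool touched() const { return sum > 0.0f; }
};
```

For example, contributions (p=1, w=1), (p=3, w=1), (p=6, w=2) yield (1 + 3 + 12) / 4 = 4, exactly the normalized weighted average, with no double weighting of the early contributions.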

#59 haegarr   Crossbones+   -  Reputation: 3237


Posted 24 August 2008 - 01:02 AM

Addendum:

Notice please how the above scheme of blending animations fits perfectly with the track->contribute approach. This is because the scheme blends animations one by one, so it only needs to know a single animation at a time. In other words, the blending can be done by the animation (or its track, in this case) itself. That is an advantage from the implementation's point of view.

To do so, you need to pass the animation's weight down to the track, of course, like so:

// iterating the tracks (again: over all of them, not size()-1)
for( unsigned int i = 0; i < m_Tracks.size(); ++i ) {
    m_Tracks[i]->contribute( pFinalSkel, currentTime, m_weight );
}



And you can see why the comments "pre-processing" and "post-processing" in one of my previous posts are useful...

#60 Gasim   Members   -  Reputation: 207


Posted 24 August 2008 - 08:35 AM

Hello,

You said

Quote:

the blending can be done by the animation (or its track, in this case) itself


and used


m_Tracks[i]->contribute( pFinalSkel, currentTime, m_weight );



That means I need to calculate the weight. But when it's inside the keyframe code like


CKeyframe *prevKf = m_Tracks[i]->keys[uiFrame-1];
CKeyframe *curKf = m_Tracks[i]->keys[uiFrame];

float delta = curKf->time - prevKf->time;
float blend = (time - prevKf->time) / delta;

CBlend::Blend(vTmp, qTmp, curKf, prevKf, blend, trackType);




I calculated the weight in float blend;.

How can I calculate it before the m_Tracks[i]->contribute loop?

Thanks,
Kasya



