Skeletal Animation System


It'll work fine for basic types like vectors and Euler angles, but not as well for types with less separable components, such as quaternions and matrices (which also have to stay normalised).

It’s not really an issue, because by working with tracks this way you are doing what the authoring software (Maya, for example) does under its hood, which means you actually stay closer to the original animation.

In Maya, for example, you have an X, Y, and Z rotation. 3 tracks.
Each of these 3 components can be any value.

As mentioned near the end of the workflow, if you need a rotation matrix or a quaternion, you generate it from these 3 values as a post-processing step after the animations have finished updating (since these are the native values of the authoring software, you are guaranteed to have some way to convert them into matrices, quaternions, or whatever else you want to use).
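As a minimal sketch of that post-processing step for quaternions (illustration only, not code from my engine; it assumes the same X-then-Y-then-Z rotation order and that the angles are already in radians):

#include <cmath>

struct Quat { float x, y, z, w; };

// Hamilton product; the right-hand operand’s rotation is applied first.
Quat QuatMultiply( const Quat &a, const Quat &b ) {
    Quat q;
    q.x = a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y;
    q.y = a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x;
    q.z = a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w;
    q.w = a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z;
    return q;
}

// Build a quaternion from the 3 animated Euler angles: X applied first, then Y, then Z.
Quat QuatFromEulerXYZ( float fX, float fY, float fZ ) {
    Quat qX = { std::sin( fX * 0.5f ), 0.0f, 0.0f, std::cos( fX * 0.5f ) };
    Quat qY = { 0.0f, std::sin( fY * 0.5f ), 0.0f, std::cos( fY * 0.5f ) };
    Quat qZ = { 0.0f, 0.0f, std::sin( fZ * 0.5f ), std::cos( fZ * 0.5f ) };
    return QuatMultiply( qZ, QuatMultiply( qY, qX ) );
}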


It is more efficient for at least 2 reasons (using a matrix as an example, but most of this applies to quaternions as well):
  • Because the rotation is broken into 3 tracks you don’t update components of the rotation that are not actually being animated. A track that works on full matrices has to update rotation, scale, and translation all together if just one component anywhere changes.
  • A track working on full-sized matrices must interpolate from matrix to matrix (same for quaternions). In the case of matrices this means decomposing, interpolating scale (vector), rotation (quaternion), and translation (vector) separately, and recomposing the matrix. In the case of quaternions interpolation is a fairly complex function call with a branch for small angles. In both cases it is easier to simply rebuild the primitive (matrix or quaternion) after doing basic linear scalar interpolation.
Here is a practical example to illustrate how much more efficient this flow is.
As I mentioned, our joints have XYZ translations, XYZ scales, and XYZ rotations, each stored separately (they could be 3 vectors or 9 floats; if using vectors, a Track simply attaches directly to the .x, .y, or .z of any given vector, modifying only one component at a time), and the rotational part of the matrix is built by rotating around the axes in XYZ order (first the X axis, then the Y, then the Z).
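Before getting to the matrix, here is a rough sketch of what such a single-component track could look like (names are made up for illustration; this is not my engine’s actual class):

#include <vector>

struct KeyFrame {
    float fTime;     // Time of the key (seconds or frames).
    float fValue;    // Value of the one animated component at that time.
};

struct Track {
    float * pfTarget;                // Points at ROT.x, SCALE.y, TRANS.z, etc.
    std::vector<KeyFrame> vKeys;     // Sorted by fTime.

    // Write the interpolated value for _fTime into the target component.
    void Update( float _fTime ) {
        if ( vKeys.empty() ) { return; }
        if ( _fTime <= vKeys.front().fTime ) { (*pfTarget) = vKeys.front().fValue; return; }
        if ( _fTime >= vKeys.back().fTime ) { (*pfTarget) = vKeys.back().fValue; return; }
        // Find the surrounding pair of keys and linearly interpolate between them.
        for ( size_t I = 1; I < vKeys.size(); ++I ) {
            if ( _fTime <= vKeys[I].fTime ) {
                const KeyFrame & kPrev = vKeys[I-1];
                const KeyFrame & kNext = vKeys[I];
                float fFrac = (_fTime - kPrev.fTime) / (kNext.fTime - kPrev.fTime);
                (*pfTarget) = kPrev.fValue + (kNext.fValue - kPrev.fValue) * fFrac;
                return;
            }
        }
    }
};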

Constructing a rotation matrix around the X, Y, and Z axes looks as follows:

		CMatrix4x4Base<_tType, _tVector3Type, _tVector4Type> & LSE_FCALL MatrixRotationXZY( _tType _tX, _tType _tY, _tType _tZ ) {
			LSREAL fS1, fC1;
			CMathLib::SinCos( static_cast<LSREAL>(_tX), fS1, fC1 );

			LSREAL fS2, fC2;
			CMathLib::SinCos( static_cast<LSREAL>(_tY), fS2, fC2 );

			LSREAL fS3, fC3;
			CMathLib::SinCos( static_cast<LSREAL>(_tZ), fS3, fC3 );


			_11 = fC1 * fC3;
			_12 = fS1 * fS3 + fC1 * fC3 * fS2;
			_13 = fC3 * fS1 * fS2 - fC1 * fS3;
			_14 = _tType( 0.0 );

			_21 = -fS2;
			_22 = fC1 * fC2;
			_23 = fC2 * fS1;
			_24 = _tType( 0.0 );

			_31 = fC2 * fS3;
			_32 = fC1 * fS2 * fS3 - fC3 * fS1;
			_33 = fC1 * fC3 + fS1 * fS2 * fS3;
			_34 = _tType( 0.0 );

			// Zero the position.
			_41 = _tType( 0.0 );
			_42 = _tType( 0.0 );
			_43 = _tType( 0.0 );
			_44 = _tType( 1.0 );
			return (*this);
		}
Now say that in Maya you have animated only Rotation.X, from 0 to 360.

So only a single track is updating for the entire joint.
The entire workload is then as follows:
  • Update 1 track, interpolating a single float value.
  • In post-processing, after the animation has finished updating, reconstruct the matrix as shown below.

    float fRotX = DEG_TO_RAD( ROT.x );	// Can be optimized so that the tracks do this conversion only when they update values so that non-modified values are already in radians.
    float fRotY = DEG_TO_RAD( ROT.y );
    float fRotZ = DEG_TO_RAD( ROT.z );
    
    float fS1, fC1;
    CMathLib::SinCos( fRotX, fS1, fC1 );	// Calculates sin and cos in 1 instruction.
    float fS2, fC2;
    CMathLib::SinCos( fRotY, fS2, fC2 );
    float fS3, fC3;
    CMathLib::SinCos( fRotZ, fS3, fC3 );
    
    
    _11 = (fC1 * fC3) * SCALE.x;
    _12 = (fS1 * fS3 + fC1 * fC3 * fS2) * SCALE.x;
    _13 = (fC3 * fS1 * fS2 - fC1 * fS3) * SCALE.x;
    _14 = 0.0f;
    
    _21 = -fS2 * SCALE.y;
    _22 = (fC1 * fC2) * SCALE.y;
    _23 = (fC2 * fS1) * SCALE.y;
    _24 = 0.0f;
    
    _31 = (fC2 * fS3) * SCALE.z;
    _32 = (fC1 * fS2 * fS3 - fC3 * fS1) * SCALE.z;
    _33 = (fC1 * fC3 + fS1 * fS2 * fS3) * SCALE.z;
    _34 = 0.0f;
    
    
    _41 = TRANS.x;
    _42 = TRANS.y;
    _43 = TRANS.z;
    _44 = 1.0f;
This is clearly faster than interpolating full matrices.

In the end, it’s simply more efficient and it has no drawbacks related to more complex types such as matrices or quaternions. These are rebuilt as the last step of the flow.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid



KeyFrame: A single {Time, Value} pair. Can act on an AnimatedSkeleton or anything else; just give it a reference to the value it modifies and another to a dirty flag.

I was going to use KeyFrame with 3 values because I'm using 3DS Max to get my model data, and from what I could tell from the files, if I translate anything on the X axis it creates a translate channel (track) for X, Y, and Z (with zeros in the unused values).

But animating anything on screen is really great, I hadn't thought about it, thanks!

One question though: I'm supposed to have an Animation class with a vector of tracks, where each track points to the value it changes and has a vector of KeyFrames (the values at each time)... I'm loading from an FBX file and pairing it with the base Skeleton I get from the file, so the reference values for each track are those bones (like Track #1 is translate X for Bone 1), but then I copy the skeleton to a "skeletonInstance" for the animated model, and the references would still be pointing to the original "pose skeleton".

How can I correctly assign the references without making the KeyFrame / Track classes specific to the Bone/Skeleton structures?

For now I'm having each KeyFrame (changing to track in your structure) assigned to a bone index (int value, not ref), so I know which is which.

And this also means I have to change my loader... ugh!

As you may have noticed, the system I have designed is exactly how FBX works, so you can learn a lot about how animations, tracks, key frames, etc. work inside a game by looking at how the Autodesk® FBX® SDK works.

To attach a track to the correct parameter of the correct object, it needs the identifier of that object (its name tree, which is all the names of its parents in order, down to the joint’s name) and a special identifier indicating which parameter of that object the track is to modify (ComponentId). You will notice that a switch case over that identifier is how the Autodesk® FBX® SDK knows which component to attach a track to.
A switch case is sadly the only way, but it’s really no big deal.
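To make the name-tree idea concrete, here is a minimal sketch (made-up names, not FBX SDK or engine code) of resolving a name tree by walking the hierarchy one name per level:

#include <string>
#include <vector>

struct CNode {
    std::string sName;                   // This object’s name.
    std::vector<CNode *> vChildren;      // Child joints/objects.
};

// _vNames is the full path of names, root first, target joint last.
CNode * FindByNameTree( CNode * _pRoot, const std::vector<std::string> &_vNames ) {
    if ( !_pRoot || _vNames.empty() || _pRoot->sName != _vNames[0] ) { return nullptr; }
    CNode * pCur = _pRoot;
    for ( size_t I = 1; I < _vNames.size(); ++I ) {
        CNode * pNext = nullptr;
        for ( CNode * pChild : pCur->vChildren ) {
            if ( pChild->sName == _vNames[I] ) { pNext = pChild; break; }
        }
        if ( !pNext ) { return nullptr; }    // Path broken; no such object in this scene.
        pCur = pNext;
    }
    return pCur;    // The joint (or other object) the track will animate.
}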

For now I'm having each KeyFrame (changing to track in your structure) assigned to a bone index (int value, not ref), so I know which is which.

There doesn’t need to be an association between tracks and joints (or any other object that could be animated) at all.
The name tree simply allows you to look up the target object, and then the switch case finds which component the track will modify on that object; once the track has a reference or pointer to that component you are done. No other associations need to exist.

It looks like this:

The skeleton has a hierarchy of joints. Inside a scene there can be joints with the same name, so the track has a name tree (the names of all the objects from the root node down to the target joint, or any other type of object) to allow it to locate the object/joint it will animate.
Once found, that object/joint can use a virtual interface to take the track and its ComponentId and allow the object to connect the track to the proper component within itself.


object->Attach( Track &thisTrack, int CompId ) {
    switch ( CompId ) …
}

Each object has a finite number of components, so your switch cases are not gigantic and sprawling.

The ComponentId indicates that this track is meant to animate a float representing TRANSLATE.X etc. On a camera you would handle tracks meant for field-of-view, aperture, etc.

A light would accept tracks with ComponentId’s for light range, falloff, specular.r, specular.g, etc.
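And as a rough sketch of how that Attach() interface could look (these identifiers are invented for illustration; they are not from my engine or the FBX SDK, and Track here is the single-float track sketched earlier):

struct Track { float * pfTarget; /* keys, etc. */ };

enum COMPONENT_ID {
    CID_TRANSLATE_X, CID_TRANSLATE_Y, CID_TRANSLATE_Z,
    CID_ROTATE_X, CID_ROTATE_Y, CID_ROTATE_Z,
    CID_SCALE_X, CID_SCALE_Y, CID_SCALE_Z,
    CID_FOV, CID_LIGHT_RANGE,        // Etc. for cameras, lights, ...
};

class CAnimTarget {
public :
    // Every animatable object decides for itself which of its floats a track drives.
    virtual bool Attach( Track &_tTrack, COMPONENT_ID _cidComp ) = 0;
};

class CJoint : public CAnimTarget {
public :
    virtual bool Attach( Track &_tTrack, COMPONENT_ID _cidComp ) {
        switch ( _cidComp ) {
            case CID_TRANSLATE_X : { _tTrack.pfTarget = &TRANS.x; return true; }
            case CID_ROTATE_X : { _tTrack.pfTarget = &ROT.x; return true; }
            case CID_SCALE_Z : { _tTrack.pfTarget = &SCALE.z; return true; }
            // ... the remaining translate/rotate/scale components ...
            default : { return false; }    // Not a component this object has.
        }
    }

protected :
    // Assumed per-joint data, matching the earlier posts.
    struct Vec3 { float x, y, z; };
    Vec3 TRANS, ROT, SCALE;
};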

L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

I'm not using the FBX SDK, but only because I had already made an FBX loader by the time I found out about the SDK... I'm reading the ASCII FBX file, but my 3DS Max is quite old (2010), so the format might've changed a bit, or since it's not the SDK there may be some differences from what you've described, but it's nothing troublesome to adapt (like having zeros in every translate/rotate channel at each keyframe where any other value changes, even if my animation didn't use that channel).

After I found out about the SDK, since I don't plan on using the FBX files directly (I read the data and save it in my game's own file format), I didn't implement it and kept using the old loader I had.


There doesn’t need to be an association between tracks and joints (or any other object that could be animated) at all.
The name tree simply allows you to look up the target object, and then the switch case finds which component the track will modify on that object; once the track has a reference or pointer to that component you are done. No other associations need to exist.

That's a great concept. I didn't even have "names" in my scene (I was going with IDs for everything; names were purely conventional and not necessary), and with this I think it'll become easier to have a "Human Skeleton" animation and apply it to different humanoid skeletons as long as they share some/most bone names.

I've implemented everything and it looks great so far, but I just realized I'm back to a problem I was having when I posted this, which is the Update function. I was trying to come up with a structure/system that doesn't look as bad as this (current):


class PlayTrack
{
    public:
        Track* track;
        float* valueReference;
};

class PlayState
{
    public:
        void PlaySequence(wstring sequenceName);

        void Update(double updateTime);    
    private:
        Animation* _animation;

        vector<PlayTrack*> _tracks; //track instances, with the value refs assigned
        vector<Sequence*> _sequences; //an animation sequence defined by the code
        vector<double> _sequenceTimes; //the time of each sequence, I didn't make another class so I don't call "new" during the update loop
};

//In my game's main code, it looks like this:
void MyGame::Start()
{
    //Loading stuff
    Animation* animation = FBXLoader.GetAnimation("HUMAN");

    //Creating a new sequence of animation from frames 5 to 10 and naming it WALK
    //I'm using frames as time value, from the FBX I'm getting it's always set at 30 FPS but I'll convert it here depending on my update timer
    animation->AddSequence(L"WALK", 5, 10);

    Model* model = Resources.Load_Model(L"MyModel", L"MyAnimationName"); //and some other params

    //Here I'm linking this model's animation instance (PlayState) to the object it's supposed to animate
    //in this case, the model's skeleton
    Scene.Add(model);

    //It'll start playing an animation
    model->PlaySequence(L"WALK"); //will pass this to its PlayState
}

void PlayState::PlaySequence(wstring& sequenceName)
{
    //Getting the sequence from the original animation, so all models share the same sequences
    Sequence* sequence = _animation->GetSequence(sequenceName);
    if(sequence != 0)
    {
        _sequences.push_back(sequence);
        _sequenceTimes.push_back( (double) sequence->frameStart );
    }
 
    //So I'll have for now a "WALK" only animation running, starting at frame 5, going till 10
}

//Everything's set up already, now the engine call the scene to update the animations with a double updateTime
//from the fixed time step impl., where I convert the frames to whatever my fixed time step update rate is
void PlayState::Update(double updateTime)
{
    //This is where I'm not happy with how it is

    //Updating all times for the playing sequences, this is a must
    for(unsigned int i = 0; i < _sequenceTimes.size(); i++)
    {
        _sequenceTimes[i] += updateTime;
    }

    //Now for each sequence playing...
    for(unsigned int i = 0; i < _sequences.size(); i++)
    {
        //Check all tracks
        for(unsigned int j = 0; j < _tracks.size(); j++)
        {            
            double currentTime = _sequenceTimes[i];

            KeyFrame* prevFrame = 0;
            KeyFrame* nextFrame = _tracks[j]->track->keyFrames[0];
            
            //Check all its keyframes
            for(unsigned int k = 1; k < _tracks[j]->track->keyFrames.size(); k++)
            {
                prevFrame = nextFrame;
                nextFrame = _tracks[j]->track->keyFrames[k];

                if(currentTime >= prevFrame->keyFrame && currentTime <= nextFrame->keyFrame)
                {
                    //Here I got the keyFrame pair I'm looking for
                    break;
                }
            }
        }
    }
}

I can't really save the current time and a pair of keyframes per "PlayTrack" (track instance) because there might be more than one sequence playing, so I'm having to loop through so many vectors... isn't there a more elegant (i.e. smarter) solution than this?

I'm thinking of maybe moving the sequences elsewhere and making each "PlayState" able to play a single sequence, but that wouldn't change much, since all I'd be doing is moving the sequence loop to another place and copying the PlayState multiple times because of it.

Edit: More Code

Maybe I'm worried for nothing and this might not be a big issue, but I can't help but wonder "am I doing this right?", and if this is the only way, it would be reassuring to hear so I don't think too much about it.

Okay, hopefully, my last post in this topic!

Finally got it to work perfectly! I had some trouble with the bones' origin of rotation since I was using quaternions and had to adjust a few things...

In the end, I did:


class Skeleton
{
    public:
        //bunch of controls
    private:
        float _bonePositions[MAX_BONES * 3]; //Bone Local Position
        float _boneTranslations[MAX_BONES * 3]; //Bone Translate
        float _boneRotations[MAX_BONES * 4]; //Bone Rotation in Quat
};

The local position mostly never changes, but I had to use it to figure out the point of each bone to center the rotation around... and here's the shader:


#version 330 core
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexNormal;
layout(location = 3) in vec4 boneIndexes;
layout(location = 4) in vec4 boneWeights;

out vec2 UV;

uniform mat4 transformationMatrix;

uniform vec3 bonePosition[64];
uniform vec3 boneTranslation[64];
uniform vec4 boneRotation[64];

void skinVertex(in vec3 vertexPosition, in vec3 bonePosition, in vec3 boneTranslation, in vec4 boneRotation, in float boneWeight, out vec3 updatedVertexPosition)
{
    //Rotate around the bone's position instead of the model origin
    vec3 rotationOrigin = vertexPosition - bonePosition;

    //t = 2 * (q.xyz x v), first half of the "v + q.w*t + q.xyz x t" quaternion rotation trick
    vec3 cross;
    cross.x = (boneRotation.y * rotationOrigin.z - rotationOrigin.y * boneRotation.z) * 2;
    cross.y = (boneRotation.z * rotationOrigin.x - rotationOrigin.z * boneRotation.x) * 2;
    cross.z = (boneRotation.x * rotationOrigin.y - rotationOrigin.x * boneRotation.y) * 2;

    //q.xyz x t
    vec3 crossQuat;
    crossQuat.x = boneRotation.y * cross.z - cross.y * boneRotation.z;
    crossQuat.y = boneRotation.z * cross.x - cross.z * boneRotation.x;
    crossQuat.z = boneRotation.x * cross.y - cross.x * boneRotation.y;

    //q.w * t
    cross.x = cross.x * boneRotation.w;
    cross.y = cross.y * boneRotation.w;
    cross.z = cross.z * boneRotation.w;

    //Accumulate this bone's weighted contribution (rotation delta + translation)
    updatedVertexPosition = vertexPosition + ((cross + crossQuat + boneTranslation) * boneWeight);
}

void main()
{
    vec3 skinnedVertex = vertexPosition;

    skinVertex(skinnedVertex, bonePosition[int(boneIndexes.x)], boneTranslation[int(boneIndexes.x)], boneRotation[int(boneIndexes.x)], boneWeights.x, skinnedVertex);
    skinVertex(skinnedVertex, bonePosition[int(boneIndexes.y)], boneTranslation[int(boneIndexes.y)], boneRotation[int(boneIndexes.y)], boneWeights.y, skinnedVertex);
    skinVertex(skinnedVertex, bonePosition[int(boneIndexes.z)], boneTranslation[int(boneIndexes.z)], boneRotation[int(boneIndexes.z)], boneWeights.z, skinnedVertex);	
    skinVertex(skinnedVertex, bonePosition[int(boneIndexes.w)], boneTranslation[int(boneIndexes.w)], boneRotation[int(boneIndexes.w)], boneWeights.w, skinnedVertex);

    gl_Position = transformationMatrix * vec4(skinnedVertex, 1);
	
    UV = vertexUV;
}
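In case it helps, the arrays from the Skeleton class get uploaded into those uniform arrays every frame with something along these lines (not my exact code; the UploadToShader function is just to show the idea):

void Skeleton::UploadToShader(GLuint shaderProgram)
{
    //The program must be bound before glUniform* affects it
    glUseProgram(shaderProgram);

    //MAX_BONES matches the 64-element arrays declared in the shader
    //(the uniform locations could be cached instead of queried every frame)
    glUniform3fv(glGetUniformLocation(shaderProgram, "bonePosition"), MAX_BONES, _bonePositions);
    glUniform3fv(glGetUniformLocation(shaderProgram, "boneTranslation"), MAX_BONES, _boneTranslations);
    glUniform4fv(glGetUniformLocation(shaderProgram, "boneRotation"), MAX_BONES, _boneRotations);
}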

And it's working!

[Screenshot: the animation working.]

Many many many thanks to everyone, that's been a huge help!
