Problems with Skeletal animation!
So I have spent the last month or so trying to figure out how skeletal animation works. I have read tutorials etc. I have learned a lot - still it seems I don't have that last bit of knowledge to get a breakthrough!
I'm coding in Java and my designer is using MilkShape, but he saves his work as b3d (Blitz3D), a format he believes is the best. So in the future, when I get this right, I will build a parser/loader for that format. We are planning to build an online MMORPG. Well, enough of my reasons for asking this question!
I have built a writer that dumps all the good stuff in a b3d file to the console, with an illustration of the hierarchy of the nodes as well. The vertex pool is dumped to a txt file. This way I have a good visual view of how such a chunk-based 3D file is built.
Now I know how to build a 4x4 matrix from a quaternion (rotation) and a vector (translation). In Java it's quite simple to use the Matrix4f class, which simply takes a quaternion and a vector as its arguments. Transforming vertices is not a problem either - that is also simple in Java: matrix.transform(vector, targetVector). Multiplying matrices shouldn't be a problem either - but which one to concatenate with which -> I'M LOST!
My designer has been kind enough to supply me with ultra-simple animation examples I can practice on. One such example I will use in this thread. Imagine an example consisting of only 3 joints (it could be an arm): joint1 could be a shoulder, joint2 the elbow and joint3 would be the wrist. Animation occurs only in joint2 (the elbow). The rotation in joint2 causes the triangle on joint3 to move.
So here is the ultra simple example I would like some help with:
joint1 O---------------------Ojoint2-----------------O joint3
joint1 info - frame1: matJ1_F1 - (no animation).
joint2 info - frame1: matJ2_F1 - frame10: matJ2_F10 - frame20: matJ2_F20
joint3 info - frame1: matJ3_F1 - (no animation).
The triangle attached to joint3 has 3 vertices; the total mesh consists of these 3 vertices - let's call them ver1, ver2 and ver3.
So this is a simple example - still it has it all!
"mat" stands for matrix, of course.
------------
My question is as follows: could someone be so kind as to set up some pseudo-code showing how I calculate the new positions of the vertices in each keyframe (1, 10, 20), and also how I calculate the matrix that does the transforming for each keyframe? Well, I hope the question is clear, because I really need to understand this!
Please help...
Well, I don't know that I have the time to write out a specific example based on the info you gave, but I can help give you an overview of skinning and point you towards some resources I found invaluable. (Just got a working implementation going myself...)
First off, I'm assuming that you're working with a mesh that is in its "bind pose", with only one position per vertex, yes? (Read the NVIDIA paper below if you're confused.) If that's the case then skinning is pretty easy conceptually.
The biggest "problem" that one needs to overcome is the fact that you need to get the vertices in question to rotate around an arbitrary "pivot point" for the bone, but matrix math tends to like to rotate everything around the origin. To get around this, we actually need to keep two arrays of "bones" (or matrices). One is the bind pose, and the second is the skeleton in the pose that you want. For usability reasons these should both be stored with positions/rotations relative to their parents, but when it comes time to animate you need to find the "world space" matrix for each bone.
This is done by simply starting at the root joint, generating the matrix for it, and then for each of its children multiplying their matrix by their parent's matrix, and so on and so forth recursively. When you're done each bone should have an associated world space matrix. We do this for both the bind pose and the animated pose.
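In Java (since that's what you're working in) that recursive pass might look something like the sketch below - all the names here (Bone, local, world, computeWorld) are just made up for illustration, and it assumes the column-vector convention that javax.vecmath uses (translation in the last column), so a child's world matrix is parentWorld * local:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the recursive world-matrix pass. Matrices are 4x4 row-major
// float[16] in the column-vector convention, so world = parentWorld * local.
public class SkeletonPass {
    static class Bone {
        float[] local = identity();   // relative to the parent bone
        float[] world = identity();   // filled in by computeWorld
        List<Bone> children = new ArrayList<>();
    }

    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }

    // Standard 4x4 multiply: out = a * b
    static float[] mul(float[] a, float[] b) {
        float[] out = new float[16];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                for (int k = 0; k < 4; k++)
                    out[r * 4 + c] += a[r * 4 + k] * b[k * 4 + c];
        return out;
    }

    // Walk the hierarchy from the root: each bone's world matrix is its
    // parent's world matrix times its own local matrix.
    static void computeWorld(Bone bone, float[] parentWorld) {
        bone.world = mul(parentWorld, bone.local);
        for (Bone child : bone.children) computeWorld(child, bone.world);
    }
}
```

The bind-pose pass and the animated pass are the exact same walk - you just feed it different local matrices.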
Now, to get the correct vertex transforms that we want, we need to multiply each animated bone's matrix by the INVERSE of the matrix for that bone in its bind pose. This will nicely sort out all of the odd little transformation issues, and give us a single, concise matrix to multiply our points by. It's worth noting that inverting a matrix is not a trivial calculation, but since the bind pose never changes we only have to calculate it once. The animated pose, of course, you'll want to recalculate on a per-frame basis, but it's really not that expensive.
Anyways, once you have your final array of transform matrices you simply multiply each vertex by the matrix for the bone that it's associated with. If you're doing "smooth skinning" and each vertex is associated with more than one bone, you simply combine them as such:

vertex.finalPos = Vec3(0.0f, 0.0f, 0.0f);
for (int i = 0; i < vertex.numWeights; i++)
{
    vertex.finalPos += (vertex.pos * bones[vertex.boneIndex[i]].matrix) * vertex.boneWeight[i];
}

Handily enough, to transform the normals of your mesh correctly you simply multiply them by the top 3x3 part (rotations only) of the same set of matrices! Oh, and you should probably normalize them when you're done.
I hope that helps, but I know that I have a tendency to ramble, so if I made no sense I highly suggest reading through NVIDIA's great paper on the subject: http://developer.nvidia.com/object/skinning.html
Thank you so much for taking the time to reply to my post. I have had so many people try to help - all had in common that they had not gotten a working implementation done themselves. Seems to be a tough topic! Also, it seems that people who fully understand this are a scarce commodity.
-----
Yes, you assumed correctly: 1 position per vertex, and it's in its "bind pose".
Also, I should note that my designer does not work with more than 1 bone per vertex, so all vertices have weight = 1.0! Which is good, because all I want right now is to get a "simple" implementation working... later I can add to it!
Also, you are right: I do get a "main" position and rotation for each bone, apart from the bone's keyframe/animation rotations and positions. I guess that's the info I need to create the "bind pose" transformation matrix. Often I see that this info equals the keyframe 1 info, so that the "bind pose" transformation matrix equals the keyframe 1 transformation matrix. I guess there's no reason to speculate about why this is...
I'm in doubt about something. When you talk about matrices, do you always mean a 4x4 matrix with both translation and rotation? Or do I in some cases have to leave out the translation (position), for example?
My next question has to do with the construction of a "world matrix", as you call it. You say that I start with the root-joint matrix and work recursively down to the bone for which I'm building a world matrix. I'm not clear about this, since AxB doesn't equal BxA when multiplying matrices. So let me get this right. I like examples (as you have already seen =):
If A is the root joint, B is A's child joint and C is B's child!
So if I want to calculate C's world matrix (no matter if it's the bind-pose world matrix or some animation world matrix), I should do this,
given that the matrices on the left-hand side of the equation are local matrices:
(AxB)xC = C (world matrix)
Is this correct?
[Edited by - ASL on August 25, 2005 6:50:59 AM]
Quote:given that the matrices on the left-hand side of the equation are local-matrices.
(AxB)xC = C (world-matrix)
Exactly!
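Just to make the convention explicit (this little check is mine, not something from your files): with column-vector matrices (v' = M * v, translation in the last column, which is what javax.vecmath uses), (AxB)xC applied to a point gives the same result as applying C's local transform, then B's, then A's one at a time. With a row-vector library (translation in the bottom row) the order flips to CxBxA - whichever convention you pick, just stay consistent.

```java
// Throwaway numeric check that (A*B)*C composes correctly under the
// column-vector convention. Matrices are 4x4 row-major float[16] with
// the translation in the last column; translate() is a demo helper.
public class OrderCheck {
    static float[] translate(float x, float y, float z) {
        return new float[] {1,0,0,x, 0,1,0,y, 0,0,1,z, 0,0,0,1};
    }

    // Standard 4x4 multiply: out = a * b
    static float[] mul(float[] a, float[] b) {
        float[] out = new float[16];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                for (int k = 0; k < 4; k++)
                    out[r * 4 + c] += a[r * 4 + k] * b[k * 4 + c];
        return out;
    }

    // Column-vector transform: out = m * v
    static float[] transform(float[] m, float[] v) {
        float[] out = new float[4];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                out[r] += m[r * 4 + c] * v[c];
        return out;
    }
}
```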
Quote:When you talk about matrices, do you always mean a 4x4 matrix with both translation and rotation? Or do I in some cases have to leave out the translation (position), for example?
In general you're always going to be dealing with a 4x4 (translation/rotation) matrix. There is one exception to this, and that's when you are transforming your normals, in which case you use the upper 3x3 part of the original 4x4 matrix. For example, if your 4x4 world matrix is:
1 0 0 0
0 1 0 0
0 0 1 0
5 7 2 1
(So obviously this is a translation-only matrix.) Then you would transform the normal for that point by:
1 0 0
0 1 0
0 0 1
Which in this case is just the identity matrix, so the normal isn't going to change.
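In code, extracting the 3x3 and transforming a normal might look like this sketch (helper names invented). For pure rotation + translation matrices the plain 3x3 is correct; if you ever add non-uniform scale you'd want the inverse-transpose instead:

```java
// Sketch: transform a normal by the rotation part of a 4x4 matrix.
// The 4x4 is row-major float[16]; normals are float[3].
public class NormalTransform {
    // Pull the upper-left 3x3 (rotation-only) block out of the 4x4.
    static float[] upper3x3(float[] m4) {
        float[] m3 = new float[9];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                m3[r * 3 + c] = m4[r * 4 + c];
        return m3;
    }

    // Multiply the normal by the 3x3 and renormalize the result.
    static float[] transformNormal(float[] m3, float[] n) {
        float[] out = new float[3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                out[r] += m3[r * 3 + c] * n[c];
        float len = (float) Math.sqrt(out[0] * out[0] + out[1] * out[1] + out[2] * out[2]);
        for (int i = 0; i < 3; i++) out[i] /= len;
        return out;
    }
}
```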
Quote:Also, I should note that my designer does not work with more than 1 bone per vertex, so all vertices have weight = 1.0!
Cool! Less work for you! In this case you simply multiply each vertex position by the matrix for the associated bone, and you're done!
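With one bone per vertex the whole skinning loop collapses to something like this sketch (names invented; vertices as homogeneous float[4] with w = 1, column-vector convention):

```java
// Sketch of rigid (one-bone, weight 1.0) skinning: transform each
// bind-pose vertex by the final matrix of the bone it is attached to.
// finalMatrices[b] is assumed to already be animWorld[b] * inverse(bindWorld[b]).
public class SimpleSkin {
    static float[][] skin(float[][] bindVerts, int[] boneIndex, float[][] finalMatrices) {
        float[][] out = new float[bindVerts.length][4];
        for (int i = 0; i < bindVerts.length; i++) {
            float[] m = finalMatrices[boneIndex[i]]; // this vertex's bone
            for (int r = 0; r < 4; r++)
                for (int c = 0; c < 4; c++)
                    out[i][r] += m[r * 4 + c] * bindVerts[i][c];
        }
        return out;
    }
}
```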
You're right in that this is a subject that's pretty sparse on information, so I'm glad that I can help! Let me know if you have any more questions!
Thank you again. This is excellent!!
(If I knew how to quote I would do it=))
Quote:
To get around this, we actually need to keep two arrays of "bones" (or matricies). One is the bind pose, and the second is the skeleton in the pose that you want. For usability reasons these should both be stored with relative positions/rotations to their parents,...
Relative to their parents? (Can there be more than one?) Isn't the rotation and translation that's given already relative to its parent? Should I store it as a matrix or as a quat and a vector? I don't get what you mean here - please explain!
Quote:
Now, to get the correct vertex transforms that we want, we need to multiply each animated bone's matrix by the INVERSE of the matrix for that bone in it's bind pose.
Is this:
C(worldmatrix) * C(bind-pose)-inverse = C(ready to use on vertices)
Quote:
In general you're always going to be dealing with a 4x4 (Translation/Rotation Matrix). There is one exception to this, and that's when you are transforming your normals, in which case you use the upper 3x3 matrix of the original 4x4 matrix.
I understand what you say, and I know how to extract the 3x3 rotation matrix from a 4x4 transformation matrix. I have to use the 3x3 matrix to transform normals. This is clear too. BUT... where do I use normals? It seems that my designer doesn't use normals, as they don't come in the files I get from him. I do see that the file format supports normals, but that flag is not set. If it doesn't matter for getting a simple implementation working, you can just skip this part.
To quote someone use the [ quote ] and [ /quote ] tags (without spaces). A full listing of tags is in the Forum FAQs.
Quote:Relative to their parents? (Can there be more than one?) Isn't the rotation and translation that's given already relative to its parent? Should I store it as a matrix or as a quat and a vector? I don't get what you mean here - please explain!
I'm sorry, I'm not always the best at technical explanations. First off: no, bones cannot have more than one parent (although they can have more than one child). Second: yes, the translation and rotation given in your files will most likely be relative already, but you want to make sure that this is the case before you jump into the animation code. Also, I prefer to store the bones as a quat and a vector, simply because it saves space, but you could store them as straight matrices as well. It's a personal preference.
Quote:Is this: C(worldmatrix) * C(bind-pose)-inverse = C(ready to use on vertices)
Yes. But in order to prevent confusion: the inverse is an operator that is applied to the bind-pose matrix, not something subtracted from it. The Java matrix library has an invert operation in place already: matrix.invert()
So, your code would actually look something like this:
finalMatrix = animMatrix * bindMatrix.inverse();
I would not recommend doing the inverse every time like this, though, since it IS an expensive operation. Calculate it as part of the load process.
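For what it's worth, since these bone matrices are pure rotation + translation, their inverse can also be computed directly (transpose the 3x3 block and rotate-negate the translation) instead of running a general 4x4 invert - handy for that load-time step. A rough sketch with made-up names, column-vector convention:

```java
// Sketch: cheap inverse for a rigid 4x4 (rotation + translation only),
// column-vector convention: inverse = [R^T | -R^T * t].
public class FinalMatrix {
    static float[] invertRigid(float[] m) {
        float[] inv = new float[16];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                inv[r * 4 + c] = m[c * 4 + r];                   // R^T
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                inv[r * 4 + 3] -= inv[r * 4 + c] * m[c * 4 + 3]; // -R^T * t
        inv[15] = 1f;
        return inv;
    }

    // finalMatrix = animWorld * inverse(bindWorld); the inverse is done
    // once at load time, the multiply once per bone per frame.
    static float[] finalMatrix(float[] animWorld, float[] invBindWorld) {
        float[] out = new float[16];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                for (int k = 0; k < 4; k++)
                    out[r * 4 + c] += animWorld[r * 4 + k] * invBindWorld[k * 4 + c];
        return out;
    }
}
```

A nice sanity check: when the animated pose equals the bind pose, the final matrix comes out as the identity, so the mesh stays put.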
Quote:where do I use normals? It seems that my designer doesn't use normals, as they don't come in the files I get from him. I do see that the file format supports normals, but that flag is not set. If it doesn't matter for getting a simple implementation working, you can just skip this part.
Normals, as you may or may not know, are used mainly for calculating shading from lights. You will probably need them eventually, but if your designer is not providing you with them at the moment then don't worry about it. They are not required for skinning, and it's actually a little easier and faster without them. I'm simply providing the information here for when you inevitably do get to that point.
Great!
Let me see if I get this right. In my model, when I load it, every bone/joint/node should contain the following:
A "bind pose" matrix.
An inverted "bind pose" matrix.
A local matrix for each frame (after interpolation) - these matrices give rotation and position relative to the parent (the parent's matrix for the same frame).
A "world" matrix for each frame, based on multiplication of the local matrices from the root down to this bone.
A "final" matrix for each frame, based on multiplication of the "world" matrix and the inverted "bind pose" matrix.
That's quite a lot of matrices jammed into one bone - is this really right? And how much of it do I calculate when loading the model - and what do I calculate at run time (during actual animation)?
----
And yes, I'm aware of all the goodies in the javax.vecmath package.. =)
new Matrix4f(Quat4f q, Vector3f v, float scale);
Matrix4f.invert(); <--- though I didn't know it was so expensive. Now I do!
Matrix4f.mul(Matrix4f m);
Matrix4f.get(Matrix3f m);
and so on...
----
I can't thank you enough for all this help! You have improved my mood by a factor of 100000.... =)
Quote:I can't thank you enough for all this help!
No problem! I've been helped out enough on odd little things in the past; I figure I owe it to the community as a whole to give some of that back!
Sounds like you've got most of it down pat. I just wanted to make one more note: you only need to store the inverted bind pose. I calculate these once when I first load the model, then discard the original bind-pose matrices. Also, once that is calculated you can discard all local information for the bind pose.
For the actual animation, you need to...
- Calculate the "animated" skeleton matrices for the current time. This usually involves interpolating between two frames and/or blending between multiple animations (running + shooting). For each animation frame you should only be storing the quat/translation pairs, and building the matrices in a separate array on a per-frame basis.
- Multiply each bone in the animated skeleton by its inverse-bind-pose counterpart (which should be pre-calculated). Depending on how you're doing your data management, you may just want to store this "final" matrix in the animated matrix array.
- Transform each vertex in the mesh by its counterpart in the final matrix array.
By clever re-use of memory and only storing what's absolutely necessary, you can cut down on the amount of storage space required considerably.
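One concrete example of that re-use in Java (a sketch with invented names): keep a single scratch array of final matrices per skeleton and overwrite it each frame, rather than allocating a fresh pose every time - per-frame allocation is exactly the kind of thing that makes the Java garbage collector stutter.

```java
import java.util.Arrays;

// Sketch: a reusable scratch buffer for the per-frame final matrices.
// fill() overwrites it in place with final[b] = animWorld[b] * invBind[b].
public class PosePool {
    final float[][] scratch; // one 4x4 (float[16]) per bone, reused each frame

    PosePool(int boneCount) {
        scratch = new float[boneCount][16];
    }

    float[][] fill(float[][] animWorld, float[][] invBind) {
        for (int b = 0; b < scratch.length; b++) {
            Arrays.fill(scratch[b], 0f);
            for (int r = 0; r < 4; r++)
                for (int c = 0; c < 4; c++)
                    for (int k = 0; k < 4; k++)
                        scratch[b][r * 4 + c] += animWorld[b][r * 4 + k] * invBind[b][k * 4 + c];
        }
        return scratch; // same array every call, no per-frame garbage
    }
}
```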
Closing in... =)
Now that I have grown comfortable with all this new information, I have been reading your first post in my thread again. I found something that I forgot during this info raid: I thought that I had to build the inverted "bind pose" based on the local "bind pose" matrix. Now I see that you wrote that I have to base it on the world-space "bind pose" matrices, built just like the animation matrices!
And when I have this inverted world "bind pose" matrix, I burn all bridges and trash everything else that has to do with the bind pose.
While reading the thread again, a new question has arisen:
Do I use the SLERP function for interpolation? And what do I base the interpolation upon -> the local matrix, the world-animation matrix or the final matrix? If I have to base the interpolation upon one of the last two matrices, do I then have to extract the quaternion from the matrix, so I can use it in SLERP?
From your last post I take it that saving as much memory as possible is the way to go. Don't I pay for this with a low frame rate?
So I need the following in the model class:
1 array of inverted world-space "bind pose" matrices - 1 for each bone.
1 array of world-space "animation" matrices - 1 for each bone - updated from frame to frame.
I don't store final matrices - they could be created as part of the transform-vertices method, just before I transform.
I have to store the local matrices for each keyframe as well (in the shape of matrices or quat/pos pairs). I guess I will make a joint class within the model class that contains this info.
I feel like I'm ready to test this on some simple cases before I build the loader - so if you have the slightest feeling that I'm "off-road", please correct me.. =)
Sorry for the long delay in the reply. Busy day...
Quote:Do I use the SLERP function for interpolation? And what do I base the interpolation upon -> the local matrix, the world-animation matrix or the final matrix?
This is actually highly dependent on the exact effect you want, but the general case is that you want to interpolate the base quat/translation pair (the translation linearly, and the quat with SLERP), then convert the interpolated local bones into matrices, and so on...
The biggest reason for this would be combining animations: let's say you're combining a running and a shooting animation. The running animation is going to primarily affect the legs, and the shooting animation primarily the arms. If you combine these two after you've calculated the world coordinates for each animation, the legs will be bobbing up and down like mad while the guy's torso glides along like it's on a rail. It would look rather weird >_<. If you perform all the matrix calculation AFTER the animation information has been merged, though, all the appropriate hierarchical positioning will take effect and your animations will look much more natural.
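A sketch of what that per-channel interpolation might look like (quaternions stored as (x, y, z, w); the names are invented, not from any particular engine):

```java
// Sketch: interpolate a keyframe pair - translation linearly, rotation
// with SLERP. The result would then be converted to a local matrix
// before the world-matrix pass.
public class KeyframeInterp {
    // Plain linear interpolation, used for the translation channel.
    static float[] lerp(float[] a, float[] b, float t) {
        float[] out = new float[a.length];
        for (int i = 0; i < a.length; i++) out[i] = a[i] + (b[i] - a[i]) * t;
        return out;
    }

    // Spherical linear interpolation between two unit quaternions.
    static float[] slerp(float[] q0, float[] q1, float t) {
        float dot = q0[0]*q1[0] + q0[1]*q1[1] + q0[2]*q1[2] + q0[3]*q1[3];
        float[] q1c = q1.clone();
        if (dot < 0f) {              // take the short way around
            dot = -dot;
            for (int i = 0; i < 4; i++) q1c[i] = -q1c[i];
        }
        if (dot > 0.9995f) {         // nearly parallel: lerp and renormalize
            float[] out = lerp(q0, q1c, t);
            float len = (float) Math.sqrt(out[0]*out[0] + out[1]*out[1]
                                        + out[2]*out[2] + out[3]*out[3]);
            for (int i = 0; i < 4; i++) out[i] /= len;
            return out;
        }
        float theta = (float) Math.acos(dot);
        float s = (float) Math.sin(theta);
        float w0 = (float) Math.sin((1f - t) * theta) / s;
        float w1 = (float) Math.sin(t * theta) / s;
        float[] out = new float[4];
        for (int i = 0; i < 4; i++) out[i] = w0 * q0[i] + w1 * q1c[i];
        return out;
    }
}
```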
Quote:From your last post I take it that saving as much memory as possible is the way to go. Don't I pay for this with a low frame rate?
For me this is true; for you it may not be. Different projects have different priorities, and in mine memory management is key. You do lose a little bit of performance this way, but in the end I found it to be an acceptable tradeoff. Your mileage may vary.
Quote:So I need the following in the model-class...
This is difficult to say for certain, because now we're moving out of the realm of general theory into your specific implementation, which may have different requirements. I can, however, show you how mine works and allow you to alter it as you see fit:
I have three major classes that are all associated with animation. The first is the main Model class. This, of course, contains the actual mesh, textures, what have you. It also contains an instance of my second class, which is a Skeleton. The main purpose of the Skeleton class is to store the bind pose, compute the final matrices, and render the current skeleton if I need it for debugging.
The Skeleton class actually contains two arrays of "bones": Bind and Animated. The Bind array isn't an actual array of bones, but instead an array of matrices. These are simply the inverse matrices that we've been talking about, and all other information about these bones is discarded. The Animated array is an actual array of Bone structures, which contain the local quat/translation data (mostly for debugging) and the world matrix for the current frame of animation. It has a method that returns the current "Pose", which simply returns the final matrix array. (This is generated each time the function is called, and not stored anywhere.)
That frame is generated by a third Animation class, which stores each keyframe of the animation and returns an array of local-space bones based on the current time of the animation (which I refer to as a "frame"). This class is the one that actually interpolates the animation information for each frame and blends multiple animations together.
How it all works together, from a high-level point of view, is this: in the model's Update() function, it determines the current animation time and requests a "frame" from the Animation class. This frame is then passed to the Skeleton, which copies that data over to the Animated array and computes the world matrix for each bone. Later, during the Render() function for that mesh, it requests the current Pose from the skeleton, which gives it the final matrix array. That matrix array, along with the vertices in their bind pose, are then both passed to the renderer, where the actual skinning is performed either in software or in a vertex shader, depending on the render path.
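Stripped down to stand-ins, that flow could be outlined like this - every name and method body here is a minimal placeholder for illustration, not my actual code:

```java
// Rough outline of the Model/Skeleton/Animation split described above.
public class AnimArch {
    static class Animation {
        // keyframes[frame][bone] = local quat/translation data (opaque
        // float[] here); a real getFrame would interpolate between the
        // two keyframes nearest the requested time and blend animations.
        float[][][] keyframes;
        float[][] getFrame(float time) {
            int f = Math.max(0, Math.min((int) time, keyframes.length - 1));
            return keyframes[f];
        }
    }

    static class Skeleton {
        float[][] inverseBind;   // only the inverted bind pose is kept
        float[][] animatedWorld; // rebuilt each frame from the local bones
        float[][] pose() {
            // real code: final[b] = animatedWorld[b] * inverseBind[b]
            return animatedWorld;
        }
    }

    static class Model {
        Skeleton skeleton = new Skeleton();
        Animation animation = new Animation();
        // Update(): ask the Animation for a frame, hand it to the Skeleton,
        // and return the final pose for the renderer to skin with.
        float[][] update(float time) {
            float[][] frame = animation.getFrame(time);
            skeleton.animatedWorld = frame; // real code: local -> world pass
            return skeleton.pose();
        }
    }
}
```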
Now, mind you that this is a work-in-progress system, and even by typing that out I can see some improvements that could be made, but the general ideas are all there. Also, this system of doing things may be completely useless to you depending on how your engine is already set up. I think you can adapt it to fit your needs, though.
Good luck! (And if you do have any more questions don't be afraid to ask!)