About Sector0
  1. Video games, as software, need to meet functional requirements, and the most important functional requirement of a video game is clearly to provide entertainment. Players want interesting moments while playing, and many factors can deliver that entertainment. One of the important factors is animation. Animation matters because it affects the game from several aspects: beauty, controls, narration and even driving the game's logic. This post considers animations in terms of responsiveness, while also discussing some techniques for retaining their naturalness. In this article I'm going to share some tips we used in the animations of the action-platforming side-scroller "Shadow Blade: Reload". SB:R is powered by Unity3D. The PC version was released on August 10th, 2015 via Steam, and the console versions are on the way. Before going further, have a look at some parts of the gameplay here: You may want to check the Steam page too. Now we can discuss the problem. First, consider a simple example in the real world. You want to punch a punching bag. You rotate your hip, torso and shoulder in order, spending energy to rotate and move your limbs. You feel the momentum in your limbs and muscles, and you hear the punch sound just after landing it on the bag. You sense the momentum through touch, hear the sounds related to your action and see the desired motion of your body. Everything is synchronized! You experience the whole process with your different senses. Everything is ordinary here, and this is what our mind recognizes as natural. Now consider another example, in a virtual world like a video game. This time you have a controller, you press a button and you want to see a desired motion.
This motion can be any animation, like a jump or a punch. But this punch is different from the real-world example, because the player just moves a thumb on the controller while the virtual character must move its whole body in response. Each time the player presses a button, the character should perform an appropriate move. If you get a desired motion with good visuals and sounds after each button press, the player becomes immersed in the game, because the experience is close to the real-world punch. The synchronous response of animations, controls and audio helps players feel themselves inside the game: they use touch while interacting with the controller, sight to see the desired motion, and hearing to hear the audio. Having all of these synchronized at the right moment brings both responsiveness and naturalness, which is what we like to see in our games. The problem is that when you want responsiveness, you have to kill some naturalness in the animations. In a game like Shadow Blade: Reload, responsiveness is very important, because any extra movement can make the player fall off an edge or be killed by enemies. However, we need good-looking animations as well. So the next section lists some tips we used to bring both responsiveness and naturalness to our playable character, Kuro.

Cases Which Can Help Bring Both Naturalness and Responsiveness into Animations

Some of the techniques used in "Shadow Blade: Reload" animations are listed here. They have been used to retain naturalness while keeping responsiveness:

1- Using Additive Animations: Additive animations can be used to show some asynchronous motion on top of the current animations. We used them in different situations to show momentum over the body without interrupting the player with separate animations. An example is the land animation.
After the player's fall ends and he reaches the ground, he can continue running, attacking or throwing shurikens without any interruption or land animation. So we blend the fall directly with other animations, like running. But blending directly between fall and run doesn't produce an acceptable motion, so we add an additive land animation on top of the run (or other animations) to show momentum over the upper body. The additive animations serve purely visual purposes, and the player can continue running or doing other actions without interruption. The video here shows the additive animation used for landing momentum on the player's body: In Unity, additive animations are calculated with respect to the animation's first frame: the additive pose in each frame equals the difference between the current frame's pose and the first frame's pose. We use the character's reference pose here to get a more generic additive animation that layers well over a range of different locomotion animations. This additive animation is not applied on top of the standing idle animation; standing idle uses a different land animation. We also used some other additive animations. For example, a windmill additive animation on the spine and hands is played when the character stops and starts running consecutively; it shows momentum in the hands and spine. As a side note, additive animations have to be created carefully. If you are an indie developer with no full-time animator, you can achieve these kinds of modifications with other procedural animation techniques, such as Inverse Kinematics. For instance, an IK chain on the spine can be defined and used for the modification; the same is true for hands and feet. However, the IK chain, as well as the procedural animation of its end effector, has to be defined carefully.

2- Specific Turn Animations: You see turn animations in many games.
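As a rough sketch of tip 1's additive computation (additive pose = current frame pose minus the clip's first-frame pose, layered onto whatever base animation is playing), assuming per-bone Euler-angle rotations for simplicity; real engines such as Unity work with quaternions internally:

```python
# Minimal sketch of additive animation layering. A pose maps bone names to
# rotations stored as Euler-angle tuples (a simplifying assumption).

def additive_delta(clip_frame, clip_first_frame):
    """Per-bone delta of the additive clip: current frame minus first frame."""
    return {bone: tuple(c - f for c, f in zip(rot, clip_first_frame[bone]))
            for bone, rot in clip_frame.items()}

def apply_additive(base_pose, delta, weight=1.0):
    """Layer the weighted delta on top of the base animation's pose."""
    return {bone: tuple(b + weight * d for b, d in zip(rot, delta.get(bone, (0.0, 0.0, 0.0))))
            for bone, rot in base_pose.items()}
```

For the land example above, `base_pose` would come from the run animation, while the land clip contributes only the upper-body momentum as a delta.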
For instance, pressing the movement button in the opposite direction while running makes the character slide and turn back. While this animation is very good for many games and brings a good feeling to the motion, it is not suitable for an action-platformer like SB:R: you are always moving back and forth on platforms with narrow areas, and such an extra movement can make you fall unintentionally. It also kills responsiveness. So for turning, we just rotate the character 180 degrees in one frame. But rotating the character 180 degrees in a single frame doesn't look good on its own, so we used two different turn animations. They show the character turning: they start facing opposite the character's forward vector and end facing along it. When we turn the character in one frame, we play this animation, and it shows the turn completely. It has the same speed as the run animation, so nothing changes in terms of responsiveness; you just see a turn animation that shows the momentum of a turning motion over the body and brings good visuals to the game. One thing to consider is that the turn animation starts facing opposite the character's forward vector, so we turned off transitional blending for it, because blending can cause jerky motion on the root bone. To avoid frame mismatches and foot-skating, we used two different turn animations and played them based on the foot phase of the run animation. You may check out the turn animation here:

3- Slower Enemies: While the main character is very agile, the enemies are not! Their animations have many more frames. This helps us draw the players' focus away from the main character in many situations. You might know that the human eye has a great ability to focus on different objects.
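Tip 2's one-frame turn can be sketched as follows; the clip names and the 0.5 phase threshold are hypothetical, but the idea is to flip the facing instantly, disable transitional blending, and pick the turn clip that matches the run cycle's current foot phase:

```python
# Sketch of the instant turn: flip facing in one frame, then play a turn
# clip chosen by foot phase to avoid frame mismatches and foot-skating.

def start_turn(facing, run_phase):
    """facing: +1 or -1; run_phase: normalized [0, 1) phase of the run cycle."""
    new_facing = -facing                                   # 180-degree turn, one frame
    clip = "turn_left_contact" if run_phase < 0.5 else "turn_right_contact"
    blend_time = 0.0                                       # transitional blending off
    return new_facing, clip, blend_time
```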
So when you are looking at one enemy, you see only it clearly, not the others. Slower enemy animations with more frames help us pull the focus away from the player at many points. As a side note, I was watching a science show about human vision a while ago, and it claimed that women's eyes have a wider field of view while men's eyes focus better. You might want to check the research if you are interested in this topic.

4- Safe Blending Intervals to Cancel Animations: Consider a grappling animation. It starts from the idle pose and ends in the idle pose again, but it does its actual job within the first 50% of its length; the rest of its time just returns the character to the idle pose safely and smoothly. Most of the time, players don't want to watch animations to their end point; they prefer to do other actions. In our game, players usually tend to cancel attack and grappling animations after they kill enemies: they want to run, jump or dash and continue navigating. So for each animation that can be cancelled, we set a safe blending interval, used as the time window in which the current animation(s) may start being cancelled. This interval provides poses that blend well with run, jump, dash or other attacks, giving less foot-skating, fewer frame mismatches and good velocity blending between animations.

5- Continuous Animations: In SB:R, most animations are animated with respect to the animation(s) most likely to be playing before them. For example, we have run attacks for the player. When animating them, the animators concatenated one loop of run before the attack and created the run attack right after it. This gives a good speed blend between source and destination animations, because the run attack was created with respect to the original run animation. It also lets us carry the speed and responsiveness of the previous animation into the current one.
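Tip 4's safe blending interval can be sketched as below; the interval values are illustrative, and the idea is simply that a cancel request is honored only once the animation's normalized time enters its safe window:

```python
# Sketch of a safe blending interval: an attack may be cancelled into
# another action (run, jump, dash...) only inside its safe window.

def try_cancel(normalized_time, requested_action, safe_interval=(0.5, 1.0)):
    """Return the action to blend to, or None while the animation is locked."""
    lo, hi = safe_interval
    if requested_action and lo <= normalized_time <= hi:
        return requested_action
    return None
```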
Another example is the edge climb, which starts from the wall run animation.

6- Context-Based Combat: In SB:R we have context-based combat, which lets us use different animations based on the player's current state (moving, standing, jumping, distance and/or direction to enemies). Attacking from each state selects different animations, all of which preserve almost the same speed and momentum as the player's current state (moving, standing, diving and so on). For instance, we have run attacks, dash attacks, dive attacks, back stabs, Kusarigama grapples and many other animations. All start from their respective animations, like run, jump, dash and stand, and all try to preserve the previous motion's speed and responsiveness.

7- Physically Simulated Cloth as Secondary Motion: Although responsiveness can lower naturalness, adding secondary motion like cloth simulation helps mitigate the issue. In SB:R, the main character Kuro has a scarf, which helps us show more acceptable motion.

8- Tense Ragdolls and Lower Crossfade Time in Contacts: Removing crossfade transition times on hits and applying more force to the ragdolls produces better hit effects. This is useful in many games, not just in our case.

Conclusion

Responsiveness vs. naturalness is always a big challenge in video games, and there are ways to achieve both; most of the time you have to trade one off against the other to reach a decent result. For those who are eager to learn more about this topic, I can recommend this good paper from the Motion in Games conference: Aline Normoyle, Sophie Jorg, "Trade-offs between Responsiveness and Naturalness for Player Characters", 2014. It shows interesting results about players' responses to animations with different amounts of responsiveness and naturalness.
Article Update Log
14 August 2015: Initial release
18 August 2015: Added more reference videos to the article
20 August 2015: Minor edits to the text
  2. Hi Gaiiden,   I added another video with permission of the developer company and stated in the article that the game is powered by Unity3D.
  3. I do agree with the previous comments that some parts are unclear, but as I mentioned several times in the article, this is a general-purpose article. What you are expecting from me is to write a book instead of a general-purpose article. I mentioned that each of these techniques deserves a separate article if someone is eager to learn the details.     I addressed Havok Animation as well and sent a link for it. It has source code, examples and rich documentation. You can find what you need there; Havok has implemented the same technique more in action.     The link belongs to Project Anarchy. Havok came up with free tool sets for mobile developers; you can download them free there. The hkVector4f class is located in the downloaded packages as well. I've put the link there so audiences can check it more easily. It's free for mobile development.       It seems that you haven't read the paragraph carefully. Here is a quote from the article:   "Here we can define a skeleton map for our current skeleton and select more important bones to be updated and ignore the others. For example when you are far from the camera you don't need to update finger bones or neck bone. You can just update spines, head, arms and legs. These are the bones which can be seen from far"   Yeah, you're right: you need some bones for logic, you have different cameras in multiplayer games, and you need deterministic operations in some types of games. But do finger bones have anything to do with this? Do facial bones? You can have 28 bones for fingers, 15 bones for the face, 7 bones for a ponytail, 5 bones for extra cloth! Do they have anything to do with logic? With physics? I mentioned you should have a skeleton map to keep the more important bones even when they are far from the camera; the important bones affect both visuals and logic. So it seems that you haven't worked with complex-structured characters before!
because in these kinds of characters, many of the bones are only there for better visuals and have nothing to do with logic, and if they are not going to be seen, they should not be calculated.     You might have complex blend trees in which you are blending more than 20 animations in a frame. There may be other procedural calculations, like IK/FK blending, computing bone rotation limits in your run-time rig and many other things. These are first calculated in local or parent space, so having a dirty flag can help you a lot here. These calculations are heavy; they have far more overhead than checking a dirty flag! So skipping these heavy calculations when they don't need to be updated is not a negligible saving at all. To produce realistic characters, many of these techniques are needed, and managing their computation is not a trivial decision.     This part is related to mesh skinning, and I mentioned in the introduction that this article is not going to talk about mesh skinning. Here is a quote from the article:   "This article is not going to talk about mesh skinning optimization and it is just going to talk about skeletal animation optimization techniques. There exists plenty of useful articles about mesh skinning around the web."   Overall, the expectation from you as a reviewer is to read the article more carefully. This is a general-purpose article: it tries to give enthusiastic audiences some cues and introduce some techniques in general terms. I mentioned this several times in the article, so you should not expect details here. Covering the details of each technique needs a separate, heavyweight article. One article can be more general, like a survey, and another can be more specialized, trying to cover all the details and experimental results. You should note that this article is general, and general articles like surveys have their own audiences as well.
The first thing is that you haven't paid attention to the note at the end of the page, which states:   "Note: Please offer only positive, constructive comments - we are looking to promote a positive atmosphere where collaboration is valued above all else."   You haven't read the article with patience. The issues you mentioned are very good and useful, but the article defined its scope in its different sections.   I really appreciate the comments from Buckeye and Buckshag, because they read the article carefully and with patience; they made many valuable comments, raised great issues, and understood what the article is trying to say and to which group of audiences. I'll try to update the article based on their comments.   You mentioned that I clearly don't have enough experience in this field. First, I have to say that writing articles is not about promoting ourselves; it's about sharing information with other enthusiastic people. So I'm sorry if I'm going to mention some of my experience here: I've been in the gaming industry for almost 9 years, focused on animation, and I've done research and development on many animation techniques during these years. I've also been a user of different animation tools, technologies and engines in this time.   In the end, thanks for your comments. I will update some unclear parts of the article.
  4. Hi Buckshag,   Thanks for the comments and the cases you mentioned; all of them are true. BTW, I've been following EmotionFX news for several years. I haven't used it before, but I know it provides great run-time animation features. Hopefully I will use it in future projects.
  5. Hi Buckeye,   Many thanks for your attention. I'll try to update the article and fix the issues you mentioned. I will edit the unclear parts based on your comment.
  6. Hi AllEightUp,   Thanks for your consideration. I'll try to answer some of the issues you mentioned.   As I stated in the introduction of the article, the techniques are addressed generally here, and I try to introduce references for those who are eager to learn the details. Covering the details of these techniques needs a separate, heavyweight article.       I've quoted a part of the article here about animation tracks: "So a good technique is to keep the bones sequentially in the memory. They should not be separated because of the locality of the references." It says the bones should be allocated linearly in memory, not the animation tracks. Very detailed characters have roughly 70 bones, and the bones are usually accessed several times while computing animation, skinning and post-processes like IK and blending between ragdoll and animation. So keeping bones sequential helps performance. However, you're right about the animation tracks: many of them will not be accessed for long periods of time, so we can have a memory manager for loading and unloading them. The memory manager can be tied closely to the game's animation system. It can keep the most recently used animations linear in memory, or it can have a learning process that weights the motion-graph transitions if the game uses similar transitional animation techniques, so the most recent animations (or parts of animations) remain resident in memory linearly. For animation compression, the sample rate is usually high, so the curves can use plain linear interpolation; the compression can be applied during the import phase and the tracks reconstructed at import time.         I mentioned that users should adjust the error rate, and if they need the continuity they should leave the curves continuous. Users can adjust error rates to get both optimization and the beauty of the animation.
Here is a quote from the article:   "As you can see in Figure1, the keyframe number 2 and 3 are the same. But there is an extra motion between them. You might need this smoothness for many bones so leave it be if you need it."       About the jittering: jittering will not occur on child bones when you simplify the parent's animation tracks, because keyframes are always stored relative to each bone's binding pose, and the binding pose does not change at run time. The binding pose is stored relative to its parent bone's space, so simplification should not affect the motion of child bones. In some rare situations where you have translation tracks, jittering can occur. For example, you have a weapon bone that is a child of the right hand, and the character passes the weapon to the left hand for a short period during the animation. While the weapon bone is in the left hand, compression can create jittering on it, because it is still a child of the right hand. However, I haven't written anywhere in the article that you should apply the error rate separately for each bone. Here is a related quote from the article: "The curve simplification should be applied for translation, rotation and scale separately because each of which has its own keyframes and different values. Also their difference is calculated differently"   So it says the scale, translation and rotation error rates should be specified separately. This doesn't mean you should apply error rates to each bone separately; it can be done per bone, but it would be hard for the user to manipulate. Again, thanks for your critiques. The techniques are discussed generally here; all of them have many details and tricks when you want to put them into action. Each of them should be covered separately in different articles, along with experimental results.
  7. Skeletal animation plays an important role in video games. Recent games use many characters, and processing a huge number of animations can be computationally intensive and memory-hungry as well. To have multiple characters in a real-time scene, optimization is essential. Many techniques exist to optimize skeletal animation, and this article addresses some of them. Some of the techniques addressed here have plenty of details, so I just try to define them in a general way and introduce references for those who are eager to learn more. This article is divided into two main sections. The first section addresses optimization techniques that can be used within animation systems. The second section takes the perspective of animation-system users and describes techniques users can apply to leverage the animation system more efficiently. So if you are an animator or technical animator, you can read the second section; if you are a programmer who wants to implement an animation system, you may read the first. This article is not going to talk about mesh skinning optimization; it only covers skeletal animation optimization techniques. There are plenty of useful articles about mesh skinning around the web.

1. Skeletal Animation Optimization Techniques

I assume that most readers of this article know the basics of skeletal animation, so I'm not going to cover the basics here. To start, let's define a skeleton in character animation. A skeleton is an abstract model of a human or animal body in computer graphics. It is a tree data structure whose nodes are called bones or joints. Bones are just containers for transformations. For each skeletal animation there exist animation tracks; each track holds the transformation info of a specific bone. A track is a sequence of keyframes.
A keyframe is the transformation of a bone at a specific time, measured from the beginning of the animation. Usually, keyframes are stored relative to a pose of the bone named the binding pose. These animation tracks and the skeletal representation can be optimized in different ways. In the following sections I will introduce some of these techniques. As stated before, the techniques are described generally; each of them could fill a separate article.

Optimizing Animation Tracks

An animation consists of animation tracks. Each animation track stores the animation related to one bone. An animation track is a sequence of keyframes, where each keyframe contains translation, rotation or scale info. Animation tracks can be optimized easily from different aspects. First, note that most bones in character animations have no translation. For example, we don't need to translate fingers or hands; they just need to be rotated. Usually, the only bones that need translation are the root bone and props (weapons, shields and so on); the other body parts do not translate, they only rotate. Also, realistic characters usually don't need to be scaled: scale is usually applied to cartoony characters, and animators mostly use uniform rather than non-uniform scale. Based on this, we can remove the scale and translation keyframes from animation tracks that don't need them. The animation tracks become lightweight, using less memory and computation. Also, if we use uniform scale, a scale keyframe can contain a single float instead of a Vector3. Another very useful technique for optimizing animation tracks is animation compression. The most famous one is curve simplification.
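The track pruning described above can be sketched as follows; the track representation (a list of keyframe value tuples) is a simplifying assumption, not any specific engine's format:

```python
# Sketch of two track optimizations: dropping translation/scale tracks whose
# keys never deviate from the bone's reference value, and storing uniform
# scale keys as a single float instead of a Vector3.

def prune_track(keys, reference_value, eps=1e-6):
    """Drop the track entirely if every key equals the bone's reference value."""
    if all(abs(a - b) <= eps for key in keys for a, b in zip(key, reference_value)):
        return None                       # track removed: this bone never moves/scales
    return keys

def compress_uniform_scale(scale_keys, eps=1e-6):
    """Store one float per key when the scale is uniform (x == y == z)."""
    if all(abs(s[0] - s[1]) <= eps and abs(s[1] - s[2]) <= eps for s in scale_keys):
        return [s[0] for s in scale_keys]
    return scale_keys
```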
You may know it as keyframe reduction as well. It reduces the keyframes of an animation track based on a user-defined error: consecutive keyframes with small differences can be omitted. Curve simplification should be applied to translation, rotation and scale separately, because each has its own keyframes and value ranges, and their differences are calculated differently. You may read this paper about curve simplification to find out more. One other thing to consider is how you store rotation values in the rotation keyframes. Usually, rotations are stored as unit quaternions, because quaternions have some good advantages over Euler angles. If you store quaternions in your keyframes, you need four elements, but for a unit quaternion the scalar part can be recovered from the vector part, so a quaternion can be stored with just 3 floats instead of four. See this post from my blog to find out how you can obtain the scalar part from the vector part.

Representation of a Skeleton in Memory

As mentioned in previous sections, a skeleton is a tree data structure. Since animation is a dynamic process, the bones may be accessed frequently while animation is being processed, so a good technique is to keep the bones sequential in memory. They should not be scattered, because of locality of reference: sequential allocation of bones in memory is more cache-friendly for the CPU.

Using SSE Instructions

To update a character animation, the system has to do lots of calculations, most of them linear algebra on vectors. For example, the bones are always being interpolated between two consecutive keyframes, so the system has to LERP between two translations and two scales, and SLERP between two quaternion rotations as well.
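The quaternion compression mentioned above can be sketched as below. The only subtlety is the sign: q and -q represent the same rotation, so the scalar part is forced non-negative before it is dropped, which makes it recoverable from the unit-length constraint x² + y² + z² + w² = 1:

```python
import math

# Sketch of storing a unit quaternion as 3 floats and rebuilding the scalar
# part from the vector part.

def compress_quat(x, y, z, w):
    """Keep only the vector part; flip the sign so the dropped w is >= 0."""
    if w < 0.0:
        x, y, z = -x, -y, -z
    return (x, y, z)

def decompress_quat(v):
    """Recover the scalar part of a unit quaternion from its vector part."""
    x, y, z = v
    w = math.sqrt(max(0.0, 1.0 - (x * x + y * y + z * z)))
    return (x, y, z, w)
```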
Also, there may be animation blending, where the system interpolates between two or more different animations based on their weights. LERP and SLERP are calculated with these equations respectively:

LERP(V1, V2, a) = (1-a) * V1 + a * V2
SLERP(Q1, Q2, a) = sin((1-a)*t)/sin(t) * Q1 + sin(a*t)/sin(t) * Q2

where 't' is the angle between Q1 and Q2 and 'a' is the normalized interpolation factor. These two equations are used frequently in keyframe interpolation and animation blending, and SSE instructions can help you compute them efficiently. I highly recommend checking the hkVector4f class from the Havok physics/animation SDK as a reference: it uses SSE instructions very well and is a very well-designed class. You can define translation, scale and quaternion types similar to hkVector4f. Note that if you use SSE instructions, the objects using them have to be memory-aligned, otherwise you will run into traps and exceptions. You should also consider how your target platform supports these kinds of instructions.

Multithreading the Animation Pipeline

Imagine you have a crowded scene full of NPCs, each with a bunch of skeletal animations - maybe a herd of bulls. The animation can take a lot of time to process. This can be reduced significantly if the computation of crowds is multithreaded, with each entity's animations computed on a different thread. Intel introduced a good solution to achieve this in this article: define a thread pool with worker threads whose count should not exceed the number of CPU cores (otherwise application performance decreases). Each entity's animation and skinning calculation is treated as a job and placed in a job queue; each job is picked up by a worker thread, and the main thread calls the render functions when the jobs are done.
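The LERP and SLERP equations above translate directly into code. The small-angle guard (falling back to LERP when sin(t) is near zero) is a standard precaution I've added, since the SLERP formula divides by sin(t):

```python
import math

# Direct transcription of the LERP and SLERP equations, with quaternions
# assumed to be unit-length (x, y, z, w) tuples.

def lerp(v1, v2, a):
    """LERP(V1, V2, a) = (1-a)*V1 + a*V2, component-wise."""
    return tuple((1.0 - a) * c1 + a * c2 for c1, c2 in zip(v1, v2))

def slerp(q1, q2, a, eps=1e-6):
    """SLERP(Q1, Q2, a) = sin((1-a)*t)/sin(t)*Q1 + sin(a*t)/sin(t)*Q2."""
    dot = sum(c1 * c2 for c1, c2 in zip(q1, q2))
    dot = max(-1.0, min(1.0, dot))
    t = math.acos(dot)                     # angle between the two quaternions
    if math.sin(t) < eps:                  # nearly parallel: plain LERP is fine
        return lerp(q1, q2, a)
    w1 = math.sin((1.0 - a) * t) / math.sin(t)
    w2 = math.sin(a * t) / math.sin(t)
    return tuple(w1 * c1 + w2 * c2 for c1, c2 in zip(q1, q2))
```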
If you want to see this multithreading technique more in action, I suggest having a look at the Havok animation/physics documentation and studying the multithreading part of the animation section. To get the docs, you have to download the whole SDK here. You can also see how Havok handles synchronous and asynchronous jobs by defining different job types.

Updating Animations

One important thing in animation systems is how you manage the update rate of a skeleton and its skinning data. Do we always need to update animations every frame? And if so, do we need to update every bone every frame? We should have a LOD manager for skeletal animations, which decides whether or not to update the hierarchy. It can consider different states of a character to decide on its update rate. Some possible cases are listed here: 1- The Priority of the Animated Character: Some characters, like NPCs and crowds, do not have very high priority, so you may not update them every frame. Most of the time they are not seen clearly, so they can skip updates on some frames. 2- Distance to Camera: If the character is far from the camera, many of its movements cannot be seen. So why compute something that cannot be seen? Here we can define a skeleton map for our current skeleton, select the more important bones to update and ignore the others. For example, when the character is far from the camera, you don't need to update finger bones or the neck bone; you can update just the spine, head, arms and legs - the bones that can be seen from afar. This gives you a lightweight skeleton and lets you skip updating many bones. Don't forget that human hands have 28 finger bones, and 28 bones for a small portion of the mesh is not very efficient. 3- Using Dirty Flags for Bones: In many situations, a bone's transformation does not change between two consecutive frames.
For example, the animator didn't animate that bone for several frames, or the curve simplification algorithm removed consecutive keyframes that were similar. In these situations, you don't need to recompute the bone in its local space. As you might know, bones are first calculated in their local space from the animation data, then multiplied by their binding pose and parent transformation to be expressed in world or model space. Defining a dirty flag for each bone lets you skip the local-space calculation for bones that haven't changed between two consecutive frames; they are recomputed in local space only when dirty. 4- Update Only When They Are Going to Be Rendered: Imagine a scene in which some agents are chasing you and you try to run away from them. The agents are not in the camera frustum, but their AI controller is monitoring and following you. Should we update their skeletons while the player can't see them? Not in most cases. So you can skip updating skeletons that are not in the camera frustum. Both Unity3D and Unreal Engine 4 have this feature: they let you select whether the skeleton and its skinned mesh should be updated when outside the camera frustum. You might still need to update skeletons outside the frustum in some cases - for example, to shoot at a character's head that is off camera, or to read root motion data for locomotion extraction, where you need the computed bone positions. In those situations you can force the skeleton to update manually, or simply not use this technique.

2. Optimized Usage of Animation Systems

So far, some techniques have been discussed for implementing an optimized animation system. As a user of an animation system, you should trust it and assume it is well optimized - that it implements many of the techniques described above, or even more.
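As a quick illustration of case 3 from the first section, a dirty-flag update might look like the sketch below; the pose representation and the stand-in for the heavy local-space work are assumptions:

```python
# Sketch of per-bone dirty flags: local-space poses are recomputed only for
# bones whose sampled animation data changed since the previous frame.

def update_local_poses(bones, sampled, cache):
    """bones: bone names; sampled: this frame's sampled transforms;
    cache: last frame's transforms. Returns the bones actually recomputed."""
    recomputed = []
    for bone in bones:
        if cache.get(bone) != sampled[bone]:   # dirty: transform changed
            cache[bone] = sampled[bone]        # stands in for the heavy local-space math
            recomputed.append(bone)
    return recomputed
```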
Knowing this, you can produce animations that are friendlier to an optimized animation system. I'm going to address some of them here; this section is more relevant to animators and technical animators.

Do Not Always Move All Bones

As mentioned earlier, animation tracks can be optimized and their keyframes reduced easily. Knowing this, you can author animations that suit this kind of optimization. Do not scale or move bones unnecessarily, and do not transform bones that cannot be seen. For example, during a fast sword attack not all of the facial bones are visible, so you don't need to move them all. In cutscenes with a predefined camera you know which bones are in the camera frustum; if the camera is zoomed in on your character's face, you don't need to move the fingers or hands. This saves your own time and lets the system save a lot of memory by simplifying the per-bone animation tracks.

One other important case is duplicated consecutive keyframes, which occur frequently in the blocking phase of animation. For example, you pose the fingers at frame 1, pose them again at frame 15, and copy keyframe 15 to frame 30, so keyframes 15 and 30 are identical. But the default keyframe interpolation is set to make animation curves smooth, which means you might get extra motion between frames 15 and 30. Figure 1 shows a curve smoothed by such interpolation.

Figure 1: A smoothed animation curve

As you can see in Figure 1, keyframes 2 and 3 are identical, yet there is extra motion between them. You might want this smoothness for many bones, and if so, leave it be. But if you don't need it, make sure to set the two identical consecutive keyframes to linear interpolation, as shown in Figure 2. With this, the keyframe reduction algorithm can remove the redundant samples between them.
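To see why linear duplicated keys help, here is a minimal sketch of the kind of keyframe-reduction pass an exporter might run: it drops any sample that a linear interpolation of its neighbours already reproduces within a tolerance. This greedy version is an assumption for illustration; production tools use more careful error metrics:

```python
def reduce_linear_keys(keys, tol=1e-4):
    """Drop keyframes that linear interpolation already reproduces.

    keys: list of (time, value) pairs sorted by time.
    Returns a reduced list; endpoints are always kept.
    """
    if len(keys) <= 2:
        return list(keys)
    out = [keys[0]]
    for i in range(1, len(keys) - 1):
        a = out[-1]                     # last kept key
        b = keys[i + 1]                 # next original key
        t = (keys[i][0] - a[0]) / (b[0] - a[0])
        predicted = a[1] + t * (b[1] - a[1])
        if abs(keys[i][1] - predicted) > tol:
            out.append(keys[i])         # keep keys the line cannot reproduce
    out.append(keys[-1])
    return out
```

On a track where two identical keys are joined linearly, every baked sample in between collapses away; with smooth tangents the in-between samples deviate from the straight line and survive the pass.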
Figure 2: Two linear consecutive keyframes

Consider this case especially carefully for finger bones: a human skeleton can have up to 28 of them, and although they cover only a small portion of the body, they take a lot of memory and calculation. In the previous example, making the two identical consecutive keyframes linear produces no visual artifact for the finger bones, and you can drop 28 * (30 - 15 + 1) = 448 keyframe samples, where 28 is the number of finger bones and 15 and 30 are the frames on which the animator created keyframes, with a sampling rate of one sample per frame. Setting identical consecutive keyframes to linear for finger bones therefore saves a good amount of memory. The saving from one animation may be small, but it becomes significant when your game has many skeletal animations and characters.

Using Additive and Partial Animations Instead of Full-Body Animations

Animation blending has different techniques. Two that excel in both functionality and performance are additive and partial animation blending. These two blending schemes are usually used for asynchronous animation events: for example, when you are running and decide to shoot, the lower body continues to run while the upper body blends into the shoot animation. Using additive and partial animations can also reduce the number of animations you need. Let me describe this with an example. Imagine you have a locomotion animation controller that blends between three animations (walk, run and sprint) based on input speed, and you want to add a spine-lean animation to this locomotion so the character leans forward for a period of time while accelerating. One approach is to make three full-body animations (walk_lean_fwd, run_lean_fwd and sprint_lean_fwd) that blend synchronously with walk, run and sprint respectively, changing the blend weight to achieve the lean. Now you have three additional full-body animations, each several frames long.
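Going back to the run-and-shoot example above, partial blending can be sketched as a per-bone mask that overrides only the upper body. The bone names and scalar "rotations" here are simplifications for illustration, not any engine's API:

```python
UPPER_BODY = {"spine", "chest", "head", "l_arm", "r_arm"}

def partial_blend(base_pose, overlay_pose, mask, weight):
    """Blend overlay_pose over base_pose on masked bones only.

    Poses map bone name -> rotation angle in degrees; a scalar stands in
    for a full quaternion to keep the sketch short.
    """
    out = dict(base_pose)               # unmasked bones keep the base animation
    for bone in mask & overlay_pose.keys():
        a, b = base_pose[bone], overlay_pose[bone]
        out[bone] = a + weight * (b - a)   # lerp; real rotations need slerp
    return out
```

Only the masked bones are touched, so the lower body keeps running undisturbed while the shoot animation drives the arms and spine.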
This means more keyframes, more memory usage and more calculation. Your blend tree also becomes more complicated and higher dimensional. Now imagine adding six more animations to the locomotion system: two directional walks, two directional runs and two directional sprints, each blended with walk, run and sprint respectively. To keep the lean, you would have to add two directional walk_lean_fwd, two directional run_lean_fwd and two directional sprint_lean_fwd animations and blend them into the corresponding blend trees. The blend tree becomes high dimensional, demands many full-body animations, and costs a lot of memory and computation. It even becomes hard for the user to work with.

You can handle this situation far more easily with a lightweight additive animation. An additive animation is one that is added on top of the currently playing animations; it usually stores the difference between two poses. First the current animations are evaluated, then the additive is applied on top of the resulting transforms. The additive animation is often just a single-frame pose that does not need to affect all body parts. In our example it can be a single-frame animation in which the spine bones are rotated forward, the head bone is rotated down and the arms are spread a little. You add it on top of the current locomotion animations by manipulating its weight. You achieve the same result with one single-frame, half-body additive animation, and there is no need to produce separate full-body lean-forward animations. Using additive and partial animation blending can thus reduce your workload and improve performance very easily.

Using Motion Retargeting

A motion retargeting system promises to apply the same animation to different skeletons without visual artifacts, letting you share animations between different characters.
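The additive scheme described above (a stored pose delta, scaled by a weight and layered on top of the blend-tree result) might be sketched like this, again with scalar angles standing in for real bone transforms; real systems compose quaternion deltas instead of adding numbers:

```python
def make_additive(pose, reference_pose):
    """An additive animation stores the per-bone difference from a reference pose."""
    return {b: pose[b] - reference_pose[b] for b in pose}

def apply_additive(current_pose, additive, weight):
    """Layer the weighted delta on top of whatever the locomotion blend produced.

    Bones absent from the additive are left untouched, which is how a
    half-body additive leaves the legs alone.
    """
    return {b: current_pose[b] + weight * additive.get(b, 0.0)
            for b in current_pose}
```

A single-frame lean pose turned into a delta this way can be faded in and out over any walk, run or sprint result, which is exactly why no lean variant per locomotion clip is needed.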
For example, you can make a walk animation for one specific character and reuse it for other characters as well, saving memory by preventing animation duplication. Just note that a motion retargeting system has its own computational cost: beyond basic skeletal animation it needs techniques for scaling positions and the root-bone translation, limiting joints, mirroring animations and more. So while you save animation memory and animator time, the system needs extra computation; usually this should not become a bottleneck in your game. Unity3D, Unreal Engine 4 and Havok Animation all support motion retargeting. If you do not need to share animations between different skeletons, you don't need motion retargeting.

Conclusion

Optimization is always a serious part of video game development. Video games are soft real-time software, so they should respond within a proper time. Animation is an important part of a video game from different aspects: visuals, controls, storytelling, gameplay and more. Having lots of character animations can improve a game significantly, but the system must be capable of handling that many of them. This article addressed some of the techniques that matter for the optimization of skeletal animations; some of them are highly detailed and were discussed only in general terms here. The techniques were reviewed from two perspectives: that of developers who create skeletal animation systems, and that of users of those systems.

Article Update Log

13 Feb 2015: Initial release
21 Feb 2015: Rearranged some parts.
  8. Animation-transitioning

It seems that you are using bone transformation matrices in world or modeling space, as Ashaman73 mentioned. Professional animation systems store the keyframe transforms relative to the binding pose (or a reference pose). During animation blending, the keyframe transforms are interpolated relative to the reference or binding pose, and the final pose is then calculated in modeling or world space before being passed to the skinned mesh renderer.

Maybe you are calculating the blended pose in world or modeling space and adding it to the animation track, while the animation system you are using assumes your blended transformation is in local space, so it applies that transformation again when building the final world-space matrix.

For more accurate blending you can interpolate the position, rotation and scale tracks separately, just as Buckeye suggested. The way you are interpolating will not produce bad results on scale and translation if you are interpolating between two correct matrices, but it can cause non-linear artifacts on rotations because you are using LERP instead of SLERP.
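To illustrate the suggestion, here is a minimal sketch of decomposed blending: translation and scale are LERPed while rotation uses a quaternion SLERP. Quaternions are plain (w, x, y, z) tuples here; this is generic math, not any specific engine's API:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                          # take the shorter arc
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:                       # nearly parallel: lerp and renormalize
        out = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def blend_trs(t0, t1, alpha):
    """Blend two (translation, rotation_quat, scale) triples component-wise."""
    lerp3 = lambda a, b: tuple(x + alpha * (y - x) for x, y in zip(a, b))
    return (lerp3(t0[0], t1[0]), slerp(t0[1], t1[1], alpha), lerp3(t0[2], t1[2]))
```

After blending, the result is recomposed into a matrix and only then combined with the binding pose and parent transforms.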
  9. Our New Game: I Am Dolphin

    Great job creating those procedural animations.
  10. Havok for dynamic animation like Euphoria?

Hi eatmicco,

Havok Animation supports blending between physical and keyframe animation very well. It also supports pose matching, which helps you switch between physical and keyframe animation without glitches. The Havok Animation SDK provides everything you need for combining keyframe animation and ragdolls. Euphoria, however, makes the physical contacts intelligent: as far as I know, it uses machine learning techniques to control biomechanical forces and produce intelligent body collisions.

If you want intelligent body contacts, you will have to build a system on top of Havok's physical/keyframe animation blending. To find out more about it, you may search for the "Powered Ragdoll Controller" and "Rigid Body Ragdoll Controller" in the Havok Animation SDK or the Havok Animation Tool.
  11. Animated Parameterized Models

Actually, you need to do a bit more to apply the differences to the final model. Search for the combination of morph and skeletal animation and you'll find out more about it. It's a solved problem :)

The important thing is to save the finalized, customized model in a single buffer. You might have many morph targets for customization, but you don't need them after the model is finalized. Keeping all of the morph targets around during gameplay is not a good idea since they can take a lot of memory, so bake the differences from the base model into the finalized model and store only that.
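As a small sketch of the baking step described above (names and data layout are illustrative): all customization morph deltas are collapsed once into a single final vertex buffer, after which the per-target deltas can be discarded:

```python
def bake_customized_mesh(base_vertices, morph_deltas, weights):
    """Collapse weighted morph targets into one final vertex buffer.

    base_vertices: list of (x, y, z) tuples.
    morph_deltas:  target name -> list of per-vertex (dx, dy, dz) offsets.
    weights:       target name -> customization weight chosen by the player.
    """
    baked = [list(v) for v in base_vertices]
    for name, weight in weights.items():
        for v, d in zip(baked, morph_deltas[name]):
            for axis in range(3):
                v[axis] += weight * d[axis]     # accumulate each weighted delta
    return [tuple(v) for v in baked]            # one buffer; deltas can be freed
```

The baked buffer then feeds the regular skinning pipeline like any other mesh, and only this single buffer needs to live in memory during gameplay.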