Animation Blending
I've been able to get Doom 3 MD5 models to load in my game engine, and there's one last thing I need: animation blending. More specifically, I need to play one animation on top of another. For example, I'll have the player model (a marine) running around, and when he shoots, I'd like to keep the current idle/walk/crouch animation and apply the shooting or reloading animation on top of it. The shooting animation should override the top half of the skeleton, I believe. I'd also like this system to be flexible in case I want to combine other animations in different ways.
Hopefully, that makes sense. Right now, I can transition between animations no problem.
You don't mention what API you're using.
However, in DirectX, I'm not sure you can do what you want using the builtin D3DX functions.
I'm writing my own animation controller (vs. using a D3DXAnimationController) to blend animations just as you mention. I also wrote my own animation data loader using a custom file format to support that, vs loading/creating a mesh and hierarchy using D3DX functions and structures.
With that approach, you create 2 sets of frames (bones), one for the lower part of the hierarchy and one for the upper part. Create 2 "tracks" (just a data structure), one for the upper set of bones, one for the lower set of bones. Assign an appropriate animation to each track - e.g., for the upper: "Reload"; for the lower, "Run." Each "advance time," calculate the bone final matrices for those 2 sets of frames.
I.e., one "track" calculates final bone transforms only for a particular set of bones. Both tracks combined calculate all the final bone transforms.
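The track idea above might be sketched like this. This is a minimal illustration, not a real API: the struct and function names are hypothetical, and instead of sampling keyframes it just records which track drives each bone, so the partition of the skeleton is visible.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch: each track owns an animation name and the subset
// of bone indices it is allowed to drive.
struct Track {
    std::string animation;   // e.g. "Reload" or "Run"
    std::vector<int> bones;  // indices of the bones this track writes
};

// Each "advance time", a track computes final matrices only for its own
// bones. Here we just record which track wrote each bone.
void ApplyTrack(const Track& t, std::vector<int>& ownerOfBone, int trackId) {
    for (int bone : t.bones)
        ownerOfBone[bone] = trackId;
}
```

Both tracks combined cover the whole skeleton: every bone is written by exactly one track.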
EDIT: After not thinking about it for a while, you might try the following (I haven't tried it but it seems straightforward):
1. Create 2 lists of bone IDs or indices (however you have the hierarchy arranged), one list each* for the parts of the body you want to have separate animations for.
2. Using one animation, calculate the final transforms for one set of bones.
3. Using the other animation, calculate the final transforms for the other set of bones.
*E.g., lower list: pelvis, 2 thighs, 2 shins, 2 feet. Upper list: spine, neck, head, clavicles, biceps, lower arms, hands.
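Step 1 above can be sketched as follows, assuming a hypothetical name-to-index map built when the MD5 hierarchy loads (the helper name and bone names are illustrative):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Turn a list of bone names into a list of bone indices, using a
// hypothetical name->index map from the loaded hierarchy.
std::vector<int> IndicesFor(const std::map<std::string, int>& boneIndexByName,
                            const std::vector<std::string>& names) {
    std::vector<int> out;
    for (const std::string& n : names)
        out.push_back(boneIndexByName.at(n)); // throws if a name is missing
    return out;
}
```

You'd build the lower and upper lists once at load time, then reuse them every frame for steps 2 and 3.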
I'm using OpenGL, so everything has to be done from scratch. I've actually come to the same conclusion as you have, but what I'm looking at is how to implement this in a way that's easy to use code-wise. Right now, my MD5Object class contains two animation channels --each can point to a loaded animation, keep track of playback data on them, and store the results in the animation's temporary skeleton data. They can also transition between animations, and queue up a list of animations to play after each one finishes playback.
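The channel described above, with a current clip and a queue of clips to play after it, might look roughly like this. A sketch only: the member names are illustrative, and a real channel would also hold the per-clip playback data and the temporary skeleton.

```cpp
#include <cassert>
#include <deque>
#include <string>

// Minimal sketch of an animation channel: current clip, playback time,
// and a queue of clips to play once the current one finishes.
struct AnimationChannel {
    std::string current;
    std::deque<std::string> pending;
    double time = 0.0;

    void Queue(const std::string& clip) {
        if (current.empty()) current = clip;
        else pending.push_back(clip);
    }

    // Advance playback; when the clip ends, start the next queued clip.
    void Advance(double dt, double clipLength) {
        time += dt;
        if (time >= clipLength && !pending.empty()) {
            current = pending.front();
            pending.pop_front();
            time = 0.0;
        }
    }
};
```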
I found this online: Blending & State Machines. It explains a lot of complex animation blending concepts that would be nice to have. One feature I've noticed in the Halo games: when the player turns, the model twists at the waist until it reaches a certain angle, then plays a turn animation for the legs, rotating the lower body so the model's feet appear planted on the ground.
"if the player turns, the model turns his waist..."
It's likely there're (at least) 4 things going on - a transform applied to the entire model, a lower-body animation, an upper-body animation, and a rotation of the upper-body parent bone (probably equivalent to Pelvis or similar). I've done something similar to that by maintaining a "head-facing" angle (calculated from the direction the character is looking - tracking a target, etc.), calculating an appropriate transform and applying that transform to the character's Neck bone* during the animation transforms calculation. It's been a while since I've played Halo, but, as I remember, the characters also "look" in various directions - heads rotating up and down as well as side-to-side. That's probably done similarly.
*Most of the animations I use don't animate the neck/head as I do that separately.
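The "head-facing" idea above can be sketched as a simple clamp: limit how far the head may yaw away from the body before the body itself must turn, and use the resulting offset to drive the extra rotation applied to the Neck bone. The function name, angle units, and limit value here are all illustrative.

```cpp
#include <cassert>

// Clamp the head's desired yaw to within maxOffset of the body's yaw.
// The clamped result would drive the Neck bone's extra rotation during
// the animation transform calculation.
float ClampedHeadYaw(float desiredYaw, float bodyYaw, float maxOffset) {
    float offset = desiredYaw - bodyYaw;
    if (offset >  maxOffset) offset =  maxOffset;
    if (offset < -maxOffset) offset = -maxOffset;
    return bodyYaw + offset;
}
```

When the clamp saturates, that's the signal to kick off the leg-turn animation so the lower body catches up.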
"how to implement this in a way that's easy to use code-wise"
I'm still working on the details, but, at present, my animation controller tracks (channels), in addition to having an animation sequence, playback speed, etc., also take a node (bone, frame, etc.) hierarchy. Those hierarchies can be setup for "whole-body", "upper-body" (pelvis-spine-arms-etc.) or "lower-body" (pelvis-thighs-shins-etc.), each of which needs to be setup only once. The animation controller, using the node hierarchy for each track, calculates only the appropriate animation transforms.
My MD5Object class (which renders an instance of an MD5Model with animation applied) also uses 2 animation channels. After watching a documentary on the development of Uncharted 2, I thought about keeping an STL list of animation channels that could be added or removed at any time, and being able to set blend operations between each channel, similar to how alpha blending works in fixed-function 3D APIs. Then I thought about creating a target system where the user can choose which joint in the model's skeleton serves as the root bone for that animation channel. When the animation is applied, the first bone processed is the channel's specified "root bone," NOT the skeleton's. With this, I could apply my running animation to the first channel, and apply the upper-body animation to the second channel with its root bone set to the next spine bone up from the pelvis (since the pelvis is generally the root), which will branch the animation off to all child bones.
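The per-channel root-bone idea amounts to a subtree walk: given the skeleton as a child adjacency list, everything at or below the channel's chosen root is overridden by that channel. A minimal sketch, with illustrative names:

```cpp
#include <cassert>
#include <vector>

// Collect every bone at or below the channel's chosen root bone. Those
// are the bones this channel's animation overrides.
void CollectSubtree(int bone,
                    const std::vector<std::vector<int>>& children,
                    std::vector<int>& out) {
    out.push_back(bone);
    for (int c : children[bone])
        CollectSubtree(c, children, out);
}
```

Rooting the second channel at the spine bone would branch the upper-body animation to the spine and all of its descendants while leaving the pelvis and legs to the first channel.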
I think this would work well for the MD5 format due to the way the data is saved. It's actually easier to process animation from scratch than the .x file format was. I still haven't given up on the .x format yet though. The only issue is that bones are calculated twice for the upper body. The data from the first channel is completely thrown out... Of course, I could add in blend modes and the blending factor between animations to give more options in case I wanted to support animation transitions between channels. Right now, I support my animation transitions per channel --each animation channel has a current animation pointer and a "next" animation pointer where the resulting animation between the current and next are determined by a transition factor if the animation is in transition. Otherwise, it just calculates the first animation.
"It's actually easier to process animation from scratch than the .x file format"
For a bit more advanced control - absolutely. It's unfortunate Microsoft chose not to provide a more easily manipulated interface. However, mesh and animation loading and animation control are no longer supported in DX11, so it's necessary to code these routines, in any case.
"I support my animation transitions per channel --each animation channel has a current animation pointer and a 'next' animation pointer"
That's an interesting approach. I haven't done that (yet) and will have to consider it. It may be more efficient than having 2 tracks per hierarchy to support transitioning between animations.
Having transitions on a per-animation-channel basis might be the best way to go, since each channel will have its own animation. One feature I don't really support is blending between two animations arbitrarily. For example, I transition between two animations with a constantly updating transition factor (0.0f - 1.0f, based on elapsed time over the transition time). What if I wanted to hold the factor at 0.5f, or at some other value driven by some type of input? If I wanted my character to blend between a walk animation and a run animation based on how far the analog stick is pushed, I'd use the stick's deflection magnitude as a factor between 0.0f and 1.0f and apply that as a constant transition factor between the two animations. Or I could just use my blend-mode operations to blend between two animations, like I considered. Since I've already written that code in the animation channel's Update(...) method for interpolating between the current and next animation during a transition, I'll have to write a global function that takes two animations and outputs a resulting skeleton, which can also handle blend operations... Maybe a private method instead, since OOP is stressed here, but I'm not sure yet.
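The walk/run idea above, mapping analog-stick deflection to a held blend factor instead of a time-driven one, is a one-liner. A sketch with an illustrative name:

```cpp
#include <cassert>
#include <cmath>

// Map analog-stick deflection to a blend factor in [0, 1]. 0 = full
// walk pose, 1 = full run pose; the factor is held, not time-driven.
float BlendFromStick(float x, float y) {
    float mag = std::sqrt(x * x + y * y);
    return mag > 1.0f ? 1.0f : mag;
}
```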
It sounds like you'd have to perform some surgery on your code. I have one animation set per track. Similar to ID3DXAnimationController, each track has flags for: enable/disable, transition-from and transition-to*. In addition, each track has a blend-factor and transition time. With that setup, you could do what you want by setting the transition flag on 2 tracks to false (which halts the transition time calc); set blend-factors on 2 tracks to 0.5 (or whatever ratio you want); then enable both tracks. Note: each track also has a hierarchy assigned to it, as mentioned above, and calculates animation matrices only for the bones in that hierarchy.
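The flag setup described above might look roughly like this; field names are illustrative, not from ID3DXAnimationController. "Holding" a 50/50 blend means switching off the transition timers and pinning both blend factors.

```cpp
#include <cassert>

// Per-track state, as described above: enable flag, transition flags,
// and a blend factor.
struct TrackState {
    bool  enabled = false;
    bool  transitioning = false; // transition-from / transition-to active
    float blendFactor = 0.0f;
};

// Hold a fixed 50/50 blend between two tracks: halt the transition time
// calculation, pin both blend factors, and enable both tracks.
void HoldFiftyFifty(TrackState& a, TrackState& b) {
    a.transitioning = b.transitioning = false;
    a.blendFactor = b.blendFactor = 0.5f;
    a.enabled = b.enabled = true;
}
```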
*transition-from reduces the blend-factor. transition-to increases the blend-factor. When the blend-factor is less than some small value, the track is disabled. I don't have all the bugs worked out yet, as I store animation frame data as quaternions for rotation and vectors for translations. For each track, I slerp the quats and lerp the vectors between keyframes. Then I slerp the quats and lerp the vectors between blended tracks. Then everything is converted to matrices to set into the effect for rendering.
In any case, with 4 tracks, you can set (e.g.) tracks 0 and 1 to "lower-body," and tracks 2 and 3 to "upper-body."
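The slerp step used in both stages above (between keyframes, then between blended tracks) can be sketched as follows. This is a standard quaternion slerp with a shortest-arc flip and a lerp fallback for nearly parallel rotations; the struct and names are illustrative.

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };

// Spherical linear interpolation between two unit quaternions.
Quat Slerp(Quat a, Quat b, float t) {
    float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    if (dot < 0.0f) { b = {-b.w, -b.x, -b.y, -b.z}; dot = -dot; } // shortest arc
    float wa, wb;
    if (dot > 0.9995f) {             // nearly parallel: plain lerp is fine
        wa = 1.0f - t; wb = t;
    } else {
        float theta = std::acos(dot);
        float s = std::sin(theta);
        wa = std::sin((1.0f - t) * theta) / s;
        wb = std::sin(t * theta) / s;
    }
    Quat r{wa*a.w + wb*b.w, wa*a.x + wb*b.x, wa*a.y + wb*b.y, wa*a.z + wb*b.z};
    float n = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
    return {r.w/n, r.x/n, r.y/n, r.z/n};
}
```

Converting the blended quaternions and translations to matrices only at the very end, as described, avoids matrix-blending artifacts.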