


#5260890 Software skinning for a precise AABB

Posted by Buckshag on 07 November 2015 - 10:52 AM

The best solution, good enough for 99% of cases, is to save the AABBs at different frame intervals (granted, you need to do software skinning first), store them (e.g. in the mesh file), and then linearly interpolate between those AABBs.


I wouldn't do that; just go for a static AABB as mentioned before.

What you can do is simply rotate your model in bind pose in some loop, and find the maximum AABB; this works pretty well.
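A rough sketch of that brute-force approach (the types and names here are mine, not from any particular engine): spin the bind-pose vertices around the up axis in small steps and grow one box that contains every rotated copy.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct AABB {
    Vec3 min {  1e30f,  1e30f,  1e30f };
    Vec3 max { -1e30f, -1e30f, -1e30f };
    void add(const Vec3& p) {
        min.x = std::min(min.x, p.x); min.y = std::min(min.y, p.y); min.z = std::min(min.z, p.z);
        max.x = std::max(max.x, p.x); max.y = std::max(max.y, p.y); max.z = std::max(max.z, p.z);
    }
};

// Rotate the bind pose around the up (Y) axis in small steps and grow one
// box that contains every rotated copy of the vertices. More axes or a
// finer step give a more conservative box at more precompute cost.
AABB maxRotatedAABB(const std::vector<Vec3>& bindPoseVerts, int steps = 64) {
    AABB box;
    for (int i = 0; i < steps; ++i) {
        const float a = 6.2831853f * (float)i / (float)steps;
        const float c = std::cos(a), s = std::sin(a);
        for (const Vec3& v : bindPoseVerts)
            box.add({ c * v.x + s * v.z, v.y, -s * v.x + c * v.z });
    }
    return box;
}
```

Since this runs once at export or import time, the cost of the loop does not matter at runtime.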


Storing AABBs and interpolating them is really overkill, and you would also need to pass them around your blend trees and handle all the blending and logic in there to get the right AABB. I don't think it is really worth it. At that point you could also just build the bounds from the transformed OBBs at the end; I doubt that is much slower once large blend trees are involved.

#5260118 Problems with exporting FBX files

Posted by Buckshag on 02 November 2015 - 08:48 AM

Are you using the same coordinate system as FBX?

What works for me is just storing the local transformations, but I have to convert into the right-handed coordinate system where negative z points into the depth, y points up, and x points to the right.


fbxNode->LclTranslation.Set( FbxVector4(pos.x, pos.y, -pos.z) );
fbxNode->LclRotation.Set( FbxVector4( Math::RadiansToDegrees(-rot.x), Math::RadiansToDegrees(-rot.y), Math::RadiansToDegrees(rot.z)));
fbxNode->LclScaling.Set( FbxVector4(scale.x, scale.y, scale.z) );

Also make sure to use degrees instead of radians in this function.


I was able to export full characters, including bone hierarchies etc., for some clients using this code, and they appeared correct in Max after importing the FBX, so I assume this should work for you as well, unless something else is going on.

#5259057 Recomputing AABBs for animated actors

Posted by Buckshag on 25 October 2015 - 08:53 PM

In EMotion FX I offer different ways to calculate the bounds.

1.) Based on the nodes' (bones') global-space positions

2.) Based on the mesh vertices in case software skinning is used

3.) Based on the bone OBBs, taking all corners into account

4.) Based on the bone OBBs, taking just the transformed min and max into account (faster, less accurate)

5.) Based on a static precomputed AABB, which is computed by calculating the mesh-based AABB and then rotating the character in all directions to find the maximum bounds.


Numbers 1 to 4 are dynamic and can be calculated once per frame or at another update rate. They require the bone transforms to be known.

Number 5 is the most practical one in games. It also allows you to process only motion extraction/root motion once the character walks off screen: you move this precomputed AABB with the character's position, so you do not need to calculate any bone transforms while the character is off screen. Only the motion extraction node needs to be updated, which can be a huge win in performance.


When using method 5, you can of course also use a sphere or capsule or any other shape.
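A minimal sketch of method 5 in use (the types are illustrative, not the EMotion FX API): the precomputed box simply follows the character's world position.

```cpp
struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// No bone transforms needed: the precomputed bind-space box just
// follows the actor's world position each frame.
AABB worldBounds(const AABB& precomputed, const Vec3& actorPos) {
    return { { precomputed.min.x + actorPos.x,
               precomputed.min.y + actorPos.y,
               precomputed.min.z + actorPos.z },
             { precomputed.max.x + actorPos.x,
               precomputed.max.y + actorPos.y,
               precomputed.max.z + actorPos.z } };
}
```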

#5179479 Animation states and blend trees

Posted by Buckshag on 10 September 2014 - 10:04 PM

It doesn't really matter how you name those parameters. It comes down to the same thing, just like you say :)


It is just important to keep in mind that your game code will set the parameter values, which the conditions will check.

In your case I would just have one parameter, "forward" or so.


And depending on the control scheme in your game you can set that value to either true or false. If you use a game controller, check the stick; if you use a keyboard, check the up-arrow key, for example. The state machine doesn't know about the keyboard or game controller, just about the parameter "forward".


Both the outgoing transition condition from idle to shuffle and from jump to lean forward can check the same "forward" parameter.
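To make that concrete, a small sketch (names like ParameterTable and BoolCondition are made up for illustration): the game only writes parameter values, and transition conditions only read them.

```cpp
#include <string>
#include <unordered_map>

// The game writes parameter values; the state machine's transition
// conditions only ever read them, so neither side knows about input devices.
class ParameterTable {
public:
    void setBool(const std::string& name, bool value) { m_bools[name] = value; }
    bool getBool(const std::string& name) const {
        auto it = m_bools.find(name);
        return it != m_bools.end() && it->second;
    }
private:
    std::unordered_map<std::string, bool> m_bools;
};

// A condition placed on a transition; several transitions can
// check the same parameter, e.g. "forward".
struct BoolCondition {
    std::string parameter;
    bool test(const ParameterTable& params) const { return params.getBool(parameter); }
};
```

The game-side code would do something like `params.setBool("forward", stickY > 0.1f || upKeyDown)` each frame.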

#5179365 Animation states and blend trees

Posted by Buckshag on 10 September 2014 - 10:40 AM

You indeed have shared data and unique data per character per anim graph node.

The play offset would be unique, as it is different per character, while the transition time would be shared. I do not clone the actual full state machines, though; that would also be possible, but it is less memory efficient, as there is a lot of data that can be shared.


The game sets parameter values; for example, a "button_up_pressed" parameter could be a boolean. The state machine can then make a transition when that button is pressed, using a condition that checks for that boolean to become true. At least, that is one way to do it.


Your XML would work for describing a simple state, although there can be more state types, not just ones with an animation.

Also, you need a set of conditions on each transition, rather than just an actionEventType condition, so I would make that part more flexible.

#5178862 Animation states and blend trees

Posted by Buckshag on 08 September 2014 - 08:45 AM

Yes, that sounds correct :)


States can be anything that outputs a pose without needing a real input, other than some settings you can configure for that state (like which motion to use).

So anything that 'generates' an output pose can act as a state. Instead of a predefined motion node you can also have, say, a procedurally generated motion node that does some physics-based walking motion.


These states can be connected with transitions. Transitions can have conditions on them, which define when they should be activated. For example, if you have an idle state and a move state, you can make a transition from Idle to Move and put a condition on it that triggers when the character's 'speed' parameter becomes bigger than zero.


So when your game then passes a value of, say, 0.5 for speed, the transition from idle to move is made automatically. During the transition the output poses of both the idle and move states are calculated, and a blend is done between them.


The move state could be a blend tree. Blend trees have some final node to which you connect. This final node then represents the output pose of the blend tree.


A state can also be another state machine. So the output of that state would be the output of the state machine it represents.

This way you can build hierarchies. I think Unity 5 will also support that. I think it's amazing they didn't/don't support hierarchies yet, as without them large graphs become almost impossible to manage.


I have seen clients of our animation middleware EMotion FX with graphs of over 1500 nodes. Imagine having to place all of that in one state machine; it would end up as one big spaghetti of transitions :)


Btw, to answer your question about whether you over-complicate the use of blend trees: you seem to pick between different motions quite frequently.

In theory you wouldn't need a blend tree for that, but in practice a blend tree can do it as well.


You could either make it a state machine that picks the right motion or pose based on, say, what kind of stance you are in, or you could make a blend tree with a blend node that takes multiple input motions, where the weight represents which motion to pick. The blend node can automatically blend between them. Or you make a node that picks a given motion based on an input value, without blending.


For your jump types you would most likely want a state machine rather than a blend tree, as you are not likely to blend between different types of jumps.
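The weight-driven blend node described above could look roughly like this (a toy sketch where a pose is just a vector of floats standing in for bone transforms): integer weights pick a single input exactly, fractional weights blend between two adjacent inputs.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Toy pose type: one float per bone stands in for a full transform,
// just to show the data flow of a weight-driven blend node.
using Pose = std::vector<float>;

// weight in [0, inputs.size() - 1] selects a pair of adjacent inputs
// and blends between them; needs at least 2 inputs.
Pose blendNode(const std::vector<Pose>& inputs, float weight) {
    // Clamp so that lo and lo + 1 are always valid indices.
    const int lo = std::max(0, std::min((int)inputs.size() - 2,
                                        (int)std::floor(weight)));
    const float t = weight - (float)lo;
    Pose out(inputs[lo].size());
    for (size_t i = 0; i < out.size(); ++i)
        out[i] = inputs[lo][i] * (1.0f - t) + inputs[lo + 1][i] * t;
    return out;
}
```

A pick-without-blending node would instead just round the weight and return the chosen input unchanged.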

#5178797 Animation states and blend trees

Posted by Buckshag on 07 September 2014 - 10:47 PM

Generally you have a state machine that has different states. These states can be either for example a single motion, another state machine, or a blend tree.

This way you can build hierarchical systems to keep things much clearer and easier to understand visually.


Mostly you would see an idle state, which can be a state machine in which some random motion is picked, for example. You can also make it a blend tree in which you place a state machine. The output of that state machine, which picks the idle motion, can then act as input to, say, a lookat node, which makes the character look at a given goal, allowing you to modify the idle motions.


Then next to Idle there can also be some Moving state, which is most likely a blend tree in which you blend different animations together, based on speed, turn angle, etc.


So to answer your first question: no, in modern games you wouldn't really put everything in one blend tree or state. Technically you could, but you would make it hierarchical, where your blend tree can contain state machines as well. Generally you have one root state machine, in which you place things like Idle, Moving, Attacking, etc.


I hope that also answered your question about how state machines relate to blend trees; if not, please explain what exactly you mean in further detail :)
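The "a state is anything that outputs a pose, and a state machine is itself a state" idea can be sketched like this (hypothetical types, not any real middleware API):

```cpp
#include <memory>
#include <vector>

// Stand-in pose; real code would hold per-bone transforms.
struct Pose { float value = 0.0f; };

// Anything that can produce an output pose can act as a state:
// a motion node, a blend tree, a procedural generator, ...
struct State {
    virtual ~State() = default;
    virtual Pose output() = 0;
};

// A state machine is itself a state, so machines nest to arbitrary
// depth: its output is simply the output of the active child state.
struct StateMachine : State {
    std::vector<std::unique_ptr<State>> states;  // owned child states
    State* active = nullptr;                     // currently active child
    Pose output() override { return active->output(); }
};
```

A blend tree would be just another `State` subclass whose `output()` evaluates its final node.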

#5110448 How to compute the bounding volume for an animated (skinned) mesh ?

Posted by Buckshag on 19 November 2013 - 06:46 AM

It depends a bit on how your animation system works, but what works well for static boxes is to just take the bounds of the mesh, take the maximum of width/height/depth, and size the box in all dimensions with that amount. So the same as mentioned above, but without using any keyframes.
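That static-box trick might be sketched like this (illustrative types; note it is only an approximation, since a wide character spinning in place can still poke out of the cube along a horizontal diagonal):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// Grow the bind-pose box into a cube whose edge is the largest of the
// three extents, centered on the original box, as described above.
AABB conservativeCube(const AABB& meshBounds) {
    const float sx = meshBounds.max.x - meshBounds.min.x;
    const float sy = meshBounds.max.y - meshBounds.min.y;
    const float sz = meshBounds.max.z - meshBounds.min.z;
    const float half = 0.5f * std::max(sx, std::max(sy, sz));
    const float cx = 0.5f * (meshBounds.min.x + meshBounds.max.x);
    const float cy = 0.5f * (meshBounds.min.y + meshBounds.max.y);
    const float cz = 0.5f * (meshBounds.min.z + meshBounds.max.z);
    return { { cx - half, cy - half, cz - half },
             { cx + half, cy + half, cz + half } };
}
```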


Some methods of calculating bounds that I implemented:

1.) Contain all node/bone global positions

2.) Contain all (skinned) vertex positions

3.) Contain all OBBs of each node/bone

4.) Static box that moves with the position of the actor (method as described above)


The method using keyframes would only work if you do not blend or combine motions and do not do any procedural things; otherwise you would have to pass your full motion database through it, which is quite overkill. Also, if you choose to go with this method, you will most likely sample your animation curves when importing them from the software you exported with. For example, if you load an FBX or use an exporter for Max or Maya, you sample the transforms every n frames and possibly optimize that data. This means it is highly unlikely that you would hit a case where you export just 2 keyframes for a 360 degree rotation.

#4868559 Programmatic walk animation

Posted by Buckshag on 03 October 2011 - 08:12 AM

These kinds of animation systems are often called physically based animation systems.
You can see it as a physics ragdoll which also has specified muscles, which are basically motors.
So let's say you have a knee joint that can rotate a given number of degrees over one axis, and you specify the same for a bunch of other joints.

Now you specify a goal to reach. For example, to make the character walk, you say something like:
"every time one of your feet hits the ground, you get 1 penalty point, and for every meter you move forward, you get 1 reward point."

Now from this point on there are several possibilities. Different AI algorithms can be used to iteratively try to find an optimal solution that achieves the maximum reward. With each iteration, also called a learning or training phase, the character will walk better. In the beginning it will start with random rotational forces on the motors and fall straight to the ground, but slowly it will maximize the reward and your character can walk.
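The reward rule above is just a scalar score of what happened during one simulated attempt; as a toy illustration (not a real trainer):

```cpp
// Toy scoring of one training episode using exactly the rule above:
// 1 reward point per meter moved forward, 1 penalty point per
// foot-ground contact. A real physically based controller would
// optimize motor torques against a richer version of such a score.
float episodeReward(float metersForward, int footGroundHits) {
    return metersForward - (float)footGroundHits;
}
```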

As you can imagine though, it is pretty tricky to use this in games, because each game might need different "behaviors", or however you wish to call them. A walking behavior needs to be trained with a given reward structure, but a balancing behavior would need something else again. Also, the walk animation won't look as natural as a motion-captured animation.

This kind of technology is often used in robotics, where people train their robots in offline simulations so that they don't damage the real robots. I've also seen it being used by dinosaur researchers, who try to figure out how specific dinosaurs would have walked/moved, and how fast they could have been.

Some things you might google for are "physically based character animation", "machine learning" and "reinforcement learning".

I hope this information helps a bit with getting you started :)

#4866134 Skinning Issues

Posted by Buckshag on 26 September 2011 - 10:58 AM

m_skeleton.m_joints[parent].m_worldMatrix * m_transformMatrices[iJoint][m_transformIndex];

Did you try switching these two as well? So like

m_transformMatrices[iJoint][m_transformIndex] * m_skeleton.m_joints[parent].m_worldMatrix

Can you draw your skeleton in lines? You first need to be sure your bone transformations in world space are correct.
Then we can see whether the problem is in the skinning algorithm or already in the bone transforms.

If you really can't figure it out maybe I can give it a debugging try if you can send the code.

#4866025 Skinning Issues

Posted by Buckshag on 26 September 2011 - 06:00 AM

The description of what you wrote seems correct.
A simple idea, just to double check, but did you try changing your multiplication from: worldMatrix * inverseBindPoseMatrix into inverseBindPoseMatrix * worldMatrix ?

And do you pass them correctly to the shader? Row- vs column-major, in case you do the skinning on the GPU.
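For reference, a small sketch of both conventions (`Mat4` here is a plain row-major array, not your engine's type):

```cpp
#include <array>

using Mat4 = std::array<float, 16>;  // row-major: m[row * 4 + col]

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
}

// With row vectors (v * M) the bind pose is undone first:
//     skin = inverseBindPose * world
// With column vectors (M * v) the same product reads the other way:
//     skin = world * inverseBindPose
// If the math library and the shader also disagree on row- vs
// column-major *storage*, transpose once more before uploading.
Mat4 skinMatrixRowVectors(const Mat4& inverseBindPose, const Mat4& world) {
    return mul(inverseBindPose, world);
}
```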

#4858627 fading between animations in using vertex skinning

Posted by Buckshag on 07 September 2011 - 07:54 AM

You are right about the slerping, and this is generally done on the CPU.
What you basically do is slerp or normalized lerp (nlerp) between the quaternions in local space.
With local space I mean that the bone rotations and positions should be relative to their parent bone.
Slerping is quite expensive, so often a normalized lerp is used. It performs very well.

Animation can still be one of the most CPU intensive things in games, depending of course on the complexity of the blends and amount of characters/bones etc.
In new games some characters will have very complex blend trees, with over 50-100 operations such as blends, sampling motions, inverse kinematics, etc.

To blend two animations, basically the steps are as follows:
1.) Sample the animation local space transformations in quats and position vectors
2.) Sample animation 2 the same way
3.) nlerp/slerp between those local transformations (from motion 1 and 2)
4.) build a local space matrix for each bone, from the position and rotation
5.) perform forward kinematics on the local space matrices to calculate the world space matrices
6.) perform skinning (can be done on gpu)
7.) render (gpu)
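Steps 3 and 4 hinge on the per-bone interpolation; the nlerp mentioned above can be sketched as follows (a minimal version; the sign flip via the dot product keeps the blend on the shorter arc):

```cpp
#include <cmath>

struct Quat { float x, y, z, w; };

// Step 3 above: normalized lerp between two local-space rotations.
// If the dot product is negative, flip one input so the blend takes
// the shorter arc, then renormalize the straight lerp result.
Quat nlerp(const Quat& a, const Quat& b, float t) {
    const float dot = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
    const float sign = (dot < 0.0f) ? -1.0f : 1.0f;
    Quat r { a.x + t * (sign * b.x - a.x),
             a.y + t * (sign * b.y - a.y),
             a.z + t * (sign * b.z - a.z),
             a.w + t * (sign * b.w - a.w) };
    const float len = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z + r.w * r.w);
    return { r.x / len, r.y / len, r.z / len, r.w / len };
}
```

Positions can use a plain lerp; step 5 then turns the blended local transforms into world-space matrices via forward kinematics.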

To blend between two motions you simply change your interpolation value.
The easiest system for this is to make a set of layers, each layer containing one animation.
Then you start at the bottom layer, blend towards the layer on top of it, and blend the output of that again with the layer above that.

You can also go more advanced and build a more complex blend graph. But a simple transition between two motions is nothing more than the slerping between the local transforms of 2 motions.

Hope that helps a bit, if not just ask :)

#4808132 Tips to cache skeletal animation frames

Posted by Buckshag on 08 May 2011 - 10:33 AM

You should have a look into skinning inside a vertex shader.
That way you can push a lot more characters, and you do not need Cal3D to process the vertex data each update, but only let it update the bone transformation matrices.

I don't know Cal3D, but I assume it is possible with that library.

Or are there any specific reasons why you want to do this on the CPU instead of on the GPU?

#4807773 Simple universal mesh file format recommendations

Posted by Buckshag on 07 May 2011 - 01:48 PM

I think you have the option between the following formats:

1.) .obj, which is very limited in what it can store, but it might be enough for your file format, as that seems pretty simple as well.
2.) Collada, as you mentioned.
3.) FBX, using the FBX SDK.

I think these are the three most supported formats out there.
Pretty much all 3D software should support those.

The .obj file format is probably the easiest to implement.

#4805085 [solved] Remap Rotation Matrix Axis System?

Posted by Buckshag on 01 May 2011 - 10:08 AM

You can solve it in different ways.

You can try to multiply the device matrix with a conversion matrix to convert it into your coordinate system.

Try multiplying it with this matrix:
1  0  0
0  0  1
0 -1  0

Another way would be to just swap the second and third rows and columns in the matrix and negate the z axis inside it.

Edit: It should be second and third rows AND columns, not just the rows. Just in case anyone ever reads this again :)
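One way to apply the conversion consistently is to conjugate the whole matrix. Whether you need C * M * C^T or C^T * M * C depends on which way the device axes map to yours, so treat this as a sketch; since this C is a pure rotation, its inverse is its transpose.

```cpp
#include <array>

using Mat3 = std::array<float, 9>;  // row-major: m[row * 3 + col]

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i * 3 + j] += a[i * 3 + k] * b[k * 3 + j];
    return r;
}

Mat3 transpose(const Mat3& m) {
    return { m[0], m[3], m[6],
             m[1], m[4], m[7],
             m[2], m[5], m[8] };
}

// The conversion matrix from the post. Conjugating with it
// (C * M * C^T) swaps the second and third rows and columns and
// negates the entries that mix in the old z axis.
const Mat3 kConvert = { 1.0f,  0.0f, 0.0f,
                        0.0f,  0.0f, 1.0f,
                        0.0f, -1.0f, 0.0f };

Mat3 remapAxes(const Mat3& deviceMatrix) {
    return mul(mul(kConvert, deviceMatrix), transpose(kConvert));
}
```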