Buckshag

Member

  1. Buckshag

    EMotion FX 4 vs IKinema

    EMotion FX is a full animation system. It is used in games to animate and customize characters, etc. Basically it tells you the positions and rotations of the bones in your character or robot, based on the animations you play and how you blend them together. So if you have a game and want to add animated characters to it, like a character you control with a game controller, then EMotion FX is a good choice. You tell it something like: if I press this button on the controller, play an animation that makes the character jump.

    IKinema is an Inverse Kinematics solver. You tell it where you want the hands of your character (or robot) to be, and it will calculate the pose of the bones. So if you try to modify an existing pose or make the robot reach for something, that is what IKinema does. It works the other way around (hence the Inverse): it does not really animate your character, but lets you adjust and correct poses so that given bones reach given positions or rotations. There is a minimal sketch of the idea below. EMotion FX also has Inverse Kinematics solvers inside, but they are not as good as IKinema, which is completely specialized in that.

    Generally speaking, if you want to tell your robot what to do, IKinema would be more interesting for you. You specify exactly where you want the legs etc., and IKinema can nicely calculate the pose of your robot required to reach that goal. Unfortunately it will most likely be a bit more complex than that, as you probably also have to do it physically in a way so that the robot doesn't fall over and lose its balance. In games we don't really have that problem unless we run a full physics simulation to make a character walk, which would make it similar to making a real robot walk :)

    Hope that helps a bit.
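    Just to illustrate what the "inverse" part means, below is a minimal two-bone planar IK solver in C++. It is only a sketch of the general idea (law-of-cosines solving for an elbow/knee style joint), not how IKinema or EMotion FX actually implement their solvers, and the function and struct names are made up for this example.

        #include <cmath>
        #include <algorithm>

        // Minimal 2D two-bone IK: given upper/lower bone lengths and a target point
        // (relative to the root joint), compute the two joint angles so the end of the
        // chain reaches the target. Real solvers work in 3D on full chains with constraints.
        struct TwoBoneIKResult { float rootAngle; float midAngle; };

        TwoBoneIKResult SolveTwoBoneIK(float len1, float len2, float targetX, float targetY)
        {
            const float distSq = targetX * targetX + targetY * targetY;

            // Law of cosines for the middle joint (elbow/knee), clamped so an
            // unreachable target just fully stretches the chain.
            float cosMid = (distSq - len1 * len1 - len2 * len2) / (2.0f * len1 * len2);
            cosMid = std::clamp(cosMid, -1.0f, 1.0f);
            const float midAngle = std::acos(cosMid);

            // Root joint angle: direction to the target, minus the offset caused by the bend.
            const float rootAngle = std::atan2(targetY, targetX) -
                                    std::atan2(len2 * std::sin(midAngle), len1 + len2 * std::cos(midAngle));
            return { rootAngle, midAngle };
        }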
  2. Buckshag

    Character Animation book

    It doesn't really explain things in C++ or so, but some of the basics of animation are described in this book. I do not have the following books, but a quick search on Amazon gave this book and this book.
  3. Hi Dirk :)

    The Update of states (which are essentially EMotion FX graph nodes) updates things internally, depending on the node. For example a motion node could update its current play timer. It is a bit more complex than this though, with hierarchical synchronization etc., but that is essentially what it does. It also outputs non-pose values inside the updates. For example math nodes need their values calculated already during the update stage, because we optimize the graph output by not processing nodes that aren't needed. For example if the weight of some IK node is 0 (calculated by some math nodes), then we do not calculate the full IK on the pose.

    The Output does the real heavy per-bone calculations, outputting an actual pose that gets blended etc.

    Regarding conditions being checked, these are only the ones on the transitions originating from the current state, plus wildcard transitions. If multiple transitions can be taken, we pick the one with the highest priority value. We also allow transitions to be interrupted, and some other features. We can also trigger them manually, but that is mostly just used in the editor, or to force the game into some given state. Normally it is all automated.

    The evaluation happens inside the Output. Update only updates things like timers etc. and which transition is active. There is no caching of that pose. I do pool all poses though, so transitions and nodes do not store poses, as that eats more and more memory the more nodes you have. Also I can share the animation graphs among different instances, each having their own parameter values and internal node states. For example each instance can be in a different current state or transition in a given state machine. A small sketch of that sharing follows below.

    The active transition pointer can be null; in that case you still want to know the current state, which is probably why it is stored twice in EMFX. It was written about 10 years ago, so I cannot remember the exact details right now. But surely it is not perfect, so perhaps not all of it would be needed :)

    Cheers,
    - John
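    Here is a rough sketch of that graph sharing in C++. All the names (AnimGraph, AnimGraphInstance, NodeUniqueData) are made up for illustration and this is not the actual EMotion FX code; it only shows the split between the shared, read-only graph and the per-character instance data.

        #include <vector>

        struct NodeUniqueData            // per-node, per-instance state: play timers, current state, ...
        {
            float playTime = 0.0f;
        };

        class AnimGraph                  // shared by many characters: nodes, transitions, connections
        {
        public:
            size_t GetNumNodes() const { return m_numNodes; }
        private:
            size_t m_numNodes = 0;
        };

        class AnimGraphInstance          // one per character
        {
        public:
            explicit AnimGraphInstance(const AnimGraph& graph)
                : m_graph(&graph)
                , m_uniqueDatas(graph.GetNumNodes())   // every node gets its own state per instance
            {
            }

            std::vector<float> m_parameterValues;      // per-character parameter values
        private:
            const AnimGraph*            m_graph;       // the shared graph layout
            std::vector<NodeUniqueData> m_uniqueDatas; // mutable state lives here, not in the graph
        };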
  4. What I do in EMotion FX is have each state output a pose. A state can be a simple motion, a state machine, or a blend tree (or anything else that generates a pose). Between the states are transitions with conditions on them, so exactly as you describe.

    The state machine holds a current state, a target state and the current transition. When a transition is active, the transition pointer points to it. The transition object outputs a pose as well and can mark itself finished. I then output the pose of the transition, but the state machine still has the source state as its current state. Once the transition has fully completed, the state machine's current state becomes the transition's target state.

    There are separate Update and Output calls in the graph. The Update call updates the node and transition timers, so it performs a tick and does the actual current-state changes once transitions finish etc. The Output calculates the output pose of the root state machine, processing everything hierarchically. You only update and output the current and possibly the target state inside the state machines. A rough sketch of this is shown below.

    Hope that helps.
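    A rough C++ sketch of that state machine logic follows. It is not the real EMotion FX code, the class names are invented for the example, and condition checking, priorities and wildcard transitions are left out; it only shows how the current state, target state and active transition interact in Update and Output.

        class Pose;

        class State
        {
        public:
            virtual ~State() = default;
            virtual void Update(float timeDelta) = 0;   // tick timers etc.
            virtual void Output(Pose& outPose) = 0;     // produce this state's pose
        };

        class Transition
        {
        public:
            State* GetTargetState() const;
            bool   IsDone() const;                      // the transition marks itself finished
            void   Update(float timeDelta);
            void   Output(Pose& outPose);               // blended pose between source and target
        };

        class StateMachine
        {
        public:
            void Update(float timeDelta)
            {
                m_currentState->Update(timeDelta);
                if (m_activeTransition)
                {
                    m_activeTransition->GetTargetState()->Update(timeDelta);
                    m_activeTransition->Update(timeDelta);

                    // Once the transition finishes, its target becomes the current state.
                    if (m_activeTransition->IsDone())
                    {
                        m_currentState     = m_activeTransition->GetTargetState();
                        m_activeTransition = nullptr;
                    }
                }
                // (checking conditions of the current state's transitions and wildcard
                //  transitions, and picking the highest priority one, is omitted here)
            }

            void Output(Pose& outPose)
            {
                if (m_activeTransition)
                    m_activeTransition->Output(outPose);   // pose of the active transition
                else
                    m_currentState->Output(outPose);       // pose of the current state
            }

        private:
            State*      m_currentState     = nullptr;
            Transition* m_activeTransition = nullptr;
        };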
  5. I think it depends on the situation.

    You should never use it inside a .h file in my opinion. Code readability goes down a bit by not using it, but it is more clear what classes belong to what namespaces, so maybe that compensates for it again.

    If your own engine only uses one core lib, for example, it is perfectly fine in my eyes to do "using namespace CoreLib" inside your .cpp files. If you later plan to add support for another library that happens to have the same class name, you have to explicitly qualify that one.

    I personally prefer not to use "using namespace", as then you are always safe (unless there is a lib with the same namespace name lol). Generally it should be avoided, but in some cases it is fine, as in the example below.
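    A tiny example of what I mean (the file names and the MakeWindowTitle function are just made up for illustration):

        // MyMath.h -- don't do this: the directive leaks into every file that includes the header.
        // using namespace CoreLib;

        // Renderer.cpp -- acceptable: the directive only affects this translation unit.
        #include <string>
        using namespace std;            // or, more targeted: using std::string;

        string MakeWindowTitle()
        {
            return "My Game";
        }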
  6. Buckshag

    Looking for a free 2D engine

    You should have a look at cocos2d-x, I think it is quite nice: http://www.cocos2d-x.org/ It has a very easy to use API, very straightforward. It also has an editor, but I haven't really used that, so I can't tell how good or bad it is.
  7. I wouldn't do that, just go for a static AABB as mentioned before. What you can do is simply rotate your model in bind pose in a loop and find the maximum AABB; this works pretty well (see the sketch below).

    Storing AABBs and interpolating them is really overkill, and you then also need to pass them around your blend trees and handle all the blending and logic in there to get the right AABB. It isn't really worth it I think. At that point you could also just build the bounds from the transformed OBBs at the end; I doubt that is much slower than this approach once large blend trees are involved.
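    A small C++ sketch of that "rotate the bind pose in a loop" idea, using simplified stand-in Vec3/AABB types (your engine's own math types would replace these). For simplicity it only rotates around the up axis; you could sample more orientations if needed.

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };

        struct AABB
        {
            Vec3 min{  1e30f,  1e30f,  1e30f };
            Vec3 max{ -1e30f, -1e30f, -1e30f };
            void Encapsulate(const Vec3& p)
            {
                min.x = std::min(min.x, p.x); min.y = std::min(min.y, p.y); min.z = std::min(min.z, p.z);
                max.x = std::max(max.x, p.x); max.y = std::max(max.y, p.y); max.z = std::max(max.z, p.z);
            }
        };

        // Rotate the bind pose vertices around the up axis in small steps and grow one
        // AABB over all rotations. The result is a conservative box you can simply
        // translate with the character at runtime.
        AABB ComputeStaticMaxAABB(const std::vector<Vec3>& bindPoseVertices, int numSteps = 64)
        {
            AABB result;
            for (int i = 0; i < numSteps; ++i)
            {
                const float angle = (2.0f * 3.14159265f * i) / numSteps;
                const float s = std::sin(angle), c = std::cos(angle);
                for (const Vec3& v : bindPoseVertices)
                {
                    // rotate around Y (up), height stays the same
                    result.Encapsulate({ c * v.x + s * v.z, v.y, -s * v.x + c * v.z });
                }
            }
            return result;
        }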
  8. Buckshag

    Problems with exporting FBX files

    Local transformations are relative to their parent indeed.
  9. Buckshag

    Problems with exporting FBX files

    Are you using the same coordinate system as FBX? What works for me is storing just the local transformations, but I have to convert into the right-handed coordinate system where negative Z points into the depth, Y points up and X to the right.

        fbxNode->LclTranslation.Set( FbxVector4(pos.x, pos.y, -pos.z) );
        fbxNode->LclRotation.Set( FbxVector4( Math::RadiansToDegrees(-rot.x),
                                              Math::RadiansToDegrees(-rot.y),
                                              Math::RadiansToDegrees(rot.z) ) );
        fbxNode->LclScaling.Set( FbxVector4(scale.x, scale.y, scale.z) );

    Also make sure to use degrees instead of radians in these calls.

    I was able to export full characters including bone hierarchies etc. for some clients using this code, and they appeared correct in Max after importing the FBX, so I assume this should work for you as well, unless something else is going on.
  10. In EMotion FX I offer different ways to calculate the bounds:

    1.) Based on the nodes' (bones') global space positions
    2.) Based on the mesh vertices, in case software skinning is used
    3.) Based on the bone OBBs, taking all corners into account
    4.) Based on the bone OBBs, taking just the transformed min and max into account (faster, less accurate)
    5.) Based on a static precomputed AABB, which is calculated by computing the mesh-based AABB and then rotating the character in all directions to find the maximum bounds

    Numbers 1 to 4 are dynamic and can be calculated once per frame or at another update rate; they require the bone transforms to be known. Number 5 is the most practical one in games. It also allows you to only process motion extraction / root motion once the character walks off screen: you move this precomputed AABB with the character's position, so you do not need to calculate any bone transforms while the character is off screen. Only the motion extraction node needs to be calculated, which can be a huge win in performance. A sketch of that flow is shown below.

    When using method 5, you can of course also use a sphere, capsule or any other shape.
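    To make the off-screen part concrete, here is a purely illustrative C++ sketch of that flow. Every type and function in it (Character, Camera::IsVisible, UpdateMotionExtractionOnly, UpdateAndOutput, GetExtractedDeltaPosition) is a made-up stand-in rather than a real EMotion FX or engine API; it only shows the control flow of culling with the translated static AABB.

        struct Vec3 { float x = 0, y = 0, z = 0; };
        struct AABB { Vec3 min, max; };

        struct AnimGraphInstance
        {
            void UpdateMotionExtractionOnly(float timeDelta);  // advance root motion only (hypothetical)
            void UpdateAndOutput(float timeDelta);             // full graph update + pose output (hypothetical)
            Vec3 GetExtractedDeltaPosition() const;            // root motion delta this frame (hypothetical)
        };

        struct Character
        {
            Vec3              position;
            AABB              staticBounds;      // precomputed max AABB, in character-local space
            AnimGraphInstance animGraph;
        };

        struct Camera
        {
            bool IsVisible(const AABB& worldBounds) const;     // hypothetical frustum test
        };

        AABB Translate(const AABB& box, const Vec3& p)
        {
            return { { box.min.x + p.x, box.min.y + p.y, box.min.z + p.z },
                     { box.max.x + p.x, box.max.y + p.y, box.max.z + p.z } };
        }

        void UpdateCharacter(Character& character, const Camera& camera, float timeDelta)
        {
            // No bone transforms needed for culling: just move the precomputed box.
            const AABB worldBounds = Translate(character.staticBounds, character.position);

            if (!camera.IsVisible(worldBounds))
            {
                // Off screen: only advance motion extraction so the character still ends
                // up in the right place; skip all per-bone pose calculations.
                character.animGraph.UpdateMotionExtractionOnly(timeDelta);
            }
            else
            {
                character.animGraph.UpdateAndOutput(timeDelta);  // full update and pose output
            }

            const Vec3 delta = character.animGraph.GetExtractedDeltaPosition();
            character.position = { character.position.x + delta.x,
                                   character.position.y + delta.y,
                                   character.position.z + delta.z };
        }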
  11. Yes I know, but these 10-meter tentacles are maybe 0.001% of the characters you have, an exception. Just wanted to point that out. Also I assume you have configurable compression settings, so you can always either disable compression for such a character or make it less aggressive.
  12. If the tentacles were going so crazy that the result didn't look like what you made in Maya, then the compression settings were probably quite a bit too extreme :)

    Generally motions look fine without taking the hierarchy into account when optimizing them with keyframe reduction or similar. It depends a lot on the settings, but at decent optimization rates the motion should look just fine. Maybe a tiny bit of foot sliding will happen, but you can improve that by taking parent transforms into account. Definitely good to point that out indeed. Also, most of the time an error on the legs/feet is more visually disturbing than one on, say, the arms, because you will directly notice sliding feet, while you might not notice that the hands are not at the very exact locations in an idle motion.

    There are a few other techniques you can apply as optimizations, such as motion and skeletal LOD. You can also do some caching when doing keyframe lookups if they are not uniformly sampled, and you can simplify certain calculations when no scale is involved etc., although you kind of mentioned some of those in the "do not transform everything" part. You have to watch out that the cost of figuring out whether something changed is not higher than just calculating it, though. Optimizations in animation graphs / blend trees / networks could also be added. A small keyframe reduction sketch follows below.

    Maybe a next step could be to go into more detail on some of the things you mentioned, so people can see how to implement them.

    Anyway, good job on the article :)
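    For readers who have not implemented keyframe reduction before, here is a minimal C++ sketch of the basic idea on a single float track: drop a key when linearly interpolating its neighbours reproduces it within some error tolerance. Real implementations work per position/rotation/scale component and, as discussed above, ideally measure the error with the hierarchy taken into account; the struct and function names are just for the example.

        #include <cmath>
        #include <vector>

        struct Key { float time; float value; };

        // Greedy reduction: a key is dropped when the value obtained by interpolating
        // between the previously kept key and the next original key is close enough.
        std::vector<Key> ReduceKeys(const std::vector<Key>& keys, float maxError)
        {
            if (keys.size() <= 2)
                return keys;                        // always keep the end points

            std::vector<Key> result;
            result.push_back(keys.front());

            for (size_t i = 1; i + 1 < keys.size(); ++i)
            {
                const Key prev = result.back();     // last key we decided to keep
                const Key next = keys[i + 1];

                // Value we would get at keys[i].time if keys[i] were removed.
                const float t = (keys[i].time - prev.time) / (next.time - prev.time);
                const float interpolated = prev.value + t * (next.value - prev.value);

                if (std::fabs(interpolated - keys[i].value) > maxError)
                    result.push_back(keys[i]);      // removing it would change the curve too much
            }

            result.push_back(keys.back());
            return result;
        }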
  13. Ah sorry, I misunderstood about the blending then; you are right, you also mentioned the conversion. I was a bit sleepy :) Let us know how it goes! :)
  14. That's why I wanted to say that the animation values are relative to the parent, and that mostly the X or Z axis (in DirectX default coordinates) points toward the child node. So it is not always the Y that you expect to move up and down.

    You might want to consider doing your blending in local space rather than world space. Also, I'm not sure blending matrices is very efficient, or even correct.

    Next to that, you have to store a full matrix per keyframe, which is pretty big. Imagine you are only translating 10 units to the right at constant speed: you would basically need only two Vector3 keyframes (assuming you do not split your position into 3 separate tracks), while now you might need, say, 30 full matrices. The small example below illustrates the difference.

    But yeah, that's a bit off-topic now, and if it works for your game and doesn't give you any memory or performance issues, then that's all ok :)
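    A small illustration of that storage difference, with made-up minimal types (the byte counts assume tight packing and 4-byte floats):

        struct Vec3      { float x, y, z; };
        struct PosKey    { float time; Vec3 value; };   // ~16 bytes per key
        struct MatrixKey { float time; float m[16]; };  // ~68 bytes per key

        // A constant-speed move of 10 units to the right over 1 second only needs
        // two position keys; any time in between is a simple lerp.
        const PosKey gTrack[2] = { { 0.0f, {  0.0f, 0.0f, 0.0f } },
                                   { 1.0f, { 10.0f, 0.0f, 0.0f } } };

        Vec3 SamplePosition(float t)
        {
            const float u = (t - gTrack[0].time) / (gTrack[1].time - gTrack[0].time);
            return { gTrack[0].value.x + u * (gTrack[1].value.x - gTrack[0].value.x),
                     gTrack[0].value.y + u * (gTrack[1].value.y - gTrack[0].value.y),
                     gTrack[0].value.z + u * (gTrack[1].value.z - gTrack[0].value.z) };
        }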
  15. I'm not entirely sure what you mean with "animation matrix", but I assume this is the matrix that represents the current frame's offset transformation relative to the parent node. That is also why you will not see a Y value for your calves etc.: most likely they move in the X or Z direction, depending on how the hierarchy has been set up. Depending on the art software (and possibly which plugins) used, the X or Z axis will probably point to the child/parent node.

    The standard way of doing animation is to decompose those matrices into pos/rot/scale, so for example a position vector, a rotation quaternion and a scale vector. You put those into key tracks and interpolate between them, then rebuild the local space matrices (the ones relative to the parent) from those individually interpolated and blended transform components. Once you have the local space matrices you can calculate the world space matrices as I mentioned.

    I never did anything with Collada, so I'm not sure what space the animation data is in there, but it is most likely this relative-to-parent space. The root node has no parent, so it will be the full world transform, which is why you would see 85 in Y on the root. Most likely you know all this already, but I wanted to be sure we're on the same page :)

    All you need to do is find out which nodes to follow from the root towards the node you want the world position of, then do the loop with the local * parentWorld matrix and get the translation from the resulting matrix. That gives you the world/global space position. There is a small sketch of that loop below.

    From what I understand you are already calculating those matrices before multiplying them with the inverse bind pose matrix and passing them to the shader. If that is the case, it is exactly the same as that.
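    A C++ sketch of that loop, with simplified stand-in Matrix/Node types (the real thing would use your engine's math library). It assumes a DirectX-style row-vector convention so that child.world = child.local * parent.world, matching the "local * parentWorld" above; if your math library uses the opposite convention, flip the multiplication order.

        #include <vector>

        struct Matrix
        {
            float m[4][4];

            static Matrix Identity()
            {
                Matrix r{};
                r.m[0][0] = r.m[1][1] = r.m[2][2] = r.m[3][3] = 1.0f;
                return r;
            }

            Matrix operator*(const Matrix& rhs) const
            {
                Matrix r{};
                for (int row = 0; row < 4; ++row)
                    for (int col = 0; col < 4; ++col)
                        for (int k = 0; k < 4; ++k)
                            r.m[row][col] += m[row][k] * rhs.m[k][col];
                return r;
            }
        };

        struct Node
        {
            int    parentIndex;   // -1 for the root
            Matrix localMatrix;   // current frame transform, relative to the parent
        };

        // Combine local matrices from the root down to the given node; the translation
        // row of the result is the node's world/global space position.
        Matrix CalcWorldMatrix(const std::vector<Node>& nodes, int nodeIndex)
        {
            std::vector<int> chain;                                  // node, parent, ..., root
            for (int i = nodeIndex; i != -1; i = nodes[i].parentIndex)
                chain.push_back(i);

            Matrix world = Matrix::Identity();
            for (auto it = chain.rbegin(); it != chain.rend(); ++it) // walk root-first
                world = nodes[*it].localMatrix * world;              // local * parentWorld
            return world;
        }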