How do modeling programs handle transformation hierarchies?

Posted by Ardor

I am thinking about a model file format and stumbled upon this issue. I know of two approaches to dealing with hierarchies:

a) A node's coordinates are always relative to its parent. For example, if box A has a child, box B, and box B has the coordinates (0, 0, 50), then they stay (0, 0, 50) no matter how box A moves or rotates. But that also means the child's coordinates are NOT its world coordinates; to get those, the parent's transformations have to be applied as well.

b) A node's coordinates are always relative to the world/scene. In the previous example this means the child's coordinates DO change when the parent is moved or rotated.

3DS Max seems to use variant (b), but does every modeling program out there use (b) as well? I have to choose, and I tend toward (b), but what do you think?
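For what it's worth, here is a tiny sketch of the two schemes, positions only (rotations are left out to keep it short) and with made-up struct names, just to make the box A / box B example concrete:

```cpp
#include <cstdio>

// Scheme (a): a node stores its position relative to its parent.
struct NodeLocal {
    NodeLocal* parent;   // nullptr for the root
    float      pos[3];   // parent-relative position (rotation omitted for brevity)
};

// Scheme (b): a node stores its position directly in world/scene space.
struct NodeWorld {
    float world[3];      // absolute position; must be rewritten whenever an ancestor moves
};

int main() {
    // Box A with child box B at (0, 0, 50) relative to A, as in the question.
    NodeLocal a{nullptr, {0, 0, 0}};
    NodeLocal b{&a,      {0, 0, 50}};

    // Under (a), moving A leaves B's stored coordinates untouched...
    a.pos[0] = 10;

    // ...but B's world position has to be derived from the parent chain.
    float bWorld[3];
    for (int i = 0; i < 3; ++i)
        bWorld[i] = a.pos[i] + b.pos[i];

    std::printf("B stored: (0, 0, 50)  B in world: (%g, %g, %g)\n",
                bWorld[0], bWorld[1], bWorld[2]);   // prints (10, 0, 50)

    // Under (b), B's stored coordinates themselves would have to change to (10, 0, 50).
    return 0;
}
```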

Well, to me (a) looks simpler to implement, but you can do (b) as well. It depends on your application's purpose. If the children have to change whenever a parent changes, there will be a lot more load on the CPU. On the other hand, if the hierarchy is modified often (nodes deleted, rearranged, etc.), then I think it is more reliable for each node to have its transformations relative to the world system. You have to build a list of all your application's features and see which method fits best.

Well, I want to include some rigid body dynamics as well, and all physics engines give me transformation matrices that are world-oriented, so (b) would fit better with them. I don't know how to deal with this when using (a).

Well, then we have no real problem here :D Just use (b) (it's much more flexible anyway, if you want to add more features in the future).

Hmm, yeah, the only problem is modeling programs that use (a) (Cinema 4D seems to do this, for example), but that shouldn't be too hard to convert in the exporter.

Well, it's obviously much easier to convert from (a) to (b) (by simply composing transformations, i.e. matrix multiplication) than from (b) to (a).

You could go for (a) and provide a mechanism for computing (b) (and maybe even caching world transformations, should computing them become a bottleneck).
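A minimal sketch of that approach, assuming a scene-graph style node and a hypothetical Mat4 type (none of this is from the thread): the file stores only the local, parent-relative transform, and the world transform is computed on demand and cached behind a dirty flag.

```cpp
#include <array>
#include <vector>

// Hypothetical 4x4 column-major matrix that defaults to identity.
struct Mat4 {
    std::array<float, 16> m{1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};

    Mat4 operator*(const Mat4& r) const {          // standard matrix product
        Mat4 out;
        for (int c = 0; c < 4; ++c)
            for (int row = 0; row < 4; ++row) {
                float s = 0.f;
                for (int k = 0; k < 4; ++k)
                    s += m[k*4 + row] * r.m[c*4 + k];
                out.m[c*4 + row] = s;
            }
        return out;
    }
};

// Scheme (a) in the file and in memory, with scheme (b) available on demand.
struct Node {
    Node*              parent = nullptr;   // nullptr for the scene root
    std::vector<Node*> children;
    Mat4               local;              // transform relative to the parent (what the file stores)

    mutable Mat4 cachedWorld;               // lazily computed world transform
    mutable bool worldDirty = true;

    void setLocal(const Mat4& t) {
        local = t;
        markDirty();                         // our cache and every descendant's cache are now stale
    }

    const Mat4& world() const {
        if (worldDirty) {
            cachedWorld = parent ? parent->world() * local : local;
            worldDirty  = false;
        }
        return cachedWorld;
    }

private:
    void markDirty() const {
        worldDirty = true;
        for (const Node* c : children)
            c->markDirty();
    }
};
```

The file format itself would then only ever contain the `local` transforms, and the cached world matrices are purely a runtime concern.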

Quote:
Original post by Koen
You could go for (a) and provide a mechanism for computing (b) (and maybe even caching world transformations, should computing them become a bottleneck).


And what about the physics engine? (a) seems to be incompatible with it. Or did I miss something?

Then again, such a hierarchy doesn't make any sense with a physics engine, since all nodes get their transformations from the physics, so transformation inheritance becomes pointless...

In fact, I am thinking about the hierarchy because of these reasons:
1) The rigid bodies need some initial transformation
2) Animated characters are usually frozen in the physics simulation and not affected by it
3) Models become easier to move; for a car, for example, I can have the chassis as the "main node" and just move the main node

So for a car, for example, the decision between (a) and (b) is only relevant at the start, because once the transformations are set up, the physics takes over. However, if you have a model of a soldier and he is running, the animation system has control and the soldier's ragdoll is frozen. When he gets shot, the system should switch from animation to physics control. This switch should be easy and quick to do. Of course, the soldier may have a gun which he holds in his hand (the gun would be a child of the soldier model).
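One simple way to express that switch (just a sketch with invented names, not something from the thread) is to let each node take its world transform either from the animation system or from its rigid body, behind a flag:

```cpp
// Which system currently owns a node's world transform.
enum class TransformSource { Animation, Physics };

struct Mat4 { float m[16]; };            // placeholder 4x4 matrix type

struct SceneNode {
    TransformSource source = TransformSource::Animation;
    Mat4 animWorld{};                    // written by the animation system each frame
    Mat4 physicsWorld{};                 // written back from the rigid body each step

    // The renderer always asks the node; the switch is a single flag flip.
    const Mat4& world() const {
        return source == TransformSource::Animation ? animWorld : physicsWorld;
    }

    // Called when the soldier gets shot: hand control over to the ragdoll.
    void activateRagdoll() {
        // If the physics bodies were following the animation kinematically,
        // physicsWorld already matches animWorld and the switch is seamless.
        source = TransformSource::Physics;
    }
};
```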

[Edited by - Ardor on September 8, 2005 9:48:18 AM]

(a) is good for animation; it is what Max, Maya, etc. use.
(b) is good for physics, but rubbish for animation blending (it is almost impossible to get good results).

So, use (a) and calculate the world space matrices (which are, after all, what you need, and take very little processing time to evaluate). It is unusual to find physics engines using the same geometry hierarchy as the one used for rendering. For rendering, draw all geometry in local space and transform it into the correct location using either hierarchical push/pop matrices or simply the evaluated world space matrix.

Maya actually stores all geometry as stand-alone nodes. Instances of that geometry are referenced by DAG paths, e.g.

hip|right_legbone|legmesh
hip|left_legbone|legmesh

This is basically how Maya handles instances of the same geometry within a scene - it simply renders all DAG paths.
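To illustrate what that looks like at draw time, here is a rough sketch only, with made-up Node/Mesh types, using the old fixed-function OpenGL calls mentioned later in the thread: the hierarchy is walked once, the local matrices accumulate on the stack, and shared geometry is simply drawn under each accumulated transform.

```cpp
#include <GL/gl.h>
#include <vector>

struct Mesh {
    void draw() const { /* issue the actual draw calls here, e.g. glDrawElements */ }
};

struct Node {
    float              local[16];        // column-major transform relative to the parent
    const Mesh*        mesh = nullptr;   // several nodes may point at the same Mesh (instancing)
    std::vector<Node*> children;
};

// Scheme (a) at render time: push the parent's matrix, multiply in the local one,
// draw, recurse, pop. The world transform is never stored, only accumulated.
void drawHierarchy(const Node& node) {
    glPushMatrix();
    glMultMatrixf(node.local);
    if (node.mesh)
        node.mesh->draw();                // same geometry, different accumulated transform
    for (const Node* child : node.children)
        drawHierarchy(*child);
    glPopMatrix();
}
```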

Quote:
Original post by meeshoo
Well, it's obviously much easier to convert from (a) to (b) (by simply composing transformations, i.e. matrix multiplication) than from (b) to (a).


Going from (b) to (a) is just as simple; it just requires you to multiply a given body by the inverse world matrix of its parent joint. So transforming between the two representations is not complicated. You might need to re-orthogonalise the resulting local space matrices, though (if you want to be really accurate).
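A minimal sketch of that conversion, assuming rigid transforms (rotation plus translation only) so the inverse can be built cheaply from the transposed rotation; the Xform type and function names are made up for illustration:

```cpp
// Rigid transform: 3x3 rotation (row-major) plus translation.
struct Xform {
    float r[3][3];
    float t[3];
};

// Compose: out = a * b (apply b first, then a).
Xform mul(const Xform& a, const Xform& b) {
    Xform out{};
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            out.r[i][j] = 0.f;
            for (int k = 0; k < 3; ++k)
                out.r[i][j] += a.r[i][k] * b.r[k][j];
        }
        out.t[i] = a.t[i];
        for (int k = 0; k < 3; ++k)
            out.t[i] += a.r[i][k] * b.t[k];
    }
    return out;
}

// Inverse of a rigid transform: transpose the rotation, rotate-and-negate the translation.
Xform inverse(const Xform& x) {
    Xform inv{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            inv.r[i][j] = x.r[j][i];
    for (int i = 0; i < 3; ++i) {
        inv.t[i] = 0.f;
        for (int k = 0; k < 3; ++k)
            inv.t[i] -= inv.r[i][k] * x.t[k];
    }
    return inv;
}

// Scheme (b) -> scheme (a): strip the parent's world transform off the child's.
Xform worldToLocal(const Xform& childWorld, const Xform& parentWorld) {
    return mul(inverse(parentWorld), childWorld);
}
```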

Quote:
Original post by meeshoo
That is quite true, but what about computing a lot of inverse matrices? Isn't that a lot of CPU work?


Depends what you mean by a lot ;) 100 inverses are slower than, say, 100 matrix multiplies, but it's not really going to hit your performance enough to ever be noticeable. OK, if you were trying to do 4 million inverses per frame, then you might have a small problem. The cost of, say, the physics, skinning, etc. will be a far greater performance hit than inverting some matrices.

That said, storing animation data in world space would generally be a bad idea (primarily for reasons of blending). In general, storing the data as quaternions would be preferable to matrices, since it's 4 floats vs 12 (if you ignore the 0,0,0,1 column).
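Purely for illustration (the type names here are invented), the storage difference being described looks roughly like this, and the quaternion form is also the one you can blend cleanly between keys:

```cpp
#include <cmath>

// Matrix form: rotation + translation packed into a 4x3, 12 floats,
// and awkward to blend between poses.
struct KeyMatrix {
    float m[12];
};

// Quaternion form: 4 floats for the rotation; the translation (3 floats)
// is usually stored separately and is often constant per joint.
struct KeyQuat {
    float x, y, z, w;
};

static_assert(sizeof(KeyQuat) < sizeof(KeyMatrix),
              "the rotation alone is a third of the matrix payload");

// Normalised lerp between two rotation keys: a cheap stand-in for slerp
// that is usually good enough for closely spaced animation keys.
KeyQuat nlerp(KeyQuat a, KeyQuat b, float t) {
    // Take the short way around: flip one quaternion if the dot product is negative.
    float dot = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    float s = dot < 0.f ? -1.f : 1.f;
    KeyQuat q{ a.x + (s*b.x - a.x)*t,
               a.y + (s*b.y - a.y)*t,
               a.z + (s*b.z - a.z)*t,
               a.w + (s*b.w - a.w)*t };
    float len = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    q.x /= len; q.y /= len; q.z /= len; q.w /= len;
    return q;
}
```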

Quote:
Original post by Ardor
Well, I want to include some rigid body dynamics as well, and all physics engines give me transformation matrices that are world-oriented, so (b) would fit better with them. I don't know how to deal with this when using (a).


Run the anims and characters in local space, then calculate the world space matrices. Render the local space geometry transformed into world space via glMultMatrix or the DX equivalent.

Then, either:

1) Switch to physics and simulate, and from then on use the world space matrices from the physics engine for rendering (i.e. once simulating, you can't go back). To get things colliding with the character whilst it's animating, simply give the rigid bodies a fixed motion (i.e. they are not affected by dynamics until you switch to simulation). This is the naive way to implement ragdoll physics.

2) Getting the physics to interact with the animations, to dynamically adjust them, is somewhat more difficult. There are a couple of ways to do it; one method is to decompose the world space matrices into local space, then extract the translation vector and rotation quaternion (you really don't want any scale!!). You can then slerp these values with the anim data to produce your final local space matrices, which you'd then need to compose back into world space.
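A rough sketch of the decompose-and-slerp idea from option 2, assuming each physics matrix has already been brought into the joint's local space (e.g. with the inverse-parent trick above) and split into a translation and a unit quaternion; the Pose and blendJoint names are invented for illustration:

```cpp
#include <cmath>

struct Quat { float x, y, z, w; };
struct Pose { Quat rot; float pos[3]; };   // one joint, local space: rotation + translation

// Spherical interpolation between two unit quaternions.
Quat slerp(Quat a, Quat b, float t) {
    float d = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    if (d < 0.f) { b = Quat{-b.x, -b.y, -b.z, -b.w}; d = -d; }   // take the short arc
    if (d > 0.9995f) {                                           // nearly parallel: lerp + renormalise
        Quat q{a.x + (b.x - a.x)*t, a.y + (b.y - a.y)*t,
               a.z + (b.z - a.z)*t, a.w + (b.w - a.w)*t};
        float len = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
        return Quat{q.x/len, q.y/len, q.z/len, q.w/len};
    }
    float theta = std::acos(d);
    float sa = std::sin((1.f - t) * theta) / std::sin(theta);
    float sb = std::sin(t * theta)         / std::sin(theta);
    return Quat{a.x*sa + b.x*sb, a.y*sa + b.y*sb, a.z*sa + b.z*sb, a.w*sa + b.w*sb};
}

// Blend one joint's animation pose towards the (already localised) physics pose.
// 'weight' ramps from 0 (pure animation) to 1 (pure ragdoll).
Pose blendJoint(const Pose& anim, const Pose& physicsLocal, float weight) {
    Pose out;
    out.rot = slerp(anim.rot, physicsLocal.rot, weight);
    for (int i = 0; i < 3; ++i)
        out.pos[i] = anim.pos[i] + (physicsLocal.pos[i] - anim.pos[i]) * weight;
    return out;
}
```

The blended local poses would then be rebuilt into matrices and pushed back up into world space for rendering, as the post describes.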

There are, however, cleverer ways to do this, but I'm under NDA when it comes to discussing those methods.

I will say that the character setup is by far the most important aspect.

Yes, (a) has many advantages. I mean, how would you animate with (b)? Next to impossible, unless your models were extremely simple. It just seems to me that if you go with (b), you'll end up regretting it when it comes time for some nice animations. Animations are stored as time-varying DOFs that are applied to a joint in the hierarchical fashion of method (a); whatever possible benefit (b) has is outweighed by (a), since (b) can be obtained trivially from (a) whereas the converse isn't true. For the person who suggested that bones be stored as quaternions rather than matrices: why not consider using Euler angles? Isn't that the most common method for representing DOFs? Quaternions are nice, though.

Tim

There are plenty of replies here already, but I want to bring up one more point :)

Quote:
Original post by Ardor
And what about the physics engine? (a) seems to be incompatible with it. Or did I miss something?


I don't understand why you are talking about physics engines when thinking about a file format design. File formats have nothing to do with physics engines. It's the job of your application to read a file in a specific format and convert it to a data structure that fits your renderer and a possibly different data structure that fits your physics engine.

IMO, (a) is much better than (b) for a file format because (a) does not contain redundant information and (b) does. As pointed out before, it will be trivial to convert (a) to (b) inside your application, if that's what it takes to please your physics engine.

Tom
