How do modeling programs handle transformation hierarchies?

I am thinking about a model file format and stumbled upon this issue. I know of two ways to deal with hierarchies:

a) A node's coordinates are always relative to its parent. For example, if box A has a child, box B, and box B has the coordinates 0,0,50, then they will remain 0,0,50 no matter whether box A moves, rotates, etc. But that also means that the child's coordinates are NOT its world coordinates; to get those, the parent's transformations have to be applied as well.

b) A node's coordinates are always relative to the world/scene. In the previous example this means that the child's coordinates DO change when the parent is moved or rotated.

3DS Max seems to use variant (b), but does every modeling program out there use (b) as well? I have to choose, and I tend towards (b), but what do you think?
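To make the difference concrete, here is a minimal sketch of option (a) in C++ (using GLM for the matrix math; the Node struct and worldTransform function are just names made up for illustration): each node stores only a parent-relative transform, and the world transform is obtained by composing up the parent chain.

```cpp
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Option (a): each node stores only its transform relative to the parent.
struct Node
{
    Node*     parent = nullptr;
    glm::mat4 local  = glm::mat4(1.0f);   // parent-relative transform
};

// World transform = parent's world transform * own local transform,
// composed all the way up to the root.
glm::mat4 worldTransform(const Node& node)
{
    glm::mat4 world = node.local;
    for (const Node* p = node.parent; p != nullptr; p = p->parent)
        world = p->local * world;
    return world;
}

int main()
{
    Node boxA;                               // parent
    Node boxB;                               // child at local (0, 0, 50)
    boxB.parent = &boxA;
    boxB.local  = glm::translate(glm::mat4(1.0f), glm::vec3(0, 0, 50));

    // Move box A; box B's local coordinates stay (0, 0, 50),
    // but its world position follows the parent.
    boxA.local = glm::translate(glm::mat4(1.0f), glm::vec3(10, 0, 0));

    glm::vec4 worldPos = worldTransform(boxB) * glm::vec4(0, 0, 0, 1);
    std::printf("%.1f %.1f %.1f\n", worldPos.x, worldPos.y, worldPos.z); // 10.0 0.0 50.0
    return 0;
}
```

With option (b) the child would instead store the world-space result directly, and the parent's movement would have to be written into every child.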
Well, for me (a) looks simpler to implement, but you can do (b) as well. It depends on what your application's purpose is. If the children's coordinates have to change too, there will be a lot more load on the CPU. On the other hand, if the hierarchy is modified often (nodes deleted, rearranged, etc.), then I think it is more reliable for each node to have its transformations relative to the world system. You have to build a list of all your application's features and see which method is a better fit.
Well, I want to include some rigid body dynamics as well, and all physics engines give me transformation matrices that are world-oriented, so (b) would fit better with them. I don't know how to deal with this when using (a).
Well, then we have no real problem here :D. Just use (b) (it's much more flexible anyway, if you want to add more features in the future).
Hmm, yeah, the only problem is modeling programs that use (a) (Cinema 4D seems to do this, for example), but this shouldn't be too hard to convert in the exporter.
Well, it's obviously much easier to convert from (a) to (b) (by simple transformation composition = matrix multiplication) than from (b) to (a).
You could go for (a) and provide a mechanism for computing (b) (and maybe even caching world transformations, should computing them become a bottleneck).
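For example, a rough sketch of that caching (the dirty flag and the setLocal/worldTransform names are just illustrative, not taken from any particular engine; GLM is assumed for the matrix type):

```cpp
#include <glm/glm.hpp>

// Option (a) storage with a lazily cached world matrix.
// The cache is recomputed only when the local transform has changed.
struct Node
{
    Node*     parent = nullptr;
    glm::mat4 local  = glm::mat4(1.0f);

    // cached world transform
    mutable glm::mat4 world      = glm::mat4(1.0f);
    mutable bool      worldDirty = true;

    void setLocal(const glm::mat4& m)
    {
        local = m;
        invalidate();
    }

    void invalidate() const
    {
        worldDirty = true;
        // NOTE: a full implementation would also invalidate all children here.
    }

    const glm::mat4& worldTransform() const
    {
        if (worldDirty)
        {
            world = parent ? parent->worldTransform() * local : local;
            worldDirty = false;
        }
        return world;
    }
};
```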
Quote:Original post by Koen
You could go for (a) and provide a mechanism for computing (b) (and maybe even caching world transformations, should computing them become a bottleneck).


And what about the physics engine? (a) seems to be incompatible with it. Or did I miss something?

Then again, such a hierarchy doesn't make any sense with a physics engine, since all nodes get their transformations from the physics, and thus transformation inheritance becomes pointless...

In fact, I am thinking about the hierarchy because of these reasons:
1) The rigid bodies need some initial transformation
2) Animated characters are usually frozen in the physics simulation and not affected by it
3) Models become easier to move; in the car example, I can have the chassis as "main node", and just move the main node

So for a car, for example, the decision whether to use (a) or (b) is only relevant at the start, because once the transformations are set, the physics takes over. However, if you have a model of a soldier and he is running, the animation system has control and the soldier's ragdoll is frozen. When he gets shot, the system should switch from animation to physics control. This switch should be easy and quick to do. Of course, the soldier may have a gun which he holds in his hand (the gun would be a child of the soldier model).
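One possible way to model that switch (purely a sketch; the TransformSource enum and RigidBodyProxy type are hypothetical names, not from any real engine) is to let each node take its world transform either from the animation hierarchy or straight from its physics body:

```cpp
#include <glm/glm.hpp>

// Sketch: a node whose world transform comes either from the hierarchy /
// animation system (local relative to parent) or straight from a rigid body
// in the physics engine once the ragdoll takes over.
enum class TransformSource { Animation, Physics };

struct RigidBodyProxy
{
    // World-space matrix as reported by whatever physics engine is used.
    glm::mat4 worldFromPhysics = glm::mat4(1.0f);
};

struct SceneNode
{
    SceneNode*      parent = nullptr;
    glm::mat4       local  = glm::mat4(1.0f);   // used while animated
    RigidBodyProxy* body   = nullptr;           // used when ragdolled
    TransformSource source = TransformSource::Animation;

    glm::mat4 worldTransform() const
    {
        if (source == TransformSource::Physics && body)
            return body->worldFromPhysics;       // physics owns the node
        glm::mat4 parentWorld = parent ? parent->worldTransform() : glm::mat4(1.0f);
        return parentWorld * local;              // hierarchy owns the node
    }
};

// When the soldier gets shot, flip every bone node to physics control:
//     node.source = TransformSource::Physics;
// The gun can stay a child of the hand node either way, because it still
// composes its local transform with the hand's world transform.
```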

(a) is good for animation; it's what Max, Maya, etc. use.
(b) is good for physics, but rubbish for animation blending (it's almost impossible to get good results).

So, use (a) and calculate the world space matrices (which is, after all, what you need, and it takes very little processing time to evaluate). It is unusual to find physics engines using the same geometry hierarchy as is used for rendering. For rendering, draw all geometry in local space and transform it into the correct location using either hierarchical push/pop matrices or simply the evaluated world space matrix.
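A sketch of that evaluation pass (the names are made up; drawMesh stands in for whatever the renderer actually provides, and GLM is assumed for the matrix math):

```cpp
#include <glm/glm.hpp>
#include <vector>

struct Node
{
    glm::mat4          local = glm::mat4(1.0f);  // option (a): parent-relative
    std::vector<Node*> children;
    // ... a reference to the node's geometry would go here
};

// Stand-in for the renderer's actual draw call.
void drawMesh(const Node& /*node*/, const glm::mat4& /*world*/)
{
    // submit the node's geometry with this world matrix
}

// Walk the hierarchy once per frame, composing matrices on the way down.
// This is the "evaluated world space matrix" approach; with fixed-function
// OpenGL you could equivalently use glPushMatrix/glPopMatrix.
void drawHierarchy(const Node& node, const glm::mat4& parentWorld = glm::mat4(1.0f))
{
    glm::mat4 world = parentWorld * node.local;
    drawMesh(node, world);
    for (const Node* child : node.children)
        drawHierarchy(*child, world);
}
```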

Maya actually stores all geometry as stand-alone nodes. Instances of that geometry are referenced using DAG paths, i.e.:

hip|right_legbone|legmesh
hip|left_legbone|legmesh

This is basically how Maya handles instances of the same geometry within a scene - it simply renders all DAG paths.
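In a very simplified picture (this is just an illustration of the idea, not Maya's actual API), that amounts to one mesh shared by several paths:

```cpp
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

// One mesh, stored once; every DAG path that reaches it becomes an instance
// drawn with the transform accumulated along that path.
struct Mesh { /* vertex data lives here, stored only once */ };

struct Instance
{
    std::string           dagPath;   // e.g. "hip|right_legbone|legmesh"
    std::shared_ptr<Mesh> mesh;      // shared geometry
};

int main()
{
    auto legMesh = std::make_shared<Mesh>();

    std::vector<Instance> instances = {
        { "hip|right_legbone|legmesh", legMesh },
        { "hip|left_legbone|legmesh",  legMesh },
    };

    // The renderer simply walks all paths and draws the shared mesh with the
    // world transform accumulated along each path.
    for (const Instance& inst : instances)
        std::printf("drawing %s\n", inst.dagPath.c_str());
    return 0;
}
```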

Quote:Original post by meeshoo
Well, it's obviously much easier to convert from (a) to (b) (by simple transformation composition = matrix multiplication) than from (b) to (a).


Going from (b) to (a) is just as simple; it just requires you to multiply a given body's matrix by the inverse of the world matrix of its parent joint. Thus, transforming between the two representations is not complicated. You might need to re-orthogonalise the resulting local space matrices, though (if you want to be really accurate).
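As a sketch (again using GLM; orthonormalize is just an illustrative helper and assumes column-major matrices with the rotation basis vectors in the first three columns):

```cpp
#include <glm/glm.hpp>

// Going from world-space (b) back to parent-relative (a):
//   world = parentWorld * local   =>   local = inverse(parentWorld) * world
glm::mat4 worldToLocal(const glm::mat4& world, const glm::mat4& parentWorld)
{
    return glm::inverse(parentWorld) * world;
}

// Optional: re-orthogonalise the rotation part (Gram-Schmidt on the basis
// vectors) to remove drift accumulated from the matrix multiplications.
// The translation column (m[3]) is left untouched.
glm::mat4 orthonormalize(glm::mat4 m)
{
    glm::vec3 x = glm::normalize(glm::vec3(m[0]));
    glm::vec3 y = glm::vec3(m[1]);
    y = glm::normalize(y - x * glm::dot(x, y));
    glm::vec3 z = glm::cross(x, y);

    m[0] = glm::vec4(x, 0.0f);
    m[1] = glm::vec4(y, 0.0f);
    m[2] = glm::vec4(z, 0.0f);
    return m;
}
```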

