In-game animations

Hi everyone, this is a pretty general question, but in game engines, are animations produced using a sequence of pre-exported OBJ meshes, or are the skeletal rigs actually exported with an exporter and imported into the engine?

I wrote some mesh deformation code using an XSI-to-COLLADA XML export, but with all the joints and vertices there are a lot of calculations to be made every frame, even with the transformation matrices saved. Is there another way to do this, or is my deformation code just too inefficient?

I'm asking because in certain games, when a character is idling it may bob up and down on the spot. If it's just an idle motion, would it be feasible to do all the matrix multiplications on top of everything else going on in the scene?
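For reference, the usual approach is to keep the per-frame CPU work down to one matrix per joint and let the vertex shader do the per-vertex blending, so even an idle bob costs very little. Here is a minimal CPU-side sketch of linear blend skinning, assuming hypothetical Mat4/Vec3 types, column-major matrices, and up to four bone influences per vertex; none of this is taken from the XSI/COLLADA code in the question.

```cpp
// Minimal linear blend skinning sketch (hypothetical types and names).
#include <array>
#include <vector>

struct Mat4 { std::array<float, 16> m; };  // column-major 4x4
struct Vec3 { float x, y, z; };

// c = a * b
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a.m[k * 4 + row] * b.m[col * 4 + k];
            c.m[col * 4 + row] = s;
        }
    return c;
}

// Transform a point (w = 1) by a matrix.
Vec3 transformPoint(const Mat4& m, const Vec3& p) {
    return {
        m.m[0] * p.x + m.m[4] * p.y + m.m[8]  * p.z + m.m[12],
        m.m[1] * p.x + m.m[5] * p.y + m.m[9]  * p.z + m.m[13],
        m.m[2] * p.x + m.m[6] * p.y + m.m[10] * p.z + m.m[14],
    };
}

struct SkinnedVertex {
    Vec3 bindPosition;             // position in the bind pose
    std::array<int, 4> bones;      // indices of influencing joints
    std::array<float, 4> weights;  // blend weights, summing to 1
};

// Once per frame: build one skinning matrix per joint.
// worldPose[i]   = joint i's animated world transform this frame
// inverseBind[i] = inverse of joint i's bind-pose world transform (precomputed once)
std::vector<Mat4> buildPalette(const std::vector<Mat4>& worldPose,
                               const std::vector<Mat4>& inverseBind) {
    std::vector<Mat4> palette(worldPose.size());
    for (std::size_t i = 0; i < worldPose.size(); ++i)
        palette[i] = mul(worldPose[i], inverseBind[i]);
    return palette;
}

// Per vertex: blend the transformed positions by weight.
// In practice this part is usually done in the vertex shader.
Vec3 skinVertex(const SkinnedVertex& v, const std::vector<Mat4>& palette) {
    Vec3 out{0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        if (v.weights[i] <= 0.0f) continue;
        Vec3 p = transformPoint(palette[v.bones[i]], v.bindPosition);
        out.x += v.weights[i] * p.x;
        out.y += v.weights[i] * p.y;
        out.z += v.weights[i] * p.z;
    }
    return out;
}
```

The buildPalette step is the only per-frame cost that has to stay on the CPU; skinVertex is what the GPU would do per vertex, which is why skinned characters (idling or otherwise) stay cheap even with a busy scene.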
Those animations are predefined using an external tool (Maya, 3ds Max, etc.) and imported into the game.
Hmm, I've been wondering a similar thing myself.

I think the DirectX .X format supports bones for animation.

I'm not entirely sure how it works as I haven't done any animations yet...

So I was wondering: how does it store an animation?

If it's not using bones (just vertices), does it just store the positions of the vertices for each frame?

Wouldn't that be really inefficient?

And for bones, does it just store the position of the bones for each frame?

Or does it store something like a motion tween (like in Flash, lol), where you just store the start point and the end point, and it interpolates the frames in-between?
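The tween idea is essentially how keyframed skeletal formats work, as far as I know: each bone gets a handful of keyed values and the engine interpolates between them at playback time, rather than storing every frame. Here is a hedged sketch of sampling one hypothetical translation track; real formats also key rotation (usually a quaternion, interpolated with slerp) and scale, and the names here are made up for illustration.

```cpp
// Hypothetical keyframe track for one joint's translation.
#include <vector>

struct Vec3 { float x, y, z; };

struct Keyframe {
    float time;        // seconds
    Vec3 translation;  // joint-local translation at this key
};

// Linearly interpolate between the two keys that bracket 'time'.
Vec3 sampleTranslation(const std::vector<Keyframe>& track, float time) {
    if (track.empty()) return {0, 0, 0};
    if (time <= track.front().time) return track.front().translation;
    if (time >= track.back().time)  return track.back().translation;

    for (std::size_t i = 1; i < track.size(); ++i) {
        if (time < track[i].time) {
            const Keyframe& a = track[i - 1];
            const Keyframe& b = track[i];
            float t = (time - a.time) / (b.time - a.time);
            return {
                a.translation.x + t * (b.translation.x - a.translation.x),
                a.translation.y + t * (b.translation.y - a.translation.y),
                a.translation.z + t * (b.translation.z - a.translation.z),
            };
        }
    }
    return track.back().translation;
}
```

Storing per-bone keys like this is far smaller than storing every vertex per frame, which is the main reason skeletal animation won out over per-frame vertex snapshots for characters.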
