Hi. I'm currently developing a game engine using the DirectX 11 API, and I want to know whether it's computationally feasible to share the same animations across multiple models by calculating joint positions at runtime instead of having them pre-calculated.

I have skeletal animation working with the MD5 model format, and I do skinning in the vertex shader. For each frame of an animation, I store a vector of joint structures holding each joint's position and orientation in model space. Every frame I build an interpolated skeleton and pass it to the GPU as a structured buffer, which the vertex shader uses for skinning. Animations work on different skeletal models with the same hierarchy, but if their joint positions differ significantly, the animations look wrong.

My idea is to store each model's default joint positions in joint space, i.e. before they are multiplied by the parent joint's model-space orientation and added to the parent joint's model-space position. Each frame, for every joint except the root, I could then compute the model-space transform by rotating the joint's joint-space position by the parent's animated orientation, adding the parent's model-space position, and multiplying the joint's orientation by the parent's orientation. For interpolation, the joint-space orientations would first be interpolated between the orientations of the current and previous frames.

Is what I'm suggesting plausible, or would it be too expensive compared to having the joint positions pre-calculated? Sorry if I haven't explained this properly.
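To make the question concrete, here is a minimal CPU-side sketch of the hierarchy walk I'm describing. The `Quat`/`Vec3` types, the `LocalJoint`/`ModelJoint` structs, and the `BuildSkeleton` name are all placeholders I've made up for illustration; a real DirectX 11 engine would presumably use DirectXMath (`XMVECTOR`, `XMQuaternionMultiply`, `XMQuaternionSlerp`) instead. It assumes joints are sorted so parents precede their children, and that the per-frame joint-space orientations have already been interpolated:

```cpp
#include <cmath>
#include <vector>

// Minimal stand-in math types; a real engine would use DirectXMath.
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };   // assumed unit-length

// Hamilton product: concatenates two rotations.
Quat Mul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Rotates v by q via v' = q * (0, v) * conj(q).
Vec3 Rotate(const Quat& q, const Vec3& v) {
    Quat p{0.0f, v.x, v.y, v.z};
    Quat c{q.w, -q.x, -q.y, -q.z};
    Quat r = Mul(Mul(q, p), c);
    return { r.x, r.y, r.z };
}

struct LocalJoint {   // per-model bind data, stored in joint space
    int  parent;      // index of parent joint, -1 for the root
    Vec3 offset;      // position relative to the parent joint
};

struct ModelJoint {   // per-frame result, in model space
    Vec3 pos;
    Quat orient;
};

// animPose[i] is the joint-space orientation of joint i for this frame,
// already slerped between the previous and current key frames.
// One Rotate + one Mul + one vector add per joint, per frame.
std::vector<ModelJoint> BuildSkeleton(const std::vector<LocalJoint>& bind,
                                      const std::vector<Quat>& animPose,
                                      const Vec3& rootPos)
{
    std::vector<ModelJoint> out(bind.size());
    for (size_t i = 0; i < bind.size(); ++i) {
        if (bind[i].parent < 0) {
            out[i] = { rootPos, animPose[i] };        // root: taken as-is
        } else {
            const ModelJoint& p = out[bind[i].parent];
            Vec3 r = Rotate(p.orient, bind[i].offset); // offset into model space
            out[i].pos    = { p.pos.x + r.x, p.pos.y + r.y, p.pos.z + r.z };
            out[i].orient = Mul(p.orient, animPose[i]); // concatenate rotations
        }
    }
    return out;
}
```

Counting the work this way, the per-frame cost is a handful of quaternion operations per joint, which is what I'd be trading against storing pre-baked model-space joints per animation per model.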
SephireX — Member Since 11 Mar 2013