Normalized Hermite Quaternion Interpolation

Started by Timothy Farrar
4 comments, last by TomForsyth 16 years, 5 months ago
Anyone care to share their thoughts on rotational interpolation for keyframes spaced far apart in time? What has worked for you and why? Both practical and incorrect-but-optimized suggestions welcomed. I'm not tied to an animation package, so I can explore any option. I'm currently thinking about two options, and I'm already using Hermite interpolation for the keyframe points.

1. Some kind of quaternion interpolation. Perhaps Hermite interpolation of the quaternion components with normalization, and perhaps some kind of velocity correction. A rough estimate for normalized Hermite interpolation of point+quaternion (without any velocity correction) is 97 flops. This includes a quaternion to 3x3 rotation matrix conversion (I need a standard rotation matrix as output), which requires a branch or predicated instructions.

2. Since each rotation is tied to a keyframed point, I could simply add two more points and build the rotation from the points (generate two normalized vectors, build the matrix with two cross products). A rough estimate for this (Hermite plus generation of the 3x3 matrix) is 105 flops.

I'm guessing it would be easier to control velocity and torque with option 2 using fewer control points, which could make up the difference in memory cost and flops.
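To make the two options concrete, here's roughly what I have in mind as a C sketch (my own naming and conventions; option 1 shown without any velocity correction, and option 2 is just one plausible layout for the two extra points):

#include <math.h>

typedef struct { float x, y, z, w; } Quat;

/* Option 1: cubic Hermite on the raw quaternion components, then
   renormalize back onto the unit hypersphere. q0/q1 are the keys,
   t0/t1 the per-component tangents, u in [0,1]. Neighboring keys
   are assumed pre-flipped onto one hemisphere (dot(q0,q1) >= 0). */
static Quat quat_hermite_normalized(Quat q0, Quat t0, Quat q1, Quat t1, float u)
{
    float u2 = u * u, u3 = u2 * u;
    float h00 =  2.0f * u3 - 3.0f * u2 + 1.0f;
    float h10 =         u3 - 2.0f * u2 + u;
    float h01 = -2.0f * u3 + 3.0f * u2;
    float h11 =         u3 -        u2;

    Quat q;
    q.x = h00 * q0.x + h10 * t0.x + h01 * q1.x + h11 * t1.x;
    q.y = h00 * q0.y + h10 * t0.y + h01 * q1.y + h11 * t1.y;
    q.z = h00 * q0.z + h10 * t0.z + h01 * q1.z + h11 * t1.z;
    q.w = h00 * q0.w + h10 * t0.w + h01 * q1.w + h11 * t1.w;

    float s = 1.0f / sqrtf(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    q.x *= s; q.y *= s; q.z *= s; q.w *= s;
    return q;
}

/* Unit quaternion to standard 3x3 rotation matrix (row-major). */
static void quat_to_matrix(Quat q, float m[3][3])
{
    float xx = q.x*q.x, yy = q.y*q.y, zz = q.z*q.z;
    float xy = q.x*q.y, xz = q.x*q.z, yz = q.y*q.z;
    float wx = q.w*q.x, wy = q.w*q.y, wz = q.w*q.z;

    m[0][0] = 1.0f - 2.0f*(yy + zz); m[0][1] = 2.0f*(xy - wz); m[0][2] = 2.0f*(xz + wy);
    m[1][0] = 2.0f*(xy + wz); m[1][1] = 1.0f - 2.0f*(xx + zz); m[1][2] = 2.0f*(yz - wx);
    m[2][0] = 2.0f*(xz - wy); m[2][1] = 2.0f*(yz + wx); m[2][2] = 1.0f - 2.0f*(xx + yy);
}

/* Option 2: build the matrix from interpolated points instead. p is the
   node position; f and up are the two extra keyframed points riding
   along with it. Two normalizations, two cross products. */
static void matrix_from_points(const float p[3], const float f[3],
                               const float up[3], float m[3][3])
{
    float z[3] = { f[0]-p[0], f[1]-p[1], f[2]-p[2] };
    float v[3] = { up[0]-p[0], up[1]-p[1], up[2]-p[2] };
    float s = 1.0f / sqrtf(z[0]*z[0] + z[1]*z[1] + z[2]*z[2]);
    z[0] *= s; z[1] *= s; z[2] *= s;

    /* x = normalize(v cross z), y = z cross x; columns are the basis */
    float x[3] = { v[1]*z[2] - v[2]*z[1], v[2]*z[0] - v[0]*z[2], v[0]*z[1] - v[1]*z[0] };
    s = 1.0f / sqrtf(x[0]*x[0] + x[1]*x[1] + x[2]*x[2]);
    x[0] *= s; x[1] *= s; x[2] *= s;
    float y[3] = { z[1]*x[2] - z[2]*x[1], z[2]*x[0] - z[0]*x[2], z[0]*x[1] - z[1]*x[0] };

    m[0][0] = x[0]; m[0][1] = y[0]; m[0][2] = z[0];
    m[1][0] = x[1]; m[1][1] = y[1]; m[1][2] = z[1];
    m[2][0] = x[2]; m[2][1] = y[2]; m[2][2] = z[2];
}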
_|imothy Farrar :: www.farrarfocus.com/atom
You probably need to be a bit more specific about what you mean by "long time distance keyframes" and what the application is. In general, interpolation of rotations over any significant angle is hard. There are just so many problems that can be solved by using more samples and dumber algorithms.

Granny 3D uses non-uniform B-spline interpolation of quaternions, which works well. The interpolation is just a vanilla four-dimensional B-spline. After interpolation we normalise to the unit hypersphere, but the actual interpolation does not know anything special about quaternions. It's simple and cheap and very effective. We use B-splines rather than something like Hermite because although they're mathematically the same thing, B-splines have only a single data type (positions), while Hermite has positions and tangents. This simplifies a lot of code and allows slightly better compression.
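To sketch the idea (uniform cubic for brevity - the actual curves are non-uniform, and this is not Granny code):

#include <math.h>

typedef struct { float x, y, z, w; } Quat;

/* Uniform cubic B-spline through four quaternion control points,
   evaluated component-wise like any 4D curve, then renormalized.
   Control points are assumed pre-flipped onto one hemisphere. */
static Quat quat_bspline_normalized(Quat p0, Quat p1, Quat p2, Quat p3, float u)
{
    float u2 = u * u, u3 = u2 * u;
    float b0 = (1.0f - 3.0f*u + 3.0f*u2 - u3) / 6.0f;
    float b1 = (4.0f - 6.0f*u2 + 3.0f*u3) / 6.0f;
    float b2 = (1.0f + 3.0f*u + 3.0f*u2 - 3.0f*u3) / 6.0f;
    float b3 = u3 / 6.0f;

    Quat q;
    q.x = b0*p0.x + b1*p1.x + b2*p2.x + b3*p3.x;
    q.y = b0*p0.y + b1*p1.y + b2*p2.y + b3*p3.y;
    q.z = b0*p0.z + b1*p1.z + b2*p2.z + b3*p3.z;
    q.w = b0*p0.w + b1*p1.w + b2*p2.w + b3*p3.w;

    float s = 1.0f / sqrtf(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    q.x *= s; q.y *= s; q.z *= s; q.w *= s;
    return q;
}

Note the basis functions sum to one, so the pre-normalization result is a convex combination of unit quaternions and already sits close to the hypersphere when the control points are close together; the normalize is a small correction.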

I also wouldn't worry too much about the exact number of flops. For animation, the main speed hit is usually things like cache misses and branch misprediction and how fast the decompression is. How much maths you do is relatively moot.
Quote:Original post by TomForsyth
You probably need to be a bit more specific about what you mean by "long time distance keyframes" and what the application is. In general, interpolation of rotations over any significant angle is hard. There are just so many problems that can be solved by using more samples and dumber algorithms.


Hence the need for compression!

Quote:I also wouldn't worry too much about the exact number of flops. For animation, the main speed hit is usually things like cache misses and branch misprediction and how fast the decompression is. How much maths you do is relatively moot.


Thanks for the insight. Non-uniform B-splines make sense, especially if you are going to subdivide them in some kind of editor.

Luckily, what I'm doing is easy to keep resident in memory with no compression or dynamic loading, and the data is laid out linearly in memory. That results in a simple sequential run-through of the nodes, with minimal cache issues, nearly devoid of any branching.

I was thinking of using Catmull-Rom tangents, so my Hermite curves would only need to store points. With sparse keyframes, the actual runtime cost of the interpolation and rotation matrix calculation becomes a factor...
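That is, tangents as central differences of the neighboring keys - something like this, assuming uniformly spaced keys:

typedef struct { float x, y, z, w; } Quat;

/* Catmull-Rom tangent at key i: half the central difference of its
   neighbors. Assumes uniform key spacing and keys already flipped to
   one hemisphere; scale by the spacing for non-uniform timing. */
static Quat catmull_rom_tangent(Quat prev, Quat next)
{
    Quat t;
    t.x = 0.5f * (next.x - prev.x);
    t.y = 0.5f * (next.y - prev.y);
    t.z = 0.5f * (next.z - prev.z);
    t.w = 0.5f * (next.w - prev.w);
    return t;
}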
_|imothy Farrar :: www.farrarfocus.com/atom
We don't define the splines directly in an editor - the input is a sequence of keyframes which are sampled from whatever arbitrary method the artist used in Max/Maya/XSI (splines, joint constraints, IK, physics simulations, etc). Then we fit splines to those keyframed samples and ensure we match to within some specified tolerance.

You say your data easily fits in memory, yet you need better compression? I'm not sure I understand that.
Quote:You say your data easily fits in memory, yet you need better compression? I'm not sure I understand that.


Compression not because of memory, but to minimize download size!
_|imothy Farrar :: www.farrarfocus.com/atom
Ah, I see. I would split it into two different problems then. The first is compression so you fit in memory, and decompress on the fly as you render. Fast decompression places significant limits on what you can do at runtime because you need to deal with seeking to random places in the curve, and you can't really do very complex operations while maintaining speed. If memory really really isn't a problem, you could just skip this bit and use linearly interpolated keyframes. But professionally speaking, that makes me queasy :-)
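That fallback really is only about this much runtime code - find the bracketing keys, lerp the components, renormalize (a sketch; the key layout and names here are made up):

#include <math.h>

typedef struct { float x, y, z, w; } Quat;
typedef struct { float time; Quat q; } QuatKey;   /* hypothetical layout */

/* Sample a time-sorted key array (count >= 2) at time t: binary-search
   the bracketing pair, lerp component-wise, renormalize (i.e. nlerp). */
static Quat sample_linear(const QuatKey *k, int count, float t)
{
    int lo = 0, hi = count - 1;
    while (hi - lo > 1) {
        int mid = (lo + hi) / 2;
        if (k[mid].time <= t) lo = mid; else hi = mid;
    }
    float u = (t - k[lo].time) / (k[hi].time - k[lo].time);
    if (u < 0.0f) u = 0.0f;
    if (u > 1.0f) u = 1.0f;

    Quat a = k[lo].q, b = k[hi].q;
    /* flip one key if needed so we take the short way around */
    float d = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    float sign = (d < 0.0f) ? -1.0f : 1.0f;

    Quat q;
    q.x = a.x + u * (sign * b.x - a.x);
    q.y = a.y + u * (sign * b.y - a.y);
    q.z = a.z + u * (sign * b.z - a.z);
    q.w = a.w + u * (sign * b.w - a.w);

    float s = 1.0f / sqrtf(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    q.x *= s; q.y *= s; q.z *= s; q.w *= s;
    return q;
}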

The second is the compression for the download. Here the decompression can be much slower because at worst you do it at start of day (e.g. XBLiveArcade titles), at best you have an actual installation process (PC games).

On curve formats and so on - my experience has been that compressing the control data (end points, tangents, etc) heavily to get the size of each one smaller, but then using more of them to compensate for the imprecision, gives you a greater overall win than using a more complex control structure and trying to use fewer of them. But that's just a gut feeling of the direction to look in.
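As one illustration of that trade (not any particular package's format): quantize each quaternion component to 16 bits and let the extra control points absorb the rounding error.

#include <stdint.h>

/* Quantize one unit-quaternion component from [-1, 1] to 16 bits:
   8 bytes per rotation key instead of 16. Renormalizing after decode
   cleans up most of the rounding. */
static int16_t quantize_component(float v)
{
    if (v >  1.0f) v =  1.0f;
    if (v < -1.0f) v = -1.0f;
    return (int16_t)(v * 32767.0f);
}

static float dequantize_component(int16_t v)
{
    return (float)v / 32767.0f;
}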

My other experience is that direct editing of this data sucks. I'd sample the original artist-generated data at a high frequency, then fit your curves (or whatever) to that with a certain tolerance. It means the artist doesn't need to re-edit stuff every time you change your compression algorithm - extremely important.
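The crudest version of that fitting loop looks something like this - sample densely, then greedily drop any sample its neighbors already reproduce within tolerance (a sketch; a real fitter refits the curve and measures error properly rather than just testing midpoints):

#include <math.h>

typedef struct { float x, y, z, w; } Quat;

/* Greedy key reduction on densely sampled rotations: sample i is
   droppable if the normalized midpoint of its neighbors matches it
   within 'tol' (a dot-product threshold, e.g. 0.9999). keep[] is set
   to 0/1. Single pass; assumes samples are hemisphere-aligned. */
static void reduce_keys(const Quat *s, int count, float tol, int *keep)
{
    int i;
    for (i = 0; i < count; ++i) keep[i] = 1;
    for (i = 1; i + 1 < count; ++i) {
        Quat m;
        m.x = 0.5f * (s[i-1].x + s[i+1].x);
        m.y = 0.5f * (s[i-1].y + s[i+1].y);
        m.z = 0.5f * (s[i-1].z + s[i+1].z);
        m.w = 0.5f * (s[i-1].w + s[i+1].w);
        float n = sqrtf(m.x*m.x + m.y*m.y + m.z*m.z + m.w*m.w);
        if (n > 0.0f) { m.x /= n; m.y /= n; m.z /= n; m.w /= n; }
        float d = m.x*s[i].x + m.y*s[i].y + m.z*s[i].z + m.w*s[i].w;
        if (d >= tol) keep[i] = 0;
    }
}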

Finally, I'd like to insert a blatant pimp for existing packages that do all this stuff already. Wheel reinvention bad! Code reuse good!

This topic is closed to new replies.
