Quaternion gymnastics "challenge"! Now with illustrations. Matrix solutions (etc) also welcome.

Problem:

You have two sets of quaternion+position pairs and you want to combine the effects of both so that you end up with one quaternion+position pair.

Background:

We have an animated model file and need to convert the vertex data into uniform global space to reflect a snapshot of one of the animation frames, and then rejigger all of the animations to work with the new vertices.

So one quaternion+position set is the original animation data, and the other is the inverse transform to pull the vertices back to their original states.
Hint #1:

Ok, first of all, to be clear: all variables are controlled for. With each quaternion+position pair alone, the correct results are observed. The new pair brings the model into its original (bind pose) state.

This problem is more complicated than intuitively combining the sets, it seems. E.g. if you convert each set into a matrix, take the product of the two, and then decompose the product into a quaternion+position pair, you don't get the desired effect either. I just phrased the problem in terms of quaternions because it's about animation and that is the interchange format used by the project's library.
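
For reference, writing each pair as an affine matrix, the product works out to

[R1 t1]   [R2 t2]   [R1*R2  R1*t2 + t1]
[ 0  1] * [ 0  1] = [  0         1    ]

i.e. the combined pair is (R1*R2, t1 + R1*t2), which is algebraically the same as composing the quaternion pairs directly, so the two approaches should agree (assuming unit quaternions and the same multiplication order).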


What I suspect is that the problem is akin to when you want to combine two rotations in a way that is relative: you do the first rotation, then get the relative axis of rotation from that, and rotate about that axis, to get what feels like a stationary secondary rotation. But I am too stupid to figure out the first problem, much less know if this is truly the case. The parallel is that we don't really know the origin of these transformations; the files themselves are pretty soupy in this respect, in that they don't have traditional point-to-point skeletons, since the original vertex data is also in global space.

PS: The problem is we have people wanting to make new models, but their models tend to have the pieces centered around 0,0,0 in local space. The map editor etc. wants to show the global/unanimated coordinates, so we need to be able to convert these user-made models into global space, raw-vertex-data-wise. And we can't really assume any intention in terms of the animation reference frames etc.
This is code I wrote last week:

// Q is assumed here to be boost::math::quaternion<float> (or double),
// which provides conj() and the R_component_N() accessors; Vector3 is
// your own 3-component vector type.
#include <boost/math/quaternion.hpp>
typedef boost::math::quaternion<float> Q;

// Rotate a vector by a unit quaternion: q * (0, v) * conj(q).
Vector3 apply(Q q, Vector3 v) {
    Q r = q * Q(0, v.x, v.y, v.z) * conj(q);
    return Vector3(r.R_component_2(), r.R_component_3(), r.R_component_4());
}

// A rigid movement: rotate first, then translate.
struct Movement {
    Q rotation;
    Vector3 translation;

    Movement(Q rotation, Vector3 translation) :
        rotation(rotation),
        translation(translation) {
    }
};

// Composition of movements. Note that, as with matrices, (m1*m2) means
// "apply m2 first, then m1", so the order of the operands matters.
Movement operator*(Movement const &m1, Movement const &m2) {
    return Movement(m1.rotation * m2.rotation,
                    m1.translation + apply(m1.rotation, m2.translation));
}


I haven't tested it very carefully, but I think it is correct.
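
Since your problem statement involves an inverse transform, here is a matching sketch for inverting a Movement (untested; it assumes the rotation is a unit quaternion and that Vector3 has a unary minus):

// The inverse of "rotate by q, then translate by t" is
// "translate by -t, then rotate by conj(q)", i.e. the pair
// (conj(q), -apply(conj(q), t)).
Movement inverse(Movement const &m) {
    Q qinv = conj(m.rotation); // conjugate == inverse for unit quaternions
    return Movement(qinv, -apply(qinv, m.translation));
}

m * inverse(m) should then come out as the identity movement (identity quaternion, zero translation), up to floating-point error.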
Thanks, that looks just complicated enough that it just might work...

Thanks for the input! That is actually the first thing I tried myself. Just intuitively :)


ssrot = q2xyz(snapshotquat*qrot);
sspos = snapshotquatpos+snapshotquat.Rotate(pos);


I'm pretty irked that the logical thing to do is not working. I run into these kinds of walls only about once every leap year.

Everything else checks out forwards and backwards.

I am baffled, but anyway, to restate the obvious: the "snapshotquat"+"snapshotquatpos" part looks right on its own, and the qrot+pos part looks right on its own, but together they blow up. It's impossible to make assumptions about the reference frame... e.g. (0,0,0) could be anywhere. I don't know if the code quoted above makes such assumptions or not. In theory I don't see any reason why there cannot be a single rotation+position pair that fills both roles in one; that is probably a law. I just have no clue how to find it. But you'd assume it's not impossible given the inputs and outputs.

Disclaimer: q2xyz is just a global that generates Euler angles for the file format in question.
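
One thing worth double-checking, since quaternion products don't commute: with a composition convention like the code above (where m1*m2 applies m2 first), putting the snapshot transform first would come out the other way around, i.e. something like

ssrot = q2xyz(qrot*snapshotquat);
sspos = pos + qrot.Rotate(snapshotquatpos);

(just a guess, reusing the q2xyz and Rotate calls from above).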

PS: People (http://www.swordofmoonlight.net/bbs2/index.php?topic=47.msg704#msg704) would like to see this work out for the best :) (myself especially)
Sorry, your description of the problem is too verbose and abstract for me to make sense of it. I interpret a quaternion-vector pair as meaning a movement (rotate and translate), and I claim that my code performs the composition of two movements. If I take a point P and two movements T and U, I expect this to hold:

T(U(P)) = (T*U)(P)

If that works, my work is done. If you expect the composition of two movements to have some other property, please describe what it is so we can understand the apparent paradox you are facing. But chances are there is a conceptual mistake at the bottom of this issue.
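
A quick way to check that property is with concrete numbers; a minimal sketch using the types from my code above (assuming Vector3 can be compared, at least approximately):

// Apply a movement to a point: rotate first, then translate.
Vector3 applyMovement(Movement const &m, Vector3 const &p) {
    return m.translation + apply(m.rotation, p);
}

// Pick an arbitrary point P and two movements T and U (with unit
// quaternions), then verify that
//     applyMovement(T, applyMovement(U, P))
// matches
//     applyMovement(T*U, P)
// up to floating-point tolerance.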
The simplest explanation is the following (thanks for following, BTW):

You have an animation defined by quaternion (for rotation) and vector (for position) pairs.

You have two different animation states that need to be fused into one. The solution must be equivalent under all circumstances.

The first state pulls the vertices being transformed to their binding position, which we will define as the position the vertices are expected to be in by the second state; that is, the state the artist originally composed the animation according to.

The second state (to be merged) is the actual animation data.

So in order to get the vertices to where they belong the two states need to be somehow combined into one.

Capisce?

PS: The code is immaculate/proofed except for this one bit; I could post everything, but I don't think it would aid the discussion. I don't think it's likely a paradox. I just think the intuitive solution is not general enough.

I have also entertained the possibility that the transforms being relative, parent to child, could be the root of the problem. But the two states going in (as mentioned above) are also relative, and I don't think a correct solution should care whether it is relative or not. Coding a solution that is global (solving for all states and then substituting the difference of the two) is probably the next thing I would try, but that will involve a heck of a lot of work. Probably not as much as I imagine once I get cracking at it, but way too much for me to justify at this point; i.e. I am not so desperate as of yet. It may be a year yet before that day comes.

EDITED: If anyone can describe a guaranteed solution (under any circumstances) involving traditional (linear algebra) matrices that will work too. I'm considering changing the thread subject if possible. Feel free to nominate a good description. I am kind of at a loss :)
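
For concreteness, the matrix route looks roughly like this (a sketch; the conversion below is the standard unit-quaternion-to-rotation-matrix formula, with w as the real component):

// Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix.
void quatToMatrix(float w, float x, float y, float z, float R[3][3]) {
    R[0][0] = 1 - 2*(y*y + z*z); R[0][1] = 2*(x*y - w*z);     R[0][2] = 2*(x*z + w*y);
    R[1][0] = 2*(x*y + w*z);     R[1][1] = 1 - 2*(x*x + z*z); R[1][2] = 2*(y*z - w*x);
    R[2][0] = 2*(x*z - w*y);     R[2][1] = 2*(y*z + w*x);     R[2][2] = 1 - 2*(x*x + y*y);
}

Each quaternion+position pair then becomes a 4x4 affine matrix with R in the upper-left 3x3 block and the position in the last column; multiplying the two matrices composes them, and the product's rotation block can be converted back to a quaternion.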
So you have two transforms and you want to blend between them, with a parameter lambda that varies between 0 and 1 indicating how much of the second transform you want in the mix.

You should then use

translation = (1-lambda)*A.translation + lambda*B.translation
rotation = interpolate(A.rotation, B.rotation, lambda)

There are several methods for interpolating, but the main ones are slerp and its cheaper cousin nlerp.
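
For example, a minimal nlerp sketch (assuming the boost-style Q quaternion type from the code earlier in the thread):

// Normalized linear interpolation between two unit quaternions.
Q nlerp(Q a, Q b, float t) {
    // q and -q encode the same rotation; flip one operand if needed so
    // we blend along the shorter arc.
    float dot = a.R_component_1()*b.R_component_1()
              + a.R_component_2()*b.R_component_2()
              + a.R_component_3()*b.R_component_3()
              + a.R_component_4()*b.R_component_4();
    if (dot < 0) b = -b;
    Q q = a*(1 - t) + b*t;
    return q / abs(q); // abs() is the quaternion norm, so this renormalizes
}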

Is that what you were looking for?
^No, it has nothing to do with interpolation. Sorry.

The basic idea is (from the top post or so) that the raw data points for the vertices have been changed out from under the animation. The goal is then to make the animations work identically even though the vertex data is changed.

To achieve that, the animations also need to incorporate the transformations applied to the vertices (which, chances are, is something the animations were already doing, since usually you must transform the global coordinates into local space before animating them).
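
(In standard skinning terms that per-vertex pipeline is roughly v_out = BonePose * InverseBindPose * v_bind, i.e. the inverse bind transform is folded in just before the animated bone transform.)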

The thesis is: if you have two animations that do what you want, and the second is relative to the first (all changes occur after the first animation), then how do you combine them into one set of transformations?


In our case, the first animation pulls the model from its new bind pose (a snapshot of an animation frame chosen by a user) to its original bind pose (which its animations are tailored to), and the second animation is the original animation data...
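
Spelled out with the Movement composition from earlier in the thread (if I have the convention right): if the pull-back transform is B and the original animation is A, the combined movement applied to the new vertices should be "A after B", i.e. Movement(A) * Movement(B), with B on the right-hand side because m1*m2 applies m2 first.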

Now you can claim that, visually, post-animation the vertices are in their correct places, but in reality they are just being transformed. It may be that an intermediate transformation into one or the other space is required, or that a more analytical merge operation is called for. In the former case I can't see any obvious candidates, since an inverse of either would simply negate one or the other. In the latter case I don't know of any such algorithm, but of all the mathematical formulae out there I assume there must be one, and that there are enough inputs and outputs on the table to solve for a solution.

I am sorry, but I don't understand the way you express yourself. I guess if you could say what you need with perfectly precise language, you would probably also know how to do it. I was hoping to be able to fill in the gaps, but I don't think I can.

It's probably better to concentrate on individual transformations, instead of dealing with animations (since animations are complicated things, with many frames and many bones with transformations piled on top of each other). Try to find some very simple example so you can follow the computations by hand.
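
For instance: let U be a pure translation by (0,1,0), and let T be a 90-degree rotation about Z (taking (x,y,z) to (-y,x,z)) followed by a translation by (1,0,0). Then

T*U = (Rz90, (1,0,0) + Rz90*(0,1,0)) = (Rz90, (1,0,0) + (-1,0,0)) = (Rz90, (0,0,0))

Checking with P = (1,0,0): U(P) = (1,1,0), so T(U(P)) = Rz90*(1,1,0) + (1,0,0) = (-1,1,0) + (1,0,0) = (0,1,0), and (T*U)(P) = Rz90*(1,0,0) + (0,0,0) = (0,1,0). They agree.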

This topic is closed to new replies.
