Quaternion angle with itself > 0.001

Started by
19 comments, last by alvaro 9 years, 8 months ago


Sorry, I'm not sure I understand what you're trying to show me?

Looks like normalizing multiple times, with the results remaining stable and lengths equal to 1.00000000000000000000.





I understood what was happening, I just didn't understand the purpose of it. But now I notice he's using boost, so I think I get the point.



And now I think that you don't: Using boost has nothing to do with anything. My point is that perhaps your renormalization procedure is broken, since the straightforward code I wrote doesn't exhibit the undesired behavior you described. Perhaps you should post your renormalization code, because it might explain why you are seeing what you are seeing.


It's just a basic normalize method, scaling the quaternion by the inverse of its length [1 / sqrt( Dot(q,q) )] or [ 1 / sqrt(x*x + y*y + z*z + w*w) ]. But I'm pretty sure it's actually the way I compute the length that is causing the rounding error.

I've been looking at boost's abs() function. I'm having difficulty tracing it through all of their #define usage, but I believe it divides all of the quaternion's components by the largest absolute component value, squares and sums them, and then multiplies the resulting length by that largest absolute component value. I guess it may be worth the extra cycles to get rid of the small error.
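For reference, here is a minimal sketch of that scaling idea, written from the description above rather than from boost's actual source (the real type is assumed to be a typedef for float, as elsewhere in this thread):

#include <algorithm>
#include <cmath>

typedef float real; // assumed

real ComputeLengthScaled(real x, real y, real z, real w)
{
    // Use the largest absolute component as the scale factor.
    real m = std::max(std::max(std::fabs(x), std::fabs(y)),
                      std::max(std::fabs(z), std::fabs(w)));
    if (m == 0.0f)
        return 0.0f;

    // Scale the components so the largest becomes +/-1, square and sum,
    // then scale the square root back up by the same factor.
    real sx = x / m, sy = y / m, sz = z / m, sw = w / m;
    return m * std::sqrt(sx * sx + sy * sy + sz * sz + sw * sw);
}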

It seems that even boost's method doesn't fix the problem. It slightly improves the rate at which the error happens, but I'm still encountering an error every 4-5 random quaternions. For example, here are a few that boost's method does not normalize to a length of exactly 1:

0.17163053154945374000, 0.53784161806106567000, -0.38471391797065735000, 0.73024976253509521000

0.08311198651790618900, 0.03789226338267326400, 0.23039449751377106000, 0.96880078315734863000

-0.01666167378425598100, 0.09404919296503067000, -0.17354997992515564000, 0.98018240928649902000

I honestly don't believe the extra computations are worth it, if it only resolves the error in 1 out of every 5 problematic quaternions. Not unless I can come up with some alterations that eliminate it completely.

It's in the nature of floating-point numbers that the length of a vector cannot be made to be exactly 1. The question is, why is this a problem for you? Just make sure all of your code is tolerant to quaternions whose length is close to 1.
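To make that concrete, here is a minimal sketch of the kind of tolerance check that is usually enough (the IsNormalized name and the epsilon value are just placeholders, not from any particular library):

#include <cmath>

typedef float real; // assumed

// Treat the quaternion as normalized if its squared length is within
// epsilon of 1, instead of demanding exact equality.
bool IsNormalized(real x, real y, real z, real w, real epsilon = 1.0e-6f)
{
    real lenSq = x * x + y * y + z * z + w * w;
    return std::fabs(lenSq - 1.0f) <= epsilon;
}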

It's in the nature of floating-point numbers that the length of a vector cannot be made to be exactly 1. The question is, why is this a problem for you?

I only became concerned about the error when you suspected my normalization was broken, because I assumed you knew what you were doing. What was the point of taking my random quaternions and showing that you could make their length exactly one if they cannot be made to be exactly one?

Since small alterations to the math completely change which vectors normalize without visible error, posting code that shows perfect normalization is just a matter of choosing the right math for specific vectors. I just don't understand why you would want to.

Anyway, I was able to improve on boost's rate of about 75% to almost 80% (testing with completely random components, 80% of the quaternions normalized so that their length came out exactly 1). In addition, I was able to drop the need to use abs() on each component. I am still playing around with it, and will post the function code when I'm finished. If I can get the chance of error low enough, I may consider actually using it.

I apologize for the confusion I may have introduced. You hadn't posted any code (and still haven't), so I tried to reproduce your results and I failed. Then I posted my attempt and you took it as advice for how to do things, which it wasn't intended to be. The only part that you should learn from what I did is to show complete information, including code to go with what you observe, so others can reproduce what you see.

I've been toying around with the algorithm to compute the length, and I believe I've discovered a few strange things. The results I posted in the previous post were from generating random axis+angle rotations, so all of those quaternions were already close to being normalized (within 0.001 or so). I thought this might not be the best testing environment, so I changed it to generate random quaternions of two types (50% of each), both with completely random components (using boost's randomizer): the first type has a length ranging from 0.7 to 1.3, and the second type has a length ranging from 0.0001 to 1000.

With these quaternions, boost's method encounters error for about 45% of them, so apparently it is optimized to deal with nearly-normalized values (which makes sense). However, I've been goofing around, toying with numbers, and I've found that if I scale the squared components down by the square of the specific value 5.9292812347412109 and then multiply the resulting square root back up by that same value, as in the code below, I get no error in over 94% of all of the quaternions. I then tried using this value again with just the crazy quaternions (+/- 1000), getting 94% accuracy, and then just on nearly-normalized quaternions (+/- 0.3), getting 93% accuracy, so the quaternion values don't seem to influence it much at all.

I don't pretend to understand why it's happening, and was actually hoping someone out there may be able to shed some light on it. Also, I'm wondering if this is somehow machine-specific, or if it's something that can be used universally on the standard floating-point model. As a total side note, I was able to get 96% error-free with the value 83.852615356445313, but I like the idea of using a smaller value better, and the improvement wasn't significant.

Here's the code I've been messing around with. I apologize in advance if, like I said, I'm accidentally rigging the results in some way, or doing something really dumb here. But I appreciate anyone testing it out to see how well it works.




real ComputeLength(real x,real y,real z,real w)
{
    // square all components
    x *= x;
    y *= y;
    z *= z;
    w *= w;

    // prepare crazy scaler with inverse
    real mc = 5.9292812347412109f;
    real mi = 1 / mc;
    mi *= mi;

    // compute length scaled by crazy scaler inverse, then scale back to normal
    return mc * sqrt( x*mi + y*mi + z*mi + w*mi );
}
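For anyone who wants to try to reproduce the percentages above, here is a rough sketch of a test loop along the lines described. It uses std::mt19937 instead of boost's randomizer, assumes a typedef float real; plus the ComputeLength() function above, and the ranges, counts, and seed are only placeholders:

#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng(12345);
    std::uniform_real_distribution<float> comp(-1.0f, 1.0f);
    std::uniform_real_distribution<float> len(0.7f, 1.3f);

    const int total = 100000;
    int exact = 0;

    for (int i = 0; i < total; ++i)
    {
        // Random direction, rescaled to a random target length.
        float x = comp(rng), y = comp(rng), z = comp(rng), w = comp(rng);
        float l = std::sqrt(x * x + y * y + z * z + w * w);
        if (l == 0.0f)
            continue; // vanishingly unlikely degenerate sample
        float s = len(rng) / l;
        x *= s; y *= s; z *= s; w *= s;

        // Renormalize using the length function under test.
        float inv = 1.0f / ComputeLength(x, y, z, w);
        x *= inv; y *= inv; z *= inv; w *= inv;

        // Count how often the renormalized length is exactly 1.
        if (std::sqrt(x * x + y * y + z * z + w * w) == 1.0f)
            ++exact;
    }

    std::printf("length exactly 1: %d of %d (%.1f%%)\n",
                exact, total, 100.0 * exact / total);
    return 0;
}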

I apologize for the confusion I may have introduced. You hadn't posted any code (and still haven't), so I tried to reproduce your results and I failed.

I thought I did post all of the code you asked to see. As I said, my original normalization code literally multiplies the components of the quaternion by the inverse of its length - literally { this.Scale( 1 / sqrt(x*x + y*y + z*z + w*w) ); }, where Scale() simply multiplies the components by the scalar given. If there's something else you want to see, let me know.

The only part that you should learn from what I did is to show complete information, including code to go with what you observe, so others can reproduce what you see.

Well, I tried to post all of the code that I thought was relevant. But I'm inexperienced when it comes to battling floating point error, so I don't really know what is relevant. But you are free to look at any of my code you wish. Just let me know what you would like to see.

Hey guys,

I'm in the process of working out my math library, and so I was testing out a function that returns the angle between two quaternions..

{ return 2.0f * acos( Abs( Dot4( qa, qb ) ) ); }

.. but for some reason, I'm either getting a lot of floating point error in the result, or I'm not checking for a situation that I should be. While testing a quaternion (which was generated by a random axis+angle rotation and appears to be very close to normalized)..

{ x=0.0172970667 y=-0.0245058369 z=0.0205858145, w=-0.999337912 }

.. with itself, I'm getting a result angle of 0.00138106791 (or almost 0.1 degrees)..

I'm just wondering if this is acceptable error when working with float variables? And is there anything I can do to improve this issue other than switching to double type or something else as drastic?

edit note: After testing some more, the highest "error angle" I've been able to generate (through random axis-angles) is 0.001953125. And that was getting the angle (from itself) of a quaternion generated by the axis @ angle: { -0.833756,0.551120,-0.033417 @ 2.960138559341 } (quaternion result: { -0.830327,0.548853,-0.033279,0.090603 } )
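One thing worth doing regardless is clamping the acos() input, since rounding can push the dot product slightly past 1 and make acos() return NaN. A minimal sketch, assuming a Quat type and the Dot4()/Abs() helpers from the snippet above (the clamp itself does not fix the precision issue):

// Assumes the Quat type and the Dot4()/Abs() helpers used above.
float AngleBetween(const Quat& qa, const Quat& qb)
{
    float d = Abs(Dot4(qa, qb));
    if (d > 1.0f)
        d = 1.0f; // rounding can push |dot| just past 1, which is outside acos()'s domain
    return 2.0f * acos(d);
}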

Thank you

The reason the error gets so big is that the inverse cosine function is very steep around 1. This means that even the exact inverse cosine of a dot product that is only a tiny bit off will give a pretty big angle difference.

For example, the cosine of 0.1 degrees is 0.99999847691, so if your dot product comes out as 0.99999847691 (which is a pretty good approximation of 1), the angle you get will be around 0.1 degrees.

I bet the reason you got the correct result after renormalizing your quaternions is that this gave a dot product of exactly 1, but I don't think this will work for all quaternions. There will certainly be normalized quaternions whose dot product with themselves is not exactly equal to 1.

The good news is that it's only this bad when you're computing the angle between quaternions which are almost parallel. For quaternions which are not nearly parallel, the result will be more accurate.
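To put a number on that steepness: near 1, acos(1 - eps) is roughly sqrt(2 * eps), so a tiny error in the dot product turns into a comparatively large angle. Here is a small sketch that prints this for a few values (the last eps matches the 0.1-degree example above):

#include <cmath>
#include <cstdio>

int main()
{
    // 1 - cos(0.1 degrees) is about 1.523e-6; the smaller values show how
    // quickly the recovered angle grows with the error in the dot product.
    const double eps[] = { 1.0e-8, 1.0e-7, 1.0e-6, 1.523e-6 };
    for (double e : eps)
    {
        double rad = std::acos(1.0 - e);
        std::printf("dot = 1 - %.3e  ->  acos = %.4f degrees\n",
                    e, rad * 180.0 / 3.14159265358979323846);
    }
    return 0;
}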

This topic is closed to new replies.
