Demons infesting compiler


There is probably a very simple reason for this, but it really doesn't make sense to me. This code:

FLOAT dist_a = sqrtf( d ); // d == 99.999965f
FLOAT dist = dot - dist_a; // dot == dist_a == 9.9999990f
return dist;

returns 0.0f, while this code:

return dot - sqrtf( d );

returns -1.9073484e-007f. Woo hoo! The function is inline, if that makes any difference. Anyone know why this is happening? How do I know this won't happen somewhere in the other thousands of lines of code I wrote without knowing about this issue? [disturbed] Feel free to move this to General Programming, if it should be. I'm not sure where demons really fit [smile]

edit: adding 1 to -1.9073484e-7 with the Windows calculator turns it into 0.999999. This is good. But why -1.9073484e-007f and not -0.00001 or such? It's screwing up a whole host of other algorithms, and I don't understand why.

Share on other sites
Well....

Did you SET the value of d explicitly in code, e.g., did you write somewhere:

d = 99.999965f;

before going on to calc dist_a?

And, did you SET the value of dot to be exactly dist_a, e.g., did you write somewhere in code:

dot = dist_a;

If you did, then you've set yourself up to get exactly dot - dist_a = 0.

I suspect you did do something like this. The reason for the 1e-7 weirdness is, ultimately, due to roundoff error. If you calculate some dot product to get dot, using a set of operations, then some other set of operations to get d and dist_a, the different operations to get each value will result in very slight differences in the least significant digits (the ones furthest to the right of the decimal point) and this leads to the very small, but nonzero difference.

The reason I suspect you've created an artificially perfect test with the code you present is the following: your dot = dist_a = 9.999999 is NOT the square root of your d = 99.999965! If you take 9.999999 and square it, you get 99.99998 (not exact). The actual square root of 99.999965 is 9.9999982, in floating point precision, which isn't particularly accurate (square it and you'll come up short).

Share on other sites
Nah, I didn't set any values manually. They are from a sphere-intersect test:

FLOAT dot = ray_direction.Dot( offset_to_sphere );
FLOAT d = dot * dot - ( offset_to_sphere.GetLengthSq() - (sphere_radius * sphere_radius) );

As for the 9.999999 square root, I may have picked the wrong number or rounded the value from the break point value. Sorry about that.

edit: dot was 9.9999990f. I just assumed dist_a was the same because of the 0 return value. Still, my mistake.

Since I did manually set the position of the sphere and of a line that gets turned into a ray at a different angle, I do know that the distance from the ray origin to the sphere surface is very close to zero.

I'm still not sure why the same math gives different results just because I store the values in a temporary variable. Confusing.

Float errors make math coding a real headache. I constantly have to add tiny buffers to let rounding errors slip through :(

Share on other sites
This is somewhat compiler dependent, but intermediate values in floating point calculations are not necessarily of the same type/precision as the input values. For example, under MSVC (at least more recent versions) intermediate values are by default kept in a 53-bit-precision format. So by storing the value into a floating point variable you may be losing information that was present in the intermediate value; hence the difference.

Your compiler may specify a way to change the default precision of intermediate floating point computation values.
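For reference, these are the sorts of flags involved; availability and exact behavior depend on the compiler and version, so check your compiler's documentation:

```shell
# MSVC: floating point behavior model (VS2005 and later)
cl /fp:precise source.cpp   # default; allows higher-precision intermediates
cl /fp:strict  source.cpp   # stricter adherence to source-level precision

# GCC: force intermediates to be rounded to their declared type,
# or avoid the 80-bit x87 unit entirely on x86 by using SSE
gcc -ffloat-store source.c
gcc -msse2 -mfpmath=sse source.c
```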

Share on other sites
Thanks for the suggestion, I'll look through the compiler settings to see if I can find that [smile]

Oh, and the reason the other routines were messing up was that I forgot to allow for error when checking for less than zero. The -1.9073484e-007f value doesn't seem to bother anything otherwise. I guess I'm still pretty new to this floating point math deal.

Share on other sites
The real problem is in how you are comparing floating point numbers. With integers, (3/2)*2 != (3*2)/2; the same goes for floats. The rounding that takes place depends upon the order of operations. So rather than checking whether things are exactly equal, you check whether they are approximately equal. When dealing with numbers in the 10's, a number near 1e-7 should be considered zero.

Share on other sites
I'm not sure how I would always know what range of numbers I'm dealing with. Could be 100's, could be 1000's, could be 0-1. It all depends on the velocity, shape, and size of the objects.

I would prefer to use extremely large floating point storage space, if it meant nearly perfect results [smile]

Unfortunately, DirectX isn't compatible with 256 bit floats [grin]

Share on other sites
I don't see how it could be negative!
with the values you're giving us:
dot    = 9.9999990f = 0x411fffff
dist_a = 9.9999980f = 0x411ffffe
dot - dist_a = (not negative)

it's only 1 bit of precision, but no FPU will ever tell you that it's actually negative! Whatever rounding happened before this point, if you start with those values and then subtract them, the result can't be negative.

Can you generate the assembly output for the 2 cases (i.e. via temporary vs straight return)? Have you looked at the difference between debug and optimized compilations? Optimizations on should bypass temporaries completely.

Share on other sites
A large float wouldn't eliminate the problem. Numbers like 1/3 or 1/10 take an infinite number of digits in binary. No matter how large the representation, there are more numbers that cannot be represented exactly than numbers that can be. A billion-byte float, a billion-billion-byte float, it doesn't matter how big: there will still be rounding, there will be no significant difference in the probability that a given number is rounded, and exact comparisons will still fail as a result.

There is a big benefit to limiting your scale: it saves you a multiplication, which saves you time. If you do not know the scale of A and B, then fabs(A-B) < fabs(A)*epsilon is the way to check whether they are equal, assuming neither A nor B is zero. If either is zero, then you can't tell from A and B what the scale of the numbers is, and thus where the cutoff for zero should be.

Share on other sites
I had a very similar problem a while ago with MSVC 7. If I remember correctly, using double instead of float fixed it for me; I couldn't figure out why, though.

Share on other sites
It's important to realize that floating-point numbers aren't exact: they sometimes will lose information as they are manipulated. The value will be very close to what the actual value should be, but it will sometimes be slightly off.

The reason for this was that in the early days of computers, there was debate about a decimal number standard. The physicists wanted floating point numbers, as they were faster and consumed less memory, though they were less precise. It wasn't important to the physicists if the results were slightly off: a few minuscule fractions of a second or meter weren't that important in their field.

The business people, however, wanted fixed-point numbers. These weren't as fast, used more memory, and never lost accuracy. In the financial world, a loss of accuracy could mean a lot of lost money over a period of time.

In the end, the physicists and floating point decimal numbers won, since not only was memory very precious in those days, but physicists and other scientists were the main customers of computers. The business people weren't as important.

So that is an interesting bit of history (at least as told by my software engineering professor) about the origin of the common floating point number we use today.

Floating point numbers are nice when pinpoint accuracy is unimportant, like physics and computer graphics, but they can be bad news for financial applications.

My professor showed us what the IEEE floating point format looks like, and how some less significant bits can get lost as the decimal point moves around. The makeup of the data type itself is the reason for slight inaccuracies one sometimes sees. It was interesting stuff. Unfortunately, native fixed-point data types aren't terribly common, but there are libraries that provide fixed-point types.

Another thing you might want to note is that the more bits a floating-point type has, the less accuracy you lose. That's why such errors occur much less with doubles than with floats: doubles contain twice the number of bits that floats do.
