Floating point 'in a nutshell'

Started by
6 comments, last by MGB 19 years, 11 months ago
I have the ACM paper "What every computer scientist should know about floating point arithmetic", but it's just so big and nasty-looking and took me ages just to read the first few pages!! So can anyone just quickly explain the basic pitfalls of floating point numbers? E.g. what/when operations can be bad (inaccurate), why accuracy is affected, what to avoid doing etc. I know this is a complex in-depth topic, so just 'in a nutshell' sort of thing...

What I've read / picked up (possibly rubbish?):
a) Subtracting similar large floats is inaccurate.
b) Using large floats can be inaccurate (because of a?).
c) Adding small float to large may be inaccurate.

[edited by - aph3x on May 18, 2004 11:21:55 AM]
Beware of trying to use a float as an incrementer. index += 0.1 a thousand times does not come out to exactly 100 :-)... at least much of the time it doesn't. That is why no sane person would use floats for keeping track of monetary values. Division rounding can add up rather quickly too. If you're worried about speed, many times you can pull off with ints what you're trying to pull off with a float, only a lot faster. Another pitfall I've had with floats: watch out for casts. I've spent 3 hours trying to figure out why an algorithm wouldn't work, only to realize that even though I was assigning the result to a double, I was dividing an int by an int first.


Basically I would stay away from anything float where you need absolute precision. The reason accuracy is such a problem is that a float is limited in percision (Yes I'm probably spelling it wrong). A float on x86 is usually a 32-bit approximation of a decimal number. If you divide 1/3 and then multiply by 3 again, you may well get .9999999.... This is because even though you should be adding the same amount back, that .999 off to infinity was lost. Thus you go .9999999 out to however many decimal places a float holds (I don't feel like looking it up).


Floating point is great, though, for things that don't have to be exact, just really close. Such is the case with the real world: there is no way to measure exactly what the real world does, so why should it matter if you lose a little accuracy? BTW, what you read about floats is true, especially once you push a float toward the limits of its precision or range.
My name is my sig.
Floating point is interesting. There are very convincing reasons to use floating point and to avoid it.


Pros:
Very easy to represent virtually any real number (approximately, not precisely)
Floating point hardware has been getting much faster

Cons:
Different floating point hardware will produce *slightly* different results. This can be a problem for networked games.

Some floating point hardware is only barely compliant with the floating point specification. Underflow (the smallest, denormalized floats) can cause operations to take 100 times as long as normal.

[edited by - pTymN on May 18, 2004 11:34:50 AM]

In general, most of the problems of floating point numbers can be understood by understanding that a floating point number only has so many significant digits. For C/C++, a float (32 bits) has roughly 6 decimal digits of accuracy. That means that 0.100001 is fine, because 100001 is six digits' worth. However, if you tried to store 0.10000001 in a float, you'd lose that 1 on the end. Actually, you wouldn't simply lose it: after enough digits, the insignificant digits look "random", since floating point is binary but you're viewing it in decimal. Beyond 6 digits, things simply get quite inaccurate.

Thus, to explain a few of the common things you mentioned avoiding, in light of this information:

Subtracting similar floats:

If you have two large floats, 123456789.0 and 123456889.0, you have to realize that they're probably already not as accurate as you think, since all that you can be guaranteed of are the first 6 digits. Thus, even though the seventh digit should be different, it's hard to tell what will result if you subtract the two.

Using large floats:

Using large floats isn't a problem by itself. The problem is when you use large floats but still want accuracy at the ones, tens, or hundreds level. If you have a number like 1.0e20, then you can't expect to be accurate at the ones or tens level. But this applies to small numbers, too. If you have 1.0e-20, then this is a small number, but don't expect accuracy beyond 1.0e-26 or so.

Adding small float to large float:

If you have two numbers: 10000.0 and 0.00001, and you add them, you should get:

  10000.00000
+     0.00001
-------------
  10000.00001

But you'll notice that this is plenty more than 6 digits. That last 1 is gonna be lost in a bunch of inaccuracy.


double (64-bit) has roughly 15 digits of accuracy, so it's a lot better than float, but it still has the same problems; it just takes more to get the problems to occur.
"We should have a great fewer disputes in the world if words were taken for what they are, the signs of our ideas only, and not for things themselves." - John Locke
Thanks for the replies u 3 - some good info in there.
quote:a float is limited in percision (Yes I'm probably spelling it wrong)


precision

spelling ninja attack


"A soldier is a part of the 1% of the population that keeps the other 99% free" - Lt. Colonel Todd, 1/38th Infantry, Ft. Benning, GA
quote:Original post by Aph3x
I have the ACM paper "What every computer scientist should know about floating point arithmetic", but it's just so big and nasty-looking and took me ages just to read the first few pages!!
...I know this is a complex in-depth topic, so just 'in a nutshell' sort of thing...
Knowing the rules of thumb isn't going to help if you don't understand the issues and the math behind them. For example, you have your three (mostly correct) little rules, but do you have any idea how to change your code to accommodate them?

Here's a useful rule -- Rather than checking if two floats are equal, check if they are close. So, how close is close enough? You could pick a number that seems ok, but how do you know it is really ok?

On the other hand, the paper you cited has a big problem. Even though it is titled "What every computer scientist should know...", it is far too difficult for every computer scientist to read. It also contains much more than what every computer scientist needs to know.

Unfortunately, I don't know a good place to learn about the intricacies of floating point. Hopefully, somebody can suggest a "Floating Point For Dummies".
Hmm... looks like I may have to try and distill the ACM paper then :o

