C++'s floats not very good?

27 comments, last by SiCrane 13 years, 4 months ago
I've found that C++'s float values seem to alter themselves. For example, I set a float variable to 0.795, and I check it while the program is running to find that it's now 0.795000000129748. Is there some reason why this is happening? Does it always happen?
It's very annoying, and it makes equality impossible. Instead of:
if(floatVar == 0.795)


You have to use the infinitely more complex:
if((floatVar >= 0.795) && (floatVar < 0.910))

Where 0.910 stands for some other value.
Not all real values can be accurately represented in a fixed-space binary encoding on a computer -- thus, all floats have these inherent problems. You should almost never use pure equality to compare floats, instead preferring epsilon-based range checks. Read this.
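A typical absolute-tolerance check looks something like the sketch below; the helper name and the 0.0001f tolerance are just illustrative -- the right tolerance depends on your data:

#include <cmath> // std::fabs

// true if a and b differ by less than the given tolerance
bool nearlyEqual(float a, float b, float epsilon = 0.0001f)
{
    return std::fabs(a - b) < epsilon;
}

// usage: if (nearlyEqual(floatVar, 0.795f)) { ... }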
It's not C++, but the nature of floating point numbers.

Google - floating point accuracy problems

"I can't believe I'm defending logic to a turing machine." - Kent Woolworth [Other Space]

It is pretty appalling how badly many introductory books fail to emphasise this property of floating point arithmetic.
I'm sure the links posted already provide all the gory details, but to summarize, what you are seeing is not a bug -- it's a feature of floating point numbers.

Take a 32-bit integer -- it can hold a number between around -2 billion and around +2 billion signed, or 0 to around 4 billion unsigned. Either way, a 32-bit number gives you 2^32 "increments", and they are evenly distributed.

With floating point, the format was designed to solve the problem of representing both *really* small numbers and *really* big numbers with one format. A 32-bit floating point value can represent a number as large as ~3.4028234 × 10^38 -- that's a number about 29 orders of magnitude larger than an integer value containing the same number of bits. At the same time, it can represent a number as small as 1.17549435 × 10^-38 -- to translate to decimal, that's:

0.0000000000000000000000000000000000000117549435

If you encode that number using a fixed-point format, you'd need 128 bits and you'd have a range of 0 to just shy of 4. This is why floating point is cool, if a bit strange.
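If you want to check those limits on your own compiler, <limits> will report them; a quick sketch (the quoted values assume the usual IEEE-754 single-precision float):

#include <iostream>
#include <limits>

int main()
{
    // largest finite float, and smallest positive *normalized* float
    std::cout << std::numeric_limits<float>::max() << "\n"; // ~3.40282e+38
    std::cout << std::numeric_limits<float>::min() << "\n"; // ~1.17549e-38
}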

How does floating point get so much range out of 32 bits? Well, it gives up on having a linear distribution of "increments". The key observation behind floating point is that really big numbers don't need to be nearly as accurate to be "in the ballpark", and conversely, really small numbers have to be very, very close to their real value to be in the same (relative) ballpark. So that's what they designed it around -- the difference between the smallest representable number and the next smallest is super small, but at the other end of the scale, the difference between the largest possible number and the next largest is comparatively massive. I'm not sure of the exact difference, but I'd wager it's tens of orders of magnitude larger than the difference between the smallest representable numbers.
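You can measure that spacing directly with std::nextafter; a small sketch (the printed gaps assume IEEE-754 single precision):

#include <cmath>   // std::nextafterf
#include <iostream>

int main()
{
    float small = 1.0f;
    float big   = 1.0e30f;

    // gap between each value and the very next representable float above it
    std::cout << std::nextafterf(small, 2.0f * small) - small << "\n"; // ~1.19e-07
    std::cout << std::nextafterf(big,   2.0f * big)   - big   << "\n"; // ~7.6e+22
}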

In scientific applications, things that are small tend to be *really small* -- say, the distance between atoms in some material; things that are large tend to be *really large* -- say, the distance between solar systems. So if you're planning a deep space mission and your calculations are 1000 km off, the error is basically nothing, but if you're mapping the structure of steel or something and your calculations are off by even one one-thousandth of a micrometer, then you're not even close.

Also, if your calculations mix numbers from opposite ends of this spectrum, you'll shed accuracy like mad, so do your best to structure your calculations such that the numbers being operated on are of a similar magnitude where possible.
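Here's a tiny sketch of what that loss looks like when magnitudes are badly mismatched (the result assumes IEEE-754 single precision):

#include <iostream>

int main()
{
    float big   = 100000000.0f; // 1e8: adjacent floats here are about 8 apart
    float small = 1.0f;

    // 1.0f is smaller than the gap between floats near 1e8,
    // so adding it changes nothing at all
    std::cout << (big + small) - big << "\n"; // prints 0
}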

Basically, you just have to deal with it. The tolerance that makes floating point equality "fuzzy" is called an "epsilon" value. For single-precision floats an epsilon value of 0.000001 is, I believe, fairly standard.

throw table_exception("(ノ ゜Д゜)ノ ︵ ┻━┻");

Or, put another way: try to write down 1 divided by 3 exactly as a finite decimal. It's not possible, and that's essentially the same problem the CPU has when storing some decimal values in binary.
Quote:
if(floatVar == 0.795)
You have to use the infinitely more complex:
if((floatVar >= 0.795f) && (floatVar < 0.910))

Faster, and less arbitrary:
#include <limits> // for std::numeric_limits
float temp(floatVar - 0.795f);
// squaring the difference avoids a call to std::fabs
if (temp * temp < std::numeric_limits<float>::min()) ...
Depending on the mathematical operations used to reach the value of the variables, you might need to compare to numeric_limits<float>::epsilon() or a multiple thereof instead.
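One common way to do that is to scale the epsilon by the magnitude of the inputs; a sketch along those lines (the function name and the factor of 4 are just illustrative choices, and a check like this still needs extra care when one of the values is exactly zero):

#include <algorithm> // std::max
#include <cmath>     // std::fabs
#include <limits>

// relative comparison: the allowed difference grows with the size of the inputs
bool nearlyEqualRelative(float a, float b)
{
    const float scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= scale * 4.0f * std::numeric_limits<float>::epsilon();
}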

I would very strongly recommend you read the first chapter of Modern Mathematical Methods for Physicists and Engineers.

I swear to gawd that a computer architecture course should be an absolute requirement before any programming course in an accredited CS program.
"But who prays for Satan? Who, in eighteen centuries, has had the common humanity to pray for the one sinner that needed it most?" --Mark Twain

~~~~~~~~~~~~~~~Looking for a high-performance, easy to use, and lightweight math library? http://www.cmldev.net/ (note: I'm not associated with that project; just a user)
Quote:Original post by Prune
I swear to gawd that a computer architecture course should be an absolute requirement before any programming course in an accredited CS program.

In my experience of taking a computer architecture course floats aren't covered. It's more about memory, caches, pages, and pipelines. Floating point representation was covered in the assembly class. :\ Just goes to show that not every class covers the same stuff at every university.
Quote:Original post by Sirisian
Quote:Original post by Prune
I swear to gawd that a computer architecture course should be an absolute requirement before any programming course in an accredited CS program.

In my experience of taking a computer architecture course floats aren't covered. It's more about memory, caches, pages, and pipelines. Floating point representation was covered in the assembly class. :\ Just goes to show that not every class covers the same stuff at every university.


For me, yes -- I'm just finishing up computer architecture and we covered floating point numbers, from their internal representation to algorithms for truncation and so on.
Edge cases will show your design flaws in your code!
Visit my site
Visit my FaceBook
Visit my github
Quote:Original post by Sirisian
Quote:Original post by Prune
I swear to gawd that a computer architecture course should be an absolute requirement before any programming course in an accredited CS program.

In my experience of taking a computer architecture course floats aren't covered.


Bizarrely, at my university, in my program, I found out that this was covered (at least to some extent) in the "basic" stream of the first-year CSC course, but not in the "advanced" stream (which I was in). o_O Of course, it was more in the form of random facts that they had to memorize for the exam, and relatively little to do with actually writing software; I don't think any of the assignments for either version of the course really required floating-point calculations in any meaningful way... my memory of this is really foggy, though.

This topic is closed to new replies.
