# Is there any danger in rounding floating points?


## Recommended Posts

The common method I see for dealing with floating point errors usually consists of
if( fabs( expectedValue - n ) < 0.00001 )
{
    // they're equal
}


The problem with this method is that I'm trying to calculate offsets, so I can't have an expected value for something I'm trying to calculate. My current method of dealing with floating point errors is this function:
float rndf( float n, unsigned int p )
{
    return floor( n * pow( 10.0, p ) + 0.5 ) / pow( 10.0, p );
}


I round anything that could cause something to blow up, usually to the 5th decimal place. With physics, for example, I do any rounding at the end of the important calculations. Is there any danger in using this method I should be aware of?

##### Share on other sites
Floating-point values are already as precise as they can be, so there's no point in rounding; in the worst case it only introduces error. If you don't have a known value to compare against, then you don't really know what the actual error is anyway, only a bound of ±X based on the calculations, the CPU, etc.

##### Share on other sites
Quote:
 Original post by Zipster: there's no point in rounding

Not exactly.

I have a sweep test algorithm that works like this:
// Find all potential collisions and sort them by time
while( there are collisions to handle )
    // handle the next collision and find any new potential ones

The problem with my sweep test is that I have no way of handling overlaps. Theoretically, this shouldn't be a problem because my objects are never supposed to overlap.

This is where floating point errors get to be annoying. I can get situations like:
// Before: xPos = 40, xVel = -100, time = 0.4
xPos += xVel * time;
// After: xPos = -5.96046e-007

That -5.96046e-007 is supposed to be 0, but the sweep test algorithm doesn't care. Next time it looks for a collision it'll find the overlap and think the next time of collision is 0 seconds. Floating point errors usually show up around the 6th digit, so if I round to the 5th digit, the problems go away.

I just want to make sure they don't lead to new ones later on down the road.
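For reference, here's a minimal sketch of the kind of epsilon clamp that would catch this case before the sweep test sees it (snap_to_zero and EPSILON are names I made up, and the tolerance needs tuning to your units):

```cpp
#include <cmath>

// Snap results that land within some tolerance of zero back to
// exactly zero, so a stray -5.96046e-007 can't register as an overlap.
const float EPSILON = 1e-5f;

float snap_to_zero(float x)
{
    return std::fabs(x) < EPSILON ? 0.0f : x;
}
```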

##### Share on other sites
Quote:
 Original post by Lazy Foo: That -5.96046e-007 is supposed to be 0, but the sweep test algorithm doesn't care. Next time it looks for a collision it'll find the overlap and think the next time of collision is 0 seconds. Floating point errors usually show up around the 6th digit, so if I round to the 5th digit, the problems go away. I just want to make sure they don't lead to new ones later on down the road.
In that case, your "sweep test algorithm" is simply missing the classic fabs/epsilon test you mentioned at the top.

Using your function to round in cases other than when values are around 1.0 will easily spell disaster.

Floating point error doesn't occur at a particular number of decimal places, or binary places for that matter. If your numbers are in the 8-digit range, then you'll get inaccuracy in the last few digits before the decimal point. If you are dealing with numbers that are normally less than 0.00000001, then you won't see any error until about 6 decimal places after that.
This brings me to the point that even the original test is flawed. Better tests do something like:
if (fabs(n/expectedValue - 1) < 0.00001)
This looks for a percentage error rather than an absolute error, and so will identify numbers that should compare equal, no matter how large or small they are.
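A sketch of such a relative-error test (note that the n/expectedValue form above divides, so it needs a special case when the expected value is zero; scaling the tolerance by the larger magnitude avoids the division entirely; nearly_equal and relTol are names I made up):

```cpp
#include <cmath>
#include <algorithm>

// Relative-error comparison: the allowed difference grows with the
// larger of the two magnitudes, so big and small numbers are treated
// consistently. relTol is an illustrative tolerance, not a universal one.
bool nearly_equal(float a, float b, float relTol = 1e-5f)
{
    float diff    = std::fabs(a - b);
    float largest = std::max(std::fabs(a), std::fabs(b));
    return diff <= largest * relTol;
}
```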

As Zipster states: "there's no point in rounding" - QFT.
Not only that, but 'pow' is pretty slow anyway!

##### Share on other sites
Quote:
 Original post by iMalc: [...] Not only that, but 'pow' is pretty slow anyway!
If you're using integer exponents with 'pow', my experience is that it is worth it to implement your own 'ipow' function using exponentiation by squaring (with the recursion converted to iteration).
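A sketch of that iterative exponentiation by squaring (ipow is a made-up name, and only non-negative integer exponents are handled):

```cpp
// Exponentiation by squaring, iterative form: O(log exp)
// multiplications instead of pow's general-purpose machinery.
float ipow(float base, unsigned int exp)
{
    float result = 1.0f;
    while (exp > 0)
    {
        if (exp & 1u)       // odd exponent: fold the current base in
            result *= base;
        base *= base;       // square the base
        exp >>= 1;          // halve the exponent
    }
    return result;
}
```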

##### Share on other sites
Quote:
 Original post by Lazy Foo: That -5.96046e-007 is supposed to be 0, but the sweep test algorithm doesn't care. Next time it looks for a collision it'll find the overlap and think the next time of collision is 0 seconds. Floating point errors usually show up around the 6th digit, so if I round to the 5th digit, the problems go away. I just want to make sure they don't lead to new ones later on down the road.

They might happen around the 6th digit when you're testing with numbers in the 0.0 - 100.0 range, but the magnitude of floating point inaccuracy depends on the number's size. If your object is located at (10000.0, -5000.0) you'll see much larger errors.

For example, the next representable floating point number after 2.0 is 2.00000024. There are no numbers in between that a float can represent, so the inaccuracy appears to start after about the 6th decimal place. The next representable floating point number after 20,000,000, however, is 20,000,002. That's right, now we're dealing with floating point inaccuracies in whole numbers.
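This is easy to verify with std::nextafter, which returns the adjacent representable value (a quick sketch; gap_above is a name I made up):

```cpp
#include <cmath>

// Gap between x and the next representable float above it:
// roughly 2^-23 times x's magnitude for normal numbers.
float gap_above(float x)
{
    return std::nextafter(x, INFINITY) - x;
}
```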

There's an interesting article explaining a technique that can be used to compare floating point numbers safely, independent of their magnitude.

http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm

-Markus-
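The core trick in the article above can be sketched like this: reinterpret each float's bits as an integer on a scale that is monotonic in the float's value, then compare the integer distance in ULPs (units in the last place). The names below are mine, and NaN/infinity handling is omitted:

```cpp
#include <cstdint>
#include <cstring>
#include <cstdlib>

// Map a float's bit pattern onto an integer line that increases
// monotonically with the float's value, so the distance between two
// floats counts how many representable values lie between them.
static std::int64_t float_order(float f)
{
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    if (bits & 0x80000000u)   // negative floats run backwards in bit order
        return -static_cast<std::int64_t>(bits & 0x7FFFFFFFu);
    return static_cast<std::int64_t>(bits);
}

bool almost_equal_ulps(float a, float b, std::int64_t maxUlps)
{
    return std::llabs(float_order(a) - float_order(b)) <= maxUlps;
}
```

One caveat the article itself raises: values very close to zero are still millions of ULPs away from 0.0f, so this is usually combined with an absolute-epsilon check near zero.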

##### Share on other sites
Well I just fixed my sweep test algorithm.

I really wish somebody had just told me to never, ever compare floating point numbers directly.

[Edited by - Lazy Foo on January 6, 2009 11:04:34 PM]

##### Share on other sites
Quote:
 Original post by Lazy Foo: Well I just fixed my sweep test algorithm. I really wish somebody had just told me to never, ever compare floating point numbers directly.
It's not so much that you should never compare them directly, it's just that you should never compare them directly when you can't afford to get a wrong answer.
For example, if you are, say, z-sorting polygons, you can quite happily use less-than to sort them. Sure, you could use some kind of epsilon test to ensure that one number really is less than another in such a case, but that's just burning more CPU time for no good reason.

##### Share on other sites
For proper enlightenment, here's a gem:

• Goldberg, "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (html, pdf)

It really is not only for scientists, but for every graphics programmer and anyone who relies on floats. It tells you everything about IEEE floating point numbers, and if you ever intend to seriously discuss these issues, grok it (I haven't read it completely, but then I also don't discuss this topic :P). Have fun!

##### Share on other sites
Quote:
Original post by iMalc
Quote:
 Original post by Lazy Foo: Well I just fixed my sweep test algorithm. I really wish somebody had just told me to never, ever compare floating point numbers directly.
It's not so much that you should never compare them directly, it's just that you should never compare them directly when you can't afford to get a wrong answer.
For example, if you are, say, z-sorting polygons, you can quite happily use less-than to sort them. Sure, you could use some kind of epsilon test to ensure that one number really is less than another in such a case, but that's just burning more CPU time for no good reason.

Here's another interesting article that shows how hard it is to get it right. ;)

Most computer systems just work with numbers at a fixed precision that’s immediately available from the underlying hardware. And if one can’t do anything to increase the precision, it’s simply not possible to always get the right answers for binary-to-decimal and decimal-to-binary conversions.
