Cos is not returning 0 for 90 degrees or PI/2

32 comments, last by ApochPiQ 11 years, 11 months ago

[quote name='Zoomulator' timestamp='1337590478' post='4941864']
Not if you're calculating the mass defect of an atom nucleus decaying by beta-radiation

In which case choosing stock float as the data type to use is quite obviously a baaaaaaaaaaaaaaaaaad idea.
[/quote]
True enough... though a double would work fine.

Not if you're calculating the mass defect of an atom nucleus decaying by beta-radiation

Which he isn't doing...


[quote name='mhagain' timestamp='1337591469' post='4941868']
In which case choosing stock float as the data type to use is quite obviously a baaaaaaaaaaaaaaaaaad idea.

True enough... though a double would work fine.
[/quote]
A [font=courier new,courier,monospace]double[/font] only has about 15-17 decimal digits of significant precision. Yes, it's more than a [font=courier new,courier,monospace]float[/font], but it all depends on what you're doing with your data. Both [font=courier new,courier,monospace]float[/font]s and [font=courier new,courier,monospace]double[/font]s can approximate 4.014e-31, which isn't the problem (and consequently, both can approximate 4.014e-31 + 4.014e-31 pretty well). Usually, the problem isn't "Can a [font=courier new,courier,monospace]float[/font]/[font=courier new,courier,monospace]double[/font] represent this number?" but instead "Can a [font=courier new,courier,monospace]float[/font]/[font=courier new,courier,monospace]double[/font] approximate the mathematical operation between these two (or more) numbers?" Just throwing out a small number doesn't mean that [font=courier new,courier,monospace]float[/font]s are ruled out; it all depends on what you're doing with that small number.

I'm not saying a [font=courier new,courier,monospace]double[/font] wouldn't work or that it wouldn't be the better choice in your case. I'm saying it isn't the number that's usually limiting you, it's usually the numbers and what you're doing with the numbers. It's a subtle but significant difference of focus.

I'm not saying a [font=courier new,courier,monospace]double[/font] wouldn't work or that it wouldn't be the better choice in your case. I'm saying it isn't the number that's usually limiting you, it's usually the numbers and what you're doing with the numbers. It's a subtle but significant difference of focus.

Yup, which is why the other discussion is pointless. The OP said what he was doing.

The OP is computing a cosine.

For the operation the scale of the result is at most 1 (cosine's range is [-1, 1]).

Considering the range of the result, the margin of error is based on 1.0*FLT_EPSILON, which in turn puts his stated result as within the error tolerance of 0.

[quote name='taby' timestamp='1337410191' post='4941368']
it's likely because the person who coded the calculator put in a special conditional statement that says something similar to "if(answer < epsilon && answer > -epsilon) { answer = 0; }", where epsilon is some tiny number like 1e-7.

It turns out the language has a constant for the actual epsilon that should be used for that. Proper use of FLT_EPSILON would apply here.

( In this case, the answer is roughly a third of FLT_EPSILON, so it falls below the rounding error. )
[/quote]

There's always numeric_limits' epsilon(), and that would technically be the proper use of the language, but whatever you say.

Calculators generally work to a much higher precision than what they display. For example, the Windows calculator (at least in recent versions of Windows) has about 40 significant digits of precision (calculate sqrt(4) - 2 to see that). They then round the answer to fit the number of digits on the display.

Note that FLT_EPSILON is simply the minimum positive float such that 1.0f + FLT_EPSILON != 1.0f, so the epsilon value you want to use depends on the magnitude of the expected answer. In the case of sin and cos it's probably the right epsilon to use, but in other cases it won't be.


Thanks for that. :)

What Every Computer Scientist Should Know About Floating-Point Arithmetic

-4.37114e-008 = -.0000000437114 (which is pretty dang close to zero)


Add this to that list (which makes the ramifications of difference in order of magnitude a bit more transparent):
http://www.drdobbs.com/cpp/184403224

[quote name='Cornstalks' timestamp='1337608410' post='4941910']
I'm not saying a [font=courier new,courier,monospace]double[/font] wouldn't work or that it wouldn't be the better choice in your case. I'm saying it isn't the number that's usually limiting you, it's usually the numbers and what you're doing with the numbers. It's a subtle but significant difference of focus.

Yup, which is why the other discussion is pointless. The OP said what he was doing.

The OP is computing a cosine.

For the operation the scale of the result is at most 1 (cosine's range is [-1, 1]).

Considering the range of the result, the margin of error is based on 1.0*FLT_EPSILON, which in turn puts his stated result as within the error tolerance of 0.
[/quote]

Not to be nit-picky or anything, but it's not computing the cosine that introduces an error here: The error in this situation comes from not being able to represent pi/2 precisely as a float. We are computing the cosine of pi/2+epsilon, which should be very close to -epsilon.

The smallest value of epsilon for which pi/2+epsilon is representable is .00000004371139000186... , so the number the OP was getting is very well explained by my story, and the error introduced by the computation of the cosine is a second-order effect.

Not to be nit-picky or anything, but it's not computing the cosine that introduces an error here: The error in this situation comes from not being able to represent pi/2 precisely as a float. We are computing the cosine of pi/2+epsilon, which should be very close to -epsilon.

The smallest value of epsilon for which pi/2+epsilon is representable is .00000004371139000186... , so the number the OP was getting is very well explained by my story, and the error introduced by the computation of the cosine is a second-order effect.


I don't think you're being too nitpicky. While both sides of the coin apply here, the error does technically spring from the imperfect representation of pi. I should have mentioned [font=courier new,courier,monospace]float pi_half = acos(0.0f); cout << cos(pi_half) << endl;[/font] and [font=courier new,courier,monospace]epsilon()[/font] alongside the mention of irrational numbers to explain what I meant a little better; that snippet is how I indirectly double-checked the OP's definition of pi.

I also mentioned digital computers to contrast against analog computers, but again, I didn't explain very well what I meant.
Some very good information is explained on this blog. Actually most (if not all) of his recent blog posts refer to floating point calculations and the mystery behind them.

Scroll down to the 'Catastrophic cancellation, hiding in plain sight' section.
http://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/

Alvaro basically hit it spot on. i.e. You're not calculating the sin of PI... you're calculating the sin of (float)pi, or (double)pi. PI, as a whole, can't be represented as a float or even a double. So there is error in the calculations.

Some very good information is explained on this blog. Actually most (if not all) of his recent blog posts refer to floating point calculations and the mystery behind them.

Scroll down to the 'Catastrophic cancellation, hiding in plain sight' section.
http://randomascii.w...s-2012-edition/

Alvaro basically hit it spot on. i.e. You're not calculating the sin of PI... you're calculating the sin of (float)pi, or (double)pi. PI, as a whole, can't be represented as a float or even a double. So there is error in the calculations.


Yeah... because it's irrational...

This topic is closed to new replies.
