
# Cos is not returning 0 for 90 degrees or PI/2

Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

33 replies to this topic

### #1 Muzzy A (Members) - Reputation: 307

Posted 18 May 2012 - 11:21 PM

```
Vector2 Rotate( Vector2 &vec, float Angle )
{
    /*
       cos -sin
       sin  cos
    */

    // When Angle is PI/2 (90 degrees), this doesn't return 0; we get 4.37114e-008
    float Cos = cos(Angle);
    float Sin = sin(Angle);

    Vector2 rot = Vector2( Cos*vec.x - Sin*vec.y, Sin*vec.x + Cos*vec.y );
    return rot;
}

// If I pass this vector
Vector2 vec = Vector2(0,1);
vec = Rotate(vec, PI*.5f);

// The answer should be '(1,0)'
// But I ACTUALLY get '(-1,-4.37114e-008)'....

/* Anyone know what's going on here? Are sin and cos incapable of returning 0? */
```

Edited by Muzzy A, 18 May 2012 - 11:22 PM.

### #2 taby (Members) - Reputation: 285

Posted 19 May 2012 - 12:49 AM

I get the same behaviour: the answer is -4.37114e-008 for float and 6.12323e-017 for double. As for why a calculator gives an answer of precisely zero, it's likely because the person who coded the calculator put in a special conditional statement that says something similar to "if(answer < epsilon && answer > -epsilon) { answer = 0; }", where epsilon is some tiny number like 1e-7.

Don't lose sleep over it.

Edited by taby, 19 May 2012 - 12:53 AM.

### #3 Álvaro (Members) - Reputation: 5845

Posted 19 May 2012 - 12:51 AM

> // The answer should be '(1,0)'
> // But i ACTUALLY get '(-1,-4.37114e-008)'....

Well, the answer should be (-1,0), which it basically is... You can't expect infinite precision in floating-point operations.

EDIT: If instead of an angle you pass a unit vector containing the angle's cosine and sine, you can just do this and avoid the problem:
```Vector2 rotate(Vector2 vec, Vector2 rot) {
    return Vector2(vec.x*rot.x - vec.y*rot.y, vec.x*rot.y + vec.y*rot.x); // Notice this is just multiplication of complex numbers!
}

Vector2 vec = Vector2(0,1);
vec = rotate(vec, Vector2(0,1));
```

Edited by alvaro, 19 May 2012 - 01:05 AM.

### #4 Álvaro (Members) - Reputation: 5845

Posted 19 May 2012 - 12:54 AM

Not that it has anything to do with your question, but `vec` should not be a non-const reference: either make it `const` or remove the `&`.

### #5 Cornstalks (Moderator) - Reputation: 5392

Posted 19 May 2012 - 01:13 AM

What Every Computer Scientist Should Know About Floating-Point Arithmetic

-4.37114e-008 = -.0000000437114 (which is pretty dang close to zero)
[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

### #6 Muzzy A (Members) - Reputation: 307

Posted 19 May 2012 - 02:58 AM

> What Every Computer Scientist Should Know About Floating-Point Arithmetic
>
> -4.37114e-008 = -.0000000437114 (which is pretty dang close to zero)

oh, i assumed it was a garbage number that it spit out cause it didn't know what to do lol

thanks guys

> // The answer should be '(1,0)'
> // But i ACTUALLY get '(-1,-4.37114e-008)'....
>
> Well, the answer should be (-1,0)

lol oops

Edited by Muzzy A, 19 May 2012 - 03:00 AM.

### #7 frob (Moderators) - Reputation: 7736

Posted 19 May 2012 - 11:12 PM

> it's likely because the person who coded the calculator put in a special conditional statement that says something similar to "if(answer < epsilon && answer > -epsilon) { answer = 0; }", where epsilon is some tiny number like 1e-7.

It turns out the language has a constant for the actual epsilon that should be used for that. Proper use of FLT_EPSILON would apply here.

(In this case, the answer is roughly three times smaller than FLT_EPSILON, i.e. comfortably within rounding error.)

### #8 Adam_42 (Members) - Reputation: 1412

Posted 20 May 2012 - 05:55 AM

Calculators generally work to a much higher precision than what they display. For example, the Windows calculator (at least in recent versions of Windows) has about 40 significant digits of precision (calculate sqrt(4) - 2 to see this). The answer is then rounded to fit the number of digits on the display.

Note that FLT_EPSILON is the difference between 1.0f and the next representable float (roughly, the smallest x for which 1.0f + x != 1.0f), so the epsilon value you want to use depends on the magnitude of the expected answer. In the case of sin and cos, whose results lie in [-1, 1], it's probably the right epsilon to use, but in other cases it won't be.

### #9 Zoomulator (Members) - Reputation: 269

Posted 21 May 2012 - 02:54 AM

> -4.37114e-008 = -.0000000437114 (which is pretty dang close to zero)

Not if you're calculating the mass defect of an atomic nucleus decaying by beta radiation, which I currently have to do for class... 4.014e-31 kg...

### #10 mhagain (Members) - Reputation: 3827

Posted 21 May 2012 - 03:11 AM

> Not if you're calculating the mass defect of an atom nucleus decaying by beta-radiation

In which case choosing stock float as the data type to use is quite obviously a baaaaaaaaaaaaaaaaaad idea.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.

### #11 Zoomulator (Members) - Reputation: 269

Posted 21 May 2012 - 03:42 AM

> Not if you're calculating the mass defect of an atom nucleus decaying by beta-radiation
>
> In which case choosing stock float as the data type to use is quite obviously a baaaaaaaaaaaaaaaaaad idea.

True enough.. though a double float would work fine.

### #12 Cornstalks (Moderator) - Reputation: 5392

Posted 21 May 2012 - 07:53 AM

> Not if you're calculating the mass defect of an atom nucleus decaying by beta-radiation

Which he isn't doing...

> In which case choosing stock float as the data type to use is quite obviously a baaaaaaaaaaaaaaaaaad idea.
>
> True enough.. though a double float would work fine.

A double only has about 15-17 decimal digits of significant precision. Yes, it's more than a float, but it all depends on what you're doing with your data. Both floats and doubles can approximate 4.014e-31, which isn't the problem (and subsequently, both can approximate 4.014e-31 + 4.014e-31 pretty well). Usually, the problem isn't "Can a float/double represent this number?" but instead "Can a float/double approximate the mathematical operation between these two (or more) numbers?" Just throwing out a small number doesn't mean that floats are ruled out; it all depends on what you're doing with that small number.

I'm not saying a double wouldn't work or that it wouldn't be the better choice in your case. I'm saying it isn't the number that's usually limiting you, it's usually the numbers and what you're doing with the numbers. It's a subtle but significant difference of focus.

### #13 frob (Moderators) - Reputation: 7736

Posted 21 May 2012 - 10:23 AM

> I'm not saying a double wouldn't work or that it wouldn't be the better choice in your case. I'm saying it isn't the number that's usually limiting you, it's usually the numbers and what you're doing with the numbers. It's a subtle but significant difference of focus.

Yup, which is why the other discussion is pointless. The OP said what he was doing.

The OP is computing a cosine.

For the operation the scale is (0,1).

Considering the range of the result, the margin of error is based on 1.0*FLT_EPSILON, which in turn puts his stated result as within the error tolerance of 0.

### #14 taby (Members) - Reputation: 285

Posted 21 May 2012 - 11:29 AM

> it's likely because the person who coded the calculator put in a special conditional statement that says something similar to "if(answer < epsilon && answer > -epsilon) { answer = 0; }", where epsilon is some tiny number like 1e-7.
>
> It turns out the language has a constant for the actual epsilon that should be used for that. Proper use of FLT_EPSILON would apply here.
>
> (In this case, the answer is roughly three times smaller than FLT_EPSILON, i.e. comfortably within rounding error.)

There's always std::numeric_limits<float>::epsilon(), which would technically be the proper, language-level way to get it, but whatever you say.

### #15 taby (Members) - Reputation: 285

Posted 21 May 2012 - 11:32 AM

> Calculators generally work to a much higher precision than what they display. For example, the Windows calculator (at least in recent versions of Windows) has about 40 significant digits of precision (calculate sqrt(4) - 2 to see this). The answer is then rounded to fit the number of digits on the display.
>
> Note that FLT_EPSILON is the difference between 1.0f and the next representable float (roughly, the smallest x for which 1.0f + x != 1.0f), so the epsilon value you want to use depends on the magnitude of the expected answer. In the case of sin and cos, whose results lie in [-1, 1], it's probably the right epsilon to use, but in other cases it won't be.

Thanks for that.

Edited by taby, 21 May 2012 - 11:34 AM.

### #16 taby (Members) - Reputation: 285

Posted 21 May 2012 - 11:39 AM

> What Every Computer Scientist Should Know About Floating-Point Arithmetic
>
> -4.37114e-008 = -.0000000437114 (which is pretty dang close to zero)

Add this to that list (it makes the ramifications of differences in order of magnitude a bit more transparent):
http://www.drdobbs.com/cpp/184403224

Edited by taby, 21 May 2012 - 11:54 AM.

### #17 Álvaro (Members) - Reputation: 5845

Posted 21 May 2012 - 11:57 AM

> I'm not saying a double wouldn't work or that it wouldn't be the better choice in your case. I'm saying it isn't the number that's usually limiting you, it's usually the numbers and what you're doing with the numbers. It's a subtle but significant difference of focus.
>
> Yup, which is why the other discussion is pointless. The OP said what he was doing.
>
> The OP is computing a cosine.
>
> For the operation the scale is (0,1).
>
> Considering the range of the result, the margin of error is based on 1.0*FLT_EPSILON, which in turn puts his stated result as within the error tolerance of 0.

Not to be nit-picky or anything, but it's not computing the cosine that introduces an error here: The error in this situation comes from not being able to represent pi/2 precisely as a float. We are computing the cosine of pi/2+epsilon, which should be very close to -epsilon.

The smallest value of epsilon for which pi/2+epsilon is representable is .00000004371139000186... , so the number the OP was getting is very well explained by my story, and the error introduced by the computation of the cosine is a second-order effect.

Edited by alvaro, 21 May 2012 - 11:58 AM.

### #18 taby (Members) - Reputation: 285

Posted 21 May 2012 - 12:06 PM

> Not to be nit-picky or anything, but it's not computing the cosine that introduces an error here: The error in this situation comes from not being able to represent pi/2 precisely as a float. We are computing the cosine of pi/2+epsilon, which should be very close to -epsilon.
>
> The smallest value of epsilon for which pi/2+epsilon is representable is .00000004371139000186... , so the number the OP was getting is very well explained by my story, and the error introduced by the computation of the cosine is a second-order effect.

I don't think you're being too nitpicky. Both sides of the coin apply here, but the error does technically spring from the less-than-perfect representation of pi. I should have mentioned `float pi_half = acos(0.0f); cout << cos(pi_half) << endl;` and `epsilon()` alongside the mention of irrational numbers to explain what I meant a little better; that's how I indirectly double-checked the OP's definition of pi.

I also mentioned digital computers to contrast against analog computers, but again, I didn't explain very well what I meant.

Edited by taby, 21 May 2012 - 12:30 PM.

### #19 web383 (Members) - Reputation: 553

Posted 21 May 2012 - 03:01 PM

Some very good information is explained on this blog. Actually most (if not all) of his recent blog posts refer to floating point calculations and the mystery behind them.

Scroll down to the 'Catastrophic cancellation, hiding in plain sight' section.
http://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/

Alvaro basically hit it spot on. i.e. You're not calculating the sin of PI... you're calculating the sin of (float)pi, or (double)pi. PI, as a whole, can't be represented as a float or even a double. So there is error in the calculations.

### #20 taby (Members) - Reputation: 285

Posted 21 May 2012 - 04:46 PM

> Some very good information is explained on this blog. Actually most (if not all) of his recent blog posts refer to floating point calculations and the mystery behind them.
>
> Scroll down to the 'Catastrophic cancellation, hiding in plain sight' section.
> http://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
>
> Alvaro basically hit it spot on. i.e. You're not calculating the sin of PI... you're calculating the sin of (float)pi, or (double)pi. PI, as a whole, can't be represented as a float or even a double. So there is error in the calculations.

Yeah... because it's irrational... (BTW, that's not an annoyed eyeroll, that's a "look up" eyeroll)

Edited by taby, 21 May 2012 - 04:48 PM.
