
Bizarre Floating Point Error


Recommended Posts

I appear to be getting some seriously strange results out of some straightforward double-precision math in my program. Consider the following code:

    double val1 = 12345.12345;
    double val2 = 12345.12345;
    double result = val1 * val2;

My development computer produces the correct answer, 152402072.9957, but my testing computer produces the wrong answer, 152402080.0000. Further testing shows that the test computer is doing some strange truncation or rounding whenever a result needs the full precision of a double. It only seems to happen in this one program; I wrote a small console app and the same math comes out correctly. I have never seen this behavior before. Does anyone have any idea what could be causing this? The development computer is a Dell P4 2.3 GHz with Win 2000. The testing machine is a Dell P4 3 GHz with Win 2000. Both machines have Dev Studio 2003.
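For completeness, here is the snippet as a minimal, self-contained console program (the printf formatting is just for display, not part of the original code):

    #include <cstdio>

    int main()
    {
        double val1 = 12345.12345;
        double val2 = 12345.12345;
        double result = val1 * val2;

        // With the FPU at its normal double precision this prints 152402072.9957
        printf("%.4f\n", result);
        return 0;
    }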

Are you running the same binary on both machines? Different compiler options can give you slightly different results (and those small differences magnify as you keep computing with them).

7-place accuracy is about as good as you're going to get with single-precision floating-point math anyway. If you're not already familiar with the problems inherent in floating point, then Google for "What Every Computer Scientist Should Know About Floating-Point Arithmetic".

You should figure out exactly where this difference is coming from for your own peace of mind, but also realize that neither answer is necessarily wrong.
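To see where 152402080.0000 comes from, try the same product in single and in double precision. A minimal sketch (the commented outputs assume the default round-to-nearest mode):

    #include <cstdio>

    int main()
    {
        float  f = 12345.12345f * 12345.12345f;  // ~7 significant digits
        double d = 12345.12345  * 12345.12345;   // ~15-16 significant digits

        printf("float:  %.4f\n", f);  // 152402080.0000 -- the "wrong" answer
        printf("double: %.4f\n", d);  // 152402072.9957 -- the expected answer
        return 0;
    }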

Yup, running the same binary on both machines and I get different results, so I'm pretty sure this has nothing to do with compiler options. I'm using doubles, not floats, and I'm familiar with the precision limits. I'm definitely getting wrong answers from some basic arithmetic operations.

It looks like the internal floating-point precision is different on your testing computer.

Here's what I get on my PC by varying the FPU precision:

24-bit single precision:
152402080.000000

53-bit double precision:
152402072.995740

64-bit double extended precision:
152402072.995740

It looks like you need to set the internal FPU precision to either 53 or 64 bits to get what you want. You can do this with the FLDCW instruction, as in the snippets below.


// 53-bit (double) precision

unsigned short precision = 0;

__asm FSTCW precision;   // store the current FPU control word

precision &= 0xFCFF;     // clear the precision-control field (bits 8-9)
precision |= 0x0200;     // PC = 10b: 53-bit mantissa

__asm FLDCW precision;   // load the modified control word


/////////////////////////////////////////////
// 64-bit (double extended) precision

unsigned short precision = 0;

__asm FSTCW precision;   // store the current FPU control word

precision &= 0xFCFF;     // clear the precision-control field (bits 8-9)
precision |= 0x0300;     // PC = 11b: 64-bit mantissa

__asm FLDCW precision;   // load the modified control word




You should probably restore the original precision when you are done.
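If you'd rather avoid inline assembly, the MSVC CRT exposes the same control-word manipulation through _controlfp (declared in <float.h>); a minimal save-and-restore sketch:

    #include <float.h>
    #include <cstdio>

    int main()
    {
        // Save the current precision-control bits, force 53-bit doubles,
        // do the sensitive math, then restore whatever was set before.
        unsigned int oldpc = _controlfp(0, 0) & _MCW_PC;  // mask 0 = query only
        _controlfp(_PC_53, _MCW_PC);                      // 53-bit mantissa

        printf("%.4f\n", 12345.12345 * 12345.12345);      // 152402072.9957

        _controlfp(oldpc, _MCW_PC);                       // restore original setting
        return 0;
    }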
