Float precision errors - CPU dependent?

Hi, just to be sure: do (single-precision) floating-point precision errors differ between CPUs, e.g. Intel vs. AMD, 32-bit vs. 64-bit, or even different models from the same vendor? If so, would using the strict or precise floating-point model prevent this? Thanks.

They are exactly the same (except when there is a serious bug in the CPU, such as the old Pentium FDIV bug).

They are the same on all processors that respect the IEEE 754 representation of floating-point numbers. If a given processor doesn't, the errors are unspecified and possibly random (I remember an Intel CPU that produced a different rounding error depending on whether your thread was rescheduled during the computation).
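
To make "same representation" concrete, here is a minimal C++ sketch (assuming an IEEE 754 single-precision float, which is what x86 CPUs provide); the raw bit pattern of a given value is fixed by the format, so the representation itself is not where cross-CPU differences come from:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float f = 0.1f;                       // not exactly representable in binary
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // portable way to view the raw bits
    // On any IEEE 754 single-precision implementation this prints 3dcccccd.
    std::printf("0.1f = 0x%08x\n", static_cast<unsigned>(bits));
    return 0;
}
```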

Doesn't C++ require float/double to follow IEEE 754, though? So even if the CPU didn't respect it, the compiler would (in theory, anyway) have to emulate it?

Also, doesn't the IEEE standard leave a few details up to the implementer? IIRC, there are a few areas where it only specifies a minimum precision or something like that, so an arithmetic operation might yield slightly different (but equally valid) results on different implementations?
(It's been a while since I read through those exact details, so I could be wrong.)

In any case, I probably wouldn't *rely* on two CPUs producing exactly identical results in floating-point math.
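
For what it's worth, the C++ standard itself does not mandate IEEE 754; the floating-point model is implementation-defined, and std::numeric_limits<T>::is_iec559 reports whether a particular implementation's types conform. A small sketch to check it:

```cpp
#include <iostream>
#include <limits>

int main() {
    // C++ leaves the floating-point model implementation-defined;
    // is_iec559 is true when the type follows IEC 559 / IEEE 754.
    std::cout << std::boolalpha
              << "float  is IEEE 754: " << std::numeric_limits<float>::is_iec559  << '\n'
              << "double is IEEE 754: " << std::numeric_limits<double>::is_iec559 << '\n';
    return 0;
}
```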

Since all AMD/Intel CPUs use IEEE 754, the representation of a given number should be the same. But the result of a series of operations may change due to the order in which they are executed, and this order may change from chip to chip (and, if I'm not mistaken, even between two runs of the same program on the same machine), though the difference should be fairly limited.
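
As a rough illustration of the order-of-operations point: single-precision addition is not associative, so reassociating a sum changes the result even though every individual operation is correctly rounded.

```cpp
#include <cstdio>

int main() {
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;

    // Mathematically both expressions equal 1, but float addition is not
    // associative, so the grouping chosen at compile/run time matters.
    float left  = (a + b) + c;   // (1e8 + -1e8) + 1  ->  1.0f
    float right = a + (b + c);   // 1e8 + (-1e8 + 1)  ->  0.0f (the 1 is absorbed)
    std::printf("left = %g, right = %g\n", left, right);
    return 0;
}
```

Whether a compiler is allowed to reassociate like this is essentially what the "fast" versus "precise"/"strict" floating-point modes control.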

Quote:
They are the same for all processors respecting the IEEE 754 representation of floating-point numbers.

Unfortunately it is not that simple.

Quote:
Also, doesn't the IEEE standard leave a few details up to the implementer?

Yep, a few, but enough to most definitely cause differing results between systems.

IEEE 754 prescribes exactly rounded results for a result's **destination** (correct rounding to the destination's precision). The trouble is that you don't necessarily have control over whether the CPU keeps intermediates in extended-width registers (on x86 it is possible to force rounding to 64-bit precision). Separately, I believe AMD at one point computed square roots to a higher precision (a wider result).
This is made worse by the fact that compilers differ in how and when they spill floating-point registers to memory, which truncates any extended precision. Compilers are also given some latitude when converting between decimal and binary (e.g. for constants). Finally, if the compiler decides to use goodies such as FMA (a.k.a. MADD) on an otherwise IEEE-conformant system, you get different results from that as well.
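
A sketch of the FMA point: std::fma rounds once, while a*b + c rounds the product and then the sum, so the two can differ in the last bits on fully conformant hardware; whether the compiler contracts a*b + c into a hardware FMA is governed by options such as GCC/Clang's -ffp-contract.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // C++17 hexadecimal float literals: 0x1p-27 is 2^-27.
    double a = 1.0 + 0x1p-27;
    double b = 1.0 - 0x1p-27;
    double c = -1.0;

    // a*b = 1 - 2^-54, which rounds to exactly 1.0 in double precision,
    // so the separate multiply-then-add loses the tiny term entirely.
    double separate = a * b + c;          // 0.0 if compiled without contraction
    double fused    = std::fma(a, b, c);  // -2^-54: fma rounds only once
    std::printf("separate = %g, fused = %g\n", separate, fused);
    return 0;
}
```

Compiled with contraction disabled (e.g. -ffp-contract=off) the two results differ; with contraction enabled the first expression may itself become an FMA and match the second, which is exactly the kind of build-to-build divergence described above.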
