numerical precision

Started by indigox3; 7 comments, last by Conner McCloud 18 years, 8 months ago
Does anyone know how many significant digits the double data type has in C++? I'm using the VS.NET 2003 compiler on a WinXP machine with a Xeon CPU, and I'd like to know how variable values will differ between compilers, OSes, and hardware platforms (MIPS, Intel, AMD). Thanks
A "double" in C++ is a double-precision floating point type, stored on x86 Windows machines in 64 bits. I would assume (but I may not be completely correct) that this gives you 62 binary significant figures. In decimal? Harder to say...
JohnE, Chief Architect and Senior Programmer, Twilight Dragon Media | GCC/MinGW | Code::Blocks IDE | wxWidgets Cross-Platform Native UI Framework
Quote:Original post by TDragon
A "double" in C++ is a double-precision floating point type, stored on x86 Windows machines in 64 bits. I would assume (but I may not be completely correct) that this gives you 62 binary significant figures. In decimal? Harder to say...

No. There's an 11-bit exponent, so you only have 52 explicit bits to work with for the actual value (53 significant bits if you count the implicit leading 1). I think single precision has an 8-bit exponent, which leaves a 23-bit mantissa.

What Every Computer Scientist Should Know About Floating-Point Arithmetic
Wikipedia

CM
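A quick way to see that layout for yourself, assuming an IEEE 754 64-bit double and a compiler that provides <cstdint>, is to copy the raw bits into an integer and mask out the three fields. A minimal sketch:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    double d = 3.14159;

    // Copy the raw 64 bits of the double into an integer (avoids aliasing problems).
    std::uint64_t bits = 0;
    std::memcpy(&bits, &d, sizeof bits);

    std::uint64_t sign     = bits >> 63;                    // 1 sign bit
    std::uint64_t exponent = (bits >> 52) & 0x7FFu;         // 11 exponent bits
    std::uint64_t mantissa = bits & 0xFFFFFFFFFFFFFull;     // 52 explicit mantissa bits

    std::printf("sign=%llu exponent=%llu mantissa=%013llx\n",
                (unsigned long long)sign,
                (unsigned long long)exponent,
                (unsigned long long)mantissa);
    return 0;
}

The masks match the breakdown above: 1 + 11 + 52 = 64 bits, with the 53rd significant bit left implicit.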
From this page on MSDN, approximately 15 decimal digits for a double, and approximately 6 decimal digits for a float.
"We should have a great fewer disputes in the world if words were taken for what they are, the signs of our ideas only, and not for things themselves." - John Locke
11-bit exponent...ouch
JohnE, Chief Architect and Senior Programmer, Twilight Dragon Media | GCC/MinGW | Code::Blocks IDE | wxWidgets Cross-Platform Native UI Framework
Ok thanks,

Is knowing the size in bytes of the data type enough to tell whether two machines support the same amount of precision?

i.e. say sizeof( double ) = 8 on machine A and sizeof( double ) = 8 on machine B. Can we assume they support the same range of numbers? (I am guessing they all follow IEEE standards?)

Quote:Original post by indigox3
Ok thanks,

Is knowing the size in bytes of the data type enough to tell whether two machines support the same amount of precision?

i.e. say sizeof( double ) = 8 on machine A and sizeof( double ) = 8 on machine B. Can we assume they support the same range of numbers? (I am guessing they all follow IEEE standards?)

If they're following the IEEE 754 standard, then yes. It explicitly defines 32- and 64-bit sizes for single and double precision. If it is some weird size, then I'm not entirely sure [there are two extended formats for odd sizes, but I don't know how they're defined].

And naturally, you can't necessarily assume that just because it's a 32-bit float, it's an IEEE 754 single-precision float. There are other standards, although they're not as widely used.

CM
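For the paranoid, the compiler will actually tell you whether its float and double follow the standard: std::numeric_limits<T>::is_iec559 is true exactly when T conforms to IEC 559, the ISO name for IEEE 754. A small sketch:

#include <iostream>
#include <limits>

int main()
{
    std::cout << std::boolalpha;
    std::cout << "sizeof(double) = " << sizeof(double) << " bytes\n";
    std::cout << "double is IEEE 754: " << std::numeric_limits<double>::is_iec559 << '\n';
    std::cout << "float  is IEEE 754: " << std::numeric_limits<float>::is_iec559  << '\n';
    return 0;
}

Checking this on both machines is a more direct test than comparing sizeof alone.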
All "modern" architectures use IEEE 754 for the representation of floating point
numbers. They may of course be in little endian byte order (x86, IA-64, Opteron),
or in big endian byte order (Power PC, IA-64).

So there is usually nothing to worry about regarding weird formats here ...
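If the byte order ever matters (say, when writing doubles straight to a file or a socket), dumping the raw bytes of a known value makes it obvious which one you're on; a quick sketch assuming an 8-byte double:

#include <cstdio>
#include <cstring>

int main()
{
    double d = 1.0;   // IEEE 754 encoding is 0x3FF0000000000000
    unsigned char bytes[sizeof d];
    std::memcpy(bytes, &d, sizeof d);

    // Little-endian machines print "00 00 00 00 00 00 f0 3f",
    // big-endian machines print "3f f0 00 00 00 00 00 00".
    for (unsigned i = 0; i < sizeof d; ++i)
        std::printf("%02x ", (unsigned)bytes[i]);
    std::printf("\n");
    return 0;
}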

Quote:Original post by starmole
All "modern" architectures use IEEE 754 for the representation of floating point
numbers. They may of course be in little endian byte order (x86, IA-64, Opteron),
or in big endian byte order (Power PC, IA-64).

So, there is usually nothing to worry about weird formats here ...

That's true if you restrict yourself to PCs. If you look at the whole gamut of hardware platforms, you see other formats springing up.

CM
