# numerical precision


## Recommended Posts

Does anyone know how many significant digits the double data type has in C++? I'm using the VS.NET 2003 compiler on a Windows XP machine with a Xeon CPU, and I'd like to know how variable values will differ between compilers, OSes, and hardware platforms (MIPS, Intel, AMD). Thanks

##### Share on other sites
A "double" in C++ is a double-precision floating point type, stored on x86 Windows machines in 64 bits. I would assume (but I may not be completely correct) that this gives you 62 binary significant figures. In decimal? Harder to say...

##### Share on other sites
Quote:
 Original post by TDragon
 A "double" in C++ is a double-precision floating point type, stored on x86 Windows machines in 64 bits. I would assume (but I may not be completely correct) that this gives you 62 binary significant figures. In decimal? Harder to say...

No. There's an 11-bit exponent and a sign bit, so you only have 52 bits of mantissa to work with for the actual value (53 significant bits, counting the implicit leading 1). Single-precision has an 8-bit exponent, which leaves a 23-bit mantissa.

What Every Computer Scientist Should Know About Floating-Point Arithmetic
Wikipedia

CM

##### Share on other sites
From this page on MSDN, approximately 15 decimal digits for a double, and approximately 6 decimal digits for a float.

##### Share on other sites
11-bit exponent...ouch

##### Share on other sites
Ok thanks,

Is knowing the size in bytes of the data type enough to tell if two machines support the same amount of precision?

i.e., say sizeof( double ) == 8 on machine A and sizeof( double ) == 8 on machine B. Can we assume they support the same range of numbers? (I'm guessing they all follow IEEE standards?)

##### Share on other sites
Quote:
 Original post by indigox3
 Ok thanks,
 Is knowing the size in bytes of the data type enough to tell if two machines support the same amount of precision?
 ie say sizeof( double ) = 8 on Machine A and sizeof( double ) = 8 on machine B. Can we assume they support the same range of numbers? (I am guessing they all follow IEEE standards?)

If they're following the IEEE 754 standard, then yes. It explicitly defines 32- and 64-bit formats for single and double precision. If it's some weird size, then I'm not entirely sure [there are two extended formats for odd sizes, but I don't know how they're defined].

And naturally, you can't necessarily assume that just because it's a 32-bit float, it's an IEEE 754 single-precision float. There are other standards, although they're not as widely used.

CM

##### Share on other sites
All "modern" architectures use IEEE 754 for the representation of floating point numbers. They may of course be in little endian byte order (x86, IA-64, Opteron), or in big endian byte order (Power PC, IA-64).

So, there is usually nothing to worry about weird formats here ...

##### Share on other sites
Quote:
 Original post by starmole
 All "modern" architectures use IEEE 754 for the representation of floating point numbers. They may of course be in little endian byte order (x86, IA-64, Opteron), or in big endian byte order (Power PC, IA-64). So, there is usually nothing to worry about weird formats here ...

If you restrict yourself to PCs. If you look at the gamut of hardware platforms, you see other formats springing up.

CM
