indigox3

numerical precision


Does anyone know how many significant digits the double data type has in C++? I'm using the VS.NET 2003 compiler on a WinXP machine with a Xeon CPU, and I'd like to know how variable values will differ between compilers, OSes, and hardware platforms (MIPS, Intel, AMD). Thanks
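A quick way to check this on any particular machine is to ask the compiler itself. Here is a minimal sketch, assuming a standards-conforming <limits> header; the printed values are whatever your platform actually provides:

#include <iostream>
#include <limits>

int main()
{
    // Decimal digits guaranteed to survive a round trip through a double.
    std::cout << "decimal digits: " << std::numeric_limits<double>::digits10 << '\n';
    // Bits of mantissa precision, counting the implicit leading bit.
    std::cout << "mantissa bits:  " << std::numeric_limits<double>::digits << '\n';
    return 0;
}

On an IEEE 754 double this prints 15 and 53.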

A "double" in C++ is a double-precision floating point type, stored on x86 Windows machines in 64 bits. I would assume (but I may not be completely correct) that this gives you 62 binary significant figures. In decimal? Harder to say...

Quote:
Original post by TDragon
A "double" in C++ is a double-precision floating point type, stored on x86 Windows machines in 64 bits. I would assume (but I may not be completely correct) that this gives you 62 binary significant figures. In decimal? Harder to say...

No. There's a sign bit and an 11-bit exponent, so you only have 52 bits to work with for the actual value. (With the implicit leading 1 bit, that works out to 53 bits of precision, or roughly 15-16 decimal digits.) I think single precision has an 8-bit exponent, which leaves a 23-bit mantissa.

What Every Computer Scientist Should Know About Floating-Point Arithmetic
Wikipedia

CM
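A minimal sketch of pulling those three fields out of a double, assuming an IEEE 754 layout and that double and the integer type are both 8 bytes (it uses C++11's <cstdint> for an exact-width integer; the value -6.25 is an arbitrary example):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    double d = -6.25;

    // Copy the raw bit pattern into an integer of the same size.
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);

    std::uint64_t sign     = bits >> 63;                 // 1 bit
    std::uint64_t exponent = (bits >> 52) & 0x7FF;       // 11 bits, biased by 1023
    std::uint64_t mantissa = bits & 0xFFFFFFFFFFFFFull;  // 52 stored bits

    std::printf("sign = %llu, exponent = %lld, mantissa = 0x%013llx\n",
                (unsigned long long)sign,
                (long long)exponent - 1023,
                (unsigned long long)mantissa);
    return 0;
}

For -6.25 this prints sign = 1, exponent = 2, mantissa = 0x9000000000000.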

OK, thanks.

Is knowing the size in bytes of the data type enough to tell if two machines support the same amount of precision?

i.e. say sizeof( double ) = 8 on Machine A and sizeof( double ) = 8 on Machine B. Can we assume they support the same range of numbers? (I am guessing they all follow IEEE standards?)

Quote:
Original post by indigox3
OK, thanks.

Is knowing the size in bytes of the data type enough to tell if two machines support the same amount of precision?

i.e. say sizeof( double ) = 8 on Machine A and sizeof( double ) = 8 on Machine B. Can we assume they support the same range of numbers? (I am guessing they all follow IEEE standards?)

If they're following the IEEE 754 standard, then yes. It explicitly defines 32- and 64-bit sizes for single and double precision. If it is some weird size, then I'm not entirely sure [there are two extended formats for odd sizes, but I don't know how they're defined].

And naturally, you can't necessarily assume that just because it's a 32-bit float, it's an IEEE 754 single-precision float. There are other standards, although they're not as widely used.

CM
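One way to test that in code, sketched under the assumption of a conforming standard library: numeric_limits reports whether the implementation claims IEC 559 conformance (IEC 559 is the ISO name for IEEE 754):

#include <iostream>
#include <limits>

int main()
{
    // True only if the implementation claims full IEC 559 / IEEE 754
    // conformance for the type -- matching the size alone is not enough.
    std::cout << std::boolalpha
              << "float  is IEEE 754: "
              << std::numeric_limits<float>::is_iec559 << '\n'
              << "double is IEEE 754: "
              << std::numeric_limits<double>::is_iec559 << '\n';
    return 0;
}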

All "modern" architectures use IEEE 754 for the representation of floating point
numbers. They may of course be in little endian byte order (x86, IA-64, Opteron),
or in big endian byte order (Power PC, IA-64).

So, there is usually nothing to worry about weird formats here ...
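For what it's worth, here is a small sketch that detects which byte order doubles are stored in on the machine at hand; it assumes an IEEE 754 double, where 1.0 has the bit pattern 0x3FF0000000000000:

#include <cstdio>
#include <cstring>

int main()
{
    double d = 1.0;  // IEEE 754 bit pattern: 0x3FF0000000000000

    unsigned char bytes[sizeof d];
    std::memcpy(bytes, &d, sizeof d);

    // The high-order byte of 1.0 is 0x3F; where it lands tells us the order.
    if (bytes[0] == 0x3F)
        std::printf("big-endian doubles\n");
    else if (bytes[sizeof d - 1] == 0x3F)
        std::printf("little-endian doubles\n");
    else
        std::printf("mixed or unrecognized byte order\n");
    return 0;
}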

Quote:
Original post by starmole
All "modern" architectures use IEEE 754 for the representation of floating point
numbers. They may of course be in little endian byte order (x86, IA-64, Opteron),
or in big endian byte order (Power PC, IA-64).

So, there is usually nothing to worry about weird formats here ...

That's true if you restrict yourself to PCs. If you look at the whole gamut of hardware platforms, you see other formats springing up.

CM
