I was just playing with Visual Studio 2013 RTM, compiling some old demos to see whether anything had changed in the standard library (like the "max()" and "min()" functions moving to <algorithm>...).
And I got a little surprise with a sample that writes and reads binary files with the standard library: using the double values 111.111, 222.222, 333.333 and 444.444, when I read the data back I get completely different values (like -6.27744e+066 -6.27744e+066 -6.27744e+066 -6.27744e+066). It is also strange that the garbage values are identical in debug mode in both the 32-bit and 64-bit configurations, but they vary in release mode in both configurations. Changing the floating-point model (/fp:precise, /fp:strict and /fp:fast) doesn't solve the problem.
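If I'm not mistaken, -6.27744e+066 is exactly the double whose eight bytes are all 0xCD, which is the fill pattern the MSVC debug heap uses for freshly allocated, uninitialized memory; that would mean the read never stores anything into the buffer, and it would also explain why the values are stable in debug but random in release. Here is a little snippet to check that (the 0xCD interpretation is just my reading of it):

#include <cstring>
#include <iostream>

int main()
{
    unsigned char bytes[ sizeof( double ) ];
    std::memset( bytes, 0xCD, sizeof( bytes ) );  // MSVC debug-heap fill for uninitialized allocations
    double d;
    std::memcpy( &d, bytes, sizeof( d ) );        // reinterpret the fill pattern as a double
    std::cout << d << std::endl;                  // prints -6.27744e+066 with the VS2013 runtime
    return 0;
}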
Note also that the sample works fine if I change the doubles to other values... Maybe that is due to some "nice" behaviour of the IEEE standard? I don't remember anything like that in the floating-point model; to me it just looks like a bug (in the code or the compiler... dunno XD).
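Since the failure seems to depend on the exact values, one can at least dump their byte patterns and look for bytes that a text-mode layer would mangle (0x0A, 0x0D, 0x1A); the stream is opened in binary mode, so in theory none of them should matter, but it is a cheap check. Just a diagnostic sketch:

#include <cstdio>
#include <cstring>

int main()
{
    const double values[ 4 ] = { 111.111, 222.222, 333.333, 444.444 };
    for( double v : values )
    {
        unsigned char bytes[ sizeof( double ) ];
        std::memcpy( bytes, &v, sizeof( v ) );  // raw object representation
        std::printf( "%g:", v );
        for( unsigned char b : bytes )
        {
            std::printf( " %02X", b );          // little-endian byte order on x86
        }
        std::printf( "\n" );
    }
    return 0;
}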
Here is the code:
#include <fstream>
#include <iostream>

int main()
{
    double a[ 4 ] = { 111.111, 222.222, 333.333, 444.444 };
    double* i = new double[ 4 ];

    // Write the raw bytes of the array in binary mode.
    std::fstream file( "file", std::ios::out | std::ios::binary );
    file.write( reinterpret_cast< char* >( a ), sizeof( a ) );
    file.close();

    // Reuse the same stream object to read the bytes back.
    file.open( "file", std::ios::in | std::ios::binary );
    file.read( reinterpret_cast< char* >( i ), sizeof( double ) * 4 );
    file.close();

    for( int j = 0; j < 4; ++j )
    {
        std::cout << i[ j ] << " ";
    }
    std::cout << std::endl;

    delete[] i;  // was leaked in my original version
    return 0;
}
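For completeness, this is the cross-check I would run next: the same round trip with separate std::ofstream/std::ifstream objects and explicit state checks, to tell whether it is the write or the read that fails, and whether reusing a single fstream object matters at all. Just a diagnostic sketch, nothing VS-specific in it:

#include <fstream>
#include <iostream>

int main()
{
    const double a[ 4 ] = { 111.111, 222.222, 333.333, 444.444 };
    double b[ 4 ] = {};

    {
        std::ofstream out( "file", std::ios::binary );
        out.write( reinterpret_cast< const char* >( a ), sizeof( a ) );
        if( !out )
        {
            std::cerr << "write failed" << std::endl;
            return 1;
        }
    }  // out is flushed and closed here

    std::ifstream in( "file", std::ios::binary );
    in.read( reinterpret_cast< char* >( b ), sizeof( b ) );
    if( in.gcount() != static_cast< std::streamsize >( sizeof( b ) ) )
    {
        std::cerr << "read failed, got " << in.gcount() << " bytes" << std::endl;
        return 1;
    }

    for( int j = 0; j < 4; ++j )
    {
        std::cout << b[ j ] << " ";
    }
    std::cout << std::endl;
    return 0;
}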