I suggest you read up on how IEEE 754 floating-point numbers work. I'll explain things simply (assuming a 32-bit float and a 32-bit int):
int has 32 bits and can represent any integer between -2147483648 and +2147483647
float has 32 bits, but they're used differently. 1 bit is for the sign (positive or negative), 8 bits are for the exponent (so if you write the number in binary scientific form, number × 2^exponent, these 8 bits store the exponent), and 23 bits are for the mantissa (the significant digits of the number).
In short, you've effectively got 24 bits of precision to work with when representing a number in floating point: the 23 stored mantissa bits plus an implicit leading 1 that isn't stored. The other bits have nothing to do with precision (they just determine positive/negative and magnitude, but not precision).

int, on the other hand, has 31 bits of precision (plus a sign bit) to work with. You can't stuff 31 bits of precision into 24 bits without expecting to lose some. Hence, you get a warning.
A 32-bit float only gives you 6 decimal digits of guaranteed precision. int, on the other hand, gives you 9 decimal digits of guaranteed precision.
To test it out, try the following:
#include <iostream>

int main()
{
    std::cout.precision(50); // print out a ton of digits
    int i = 1234567890;
    float f = (float)i;
    std::cout << i << std::endl;
    std::cout << f << std::endl;
}
For me it prints out:

1234567890
1234567936
Notice how the first 7 decimal digits of the float are correct, and after that they're bogus. The warning is there to let you know you might lose some data. If you explicitly cast (like I did), it eliminates the warning because it tells the compiler "Look, I know they're not the same data type and that I might lose some data, but this is what I really want to do; it's not a mistake."
Edited by Cornstalks, 26 March 2013 - 11:14 AM.