0.0f, 0.0, 0

Started by brulle
12 comments, last by brulle 18 years, 7 months ago
0.0f or 0.0 or 0? What difference does it make when passing values to functions or initializing variables? Not just for 0 but for all integer values. Aren't they cast automagically?
They are converted automatically by the compiler; implicit conversion between arithmetic types is standard C++, not something specific to VC++.
Quote:Original post by brulle

0.0f or 0.0 or 0? What difference does it make when passing values to functions or initializing variables? Not just for 0 but for all integer values. Aren't they cast automagically?


Yes, but you may get warnings about loss of precision if you don't use the "correct" constant types.
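For example, with VC++ (the exact warning number is from memory, so treat it as illustrative):

float f = 0.1;   // warning C4305: truncation from 'double' to 'float'
float g = 0.1f;  // no warning: the literal is already a float
double d = 0.1;  // no warning: the literal is already a double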
Quote:Yes, but you may get warnings about loss of precision if you don't use the "correct" constant types.


Really? What could I possibly be deprived of? Could the compiler actually assign a float or double a value that is not exactly equal to 0.0?

Is (int)0 == (float/double)0.0?
Quote:Original post by brulle
Aren't they cast automagically?


Not when using function templates. You will also run into trouble when using variadic functions like printf.
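A minimal sketch of the printf case: %f expects a double, but a bare 0 pushes an int onto the argument list, and the compiler can't convert it because a variadic function carries no type information for its extra arguments.

printf("%f\n", 0);    // undefined behavior: %f reads a double, but an int was passed
printf("%f\n", 0.0);  // correct: the literal is already a double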
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian W. Kernighan
Quote:Original post by brulle
Quote:Yes, but you may get warnings about loss of precision if you don't use the "correct" constant types.


Really? What could I possibly be deprived of? Could the compiler actually assign a float or double a value that is not exactly equal to 0.0?

Is (int)0 == (float/double)0.0?


To the compiler, they are just parameters. It doesn't care that you are passing 0, rather than some other number. It only cares that you are passing an int to a function expecting a float/double, etc.

Quote:
To the compiler, they are just parameters. It doesn't care that you are passing 0, rather than some other number. It only cares that you are passing an int to a function expecting a float/double, etc.


Yes, sure, compilers aren't exactly smart, but could it ever cause a problem assigning an int to a float/double? I see people writing 0. or 0.0f etc. and am actually wondering if this is just for the sake of clarity?
Quote:Original post by brulle
Quote:Yes, but you may get warnings about loss of precision if you don't use the "correct" constant types.


Really? What could I possibly be deprived of? Could the compiler actually assign a float or double a value that is not exactly equal to 0.0?

Is (int)0 == (float/double)0.0?


The compiler recognizes that a floating-point value is being compared with an int and converts the int to floating point before comparing (the usual arithmetic conversions always go toward the wider type, not the other way around), so (int)0 == (float/double)0.0 is true.
The same goes for functions.

int foo(int a)
{
return a;
}

foo(0.123f) would return 0 and trigger a warning about the implicit conversion from float to int.

If you put an 'f' after a floating-point literal, the compiler treats it as a float directly instead of as a double that then has to be converted to float (I read that once somewhere).
For a constant initializer that conversion happens at compile time anyway, so float b = 0.0f, float b = 0.0 and float b = 0 should all generate the same code; the suffix mainly saves you the truncation warning.
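You can check the literal types yourself; a minimal sketch:

#include <cstdio>

int main()
{
    // 0.0 is a double literal, 0.0f is a float literal
    printf("%u %u\n", (unsigned)sizeof(0.0), (unsigned)sizeof(0.0f));  // typically prints "8 4"
    return 0;
}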
"and am actually wondering if this is just for the sake of clarity?"

Clarity is THE key to good programming.
Quote:Original post by brulle
Yes, sure, compilers aren't exactly smart, but could it ever cause a problem assigning an int to a float/double?

Yes, when dealing with variadic functions, overloaded functions, and templates. The former is an issue because the bit representations are different, but the compiler won't convert automatically because it doesn't know what type to convert to. The latter two are problems because the compiler executes code that you don't want it to... namely, the int version of whatever function you're dealing with rather than the float/double version.
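A minimal sketch of the overload and template cases (f and half are hypothetical functions):

#include <cstdio>

void f(int x)   { printf("int: %d\n", x); }
void f(float x) { printf("float: %f\n", x); }

template <typename T>
T half(T x) { return x / 2; }

int main()
{
    f(0);       // calls f(int)
    f(0.0f);    // calls f(float)
    // f(0.0);  // error: ambiguous - double converts equally well to int or float

    half(1);    // T deduced as int: integer division, returns 0
    half(1.0f); // T deduced as float: returns 0.5f
    return 0;
}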

CM

This topic is closed to new replies.
