Int, float, double?

Started by
14 comments, last by Super Llama 15 years, 5 months ago
What is the difference between int, float, and double? Right now all I can tell is that int doesn't handle decimals (or does it?). I ran an experiment, adding .1 to a float and a double value and displaying the result. Both displayed different numbers: the float treated a 1.0 as a 0 while the double treated it as a 1 (as was expected), and when both approached 0 some very strange numbers were displayed (at 0, both contained the letter 'e'). Can someone help me clear up the confusion? Thanks.
You are correct regarding ints: an int (integer) is a whole number with no fractional part. 1, 6222, and -35226 are all ints.

Floats and doubles are both floating-point numbers (numbers with a decimal). They are called "floating" point because the location of the point (decimal) "floats". Doubles are double-precision, i.e. they use more bits to represent the number, and hence can represent a wider range of values precisely.

A float should not treat 1.0 as 0... something is wrong here.

I'm not yet a master on floating-point representation, but I'm sure you can find more details on it either online or in a good book.
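Here's a quick sketch of the kind of experiment you describe (assuming C++, since the thread doesn't say which language, and a typical IEEE-754 float/double); it shows the precision difference between the two:

#include <iomanip>
#include <iostream>

int main()
{
    float  f = 0.0f;
    double d = 0.0;

    // Add 0.1 ten times; mathematically the sum is exactly 1.0, but 0.1 has
    // no exact binary representation, so a little error accumulates.
    for (int i = 0; i < 10; ++i)
    {
        f += 0.1f;
        d += 0.1;
    }

    std::cout << std::setprecision(20);
    std::cout << "float : " << f << '\n';   // typically prints something like 1.0000001192092895508
    std::cout << "double: " << d << '\n';   // typically prints something like 0.99999999999999988898
}

The exact digits depend on your compiler and platform, but the double stays much closer to 1.0 because it carries far more bits of precision.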
Quote:Original post by Antonym
all I can tell is int doesn't handle decimals

Neither do float nor double. Those two handle binary fractions and can only approximate most decimal fractions.
Look here.
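To see what DevFred means, you can print the stored values with extra precision (again a C++ sketch, assuming an IEEE-754 implementation):

#include <iomanip>
#include <iostream>

int main()
{
    // 0.1 has no exact binary representation, so the compiler stores the
    // nearest value the type can represent.
    std::cout << std::setprecision(25);
    std::cout << 0.1f << '\n';  // nearest float:  roughly 0.1000000014901161
    std::cout << 0.1  << '\n';  // nearest double: roughly 0.1000000000000000055511151
}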
So float and double are basically the same, except the latter is used for greater precision? I think I made a mistake with the experiment; I ran it again and they both treated the numbers as expected. Still, when approaching 0 they again gave strange numbers, and at 0 both contained the letter 'e'...

So which one should I use for my decimal needs? float? I don't need much precision.
The 'e' you see is part of the floating-point notation. It tells us where to put the decimal point. As the number approaches 0 it gets much smaller, such as 0.000000000000001523. Writing all those 0s out is a pain, so the 'e' part tells us how far to shift the decimal point (the same value can be written as 1.523e-15). You can look this notation up.

I don't remember the reasoning, but I usually, as a standard, just use doubles (even though I rarely require the precision). I read somewhere that often (but depending on the implementation) doubles are optimal, though I forget why (might have to do with alignment).
The word you are looking for is scientific notation.
5.0e-3 is the same as .005 is the same as 5.0 x 10**-3
it is just shorthand.

Read and learn.
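If you're curious, here's how the same value looks in the different notations (a C++ sketch using the standard stream formatting flags):

#include <iostream>

int main()
{
    double tiny = 0.000000000000001523;

    std::cout << tiny << '\n';                     // small values default to scientific: 1.523e-15
    std::cout << std::scientific << tiny << '\n';  // force scientific notation: 1.523000e-15
    std::cout << std::fixed << tiny << '\n';       // force fixed notation: 0.000000 (only 6 digits shown)
    std::cout << std::fixed << 5.0e-3 << '\n';     // 5.0e-3 written out: 0.005000
}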
The main difference is the space they take up.

I'm not sure if:
int is 32 bits (4 bytes), with a range from -2147483648 to 2147483647,
or if:
int is 16 bits (2 bytes), with a range from -32768 to 32767.
(It actually depends on the compiler and platform; most modern compilers use 32 bits.)

Basically, the more bytes you have, the more values you can represent. A float is usually the same size as an int (4 bytes) and a double is bigger (8 bytes); their maximum values are actually far higher than an int's, but they get that range by trading away exactness, spreading their bit patterns across huge magnitudes and lots of decimal values in-between instead of representing every whole number exactly.
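A quick way to check the sizes and limits on your own compiler (a C++ sketch):

#include <climits>
#include <iostream>
#include <limits>

int main()
{
    std::cout << "int:    " << sizeof(int)    << " bytes, max " << INT_MAX                             << '\n';
    std::cout << "float:  " << sizeof(float)  << " bytes, max " << std::numeric_limits<float>::max()  << '\n';
    std::cout << "double: " << sizeof(double) << " bytes, max " << std::numeric_limits<double>::max() << '\n';
}

On a typical desktop platform this prints 4 bytes and a max of 2147483647 for int, 4 bytes and about 3.4e+38 for float, and 8 bytes and about 1.8e+308 for double.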
It's a sofa! It's a camel! No! It's Super Llama!
Quote:Original post by DevFred
Quote:Original post by Antonym
all I can tell is int doesn't handle decimals

Neither do float nor double. Those two handle binary fractions and can only approximate most decimal fractions.


I'm sorry, but I do not understand this, and I was hoping you could explain it a bit more. I was under the impression that every decimal fraction has exactly one corresponding binary fraction, and that the two are precisely equal - not approximations. I know there are some oddities, such as the fact that a fraction that terminates in one base may not terminate in another (is this what you are referring to?). I'm just curious as to what you mean by this.

Thanks,
-Brian

Edit: This'll teach me to post without reading links. I get it now, thanks.
My Blog

This topic is closed to new replies.
