Antonym

Int, float, double?


What is the difference between int, float, and double? Right now all I can tell is that int doesn't handle decimals (or does it?). I ran an experiment, adding .1 to a float and a double value and displaying the results, and the two displayed different numbers: the float treated a 1.0 as a 0 while the double treated it as a 1 (as expected), and when both approached 0 some very strange numbers were displayed (some of them, 0 in both cases, contained the letter 'e'). Can someone help me clear up the confusion? Thanks.

You are correct regarding ints: an int (integer) is a whole number with no fractional part. 1, 6222, and -35226 are all ints.

Floats and doubles are both floating-point numbers (numbers with a decimal). They are called "floating" point because the location of the point (decimal) "floats". Doubles are double-precision, i.e. they use more bits to represent the number, and hence can represent a wider range of values precisely.

A float should not treat 1.0 as 0... something is wrong here.

I'm not yet a master on floating-point representation, but I'm sure you can find more details on it either online or in a good book.

Quote:
Original post by Antonym
all I can tell is int doesn't handle decimals

Neither do float nor double. Those two handle binary fractions and can only approximate most decimal fractions.

So float and double are basically the same, except the latter is used for greater precision? I think I made a mistake with the experiment; I ran it again and they both treated the numbers as expected. Still, when approaching 0 they again gave strange numbers, and near 0 both contained the letter 'e'...

So which one should I use for my decimal needs? float? I don't need much precision.

The 'e' you see is part of scientific (exponential) notation. It tells you where to put the decimal point. As the number approaches 0 it gets very small, such as 0.000000000000001523. Writing all those zeros out is a pain, so the 'e' part records the power of ten instead (so the same value can be written as 1.523e-15). You can look this notation up.

I don't remember the exact reasoning, but as a rule I just use doubles (even though I rarely require the precision). I read somewhere that doubles are often optimal, depending on the implementation, though I forget why (it might have to do with alignment).

The main difference is the space they take up.

On most current desktop platforms, an int is 32 bits (4 bytes), with a range from -2147483648 to 2147483647. On some older or embedded platforms it is 16 bits (2 bytes), with a range from -32768 to 32767.

Basically, the more bytes you have, the more distinct values you can represent. A float is typically the same size as a 32-bit int, but it trades precision for range: it can actually represent far larger magnitudes than an int, plus lots of fractional values in between, at the cost of holding only about 7 significant decimal digits.

Quote:
Original post by DevFred
Quote:
Original post by Antonym
all I can tell is int doesn't handle decimals

Neither do float nor double. Those two handle binary fractions and can only approximate most decimal fractions.


I'm sorry, but I do not understand this and I was hoping that you could explain it a bit more. I was under the impression that every decimal fraction has exactly one corresponding binary fraction, and that the two are precisely equal - not approximations. I know that there are some oddities, such as fractions that terminate in one base may not terminate in another (is this what you are referring to?). I'm just curious as to what you mean by this.

Thanks,
-Brian

Edit: This'll teach me to post without reading links. I get it now, thanks.

An int is often the natural word size of the processor, but that's a convention, not a guarantee: on 32-bit x86 it has 32 bits, and on most x64 platforms it is still 32 bits even though the word size is 64. There are even platforms where it has 24 bits. The C and C++ standards only require that an int have at least 16 bits.

The problem is not that you can't represent a base-10 fraction in base 2. Numbers themselves don't change between bases: any fraction can be written in base 2, 3, 5, 7, 10, or base 65535, though it may take infinitely many digits.

The problem is that a float represents a sum of binary fractions, and you have a limited number of bits to use:
1/2 1/4 1/8 1/16 1/32 ....

with 3 bits, you could only represent:
1/8 = 1/8
2/8 = 1/4
3/8 = 1/4 + 1/8
4/8 = 1/2
5/8 = 1/2 + 1/8
6/8 = 1/2 + 1/4
7/8 = 1/2 + 1/4 + 1/8

so with 24 bits you can only represent the fractions
i / 2**24 | i is a non-negative integer

the exponent allows you to multiply that fraction by
2**j | j is an integer

As you can see, you can only represent a finite collection of numbers this way, and so it will never match exactly all the real fractions you can make.

Quote:
Original post by shou4577
I know that there are some oddities, such as fractions that terminate in one base may not terminate in another (is this what you are referring to?).

Yes. For example, the number "one tenth" is
in decimal: 0.1
in binary: 0.00011001100110011001100110011001100...

Since you have to make a cut somewhere (you don't have infinite precision), the decimal number 0.1 cannot be represented exactly as a binary fraction.

The approximations for 0.1 are
0.100000001490116119384765625 for float and
0.1000000000000000055511151231257827021181583404541015625 for double.

Lol why can't they just be represented the same way as an int, but with an extra byte for the decimal position? XD That would make more sense to me, but I guess the processor wouldn't handle it as quickly.

Also reading the last post, I understand that:

1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/256

are all terminating fractions in binary?
like:

1/2 = 0.1
1/4 = 0.01
1/8 = 0.001
1/16 = 0.0001
1/32 = 0.00001
1/64 = 0.000001
1/128 = 0.0000001
1/256 = 0.00000001

That's because binary is base 2: instead of place values of 1/10, 1/100, 1/1000, they're 1/2, 1/4, 1/8.

Wow I never noticed that before :D

the inner workings of computers are very interesting to study...

[Edited by - Super Llama on November 13, 2008 10:02:04 AM]

Quote:

Lol why can't they just be represented the same way as an int but with an extra byte for decimal position? XD that would make more sense but the processor wouldn't understand it as quickly.

That wouldn't actually solve anything, though. It is a valid alternative representation for numbers with non-whole portions (often called 'fixed point'), more typically implemented as N bits of "left of the point" integer and M bits of "right of the point" integer, such that N + M is some natural size (16, 32, etc.).

But it doesn't solve any problems without creating equivalent ones.

Historically, however, it was faster -- fixed point arithmetic is done with integers and some clever shifting and reliance on simple mathematical truisms. Since it is integer-based, it was traditionally faster than floating-point because some chips used to have very slow FPUs or no FPU at all; some chips had offboard FPUs for which it could be expensive to push and pop values onto the FPU stack, etc.

