Data type sizes

26 comments, last by Zahlman 19 years, 7 months ago
Yeah, it would, but I can't think of any other reason to limit the range like that. [totally]
Quote:Original post by sakky
It seems, from what I've recently read, that an int's size (in bytes) and range are purely dependent on the CPU and tools used. But a short or a long are not. They are the same size everywhere and are much better in my opinion to use, because I can always rely on them.

Wrong. short is shorthand for short int, and long is shorthand for long int. Their sizes aren't fixed; the only guarantee is that sizeof(short) <= sizeof(int) and sizeof(int) <= sizeof(long).
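If you want to see what your own compiler does, a quick sketch like this (nothing clever, just printing the sizes and checking the guaranteed ordering) will do:

#include <cassert>
#include <iostream>

int main()
{
    // The standard fixes only the ordering of these sizes, not the values.
    std::cout << "sizeof(short) = " << sizeof(short) << '\n'
              << "sizeof(int)   = " << sizeof(int)   << '\n'
              << "sizeof(long)  = " << sizeof(long)  << '\n';

    // These relations must hold on every conforming implementation.
    assert(sizeof(short) <= sizeof(int));
    assert(sizeof(int)   <= sizeof(long));
    return 0;
}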

Quote:Original post by sakky
I use Visual C++ .NET and I have an Athlon XP 1800. When I compile my program with VC, sizeof( int ) = 4 and sizeof( long ) = 4. But I can't put 2 or 4 million something in an int, but I can with a long. Even though they take up the same amount of memory.

Wrong. From limits.h:
#define INT_MAX       2147483647    /* maximum (signed) int value */
#define UINT_MAX      0xffffffff    /* maximum unsigned int value */
#define LONG_MIN    (-2147483647L - 1) /* minimum (signed) long value */
#define LONG_MAX      2147483647L   /* maximum (signed) long value */
#define ULONG_MAX     0xffffffffUL  /* maximum unsigned long value */

They're exactly the same.

Quote:Original post by sakky
Hence, the long will always be 4 bytes and be able to hold huge values, whereas an int has a variable size and can't hold values as big as a long. This is my point, and I think ints suck and shorts and longs rule!

As I said above, a long isn't guaranteed to be 4 bytes at all. On a 32-bit platform and compiler, a long can hold the same range as an int.

Quote:Original post by sakky
[edit] What does 'nvm' mean?

nvm = NeVer Mind I think

EDIT: The results of running your program in the other thread on a 64-bit linux machine with GCC:
[root@Winry root]# ./a.out
A int is 4 in bytes
A long is 8 in bytes


[Edited by - Evil Steve on September 20, 2004 7:07:28 PM]
Quote:Original post by sakky
My whole point in all of this is that the int data type has a variable size depending on the processor's bus. The char, short, and long always stay the same size.


Again, I got all of my info directly from the C++ standard, which I quoted in the previous thread. I don't see why you have such trouble believing this.

Quote:Original post by sakky
The int can only take values from 0 to 65,535 unsigned, and -32,768 to 32,767 signed.

That is completely compiler dependent. For instance, on a 32-bit machine with the VS .NET 2003 compiler, an unsigned int has a range of 0 to 2^32 - 1 (over 4 billion) and an int has a range of -2^31 to 2^31 - 1 (just under -2 billion and just over 2 billion respectively). The ranges you gave apply if an int happens to be 16 bits on a two's complement system, which is not guaranteed to be true (and usually is NOT true, as most people nowadays use 32-bit or 64-bit systems). The exact size and range are not given by the standard; it only sets minimum guarantees, just like for all of the other fundamental integral types excluding the char types (which must be exactly 1 byte in C++ storage units).
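If you'd rather check than assume, here's a minimal sketch using <climits>; it just reports whatever range your particular implementation provides:

#include <climits>
#include <iostream>

int main()
{
    // These macros describe the implementation's actual int range,
    // whatever the size and representation happen to be.
    std::cout << "INT_MIN  = " << INT_MIN  << '\n'
              << "INT_MAX  = " << INT_MAX  << '\n'
              << "UINT_MAX = " << UINT_MAX << '\n';
    return 0;
}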

Quote:Original post by sakky
Even if it takes up 4 bytes, it can only hold those values.

Again, you are mistaken. It's compiler dependent.


Quote:Original post by sakky
Don't believe me? Try testing it out! But the long, however, can hold -2,147,483,648 to 2,147,483,647 signed, and 0 to 4,294,967,295 unsigned.

I quoted the standard, which is more reliable than testing a compiler (since a compiler can be noncompliant), but since you are so stubborn: the test shows that an int and an unsigned int have the same ranges as a long on this compiler (the ranges I gave earlier in my reply, and also the ones you gave here). Anyway, if the test had come out differently, it wouldn't have meant you were right, since it is completely valid for a compiler to do that (though on a 32-bit or 64-bit machine it would be pretty stupid). It's up to the compiler how it wishes to implement it.
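For the curious, here is a sketch of one way to run such a comparison yourself (using std::numeric_limits, so no particular values are assumed; not necessarily the exact test I ran):

#include <iostream>
#include <limits>

int main()
{
    // Compare the ranges the compiler actually provides. On the compiler
    // discussed here they come out equal, but the standard doesn't require it.
    std::cout << std::boolalpha
              << "int max == long max? "
              << (std::numeric_limits<int>::max() == std::numeric_limits<long>::max())
              << '\n'
              << "int min == long min? "
              << (std::numeric_limits<int>::min() == std::numeric_limits<long>::min())
              << '\n';
    return 0;
}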

Quote:Original post by sakky
The screwed up thing is that the int and the long are the same size on my 32-bit processor. But the int STILL can't even hold the amount of the long because of that standard or some other reason. Because on my machine, sizeof( int ) = 4. And on my friend's machine sizeof( int ) = 8. However, sizeof( long ) = 4 on both of our machines.

Either you have a noncompliant compiler or you are simply lying (sorry to make such an accusation, but I've quoted the standard, tested with the same compiler that you are using, yet you still are claiming these results)!

Since you are continuing on, I will again quote the standard:


1.7
-1-
Quote:"The fundamental storage unit in the C++ memory model is the byte." ...




5.3.3
-1-
Quote:"The sizeof operator yields the number of bytes in the object representation of its operand." ...




3.9.1
-2-
Quote:"There are four signed integer types: ``signed char'', ``short int'', ``int'', and ``long int.'' In this list, each type provides at least as much storage as those preceding it in the list." ...





Since 1.7 declares the fundamental storage unit to be a byte, 5.3.3 declares the sizeof operator to yield the number of bytes (the unit of storage defined in 1.7) that an instantiation of a type occupies, and 3.9.1 declares that each type in the list provides at least as much storage (defined in 1.7 in bytes) as the one before it, with int coming before long int, it follows that sizeof(long) must always yield a value at least as large as sizeof(int).
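If you want the compiler itself to enforce that conclusion, the old typedef trick works as a compile-time assertion (just a sketch, not anything from the standard; the negative array size is what makes a violation fail to compile):

// Compiles only if sizeof(long) >= sizeof(int); a conforming compiler
// rejects arrays of non-positive size, so a violation breaks the build.
typedef char long_at_least_as_big_as_int[(sizeof(long) >= sizeof(int)) ? 1 : -1];

int main()
{
    return 0;
}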
int = 4 bytes
long = 4 bytes
Press any key to continue

Is what I get on my system. Why is my machine so weird then? Hmmm, I give up!
Take back the internet with the most awesome browser around, FireFox
I'm with Evil Steve and Polymorphic OOP on this. Except for the fact that Evil Steve shouldn't log in as root. [evil]

sakky - Try this. Note that char, signed char, and unsigned char are actually three different types. All other integral types are signed by default.

#include <iostream>
#include <limits>

int main()
{
  // Note: the char min/max values print as characters here, not numbers,
  // since char is a character type.
  std::cout << "char size          " << sizeof(char) << std::endl
            << "char min           " << std::numeric_limits<char>::min() << std::endl
            << "char max           " << std::numeric_limits<char>::max() << std::endl
            << "signed char min    " << std::numeric_limits<signed char>::min() << std::endl
            << "signed char max    " << std::numeric_limits<signed char>::max() << std::endl
            << "unsigned char min  " << std::numeric_limits<unsigned char>::min() << std::endl
            << "unsigned char max  " << std::numeric_limits<unsigned char>::max() << std::endl
            << std::endl;

  std::cout << "short size         " << sizeof(short) << std::endl
            << "short min          " << std::numeric_limits<short>::min() << std::endl
            << "short max          " << std::numeric_limits<short>::max() << std::endl
            << "unsigned short min " << std::numeric_limits<unsigned short>::min() << std::endl
            << "unsigned short max " << std::numeric_limits<unsigned short>::max() << std::endl
            << std::endl;

  std::cout << "int size           " << sizeof(int) << std::endl
            << "int min            " << std::numeric_limits<int>::min() << std::endl
            << "int max            " << std::numeric_limits<int>::max() << std::endl
            << "unsigned int min   " << std::numeric_limits<unsigned int>::min() << std::endl
            << "unsigned int max   " << std::numeric_limits<unsigned int>::max() << std::endl
            << std::endl;

  std::cout << "long size          " << sizeof(long) << std::endl
            << "long min           " << std::numeric_limits<long>::min() << std::endl
            << "long max           " << std::numeric_limits<long>::max() << std::endl
            << "unsigned long min  " << std::numeric_limits<unsigned long>::min() << std::endl
            << "unsigned long max  " << std::numeric_limits<unsigned long>::max() << std::endl
            << std::endl;
}
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian W. Kernighan
Quote:Original post by sakky
int = 4 bytes
long = 4 bytes
Press any key to continue

Is what I get on my system. Why is my machine so weird then? Hmmm, I give up!

Again, that is completely valid. A long has to be at least as big as an int. In your example it gave a 4 and a 4, and 4 is greater than or equal to 4. There is nothing weird about your system. Again, this doesn't mean that a long is 4 bytes on every compiler.

Quote:Original post by Empirical
But wouldn't that imply he's on a 16-bit system? Or are there other reasons for an int being 2 bytes?

sizeof(int) can be 2 bytes on a 32-bit system, or even on a 64-bit system, or it can even be 1 byte. The reason this is possible is that the C++ standard never guarantees that a byte has to be 8 bits, nor that a byte in C++ terms has to be the same thing as a byte in the system's terms. For instance, a char must be at least 8 bits, so there is nothing stopping it from being 16 bits. A char is guaranteed to be 1 byte, which means that the C++ byte on such a compiler is in fact 16 bits (a 2-byte value in the system's terms). Since one byte is then 16 bits, a 32-bit int can be represented in 2 C++ bytes on that compiler. Therefore, sizeof(int) can yield 2 bytes, which is still a 32-bit integer and can hold just as much as a 32-bit int on a system where sizeof yields 4 with a byte defined to be 8 bits (assuming they both use the same way of representing signed integers, most likely two's complement).
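CHAR_BIT in <climits> tells you how many bits the implementation's byte really has, so a short sketch like this shows the actual bit width of an int regardless of what sizeof alone reports:

#include <climits>
#include <iostream>

int main()
{
    // sizeof counts C++ bytes; CHAR_BIT says how many bits each byte holds
    // (at least 8, but possibly more on an unusual platform).
    std::cout << "CHAR_BIT       = " << CHAR_BIT << '\n'
              << "sizeof(int)    = " << sizeof(int) << '\n'
              << "bits in an int = " << sizeof(int) * CHAR_BIT << '\n';
    return 0;
}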
Quote:Original post by sakky
int = 4 bytes
long = 4 bytes
Press any key to continue

Is what I get on my system. Why is my machine so weird then? Hmmm, I give up!

Because your system is 32-bits :P Or the compiler is

Quote:Original post by Fruny
Except for the fact that Evil Steve shouldn't log in as root.

Actually, it was Pouya's machine. Blame him :P
Polymorphic OOP>
Ah, I see. But he also says that on his system an int holds 0-65535 (unsigned). That implies a 16-bit value. But why, unless he is on a 16-bit platform?
Quote:Original post by Empirical
Polymorphic OOP>
Ah, I see. But he also says that on his system an int holds 0-65535 (unsigned). That implies a 16-bit value. But why, unless he is on a 16-bit platform?


ints aren't necessarily in two's complement format, so you aren't guaranteed that 32 bits can hold 2^32 different values. For instance, if an implementation instead used a sign bit (sign-magnitude), it would have 2^32 - 1 values it could represent (since it would have both a positive and a negative 0), and conceivably some off-the-wall implementation could represent an even smaller range (or a shifted range), though it would be highly unlikely. Of course, in practice it would be tough to find an example of these situations, but it is important to note that the standard does leave it open as a possibility, and as such you still can make no assumptions. The exact range of a 32-bit int is never defined. Since his int can hold that range, it's extremely likely that he is running a 16-bit machine with a compiler that represents an int with 16 bits in two's complement form, but that is not guaranteed.
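std::numeric_limits will also tell you how many of those bits are value bits, which is one way to check this without assuming any particular representation (again, just a sketch):

#include <iostream>
#include <limits>

int main()
{
    // digits is the number of value bits (the sign bit is excluded), so the
    // representable range follows from this plus whatever representation the
    // implementation chose, which the standard leaves open.
    std::cout << "value bits in int = " << std::numeric_limits<int>::digits << '\n'
              << "int min           = " << std::numeric_limits<int>::min()  << '\n'
              << "int max           = " << std::numeric_limits<int>::max()  << '\n';
    return 0;
}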
I admit, I was wrong. I guess I was thinking about something else
:(
Take back the internet with the most awesome browser around, FireFox

