sakky

Data type sizes


Okay, this is a continuation of a different post, “WORD & DWORD”. To Polymorphic OOP: the compiler I use is VS.NET, does this explain anything? And the char is one byte, hence the reason I say byte all the time. I would like to know where you got that info, because it is wrong on my machine and my friend's. The int is larger than the long on a 64-bit processor.

My whole point in all of this is that the int data type has a variable size depending on the processor's bus. The char, short, and long always stay the same size. I believe that GCC produces the same sizes as VS.NET: the char is 1 byte, the short is 2 bytes, and the long is 4 bytes. The int, however, is 4 bytes on my 32-bit processor, and 8 bytes on my friend's 64-bit processor.

That standard sounds not entirely truthful, or you misunderstand it. The int can only take values from 0 to 65,535 unsigned, and -32,768 to 32,767 signed. Even if it takes up 4 bytes, it can only hold those values. Don't believe me? Try testing it out! But the long can hold -2,147,483,648 to 2,147,483,647 signed, and 0 to 4,294,967,295 unsigned.

The screwed-up thing is that the int and the long are the same size on my 32-bit processor, but the int STILL can't hold the range of the long, because of that standard or some other reason. On my machine, sizeof( int ) = 4, and on my friend's machine sizeof( int ) = 8. However, sizeof( long ) = 4 on both of our machines.

Quote:
The int can only take values from 0 to 65,535 unsigned, and -32,768 to 32,767 signed.


What do you mean? On a 32-bit system an unsigned int goes up to 4.2 billion, give or take. Or am I misunderstanding you?

Not sure which language you're talking about so...

From the ISO C99 Standard:

[6.2.5.8] For any two integer types with the same signedness and different integer conversion rank (see 6.3.1.1), the range of values of the type with smaller integer conversion rank is a subrange of the values of the other type.

[6.3.1.1.1] ...The rank of long long int shall be greater than the rank of long int, which shall be greater than the rank of int...

Therefore, it seems to me that in C, long int must always have a range at least as large as int's.

------------------------------

Someone already quoted this from the ISO C++ 2003 Standard but for completeness:

[3.9.1.2] There are four signed integer types: “signed char”, “short int”, “int”, and “long int.” In this
list, each type provides at least as much storage as those preceding it in the list.


This plainly says that long int provides at least as much storage space as int. Presumably a compiler wouldn't limit the value range of a type to less than its storage space allows, but I don't know whether the standard explicitly prevents that. So for C++, I dunno.

It matters to some of us. And these things are standardized, at least to a certain extent, so it's not really just magically "different" for no reason at all.

Sorry to quote again but:
[C99 6.2.5.5] ... A ‘‘plain’’ int object has the natural size suggested by the architecture of the execution environment...

One of the things I hate about web programming is the lack of concern about following standards. Fortunately, things have been getting better over the last few years.

I should mention that if the range of an unsigned int is limited to 0-65535 for you, it would be specific to your compiler or execution environment, not because of a strange C or C++ rule.

[Edited by - dcosborn on September 20, 2004 6:00:39 PM]

Quote:
Original post by sakky
My whole point in all of this is that the int data type has a variable size depending on the processor's bus. The char, short, and long always stay the same size.

I would like to know where you got this information. I don't think it is right.

Quote:
Original post by sakky
I believe that GCC produces the same sizes as VS.NET: the char is 1 byte, the short is 2 bytes, and the long is 4 bytes.

GCC can compile C/C++ code for several platforms. The sizes of the types are different for different platforms. In addition, there are a couple of versions of GCC for the PS2, and a long is 64 bits in one version, but 32 bits in the other.

Quote:
Original post by sakky
The int can only take values from 0 to 65,535 unsigned, and -32,768 to 32,767 signed. Even if it takes up 4 bytes, it can only hold those values.

You are very very confused.

Well, what I mean is that an int is a fixed size on a 32-bit processor in the amount of memory it takes up, and on a 64-bit processor the int takes up more memory. That is what I'm getting at.

It seems, from what I've recently read, that an int's size (in bytes) and range are purely dependent on the CPU and tools used. But a short or a long are not. They are the same size everywhere and, in my opinion, are much better to use because I can always rely on them.

I use Visual C++ .NET and I have an Athlon XP 1800. When I compile my program with VC, sizeof( int ) = 4 and sizeof( long ) = 4. But I can't put 2 or 4 million-something in an int, though I can with a long, even though they take up the same amount of memory.

Hence, the long will always be 4 bytes and be able to hold huge values, whereas an int has a variable size and can't hold values as big as a long. This is my point, and I think ints suck and shorts and longs rule!

[edit] What does 'nvm' mean?

Quote:
Hence, the long will always be 4 bytes and be able to hold huge values, whereas an int has a variable size and can't hold values as big as a long. This is my point, and I think ints suck and shorts and longs rule!


I'm sorry, but I don't understand what you're talking about! Surely if a long is 4 bytes in length it can hold a value between 0 and 4.2 billion if unsigned. If an int is the same length, it will hold the same. Why would a compiler allocate 4 bytes of memory and only use 2? (Which is your implication in the top post.)

Quote:
What does 'nvm' mean?


Never mind.

The long and short are just as volatile as the int. The short, int, and long must each be equal to or larger than the previous one, so they're all dependent on one another. If an int grows, it may be necessary that the long grow too. The int should never be larger than the long.

I wonder if alignment could be causing your sizeof(int) to equal 4 when it's actually 2 or something?

Does this give you 0 or 65536?

#include <iostream>

int main()
{
    unsigned int a = 65535;
    std::cout << ++a;  // 0 if unsigned int is 16 bits, 65536 if wider
    return 0;
}
