
32 bit integers


Recommended Posts

It would be better to use int when you don't really care about the size of the integer. (If you're writing it to a file, for example, then you do care what size it is; __int32 and friends are MS-specific, though.)

The reason you should use int when you don't actually care about the size is that with a 64-bit compiler, int should be a 64-bit number (I'm not sure if it actually is in the IA64 compiler; does anyone know?), because that's the natural size for an integer on a 64-bit processor.
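A minimal sketch of that split: plain int where the width doesn't matter, and the fixed-width int32_t from <cstdint> (the standardised C99/C++11 counterpart of the MS-specific __int32) for the value that goes to disk. The file name is just an example:

    // Sketch: plain int where the width doesn't matter, a fixed-width type
    // for data whose size does matter (here, a value written to a binary file).
    #include <cstdint>
    #include <cstdio>

    int main() {
        const std::int32_t record = 1234;       // size matters: exactly 32 bits

        std::FILE* f = std::fopen("data.bin", "wb");
        if (!f) return 1;

        // Plain int for the loop counter: its exact width is irrelevant here.
        for (int i = 0; i < 4; ++i) {
            std::fwrite(&record, sizeof(record), 1, f);  // always 4 bytes each
        }
        std::fclose(f);
        return 0;
    }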


codeka.com - Just click it.

Would it be int that stays 32 bits, or long, though (on a 64-bit processor)?

You know, I never wanted to be a programmer...

Alexandre Moura

The only thing that you are guaranteed by the standard C/C++ types is that an int is equal to or larger than a short and equal to or smaller than a long. Other than that, there's no real guideline to go by.

You are probably safe in assuming that an int will be the 'native' size on your machine. For example, on a 32-bit system, ints will be 32 bits. On a 64-bit system, ints will be 64 bits. This is USUALLY how it works.
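Rather than assume, a quick test program will show what a particular compiler actually does. This is only a sketch, and the output will differ from platform to platform:

    // Quick check of what the sizes actually are on the compiler at hand.
    #include <cstdio>

    int main() {
        std::printf("char : %u\n", static_cast<unsigned>(sizeof(char)));
        std::printf("short: %u\n", static_cast<unsigned>(sizeof(short)));
        std::printf("int  : %u\n", static_cast<unsigned>(sizeof(int)));
        std::printf("long : %u\n", static_cast<unsigned>(sizeof(long)));
        std::printf("void*: %u\n", static_cast<unsigned>(sizeof(void*)));
        return 0;
    }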

If I remember correctly, the C/C++ standard guarantees only the following:

1 == sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)

Note that "1" means "1 unit", not "1 byte". If your char was 16-bits long, sizeof(char) would still be 1, it''s just that everything else is a multiple of 16-bits.

Oh yeah, and sizeof(signed) == sizeof(unsigned) (for any of the types)

If you've tried Microsoft's Visual Studio.NET beta, you'll notice that it now gives warnings when you assume anything about the size of a variable (this is in anticipation of their 64-bit compiler). One strange thing, though, is that it gives a warning when you cast from a pointer to an int, which makes me think that an int will still be 32-bit (since the pointer will be 64-bit). Maybe it's just a sanity check; you shouldn't assume anything about the size of a pointer anyway...
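If it helps, this is roughly the kind of cast that warning is aimed at, together with the usual workaround; a sketch assuming a platform that provides uintptr_t in <cstdint> (an unsigned integer type wide enough to hold a pointer value):

    // The cast the warning is about, assuming a target where pointers are
    // wider than int, plus the workaround via a pointer-sized integer type.
    #include <cstdint>
    #include <cstdio>

    int main() {
        int value = 7;
        int* p = &value;

        // int truncated = (int)p;   // risky: may not fit on a 64-bit target

        std::uintptr_t asInteger = reinterpret_cast<std::uintptr_t>(p);
        std::printf("%llu\n", static_cast<unsigned long long>(asInteger));
        return 0;
    }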


codeka.com - Just click it.

Because of all this int size uncertainty, I think it'd be better if they just changed to using "byte", "word", "dword" and "qword", because there can be no uncertainty in size then. Of course, that will never happen.
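A sketch of what those fixed-name types could look like, with compile-time size checks so a wrong assumption fails at build time. The typedef names are made up for illustration, not taken from any particular library, and static_assert needs a C++11 compiler:

    #include <cstdint>

    typedef std::uint8_t  byte_t;
    typedef std::uint16_t word_t;
    typedef std::uint32_t dword_t;
    typedef std::uint64_t qword_t;

    static_assert(sizeof(byte_t)  == 1, "byte_t must be 1 byte");
    static_assert(sizeof(word_t)  == 2, "word_t must be 2 bytes");
    static_assert(sizeof(dword_t) == 4, "dword_t must be 4 bytes");
    static_assert(sizeof(qword_t) == 8, "qword_t must be 8 bytes");

    int main() {
        dword_t colour = 0x00FF00FFu;  // exactly 32 bits, whatever int is
        return colour ? 0 : 1;
    }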

