MarlboroKing

Exactly what's the point of 'int32_t', etc.


I see very little point in using 'int32_t' compared to 'signed int'. The irrLicht project actually shows one valid example of it in .\irrTypes.h:

#if defined(_MSC_VER) || ((__BORLANDC__ >= 0x530) && !defined(__STRICT_ANSI__))
typedef unsigned __int64            u64;    // MSVC/Borland: compiler-specific 64-bit type
#elif __GNUC__
#if __WORDSIZE == 64
typedef unsigned long int           u64;    // 64-bit GCC targets: long is 64 bits here
#else
__extension__ typedef unsigned long long    u64;    // 32-bit GCC targets
#endif
#else
typedef unsigned long long          u64;    // fallback for any other compiler
#endif

MSDN writes little about the 'fixed size' data types it supports, such as __int32:
"The types __int8, __int16, and __int32 are synonyms for the ANSI types that have the same size, and are useful for writing portable code that behaves identically across multiple platforms" -- Great, it's the "ANSI principle", I suppose.

 

I guess my point is: why does MSVC declare the data types prefixed with "__" to be a fixed size, yet *most* other compilers treat the globally used "int" (etc.) as a fixed size? Is there any scenario where I should dread a non-fixed-size integer?

 

Edit: I am aware that 'int' on an x86 build is 32 bits, yet should be 64 bits on an x64 build.

Edited by MarlboroKing


Alright, I see. Suppose you expect your plain int to have at least 32 bits of storage, but the platform it's deployed on gives it only 16 bits.

So if you then perform a bit shift to pull the upper 16 bits out of the supposed 32: bam.
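To make that concrete, here is a minimal sketch of the failure mode (assuming a hypothetical platform where int is only 16 bits):

#include <cstdint>
#include <cstdio>

int main() {
    // On a 16-bit-int platform, '1 << 20' is undefined behaviour: the
    // literal 1 is a plain int, and the shift exceeds its width.
    // int broken = 1 << 20;                        // UB if int is 16 bits

    // A fixed-width type makes the shift well-defined everywhere:
    std::uint32_t ok = std::uint32_t{1} << 20;
    std::printf("%u\n", static_cast<unsigned>(ok)); // always prints 1048576
    return 0;
}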

 

Thanks!


It is VERY important when you take portability into account.

 

A very simple and common example in games would be a binary protocol. If you create a struct:

struct pos_update_package_st{
  long id;
  float x, y, z;
};

If the code runs on a 32-bit machine, x in memory will start at the 5th byte, while on a 64-bit machine it would start at the 9th byte. In this example you would surely run into a bug, and very likely a segfault.
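The usual fix is to pin every field to an explicit width, so the layout no longer depends on the compiler's notion of long. A sketch (assuming float is a 32-bit IEEE type, which holds on all common platforms):

#include <cstdint>

struct pos_update_package_st {
    std::int32_t id;      // exactly 4 bytes on every platform
    float        x, y, z; // 32-bit IEEE float on all common platforms
};

// Catch unexpected padding at compile time instead of on the wire:
static_assert(sizeof(pos_update_package_st) == 16,
              "pos_update_package_st layout changed");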



The claim about the 5th and the 9th byte ain't necessarily so, though. In MSVC all the integer types have the same size on 32-bit and 64-bit builds; i.e. long is also 32 bits.


 



Most Windows compilers (even the Windows version of gcc) use 32 bits for long (at least on IA64 and x86-64).

On UNIX, Linux and OS X, however, long is usually 64 bits on 64-bit versions of the OS.

 

If you are writing a multiplayer game, you might want to support Linux servers and OS X clients, and then it really helps if you know exactly what it is you are sending over the network.
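For instance, here is a minimal sketch (the function name is made up) of writing a value into a network buffer with an explicit width and byte order, instead of trusting whatever long happens to be on the sending machine:

#include <cstdint>

// Write a 32-bit value big-endian, byte by byte, so the wire format is
// identical regardless of the host's integer sizes and endianness.
void write_u32_be(unsigned char* out, std::uint32_t v) {
    out[0] = static_cast<unsigned char>(v >> 24);
    out[1] = static_cast<unsigned char>(v >> 16);
    out[2] = static_cast<unsigned char>(v >> 8);
    out[3] = static_cast<unsigned char>(v);
}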

Edited by SimonForsman


I guess my point is: why does MSVC declare the data types prefixed with "__" to be a fixed size, yet *most* other compilers treat the globally used "int" (etc.) as a fixed size? Is there any scenario where I should dread a non-fixed-size integer?

Yes. You should always assume that any size that isn't fixed might change in the future, from OS to OS, from compiler to compiler, or from hardware to hardware. Unless you use a fixed size.

uint32_t, int32_t, uint64_t, etc... are standardized.
 

Edit: I am aware that 'int' on an x86 build is 32 bits, yet should be 64 bits on an x64 build.

Actually, compiler vendors decided that 'int' should now be a constant 32 bits, even on 64-bit systems, because some programmers assumed it'd be a constant size instead of using the fixed-size ints they were supposed to, and too much code would've broken in the transition.

int was never supposed to be a fixed size; it was supposed to be the best size for that system, the system's native size. But too many people didn't get the memo.
If your code needs a variable to be a certain size, then enforce it in the code. Don't have the code assume it; make the code guarantee it.
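One way to make the code guarantee a size rather than assume it, a sketch using C++11's static_assert:

#include <climits>

// Fail the build, rather than misbehave at runtime, if the platform's
// plain int is narrower than this code assumes:
static_assert(sizeof(int) * CHAR_BIT >= 32,
              "this code assumes int has at least 32 bits");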


Just to elaborate a little further. The thing with the standard types in C is that... well, they are not really well defined. According to the C90 standard, and I quote:

 

"A 'plain' int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>)."

 

 
 
Of course, what constitutes a "natural size" is not defined anywhere. The common understanding is that it should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 bits on 64-bit systems, and so on. Of course, that is not always the case. For example, in Visual Studio int is always 32 bits regardless of where you are, and that's it. I bet they did this for compatibility reasons (and because, let's face it, Microsoft is not big on standards).
 
The bottom line is that in reality you should never assume what the size of an int is (not without checking, at least).
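Checking is cheap; a minimal sketch:

#include <climits>
#include <cstdio>

int main() {
    // Print the widths this particular compiler/platform actually uses.
    std::printf("int:  %zu bits\n", sizeof(int) * CHAR_BIT);
    std::printf("long: %zu bits\n", sizeof(long) * CHAR_BIT);
    return 0;
}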
 
However, having fixed-size types is a pretty common requirement. You need it if you are going to store files that can be read by other applications (or by the same application on another platform). You need it to send data over a network. You need it to use shared memory, etc.
 
Microsoft's response to this need was to create the special __int types. But of course those are Microsoft-only. The people on the C committee noticed this problem too, and in the C99 standard they created the stdint.h header, with improved definitions for integral types. You can find a good overview of the newly defined types here: http://www.cplusplus.com/reference/cstdint/ .
 
This header (renamed cstdint, because that's the naming convention for accessing C standard headers from C++ programs) was adopted into C++11. Although, to the best of my knowledge, compilers supporting C99 already had it, and of course you could include it in your C++ programs without problems.
 
I recommend following that last link, because I think that correctly using the new types is important... (although it makes you really think about what you want: maximum speed or minimum memory footprint).
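The header encodes that speed-versus-footprint choice directly in the type names; a quick sketch of the main families:

#include <cstdint>

std::int32_t       exact;  // exactly 32 bits, no padding (the type this thread is about)
std::int_least32_t least;  // smallest type with at least 32 bits (minimum footprint)
std::int_fast32_t  fast;   // fastest type with at least 32 bits (maximum speed)
std::intmax_t      widest; // the widest integer type the implementation offers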
Edited by Javier Meseguer de Paz
