
# Exactly what's the point of 'int32_t', etc.

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

24 replies to this topic

### #1MarlboroKing  Members

Posted 17 March 2014 - 12:07 PM

I see very little point in using 'int32_t' compared to 'signed int'. The irrLicht project actually shows one valid example for it in .\irrTypes.h:

```cpp
#if defined(_MSC_VER) || ((__BORLANDC__ >= 0x530) && !defined(__STRICT_ANSI__))
typedef unsigned __int64                    u64;
#elif __GNUC__
#if __WORDSIZE == 64
typedef unsigned long int                   u64;
#else
__extension__ typedef unsigned long long    u64;
#endif
#else
typedef unsigned long long                  u64;
#endif
```


MSDN says little about the 'fixed size' data types it supports, such as __int32:
"The types __int8, __int16, and __int32 are synonyms for the ANSI types that have the same size, and are useful for writing portable code that behaves identically across multiple platforms" -- Great, it's the "ANSI principle", I suppose.

I guess my point is: why does MSVC declare the data types prefixed with "__" to be a fixed size, yet *most* other compilers treat the globally used "int" (etc.) as a fixed size? Is there any scenario where I should dread a non-fixed-size integer?

Edit: I am aware that 'int' on an x86 build is 32 bits, yet should be 64 bits on an x64 build.

Edited by MarlboroKing, 17 March 2014 - 12:09 PM.

### #2samoth  Members

Posted 17 March 2014 - 12:18 PM


The point is that you know that this type has 32 bits. Whereas for signed int you only know that it is at least the size of short, which is at least 16 bits. It might be 16, 32, or 64 bits. Or something else.

Sometimes it just doesn't matter what exact size a variable has, but sometimes it does. Using types like int32_t is a standard, portable way of being sure.

Similar is often done for various APIs, for example types like DWORD, HANDLE, or GLint. This, too, is for portability (and compatibility), but in this case in the exact opposite way. Whatever the real type is is completely opaque to you. You need not know and you don't want to know. If the API changes, they'll just change the typedefs, and your source code will still work the same.

Edited by samoth, 17 March 2014 - 12:23 PM.

### #3MarlboroKing  Members

Posted 17 March 2014 - 12:25 PM

Alright, I see. Say you're expecting your plain int to have at least 32 bits of storage, but the platform it's deployed on gives it only 16 bits.

If you then perform a bit shift to take the upper 16 bits out of the supposed 32, bam.

Thanks!

### #4samoth  Members

Posted 17 March 2014 - 12:35 PM


Not at least, but exactly.

If you want "at least", you can use int_least32_t or int_fast32_t.

(The former guarantees at least the specified width, and the latter guarantees the fastest type with at least the specified width. Usually they're all identical, but not necessarily so!)

int32_t guarantees that you get exactly what you ask for, no bit more and no bit less (irrespective of what C type this corresponds to on your platform). So you can, for example, use that type on two platforms and be sure that a struct you send over the network from one machine to the other will have elements of the exact same size. This still leaves pointers and endianness as problems, but those are different stories; as far as size is concerned, you're good.

Edited by samoth, 17 March 2014 - 12:39 PM.

### #5KnolanCross  Members

Posted 17 March 2014 - 01:16 PM

It is VERY important when you take portability into account.

A very simple and common example in games would be a binary protocol. Suppose you create a struct:

```cpp
struct pos_update_package_st {
    long id;
    float x, y, z;
};
```

If the code runs on a machine where long is 32 bits, x will start at the 5th byte, while on a machine where long is 64 bits it would start at the 9th byte. In this example you would surely run into a bug, and very likely a segfault.

Currently working on a scene editor for ORX (http://orx-project.org), using kivy (http://kivy.org).

### #6cdoubleplusgood  Members

Posted 17 March 2014 - 02:19 PM

> And the code runs on 32 bits machine the x memory will start at the 5th byte while in a 64 machine it would start at the 9th byte. In this example you would surely run into a bug and very likely into a seg_fault.

This ain't necessarily so. In MSVC all int types have the same size on 32 and 64 bit; i.e. long is also 32 bit.

### #7SimonForsman  Members

Posted 17 March 2014 - 03:11 PM

> > And the code runs on 32 bits machine the x memory will start at the 5th byte while in a 64 machine it would start at the 9th byte. In this example you would surely run into a bug and very likely into a seg_fault.
>
> This ain't necessarily so. In MSVC all int types have the same size on 32 and 64 bit; i.e. long is also 32 bit.

Most Windows compilers (even the Windows version of gcc) use 32 bits for long (at least on IA64 and x86-64).

On UNIX, Linux and OS X, however, long is usually 64 bits on 64-bit versions of the OS.

If you are writing a multiplayer game you might want to support Linux servers and OS X clients, and then it really helps if you know exactly what it is you are sending over the network.

Edited by SimonForsman, 17 March 2014 - 03:12 PM.

I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

### #8Ryan_001  Prime Members

Posted 17 March 2014 - 04:41 PM


The types int8_t, int16_t, etc. are what the fundamental types should have been in the first place.

### #9Servant of the Lord  Members

Posted 17 March 2014 - 05:18 PM

> I guess my point is; why does MSVC declare the data types prefixed with "__" a fixed size, yet \most\ other compilers determine the globally used "int"(etc.) as a fixed size? Is there any scenario that I should dread about a non-fixed size integer?

Yes, you should always assume that any size that isn't fixed might change in the future or from OS to OS or from compiler to compiler or from hardware to hardware. Unless you use a fixed size.

uint32_t, int32_t, uint64_t, etc... are standardized.

> Edit: I am aware that 'int' on a x86 build is 32 bits, yet should be 64 bits on a x64 build.

Actually, we've decided that 'int' should now be a constant 32 bits, even on 64-bit systems, because some programmers assumed it'd be a constant size instead of using the fixed-size ints they were supposed to, and too much code would've broken in the transition.

Int wasn't supposed to be a fixed size; it was supposed to be the best size for that system - the system's native size. But too many people didn't get the memo.
If your code needs a variable to be a certain size, then enforce it in the code. Don't have the code assume it; make the code guarantee it.

It's perfectly fine to abbreviate my username to 'Servant' or 'SotL' rather than copy+pasting it all the time.
All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God.
Of Stranger Flames -

### #10Javier Meseguer de Paz  Members

Posted 17 March 2014 - 09:08 PM

Just to elaborate a little further. The thing with the standard types in C is that... well, they are not really well defined. According to the C90 standard, and I quote:

> A "plain" int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>)

Of course, what constitutes a "natural size" is not defined anywhere. The common understanding is that it should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 bits on 64-bit systems, and so on. Of course, that is not always the case. For example, for Visual Studio int is always 32 bits regardless of where you are, and that's it. I bet they did this for compatibility reasons (and because, let's face it, Microsoft is not big on standards).

The bottom line is that in reality you should never assume what the size of an int is (not without checking, at least).

However, having fixed-size types is a pretty common requirement. You need it if you are going to store files that can be read by other applications (or the same application in another platform). You need it to send data over a network. You need it to use shared memory, etc.

Microsoft's response to this need was to create the special __int types. But of course this is Microsoft-only. The people on the C committee noticed this problem too and, in the C99 standard, they created the stdint.h header, with improved definitions for integral numbers. You have a good overview of the newly defined types here: http://www.cplusplus.com/reference/cstdint/ .

This header (renamed cstdint, because that's the naming convention for accessing C standard headers from C++ programs) was incorporated into C++11. Although, to the best of my knowledge, compilers supporting C99 already had it, and of course you could include it in your C++ programs without problems.

I recommend following that last link because I think that correctly using the new types is important... (although it makes you really think about what you want, whether it is maximum speed or minimum memory footprint).

Edited by Javier Meseguer de Paz, 17 March 2014 - 09:09 PM.

“We should forget about small efficiencies, say about 97% of the time; premature optimization is the root of all evil” -  Donald E. Knuth, Structured Programming with go to Statements

"First you learn the value of abstraction, then you learn the cost of abstraction, then you're ready to engineer" - Ken Beck, Twitter

### #11Hodgman  Moderators

Posted 17 March 2014 - 09:41 PM


> > Edit: I am aware that 'int' on a x86 build is 32 bits, yet should be 64 bits on a x64 build.
>
> Actually, we've decided that 'int' should now be a constant 32 bits, even on 64 bit systems, because some programmers assumed it'd be a constant size, instead of using the fixed size ints that they were supposed to, and too much code would've broke in the transition.
> Int wasn't supposed to be a fixed size, it was supposed to be the best size for that system - that system's native size. But too many people didn't get the memo.
> If your code needs a variable to be a certain size, then enforce it in the code. Don't have the code assume it, make the code guarantee it.

That's not the only reason... Floats aren't native to the CPU (haven't been for a long time), but we still use them more commonly than doubles, because it's usually only harmful to double the memory allocation and bandwidth requirements. Likewise, using int64s when int32s would suffice does you absolutely no good.

### #12King Mir  Members

Posted 18 March 2014 - 10:51 AM

As others have mentioned, int32_t is a standard way to guarantee 32-bit integers. Another thing the int32_t family of types does is guarantee two's complement arithmetic.

### #13Rattrap  Members

Posted 18 March 2014 - 11:37 AM

> Of course, what constitutes a "natural size" is not defined anywhere. The common understanding is that it should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 bits on 64-bit systems, and so on.

Wouldn't intptr_t and uintptr_t be considered the "natural size", since they are supposed to be able to hold the size of a void pointer?

"I can't believe I'm defending logic to a turing machine." - Kent Woolworth [Other Space]

### #14Javier Meseguer de Paz  Members

Posted 18 March 2014 - 12:37 PM

> > Of course, what constitutes a "natural size" is not defined anywhere. The common understanding is that it should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 bits on 64-bit systems, and so on.
>
> Wouldn't intptr_t and uintptr_t be considered the "natural size", since they are supposed to be able to hold the size of a void pointer?

I am not sure I understand your question... the term "natural size" was used in the standard to describe the intended size of an int. No definition of that term was provided, so the size of an int was essentially undefined.

IF we define "natural size" as the size of a word in the architecture, then yes, the size of intptr_t and uintptr_t should be the natural size. But that is only if we use that definition of "natural size". Since the standard provides none, we could use another one. In fact, Microsoft Visual Studio uses another: "natural size" (the size of int) is 32 bits no matter what. Yet intptr_t and uintptr_t must be 64 bits on 64-bit architectures, so by their definition, they wouldn't be the natural size... It's just a matter of how we define what, which is open to interpretation since the standard is not clear.

I don't know if I am giving a clear answer...

PS: Regardless, those types were introduced in C99, 9 years after "natural size" was used for the first time to describe int's size.

Edited by Javier Meseguer de Paz, 18 March 2014 - 12:39 PM.

“We should forget about small efficiencies, say about 97% of the time; premature optimization is the root of all evil” -  Donald E. Knuth, Structured Programming with go to Statements

"First you learn the value of abstraction, then you learn the cost of abstraction, then you're ready to engineer" - Ken Beck, Twitter

### #15Rattrap  Members

Posted 18 March 2014 - 01:09 PM

I wasn't going by Microsoft's definition; I was going with what you called the "common understanding" and the provided C++ equivalents.

> Of course, what constitutes a "natural size" is not defined anywhere. The common understanding is that it should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 bits on 64-bit systems, and so on.

intptr_t and uintptr_t should fall under this definition.

"I can't believe I'm defending logic to a turing machine." - Kent Woolworth [Other Space]

### #16Javier Meseguer de Paz  Members

Posted 18 March 2014 - 02:25 PM

> I wasn't going by Microsoft's definition, I was going with what you called the "common understanding" and the provide C++ equivalent.
>
> > Of course, what constitutes a "natural size" is not defined anywhere. The common understanding is that it should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 bits on 64-bit systems, and so on.
>
> intptr_t and uintptr_t should fall under this definition.

Oh, ok, then. Yes, they do.

“We should forget about small efficiencies, say about 97% of the time; premature optimization is the root of all evil” -  Donald E. Knuth, Structured Programming with go to Statements

"First you learn the value of abstraction, then you learn the cost of abstraction, then you're ready to engineer" - Ken Beck, Twitter

### #17wintertime  Members

Posted 18 March 2014 - 04:53 PM

I think the standards committee finally realized it was foolish not to make all types fixed size in the first place, because most libraries included Rube Goldberg contraptions using the preprocessor to derive, from a million compiler- and platform-specific defines, their own incompatible versions of fixed-size types that sometimes end up not even being the same size. Though in their extreme pursuit of compatibility, they couldn't get themselves to just say short/int/long/long long are now always 16/32/64/128 bits, but had to add a few dozen new types. And the most useful fixed-size versions are not guaranteed to be there, only the fast and least versions that are near useless, so people will still not use them and can enjoy using 100 types per program for a few decades longer.

### #18Matias Goldberg  Members

Posted 18 March 2014 - 05:41 PM

> I think the standards committee finally realized it was foolish to not make all types fixed size in the first place

Umm. No.
The standard guarantees that char is the minimum representation a machine can do (in x86's case, that's 8 bits), and that a short is 2x a char.
Back then there were machines whose register size in bits was not even a power of two, so this format made sense. This historical reason is also why signed integer overflow is undefined behavior, even though on all modern CPUs an overflow behaves pretty consistently.
The fact that char = 8 bits was not a given back then, and certainly wasn't portable.

Edited by Matias Goldberg, 18 March 2014 - 05:43 PM.

### #19nobodynews  Members

Posted 18 March 2014 - 08:31 PM

> And that a short is 2x a char.

If char is 8 bits then the only guarantee you have is sizeof(short) >= 2 * sizeof(char). If char is 16 bits then sizeof(short) >= 1 * sizeof(char). It is possible for an implementation to have sizeof(char), sizeof(short), and sizeof(long) all be 1; char, short, and long would then all use 32 bits of storage. This Stack Overflow answer claims Cray computers used 32 bits for char, so it's not unheard of.

C++: A Dialog | C++0x Features: Part1 (lambdas, auto, static_assert) , Part 2 (rvalue references) , Part 3 (decltype) | Write Games | Fix Your Timestep!

### #20tanzanite7  Members

Posted 19 March 2014 - 05:51 AM

> Another thing the int32_t family of types does is guarantee two's complement arithmetic.

Really? I highly doubt that, can anyone confirm?
