
# Exactly what's the point of 'int32_t', etc.

## 22 posts in this topic

I see very little point in using 'int32_t' compared to 'signed int'. The irrLicht project actually shows one valid example for it in .\irrTypes.h:

```cpp
#if defined(_MSC_VER) || ((__BORLANDC__ >= 0x530) && !defined(__STRICT_ANSI__))
typedef unsigned __int64                    u64;
#elif __GNUC__
#if __WORDSIZE == 64
typedef unsigned long int                   u64;
#else
__extension__ typedef unsigned long long    u64;
#endif
#else
typedef unsigned long long                  u64;
#endif
```


MSDN writes little about the 'fixed size' data types it supports, such as __int32:
"The types __int8, __int16, and __int32 are synonyms for the ANSI types that have the same size, and are useful for writing portable code that behaves identically across multiple platforms" -- Great, it's the "ANSI principle", I suppose.

I guess my point is: why does MSVC declare the data types prefixed with "__" to be a fixed size, yet *most* other compilers treat the globally used "int" (etc.) as a fixed size? Is there any scenario where I should dread a non-fixed-size integer?

Edit: I am aware that 'int' on an x86 build is 32 bits, yet should be 64 bits on an x64 build.

Edited by MarlboroKing
1

##### Share on other sites

Alright, I see. Say you're expecting your plain int to have at least 32 bits of storage, but the platform it's deployed on gives it only 16 bits.

So if you perform a bit shift to take the upper 16 bits out of the supposed 32 bits, bam.

Thanks!

0

##### Share on other sites

It is VERY important when you take portability into account.

A very simple and common example in games would be a binary protocol. If you create a struct:

```cpp
struct pos_update_package_st {
    long id;
    float x, y, z;
};
```


If the code runs on a 32-bit machine, the memory for x will start at the 5th byte, while on a 64-bit machine it would start at the 9th byte. In this example you would surely run into a bug and very likely into a segfault.

0

##### Share on other sites

> If the code runs on a 32-bit machine, the memory for x will start at the 5th byte, while on a 64-bit machine it would start at the 9th byte. In this example you would surely run into a bug and very likely into a segfault.

This ain't necessarily so. In MSVC all integer types have the same size on 32-bit and 64-bit; i.e., long is also 32 bits.

2

##### Share on other sites

> If the code runs on a 32-bit machine, the memory for x will start at the 5th byte, while on a 64-bit machine it would start at the 9th byte. In this example you would surely run into a bug and very likely into a segfault.

> This ain't necessarily so. In MSVC all integer types have the same size on 32-bit and 64-bit; i.e., long is also 32 bits.

Most Windows compilers (even the Windows version of gcc) use 32 bits for long (at least on IA64 and x86-64).

On UNIX, Linux and OS X, however, long is usually 64 bits on 64-bit versions of the OS.

If you are writing a multiplayer game you might want to support Linux servers and OS X clients and then it really helps if you know exactly what it is you are sending over the network.

Edited by SimonForsman
0

##### Share on other sites

Just to elaborate a little further. The thing with the standard types in C is that... well, they are not really well defined. According to the C90 standard, and I quote:

> A ''plain'' int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>).

Of course, what constitutes a "natural size" is not defined anywhere. The common understanding is that it should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 bits on 64-bit systems, and so on. Of course, that is not always the case. For example, in Visual Studio int is always 32 bits regardless of the target, and that's it. I bet they did this for compatibility reasons (and because, let's face it, Microsoft is not big on standards).

The bottom line is that in reality you should never assume what the size of an int is (not without checking, at least).

However, having fixed-size types is a pretty common requirement. You need it if you are going to store files that can be read by other applications (or the same application in another platform). You need it to send data over a network. You need it to use shared memory, etc.

Microsoft's response to this need was to create the special __int types. But of course this is Microsoft-only. The people on the C committee noticed this problem too and, in the C99 standard, they created the stdint.h header, with improved definitions for integral types. You have a good overview of the newly defined types here: http://www.cplusplus.com/reference/cstdint/ .

This header (renamed cstdint, because that's the naming convention for accessing C standard headers from C++ programs) was incorporated into C++11. Although, to the best of my knowledge, compilers supporting C99 already had it, and of course you could include it in your C++ programs without problem.

I recommend following that last link because I think that correctly using the new types is important... (although it makes you really think about what you want, whether it is maximum speed or minimum memory footprint).
Edited by Javier Meseguer de Paz
1

##### Share on other sites

As others have mentioned, int32_t is a standard way to guarantee 32-bit integers. Another thing the int32_t family of types does is guarantee two's complement arithmetic.

0

##### Share on other sites

> Of course, what constitutes a "natural size" is not defined anywhere. The common understanding is that it should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 bits on 64-bit systems, and so on.

Wouldn't intptr_t and uintptr_t be considered the "natural size", since they are supposed to be able to hold the value of a void pointer?

0

##### Share on other sites

> Of course, what constitutes a "natural size" is not defined anywhere. The common understanding is that it should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 bits on 64-bit systems, and so on.

> Wouldn't intptr_t and uintptr_t be considered the "natural size", since they are supposed to be able to hold the value of a void pointer?

I am not sure I understand your question... the term "natural size" was used in the standard to describe the intended size of an int. No definition of that term was provided, so the size of an int was essentially undefined.

If we define "natural size" as the size of a word in the architecture, then yes, the size of intptr_t and uintptr_t should be the natural size. But that is only if we use that definition for "natural size". Since the standard provides none, we could use another one. In fact, Microsoft Visual Studio uses another: "natural size" (the size of int) is 32 bits no matter what. Yet intptr_t and uintptr_t must be 64 bits on 64-bit architectures, so by their definition they wouldn't be the natural size... It's just a matter of definitions, which is open to interpretation since the standard is not clear.

I don't know if I am giving a clear answer...

PS: Regardless, those types were introduced in C99, 9 years after "natural size" was used for the first time to describe int's size.

Edited by Javier Meseguer de Paz
1

##### Share on other sites

I wasn't going by Microsoft's definition, I was going by what you called the "common understanding" and the C++ equivalents provided.

> Of course, what constitutes a "natural size" is not defined anywhere. The common understanding is that it should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 bits on 64-bit systems, and so on.

intptr_t and uintptr_t should fall under this definition.

0

##### Share on other sites

> I wasn't going by Microsoft's definition, I was going by what you called the "common understanding" and the C++ equivalents provided.
>
> > Of course, what constitutes a "natural size" is not defined anywhere. The common understanding is that it should be 16 bits on 16-bit systems, 32 bits on 32-bit systems, 64 bits on 64-bit systems, and so on.
>
> intptr_t and uintptr_t should fall under this definition.

Oh, ok, then. Yes, they do.

0

##### Share on other sites

I think the standards committee finally realized it was foolish not to make all types fixed size in the first place, because most libraries included Rube Goldberg contraptions using the preprocessor to derive, from a million compiler- and platform-specific defines, their own incompatible versions of fixed-size types that end up not being the same size sometimes. Though in its extreme pursuit of compatibility, the committee couldn't bring itself to just say short/int/long/long long are now always 16/32/64/128 bits; it had to add a few dozen new types instead. And the most useful fixed-size versions are not guaranteed to be there, only the fast and least versions, which are near useless, so people will still not use them and can enjoy using 100 types per program for a few decades longer.

1

##### Share on other sites
> And that a short is 2x a char.

If char is 8 bits then the only guarantee you have is sizeof(short) >= 2 * sizeof(char). If char is 16 bits then it's just sizeof(short) >= 1 * sizeof(char). It is possible for an implementation to have sizeof(char), sizeof(short), and sizeof(long) all be 1; char, short, and long would each need 32 bits of storage. This Stack Overflow answer claims Cray computers used 32 bits for char, so it's not unheard of.

0

##### Share on other sites

> Another thing the int32_t family of types does is guarantee two's complement arithmetic.

Really? I highly doubt that, can anyone confirm?
0

##### Share on other sites
The C99 Standard says in section 6.2.6.2 paragraph 2:

> For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways:
>
> - the corresponding value with sign bit 0 is negated (sign and magnitude);
> - the sign bit has the value -(2^N) (two's complement);
> - the sign bit has the value -(2^N - 1) (one's complement).
>
> Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two), or with sign bit and all value bits 1 (for one's complement), is a trap representation or a normal value. In the case of sign and magnitude and one's complement, if this representation is a normal value it is called a negative zero.
0

##### Share on other sites

No, really: the standard specifies they are two's complement, but the problem is that the few types this applies to are optional:

> 7.18.1.1 Exact-width integer types
>
> 1 The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
>
> 2 The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.
>
> 3 These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, it shall define the corresponding typedef names.

It would be nice if they had just specified that everything has to be two's complement and maybe even little endian, but they are still supporting one's complement, sign-magnitude and big endian, and keep a huge number of things undefined. Hopefully someday computers running one's complement, sign-magnitude or big endian need not be considered anymore.

2

##### Share on other sites

They don't have to be defined, because if two's complement isn't native, you don't want the program to emulate it -- that would be very slow. But any two's complement machine should have them defined.

1

##### Share on other sites

I recently debugged really old PRNG code that assumed 'int' was 16 bits. Problem was only in the seeding, but that code had been used for years...

I use the _t types now, but I still have a lot of bit twiddling that will fail if a type declared 'int' isn't 32 bits. It's definitely an improvement to have these fixed-size types.

0
