# Portable Datatypes (C++)

This topic is 4135 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

I remember seeing datatypes such as int16, uint16, int32, uint32, etc. in some libs, but I can't remember where, and I was wondering if anyone has any info on this. Is it just a mere typedef? Surely not? I'm guessing you manually allocate the size; does anyone have an already-built lib for this? Also, I'm looking for an arbitrary-precision number lib (big nums); any suggestions on which I should use? Thanks

##### Share on other sites
On Linux (gcc) there is usually a stdint.h with typedefs for those, and there may be something similar on Windows.

I usually just make my own...

```cpp
typedef unsigned char      uint8_t;
typedef signed char        int8_t;
typedef unsigned short     uint16_t;
typedef signed short       int16_t;
typedef unsigned int       uint32_t;
typedef signed int         int32_t;
// etc.
```

Just have that header on any platform you compile for.

EDIT: Here's some documentation about stdint.h:

##### Share on other sites
But those aren't guaranteed to be the size you say they are... int isn't always 32 bits! I'm looking for something that's guaranteed.

##### Share on other sites
That's why I said you have to provide those 8 lines or whatever on any platform you build on. That way, the other many millions of lines of code can use uint32_t, etc., and you don't have to recode everything.

Trust me, this is how it works. Read the link I posted in the edit to my first post.

EDIT: I worded it awkwardly... every platform must have a platform specific version of those typedefs.

##### Share on other sites
Alright, thanks a lot.

##### Share on other sites
I know that in VC++ 2003 and higher, there are size specific types

__int8, __int16, __int32, __int64

Not sure if they are standardized across different compilers, though (they probably aren't).

##### Share on other sites
Boost has <boost/cstdint.hpp>, which is perhaps your best choice so far. C99 has <stdint.h>, which will become <cstdint> in C++0x, but that's a few years off.

Of course, you could always just use a good serialisation method that doesn't rely on specific type sizes.

##### Share on other sites
Doing your own typedefs is the way to go. That way, if you decide to target another platform, all you have to do is change the typedefs to suit the new platform.

##### Share on other sites
I tried using fixed-size int8, int16, etc. religiously when I was a junior programmer. In the end it did not work out well. Sooner or later you will have to use APIs, and then you can set yourself up for all sorts of type-casting headaches. I wound up just picking the dominant API and sticking to its type convention. If you really need to get byte sizes right for file formats or protocols, WORD is pretty much guaranteed to be 16 bits.

##### Share on other sites
Quote:
Original post by SoftwareGuy256: [...] WORD is pretty much guaranteed to be 16 bits.
Only if you define the type to be 16 bits just like smitty1276 said. Personally, it really does seem better to use a pre-made system like boost::integer since it's already been extensively tested on most platforms you're likely to ever deal with (excepting those yet to be released - you have to wait a short while after release to be sure those are well-tested).

##### Share on other sites
@smitty: you should not assume int is 32 bit; it is guaranteed to be at least as big as a short, and no bigger than a long, so it could end up being 16 bit. The same is true with double (float <= double <= long double, at least that's what the Microsoft site says).

##### Share on other sites
Quote:
Original post by amnesiasoft: @smitty: you should not assume int is 32 bit; it is guaranteed to be at least as big as a short, and no bigger than a long, so it could end up being 16 bit. The same is true with double (float <= double <= long double, at least that's what the Microsoft site says).
Did you read his post? He says you must have a custom file for each platform you compile for (which is exactly why using boost is easier - the work is already done).

##### Share on other sites
I've seen the Boost header and it honestly strikes me as rather hackish. Besides, it can only support the compilers that they know about, as opposed to "every conforming compiler".

I haven't started on it yet, but ISTM that it should be possible to define a generic integral type (templated on number of bytes used for storage, with all appropriate operators defined) and specializing it with primitive types for the sizes that the primitive types happen to represent:

```cpp
template <size_t n>
class Integer {
  Integer<n/2>       high;
  Integer<n/2 + n%2> low;
  // lots of operator overloads that invoke operations on the high and low
  // halves, propagating carries as needed
};

template <>
class Integer<1> {
  char c;
  // lots of operator overloads that just operate on 'c' directly
};

// (guarded, conceptually, by something like boost::enable_if<sizeof(short) == 2>)
template <>
class Integer<2> {
  short s;
  // etc.
};

// And so on for other primitive types and plausibly expected sizes thereof.
// If no primitive has a sizeof() == 2, then the generic template using two
// chars gets used.
```

##### Share on other sites
I made my own from scratch.

I started with a GenericInteger class which could represent any integer size (in 8 bit increments), sign or unsigned, big or little endian. The base implementation used a byte type (which needed to be declared elsewhere for each platform, in a header file) and then emulated the rest from there.

Obviously this is quite slow, so in the template parameters I added compiler, platform, OS, etc., and then wrote specializations for whatever compiler/OS etc. I want to support, which (for the most part) end up being simple mappings to a base type.

So I get guaranteed bitwise operations on my types, I get a default mode which, though slow, is fully portable, and I get all the speed of normal types on the systems that support it.

It goes beyond just internal calculations, too: I found the types convenient for serializing data, data marshalling, and things like that.
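The post above only describes the design in outline; a hypothetical compressed sketch of the specialization idea (all names and parameters invented here, not the poster's actual code) might look like:

```cpp
// Hypothetical sketch: the general template emulates an integer out of raw
// bytes (fully portable, but slow); specializations map to a native type on
// platforms known to have one of the right size.
template <int Bits, bool IsSigned, bool UseNative>
struct GenericInteger {
    unsigned char bytes[Bits / 8];  // byte-wise storage; endianness and
                                    // carries handled in the (omitted) operators
};

// Fast path: where int is known to be exactly 32 bits, map straight to it.
template <>
struct GenericInteger<32, true, true> {
    int value;  // operators would forward directly to the native type
};
```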

##### Share on other sites
SDL uses this neat little trick to check that the sizes really are what you think they are. If they aren't, this will generate a compile-time error that points right at one of the lines of code below, making it obvious what the error is instead of having it pop up as some obscure runtime error.
```c
/* Make sure the types really have the right sizes */
#define SDL_COMPILE_TIME_ASSERT(name, x) \
        typedef int SDL_dummy_ ## name[(x) * 2 - 1]

SDL_COMPILE_TIME_ASSERT(uint8,  sizeof(Uint8)  == 1);
SDL_COMPILE_TIME_ASSERT(sint8,  sizeof(Sint8)  == 1);
SDL_COMPILE_TIME_ASSERT(uint16, sizeof(Uint16) == 2);
SDL_COMPILE_TIME_ASSERT(sint16, sizeof(Sint16) == 2);
SDL_COMPILE_TIME_ASSERT(uint32, sizeof(Uint32) == 4);
SDL_COMPILE_TIME_ASSERT(sint32, sizeof(Sint32) == 4);
SDL_COMPILE_TIME_ASSERT(uint64, sizeof(Uint64) == 8);
SDL_COMPILE_TIME_ASSERT(sint64, sizeof(Sint64) == 8);
```

This makes it safe to do what smitty suggested. If you made a wrong assumption or change platforms you'll find out about it as soon as you hit compile.

##### Share on other sites
Quote:
 Original post by Zahlman[...]I haven't started on it yet, but ISTM that it should be possible to define a generic integral type (templated on number of bytes used for storage, with all appropriate operators defined) and specializing it with primitive types for the sizes that the primitive types happen to represent:[...]
You shouldn't base it on the size of the integer, because 'uint16' is conceptually used as "unsigned integer that can store numbers from 0 to 65535" and not "integer that is two bytes long". Even if you do want to go with bit length instead of range, you'd need to multiply sizeof by CHAR_BITS (or whatever the constant is called) since sizeof is relative to char.

##### Share on other sites
Quote:
Original post by Extrarius
Quote:
 Original post by Zahlman[...]I haven't started on it yet, but ISTM that it should be possible to define a generic integral type (templated on number of bytes used for storage, with all appropriate operators defined) and specializing it with primitive types for the sizes that the primitive types happen to represent:[...]
You shouldn't base it on the size of the integer, because 'uint16' is conceptually used as "unsigned integer that can store numbers from 0 to 65535" and not "integer that is two bytes long". Even if you do want to go with bit length instead of range, you'd need to multiply sizeof by CHAR_BITS (or whatever the constant is called) since sizeof is relative to char.

It is actually called CHAR_BIT, singular (from &lt;limits.h&gt;).

'uint16' is only "conceptually used as 'unsigned integer that can store numbers from 0 to 65535' and not 'integer that is two bytes long'" because it only gets used on hardware where CHAR_BIT == 8. The distinction is probably lost on most people anyway, but it seems to me like either interpretation is reasonable.

And yes, I know sizeof is relative to char, but I intended for the template int to indicate the number of bytes, not bits. That does suggest that the theoretical range for Integer<n> is not fixed (the intended value is 1 << (8 * n)), though, so I might want to rethink that, yes. Except, if bit-resolution is offered in the Integer size, extra code has to be added to handle overflow bits...

##### Share on other sites
Compilers that conform to the C99 standard, such as GCC 3.0 and later, come bundled with the standard library header &lt;stdint.h&gt;, which has some standard-defined integral types. The standard types have names like int8_t and uint16_t, along with macros like INT32_MIN and UINT32_MAX.

Compilers that support C++ TR1, like GCC 4.1 and later, come bundled with a &lt;cstdint&gt; header that effectively includes &lt;stdint.h&gt; and hoists names where possible into the std:: namespace.

I would strongly suggest that if you come up with your own portability headers, you use the same names defined in those standard headers. That way, when the tools you use finally become conformant to the latest standards (a moving target, to be sure), you can take advantage of having already written standard code.
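A hypothetical fallback header following that advice might look like this (the file name and the #if condition are illustrative, not exhaustive; the typedefs must be verified per target):

```cpp
// my_stdint.h -- hypothetical portability shim using the standard names, so
// code written against it keeps compiling once a real <stdint.h> is available.
#ifndef MY_STDINT_H
#define MY_STDINT_H

#if defined(HAVE_STDINT_H)      // set by the build system where available
  #include <stdint.h>
#else
  // Per-platform fallback; these widths are correct for a typical ILP32
  // compiler and must be re-checked for every new target.
  typedef signed char      int8_t;
  typedef unsigned char    uint8_t;
  typedef signed short     int16_t;
  typedef unsigned short   uint16_t;
  typedef signed int       int32_t;
  typedef unsigned int     uint32_t;
#endif

#endif // MY_STDINT_H
```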

##### Share on other sites
You should only use sized types when you absolutely need to. The two most common cases for actually needing to are when you're doing binary resource handling (like, say, loading a mesh from a packed binary format) or when you're dealing with a specific range of numbers that you know you need.

Otherwise, you should avoid sized types, as you're just going to be shooting yourself in the foot most times. The number of bugs I've seen (in code that I've written or supervised) relating to integral type overflows in the last, say, 5 years, is pretty close to 0.

That said, I am an advocate of wrapping the basic types - e.g. tInt instead of int, tUnsigned instead of unsigned, tFloat, etc. Why? Well, that way if I disagree with a compiler's default choice for a type, I can just change it in my CoreTypes.h instead of having to muck with compiler settings all over the place. This isn't such a big deal at the moment with 32-bit everything, but those who remember the 16-to-32-bit transition should be able to anticipate similar annoyances going from 32 to 64 bit.
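The CoreTypes.h idea above might be sketched like this (the file and type names come from the post; the exact contents are of course per-project):

```cpp
// CoreTypes.h -- hypothetical sketch: one central place to change what the
// "default" types mean, instead of touching compiler settings everywhere.
typedef int           tInt;
typedef unsigned int  tUnsigned;
typedef float         tFloat;

// During, say, a 32-to-64-bit transition, only these lines need editing:
// typedef long long tInt;
```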

