Portable Datatypes (C++)

Azh321    569
I remember seeing datatypes such as int16, uint16, int32, uint32, etc. in some libs, but I can't remember where, and I was wondering if anyone has any info on this. Is it just a mere typedef? Surely not. I'm guessing you manually allocate the size; does anyone have an already-built lib for this? Also, I'm looking for an arbitrary-precision number lib (bignums); any suggestions on which one I should use? Thanks

smitty1276    560
On Linux (gcc) there is usually a stdint.h with typedefs for those, and there may be something similar on Windows.

I usually just make my own...


typedef unsigned char uint8_t;
typedef signed char int8_t;  // plain char's signedness is implementation-defined

typedef unsigned short uint16_t;
typedef signed short int16_t;

typedef unsigned int uint32_t;
typedef signed int int32_t;

// etc




Just have that header on any platform you compile for.

EDIT: Here's some documentation about stdint.h:

Azh321    569
But those aren't guaranteed to be the size you say they are... int isn't always 32 bits! I'm looking for something that's guaranteed.

smitty1276    560
That's why I said you have to provide those 8 lines or whatever on any platform you build on. That way, the many millions of other lines of code can use uint32_t, etc., and you don't have to recode everything.

Trust me, this is how it works. Read the link I posted in the edit to my first post.

EDIT: I worded that awkwardly... every platform must have a platform-specific version of those typedefs.

me22    212
Boost has <boost/cstdint.hpp>, which is perhaps your best choice so far. C99 has <stdint.h>, which will become <cstdint> in C++0x, but that's a few years off.

Of course, you could always just use a good serialisation method that doesn't rely on specific type sizes.
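A size-independent serialisation routine along those lines can be sketched like this (illustrative only; write_u32/read_u32 are made-up names, and the byte order is fixed to little-endian by the code itself rather than by the platform):

```cpp
#include <cstddef>
#include <vector>

// Write the low 32 bits of 'value' as four little-endian bytes,
// independent of how wide 'unsigned long' happens to be here.
void write_u32(std::vector<unsigned char>& out, unsigned long value) {
    for (int i = 0; i < 4; ++i)
        out.push_back(static_cast<unsigned char>((value >> (8 * i)) & 0xFFul));
}

// Reassemble the four bytes back into a value, in the same order.
unsigned long read_u32(const std::vector<unsigned char>& in, std::size_t pos) {
    unsigned long value = 0;
    for (int i = 0; i < 4; ++i)
        value |= static_cast<unsigned long>(in[pos + i]) << (8 * i);
    return value;
}
```

Because the byte layout is spelled out explicitly, the on-disk format stays the same no matter what sizeof(int) is on the machine doing the writing.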

jkleinecke    251
Doing your own typedefs is the way to go. That way, if you decide to target another platform, all you have to do is change the typedefs to suit the new platform.

SoftwareGuy256    100
I tried using fixed-size int8, int16, etc. religiously when I was a junior programmer. In the end it did not work out well. Sooner or later you will have to use APIs, and then you can set yourself up for all sorts of type-casting headaches. I wound up just picking the dominant API and sticking to its type convention. If you really need to get byte sizes right for file formats or protocols, WORD is pretty much guaranteed to be 16 bits.

Extrarius    1412
Quote:
Original post by SoftwareGuy256
[...]WORD is pretty much guaranteed to be 16 bits.
Only if you define the type to be 16 bits just like smitty1276 said. Personally, it really does seem better to use a pre-made system like boost::integer since it's already been extensively tested on most platforms you're likely to ever deal with (excepting those yet to be released - you have to wait a short while after release to be sure those are well-tested).

amnesiasoft    161
@smitty: you should not assume int is 32 bits; it is guaranteed to be at least as big as a short and no bigger than a long, so it could end up being 16 bits. The same is true of double (float <= double <= long double, at least that's what the Microsoft site says).

Extrarius    1412
Quote:
Original post by amnesiasoft
@smitty: you should not assume int is 32 bits; it is guaranteed to be at least as big as a short and no bigger than a long, so it could end up being 16 bits. The same is true of double (float <= double <= long double, at least that's what the Microsoft site says).
Did you read his post? He says you must have a custom file for each platform you compile for (which is exactly why using boost is easier - the work is already done).

Zahlman    1682
I've seen the Boost header and it honestly strikes me as rather hackish. Besides, it can only support the compilers that they know about, as opposed to "every conforming compiler".

I haven't started on it yet, but ISTM that it should be possible to define a generic integral type (templated on number of bytes used for storage, with all appropriate operators defined) and specializing it with primitive types for the sizes that the primitive types happen to represent:


template <size_t n>
class Integer {
    Integer<n/2> high;
    Integer<n/2 + n%2> low;
    // lots of operator overloads that invoke operations on the high and low halves,
    // propagating carries as needed
};

template<>
class Integer<1> {
    char c;
    // lots of operator overloads that just operate on 'c' directly
};

// Used only when sizeof(short) == 2 -- in real code this would have to be
// selected with the preprocessor or boost::enable_if machinery, since a
// bare explicit specialization can't be made conditional like this:
template<>
class Integer<2> {
    short s;
    // etc.
};

// And so on for other primitive types and plausibly expected sizes thereof.
// If no primitive has a sizeof() == 2, then the generic template using two
// chars gets used.

Ryan_001    3475
I made my own from scratch.

I started with a GenericInteger class which could represent any integer size (in 8-bit increments), signed or unsigned, big- or little-endian. The base implementation used a byte type (which needed to be declared elsewhere, in a per-platform header file) and then emulated the rest from there.

Obviously this is quite slow, so in the template parameters I added compiler, platform, OS, etc., and then wrote specializations for whatever compiler/OS etc. I want to support, which (for the most part) end up being simple mappings to a base type.

So I get guaranteed bitwise operations on my types, I get a default mode which, though slow, is fully portable, and I get all the speed of normal types on the systems that support them.

It goes beyond just internal calculations, too: I also found the types convenient for serializing data, data marshalling, and the like.
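The "emulated fallback plus native specialization" idea can be sketched with a storage template (a minimal illustration; UIntStorage is a hypothetical name, the arithmetic and carry handling a real GenericInteger would need are omitted, and the specialization assumes unsigned int is 4 bytes on the target platform):

```cpp
#include <cstddef>

// Generic fallback: an N-byte unsigned integer emulated as raw bytes.
// Fully portable, but every operation must be done byte by byte.
template <std::size_t Bytes>
struct UIntStorage {
    unsigned char bytes[Bytes];
};

// Platform-specific specialization: map straight onto a native type
// where the size is known to match (assumed here: 4-byte unsigned int),
// so operations compile down to ordinary machine arithmetic.
template <>
struct UIntStorage<4> {
    unsigned int value;
};
```

Platforms without a matching native type simply fall through to the byte-array version, which is the "slow but fully portable" default mode described above.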

Grain    500
SDL uses this neat little trick to check that the sizes really are what you think they are. If they aren't, this generates a compile-time error pointing right at one of the lines below, making the mistake obvious, instead of it popping up later as some obscure runtime error.
/* Make sure the types really have the right sizes */
#define SDL_COMPILE_TIME_ASSERT(name, x) typedef int SDL_dummy_ ## name[(x) * 2 - 1]

SDL_COMPILE_TIME_ASSERT(uint8, sizeof(Uint8) == 1);
SDL_COMPILE_TIME_ASSERT(sint8, sizeof(Sint8) == 1);
SDL_COMPILE_TIME_ASSERT(uint16, sizeof(Uint16) == 2);
SDL_COMPILE_TIME_ASSERT(sint16, sizeof(Sint16) == 2);
SDL_COMPILE_TIME_ASSERT(uint32, sizeof(Uint32) == 4);
SDL_COMPILE_TIME_ASSERT(sint32, sizeof(Sint32) == 4);
SDL_COMPILE_TIME_ASSERT(uint64, sizeof(Uint64) == 8);
SDL_COMPILE_TIME_ASSERT(sint64, sizeof(Sint64) == 8);




This makes it safe to do what smitty suggested. If you made a wrong assumption or change platforms you'll find out about it as soon as you hit compile.

Extrarius    1412
Quote:
Original post by Zahlman
[...]I haven't started on it yet, but ISTM that it should be possible to define a generic integral type (templated on number of bytes used for storage, with all appropriate operators defined) and specializing it with primitive types for the sizes that the primitive types happen to represent:
[...]
You shouldn't base it on the size of the integer, because 'uint16' is conceptually used as "unsigned integer that can store numbers from 0 to 65535" and not "integer that is two bytes long". Even if you do want to go with bit length instead of range, you'd need to multiply sizeof by CHAR_BIT (from <climits>), since sizeof is relative to char.
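As a small illustration of that last point, the bit width of a type can be derived from sizeof like this (BitCount is a made-up name):

```cpp
#include <climits>

// sizeof reports size in units of char, so the bit width of a type is
// sizeof(T) * CHAR_BIT, where CHAR_BIT comes from <climits> and is the
// number of bits in a char (8 on mainstream hardware, but not required).
template <typename T>
struct BitCount {
    enum { value = sizeof(T) * CHAR_BIT };
};
```

On a platform where CHAR_BIT is not 8, BitCount<T>::value and the "N bytes" reading of a type diverge, which is exactly the range-versus-size distinction being argued here.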

Zahlman    1682
Quote:
Original post by Extrarius
Quote:
Original post by Zahlman
[...]I haven't started on it yet, but ISTM that it should be possible to define a generic integral type (templated on number of bytes used for storage, with all appropriate operators defined) and specializing it with primitive types for the sizes that the primitive types happen to represent:
[...]
You shouldn't base it on the size of the integer, because 'uint16' is conceptually used as "unsigned integer that can store numbers from 0 to 65535" and not "integer that is two bytes long". Even if you do want to go with bit length instead of range, you'd need to multiply sizeof by CHAR_BIT (from <climits>), since sizeof is relative to char.


It is called CHAR_BIT, for the record.

'uint16' is only conceptually used as "unsigned integer that can store numbers from 0 to 65535" rather than "integer that is two bytes long" because it only gets used on hardware where CHAR_BIT == 8. The distinction is probably lost on most people anyway, but it seems to me that either interpretation is reasonable.

And yes, I know sizeof is relative to char, but I intended the template parameter to indicate the number of bytes, not bits. That does suggest that the theoretical range for Integer<n> is not fixed (the intended number of representable values is 1 << (8 * n)), though, so I might want to rethink that, yes. Except, if bit-level resolution is offered in the Integer size, extra code has to be added to handle overflow bits...

Bregma    9201
Compilers that conform to the C99 standard, such as GCC 3.0 and later, come bundled with the standard library header <stdint.h>, which provides standard-defined integral types. The standard types have names like int8_t and uint16_t, and there are limit macros like INT32_MIN and UINT32_MAX.

Compilers that support C++ TR1, like GCC 4.1 and later, come bundled with the standard library header <cstdint>, which effectively includes <stdint.h> and hoists the names, where possible, into the std:: namespace.

I would strongly suggest that if you come up with your own portability headers, you use the same names defined in those standard headers. That way, when the tools you use finally become conformant to the latest standards (a moving target, to be sure), you will already have written standard code.
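Using the standard names directly might look like this (a minimal sketch; it assumes the platform actually provides the exact-width types, which the standard makes optional):

```cpp
#include <stdint.h>  // becomes <cstdint> once TR1/C++0x support arrives

// intN_t/uintN_t are exactly N bits wide where the platform provides
// them, and the same header supplies the matching limit macros.
uint16_t checksum_seed = 0xFFFF;     // exactly 16 bits, guaranteed
int32_t  lowest_value  = INT32_MIN;  // limit macro from the header
```

Code written against these names needs no changes when a home-grown portability header is later replaced by the real <stdint.h> or <cstdint>.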

You should only use sized types when you absolutely need to. The two most common cases for actually needing to are when you're doing binary resource handling (like, say, loading a mesh from a packed binary format) or when you're dealing with a specific range of numbers that you know you need.

Otherwise, you should avoid sized types, as you're just going to be shooting yourself in the foot most times. The number of bugs I've seen (in code that I've written or supervised) relating to integral type overflows in the last, say, 5 years, is pretty close to 0.

That said, I am an advocate of wrapping the basic types: tInt instead of int, tUnsigned instead of unsigned, tFloat, etc. Why? Well, that way if I disagree with a compiler's default choice for a type, I can just change it in my CoreTypes.h instead of having to muck with compiler settings all over the place. This isn't such a big deal at the moment, with 32-bit everything, but those who remember the 16-to-32-bit transition should be able to anticipate similar annoyances going from 32 to 64 bits.
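A header along those lines could be as simple as this sketch (CoreTypes.h and the tXxx names follow the post; the particular underlying types chosen are assumptions that each project would set for itself):

```cpp
// CoreTypes.h (name taken from the post above): one place to retarget
// the basic types if a compiler's default widths ever disagree with
// what the project wants -- change these typedefs, recompile, done.
typedef int          tInt;
typedef unsigned int tUnsigned;
typedef float        tFloat;
```

Unlike the exact-width uint32_t family, these aliases promise nothing about size; they exist purely so the choice of underlying type lives in one file.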
