typedef a primitive type

Started by
18 comments, last by rnlf_in_space 10 years, 10 months ago

I often see typedefs on a primitive type.

For example


    typedef signed   char int8;
    typedef unsigned char uint8;

    typedef signed   short int16;
    typedef unsigned short uint16;

What's the point of this?


In C++, the size (in bits) of a data type is not guaranteed. Sometimes though, you need to ensure a specific size for your data types (this happens a lot when working with bit streams). These types might be typedefed so that on whatever compiler/platform the code is built on, when you need an 8-bit signed integer, you really do get an 8-bit signed integer.

Say, for example, you need a 16-bit signed integer. On one system short might be 16 bits, while on another it might be 32. A typedef lets you simply write int16 in your code; if you move between systems where short has different sizes, you change the typedef once to the appropriate underlying type and your code keeps working, rather than having to hunt down every place in the code that needed a 16-bit integer and change each one.

Sometimes it's not so you can switch between one system and another (like in the previous example). Sometimes it's to explicitly state that you require a data type with a very specific size, and if the system does not support that size, then your code does not support that system. Saying int8 makes it very clear you require an 8 bit integer.

Of course, the typedef itself doesn't enforce the requirement (it doesn't care how big your data types are or what they're named), but using an appropriately named type in your code makes it very clear what you require. You can then, if you want, force the compilation to fail if the requirements are not met (for example, you can fail the compilation if int8 is not 8 bits, or if int16 is not 16 bits), in addition to making the requirements in the code self-documenting (because int8 "documents" the code as requiring an 8-bit signed integer, whereas just saying signed char does not "document" the code as requiring an 8-bit signed integer).

[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]
Just as an addition, I always prefer using the C99 header stdint.h (also available as cstdint in C++) and the types uint8_t and similar and not handling this mess myself. In one company I worked for, this was not done and during a switch to 64 bit I had to update a whole bunch of header files with lots of #ifdefs to take care of this. Had the project used stdint, it would have saved me a whole day of debugging to find the problem and fix it.

I agree that stdint.h should be used, but before that was available I used to write a little program that would check the size of various integer types (using sizeof) and produce the text of a header file with those typedefs. The makefile knew to compile and run this program to generate the header file. I used that without problems for years.


#include <iostream>
#include <string>
#include <climits>
#include <cstdlib>

std::string find_type(int size) {
  // CHAR_BIT * sizeof(T) gives the width of T in bits.
  if (CHAR_BIT * sizeof(char) == size) return "char";
  if (CHAR_BIT * sizeof(short) == size) return "short";
  if (CHAR_BIT * sizeof(int) == size) return "int";
  if (CHAR_BIT * sizeof(long) == size) return "long";
  if (CHAR_BIT * sizeof(long long) == size) return "long long"; // standard since C99/C++11
  std::cerr << "ERROR: I couldn't find a " << size << "-bit type!\n";
  std::exit(1); // never returns, so no return statement is needed below
}

void define_signed_and_unsigned(int size) {
  std::cout << "typedef signed " << find_type(size) << " int" << size << ";\n";
  std::cout << "typedef unsigned " << find_type(size) << " uint" << size << ";\n";
}

int main() {
  define_signed_and_unsigned(8);
  define_signed_and_unsigned(16);
  define_signed_and_unsigned(32);
  define_signed_and_unsigned(64);
}

In this case, others have already explained the specific use here, but more generally typedefs of primitive (and non-primitive) types can be used to provide additional context about their use.

Primitive types convey only two things: their size (which is platform-specific) and their format (which is also platform-specific, though less obviously so). They don't convey any purpose or intent about what they hold. For example, 'typedef float velocity;' and 'typedef float acceleration;' give you added information about what instances of these types (ought to) hold, information that would otherwise be carried only by the variable name. Now, perhaps unfortunately, typedefs aren't strong -- that is, they don't create new distinct types, they just let you call an existing type by another name -- so you can still assign an 'acceleration' value to a 'velocity' variable, or to a plain old float. But the point is that it creates a logical distinction between them, even if it's not enforced by the compiler.

Another practical advantage is that if you decide that float is insufficient for your 'velocity' or 'acceleration' types, then you can easily redefine them to be of type 'double' in only a single place, rather than hunting through every single float in your code to determine whether it holds a velocity or acceleration. In this way it can save you time and avoid subtle bugs that arise from missing one of the things you should have changed.

throw table_exception("(╯°□°)╯︵ ┻━┻");

Thank you all. I now understand it.

So a typedef header cannot be cross platform. Am I right?


What is a “typedef header”? A header full of typedefs?

In any case, yes, they can be cross-platform. That is the point.

L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Primitive types are *not* cross-platform in terms of the number of bits they contain or their representation -- IIRC, paraphrasing the standard, it says only something along the lines of "a 'char' is the smallest addressable unit of storage; a 'short' is at least as big as a 'char'; an 'int' is at least as big as a 'short'; ..." The standard doesn't say that an 'int' is exactly 32 bits (although it is on many platforms), and I don't even believe it says that signed numbers must be represented in two's complement form.

The idea of a typedef header like stdint.h is to define a type (the typedef) which *is* the same number of bits across many platforms, by changing the underlying primitive types appropriately on a platform-specific basis. In other words, the idea is that uint32_t is always a 32bit unsigned integer on any platform, but the underlying primitive type might be different on a 32bit x86 machine running Windows than it is on a 64bit MIPS machine running Unix.


You make these headers cross-platform like this:

#if defined(PLATFORM_ONE)
typedef foo int32;   /* foo: whatever 32-bit type PLATFORM_ONE provides */
#elif defined(PLATFORM_TWO)
typedef bar int32;
#else
#error "this platform is not supported"
#endif

Oh I see, I didn't think of conditional compilation.


This topic is closed to new replies.
