Typedef


tstrimp    1798
I have seen a few cases where game engines typedef every integer, floating-point, and bool type. Is there a reason for this? Is it just for consistency?
     Hope is the first step on the road to disappointment

Guest Anonymous Poster   
It can be useful. Suppose your engine is written for DX and you want floats. Now suppose you're going to compile for some API that uses doubles. Do you really want to go through every line of every file and make changes? It's easier to just update the typedef and be on your way.

Also, people have been bitten by the "int was 16 bit but it's now 32 bit" thing. What's a 64-bit number going to be on a 64-bit machine? Well, it'll be an int, by definition of int... but will a long be 64 bit or 32 bit? If they want 32-bit data, they know there will be SOME way of specifying it on new hardware, but no idea how yet. The typedef can be updated when compilers for new 64-bit architectures come out.
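
A minimal sketch of that idea (real_t, int16, and friends are hypothetical project typedefs, not taken from any particular engine):

// types.h (hypothetical): the one place to edit when the target API or platform changes.
typedef float real_t;          // switch to double for an API that expects doubles

typedef short          int16;  // adjust these if a given compiler's built-in widths differ
typedef int            int32;
typedef unsigned int   uint32;

// Engine code then uses real_t / int32 everywhere instead of float / int:
real_t dot(const real_t* a, const real_t* b, int32 n)
{
    real_t sum = 0;
    for (int32 i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}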

Shannon Barber    1681
Abstraction of the type. It's the same reason to use enums instead of ints whenever it's appropriate.
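
For instance (a made-up illustration of that abstraction, not from any particular engine):

// The enum names the abstraction; which integer representation backs it is a detail.
enum BlendMode { BLEND_OPAQUE, BLEND_ALPHA, BLEND_ADDITIVE };

void setBlendMode(BlendMode mode);   // clearer and safer than setBlendMode(int mode)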

- Magmai Kai Holmlor

"No, his mind is not for rent to any god nor government" - Rush, Tom Sawyer


GoofProg    127
An integer (int) type will change to 64 bits as 64-bit processors roll out. An (int) is a word, which is defined as the number of bits the data bus can handle.

It would have to be some kind of exotic typedef for a 32-bit type, since short is taken to be 16 bits.

Guest Anonymous Poster
It won't be an exotic typedef. 64-bit compilers will provide some method of specifying 32-bit values. Just as the 'long long' extension gives 64 bits on 32-bit machines, they'll add some new type that means 32 bits.

Maybe they'll even create a whole new set of types, learning from past mistakes, that DO have a definite size associated with them: int8, int16, int32, int64, int128, and so on, for example.

Knowing the industry though, they'll just patch 64 bit to allow 32 bit and think, "We won't have to worry about what happens when we hit 128 bit for a long time, why bother to define things now."

These transitional periods are often a real pain, as each compiler does its own thing for a while.
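
For example, before a standard 64-bit type existed, code often had to pick a spelling per compiler (a sketch; the int64/uint64 names are illustrative):

#if defined(_MSC_VER)
typedef __int64            int64;   // MSVC's vendor extension for 64-bit integers
typedef unsigned __int64   uint64;
#else
typedef long long          int64;   // GCC and most others, via the 'long long' extension
typedef unsigned long long uint64;
#endif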

Sneftel    1788
quote:
Original post by GoofProg
An integer (int) type will change to 64 bits as 64-bit processors roll out. An (int) is a word, which is defined as the number of bits the data bus can handle.

It would have to be some kind of exotic typedef for a 32-bit type, since short is taken to be 16 bits.


Correction: "int" is whatever the individual compiler developer decides to make it. The C/C++ standards make no guarantees about its size.


How appropriate. You fight like a cow.

Guest Anonymous Poster   

Do a Google search for the "LP64" model. It says that char is 8 bits, short is 16 bits, int is 32 bits, and long and pointers are 64 bits. It's the sanest way of implementing a 64-bit compiler; everyone should be using it.

Corollary: if you want to cast between pointers and integral types, better cast to long, not to int. Or use intptr_t (added in C99), which is defined to be able to hold a pointer value.

Second corollary: if you're doing file or network I/O, or defining specific interfaces, typedef-ing specific sizes will save your butt ten times over.
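
A small sketch of that second corollary (u32/u16 are hypothetical project typedefs, e.g. from a shared "types.h"):

// With explicitly sized typedefs, the on-disk/on-wire layout can't silently change
// when the code is recompiled on a platform where 'int' or 'long' are wider.
typedef unsigned int   u32;   // adjust per platform/compiler
typedef unsigned short u16;

struct PacketHeader
{
    u32 magic;     // always 4 bytes in the stream
    u16 version;   // always 2 bytes
    u16 length;
};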

mcfly    151
quote:
Original post by Sneftel
Correction: "int" is whatever the individual compiler developer decides to make it. The C/C++ standards make no guarantees about its size.



They guarantee it will be at least 16 bits in size.



daerid    354
One of the proposed features for C++0x is fixed-size ints, which will take a lot of the stress out of porting to different platforms.

Personally, I think that, along with the standard names we're all used to, there should be standard keywords, prefixed with int, that mean the same thing:

int8 for char
uint8 for unsigned char
int16 for short
uint16 for unsigned short
int32 for int
uint32 for unsigned int
int64 for long
uint64 for unsigned long

sizeof(void*) == sizeof(int64)

That would make portable network programming a lot easier, IMO. I don't know if this is the system being proposed for the new C++ standard, but I think it makes sense. And as processors march towards 128 bits (and you _know_ they will, probably faster than we think), a new type of int should be introduced, possibly int128 / wide and uint128 / uwide.

civguy    308
quote:
Original post by mcfly
They guarantee it will be at least 16 bits in size.
Really? To my knowledge, the standard places no such limitations. On some system, char, short int and int could all be 8 bit in size, perhaps even less.

(the minimum size for char may be limited by some other rules, so it's probably not possible to have a C++ implementation with 1 bit chars)

daerid    354
The standard guarantees this:

short <= int <= long

and

short < long

(As far as I know)

Could somebody find a quote from the actual standard on this?

civguy    308
quote:
Original post by daerid
Could somebody find a quote from the actual standard on this?
Well, it's not hidden in any way. See 3.9.1. I couldn't spot a word about sizeof(short) < sizeof(long) there.

Sneftel    1788
Yeah, char is defined as being at _least_ 8 bits. (Which makes me wonder, then, why there had to be a wchar_t... perhaps extending char would simply break too many non-compliant apps.)


How appropriate. You fight like a cow.

mcfly    151
quote:
Original post by civguy
Really? To my knowledge, the standard places no such limitations. On some system, char, short int and int could all be 8 bit in size, perhaps even less.

(the minimum size for char may be limited by some other rules, so it's probably not possible to have a C++ implementation with 1 bit chars)


Hmm... come to think of it, the standard doesn't speak in terms of bits. 5.2.4.2.1 in C89 defines the sizes of the various integers (note, I haven't read it myself, just people's interpretation of it).

Here's how it works:

char <= short <= int <= long

sizeof(char) == 1
sizeof(short) >= 1
sizeof(int) >= 1
sizeof(long) >= 1

...and an implementation's defined values for these limits must be equal to or greater in magnitude than the following:

CHAR_BIT 8
SHRT_MIN -32767
SHRT_MAX +32767
INT_MIN -32767
INT_MAX +32767
LONG_MIN -2147483647
LONG_MAX +2147483647
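
For example, the values a given compiler actually uses can be checked directly against those minimums (a small sketch using <climits>):

#include <climits>
#include <iostream>

int main()
{
    // Every conforming implementation must report limits at least as large in
    // magnitude as the minimums above (so int is effectively at least 16 bits).
    std::cout << "CHAR_BIT = " << CHAR_BIT << '\n'
              << "SHRT_MAX = " << SHRT_MAX << '\n'
              << "INT_MAX  = " << INT_MAX  << '\n'
              << "LONG_MAX = " << LONG_MAX << '\n';
    return 0;
}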


spock    217
All versions of ISO C specify minimum sizes for integer types through minimum magnitudes for their numerical limits (C99 §5.2.4.2.1). For example, INT_MIN must be -32767 or lower and INT_MAX must be at least 32767, so int will have to be at least 16 bits. I'm not sure whether ISO C++ includes this aspect of the C standard, but I think it does.

Edit: McFly beat me to it



cozman    583
This has already been added to C99 via stdint.h. The header is actually quite long, but here is part of the version that comes with GCC.


    
/* 7.18.1.1 Exact-width integer types */
typedef signed char int8_t;
typedef unsigned char uint8_t;
typedef short int16_t;
typedef unsigned short uint16_t;
typedef int int32_t;
typedef unsigned uint32_t;
typedef long long int64_t;
typedef unsigned long long uint64_t;


Of course, each compiler can define this file however it wants, giving people the flexibility of using [u]int##_t types in portable code.
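
A brief usage sketch (assuming the compiler also exposes <stdint.h> to C++ code, as GCC does):

#include <stdint.h>
#include <iostream>

int main()
{
    uint32_t crc   = 0xDEADBEEFu;        // 32 bits regardless of what 'int' or 'long' are
    int64_t  ticks = 1234567890123LL;    // 64 bits on 32-bit and 64-bit compilers alike

    std::cout << sizeof(crc) << ' ' << sizeof(ticks) << '\n';   // prints "4 8"
    return 0;
}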


spock    217
int16_t et al are not really needed in C++ because it's fairly easy to do the same thing yourself using templates. But since they're already in C99 I suppose they might appear in C++0x.

Guest Anonymous Poster   
quote:
Original post by spock
int16_t et al are not really needed in C++ because it's fairly easy to do the same thing yourself using templates.
What do you mean? I think those typedefs would be very handy in C++ too, and I have no idea how templates would help remove the need for them.

mcfly    151
quote:
Original post by Anonymous Poster
What do you mean? I think those typedefs would be very handy in C++ too, and I have no idea how templates would help remove the need for them.



Check out my post in this thread for an example.

spock    217
Yes, they would be handy and I do hope they will get added to the standard. But as long as you only need typedefs for built-in integer types (i.e. no __int64), they're not strictly necessary in C++.

Here's the basic idea:

#include <climits>   // CHAR_BIT
#include <cstddef>   // size_t

template <typename T, std::size_t N> struct integer;
template <> struct integer<signed,CHAR_BIT*sizeof(signed char)>  { typedef signed char  type; };
template <> struct integer<signed,CHAR_BIT*sizeof(signed short)> { typedef signed short type; };
template <> struct integer<signed,CHAR_BIT*sizeof(signed int)>   { typedef signed int   type; };
template <> struct integer<signed,CHAR_BIT*sizeof(signed long)>  { typedef signed long  type; };

/* ... */

template <> struct integer<unsigned,CHAR_BIT*sizeof(unsigned long)> { typedef unsigned long type; };

typedef integer<signed,16>::type int16_t;


That won't compile as is because there's no guarantee that sizeof(int) != sizeof(short) or that a 16 bit integer type exists. The first problem can be solved using the defines in climits and the second through a simple metaprogram. A more elegant solution should be possible using typelists and a slightly less simple metaprogram.


Shannon Barber    1681

// Pick the smallest unsigned type that can index a buffer of a given size:
static const unsigned int BufferSize = sizeof(buffer);
typedef typename AtLeast< Log2<BufferSize>::Upper >::Unsigned IndexInt;

// char buffer[1..255];           -> IndexInt ~ char
// char buffer[256..65535];       -> IndexInt ~ short
// char buffer[65536..4294967295]; -> IndexInt ~ int


// i8/u8 .. i64/u64 are assumed to be the project's fixed-width typedefs, e.g.:
typedef signed char        i8;   typedef unsigned char       u8;
typedef short              i16;  typedef unsigned short      u16;
typedef int                i32;  typedef unsigned int        u32;
typedef long long          i64;  typedef unsigned long long  u64;

// Smallest integer types with at least I bits.
template<int I>
struct AtLeast
{
    typedef typename AtLeast<I+1>::Signed   Signed;
    typedef typename AtLeast<I+1>::Unsigned Unsigned;
};
template<> struct AtLeast<8>  { typedef i8  Signed; typedef u8  Unsigned; };
template<> struct AtLeast<16> { typedef i16 Signed; typedef u16 Unsigned; };
template<> struct AtLeast<32> { typedef i32 Signed; typedef u32 Unsigned; };
template<> struct AtLeast<64> { typedef i64 Signed; typedef u64 Unsigned; };
template<> struct AtLeast<65>
{
    //TODO: no integer type wider than 64 bits
    //int NoIntOver64[0];
};

// Largest integer types with at most I bits.
template<int I>
struct AtMost
{
    typedef typename AtMost<I-1>::Signed   Signed;
    typedef typename AtMost<I-1>::Unsigned Unsigned;
};
template<> struct AtMost<0>  { typedef void Signed; typedef void Unsigned; };
template<> struct AtMost<8>  { typedef i8  Signed; typedef u8  Unsigned; };
template<> struct AtMost<16> { typedef i16 Signed; typedef u16 Unsigned; };
template<> struct AtMost<32> { typedef i32 Signed; typedef u32 Unsigned; };
template<> struct AtMost<64> { typedef i64 Signed; typedef u64 Unsigned; };
template<> struct AtMost<65>
{
    //TODO
};

// Compile-time log2: Lower is floor(log2 I); Upper is Lower + 1, i.e. the number
// of bits needed to represent I itself.
template<unsigned int I>
struct Log2
{
private:
    typedef Log2<I/2> Log2Calc;
public:
    static const int Upper = 1 + Log2Calc::Upper;
    static const int Lower = 1 + Log2Calc::Lower;
};
template<>
struct Log2<1>
{
    static const int Upper = 1;
    static const int Lower = 0;
};
template<>
struct Log2<0>
{
    //TODO: Log2 of zero is undefined
};


daerid    354
quote:
Original post by Magmai Kai Holmlor


I totally missed that one
