Typedef


I have seen a few cases where game engines typedef every integer/floating-point/bool type. Is there a reason for this? Is it just for consistency?
     Hope is the first step on the road to disappointment

Guest Anonymous Poster
It can be useful. Suppose your engine is written for DX. You want floats. Now suppose you're going to compile for some API that uses doubles. Do you really want to go through every line of every file and make changes? It's easier to just update the typedef and be on your way.

Also, people have been bitten by the "int was 16 bits but it's now 32 bits" thing. What's a 64-bit number going to be on a 64-bit machine? Well, it'll be an int, by definition of int... but will a long be 64 bits or 32 bits? If they want 32-bit data, they know there will be SOME way of specifying it on new hardware, but no idea how yet. The typedef can be updated when compilers for new 64-bit architectures come out.
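
A minimal sketch of the float/double idea (the ENGINE_USE_DOUBLES macro and the "real" name are made up for illustration):

#include <cstdio>

#ifdef ENGINE_USE_DOUBLES
typedef double real;   // e.g. for an API that works in doubles
#else
typedef float  real;   // e.g. for Direct3D, which works in floats
#endif

// All engine code uses "real"; switching precision later is a
// one-line change (or a -D compiler flag), not a project-wide edit.
int main()
{
    real v[3] = { (real)1.5, (real)2.5, (real)3.5 };
    real len2 = v[0] * v[0] + v[1] * v[1] + v[2] * v[2];
    std::printf("squared length = %f\n", (double)len2);
    return 0;
}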

Abstraction of the type. Same reason to use enums instead of ints whenever it is appropriate.
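
A quick illustration of that (BlendMode and setBlendMode are made-up names):

#include <cstdio>

// With a plain int parameter, setBlendMode(42) would compile fine.
// With an enum, the type documents and restricts the legal values.
enum BlendMode { BLEND_OPAQUE, BLEND_ADDITIVE, BLEND_ALPHA };

void setBlendMode(BlendMode mode)
{
    std::printf("blend mode = %d\n", (int)mode);
}

int main()
{
    setBlendMode(BLEND_ALPHA);   // OK
    // setBlendMode(42);         // error: int doesn't convert to BlendMode
    return 0;
}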

- Magmai Kai Holmlor

"No, his mind is not for rent to any god nor government" - Rush, Tom Sawyer

[Look for information | GDNet Start Here | GDNet Search Tool | GDNet FAQ | MSDN RTF[L] | SGI STL Docs | STFW | Asking Smart Questions ]

[Free C++ Libraries | Boost | ACE | Loki | MTL | Blitz++ | wxWindows | Spirit(xBNF)]
[Free C Libraries | zlib ]

An integer (int) type will change to 64 bits as the 64-bit processors roll out. An int is a word, which is defined as the number of bits the data bus can handle.

It would have to be some kind of exotic typedef for a 32-bit type, since short is taken to be 16 bits.

It won't be an exotic typedef. 64-bit compilers will provide some method of specifying 32-bit values. Just as the 'long long' extension gives 64 bits on 32-bit machines, they'll add some new type that means 32 bits.

Maybe they'll even create a whole new set of types, learning from past mistakes, that DO have a definite size associated with them: int8, int16, int32, int64, int128, and so on, for example.

Knowing the industry, though, they'll just patch 64-bit compilers to allow 32-bit values and think, "We won't have to worry about what happens when we hit 128 bits for a long time; why bother to define things now?"

These transitional periods are often a real pain, as each compiler does its own thing for a while.
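
A sketch of what such a header tends to look like in practice (all names are illustrative; __int32 and friends are MSVC extensions, 'long long' the common alternative elsewhere):

#if defined(_MSC_VER)
typedef __int8           int8;
typedef __int16          int16;
typedef __int32          int32;
typedef __int64          int64;
#else
typedef signed char      int8;
typedef short            int16;
typedef int              int32;   // holds on 32-bit targets and under LP64
typedef long long        int64;   // the 'long long' extension
#endif

// Compile-time size checks: an array with a negative size won't
// compile, so a wrong typedef is caught as soon as this is included.
typedef char assert_int16[sizeof(int16) == 2 ? 1 : -1];
typedef char assert_int32[sizeof(int32) == 4 ? 1 : -1];
typedef char assert_int64[sizeof(int64) == 8 ? 1 : -1];

int main() { return 0; }   // nothing to run; the checks are compile-time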

quote:
Original post by GoofProg
An integer (int) type will change to 64 bits as the 64-bit processors roll out. An int is a word, which is defined as the number of bits the data bus can handle.

It would have to be some kind of exotic typedef for a 32-bit type, since short is taken to be 16 bits.


Correction: "int" is whatever the individual compiler developer decides to make it. The C/C++ standards make no guarantees about its size.


How appropriate. You fight like a cow.

Guest Anonymous Poster

Do a Google search for the "LP64" model. It says that char is 8 bits, short is 16 bits, int is 32 bits, and long and pointers are 64 bits. It's the sanest way of implementing a 64-bit compiler; everyone should be using it.

Corollary: if you want to cast between pointers and integral types, cast to long, not to int. Or use intptr_t, which is defined to be large enough to hold a pointer.

Second corollary: if you're doing file or network I/O, or defining binary interfaces, typedef-ing specific sizes will save your butt ten times over.
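
A sketch of both corollaries (uint16/uint32 are illustrative typedefs, assuming the usual 16-bit short and 32-bit int):

#include <cstdio>

typedef unsigned short uint16;   // assuming a 16-bit short
typedef unsigned int   uint32;   // assuming a 32-bit int

// An on-wire layout: every field has an explicit size, so the struct
// means the same thing on a 32-bit build and a 64-bit build.
struct PacketHeader
{
    uint32 magic;
    uint16 version;
    uint16 length;
};

int main()
{
    int x = 0;
    void* p = &x;
    // Under LP64, long matches the pointer width, so this round-trips;
    // casting through int would truncate the pointer on 64-bit.
    unsigned long bits = (unsigned long)p;
    std::printf("pointer bits: %lx\n", bits);
    std::printf("sizeof(PacketHeader) = %u\n", (unsigned)sizeof(PacketHeader));
    return 0;
}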

quote:
Original post by Sneftel
Correction: "int" is whatever the individual compiler developer decides to make it. The C/C++ standards make no guarantees about its size.


They guarantee it will be at least 16 bits in size.


[edited by - mcfly on May 1, 2003 12:18:44 PM]
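
The guarantee comes from minimum ranges rather than exact sizes; <climits> shows what a given compiler actually provides:

#include <climits>
#include <cstdio>

int main()
{
    // The standard requires INT_MAX >= 32767 (so int is at least 16
    // bits) and CHAR_BIT >= 8, but the exact widths are up to the
    // implementation.
    std::printf("CHAR_BIT    = %d\n", CHAR_BIT);
    std::printf("INT_MAX     = %d\n", INT_MAX);
    std::printf("sizeof(int) = %u bytes\n", (unsigned)sizeof(int));
    return 0;
}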

One of the proposals for C++0x is fixed-size integer types, which would take a lot of the stress out of porting to different platforms.

Personally, I think that, along with the standard names we're all used to, there should be a set of standard keywords prefixed with "int" that mean the same thing:

int8 for char
uint8 for unsigned char
int16 for short
uint16 for unsigned short
int32 for int
uint32 for unsigned int
int64 for long
uint64 for unsigned long

sizeof(void*) == sizeof(int64)

That would make portable network programming a lot easier, IMO. I don't know if this is the system being proposed for the new C++ standard, but I think it makes sense. And as processors march towards 128 bits (and you _know_ they will, probably faster than we think), a new type of int should be introduced, possibly int128 / wide and uint128 / uwide.
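
For what it's worth, C99's <stdint.h> already provides nearly this exact set (int32_t, uint64_t, and so on), and it's the likely model for the C++ version; a quick sketch:

#include <stdint.h>   // C99; the expected basis for C++'s fixed-size ints
#include <stdio.h>

int main()
{
    int32_t  a = -123456;        // exactly 32 bits, signed
    uint64_t b = 1;              // exactly 64 bits, unsigned
    intptr_t c = (intptr_t)&a;   // wide enough to hold a pointer

    printf("sizeof(int32_t)  = %u\n", (unsigned)sizeof(a));
    printf("sizeof(uint64_t) = %u\n", (unsigned)sizeof(b));
    printf("sizeof(intptr_t) = %u\n", (unsigned)sizeof(c));
    return 0;
}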

quote:
Original post by mcfly
They guarantee it will be at least 16 bits in size.

Really? To my knowledge, the standard places no such limitation. On some system, char, short int and int could all be 8 bits in size, perhaps even less.

(The minimum size for char may be limited by some other rules, so it's probably not possible to have a C++ implementation with 1-bit chars.)

