Integers


Why doesn't everybody use int when they're coding? A lot of libraries I've seen (Box2D, SDL, WinAPI) use integer types like UInt32, Int32, or UInt. I don't see a reason for this, since integers in C++ are unsigned 32-bit by default, and most IDEs/code editors only highlight int.


I don't see a reason for this, since integers in C++ are unsigned 32-bit by default

No, they're not. The C++ standard doesn't specify how many bits an int is. That's why they use int32, so that they can be sure that whatever compiler is being used, they're getting a 32-bit integer. The uint stuff is just because it's shorter to type than unsigned int. [edit: and yes, as was said below, integers are signed by default, not unsigned]

C++ does not require a char to be 8 bits, either, so you may not even be working with power-of-2 size data types to begin with, for that matter.

First of all, integers in C++ are not unsigned; they are signed 32-bit by default on 32-bit and now 64-bit platforms. However, there is no guarantee that "int" is a 32-bit signed integer on all platforms. For example, on old 16-bit platforms it was only 16 bits. Another example, not for int this time but for "long": on 32-bit platforms long is 32 bits, and on 64-bit platforms long is 64 bits. Because of these inconsistencies, when you are developing a cross-platform library it is better to define your own types, depending on the platform. That way you ensure portability for your library/application.


...since integers in C++ are ~~un~~signed 32-bit by default...

Fixed. (I'm assuming you already knew that and just made a simple/common mistake.)

int32_t, uint32_t, etc. are what should be used, not library-specific typedefs like Uint8 or whatever.

If you just need to hold a number, int is fine. But if you want to be a bit more specific of what you are wanting, you might as well declare your intent. ("This is a number that will never go below zero", or "This is a number that will get very large").

Sometimes I typedef the ints to give extra readability to the code.
typedef uint32_t PlayerID;
typedef uint32_t EntityMask;
etc...

I might be mistaken about this, but I seem to recall an expectation that 'int' might have been increased to 64 bits by default on 64-bit computers, but that this was overruled because programmers had made too many faulty assumptions that 'int' would always be 32 bits, despite that never being guaranteed, and it would break too much code to change it now. If your code depends on such assumptions about size, you should at least use the standardized way to declare those assumptions (int32_t, uint32_t).
'int' just means, "Give me something to hold numbers, dang it!"
'int32_t' means, "Give me something that is exactly 32 bits, and signed"


int32_t, uint32_t, etc... is what should be used


I think int32_t and similar types were only available in Boost until the latest version of Visual Studio, as they were not in the C++ standard but in TR1.


First of all, integers in C++ are not unsigned; they are signed 32-bit by default on 32-bit and now 64-bit platforms. However, there is no guarantee that "int" is a 32-bit signed integer on all platforms. For example, on old 16-bit platforms it was only 16 bits. Another example, not for int this time but for "long": on 32-bit platforms long is 32 bits, and on 64-bit platforms long is 64 bits. Because of these inconsistencies, when you are developing a cross-platform library it is better to define your own types, depending on the platform. That way you ensure portability for your library/application.

No, actually, long is either 32 or 64 bit depending on the OS on most (not all) 64 bit systems. Again, for the same reason int tends to be 32 bits on 32 and 64 bit systems: bad assumptions about the size of long. This is why the new standard (and compilers even before the standard) introduced types like long long.

As far as needing types of a specific bit width: C99 introduced them in stdint.h, and Boost provides boost/cstdint.hpp (which has been in Boost since '99).


No, actually, long is either 32 or 64 bit depending on the OS on most (not all) 64 bit systems. Again, for the same reason int tends to be 32 bits on 32 and 64 bit systems: bad assumptions about the size of long. This is why the new standard (and compilers even before the standard) introduced types like long long.


I am sorry, I meant "long long" on that one, my bad.


Washu wrote:
No, actually, long is either 32 or 64 bit depending on the OS on most (not all) 64 bit systems. Again, for the same reason int tends to be 32 bits on 32 and 64 bit systems: bad assumptions about the size of long. This is why the new standard (and compilers even before the standard) introduced types like long long.

I am sorry, I meant "long long" on that one, my bad.
long long is 64 bits on all the platforms I've used (32 bit and 64 bit)... and according to inttypes and C++11 it's required to be at least 64 bits.


Servant of the Lord wrote:
int32_t, uint32_t, etc... is what should be used

I think int32_t and similar types were only available in Boost until the latest version of Visual Studio, as they were not in the C++ standard but in TR1.
Ah, my mistake. The <inttypes.h> header was part of the C99 standard (as opposed to C++98), apparently. GCC has it currently available in C++.

For the record, what the standard claims about the size of integers goes like this: sizeof(signed char) = 1 <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long). char is at least 8 bits, short and int are both at least 16 bits, long is at least 32 bits and long long is at least 64 bits.

I've personally used compilers with 16-bit ints back in the DOS days, and at least one embedded system out there (predating long long) made everything 32 bits, and it was perfectly conforming.
