Amadeus H

Stroke of integer genius


> I wonder how many of them would have caught it if it was a debugging exercise as part of a larger project example though.

Usually the problem with those projects is that they're so big it's extremely easy to miss something. There's so much that can go wrong that, unless the structure is extremely clear, you're bound to waste lots of time digging through large pieces of code and hoping you don't overlook the faulty line(s).

 

That's also why it's better to fix a bug as soon as it's found: you can assume it was most likely a recent change that caused it, which narrows down where to look (and like 99% of the time either the bug is in the new code, or the new code helped expose it and gives away where it was).

 

GCC and Clang treat long as a 64-bit value on 64-bit systems, but MSVC requires you to use long long. In C++, at least.

GCC still treats long as 32-bit on Windows (at least with MinGW-w64), as far as I know (feel free to correct me if that isn't the case), although it uses 64 bits everywhere else. No idea about Clang.

 

If you really need a guaranteed number of bits though, the fixed-width types in stdint.h are probably a better choice =P (assuming there isn't already some other type that's made specifically for the kind of data in question)


> That's also why it's better to fix a bug as soon as it's found, because at least you can assume it's most likely a recent change that did it and help narrow the places where to look (and like 99% of the time either the bug is in the new code or the new code helped expose it and gives away where it was).

 

I wish I could follow this. But here's the real process.

  1. I spot a bug.
  2. I report the bug.
  3. Someone else (a "release manager/coordinator") has to assign the bug to (hopefully) me.
  4. I come up with a suggested fix.
  5. A senior dev/architect accepts the fix.
  6. I apply the fix.
  7. The fix is code reviewed.
  8. The fix is tested during the next scheduled system test.
  9. The bug is fixed.


> I'm not sure what's worse here. The fact that "gb" can only take two different values (zero or one) on 32-bit systems

 

Actually it could be 2 if you're lucky :p


> Actually, int remains 32-bits for most languages and compilers out there.

 

> Really? I thought it was based on the CPU and you had to target the compiler to 32-bit or whatever XX-bit.

int is suggested to be the same size as the "natural" size of an integer in the processor. x86 systems use 32 bit registers, hence most (all?) compilers targeting an x86 system will make int a 32 bit integer, so it nicely matches the processor's "natural" integer size.

 

So what about these pesky 64 bit systems? Believe it or not, their "natural" size is still 32 bits, at least for x64. x64 is really just x86 with a few extensions bolted on for 64 bit support: your 64 bit x64 chip is built on the 32 bit x86 architecture at its heart. That suggests its "natural" integer size is still 32 bits, which means it still makes sense for int to be 32 bits. Yes, x64 systems add some 64 bit registers and a few new instructions, but they're really building on top of the 32 bit x86 architecture.

 

But on top of that, the compiler is free to make int any size it wants (so long as it can represent +/-32767). A lot of people have written code that assumed int is 32 bits, and by keeping int 32 bits when compiling for a 64 bit system, some headaches/bugs can be avoided. So part of it is compiler implementors having mercy on those who wrote code that would break if int was 64 bits.



Re: int - well, not really.  The only thing absolutely guaranteed is that short <= int <= long; otherwise you can't really rely on anything.  If you've got explicitly sized types available (int32, __int64, etc, depending on compiler/platform) it might not be a bad idea to use those instead in cases where this might matter.

In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.


> I'm not sure what's worse here. The fact that "gb" can only take two different values (zero or one) on 32-bit systems

 

> Actually it could be 2 if you're lucky :P

No (there aren't 32-bit systems with a larger int). To get 2 you need 2³¹, but the maximum you can store on a signed 32-bit int is 2³¹-1.

 

 

> In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.

I think char is required to be 8-bit now (although it wasn't the case in old standards, in fact GCC had a platform with a 16-bit char).


> No (there aren't 32-bit systems with a larger int). To get 2 you need 2³¹, but the maximum you can store on a signed 32-bit int is 2³¹-1

It's possible for int to be more than 32-bits on some particular system (even a 32-bit system). It's a rare occurrence (if it ever occurs, hence you'd need to be "lucky"), but not forbidden by C or C++.
 


> In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.

> I think char is required to be 8-bit now (although it wasn't the case in old standards, in fact GCC had a platform with a 16-bit char).


Nope. It's required to be at least 8 bits in C++11 and C11. It can be more than 8 bits. The C++ standard builds on the C standard and requires compliance with the following portion of the C standard (among other portions):
 

5.2.4.2.1 Sizes of integer types <limits.h>
 
1 The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
 
number of bits for smallest object that is not a bit-field (byte)
CHAR_BIT 8

 
It says there that CHAR_BIT, which defines the number of bits in a char, is at least 8 (but may be more: "their implementation-defined values shall be equal or greater...").
