Amadeus H

Stroke of integer genius

I'm not sure what's worse here. The fact that "gb" can only take two different values (zero or one) on 32-bit systems, or the lack of rounding causing huge accuracy problems (0.99GB anyone?)

 

Probably should make those floats or doubles anyway. That way you can't screw up! (but you still will)
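
For the curious, here's a minimal C++ sketch of the failure mode (variable names are hypothetical; the original was presumably C#, but the arithmetic is the same):

    #include <cstdio>

    int main() {
        // Hypothetical reconstruction: with a 32-bit signed int, the byte
        // count tops out at 2^31 - 1, so "gb" can never exceed 1.
        int bytes = 2147483647;                 // INT_MAX where int is 32-bit
        int gb = bytes / (1024 * 1024 * 1024);
        std::printf("truncated: %d GB\n", gb);  // prints 1; 2 is unreachable

        // One possible fix: a 64-bit count plus floating-point division,
        // so ~0.99 GB no longer silently becomes 0 GB.
        long long bytes64 = 1065151889LL;       // roughly 0.99 * 2^30 bytes
        double gbExact = static_cast<double>(bytes64) / (1024.0 * 1024.0 * 1024.0);
        std::printf("exact: %.2f GB\n", gbExact);  // prints 0.99
    }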


I should never have dug through this part of the code base.

In a function (EstimateTime) there's a background thread with a while(true) loop (terminated using Abort() -- yeah, Abort). At the end of the loop I found this comment:

 

Thread.Sleep(500); // Estimate time every 5 seconds.

 

Yeah, that's not how milliseconds work.

And why not use the built-in C# Timer class?

 

I don't know. I give up.
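
For what it's worth, Thread.Sleep takes milliseconds, so 500 is half a second; five seconds would be Sleep(5000). A small sketch in C++ (illustrative only, since the original is C#) of how typed durations put the unit into the type instead of into a comment that can lie:

    #include <chrono>
    #include <thread>

    int main() {
        using namespace std::chrono_literals;

        // The unit is part of the value, so "500ms" cannot be
        // mistaken for five seconds the way a bare 500 can.
        std::this_thread::sleep_for(500ms);  // half a second
        std::this_thread::sleep_for(5s);     // five seconds
    }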




Thanks for these examples. I use them as "warm up" exercises for the C# programming class I teach. My 16- to 17-year-old students wondered why anybody in their right mind would use an int for that disk space calculation when the numbers involved can easily exceed 2 billion. One of my female students also chimed in, pointing out that integer division truncates the remainder; she thought that would dramatically skew the result. I wonder how many of them would have caught it if it were a debugging exercise inside a larger project, though.


Granted, this depends on when that code was written, but int is normally the natural word size of the CPU. So on a modern (64-bit) processor, an unsigned int or even a plain int shouldn't be a problem.

Though to be clear, I know a long would be the better and safer choice.



> Actually, int remains 32 bits for most languages and compilers out there.

 

> Really? I thought it was based on the CPU and you had to target the compiler to 32-bit or whatever XX-bit.

Aye, on practically all compilers I know of (other than those for more obscure chips), int is a 32-bit value. GCC and Clang treat long as a 64-bit value on 64-bit systems, but MSVC requires you to use long long. In C++, at least. In languages like C#, int is 32-bit and long is 64-bit.
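
A quick way to check what your own toolchain does (a sketch; the output depends on the platform's data model, e.g. LP64 on 64-bit Linux/macOS versus LLP64 on 64-bit Windows):

    #include <cstdio>

    int main() {
        // Sizes are implementation-defined; typical 64-bit results:
        //   LP64 (Linux/macOS): int = 4, long = 8, long long = 8
        //   LLP64 (Windows):    int = 4, long = 4, long long = 8
        std::printf("int:       %zu bytes\n", sizeof(int));
        std::printf("long:      %zu bytes\n", sizeof(long));
        std::printf("long long: %zu bytes\n", sizeof(long long));
    }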


> I wonder how many of them would have caught it if it were a debugging exercise inside a larger project, though.

Usually the problem with those projects is that they're so big it's extremely easy to miss something. So much can go wrong that unless the structure is extremely clear, you're bound to waste lots of time combing through large pieces of code and hoping you don't overlook the faulty line(s).

 

That's also why it's better to fix a bug as soon as it's found: you can assume it was most likely caused by a recent change, which helps narrow down where to look (and something like 99% of the time the bug is either in the new code, or the new code exposed it and gives away where it was).

 

> GCC and Clang treat long as a 64-bit value on 64-bit systems, but MSVC requires you to use long long. In C++, at least.

GCC still treats long as 32-bit on Windows (at least with MinGW-w64), as far as I know (feel free to correct me if that isn't the case), although it uses 64-bit everywhere else. No idea about Clang.

 

If you really need a guaranteed number of bits, though, the types in stdint.h are probably a better choice =P (assuming there isn't already some other type made specifically for the kind of data in question)
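
For example (a small sketch of the stdint.h approach; the 5 GB figure is made up): the fixed-width types say exactly what you get on every platform that provides them:

    #include <cstdint>
    #include <cinttypes>
    #include <cstdio>

    int main() {
        // uint64_t is exactly 64 bits wherever it exists, so a byte
        // count can hold sizes far beyond the 2 GB limit of an int32.
        std::uint64_t bytes = UINT64_C(5000000000);   // hypothetical ~5 GB disk
        std::uint64_t gb = bytes / (UINT64_C(1024) * 1024 * 1024);
        std::printf("%" PRIu64 " GB\n", gb);          // prints 4 (still truncated!)
    }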


> That's also why it's better to fix a bug as soon as it's found: you can assume it was most likely caused by a recent change, which helps narrow down where to look (and something like 99% of the time the bug is either in the new code, or the new code exposed it and gives away where it was).

 

I wish I could follow this. But here's the real process.

  1. I spot a bug.
  2. I report the bug.
  3. Someone else ("release manager/coordinator") assigns the bug to (hopefully) me.
  4. I come up with a suggested fix.
  5. A senior dev/architect accepts the fix.
  6. I apply the fix.
  7. The fix is code reviewed.
  8. The fix is tested during the next scheduled system test.
  9. Bug is fixed.


> Actually, int remains 32 bits for most languages and compilers out there.

 

> Really? I thought it was based on the CPU and you had to target the compiler to 32-bit or whatever XX-bit.

int is suggested to be the same size as the "natural" size of an integer in the processor. x86 systems use 32-bit registers, hence most (all?) compilers targeting an x86 system make int a 32-bit integer, nicely matching the processor's "natural" integer size.

 

So what about these pesky 64-bit systems? Believe it or not, their "natural" size is arguably still 32 bits, at least for x64. x64 is really just x86 with a set of extensions for 64-bit support: the chip gains 64-bit registers and a few new instructions, but at its heart it's still built on the 32-bit x86 architecture. That suggests its "natural" size is still 32 bits, which means it still makes sense for int to be 32 bits.

 

But on top of that, the compiler is free to make int any size it wants (so long as it can represent ±32767). A lot of people have written code that assumes int is 32 bits, and keeping int at 32 bits when compiling for a 64-bit system avoids some of those headaches/bugs. So part of it is compiler implementers having mercy on those who wrote code that would break if int were 64 bits.



Re: int - well, not really. The only thing absolutely guaranteed is that short <= int <= long; beyond that you can't really rely on anything. If you've got explicitly sized types available (int32, __int64, etc., depending on compiler/platform), it might not be a bad idea to use those in cases where the size matters.

In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.
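
Those minimums can even be turned into compile-time checks (a sketch; it counts bits as sizeof(T) * CHAR_BIT, which may include padding bits on exotic platforms):

    #include <climits>

    // Compile-time checks of the guaranteed C/C++ minimum widths.
    static_assert(CHAR_BIT >= 8,                      "char is at least 8 bits");
    static_assert(sizeof(short) * CHAR_BIT >= 16,     "short is at least 16 bits");
    static_assert(sizeof(int) * CHAR_BIT >= 16,       "int is at least 16 bits");
    static_assert(sizeof(long) * CHAR_BIT >= 32,      "long is at least 32 bits");
    static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long is at least 64 bits");

    int main() {}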


> I'm not sure what's worse here. The fact that "gb" can only take two different values (zero or one) on 32-bit systems

 

> Actually it could be 2 if you're lucky :P

No (there aren't 32-bit systems with a larger int). To get 2 you need 2³¹ bytes, but the maximum you can store in a signed 32-bit int is 2³¹-1.

 

 

> In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.

I think char is required to be 8-bit now (although that wasn't the case in old standards; in fact, GCC had a platform with a 16-bit char).


> No (there aren't 32-bit systems with a larger int). To get 2 you need 2³¹ bytes, but the maximum you can store in a signed 32-bit int is 2³¹-1.

It's possible for int to be more than 32 bits on some particular system (even a 32-bit system). It's a rare occurrence (if it ever occurs, hence you'd need to be "lucky"), but it's not forbidden by C or C++.
 


> In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.

> I think char is required to be 8-bit now (although that wasn't the case in old standards; in fact, GCC had a platform with a 16-bit char).


Nope. It's required to be at least 8 bits in C++11 and C11. It can be more than 8 bits. The C++ standard builds on the C standard and requires compliance with the following portion of the C standard (among other portions):
 

> 5.2.4.2.1 Sizes of integer types <limits.h>
>
> 1 The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
>
> number of bits for smallest object that is not a bit-field (byte)
> CHAR_BIT 8

 
It says there that CHAR_BIT, which defines the number of bits in a char, is at least 8 (but may be more: "their implementation-defined values shall be equal or greater...").


> (thereby forcing char to be a byte)

You might be confused because the standards define char to be a byte, but a C or C++ byte isn't necessarily 8 bits. So while sizeof(char) must always be 1, that doesn't mean char is required to be 8 bits.
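
A tiny sketch of that distinction; on virtually every mainstream platform this prints 1 and 8, but only the 1 is guaranteed:

    #include <cstdio>
    #include <climits>

    int main() {
        // sizeof is measured in bytes, and char is the unit: always 1.
        std::printf("sizeof(char) = %zu\n", sizeof(char));
        // How many bits that byte holds is CHAR_BIT: at least 8, maybe more.
        std::printf("CHAR_BIT     = %d\n", CHAR_BIT);
    }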


No, I was going by CHAR_BIT always being 8, but that may just be POSIX after all (which indeed does seem to require it to be 8).

 

Systems did exist with other char sizes:

http://en.wikipedia.org/wiki/ICT_1900_series

But IBM software compatibility forced the industry to standardise on 8 bits. If anything, it was the customers who demanded an 8-bit char standard (to make porting business software easier). If you happened to be building computers with 6-bit chars, you'd have seen a rapid decline in sales through the early '70s. So companies chased the money and made their chars 8-bit instead. POSIX just standardised what everyone had already (mostly) been doing.

