# Stroke of integer genius

## 23 posts in this topic

I'm not sure what's worse here. The fact that "gb" can only take two different values (zero or one) on 32-bit systems, or the lack of rounding causing huge accuracy problems (0.99 GB, anyone?)

Probably should make those floats or doubles anyway. That way you can't screw up! (but you still will)


##### Share on other sites

Sadly reminiscent of a certain company's installers which - last time I checked - still hadn't been updated for that magical moment when drives > 2gb became not only possible, but also commonplace.  Shudder.


##### Share on other sites

To be politically correct, that should say:

```
int GiB = (int)fileManager.RequiredDiskSpace / 1024 / 1024 / 1024;
```


##### Share on other sites

I should never have dug through this part of the code base.

In a function (EstimateTime) I found a background thread with a while(true) loop (terminated using Abort() -- yeah, Abort). At the end of the loop there was this:

```
Thread.Sleep(500); // Estimate time every 5 seconds.
```


Yeah, that's not how milliseconds work.

And why not use the built-in C# Timer class?

I don't know. I give up.


##### Share on other sites

Granted this question depends on when that code was written, but int is normally as wide as the CPU's word size. So on a modern (64-bit) processor, an unsigned int or even a plain int shouldn't be a problem.

Though to be clear, I know a long would be the better and safer choice.


##### Share on other sites

Actually, int remains 32-bits for most languages and compilers out there.

Really? I thought it was based on the CPU and you had to target the compiler to 32-bit or whatever XX-bit.


##### Share on other sites

Actually, int remains 32-bits for most languages and compilers out there.

Really? I thought it was based on the CPU and you had to target the compiler to 32-bit or whatever XX-bit.

Aye, on practically all compilers I know of (other than those for more obscure chips), int is a 32-bit value. GCC and Clang treat long as a 64-bit value on 64-bit systems, but MSVC requires you to use long long. In C++, at least. In languages like C#, int is 32-bit and long is 64-bit.


##### Share on other sites

I wonder how many of them would have caught it if it was a debugging exercise as part of a larger project example though.

Usually the problem with those projects is that they're so big it's extremely easy to miss something. There's so much that can go wrong that unless the structure is extremely clear, you're bound to waste lots of time digging through large pieces of code and hoping you don't overlook the faulty line(s).

That's also why it's better to fix a bug as soon as it's found: at least you can assume it was most likely a recent change that caused it, which helps narrow down where to look (and like 99% of the time the bug is either in the new code, or the new code exposed it and gives away where it was).

GCC and Clang treat long as a 64-bit value on 64-bit systems, but MSVC requires you do use long long. In C++, at least.

GCC still treats long as 32-bit on Windows (at least MinGW-w64), as far as I know (feel free to correct me if that isn't the case), although uses 64-bit everywhere else. No idea about Clang.

If you really need a specific guaranteed amount of bits though the values in stdint.h are probably a better choice =P (assuming there isn't already some other type that's made specifically for the kind of data in question)


##### Share on other sites

That's also why it's better to fix a bug as soon as it's found: at least you can assume it was most likely a recent change that caused it, which helps narrow down where to look (and like 99% of the time the bug is either in the new code, or the new code exposed it and gives away where it was).

I wish I could follow this. But here's the real process.

1. I spot a bug.
2. I report the bug.
3. Someone else (a "release manager/coordinator") has to assign the bug to (hopefully) me.
4. I come up with a suggested fix.
5. A senior dev/architect accepts the fix.
6. I apply the fix.
7. The fix is code reviewed.
8. The fix is tested during the next scheduled system test.
9. The bug is fixed.

##### Share on other sites

I'm not sure what's worse here. The fact that "gb" can only take two different values (zero or one) on 32-bit systems

Actually it could be 2 if you're lucky :p


##### Share on other sites

Re: int - well, not really.  The only thing absolutely guaranteed is that short <= int <= long; otherwise you can't really rely on anything.  If you've got explicitly sized types available (int32, __int64, etc, depending on compiler/platform) it might not be a bad idea to use those instead in cases where this might matter.


##### Share on other sites
In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.

##### Share on other sites

I'm not sure what's worse here. The fact that "gb" can only take two different values (zero or one) on 32-bit systems

Actually it could be 2 if you're lucky

No (there aren't 32-bit systems with a larger int). To get 2 you need 2³¹, but the maximum you can store on a signed 32-bit int is 2³¹-1.

In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.

I think char is required to be 8-bit now (although it wasn't the case in old standards, in fact GCC had a platform with a 16-bit char).


##### Share on other sites

No (there aren't 32-bit systems with a larger int). To get 2 you need 2³¹, but the maximum you can store on a signed 32-bit int is 2³¹-1

It's possible for int to be more than 32-bits on some particular system (even a 32-bit system). It's a rare occurrence (if it ever occurs, hence you'd need to be "lucky"), but not forbidden by C or C++.

In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.

I think char is required to be 8-bit now (although it wasn't the case in old standards, in fact GCC had a platform with a 16-bit char).

Nope. It's required to be at least 8 bits in C++11 and C11. It can be more than 8 bits. The C++ standard builds on the C standard and requires compliance with the following portion of the C standard (among other portions):

5.2.4.2.1 Sizes of integer types <limits.h>

1 The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.

number of bits for smallest object that is not a bit-field (byte)
CHAR_BIT 8

It says there that CHAR_BIT, which defines the number of bits in a char, is at least 8 (but may be more: "their implementation-defined values shall be equal or greater...").

##### Share on other sites

Weird, I remember reading that CHAR_BIT had to be exactly 8 (thereby forcing char to be a byte). Maybe I misunderstood and it's just POSIX that requires it?


##### Share on other sites

Maybe I misunderstood and it's just POSIX that requires it?

Yeah, POSIX requires it, so that's a likely possibility.

##### Share on other sites

(thereby forcing char to be a byte)

You might be confused because the standards define `char` to be a byte, but a C or C++ byte isn't necessarily 8 bits. So while `sizeof(char)` must always be 1, that doesn't mean that `char` is required to be 8 bits.

##### Share on other sites

No, I was going by CHAR_BIT being always 8, but it may be just POSIX after all (which indeed does seem to require it to be 8).


##### Share on other sites

No, I was going by CHAR_BIT being always 8, but it may be just POSIX after all (which indeed does seem to require it to be 8).

Systems did exist with other char sizes:

http://en.wikipedia.org/wiki/ICT_1900_series

But IBM software compatibility forced the industry to standardise on 8 bits. If anything, it was the customers that demanded an 8-bit char standard (to make porting business software easier). If you happened to be building computers with 6-bit chars, you'd have very quickly seen a rapid decline in sales through the early 70s. So companies chased the money and made their chars 8-bit instead. POSIX just standardised what everyone had already (mostly) been doing.


##### Share on other sites

I was thinking of modern standards though (think e.g. C99, by which point the idea of a byte that isn't 8 bits is just plain ridiculous unless you're, say, on a DSP that can only handle its word size and absolutely nothing else).

