

Stroke of integer genius


#1 Amadeus H   Members   -  Reputation: 1180


Posted 27 February 2013 - 07:53 AM

There I was, all alone - in debugging hell.

 

Why was our GUI not able to display the required disk space that was needed? Why was it showing 0?!

And why did our unit tests of the FileManager all shine of vibrant green?!

 

I started deep in the mounts of what our prophets label the business layer -- it was correctly calculating the bytes based on the file sizes.

 

I continued upward towards the swamp-like territory of the GUI-people.

There I stood. Face to face, with...

 

int gb = (int)fileManager.RequiredDiskSpace / 1024 / 1024 / 1024;



#2 Bacterius   Crossbones+   -  Reputation: 9098


Posted 27 February 2013 - 09:43 AM

I'm not sure what's worse here. The fact that "gb" can only take two different values (zero or one) on 32-bit systems, or the lack of rounding causing huge accuracy problems (0.99 GB, anyone?).

 

Probably should make those floats or doubles anyway. That way you can't screw up! (but you still will)
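For the record, a minimal self-contained sketch of both the failure and one possible fix (RequiredDiskSpace is assumed to be a long byte count, as the original post implies; the surrounding program is hypothetical):

using System;

class DiskSpaceExample
{
    static void Main()
    {
        long requiredDiskSpace = 5L * 1024 * 1024 * 1024; // 5 GB, as a byte count

        // The bug: the cast binds before the divisions, truncating the 64-bit
        // byte count to a 32-bit int. 5 GB wraps around to 1 GB, so the result
        // is 0 or 1 for sane inputs, and can even go negative after wraparound.
        int buggyGb = (int)requiredDiskSpace / 1024 / 1024 / 1024;

        // One fix: divide in floating point first, round up, cast last,
        // so 0.99 GB is reported as 1 GB instead of 0.
        int fixedGb = (int)Math.Ceiling(requiredDiskSpace / (1024.0 * 1024.0 * 1024.0));

        Console.WriteLine($"buggy: {buggyGb} GB, fixed: {fixedGb} GB"); // buggy: 1 GB, fixed: 5 GB
    }
}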


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#3 mhagain   Crossbones+   -  Reputation: 8176


Posted 27 February 2013 - 04:52 PM

Sadly reminiscent of a certain company's installers which - last time I checked - still hadn't been updated for that magical moment when drives > 2 GB became not only possible, but also commonplace.  Shudder.


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#4 Alpha_ProgDes   Crossbones+   -  Reputation: 4692


Posted 27 February 2013 - 07:53 PM

To be politically correct, that should say:

 

int GiB = (int)fileManager.RequiredDiskSpace / 1024 / 1024 / 1024;
 

Beginner in Game Development? Read here.

Super Mario Bros clone tutorial written in XNA 4.0 [MonoGame, ANX, and MonoXNA] by Scott Haley

If you have found any of the posts helpful, please show your appreciation by clicking the up arrow on those posts.

#5 Amadeus H   Members   -  Reputation: 1180


Posted 28 February 2013 - 06:15 AM

I should never have dug through this part of the code base.

In a function (EstimateTime), inside a background thread running a while(true) loop (terminated using Abort() -- yeah, Abort), I found this at the end:

 

Thread.Sleep(500); // Estimate time every 5 seconds.

 

Yeah, that's not how milliseconds work.

And why not use the built-in C# Timer class?

 

I don't know. I give up.
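For comparison, a minimal sketch of what the Timer-based version might look like (the callback body here is hypothetical; the point is that System.Threading.Timer replaces the sleep loop, actually waits five seconds between ticks, and is stopped by disposing it rather than by Abort()):

using System;
using System.Threading;

class EstimateTimeExample
{
    static void Main()
    {
        // Fires the callback every 5 seconds -- that's 5000 ms, not 500.
        using var timer = new Timer(
            _ => Console.WriteLine($"estimating remaining time at {DateTime.Now:T}"),
            state: null,
            dueTime: TimeSpan.Zero,
            period: TimeSpan.FromSeconds(5));

        Console.ReadLine(); // keep the process alive; disposing the timer stops it cleanly
    }
}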


Edited by Amadeus H, 28 February 2013 - 06:15 AM.


#6 Michael Tanczos   Senior Staff   -  Reputation: 5437


Posted 28 February 2013 - 12:48 PM

> I should never have dug through this part of the code base.
>
> In a function (EstimateTime), inside a background thread running a while(true) loop (terminated using Abort() -- yeah, Abort), I found this at the end:
>
> Thread.Sleep(500); // Estimate time every 5 seconds.
>
> Yeah, that's not how milliseconds work.
>
> And why not use the built-in C# Timer class?
>
> I don't know. I give up.

 

Thanks for these examples.  I use them as "warm up" exercises for the C# programming class I teach.  My 16-17 year old students wondered why anybody in their right mind would use an int for that disk space calculation when the numbers involved should easily reach 2+ billion.  One of my female students chimed in and wondered why they would do that given that integer division truncates remainders; she thought it would dramatically skew the calculation.  I wonder how many of them would have caught it if it were a debugging exercise as part of a larger project, though.



#7 Alpha_ProgDes   Crossbones+   -  Reputation: 4692


Posted 28 February 2013 - 04:03 PM

Granted, this depends on when that code was written, but int is normally as wide as the CPU word, giving it 2^(number of bits in the CPU) possible values. So on a modern (64-bit) processor an unsigned int or even a plain int shouldn't be a problem.

Though to be clear, I know a long would be the better and safer choice.


Edited by Alpha_ProgDes, 28 February 2013 - 04:03 PM.


#8 ApochPiQ   Moderators   -  Reputation: 16079


Posted 28 February 2013 - 04:30 PM

Actually, int remains 32 bits for most languages and compilers out there.

#9 Alpha_ProgDes   Crossbones+   -  Reputation: 4692


Posted 28 February 2013 - 04:53 PM

> Actually, int remains 32 bits for most languages and compilers out there.

 

Really? I thought it was based on the CPU and you had to target the compiler to 32-bit or whatever XX-bit.



#10 Ameise   Members   -  Reputation: 754


Posted 28 February 2013 - 07:29 PM

> > Actually, int remains 32 bits for most languages and compilers out there.
>
> Really? I thought it was based on the CPU and you had to target the compiler to 32-bit or whatever XX-bit.

Aye, on practically all compilers I know of (other than those for more obscure chips), int is a 32-bit value. GCC and Clang treat long as a 64-bit value on 64-bit systems, but MSVC requires you to use long long. In C++, at least. In languages like C#, int is 32-bit and long is 64-bit.
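A throwaway sketch of the C# side of that, for anyone who wants to see it with their own eyes (sizeof on the built-in numeric types is legal in safe code):

using System;

class Widths
{
    static void Main()
    {
        // In C# these widths are fixed by the language spec, regardless
        // of whether the process runs as 32-bit or 64-bit.
        Console.WriteLine(sizeof(int) * 8);  // 32
        Console.WriteLine(sizeof(long) * 8); // 64

        // Only the pointer size tracks the platform.
        Console.WriteLine(IntPtr.Size * 8);  // 32 or 64
    }
}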



#11 Sik_the_hedgehog   Crossbones+   -  Reputation: 1817


Posted 28 February 2013 - 10:34 PM

> I wonder how many of them would have caught it if it were a debugging exercise as part of a larger project, though.

Usually the problem with those projects is that they're so big it's extremely easy to miss something. There's so much that can go wrong that, unless the structure is extremely clear, you're bound to waste lots of time digging through large pieces of code and hoping you don't overlook the faulty line(s).

 

That's also why it's better to fix a bug as soon as it's found: at least you can assume it was most likely a recent change that caused it, which narrows down where to look (and something like 99% of the time the bug is either in the new code, or the new code exposed it and gives away where it was).

 

> GCC and Clang treat long as a 64-bit value on 64-bit systems, but MSVC requires you to use long long. In C++, at least.

GCC still treats long as 32-bit on Windows (at least MinGW-w64), as far as I know (feel free to correct me if that isn't the case), although it uses 64 bits everywhere else. No idea about Clang.

 

If you really need a specific guaranteed number of bits, though, the types in stdint.h are probably a better choice =P (assuming there isn't already some other type made specifically for the kind of data in question)


Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.

#12 Amadeus H   Members   -  Reputation: 1180


Posted 01 March 2013 - 02:45 AM

> That's also why it's better to fix a bug as soon as it's found: at least you can assume it was most likely a recent change that caused it, which narrows down where to look (and something like 99% of the time the bug is either in the new code, or the new code exposed it and gives away where it was).

 

I wish I could follow this. But here's the real process:

  1. I spot a bug.
  2. I report the bug.
  3. Someone else (a "release manager/coordinator") has to assign the bug to (hopefully) me.
  4. I come up with a suggested fix.
  5. A senior dev/architect accepts the fix.
  6. I apply the fix.
  7. The fix is code reviewed.
  8. The fix is tested during the next scheduled system test.
  9. The bug is fixed.


#13 Vortez   Crossbones+   -  Reputation: 2704


Posted 01 March 2013 - 11:24 AM

> I'm not sure what's worse here. The fact that "gb" can only take two different values (zero or one) on 32-bit systems

 

Actually it could be 2 if you're lucky :P



#14 Cornstalks   Crossbones+   -  Reputation: 6991


Posted 01 March 2013 - 11:37 AM

> > Actually, int remains 32 bits for most languages and compilers out there.
>
> Really? I thought it was based on the CPU and you had to target the compiler to 32-bit or whatever XX-bit.

int is suggested to be the same size as the "natural" size of an integer in the processor. x86 systems use 32-bit registers, hence most (all?) compilers targeting x86 make int a 32-bit integer, so it nicely matches the processor's "natural" integer size.

So what about these pesky 64-bit systems? Believe it or not, their "natural" size is arguably still 32 bits, at least for x64. x64 is really just x86 with extensions bolted on for 64-bit support: some 64-bit registers and a few new instructions layered over the same 32-bit x86 architecture. That suggests the "natural" size is still 32 bits, which means it still makes sense for int to be 32 bits.

But on top of that, the compiler is free to make int any size it wants (so long as it can represent ±32767). A lot of people have written code that assumes int is 32 bits, and keeping int at 32 bits when compiling for a 64-bit system avoids some of those headaches/bugs. So part of it is compiler implementors having mercy on those who wrote code that would break if int were 64 bits.


Edited by Cornstalks, 01 March 2013 - 11:49 AM.

[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

#15 mhagain   Crossbones+   -  Reputation: 8176


Posted 01 March 2013 - 03:39 PM

Re: int - well, not really.  The only thing absolutely guaranteed is that sizeof(short) <= sizeof(int) <= sizeof(long); otherwise you can't really rely on anything.  If you've got explicitly sized types available (int32, __int64, etc., depending on compiler/platform) it might not be a bad idea to use those instead in cases where this matters.




#16 SiCrane   Moderators   -  Reputation: 9630


Posted 01 March 2013 - 03:44 PM

In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.

#17 Sik_the_hedgehog   Crossbones+   -  Reputation: 1817


Posted 02 March 2013 - 12:22 AM

> > I'm not sure what's worse here. The fact that "gb" can only take two different values (zero or one) on 32-bit systems
>
> Actually it could be 2 if you're lucky :P

No (there aren't 32-bit systems with a larger int). To get 2 you need 2³¹, but the maximum you can store in a signed 32-bit int is 2³¹-1.
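To spell the arithmetic out with a throwaway C# line: even the largest value a signed 32-bit int can hold comes out as 1 after those divisions.

using System;

class MaxGb
{
    static void Main()
    {
        // int.MaxValue is 2^31 - 1 bytes, just under 2 GB.
        // Integer division truncates, so this prints 1 -- never 2.
        Console.WriteLine(int.MaxValue / 1024 / 1024 / 1024);
    }
}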

 

 

> In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.

I think char is required to be 8-bit now (it wasn't the case in old standards; in fact GCC had a platform with a 16-bit char).



#18 Cornstalks   Crossbones+   -  Reputation: 6991


Posted 02 March 2013 - 01:39 AM

> No (there aren't 32-bit systems with a larger int). To get 2 you need 2³¹, but the maximum you can store in a signed 32-bit int is 2³¹-1.

It's possible for int to be more than 32 bits on some particular system (even a 32-bit system). It's a rare occurrence (if it ever occurs, hence you'd need to be "lucky"), but it's not forbidden by C or C++.
 


> > In C and C++ you do have minimums. char is at least 8 bits, short and int are at least 16 bits, long is at least 32 and long long is at least 64.
>
> I think char is required to be 8-bit now (it wasn't the case in old standards; in fact GCC had a platform with a 16-bit char).


Nope. It's required to be at least 8 bits in C++11 and C11. It can be more than 8 bits. The C++ standard builds on the C standard and requires compliance with the following portion of the C standard (among other portions):
 

> 5.2.4.2.1 Sizes of integer types <limits.h>
>
> 1 The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
>
> number of bits for smallest object that is not a bit-field (byte)
> CHAR_BIT 8

 
It says there that CHAR_BIT, which defines the number of bits in a char, is at least 8 (but may be more: "their implementation-defined values shall be equal or greater...").

Edited by Cornstalks, 02 March 2013 - 01:56 AM.


#19 Sik_the_hedgehog   Crossbones+   -  Reputation: 1817


Posted 02 March 2013 - 10:28 PM

Weird, I remember reading that CHAR_BIT had to be exactly 8 (thereby forcing char to be a byte). Maybe I misunderstood and it's just POSIX that requires it?



#20 Cornstalks   Crossbones+   -  Reputation: 6991


Posted 03 March 2013 - 02:22 AM

> Maybe I misunderstood and it's just POSIX that requires it?

Yeah, POSIX requires it, so that's a likely possibility.