INT question

Started by
4 comments, last by zacaj 12 years, 5 months ago
I'll get straight to the point: does one gain a significant overall performance or memory benefit by choosing data types according to the size of a given value, i.e. not always using int for numbers, but using short, byte and so on for smaller ones? Hope that makes sense. Basically, is it worth bothering with this from the very beginning (as I am a beginner)? For example, if I know a given variable can never be greater than 10, is using int (whose maximum value is... large) overkill? Does it result in more optimized code, or is the difference negligible?
I'd definitely say that a glass is half-empty!
I believe it's a good coding habit. I know it used to be a big deal when memory wasn't so cheap... that said, I still try to keep my declarations within bounds.
Code makes the man
The difference is irrelevant in 99% of code.

In fact, obsessing over it can be harmful. Accessing a short on a 32-bit system has penalties that a native int will not incur, for example. In general on most CPUs you will be writing code for in 2011, you're better off sticking to the native machine word size, and messing with the rest only when you absolutely need to.


Consider: if you compare a short vs. an int on a typical 32-bit platform, you're talking about 2 bytes vs 4. It would take half a billion copies of your data to make a back-breaking difference on a commodity PC with a couple GB of RAM.

Are you dealing with half a billion objects in your program? Maybe it's worth worrying about 16 vs 32 bits. Otherwise, screw it; memory is cheap and CPUs are better at native word-sized data anyway.
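
A quick back-of-the-envelope sketch of that math, just as an illustration in C++ (the exact byte sizes are implementation-defined, so treat the numbers as typical rather than guaranteed):

#include <cstddef>
#include <cstdint>
#include <cstdio>

int main()
{
    // For a single scalar, the saving is a couple of bytes at most:
    std::printf("short: %zu bytes, int: %zu bytes\n",
                sizeof(std::int16_t), sizeof(int));

    // It only adds up across enormous element counts:
    const std::size_t count = 500000000; // half a billion elements
    std::printf("int16 array: ~%zu MB, int32 array: ~%zu MB\n",
                count * sizeof(std::int16_t) / (1024 * 1024),
                count * sizeof(std::int32_t) / (1024 * 1024));
    return 0;
}

With half a billion elements, the difference is roughly 1 GB vs 2 GB; with a handful of variables, it's a handful of bytes.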

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Thank you!

No, I'm just a beginner and will probably never have to worry about optimization, but since I want to avoid picking up bad habits straight away, I thought I'd ask.

So that means I should use ints and doubles? And not worry about the rest?
I'd definitely say that a glass is half-empty!

So that means I should use ints and doubles? And not worry about the rest?

Probably ints and floats. Doubles are typically (though not always!) 8 bytes, whereas a float is typically 4 bytes (and so are ints, typically), so a processor is (typically) tuned to process 4-byte floats. Notice the repetition of "typical," as there is no guarantee with any of this.
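
If you want to see what your own compiler actually uses, a tiny check like this (just a sketch) prints the sizes:

#include <cstdio>

int main()
{
    std::printf("int: %zu, float: %zu, double: %zu bytes\n",
                sizeof(int), sizeof(float), sizeof(double));
    // Most desktop compilers print 4, 4, 8 here, but the standard
    // only guarantees minimum ranges, not these exact sizes.
    return 0;
}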
[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]
I was coding for a Pentium 3 recently, and I got almost a 10% increase in speed when I used the same type for all my numbers. I don't know if you get a noticeable speedup on newer systems, though.
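
For what it's worth, here is a hedged guess at the kind of thing that speedup can come from: mixing float and double in a hot loop forces conversions that keeping a single type avoids. The function names below are made up for illustration, and the actual gain (if any) depends entirely on the compiler and CPU:

// Mixed types: each float element gets widened to double every iteration,
// then the result is narrowed back at the end (the accumulator precision
// also differs, which is its own trade-off).
float sumMixed(const float* values, int count)
{
    double total = 0.0;
    for (int i = 0; i < count; ++i)
        total += values[i];
    return static_cast<float>(total);
}

// Same type throughout: no conversions in the inner loop.
float sumSame(const float* values, int count)
{
    float total = 0.0f;
    for (int i = 0; i < count; ++i)
        total += values[i];
    return total;
}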

This topic is closed to new replies.
