Zodiak

INT question



I'll get straight to the point: do you gain a significant performance or memory benefit from choosing data types according to the size of a given value, i.e. not always using int for numbers, but short, byte and so on for smaller ones? Hope that makes sense. Basically, is it worth bothering with from the very beginning (I am a beginner)? For example, if I know a given variable can never be greater than 10, is int (whose maximum value is far larger) overkill? Does it result in more optimized code, or is the difference negligible?
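To make it concrete, here's roughly the kind of choice I mean (a made-up C++ snippet; the names are just for illustration):

    #include <cstdint>

    int main()
    {
        // The value can never exceed 10, so an 8-bit type is "big enough"...
        std::int8_t lives_small = 3;   // 1 byte, range -128..127

        // ...but the obvious default would just be a plain int.
        int lives_plain = 3;           // typically 4 bytes

        return lives_small + lives_plain; // use the variables so nothing warns
    }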

I believe it's a good coding habit. I know it used to be a big deal when memory wasn't so cheap... that said, I still try to keep my declarations within the bounds of the values they hold.

The difference is irrelevant in 99% of code.

In fact, obsessing over it can be harmful. Accessing a short on a 32-bit system has penalties that a native int will not incur, for example. In general, on most CPUs you will be writing code for in 2011, you're better off sticking to the native machine word size, and messing with the smaller types only when you absolutely need to.
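If you'd rather measure than take my word for it, a naive loop like the following is enough to compare (a rough C++ sketch; the element count is arbitrary, and note that on a big memory-bound pass like this the smaller type can even win on bandwidth, while tight register-bound arithmetic tends to favor the native word):

    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Sum a big array of T and print how long it took.
    template <typename T>
    void sum_and_time(const char* label)
    {
        std::vector<T> data(50000000, T(1));
        auto start = std::chrono::steady_clock::now();
        long long total = 0;
        for (std::size_t i = 0; i < data.size(); ++i)
            total += data[i];   // the load/widen cost is what differs by type
        auto stop = std::chrono::steady_clock::now();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
        std::printf("%s: sum=%lld in %lld ms\n", label, total, (long long)ms.count());
    }

    int main()
    {
        sum_and_time<std::int16_t>("short");
        sum_and_time<std::int32_t>("int");
    }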


Consider: if you compare a short vs. an int on a typical 32-bit platform, you're talking about 2 bytes vs. 4, i.e. a saving of 2 bytes per value. It would take half a billion copies of your data (half a billion × 2 bytes ≈ 1 GB) to make a back-breaking difference on a commodity PC with a couple GB of RAM.

Are you dealing with half a billion objects in your program? Maybe it's worth worrying about 16 vs. 32 bits. Otherwise, screw it; memory is cheap and CPUs are better at native word-sized data anyway.
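The arithmetic itself is trivial to sanity-check; this little sketch just multiplies the per-value saving by some made-up object counts:

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // Saving of a 2-byte short over a 4-byte int, per value.
        const long long per_value = sizeof(std::int32_t) - sizeof(std::int16_t);
        const long long counts[] = { 1000, 1000000, 500000000 };
        for (int i = 0; i < 3; ++i)
            std::printf("%lld values -> %.1f MB saved\n",
                        counts[i], counts[i] * per_value / (1024.0 * 1024.0));
    }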

Thank you!

No, I'm just a beginner and will probably never have to worry about optimization, but since I want to avoid picking up bad habits from the start, I thought I'd ask.

So that means I should use ints and doubles? And not worry about the rest?


So that means I should use ints and doubles? And not worry about the rest?

Probably ints and floats. A double is typically (though not always!) 8 bytes, whereas a float is typically 4 bytes (and so is an int, typically), so a processor is (typically) tuned to process 4-byte floats. Notice the repetition of "typically": there is no guarantee of any of this.
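If you're curious what "typical" works out to on your own machine, printing the sizes takes a few lines (the sizes are implementation-defined, so your output may differ):

    #include <cstdio>

    int main()
    {
        // Each size is implementation-defined; this just reports what
        // your compiler/platform actually uses.
        std::printf("short:  %u bytes\n", (unsigned)sizeof(short));
        std::printf("int:    %u bytes\n", (unsigned)sizeof(int));
        std::printf("float:  %u bytes\n", (unsigned)sizeof(float));
        std::printf("double: %u bytes\n", (unsigned)sizeof(double));
    }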

I was coding for a Pentium III recently, and I got almost a 10% speed increase when I used the same type for all my numbers. I don't know whether you get a noticeable speedup on newer systems, though.
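For the curious, the kind of mixing I mean looks something like this (a hypothetical C++ sketch; whether the compiler actually emits extra conversion instructions depends on the target and optimization level):

    #include <cstdio>

    // Mixed types: 'count' is promoted and converted to float, the sum is
    // widened to double for '+ offset', then narrowed back to float on return.
    float scale_mixed(short count, float step, double offset)
    {
        return count * step + offset;
    }

    // Uniform types: everything stays a float; no conversions are needed.
    float scale_uniform(float count, float step, float offset)
    {
        return count * step + offset;
    }

    int main()
    {
        std::printf("%f %f\n", scale_mixed(3, 0.5f, 1.0),
                               scale_uniform(3.0f, 0.5f, 1.0f));
    }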

Share on other sites