Are these variable types used a lot in game programming?

Started by
14 comments, last by ravengangrel 13 years ago
The variables in question being:

short
ushort
long
sbyte
byte
ulong
uint

byte is the only one I've used from this list, doing something with the color palette.

But these all look like overkill, instead of just using the floats/decimals/ints/doubles of the programming world. Is there any specific reason why I should learn these, or should I not worry about them for a while?

[quote]
The variables in question being:

short
ushort
long
sbyte
byte
ulong
uint

byte is the only one I've used from this list, doing something with the color palette.

But these all look like overkill, instead of just using the floats/decimals/ints/doubles of the programming world. Is there any specific reason why I should learn these, or should I not worry about them for a while?
[/quote]


I've seen longs and ulongs used before, but let's be honest here: primitive data types aren't exactly the most complex thing in the world. Know that they exist, look them up when you've got a compelling need (like interop) to use something other than char/int/double.
For binary serialisation, you will use them. Elsewhere, it won't really matter. In some cases a smaller type can bring large memory-bandwidth gains, but it is critical not to prematurely optimise these things: an overflow bug is a much more subtle problem than a noticeable slowdown.
They are used in embedded programming and in highly bandwidth-limited applications, like some networking code. For most PC applications, no you don't need them.
So the verdict is rarely?
Low-level systems stuff, and when dealing with file-formats.

For day-to-day stuff you've got int, float etc ;)
Yeah, you'll run into them, but it's rarely an issue. All those data types are quite close, and differ mostly in the number of bits they hold. You should be able to find reference for the compiler you're using as to how it interprets the various data types.

[quote]
Low-level systems stuff, and when dealing with file-formats.

For day-to-day stuff you've got int, float etc ;)
[/quote]


This is one of the reasons why we have to have so much memory nowadays. Use the smallest data type that will accomplish the job. Anything else is just a waste of memory.

[quote]
This is one of the reasons why we have to have so much memory nowadays. Use the smallest data type that will accomplish the job. Anything else is just a waste of memory.
[/quote]
If I've got a variable that has a range of 0-9001, I'd likely still put it in an [font="Lucida Console"]int[/font] rather than a [font="Lucida Console"]short[/font], unless there's a good reason to reduce the memory usage (e.g. serialisation across network/disk/system-bus). Normally you just want to stick with the CPU word size to avoid packing overheads.

[quote]
This is one of the reasons why we have to have so much memory nowadays.
[/quote]

You almost make it sound as if it was a bad thing!

[quote]
If I've got a variable that has a range of 0-9001, I'd likely still put it in an [font="Lucida Console"]int[/font] rather than a [font="Lucida Console"]short[/font], unless there's a good reason to reduce the memory usage (e.g. serialisation across network/disk/system-bus). Normally you just want to stick with the CPU word size to avoid packing overheads.
[/quote]
Indeed, there can be some reasons to use larger variables than needed (although reducing memory usage is not the only reason to avoid that; think of a video player: even a 1920p video frame isn't big by today's standards, but would you dare to work with only ints to, say, adjust contrast and brightness at a low level?)

Ignoring the fact that you are oversimplifying, this is a discussion in a beginner's thread where someone is asking whether he must learn the basic types of his language of choice.
I find it shocking that the message most of you are transmitting is: "D'oh, who cares".
Yeah, perhaps most of the time you can just throw an int at it. Perhaps most of the time it's even better. But you do it for a reason, not because you don't have another choice. In the same way that TheTroll chooses to use the smallest possible data type. You can argue against the wisdom of that, nowadays that we count RAM in GB and hard disks in TB, but the fact is that both of you know what you need, balance the advantages and the disadvantages, and make your choice, be it better or worse.

GraySnakeGenocide, you can probably do almost the same things without knowing those data types (not everything, and some of it only in an overly complex way). However, those types are there for a reason. I personally don't find it very compelling to work with pros who can't say the max value of a 32-bit integer (I don't need the exact value, but they should know it's around 2 billion if it's signed).
Probably knowing the exact values is overkill, but you should get a grasp of them, and know the reason (signed: -2^(n-1) to 2^(n-1) - 1; unsigned: 0 to 2^n - 1).
You should know that 32767 + 1 = -32768 when you are using shorts, and that 65535 becomes -1 when converting a ushort to a short. And, most important, you should know WHY.

The time you spend learning this will be saved in frustrating, clueless debugging sessions.

Dude, I feel so '90s now. I'd better take a shower.

This topic is closed to new replies.
