
Densun

int vs. long


"int" is a mere 3 letters long, whereas "long" is an unbearable 4 letters. Using int can save you lots of typing time in the long run.

quote:
Original post by krez
just in case: you can fit twice as many 32-bit integers as 64-bit ones into the same memory...

Are you saying that a long is 64-bits? I always thought it was 32.

Guest Anonymous Poster
A long takes up 8 bytes of memory, as opposed to an integer, which takes 4 bytes.

Guest Anonymous Poster
Um... actually, in most C++ compilers an INT is 32 bits, and a LONG is ALWAYS 32 bits. An int can be of different sizes, so for cross-compiler compatibility I always use short for 16-bit numbers and long for 32-bit. I try to avoid ints whenever possible.

The ANSI C/C++ standard states that a short shall be a 16-bit data type and a long shall be a 32-bit data type, while an int is dependent on the target: 16-bit compilers (the Borland Turbo C/C++ series for DOS, for example) defined ints as 16 bits, while MSVC (a 32-bit compiler) defines ints as 32 bits. Once 64-bit compilers start coming out, ints will be defined as 64-bit, and so on. It's safer to stick with short/long; that way you know exactly what you're getting/using. Even though it's an extra letter every time you type long instead of int.

Billy - BillyB@mrsnj.com

Guest Anonymous Poster
Oh, and by the way... long takes up 4 bytes, a.k.a. 32 bits.

Billy - BillyB@mrsnj.com

This comes up over & over & over & over...

The only thing the standard says is that:

sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)

There are no guarantees that an int is going to be 32 bits (or whatever). The only guarantee you have is that an int is at least as big as a short and no bigger than a long.

You're not even guaranteed that sizeof(char)==1 (8-bit) *byte*. You're only guaranteed that it is 1 *something*.

Practically speaking, of course, if sizeof(char) were anything other than one 8-bit byte it would break virtually every non-trivial C/C++ app written in the last 15 years or so. The same *cannot* be said of short, int, and long: their sizes can, and have, changed between compilers, OSes, etc.

In MSVC on Win32, short is 16 bits and int and long are both 32 bits, but that is only true for that specific platform.
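
If you're curious what your own compiler picks, a quick check is easy to write (just a sketch; the numbers it prints are whatever your platform happens to use):

#include <stdio.h>

int main(void)
{
    /* All of these are implementation-defined; the only guarantee is
       the ordering char <= short <= int <= long. */
    printf("sizeof(char)  = %u\n", (unsigned)sizeof(char));
    printf("sizeof(short) = %u\n", (unsigned)sizeof(short));
    printf("sizeof(int)   = %u\n", (unsigned)sizeof(int));
    printf("sizeof(long)  = %u\n", (unsigned)sizeof(long));
    return 0;
}

On MSVC/Win32 that prints 1, 2, 4, 4; other compilers are free to differ.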

-Mike

quote:
Original post by Dredge-Master
When I need a 32-bit number I use long. As stated before, int should be 16-bit under ANSI, but it can change between compilers and platforms. Just helps to avoid "head scratching" when porting the code around.


This is wrong. An int is not required to be any particular size under ANSI, and neither is long. The only requirement put on the sizes of types is as stated by Anon Mike. Don't get mixed up between what your compiler does and what ANSI requires.

--
The placement of a donkey's eyes in its head enables it to see all four feet at all times.

A common strategy is to use the types defined in basetsd.h (on Win32) like INT16, UINT32, etc.

That way, if you need to port your code, you can change the types easily.
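
For example (a rough sketch; windows.h pulls in basetsd.h, and the struct and field names here are just made up for illustration):

#include <windows.h>    /* brings in basetsd.h and its sized types */

/* A record whose layout shouldn't shift even if "int" does. */
struct SaveGameHeader
{
    UINT32 version;     /* always 32 bits */
    UINT32 checksum;    /* always 32 bits */
    INT16  levelIndex;  /* always 16 bits */
};

For anything written to disk or the wire you'd still need to worry about padding and byte order, but at least the widths stay put.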

- Pete

If you need to port it where? To another version of Visual C++? Seriously, relying on C++ types to be a particular size is a platform-dependent assumption and is, therefore, not portable. Names like INT16 obviously assume a size.

--
The placement of a donkey's eyes in its head enables it to see all four feet at all times.

Oh jeez.

Say you want to have a game that runs on several platforms, and you want data to be exchangeable between all platforms. Saved games/demos/networking/3d models/whatever.

Using "int" would be crazy, as there''s no guarantee of its size. Using INT32 would make a bit more sense, as you can define the type in one place, and have the preprocessor choose the right types for each platform.

#ifdef _X86_
typedef int INT32;      /* int happens to be 32 bits on 32-bit x86 compilers */
#endif

#ifdef _SOME_64BIT_MACHINE_
typedef short INT32;    /* whichever type happens to be 32 bits on that platform */
#endif

This is all platform-dependent, but I'd much rather have just one file that I need to change per platform, instead of changing types all over the place.
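
You can also make the compiler confirm the mapping on each platform (a sketch using the old negative-array-size trick; the typedef name is made up):

/* Fails to compile if INT32 is not actually 4 bytes on this platform. */
typedef char INT32_must_be_4_bytes[(sizeof(INT32) == 4) ? 1 : -1];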

- Pete

I didn't say you never needed a type of a specific size, I have to use them myself. I objected to the idea that the practice is portable.

--
The placement of a donkey's eyes in its head enables it to see all four feet at all times.

Edited by - SabreMan on February 21, 2002 9:13:15 AM

To clarify, the standard states that int must be at least 16 bits and long must be at least 32 bits. I'm actually not sure if there are any guarantees for short. As noted before, int is required to be at least as large as short, and long is required to be at least as large as int. This makes sense, since 'short' and 'long' are just shorthand for 'short int' and 'long int' anyway.
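
If you want to see the actual ranges your compiler gives you, they're spelled out in <limits.h>/<climits> (a small sketch; the required minimums are SHRT_MAX >= 32767, INT_MAX >= 32767, and LONG_MAX >= 2147483647):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* These are the ranges your implementation actually provides;
       the standard only pins down minimum magnitudes. */
    printf("SHRT_MAX = %d\n",  SHRT_MAX);
    printf("INT_MAX  = %d\n",  INT_MAX);
    printf("LONG_MAX = %ld\n", LONG_MAX);
    return 0;
}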

--
Eric

quote:
Original post by Anonymous Poster
A long takes up 8 bytes of memory, as opposed to an integer, which takes 4 bytes.


You're thinking of long double, which is 8 bytes (at least on Visual C++; then again, on Visual C++, a long double is the same as a double...)

Ya...from what I'm hearing, everyone sounds right about the rest

"I've learned something today: It doesn't matter if you're white, or if you're black...the only color that REALLY matters is green"
-Peter Griffin

Edited by - matrix2113 on February 21, 2002 6:03:21 PM

quote:

You're thinking of long double, which is 8 bytes (at least on Visual C++; then again, on Visual C++, a long double is the same as a double...)



That's a floating-point data type, and with GCC it's 12 bytes.

I think MSVC provides an __int64 data type or something similar. GCC provides "long long", which is 64 bits.

Using 64-bit data types on a 32-bit machine such as the x86 isn't going to help performance and will probably only hinder it. Integer calculations are typically done with the 32-bit register set, which means 64-bit math introduces overhead. Bigger is not always better.

Sure, 64-bit and 128-bit extensions exist (MMX, SSE, SSE2), but these are intended for vector operations on bytes, words, and dwords rather than scalar qword operations. In fact, relying on these instructions for simple math could come at a performance cost, if I remember correctly (I know MMX instructions have a prefix byte).

Good uses I've found for 64-bit data types on x86:

- Detecting carry and overflow conditions when emulating 32-bit machines
- Getting the full result of a 32-bit * 32-bit multiplication (sketched below)
- Storing more than 32 flags
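
For instance, the full-result multiply is just a cast away (a sketch; the MSVC spelling is __int64, GCC's is long long, and the function name is made up):

/* MSVC spelling; with GCC, use "unsigned long long" instead of "unsigned __int64". */
unsigned __int64 mul32x32(unsigned long a, unsigned long b)
{
    /* Widen one operand first so the multiply happens in 64 bits
       and the upper 32 bits of the product are preserved. */
    return (unsigned __int64)a * b;
}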

---
Bart

Guest Anonymous Poster
To clarify some things:

int is defined as being the fastest type the machine can handle. If you're on a crippled Z80 with an 8-bit compiler, then ints are 8 bits wide.

short is defined as an integer type in between char and int.

long is defined as the longest type the machine can natively handle for general-purpose use. If you're on the x86 (excluding the AMD Hammer), then they're 32 bits wide, unless you're in real mode, in which case they're 16 bits wide.

About long == 64 bits on the x86, I think you're confusing C with Java...

quote:
Original post by Anon Mike
You're not even guaranteed that sizeof(char)==1 (8-bit) *byte*. You're only guaranteed that it is 1 *something*.



Yup, there is no guarantee that a byte is 8 bits. There were machines with 9-bit or 13-bit bytes (still true in some embedded systems).

The previous AP up there was fairly confused about the C/C++ standards. As I said, int is guaranteed at least 16 bits, long is guaranteed at least 32 and at least the same size as int.

It's true that 'int' is typically the most efficient integer representation for the machine (not guaranteed, but true for every machine I've ever heard of). Long being 64 bits on x86 would be compiler-dependent; MSDev's is 32, but others may do it differently.

--
Eric

quote:
Original post by ekenslow
The previous AP up there was fairly confused about the C/C++ standards. As I said, int is guaranteed at least 16 bits, long is guaranteed at least 32 and at least the same size as int.

Guaranteed by what?


  1. Objects declared as characters (char) shall be large enough to store any member of the implementation's basic character set. [...]

  2. There are four signed integer types: “signed char”, “short int”, “int”, and “long int.” In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment [39]; the other signed integer types are provided to meet special needs.

  3. For each of the signed integer types, there exists a corresponding (but different) unsigned integer type: “unsigned char”, “unsigned short int”, “unsigned int”, and “unsigned long int,” each of which occupies the same amount of storage and has the same alignment requirements (3.9) as the corresponding signed integer type; that is, each signed integer type has the same object representation as its corresponding unsigned integer type. [...]

[39] That is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>.


long is guaranteed to be at least as large as int, but there is no guarantee that it won't be the same as short, or even char.


Edited by - DrPizza on February 23, 2002 1:30:42 AM

quote:
Original post by Anon Mike
You're not even guaranteed that sizeof(char)==1 (8-bit) *byte*. You're only guaranteed that it is 1 *something*.



The Standard actually does guarantee that sizeof(char) is 1 byte, because the sizeof operator "yields the size (in bytes) of its operand" [C89 6.3.3.4, C99 6.5.3.4].
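
And for the related bits-per-byte question, <limits.h>/<climits> exposes it directly (a tiny sketch):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* sizeof(char) is always 1 byte, but CHAR_BIT says how many bits
       that byte has; the standard requires at least 8, not exactly 8. */
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    return 0;
}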

Edited by - spock on February 23, 2002 6:43:02 PM

quote:
Original post by ekenslow
The previous AP up there was fairly confused about the C/C++ standards. As I said, int is guaranteed at least 16 bits, long is guaranteed at least 32 and at least the same size as int.



There are no such guarantees. This has been pointed out by various posters to this thread about half a dozen times, with accompanying quotes from the Standard. Why is it so hard for you people to understand?

--

The placement of a donkey's eyes in its head enables it to see all four feet at all times.
