Why is sizeof(int) == 4 on an x64 system?

hkBattousai    190
#include <iostream>
#include <windows.h>	// defines TCHAR
#include <tchar.h>

int _tmain(int argc, TCHAR** argv)
{
	std::cout << "sizeof(char)       = " << sizeof(char) << std::endl;
	std::cout << "sizeof(TCHAR)      = " << sizeof(TCHAR) << std::endl;
	std::cout << "sizeof(short)      = " << sizeof(short) << std::endl;
	std::cout << "sizeof(int)        = " << sizeof(int) << std::endl;
	std::cout << "sizeof(long)       = " << sizeof(long) << std::endl;
	std::cout << "sizeof(long long)  = " << sizeof(long long) << std::endl;
	std::cout << "sizeof(__int64)    = " << sizeof(__int64) << std::endl;
	return 0;
}
Output:
sizeof(char)       = 1
sizeof(TCHAR)      = 2
sizeof(short)      = 2
sizeof(int)        = 4
sizeof(long)       = 4
sizeof(long long)  = 8
sizeof(__int64)    = 8
The output does not change even if I compile with the x86 configuration. Why does the int type always have a 4-byte size? Shouldn't it be 8 bytes on an x64 system?

Sneftel    1788
No, nothing forces int to be the same width as the architecture. Keeping int to 32 bits maintains the de facto standard width for the type and thereby improves cross-architecture compatibility.

hkBattousai    190
OK, I understand why int stayed 32-bit: it's for compatibility reasons.

I wonder one thing: does it take the same amount of CPU time to add two __int64 numbers on x86 and x64 CPUs?

For example:

__int64 num1, num2, num3;
num1 = 5;
num2 = 10;
num3 = num1 + num2;


Does this code take the same number of instruction cycles on x86 and x64 CPUs?
I mean, on an x64 system, is the type __int64 still processed as a pair of "int" values, or is it treated as a single whole number?

Sneftel    1788
Given that it doesn't take the same number of cycles on all x86 processors, and doesn't take the same number of cycles on all x64 processors? No.

cache_hit    614
Perhaps you're confusing int with size_t, which does change. I wouldn't say int stayed 4 bytes for compatibility reasons; it's because the size of int is implementation-defined. There was no reason TO change it, therefore it wasn't changed.

Sneftel    1788
Quote:
Original post by cache_hit
I wouldn't say int stayed 4 bytes for compatibility reasons; it's because the size of int is implementation-defined. There was no reason TO change it, therefore it wasn't changed.
Of course there was a reason to change it. Just like there was a reason to change int from 16 bits to 32 bits once 32 bit processors rolled around. The C89 standard specified that "a 'plain' int object has the natural size suggested by the architecture of the execution environment". To the extent that an architecture with 64-bit registers suggests any natural size, it suggests a 64-bit size.

hkBattousai    190
Quote:
Original post by Battousai
Does this code take the same number of instruction cycles on x86 and x64 CPUs?
I mean, on an x64 system, is the type __int64 still processed as a pair of "int" values, or is it treated as a single whole number?

Can anyone please answer or comment on this?

Quote:
Original post by Sneftel
Quote:
Original post by cache_hit
I wouldn't say int stayed 4 bytes for compatibility reasons; it's because the size of int is implementation-defined. There was no reason TO change it, therefore it wasn't changed.
Of course there was a reason to change it. Just like there was a reason to change int from 16 bits to 32 bits once 32 bit processors rolled around. The C89 standard specified that "a 'plain' int object has the natural size suggested by the architecture of the execution environment". To the extent that an architecture with 64-bit registers suggests any natural size, it suggests a 64-bit size.

I completely agree. Why shouldn't the size of int extend to 64 bits? After all, programmers were supposed to write platform-independent code (of course there are exceptions). If the problem was compatibility, they could put a note on the software product, something like "Works only on 32-bit systems", and compile a 64-bit alternative. Actually, from what I see on the web, people are already doing this.

Sneftel    1788
Quote:
Original post by Battousai
I mean, on an x64 system, is the type __int64 still processed as a pair of "int" values, or is it treated as a single whole number?
x86 has no instructions for 64-bit integer manipulation, though it does have instructions to simplify multi-word arithmetic.

Codeka    1239
Quote:
Original post by Battousai
After all, programmers were supposed to write platform-independent code (of course there are exceptions).
Yes, and they're supposed to write code that is bug-free. They're supposed to write code that follows the API specification (rather than relying on undocumented features that "just work"). They're supposed to write code that is free of security vulnerabilities. They're supposed to write code that is accessible to people with disabilities. The list goes on, but you can see that there are a lot of things that programmers are "supposed" to do that don't actually get done.

Here's a bit more background information on why the Win64 team chose the LLP64 model. Be sure to read all of the comments as well :)

cache_hit    614
Quote:
Original post by Battousai
Quote:
Original post by Battousai
Does this code take the same number of instruction cycles on x86 and x64 CPUs?
I mean, on an x64 system, is the type __int64 still processed as a pair of "int" values, or is it treated as a single whole number?

Can anyone please answer or comment on this?


On x86 systems, multiple assembly instructions are needed to add two __int64s. But you don't need us to answer this; you can just compile a simple x86 program that does this and look at the disassembly.


Quote:
Original post by Battousai
Quote:
Original post by Sneftel
Quote:
Original post by cache_hit
I wouldn't say int stayed 4 bytes for compatibility reasons; it's because the size of int is implementation-defined. There was no reason TO change it, therefore it wasn't changed.
Of course there was a reason to change it. Just like there was a reason to change int from 16 bits to 32 bits once 32 bit processors rolled around. The C89 standard specified that "a 'plain' int object has the natural size suggested by the architecture of the execution environment". To the extent that an architecture with 64-bit registers suggests any natural size, it suggests a 64-bit size.

I completely agree. Why shouldn't the size of int extend to 64 bits? After all, programmers were supposed to write platform-independent code (of course there are exceptions). If the problem was compatibility, they could put a note on the software product, something like "Works only on 32-bit systems", and compile a 64-bit alternative. Actually, from what I see on the web, people are already doing this.


I wasn't aware that the standard defined "int" in the way Sneftel described, but if that's the case then I stand corrected on there being NO reason to make the change. I still don't think it's all that important, though. size_t is exactly the type you're describing; it changes with the natural size of the platform.

[Edited by - cache_hit on February 17, 2010 8:36:44 PM]

Sneftel    1788
To be pedantic, size_t isn't totally immune from this. Because it's required to be big enough to cover the memory space, it can actually end up too large to fit in a register. Consider, as a prime example, the 286 architecture, where registers are 16 bits but size_t must be at least 20 bits, and was defined as a long under DOS C compilers. Of course, we're not doing too much Turbo C these days.

The other mitigating factor is that, honestly, people rarely need integers that don't fit into 32 bits. When they do, they know they do. That's a thoroughly squishy rationalization, but as empirical evidence, consider the utter lack of outcry about LLP64. To paraphrase someone who wasn't Bill Gates, four billion should be enough for anyone.

Antheus    2409
Quote:
Original post by Sneftel

That's a thoroughly squishy rationalization, but as empirical evidence, consider the utter lack of outcry about LLP64. To paraphrase someone who wasn't Bill Gates, four billion should be enough for anyone.


And it was, until around 2006, when someone finally ran a merge sort on more than 2^31 elements. The bug was present even in the Java standard library.

So in practice, 32 bits really is enough.

