Why sizeof(int) == 4 on an x64 system?

Quote:Original post by Battousai
I mean, on an x64 system, is the type __int64 still processed as a pair of "int" values, or is it treated as a whole number?
x86 has no instructions for 64-bit integer manipulation, though it does have instructions to simplify multi-word arithmetic.
Quote:Original post by Battousai
After all, programmers are supposed to write their code to be platform-independent (of course there are exceptions).
Yes, and they're supposed to write code that is bug-free. They're supposed to write code that follows the API specification (rather than relying on undocumented features that "just work"). They're supposed to write code that is free of security vulnerabilities. They're supposed to write code that is accessible to people with disabilities. The list goes on, but you can see that there are a lot of things that programmers are "supposed" to do that don't actually get done.

Here's a bit more background information on why the Win64 team chose the LLP64 model. Be sure to read all of the comments as well :)
Quote:Original post by Battousai
Quote:Original post by Battousai
Does this code take the same number of instruction cycles on x86 and x64 CPUs?
I mean, on an x64 system, is the type __int64 still processed as a pair of "int" values, or is it treated as a whole number?

Can anyone please answer or comment on this?


On x86 systems, multiple assembly instructions are needed to add __int64s. But you don't need us to answer this; you can just compile a simple program for x86 that does this and look at the disassembly.
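For example, a minimal program along these lines can be compiled for both targets and disassembled (a sketch only; the exact instruction sequence depends on the compiler and optimization level):

```cpp
#include <cstdint>
#include <cstdio>

// A separate function so the addition is easy to find in the disassembly.
// On a 32-bit x86 build the 64-bit addition is typically an ADD of the low
// halves followed by an ADC (add-with-carry) of the high halves; on an x64
// build it is usually a single 64-bit ADD.
std::int64_t add64(std::int64_t a, std::int64_t b)
{
    return a + b;
}

int main()
{
    std::int64_t x = 0x100000000LL;  // values that need more than 32 bits
    std::int64_t y = 0x200000000LL;
    std::printf("%lld\n", (long long)add64(x, y));
    return 0;
}
```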


Quote:Original post by Battousai
Quote:Original post by Sneftel
Quote:Original post by cache_hit
I wouldn't say int stayed 4 bytes for compatibility reasons; it's because the size of int is implementation-defined. There was no reason TO change it, therefore it wasn't changed.
Of course there was a reason to change it. Just like there was a reason to change int from 16 bits to 32 bits once 32-bit processors rolled around. The C89 standard specified that "a 'plain' int object has the natural size suggested by the architecture of the execution environment". To the extent that an architecture with 64-bit registers suggests any natural size, it suggests a 64-bit size.

I agree entirely. Why shouldn't the size of int extend to 64 bits? After all, programmers are supposed to write their code to be platform-independent (of course there are exceptions). If the problem was compatibility, they could put a note on the software product saying something like "Works only on 32-bit systems" and compile a 64-bit alternative. Actually, from what I see on the web, people are already doing this.


I wasn't aware that the standard defined "int" in such a way as Sneftel described, but if that's the case then I stand corrected on there being NO reason to make the change. I still don't think it's all that important, though. size_t is exactly the type you're describing: it changes to the natural size of the platform.
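A quick way to see this is to print the sizes on different targets. A minimal sketch; the values in the comments are typical for common ILP32, LLP64, and LP64 implementations, but all of these sizes are implementation-defined:

```cpp
#include <cstdio>

int main()
{
    // Typical 32-bit build:      int=4, long=4, long long=8, size_t=4, void*=4
    // 64-bit Windows (LLP64):    int=4, long=4, long long=8, size_t=8, void*=8
    // 64-bit Linux (LP64):       int=4, long=8, long long=8, size_t=8, void*=8
    std::printf("int=%zu long=%zu long long=%zu size_t=%zu void*=%zu\n",
                sizeof(int), sizeof(long), sizeof(long long),
                sizeof(size_t), sizeof(void*));
    return 0;
}
```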

To be pedantic, size_t isn't totally immune from this. Because it's required to be big enough to cover the memory space, it can actually end up too large to fit in a register. Consider, as a prime example, the 286 architecture, where registers are 16 bits but size_t must be at least 20 bits, and was defined as a long under DOS C compilers. Of course, we're not doing too much Turbo C these days.

The other mitigating factor is that, honestly, people rarely need integers that don't fit into 32 bits. When they do, they know they do. That's a thoroughly squishy rationalization, but as empirical evidence, consider the utter lack of outcry about LLP64. To paraphrase someone who wasn't Bill Gates, four billion should be enough for anyone.
Quote:Original post by Sneftel

That's a thoroughly squishy rationalization, but as empirical evidence, consider the utter lack of outcry about LLP64. To paraphrase someone who wasn't Bill Gates, four billion should be enough for anyone.


And it wasn't until around 2006 that someone finally ran a merge sort on more than 2^30 elements. The bug was present even in the Java standard library.

So in practice, 32 bits really is enough.
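The bug being referred to is presumably the well-known signed-midpoint overflow in binary search and merge sort, publicized in 2006. A minimal C++ sketch of the problem and the usual fix (names are illustrative):

```cpp
#include <cstddef>

// Broken: if lo + hi exceeds INT_MAX (possible once the range holds
// roughly 2^30 or more elements), the sum wraps to a negative value
// and the computed midpoint is garbage.
int midpoint_buggy(int lo, int hi)
{
    return (lo + hi) / 2;
}

// The usual fix: compute the offset first so the sum never overflows.
int midpoint_fixed(int lo, int hi)
{
    return lo + (hi - lo) / 2;
}

// Using a type sized to the platform (size_t) for indices avoids the
// problem for in-memory arrays in the first place.
std::size_t midpoint_sized(std::size_t lo, std::size_t hi)
{
    return lo + (hi - lo) / 2;
}
```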

