hkBattousai

Why sizeof(int) == 4 on an x64 system?


#include <iostream>
#include <tchar.h>

int _tmain(int argc, TCHAR** argv)
{
	std::cout << "sizeof(char)       = " << sizeof(char) << std::endl;
	std::cout << "sizeof(TCHAR)      = " << sizeof(TCHAR) << std::endl;
	std::cout << "sizeof(short)      = " << sizeof(short) << std::endl;
	std::cout << "sizeof(int)        = " << sizeof(int) << std::endl;
	std::cout << "sizeof(long)       = " << sizeof(long) << std::endl;
	std::cout << "sizeof(long long)  = " << sizeof(long long) << std::endl;
	std::cout << "sizeof(__int64)    = " << sizeof(__int64) << std::endl;
	return 0;
}
Output:
sizeof(char)       = 1
sizeof(TCHAR)      = 2
sizeof(short)      = 2
sizeof(int)        = 4
sizeof(long)       = 4
sizeof(long long)  = 8
sizeof(__int64)    = 8
The output does not change even if I compile with the x86 configuration. Why does the int type always have a 4-byte size? Shouldn't it be 8 bytes on an x64 system?

No, nothing forces int to be the same width as the architecture. Keeping int at 32 bits maintains the de facto standard width for the type and thereby improves cross-architecture compatibility.

OK, I understand why int stayed 32-bit; it's for compatibility reasons.

One thing I wonder: does it take the same CPU time to sum two __int64 numbers on x86 and x64 CPUs?

For example:

__int64 num1, num2, num3;
num1 = 5;
num2 = 10;
num3 = num1 + num2;


Does this code take the same number of instruction cycles on x86 and x64 CPUs?
I mean, on an x64 system, is the type __int64 still processed as a pair of 32-bit "int" values, or is it treated as a single whole number?

Given that it doesn't take the same number of cycles on all x86 processors, and doesn't take the same number of cycles on all x64 processors? No.

That's not up to the language, that's up to the compiler. Compile it and look at the generated assembly.

Perhaps you're confusing int with size_t, which does change. I wouldn't say int stayed 4 bytes for compatibility reasons; it's because the size of int is implementation-defined. There was no reason TO change it, therefore it wasn't changed.

Quote:
Original post by cache_hit
I wouldn't say int stayed 4 bytes for compatibility reasons; it's because the size of int is implementation-defined. There was no reason TO change it, therefore it wasn't changed.
Of course there was a reason to change it. Just like there was a reason to change int from 16 bits to 32 bits once 32 bit processors rolled around. The C89 standard specified that "a 'plain' int object has the natural size suggested by the architecture of the execution environment". To the extent that an architecture with 64-bit registers suggests any natural size, it suggests a 64-bit size.

Quote:
Original post by Battousai
Does this code take the same number of instruction cycles on x86 and x64 CPUs?
I mean, on an x64 system, is the type __int64 still processed as a pair of 32-bit "int" values, or is it treated as a single whole number?

Can anyone please answer or comment on this?

Quote:
Original post by Sneftel
Quote:
Original post by cache_hit
I wouldn't say int stayed 4 bytes for compatibility reasons; it's because the size of int is implementation-defined. There was no reason TO change it, therefore it wasn't changed.
Of course there was a reason to change it. Just like there was a reason to change int from 16 bits to 32 bits once 32 bit processors rolled around. The C89 standard specified that "a 'plain' int object has the natural size suggested by the architecture of the execution environment". To the extent that an architecture with 64-bit registers suggests any natural size, it suggests a 64-bit size.

I totally agree. Why shouldn't the size of int extend to 64 bits? After all, programmers are supposed to write platform-independent code (with some exceptions, of course). If the problem was compatibility, vendors could have put a note on the product, something like "Works only on 32-bit systems", and shipped a 64-bit build as an alternative. Actually, from what I see on the web, people are already doing this.
