Actually, int remains 32 bits for most languages and compilers out there.
Really? I thought it was based on the CPU and you had to target the compiler to 32-bit or whatever XX-bit.
int is suggested to be the same size as the "natural" size of an integer in the processor. x86 systems use 32 bit registers, hence most (all?) compilers targeting an x86 system will make int a 32 bit integer, so it nicely matches the processor's "natural" integer size.
So what about these pesky 64 bit systems? Believe it or not, their "natural" size is arguably still 32 bits, at least for x64 systems. x64 isn't a from-scratch 64 bit design; it's the x86 architecture with extensions bolted on for 64 bit support: some 64 bit registers and a few new instructions layered on top of the existing 32 bit core. In fact, even in 64 bit mode, the default operand size for most instructions is still 32 bits. That suggests their "natural" size is still 32 bits, which means it still makes sense for int to be 32 bits.
But on top of that, the compiler is free to make int any size it wants, so long as it can represent at least the range -32767 to +32767 (i.e. at least 16 bits). A lot of people have written code that assumes int is 32 bits, and by keeping int at 32 bits when compiling for a 64 bit system, some headaches/bugs can be avoided. So part of it is compiler implementors having mercy on those who wrote code that would break if int were 64 bits.