32-bit to 64-bit

8 comments, last by SirTwist 20 years ago
Hi there, can anybody explain to me what exactly it means to convert a program from 32-bit to 64-bit? I mean, I know that you can address a far bigger amount of memory with 64-bit - 16 EB against 4 GB, or something like that. BUT ... what does that mean for your source code? What exactly are the changes that have to be made? So if someone would enlighten me here ..... thaaaaanks again! Twist
Not very much, the pointer size is increased to 64-bit; all the other things remain as they are, I think. Oh yes, I've almost forgotten.. the default alignment of structures is 8 now.
Just something I thought (Java):

Imagine you've got an integer 0xFFFFFFFF.
This integer is negative, because the first bit, the sign bit, is set.
Now if you switch to a 64-bit system, an int is 64 bits instead of 32 (isn't it?), so the integer will suddenly become positive. It becomes 0x00000000FFFFFFFF, so the sign bit isn't set anymore.

(this is just what I thought, I am not sure of this at all)
Why do my programs never work on other computers?
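For what it's worth, in Java an int is always exactly 32 bits, so this particular surprise belongs to C and C++, where integer sizes are up to the implementation. A minimal C++ sketch of the effect, assuming a typical two's complement platform and the fixed-width types from <cstdint>:

    #include <cstdint>
    #include <cstdio>

    int main() {
        // 0xFFFFFFFF does not fit in a signed 32-bit int, so this
        // conversion is implementation-defined; on the usual two's
        // complement platforms it comes out as -1 (sign bit set).
        std::int32_t a = 0xFFFFFFFF;

        // In a 64-bit type the same literal fits with room to spare,
        // so the stored value is the positive 4294967295.
        std::int64_t b = 0xFFFFFFFF;

        std::printf("32-bit: %d\n", static_cast<int>(a));
        std::printf("64-bit: %lld\n", static_cast<long long>(b));
    }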
Regardless of whether you're on a 32-bit or a 64-bit machine, won't an int be 32 bits?
quote:Original post by skow
Regardless of whether you're on a 32-bit or a 64-bit machine, won't an int be 32 bits?


A signed int can hold all the values between INT_MIN and INT_MAX inclusive. INT_MIN is required to be -32767 or less, and INT_MAX must be at least 32767. Again, many 2's complement implementations will define INT_MIN to be -32768, but this is not required.
An unsigned int can hold all the values between 0 and UINT_MAX inclusive. UINT_MAX must be at least 65535. The int types must contain at least 16 bits to hold the required range of values.

These are defined in <limits.h> (or <climits> in C++) by your compiler for the platform in question.
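A quick way to see what a given compiler actually chose is to print the limits; a minimal sketch (the exact values are implementation-defined, which is the point):

    #include <climits>
    #include <cstdio>

    int main() {
        // Only the minimum guaranteed ranges are fixed by the standard;
        // the actual values below vary by compiler and platform.
        std::printf("sizeof(int) = %u bytes\n", static_cast<unsigned>(sizeof(int)));
        std::printf("INT_MIN     = %d\n", INT_MIN);
        std::printf("INT_MAX     = %d\n", INT_MAX);
        std::printf("UINT_MAX    = %u\n", UINT_MAX);
    }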

The source code doesn't change. You just need another version of a compiler that can compile for 64-bit chips.
In C/C++, ints (and even long!) are still 32-bit. It would break far too much code if it wasn't. Use __int64 (MSVC) or long long (gcc) if you need to use all 64 bits of the integer registers.

x86-64 / AMD64 will get a speed boost anyway from recompiling because it has 16 registers instead of 8.
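A hedged sketch of the usual shim for this, assuming MSVC defines _MSC_VER and provides __int64, while other compilers such as gcc provide long long:

    #include <cstdio>

    // Pick a 64-bit type by compiler, per the post above.
    #ifdef _MSC_VER
    typedef __int64 int64;
    #else
    typedef long long int64;
    #endif

    int main() {
        int64 big = 1;
        big <<= 40;  // 2^40 would overflow a 32-bit int
        std::printf("%lld\n", static_cast<long long>(big));
    }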
Let me distill this a little; there are separate issues at work:

1 - compiling for 64 bits instead of 32 is just like any other platform change (think 386 instead of Pentium, or Solaris instead of Linux) - it means the compiler will generate different code, due to different platform assumptions.

What exactly changes DEPENDS on what language and what environment you are using. Usually the default pointer changes from 32 to 64 bits, SOMETIMES the default int changes, SOMETIMES the long int changes, SOMETIMES the alignment of structures changes (for example, the DEC Alpha requires 64-bit numbers to be 64-bit aligned; x86 does not).

So changing your CODE to support a 64-bit platform means this: the code can be compiled by a 64-bit compiler and still work as expected.

A lot of code requires no changes to be compiled on both 32- and 64-bit platforms. But some programming constructs change behavior when type sizes change, and therefore code that uses them (intentionally or not) will need to be changed to work correctly on 64-bit platforms.

One example was given above (the sign of the hard-coded hex number). And most things like this "shouldn't" be counted on anyway (if you want to set the sign bit, use negative decimal numbers, not hex - but even that usually counts on the platform using 2's complement).

Another is code that casts between pointers and ints - if they are both 32-bit, life is great; if they both change to 64-bit, life usually stays great ... but when the pointer is 64 bits and the int is 32, the code breaks. Hence the reason these types of operations are called "implementation defined".
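A minimal sketch of that breakage and the portable alternative, assuming <cstdint> is available (its uintptr_t is defined to be wide enough to hold a pointer):

    #include <cstdint>
    #include <cstdio>

    int main() {
        int value = 42;
        int* p = &value;

        // Breaks on 64-bit: an unsigned int is 32 bits on common 64-bit
        // platforms, so a cast like this would discard the high half of p:
        // unsigned int bad = (unsigned int)(std::uintptr_t)p;

        // Portable: uintptr_t round-trips a pointer without truncation.
        std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(p);
        int* back = reinterpret_cast<int*>(bits);
        std::printf("%d\n", *back);  // 42 on both 32- and 64-bit builds
    }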

ANYTHING in the C/C++ standard that is "implementation defined" is a likely candidate for breaking when the platform changes (or even just the compiler version).

Counting on the binary size of structures and classes is another good example; these change with compilers, compiler settings, and platforms. So code which was written this way is usually rewritten to no longer depend on fixed layouts.
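To make that concrete, a small sketch; the sizes in the comments are assumptions about typical compilers and will shift with settings such as structure packing:

    #include <cstdio>

    struct Mixed {
        char c;    // 1 byte
        void* p;   // 4 bytes on a typical 32-bit build, 8 on 64-bit
    };

    int main() {
        // Typically prints 8 on a 32-bit build (1 + 3 padding + 4) and
        // 16 on a 64-bit build (1 + 7 padding + 8) - but none of this
        // is guaranteed, which is exactly the point.
        std::printf("sizeof(Mixed) = %u\n", static_cast<unsigned>(sizeof(Mixed)));
    }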

So really, supporting 64-bit compilers is a subset of the things needed to make your product "cross-platform" (able to compile and run under multiple compilers, operating systems, or processors).
quote:Original post by Anonymous Poster
In C/C++, ints (and even long!) are still 32-bit.


On most versions of Windows, running 32-bit compilers, compiling for 32-bit Intel CPUs, yes. But the size of int, long, etc., is entirely up to the compiler. I used to use Microsoft and Borland compilers running on Windows 3.1 on a 286 CPU, and ints were 16-bit. I once worked on a DEC Alpha computer running a specialized version of Windows NT, where ints were 64-bit.

quote:
It would break far too much code if it wasn't.


It would only break code that was sort of broken already, for requiring assumed sizes of data types. If something in your code *requires* a 32-bit int and would break with a 64-bit int (or anything else), you should be using __int32 in those cases, not just "int". Structures for file formats are a prime example - you don't want your custom file-loading routines to break once we all move to 64-bit PCs.
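A minimal sketch of that idea using the fixed-width names from <cstdint>; the header fields here are hypothetical, and the compile-time check assumes C++11's static_assert:

    #include <cstdint>

    // A hypothetical file header laid out with fixed-width types, so
    // its layout does not drift when int or long changes size.
    struct FileHeader {
        std::uint32_t magic;     // always 4 bytes, everywhere
        std::uint32_t version;
        std::uint64_t dataSize;  // always 8 bytes, everywhere
    };

    // Catch accidental layout drift at compile time rather than at
    // load time (padding could still vary with packing settings).
    static_assert(sizeof(FileHeader) == 16, "FileHeader layout changed");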




Brian
miserere nostri Domine miserere nostri
quote:
Original post by BriTeg
It would only break code that was sort of broken already, for requiring assumed sizes of data types. If something in your code *requires* a 32-bit int and would break with a 64-bit int (or anything else), you should be using __int32 in those cases, not just "int". Structures for file formats are a prime example - you don't want your custom file-loading routines to break once we all move to 64-bit PCs.


This is where you say 'Welcome to the World of C Portability!'
