neverland

some questions about 64bit


64bit CPUs have been on the market for months, and 64bit OSes are coming soon. What does "64bit" mean? Does it mean the same thing for a CPU as for an OS? And what will 64bit bring for programmers? I heard someone say that going "from DOS to Win32" was a great calamity for programmers. I am wondering if there will be another calamity for programmers when 64bit really arrives.

64bit OSes have been out since '96. Linux has had 64 bit support since before 2000, which is probably why Linux supported x86_64 much sooner than Windows. And I have Windows XP x64 (purchased from Newegg, but it's sold only as OEM) and Gentoo amd64 running on my 64 bit system, so the OSes are already available.

The main advantage of 64bit is access to much, much more memory. 32bit only goes up to 4GB (with only 2GB/3GB per process on some OSes); with some inefficient tricks I think you can raise the limit to 64GB. 4GB may sound like a lot, but 1GB is basically standard on higher end desktops, with some 2GB machines around, and a full DVD movie you might be editing will have to use your MUCH slower hard drive to store some of its data. 64bit, by contrast, can go up to 17179869184GB of RAM without inefficient workarounds (assuming I did my math right), though I think the current x86_64 chips can only access something like 256 terabytes of RAM.
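If you want to check that math, here is a minimal sketch (illustrative only, assuming a compiler that ships <stdint.h>) that works the figures out with bit shifts:

#include <stdint.h>
#include <iostream>

int main() {
    // A full 64 bit address space is 2^64 bytes; divided by 2^30 bytes per GB
    // that is 2^34 GB.
    uint64_t full_space_gb = uint64_t(1) << (64 - 30);    // 17179869184
    // Current x86_64 chips expose 48 bit virtual addresses: 2^48 bytes = 256 TB.
    uint64_t current_space_tb = uint64_t(1) << (48 - 40); // 256
    std::cout << full_space_gb << " GB full, "
              << current_space_tb << " TB on current chips\n";
    return 0;
}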

I'm going to assume you're talking about x86_64 (also called AMD64 or EM64T), which was designed by AMD and can run regular x86 code just as well as 64 bit code (with no emulation, even in a 64bit OS). Compared to regular x86, the integers can be twice as big (and hold numbers about 4 billion times bigger), which can be very useful in some cases; GMP can run 4 times faster thanks to the bigger primitives. But in most cases the biggest improvement is the doubling of the general registers, which can give about a 5-20% increase in speed depending on the application (on AMD processors at least). Unfortunately, from the numbers I've seen, Intel has a poor implementation of x86_64 (probably because x86_64 has been "tacked on" to the P4), which causes 64 bit code to generally be slower than 32 bit code. Also, all x86_64 CPUs include at least SSE2. I think there's some other more minor stuff too. There is a potential decrease in speed due to the increased size of pointers, instructions and data alignment, but this seems to be more than countered (on the AMD processors at least) by the improvements made to the architecture.

The 64bit transition is going a lot smoother than I thought it would (probably thanks to the lessons learned in the DOS days, and better designed OSes). Mostly I just have to worry about 64bit drivers (not a problem with a new system if you check out the parts first), and the few programs that interface heavily with the OS (virtual drives, firewalls, virus scanners and the like).

The tricky part will be making sure your code can compile on both 64bit and 32bit systems (only really important when you make assumptions about the size of the integer types, or their size relative to pointers).

One really annoying thing I learned is that with gcc on 64 bit Linux (I don't know of a 64 bit Windows version to test on):
sizeof(int) < sizeof(long) = sizeof(long long) = sizeof(pointer)
which seems really annoying to me, since from my understanding of the C standard, int should match the native word size of the processor. And long having the same size as long long seems rather silly. They did this to help with compatibility with older apps, but I would think that a compiler flag to enable this behavior would have been nicer, since the apps have to be recompiled anyway.
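If you want to see what your own compiler does, a minimal sketch like this prints the sizes (on LP64 systems such as 64 bit Linux/gcc it typically prints 4 8 8 8; 64 bit Windows uses the LLP64 model, where long stays 4 bytes):

#include <iostream>

int main() {
    std::cout << "int: "        << sizeof(int)
              << "  long: "      << sizeof(long)
              << "  long long: " << sizeof(long long)
              << "  pointer: "   << sizeof(void*) << '\n';
    return 0;
}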

[Edited by - Cocalus on July 9, 2005 11:30:49 AM]

Quote:
Original post by neverland
64bit CPUs have been on the market for months.
And 64bit OSes are coming soon.
What does "64bit" mean?
Does it mean the same thing for a CPU as for an OS?
And what will 64bit bring for programmers?

I heard someone say that going "from DOS to Win32" was a great calamity for programmers.
I am wondering if there will be another calamity for programmers when 64bit really arrives.


For the most part, the only serious problem will be with code that does typecasting between pointers and integral types, and with pointer arithmetic where the result is stored in, and assumed to fit in, 32 bit integers. In C++ the size of an integer type is only guaranteed to be a minimum size. For instance, the 'int' type will always be at LEAST 16 bits in size, while the 'long' type will always be at least 32 bits in size. Unfortunately many developers simply assume that both 'int' and 'long' are 32 bits. While it might seem trivial, this can be problematic when the use of 'int' requires that it be 32 bits (i.e. in data structures where memory layout is important) AND that pointers are also 32 bits in size.


Consider the following:

char *ptr = new char[256];
int handle = (int)ptr;   // truncates the pointer when int is narrower than a pointer
ptr = (char*)handle;     // the recovered pointer may no longer be valid


The above assumes that pointers and int are the same size. If, however, you compile for a 64bit system where 'int' is 32 bits and pointers are 64 bits, the above code no longer works. While this may seem unlikely, there is a considerable amount of code that uses operations similar to the example.
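If you really do need to stash a pointer in an integer, a safer sketch (assuming your compiler provides <stdint.h>, whose uintptr_t is guaranteed to be wide enough for an object pointer) looks like this:

#include <stdint.h>   // uintptr_t: unsigned integer wide enough to hold a pointer

int main() {
    char *ptr = new char[256];
    uintptr_t handle = (uintptr_t)ptr;   // no truncation on 32 bit or 64 bit builds
    ptr = (char*)handle;                 // round-trips back to the original pointer
    delete[] ptr;
    return 0;
}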

The other issue is pointer arithmetic. Consider the following:

int get_delta(const char *str1, const char *str2) {
    return (str2 - str1);   // the pointer difference is silently truncated to int
}


The same issues apply here. If the difference between the memory locations pointed to by str1 and str2 is larger than a 32 bit integer can hold, the result will be incorrect.


See the definitions in <limits>, <limits.h>, and <stddef.h> for type definitions intended for use in those situations.
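For example, <stddef.h> defines ptrdiff_t, the signed type produced by subtracting two pointers; a sketch of the function above rewritten with it might look like:

#include <stddef.h>   // ptrdiff_t: signed integer type of a pointer difference

ptrdiff_t get_delta(const char *str1, const char *str2) {
    return str2 - str1;   // wide enough on both 32 bit and 64 bit targets
}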

What it gives to programmers IMO is more to work with.

As an example, look at the 64bit version of FarCry. It runs at about the same fps as the 32bit version, but with a lot more effects and techniques applied to the scene.

Quote:
Original post by Gink
64 bit CPUs also read and write to memory 8 bytes at a time, whereas 32 bit cpus are 4 bytes at a time, iirc.


Actually that should be "CAN read and write to memory 8 bytes at a time". Just as 32 bit CPUs can read/write 1, 2, and 4 bytes at a time, so can their 64bit sisters.


Quote:
Original post by Helter Skelter
Quote:
Original post by Gink
64 bit CPUs also read and write to memory 8 bytes at a time, whereas 32 bit cpus are 4 bytes at a time, iirc.


Actually that should be "CAN read and write to memory 8 bytes at a time". Just as 32 bit CPUs can read/write 1, 2, and 4 bytes at a time, so can their 64bit sisters.


I'm pretty sure that they can only read/write 4 bytes at a time, and 64 bit CPUs can only read/write 8 bytes at a time. If it doesn't need the bits, they are ignored.

For cacheable memory, a CPU will read a cache line at a time (or, alternately, a line fetch buffer, which is usually the same size as a cache line).

For non-cacheable memory, different CPUs have different capabilities as to how wide or narrow a memory access they can issue. However, because of memory-mapped I/O, most general-purpose CPUs can actually read only 1 or 2 bytes when needed, although it's usually dog slow (in CPU terms -- possibly something like an entire microsecond).

It turns out that the internal data paths of the x86-32 are 64 bits these days (which is why 128-bit SSE instructions have 2 clock latencies), and I believe you can actually do a locked 64-bit memory operation with a 32-bit x86, so the exact distinction in memory access capability is somewhat vague.

The big gain is getting much more than 2 GB, or even 4 GB, of virtual memory space for a single process. Even with less than 4 GB of RAM in a machine, this helps, because things like AGP GART, memory-mapped files, pre-relocated executables/DLLs, and bus mastering devices will often eat up (and fragment) your memory space. A modern, large, componentized system can typically run into the 2 GB limit today, so 64-bit arrives just in time. (Workstations have been using it for years, of course -- and anyone remember the Nintendo 64 console?)

Quote:
Original post by hplus0603
(Workstations have been using it for years, of course -- and anyone remember the Nintendo 64 console?)


Nintendo 64 was 64 bit?[looksaround] Wow, can't say I knew that, but it does sort of make sense [grin]

Quote:
Original post by Gink
I'm pretty sure that they can only read/write 4 bytes at a time, and 64 bit CPUs can only read/write 8 bytes at a time. If it doesn't need the bits, they are ignored.


Actually it may be subjective and totally dependent on the actual CPU architecture. I say subjective because it may vary depending on whether you are referring to the instruction set, data access, or address access. Read/write operations may also be affected by unaligned data access (at least for 16 bit values). Have to dig into the x86 specs to be sure.

