jamesleighe

32 vs 64 bit


Is there any reason not to switch all my code over to 64 bit?


* I hear that 64 bit can be slower for some reason, is this true?

* If I were to switch, would I just convert all my ints to int64s etc? Or keep them as small as makes sense?

* Would not using the 'native' type for the processor be faster? Similar to how bools are often stored as 4 bytes? (wouldn't this be 8 bytes for 64 bit processors?)


Thanks as usual guys!

As far as I know you don't have to explicitly "convert to 64 bit". The distinction mostly matters to the compiler and the instruction set it targets. The only issues you might need to fix in your code are places where you made assumptions about the sizes of, say, size_t and pointers. Just compile your code as 64-bit...
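As an illustrative sketch (the helper name is my own, not from the thread), those hidden size assumptions can be caught at compile time rather than discovered at runtime after the port:

```cpp
#include <cstddef>
#include <cstdint>

// Fail the build if a hidden size assumption is wrong, instead of
// misbehaving at runtime after the 64-bit port.
static_assert(sizeof(std::uint32_t) == 4, "uint32_t is always 4 bytes");
static_assert(sizeof(std::uintptr_t) >= sizeof(void*),
              "uintptr_t must be able to hold a pointer");

// Code that round-trips a pointer through an integer should use
// uintptr_t, never int or long (long stays 32-bit on LLP64 targets).
std::uintptr_t as_integer(const void* p) {
    return reinterpret_cast<std::uintptr_t>(p);
}
```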


Is there any reason not to switch all my code over to 64 bit?


64-bit programs won't run on a 32-bit operating system. However, if your program requires more than about 2 GB of RAM, it won't work as a 32-bit build anyway.

Many programs with a 64-bit version also have a 32-bit version.


* I hear that 64 bit can be slower for some reason, is this true?


Yes. For example, as pointers double in size you can get more cache misses with some data structures. Also the compiler may not be as well optimized for 64-bit instructions as 32-bit ones.

However they can also be quicker, as in 64-bit mode you get more registers too. The only way to be sure is to test it.


* If I were to switch, would I just convert all my ints to int64s etc? Or keep them as small as makes sense?


Only convert where there's a clear benefit from doing so, or you'll make the cache miss problem mentioned above even worse.

Obviously you will need to convert to 64-bit types where you need to support quantities of data that would overflow a 32-bit value.
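A small sketch of that distinction (the function is hypothetical, not from the thread): use a 64-bit type where the quantity can actually overflow 32 bits, but leave small indices alone:

```cpp
#include <cstdint>

// Summing byte counts for data larger than 4 GB overflows a 32-bit
// accumulator; a 64-bit total is the right fix, while the small loop
// index can stay a plain int without harm.
std::uint64_t total_bytes(const std::uint32_t* chunk_sizes, int n) {
    std::uint64_t total = 0;       // 64-bit: the sum can exceed 2^32
    for (int i = 0; i < n; ++i) {  // plain int is fine for the index
        total += chunk_sizes[i];
    }
    return total;
}
```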


* Would not using the 'native' type for the processor be faster? Similar to how bools are often stored as 4 bytes? (wouldn't this be 8 bytes for 64 bit processors?)


No; 64-bit processors handle 32-bit data types just fine. Also, as noted earlier, bigger data types tend to cause extra cache misses, so using smaller data types can make a program run faster even if the CPU can't process them quite as quickly as bigger ones.

If you have a choice between using two 32-bit values or one 64-bit one though, then the single 64-bit value will normally be better. For example, chess programmers like 64-bit because you can fit one bit per square on the board into a single 64-bit register.
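For illustration, a minimal bitboard sketch (the names are my own invention) showing how the whole board fits in one 64-bit value:

```cpp
#include <cstdint>

// One bit per square of an 8x8 board fits exactly in a 64-bit
// integer, so set/test operations touch a single register.
using Bitboard = std::uint64_t;

// Set the bit for the square at (file, rank), 0-indexed.
Bitboard set_square(Bitboard b, int file, int rank) {
    return b | (Bitboard{1} << (rank * 8 + file));
}

// Test whether the square at (file, rank) is occupied.
bool occupied(Bitboard b, int file, int rank) {
    return (b >> (rank * 8 + file)) & 1;
}
```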

Indeed, unless you have made assumptions about variable sizes, in many cases simply switching to x64 mode will Just Work.

About the only thing which will definitely change, going to x64 mode, is pointer sizes. Everything else depends on the platform you are using, which includes the compiler.[s] For example, iirc, with MSVC 'int' stays 32bit and 'long' becomes 64bit. [/s] edit: apparently I did not recall correctly; see the post further down :) (However I still wouldn't assume this to be the case, and if I needed a precise/forced size then I'd either typedef my own types or use boost's sized types.)


[quote name='James Leighe' timestamp='1323949139' post='4894143']
* I hear that 64 bit can be slower for some reason, is this true?


Yes. For example, as pointers double in size you can get more cache misses with some data structures. Also the compiler may not be as well optimized for 64-bit instructions as 32-bit ones.

However they can also be quicker, as in 64-bit mode you get more registers too. The only way to be sure is to test it.
[/quote]

Compilers are also free to make more assumptions about the hardware; MSVC, for example, will use SSE2 instructions where it can because all x64 processors support them.

I totally agree with adam on the alignment issues of structures containing pointers. I once had a structure that was 128 bytes on 32-bit and 136 bytes on 64-bit just because of a single pointer that used up 8 bytes instead of 4 in 64-bit. Calculating the n-th element was significantly faster under 32-bit until I rearranged the variables in the struct.
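A sketch of the effect (struct names and exact sizes are illustrative; actual padding depends on the compiler and ABI):

```cpp
#include <cstdint>

// On a 64-bit target the pointer is 8 bytes and must be 8-byte
// aligned, so this layout gets padding after 'id' and after 'flags':
struct Padded {
    std::uint32_t id;     // 4 bytes (+ 4 bytes padding on 64-bit)
    void*         data;   // 8 bytes on 64-bit
    std::uint32_t flags;  // 4 bytes (+ 4 bytes tail padding)
};                        // typically 24 bytes on 64-bit

// Putting the widest member first removes the holes:
struct Packed {
    void*         data;   // 8 bytes
    std::uint32_t id;     // 4 bytes
    std::uint32_t flags;  // 4 bytes
};                        // typically 16 bytes on 64-bit

static_assert(sizeof(Packed) <= sizeof(Padded),
              "reordering members should never grow the struct here");
```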


Additionally, when I first switched from 32-bit to 64-bit I had some bugs with the printf function. Depending on whether you use Linux or Windows you will need different format specifiers to refer to 64-bit variables in the format string.
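One portable option (a sketch, assuming a C++11 compiler; the function name is made up) is to use the PRI* macros from <cinttypes> instead of platform-specific %lld/%ld:

```cpp
#include <cinttypes>
#include <cstdio>
#include <string>

// Exact-width 64-bit types need the <cinttypes> PRI* macros to stay
// portable across Windows and Linux, where the underlying type of
// uint64_t (and hence the raw length modifier) differs.
std::string format_size(std::uint64_t size) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "%" PRIu64, size);
    return buf;
}
```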

If writing C++, native types should be used only for OS-specific functionality.

For iteration, use size_t or the container's ::size_type, which every standard container defines.

For container manipulation, prefer the <algorithm> functions, iterators, or ::size_type, in that order.

For pointer arithmetic, ptrdiff_t is defined in a similar manner.
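A small sketch of those recommendations in practice (the function names are made up for illustration):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// ptrdiff_t for iterator differences: it scales automatically
// between 32- and 64-bit targets.
std::ptrdiff_t position_of(const std::vector<int>& v, int value) {
    auto it = std::find(v.begin(), v.end(), value);
    return it == v.end() ? -1 : it - v.begin();
}

// size_t for sizes and counts, with <algorithm> doing the work.
std::size_t count_even(const std::vector<int>& v) {
    return static_cast<std::size_t>(
        std::count_if(v.begin(), v.end(),
                      [](int x) { return x % 2 == 0; }));
}
```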

Demotion of integer types is rarely useful (using short instead of int for values under 32k has fallen out of favor). The choice should be based on numeric range, not target platform, unless you're performing some really tricky optimizations for specific hardware.

Float vs. double is a mixed bag. Doubles on their own are likely to be faster and are more accurate, but the choice will be dictated by memory use and the choice of SIMD, so for many uses floats are the natural choice; to avoid excessive conversions, propagate that choice through the entire project. Otherwise, doubles are preferred.

Bithacks are a no-go, whether cramming pointers into ints, checking sign bits, or math optimizations. They would be among the last optimizations to perform, and it's highly unlikely they result in any meaningful improvement.

Another gotcha is serialization: different padding, complications around unions, and different sizes and alignments of types.


Obviously, all of the above applies to standalone C++ code only. Whatever the OS requires will be quite a mess.


For example, iirc, with MSVC 'int' stays 32bit and 'long' becomes 64bit.

Microsoft Visual Studio adheres to LLP64, so int and long are always 32 bits. Only the size of pointers changes.
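A sketch of how to avoid caring which model (LLP64 on Windows, LP64 on Linux/macOS) you're compiling under, by reaching for the fixed-width types:

```cpp
#include <cstdint>

// Under LLP64, int and long are both 32 bits and only pointers
// widen; under LP64, long becomes 64 bits. The <cstdint> types
// sidestep the difference entirely.
static_assert(sizeof(std::int32_t) == 4, "always 32 bits by definition");
static_assert(sizeof(std::int64_t) == 8, "always 64 bits by definition");

// True when compiled for a 64-bit pointer model, on either OS.
constexpr bool is_64bit_build() {
    return sizeof(void*) == 8;
}
```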


L. Spiro


[quote name='phantom' timestamp='1323951311' post='4894151']
For example, iirc, with MSVC 'int' stays 32bit and 'long' becomes 64bit.

Microsoft Visual Studio adheres to LLP64, so int and long are always 32 bits. Only the size of pointers changes.


L. Spiro
[/quote]

I stand corrected :)

Microsoft once replied to an inquiry as to why they were reluctant to convert Visual Studio to 64 bits (paraphrased):
"Uh, you know, this is a horrible lot of work, and having done it with Excel, we're scared to do it again soon, as long as it works well just like this. 32 bit apps do run fine under 64bits too, so what is the issue! It's not like any program except maybe Excel where some people have thousands of sheets with tens of thousands of rows and columns needs more than 2GB anyway, either".

There is a grain of truth in that.

Although if you have always been pedantic about using correct types (which can be harder than one anticipates), pedantic about constants (yes, (size_t)-1 is not the same in 32 and 64 bits -- surprise surprise, nor are many other dangerous "common idioms", nor do most bitmasks work as expected...), and never made any assumptions about structure sizes etc., chances are indeed good it will just work.
Though you never know what any of your co-workers or third-party library implementors did. And what's worse, code can actually "work fine", i.e. not crash, despite being broken.
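A sketch of the (size_t)-1 pitfall mentioned above (function names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>

// (size_t)-1 is SIZE_MAX, so its value changes between 32- and
// 64-bit builds (~4.29e9 vs ~1.8e19). Comparing against a hardcoded
// 32-bit "all ones" constant silently breaks after a port.
bool is_npos_like(std::size_t value) {
    // Correct on both targets: compare against size_t's own maximum.
    return value == static_cast<std::size_t>(-1);
}

bool broken_check(std::size_t value) {
    // Wrong on 64-bit targets: 0xFFFFFFFF is not SIZE_MAX there.
    return value == 0xFFFFFFFFu;
}
```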

Despite that, I'm inclined to recommend moving to 64 bits, because the advantages outweigh the disadvantages. The much larger and uniform register set is a big advantage (some algorithms such as curve25519 run orders of magnitude -- not just 20% or so -- faster in 64 bits, only because the data fits into registers), and the practically unlimited address space can be a big relief. There's also a lot less guesswork and writing of extra branches for CPU features.

If you read The Old New Thing every now and then and see entries like "question about really stupid thing" and the answer reads "yeah that is historical, for compatibility with 16 bit Windows", you may get an idea of another good reason to move swiftly forward and do a clean cut, if you can afford it. Carrying your old luggage around forever is not always a good thing.
