Okay...

Android devices are 32-bit. This should not be a surprise.

Of course a single operation on a 64-bit value on a 32-bit processor is going to be anywhere from 2x to 6x as expensive. Bitwise operations can be done independently on the upper and lower halves, so they take two steps. Many other operations require three steps: addition means you add the lower halves, add the upper halves, and then add in the carry from the lower half, if any. Subtraction works the same way. Multiplication is five steps, and division is potentially six.

I don't see how or why that would be surprising to anyone. If you're a university graduate I expect you would have taken a theory of computing class that covered it, a compiler theory class that covered it, and possibly additional courses on data processing or information theory that covered it. Even if you didn't attend school, it should be apparent that doing more work takes more time.

Use the right variable size for the underlying value. If you need a 64-bit variable, then use it.

Also, I'm not sure what you want people to do with the information. Just because operations on a single variable take more time does not mean the operations are a performance bottleneck. Usually the actual bottlenecks in performance are not the things you expect. The only way to know for certain is to measure which thing is slowing you down.

The first round of big bottlenecks is things like the unexpected n^4 loop you didn't know was present, or the places where you allocate tens of thousands of objects and immediately throw them away. The second round is cases where you walk the entire data set with an n^2 algorithm when you could prune it down to a small number of less-costly tests, or hop all over memory when a sequential algorithm is available. Subsequent rounds of profiling will identify issues specific to each code base.

It is unlikely that you will hit a situation where a 64-bit value is genuinely needed and, at the same time, the 64-bit math itself is the actual bottleneck.

### #1frob

Posted 15 September 2013 - 06:36 PM
