long vs int performance

0 comments, last by frob 10 years, 7 months ago

Thought I would share a recent micro-optimization I found for Android development (targeting the Dalvik VM, not native code). While trying to optimize some drawing code, I noticed that Dalvik uses 32-bit registers, and thus register pairs (i.e., two registers per value) to store 64-bit data types like the scalar 'long'. Suspecting there might be a performance cost to accessing two registers, I decided to do some bench testing with the following code:


	import android.os.SystemClock;
	import android.util.Log;

	public class IntLongTest implements Runnable {
		@Override
		public void run() {
			int intCounter = 0;
			long longCounter = 0;
			long startTime;
			long elapseTime;

			// Integer loop: 100 x 100,000 additions on 32-bit ints.
			startTime = SystemClock.uptimeMillis();
			for (int counter = 0; counter < 100; counter++) {
				for (int i = 0; i < 100000; i++) {
					intCounter += i;
				}
				intCounter += counter;
			}
			elapseTime = SystemClock.uptimeMillis() - startTime;
			Log.d("IntLongTest", "Integer Ending Value - " + intCounter + " / Total Time - " + elapseTime);

			// Long loop: the same work on 64-bit longs.
			startTime = SystemClock.uptimeMillis();
			for (long counter = 0; counter < 100; counter++) {
				for (long ii = 0; ii < 100000; ii++) {
					longCounter += ii;
				}
				longCounter += counter;
			}
			elapseTime = SystemClock.uptimeMillis() - startTime;
			Log.d("IntLongTest", "Long Ending Value - " + longCounter + " / Total Time - " + elapseTime);
		}
	}
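For anyone without an Android device handy, here is my own rough desktop-Java port of the same benchmark (an assumption on my part: System.nanoTime() stands in for Android's SystemClock.uptimeMillis(), and System.out replaces Log.d — timings will obviously differ from Dalvik's):

```java
// Desktop port of the Dalvik benchmark above (sketch, not the original code).
public class IntLongBench {
    public static void main(String[] args) {
        int intCounter = 0;
        long longCounter = 0;

        long startTime = System.nanoTime();
        for (int counter = 0; counter < 100; counter++) {
            for (int i = 0; i < 100000; i++) {
                intCounter += i;
            }
            intCounter += counter;
        }
        long intMillis = (System.nanoTime() - startTime) / 1_000_000;
        System.out.println("Integer Ending Value - " + intCounter
                + " / Total Time - " + intMillis);

        startTime = System.nanoTime();
        for (long counter = 0; counter < 100; counter++) {
            for (long ii = 0; ii < 100000; ii++) {
                longCounter += ii;
            }
            longCounter += counter;
        }
        long longMillis = (System.nanoTime() - startTime) / 1_000_000;
        System.out.println("Long Ending Value - " + longCounter
                + " / Total Time - " + longMillis);
    }
}
```

The ending values it prints should match the logcat output below, since the loop bodies are identical.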

I came up with the following results:

09-14 23:11:18.567: D/IntLongTest(9491): Integer Ending Value - 1778798614 / Total Time - 123
09-14 23:11:18.728: D/IntLongTest(9491): Long Ending Value - 499995004950 / Total Time - 159

09-14 23:11:27.016: D/IntLongTest(9491): Integer Ending Value - 1778798614 / Total Time - 81
09-14 23:11:27.197: D/IntLongTest(9491): Long Ending Value - 499995004950 / Total Time - 175

09-14 23:11:34.044: D/IntLongTest(9491): Integer Ending Value - 1778798614 / Total Time - 33
09-14 23:11:34.214: D/IntLongTest(9491): Long Ending Value - 499995004950 / Total Time - 163

09-14 23:11:42.653: D/IntLongTest(9491): Integer Ending Value - 1778798614 / Total Time - 39
09-14 23:11:42.843: D/IntLongTest(9491): Long Ending Value - 499995004950 / Total Time - 193

The average execution time for integers is 69 ms ((123 + 81 + 33 + 39) / 4), while the average for longs is 172.5 ms ((159 + 175 + 163 + 193) / 4). That means longs are roughly 2.5 times slower than integers on my Galaxy S3 in this test. I acknowledge it would be fairly unlikely for this alone to be the reason for poor performance in your code, but I thought it might be an interesting little tidbit :)
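Working the averages out from the four logged runs (my own quick check, nothing more):

```java
// Averages and ratio computed from the four logcat runs quoted above.
public class Averages {
    public static void main(String[] args) {
        int[] intTimes  = {123, 81, 33, 39};
        int[] longTimes = {159, 175, 163, 193};
        double intAvg = 0, longAvg = 0;
        for (int t : intTimes)  intAvg  += t;
        for (int t : longTimes) longAvg += t;
        intAvg  /= intTimes.length;   // 69.0 ms
        longAvg /= longTimes.length;  // 172.5 ms
        System.out.printf("int avg = %.1f ms, long avg = %.1f ms, ratio = %.1fx%n",
                intAvg, longAvg, longAvg / intAvg);
        // prints: int avg = 69.0 ms, long avg = 172.5 ms, ratio = 2.5x
    }
}
```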

This did help me improve draw performance by roughly 10% because I was using longs to track animation frame dwell times for each individual sprite on the screen (I was benchtesting with 10,000 sprites).

Side note: you will notice the integer ending value differs from the long ending value, which I believe is due to overflow ('roll over'), since the loop takes the integer variable past its maximum value of 2,147,483,647.
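The roll-over theory is easy to check (this is my own sketch, not part of the benchmark): compute the mathematically correct total as a long, then truncate it to 32 bits and compare with the logged integer result.

```java
public class OverflowCheck {
    public static void main(String[] args) {
        // The correct total, as the long loop computes it:
        // 100 * (0 + 1 + ... + 99999) + (0 + 1 + ... + 99)
        long trueTotal = 100L * (99999L * 100000L / 2) + (99L * 100L / 2);
        System.out.println(trueTotal);   // 499995004950 (the logged long value)

        // Truncating to 32 bits reproduces the "wrong" int value exactly.
        int wrapped = (int) trueTotal;
        System.out.println(wrapped);     // 1778798614 (the logged int value)
    }
}
```

So the integer result is exactly the low 32 bits of the true sum, which confirms the overflow explanation.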

If anybody else sees a different explanation I would love to hear it. This is just an idle observation, I have not ripped open the dex to see if the bytecode is acting different than expected or if the JIT is helping integers out somehow.

Okay...

Android machines are 32-bit. This should not be a surprise.

Of course the cost of a single operation on a 64-bit value on a 32-bit processor is going to be anywhere from 2x to 6x as expensive. Bitwise operations can be done on the upper and lower halves independently, so they take two steps. Many other operations require three steps. Addition means you add the lower half, add the upper half, and add any carry out of the lower half. Subtraction is the same. Multiplication is five steps, division is potentially six steps.
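To illustrate the three-step addition (a sketch of what the VM has to do internally, not actual Dalvik code), here is 64-bit addition emulated with two 32-bit halves and an explicit carry:

```java
public class Add64 {
    // Adds two 64-bit values given as (hi, lo) 32-bit halves in three steps:
    // add the low halves, detect the carry, add the high halves plus carry.
    static long add64(int aHi, int aLo, int bHi, int bLo) {
        int lo = aLo + bLo;
        // If the unsigned low-half sum wrapped around, it came out smaller
        // than an operand: that is the carry into the high half.
        int carry = Integer.compareUnsigned(lo, aLo) < 0 ? 1 : 0;
        int hi = aHi + bHi + carry;
        return ((long) hi << 32) | (lo & 0xFFFFFFFFL);
    }

    public static void main(String[] args) {
        long a = 0x00000001_FFFFFFFFL;   // low half all ones: forces a carry
        long b = 5L;
        long result = add64((int) (a >>> 32), (int) a,
                            (int) (b >>> 32), (int) b);
        System.out.println(result == a + b);   // true
    }
}
```

Every one of those emulated steps is a real instruction on a 32-bit core, which is where the extra cost comes from.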

I don't see how or why that would be surprising to anyone. If you're a university graduate, I expect you took a theory of computing class that covered it, a compiler theory class that covered it, and possibly additional courses on data processing or information theory that covered it. Even if you didn't attend school, it should be apparent that doing more work takes more time.

Use the right variable size for the underlying value. If you need a 64-bit variable, then use it.



Also, I'm not sure what you want people to do with the information. Just because operations on a single variable take more time does not mean the operations are a performance bottleneck. Usually the actual bottlenecks in performance are not the things you expect. The only way to know for certain is to measure which thing is slowing you down.

The first round of big bottlenecks are things like the unexpected O(n^4) loop you didn't know was present, or cases where you allocate tens of thousands of objects and throw them away. The second round are cases where you go through the entire data set with an O(n^2) algorithm when you could prune it down to a small number of less costly tests, or hop all over memory when a sequential algorithm is available. Subsequent rounds of profiling will identify issues specific to each code base.

It is unlikely that you will hit a situation where a 64-bit value is genuinely needed and, at the same time, the 64-bit math is also an actual bottleneck.

This topic is closed to new replies.
