# Arithmetic Calculations Performance Question


## Recommended Posts

Hey everyone! Is there a difference in performance between working with coordinates ranging, say, from -1000 to 1000 in XYZ, and coordinates ranging from -50000 to 50000? Let's assume, purely theoretically, that there are no precision problems when shrinking the coordinates; would the smaller range perform better? I thought the answer was 'no', because what does it matter whether the polygon is 'large' or not? When the camera is at a relatively similar location, it looks the same size on the screen. Well, I'm not so sure anymore. Thanks in advance!

No one knows? :(

##### Share on other sites
I can't think of anything that would cause a performance difference right now...

What made you think there's a difference?

##### Share on other sites
No, there is no difference.

##### Share on other sites
Quote:
Original post by Rattenhirn: "What made you think there's a difference?"

Probably he thinks there's a difference because it's more difficult for a human to calculate with big numbers. :)

It's always the same for computers, because they calculate the numbers bit-by-bit.
A little example, 6 + 3 = 9 in binary form:

```
  0110   (decimal 6)
+ 0011   (decimal 3)
------
  1001   (decimal 9)
```

The computer will do this in hardware. "Hardware" here means circuits on your processor that are capable of addition. Note that such adder circuits can be designed very differently and have very different trade-offs.
What the circuit does is combine each pair of bits from the two operands, performing the boolean operations AND and XOR. Even if you have a 64-bit integer whose bits are all 0, the circuit still has to process every bit position with these two boolean operations. And the operations won't go any faster just because both bits happen to be 0. :)

##### Share on other sites
Hehe, no, I didn't think so because it's more difficult for humans. :) Two friends of mine used some code a lecturer published (the lecturer used 'smaller' coordinates). One of them 'enlarged' the coordinates while the other kept the lecturer's values, and the version with smaller coordinates ran faster. Now, you might say the one with the slower app did something stupid, right? Well, the code looks very similar, and there is nothing obvious that would make one slower than the other.

I use 'large' coordinates, and figured I'd ask here before converting to 'smaller' ones.

##### Share on other sites
I benchmarked it:

```
for (i = 0; i < 16000000; i += 4) {
    array[i  ] = 200000.0;
    array[i+1] = 300000.0;
    array[i+2] = 500000.0;
    array[i+3] = 800000.0;
}
Begintimer();
for (i = 0; i < 16000000; i += 4) {
    array[i  ] *= array[i  ];
    array[i+1] *= array[i+1];
    array[i+2] *= array[i+2];
    array[i+3] *= array[i+3];
}
Endtimer();
```

```
for (i = 0; i < 16000000; i += 4) {
    array[i  ] = 2.0;
    array[i+1] = 3.0;
    array[i+2] = 5.0;
    array[i+3] = 0.0;
}
Begintimer();
for (i = 0; i < 16000000; i += 4) {
    array[i  ] *= array[i  ];
    array[i+1] *= array[i+1];
    array[i+2] *= array[i+2];
    array[i+3] *= array[i+3];
}
Endtimer();
```

It took about 44 sec on a Core2 in both cases.
