LeoNatan

Arithmetic Calculations Performance Question


Hey everyone! Is there a difference in performance between working with coordinates in the range, say, -1000 to 1000 in XYZ, versus, say, -50000 to 50000? Let's assume, purely theoretically, that there are no precision problems when shrinking the coordinates; would it give better performance? I thought the answer to this question was 'no', because what does it matter whether the polygon is 'large' or not? When the camera is at relatively the same location, it looks the same size on screen. Well, I'm not so sure anymore. Thanks in advance!

Quote:
Original post by Rattenhirn
What made you think there's a difference?

Probably he thinks there's a difference because it's more difficult for a human to calculate with big numbers. :)

It's always the same for computers, because they process the numbers bit by bit.
A little example, 6 + 3 = 9 in binary form:
0110 (decimal 6)
+0011 (decimal 3)
-----
1001

The computer does this in hardware. "Hardware" here means circuits on your processor that are capable of addition in this case. Note that such adder circuits can look very different and have very different trade-offs.
What the chip does is combine each bit of the two operands, performing the boolean operations AND and XOR. Even if you have a 64-bit integer in which every bit is 0, the chip still has to go through all of them and perform these two boolean operations. And these operations don't run any faster just because the two bits happen to both be 0. :)

Hehe, no, I didn't think so because it's more difficult for humans. :) Two friends of mine used some code a lecturer published (the lecturer used 'smaller' coordinates). One of them 'enlarged' the coordinates while the other kept the same ones as the lecturer, and the version with smaller coordinates runs faster. Now, you might say the one with the slower app is an idiot and did something stupid, right? Well, the code looks very similar, and there is nothing in it that should slow one down relative to the other.

I use 'large' coordinates, and figured I'd ask here before converting to 'smaller' ones.

I benchmarked it:


#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 16000000

int main(void)
{
    float *array = malloc(N * sizeof *array);
    if (!array) return 1;
    size_t i;
    clock_t t0, t1;

    /* Case 1: large values */
    for (i = 0; i < N; i += 4) {
        array[i  ] = 200000.0f;
        array[i+1] = 300000.0f;
        array[i+2] = 500000.0f;
        array[i+3] = 800000.0f;
    }
    t0 = clock();
    for (i = 0; i < N; i += 4) {
        array[i  ] *= array[i  ];
        array[i+1] *= array[i+1];
        array[i+2] *= array[i+2];
        array[i+3] *= array[i+3];
    }
    t1 = clock();
    printf("large values: %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    /* Case 2: small values */
    for (i = 0; i < N; i += 4) {
        array[i  ] = 2.0f;
        array[i+1] = 3.0f;
        array[i+2] = 5.0f;
        array[i+3] = 0.0f;
    }
    t0 = clock();
    for (i = 0; i < N; i += 4) {
        array[i  ] *= array[i  ];
        array[i+1] *= array[i+1];
        array[i+2] *= array[i+2];
        array[i+3] *= array[i+3];
    }
    t1 = clock();
    printf("small values: %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    free(array);
    return 0;
}




It took about 44 seconds on a Core 2 in both cases.
