
jonbell

CPU Math cycles


When a CPU performs an addition, multiplication, etc. on two numbers, does it always take the same number of cycles? For example, 1.1 + 1.2 = 2.3 is simpler for a human, so we can do it a hell of a lot quicker than 1.11254 + 2.33455. However, would both sums use up the same number of CPU cycles? Does the same hold for integer data? Does 1 + 1 take the same amount of time to perform as 4501 + 10,001?

> Does 1+1 take the same amount of time to perform as 4501 + 10,001?

Yes.

The sums computers do can be divided into those involving just integers, like those in your question, and those involving floating point numbers. The CPU deals with the two kinds differently, so they can take different amounts of time; e.g. on most modern processors floating point operations are as fast as or faster than integer ones. Both are so fast that most of the time you don't need to worry about which is faster.

Because it is so fast, the CPU doesn't spend any time deciding how to do a sum. Whether it has to do 1 + 1 or 12345 + 67890, it does it exactly the same way in the same time. The same is true for 1.1 * 2.5 and 0.12345 * 0.67890. In theory there might be a quicker technique for simpler sums, but working out whether it applies would take far longer than just doing the sum.
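
To illustrate (assuming a typical optimizing C compiler; the function names are made up for this example), the machine code generated for an addition depends only on the operand types, not on the values that will eventually be passed in:

/* Typically compiles to a single integer add instruction; the same
   instruction runs whether the caller passes 1 and 1 or 4501 and 10001. */
int add_ints(int a, int b)
{
    return a + b;
}

/* Likewise a single floating point add, whether the operands are
   1.1 and 1.2 or 1.11254 and 2.33455. */
double add_doubles(double a, double b)
{
    return a + b;
}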

I've heard many answers to this.

Apparently, multiplication of floating point numbers is faster than addition... But I don't have anything to back it up. Why don't you write a simple test program to check it out?
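
A rough sketch of such a test might look like this (plain C using clock(); the iteration count is arbitrary and the times it prints are only indicative, since the compiler and the CPU pipeline can easily distort a naive loop like this, and each loop also contains the accumulation add):

#include <stdio.h>
#include <time.h>

#define ITERATIONS 100000000L

int main(void)
{
    /* volatile so the compiler can't fold the arithmetic away
       or hoist it out of the loops */
    volatile double a = 1.0001, b = 1.0002;
    double sum = 0.0, prod = 0.0;
    clock_t start;
    long i;

    start = clock();
    for (i = 0; i < ITERATIONS; i++)
        sum += a + b;                 /* floating point addition */
    printf("add: %.3f s (result %f)\n",
           (double)(clock() - start) / CLOCKS_PER_SEC, sum);

    start = clock();
    for (i = 0; i < ITERATIONS; i++)
        prod += a * b;                /* floating point multiplication */
    printf("mul: %.3f s (result %f)\n",
           (double)(clock() - start) / CLOCKS_PER_SEC, prod);

    return 0;
}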

Cédric

I think the OP was asking whether it takes the same time to add numbers of small magnitude as numbers of large magnitude.

jonbell, the ALU (Arithmetic Logic Unit) is the unit in the CPU that does (integer) addition. It takes the same amount of time whether the addition is 1+1 or 9999+1234, because the ALU treats both operands as N-bit numbers. All binary digits are computed simultaneously by a chain of "full adders", one for each bit, in parallel.

(Actually, in the real world we use carry chains to short-circuit the critical path along the carry logic, but the concept is that it's all done in parallel.)

So it doesn't take one cycle to calculate the first bit, one cycle for the second, and so on, stopping when we hit all zeros (which would be the "human" way of doing it). Instead, the registers are set up and the numbers all percolate down to the output in X total clock cycles (the time it takes for the carries to percolate through).
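
To make that concrete, here is a small C sketch of what the full-adder chain computes. In software we have to loop over the bits, but in hardware all 32 full adders exist at once and the result simply settles after a fixed propagation delay, whatever the operand values are:

#include <stdint.h>
#include <stdio.h>

/* Simulate a 32-bit ripple-carry adder built from full adders.
   Each full adder takes one bit from each operand plus a carry-in
   and produces a sum bit and a carry-out. */
uint32_t ripple_add(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    uint32_t carry = 0;

    for (int i = 0; i < 32; i++) {
        uint32_t ai = (a >> i) & 1;
        uint32_t bi = (b >> i) & 1;
        uint32_t sum = ai ^ bi ^ carry;                   /* sum bit of the full adder */
        carry = (ai & bi) | (ai & carry) | (bi & carry);  /* carry-out to the next bit */
        result |= sum << i;
    }
    return result;   /* always 32 full adders' worth of work, for 1+1 or 9999+1234 */
}

int main(void)
{
    printf("%u\n", (unsigned)ripple_add(9999, 1234));   /* prints 11233 */
    return 0;
}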
