# is adding two same numbers faster than multiplication by two?


## Recommended Posts

Title quite says it. In other words, what will the compiler emit if it receives b = b + b;?

Will it rewrite it as b = b * 2, thus a single bit shift, or will it actually add the two numbers? Consider the numbers to be integers.

Thanks much!

##### Share on other sites

Note that some operations you'd think could be replaced by bit shifting don't yield the same result.
For example, (-2 / 2) and (-2 >> 1) don't give the same result.

Shifting a signed integer to the right propagates the sign bit, which gives the correct result here.

```cpp
#include <iostream>

int main() {
    std::cout << (-2 / 2) << '\n';
    std::cout << (-2 >> 1) << '\n';
}
```

Output:

```
-1
-1
```

##### Share on other sites

Any actual difference that might exist is going to stem from how the processor's particular micro-architecture handles the different options, and perhaps from reduced pressure on the instruction cache if the instruction sizes differ. It's not something you can hope to control from code -- even assembly language -- so your best bet is to just do what reads most clearly. All the more so because compilers are very smart about this kind of thing and have more context than you; being overly clever in your arrangement of code can prevent the compiler from recognizing bits of code it could have optimized if they were written in the straightforward way.

Modern processors, too, are incredibly complex and incredibly smart. Today, some common variations of certain instructions aren't even executed -- they cost literally zero processor cycles and consume no back-end execution resources. Throughput of these kinds of instructions is limited by other pathways in the processor (i-cache bandwidth, register renaming, etc.). I doubt that this particular case is handled that way, but I mention it because it demonstrates clearly the amount of effort processor designers expend to make even "dumb" code run fast -- if you do the simple, straightforward, recommended thing, other people make careers of making that go fast. As a programmer, it's really your job to pick the right algorithm and, for the most part, be done with it.

For an interesting read on the kinds of micro-architecture-level optimizations I mentioned, check out *The Surprising Subtleties of Zeroing a Register*.

##### Share on other sites

> Note that some operations you'd think could be replaced by bit shifting don't yield the same result.
> For example, (-2 / 2) and (-2 >> 1) don't give the same result.
>
> Shifting a signed integer to the right propagates the sign bit, which gives the correct result here.
>
> ```cpp
> #include <iostream>
>
> int main() {
>     std::cout << (-2 / 2) << '\n';
>     std::cout << (-2 >> 1) << '\n';
> }
> ```
>
> Output:
>
> ```
> -1
> -1
> ```

Oops, that's not what I meant (though thanks SiCrane for pointing out the Standard thingy); I was getting at the rounding issue, and my example was wrong because it doesn't involve any truncation or rounding.
The correct example I meant to give is: (-3 / 2) != (-3 >> 1)

##### Share on other sites

In VS2010 I get the following:

```
    int b = 2;
00BCF613  mov         dword ptr [b],2
    int c = b * 2;
00BCF61A  mov         edx,dword ptr [b]
00BCF61D  shl         edx,1
00BCF61F  mov         dword ptr [c],edx
    int d = b + b;
00BCF622  mov         eax,dword ptr [b]
00BCF625  add         eax,dword ptr [b]
00BCF628  mov         dword ptr [d],eax
```

Is that faster or not? Depends on how your CPU implements the shl and add instructions, but in either case it's just 3 instructions (using a debug build, though).

The one thing I can guarantee is that unless you're doing this in a deep inner loop and at least millions of times, you are not going to notice one bit of difference in terms of overall performance.  This is just so much not worth worrying about.
