Archived

This topic is now archived and is closed to further replies.

Speed: Multiplication vs Bit Shifting?

This topic is 5378 days old, which is more than the 365-day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

I've been working through the Game Programming Genesis articles and I'm on the 5th one (pixel plotting). In that article he discusses using bit shifting instead of multiplication to speed up some pixel-plotting functions. Why is multiplication slow? Caroline M.

Multiplication is slower because the CPU actually has to calculate the answer. Because shifting some bits around doesn't require any maths, it's simply a lot faster.

That's all there is to it, really.



"It's strange, isn't it? You stand in the middle of a library and go 'AAAAAAAGGHHHH' and everyone just stares at you. But you do the same thing on an airplane, and everyone joins in!"

Guest Anonymous Poster
I would guess a bit shift uses only 1 CPU cycle while a multiplication uses 2 or more. Although the real speedups with bit shifts come when using them instead of divisions, which are _really_ slow, comparatively.

Guest Anonymous Poster
That used to be true on old-skool microprocessors.

At some point, most integer instructions (except for divide and square root) started having a pipeline latency of one cycle, and the difference between bit shifts and multiplies disappeared (except for scheduling constraints). THEN, in a mysterious, unfathomable decision, Intel decided to make bit shifts SLOWER than multiplies in the Pentium 4. Go figure.

Note that bit masking is still faster than modulo, though, as modulo inherently is a division operation:

fast = foo & 127;
slow = foo % 128;

Aren't divs and mults done as repeated addition, whereas a bit shift is a single operation, so shifts are faster? As you say, I'd imagine modern compilers would spot the cases where a shift can be done.
