Bitwise Operations

Started by Chris27
26 comments, last by torakka 17 years, 7 months ago
I know how to convert hex to binary and decimal to binary. I'm just a bit confused about when to use & and |. These examples are helpful. From this it looks like you would use | to combine two flags and & to check whether a flag is set or not.
Quote:Original post by Chris27
I know how to convert hex to binary and decimal to binary. I'm just a bit confused about when to use & and |. These examples are helpful. From this it looks like you would use | to combine two flags and & to check whether a flag is set or not.


Yes, exactly (ignoring my poor use of alive and dead at the same time :) )
Then something I wondered about: is ((flag & value) != 0) faster than ((flag & value) > 0), or does the compiler optimize it anyway, depending on the compiler?

Not that it would be significant, of course; I just wondered... although with signed variables, the > could give trouble if all bits were used anyway... :)
Create-ivity - a game development blog
Quote:Original post by Captain P
or does the compiler optimize it anyway - depending on the compiler?

The compiler optimizes it if there is a difference. In this case there is no difference on any mainstream CPU, and even if there were, it wouldn't matter anyway.

Quote:Not that it would be significant of course,

That is why we won't even think about it. You express what you want to do, as clearly as possible, and the compiler combines that with what it knows about the target architecture; together you create some damn good code. Of course, if you try to outsmart it on something you know nothing about, your code won't be good.

Quote:I just wondered... although when using signed variabeles, the > could give trouble if all bits were used anyway... :)

Now imagine a variable is initially unsigned, you use this very ugly hack, and someone else changes the variable to signed because all the other variables are signed and they don't want to check whether the conversion between signed and unsigned is OK (avoiding signed < 0 and unsigned >= max/2). You have just introduced a bug into your system because you refused to say what you want, and instead told the compiler what to do.

Chris: If you understand how the bitwise operators work, then there should be no problem understanding what to use.

Imagine we have this (we just use 4 bits for simplicity):
Attack = 0010
Jump = 1000

We have a variable of 4 bits which looks like this:
abcd

Now if we OR (|) the variable with Attack, then c will always be set afterwards, since that bit is set in Attack. All the other bits in Attack are zero, and ORing with zero keeps the present state. So the result will be:
ab1d
We could then OR with Jump if we also wanted to set it.
1b1d
So OR always sets the bits which are set in the value you OR with; all other bits are unchanged.


If we AND (&) the variable with Attack, then c will be set iff it was set before. All other bits will be zero, because those bits are zero in Attack, so there is no way both bits can be one. So abcd ANDed with Attack is:
00c0
So we simply clear all the other bits. Now, if we want to clear a specific bit, we should AND with a value where all the other bits are 1 and that specific bit is 0. We can do this by taking the complement (~, which reverses all bits) of Attack and ANDing the result with the variable. So
(abcd) & ~Attack=

abcd &
~0010

abcd &
1101

ab0d



How the 'or' works:
If either A or B is 1, the result is 1.

How the 'and' works:
If both A and B are 1, the result is 1.

The obvious is so difficult to explain, as there is nowhere else to go!

The bitwise implies that these logical operations are done, ummm.. bitwise? This means that the result for each bit is not affected by the other bits. A bitwise AND has two inputs and one result. Likewise (!) for the bitwise OR. It just happens that a computer does many of these operations simultaneously.

A 32-bit computer can typically do 32 bitwise operations per instruction. There are too many details that affect this statement, so you'll just have to take my word for it until you research the topic yourself; then we can take this discussion to the next level and go into detail. But we don't have to, do we?

Bitwise operations are the fundamentals of computer science. That should answer your question.
Quote:Original post by torakka
Bitwise operations are the fundamentals of computer science.


Would you mind explaining that? I have rarely seen bitwise operators mentioned in actual computer science books (not programming), and I have never heard of anyone describing them as the foundation of anything.
Quote:Original post by CTar
Now you have just introduced a bug into your system because you refused to tell what you want, and instead told the compiler what to do.


Clever reasoning. It's good to be reminded of such possible pitfalls. :)
Create-ivity - a game development blog
CTar: "Would you mind explaining that? I have rarely seen bitwise operators mentioned in actual computer science books (not programming), and I have never heard of anyone describing them as the foundation of anything."

You might be interested in reading "Digital Fundamentals" by Floyd. You might have heard of these "transistors" they speak of? With these tiny little things we can implement binary logic operations like XOR, OR, AND, NOT and so on. These are the BASIC building blocks of computer chips.

Every instruction your tiny little processor executes is *implemented* using these things. I assure you they exist and it is NOT magic. I repeat: it is NOT magic. It's mathematics, electronics and other sleight-of-hand the hard-working people at Intel, AMD, ATI, NVIDIA and other companies are using to steal your hard-earned money. ;)

It's a mild statement to say that a programmer worth the title should understand at least what the hell all this is based on. Oh, and these operations are also implemented as instructions in most general-purpose CPUs, for the simple reason that they are very useful for computing all kinds of things your imagination might conjure.

What do you think happens when you write an expression such as:

if ( a && b ) { ... }

(where a and b can be sub-expressions; the emphasis is on the && operator)

The point being that you will be hard pressed to find an actual "&&" (logical AND) instruction in the instruction set of the architecture you might be compiling software for. Take a wild guess what might be happening "behind the scenes"?

These things are all around the topic of computers and I'm baffled that I have to even mention this in a *programming forum* of all things. Geez.

This topic is closed to new replies.
