bit orders, right, and left.

Started by ironfroggy. 4 comments, last by ironfroggy 22 years, 5 months ago
I was reading that Bitwise Operators article and it fanned a question that has been burning in my mind lately: why are bits thought of in the order they are? Within a byte, bit 0 is thought of as the right-most bit, with the highest bit on the left. But the bits are not actually arranged like this, so why the illusion? Does it have to be like this? The reason I ask is that, for fun, I am designing my own asynchronous system. I can't imagine any reason for having the LSB after the MSB. Just think of how you do math: from the least significant digit to the most significant. Right? So shouldn't computers do it the same? Or do they? I'm a bit confused.
(http://www.ironfroggy.com/)(http://www.ironfroggy.com/pinch)
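A quick C sketch of the convention the question is about: "bit n" just names the bit with weight 2^n, and you reach it with a shift and a mask, no matter how the silicon physically arranges the bits (the get_bit helper here is purely illustrative):

#include <stdio.h>

/* Bit n is, by convention, the bit worth 2^n; extracting it is a
   shift and a mask, regardless of physical layout. */
static unsigned get_bit(unsigned value, unsigned n) {
    return (value >> n) & 1u;
}

int main(void) {
    unsigned x = 0x0Au;                 /* binary 1010 */
    for (unsigned n = 0; n < 4; n++)
        printf("bit %u = %u\n", n, get_bit(x, n));
    /* prints: bit 0 = 0, bit 1 = 1, bit 2 = 0, bit 3 = 1 */
    return 0;
}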
The way they are ordered varies from platform to platform. Right-to-left and left-to-right are called "big-endian" and "little-endian". The reason you see a lot of people saying bit 0 is the right-most is that it's easier to visualize when doing two's complement and other bitwise operations.
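For instance, here is a minimal C sketch of why that picture is handy for two's complement, assuming the usual rule that negation is "flip every bit, then add one", with bit 0 drawn on the right:

#include <stdio.h>

int main(void) {
    signed char x = 5;                        /*  5 = 0000 0101 */
    signed char neg = (signed char)(~x + 1);  /* flip the bits, add one */
    printf("~5 + 1 = %d\n", neg);             /* prints -5 (1111 1011) */
    return 0;
}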

I'll register later

Kevin King
- DigiPen Institute of Technology
Although bits in a computer are used to represent numbers, this does not need to be the case. The bits can represent other things, such as colour, the logical state of sensors or buttons, etc. Even when it comes to number representation the bits can mean different things - we have binary, one's complement, two's complement, sign-magnitude and binary coded decimal, to name but a few, and in each of these representations the individual bits mean different things.

I hope this isn't too confusing, but at the end of the day we have, say, 32 bits to use to represent something, and we can do with them as we want (within the limitations of the computer architecture).
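To make that concrete, a small C sketch (the 0x95 test pattern and the decoding expressions are mine, for illustration only) that reads one and the same byte under several of those representations:

#include <stdio.h>

int main(void) {
    unsigned char bits = 0x95;   /* the pattern 1001 0101 */

    int as_unsigned  = bits;                                   /* 149  */
    int as_twos_comp = (bits & 0x80) ? bits - 256 : bits;      /* -107 */
    int as_sign_mag  = (bits & 0x80) ? -(bits & 0x7F) : bits;  /* -21  */
    int as_ones_comp = (bits & 0x80) ? -(~bits & 0xFF) : bits; /* -106 */
    int as_bcd       = (bits >> 4) * 10 + (bits & 0x0F);       /* 95   */

    printf("unsigned %d, two's comp %d, sign-mag %d, one's comp %d, BCD %d\n",
           as_unsigned, as_twos_comp, as_sign_mag, as_ones_comp, as_bcd);
    return 0;
}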

henry
Henry, Lecturer in Computer Games Technology, University of Abertay Dundee, Scotland, UK
The simple answer: the binary digits are arranged in the same sequence as digits in the other common numbering systems, such as decimal (you know that one, right?), hexadecimal and octal.
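In other words, positional notation follows the same rule in every base; a throwaway C example (the value 166 is arbitrary):

#include <stdio.h>

int main(void) {
    int n = 166;
    /* Most significant digit on the left, least significant on the
       right, in every base. */
    printf("decimal: %d\n", n);   /* 166 = 1*100 + 6*10 + 6*1 */
    printf("hex:     %x\n", n);   /* a6  = 10*16 + 6*1        */
    printf("octal:   %o\n", n);   /* 246 = 2*64 + 4*8 + 6*1   */
    return 0;
}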

"I contend that we are both atheists. I just believe in one fewer god than you do. When you understand why you dismiss all the other possible gods, you will understand why I dismiss yours." - - Stephen Roberts
--AnkhSVN - A Visual Studio .NET Addin for the Subversion version control system.[Project site] [IRC channel] [Blog]
quote: Original post by ironfroggy
Just think of how you do math, from the least significant digit to the most significant. Right? So shouldn't computers do it the same? Or do they? I'm a bit confused.

When you do math (in decimal), which digit is more significant - left or right? But which direction do you add and multiply in? And in binary? How about division and subtraction - decimal and binary?

The same? Well, congratulations.
63 + 45 = 09

Or, at least, it does when you order the digits differently: that's 36 + 54 = 90 with the digits of each number written least-significant-first. It might be tricky for you to follow, but from a hardware point of view, adding bits in one order is no more difficult than adding them in another. Indeed, if you aren't using bits for mathematical purposes, it makes no sense to talk about them being in either MSB-first or LSB-first order.
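For the curious, a small C sketch of that idea (the add_lsd_first helper is hypothetical, written for this post): it adds numbers stored least-significant-digit-first, the same ordering the "63 + 45 = 09" line uses, and the carry logic is no harder than usual:

#include <stdio.h>

/* Add two n-digit decimal numbers stored least-significant-digit first;
   out must have room for n+1 digits (the final carry). */
static void add_lsd_first(const int *a, const int *b, int *out, int n) {
    int carry = 0;
    for (int i = 0; i < n; i++) {
        int d = a[i] + b[i] + carry;
        out[i] = d % 10;
        carry = d / 10;
    }
    out[n] = carry;
}

int main(void) {
    /* 36 and 54 with digits reversed: "63" and "45" */
    int a[] = {6, 3}, b[] = {4, 5}, sum[3];
    add_lsd_first(a, b, sum, 2);
    printf("%d%d%d\n", sum[0], sum[1], sum[2]);  /* prints 090: "09" plus
                                                    a zero carry, i.e. 90 */
    return 0;
}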

From a programmer's point of view, you don't need to worry about the order the bits are in. What might, occasionally, be of interest is the byte order.

To correct AP, the endianness of a machine doesn't refer to bit order, but byte order. A big-endian machine has the most significant byte first, whilst a little-endian machine has the least significant byte first.

This is actually significant, particularly to C programmers. If you're storing a 4-byte long integer in memory and need to cast it to a 2-byte short integer, then the way you get the short integer is dependent upon byte order.

In a little-endian system, the bytes are arranged like '1234'. Bytes '12' are its short value. To cast a long to a short, then, simply requires that the compiler generate instructions that operate upon a short.

In a big-endian system, the bytes are arranged like '4321'. Bytes '21' are its short value, but are also offset 2 bytes from the start of the long. To cast a long to a short on this system might require that you add two to the address as well as generating suitable instructions.

In practice, the compiler will handle everything. It will manage the offsets so that they don't have to be adjusted at runtime, and will always know what instructions you're telling it to use.
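Here is a short C demonstration of the byte-order point (it uses memcpy rather than pointer casts to stay well-defined; the 0x01020304 test value is arbitrary):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    uint32_t value = 0x01020304;
    unsigned char bytes[4];
    memcpy(bytes, &value, sizeof value);
    printf("bytes in memory: %02x %02x %02x %02x\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);

    uint16_t at_base;
    memcpy(&at_base, &value, sizeof at_base);
    printf("16-bit read at base address: %04x\n", at_base);
    /* Little-endian: 04 03 02 01 in memory, so the read gives 0304,
       the numerically low half, at the same address as the long.
       Big-endian: 01 02 03 04 in memory, so the read gives 0102; the
       low half (0304) sits two bytes further on. */
    return 0;
}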

Signatures? We don't need no steenking signatures!
CoV

This topic is closed to new replies.
