
Reading Binary: L-to-R or R-to-L?


6 replies to this topic

#1 Malabyte   Members   -  Reputation: 583


Posted 20 August 2013 - 12:40 AM

    Probably a trivial question, but here goes:

 

    I've got a family member with a background in telecommunications and IT. One day, when I wrote binary code right-to-left, he corrected me and insisted that binary is to be written left-to-right, stating that the LSD (Least Significant Digit) is the first one from the left. But I just don't get it. Every website I've ever visited, including several articles, Wikipedia and a number of YouTube videos, states that binary is to be read/written from right-to-left.

 

    As such, my stance is that the example number "5" is to be written as "00000101" and not as "10100000". His stance is the opposite. Now, I'm pretty certain that I'm right about this, because I've seen it demonstrated by several sources, and I think he's simply wrong for whatever reason. But he does have a much stronger background than myself. So I'm thinking there must be some reason he insists the way he does?

 

    Immediately, I'm thinking that he's right about the machine reading it all from left-to-right. But who cares about the machine (no offence to HAL 9000)? We humans write and count binary code from right-to-left, building up from the least significant digit (just like the decimal system).

 

    Could anyone here please clarify this once and for all? When is binary read right-to-left, and when is it read left-to-right? Thanks in advance.


Edited by Malabyte, 20 August 2013 - 12:45 AM.

- Awl you're base are belong me! -

- I don't know, I'm just a noob -



#2 D_Tr   Members   -  Reputation: 362


Posted 20 August 2013 - 12:50 AM

I cannot remember ever having seen a binary number written with the most significant bit on the right. We are used to reading numbers starting from the left with the most significant digit, so I find it more convenient to read binary numbers the same way. As for the machine, it has no concept of left and right. The only thing the machine "knows" is which wire carries a bit of a given significance. The wires carrying a number in an integrated circuit might even be laid out in a messy way in some cases.
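As a quick sanity check, mainstream formatting routines all follow this convention; in Python, for example:

```python
# format() zero-pads to 8 binary digits, with the MSB on the left:
n = 5
assert format(n, '08b') == '00000101'
```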


Edited by D_Tr, 20 August 2013 - 12:54 AM.


#3 Paradigm Shifter   Crossbones+   -  Reputation: 5125


Posted 20 August 2013 - 01:20 AM

You write the number with the most significant digit on the left, just like in any other base...

 

You do number the bits from right to left though: bit 0 is the LSB, and bit N has the value (1 << N), i.e. 2^N.
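That numbering can be sketched like this (Python used purely for illustration):

```python
n = 0b101  # five, written MSB-first as usual

# Bits are numbered right-to-left: bit 0 is the LSB.
bit0 = (n >> 0) & 1  # 1
bit1 = (n >> 1) & 1  # 0
bit2 = (n >> 2) & 1  # 1

# A lone bit N has the value 1 << N, i.e. 2**N.
assert (1 << 3) == 2**3 == 8
```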

 

Family member is having you on or tripping on bad acid, IMHO.


"Most people think, great God will come from the sky, take away everything, and make everybody feel high" - Bob Marley

#4 Hodgman   Moderators   -  Reputation: 27586


Posted 20 August 2013 - 01:28 AM

We're taught in maths that the least significant digit is the right-most one, and more significant digits are added on the left side.

Changing the definition of what a digit is (i.e. changing the base) doesn't affect this.

 

You can open up Windows Calculator and press Alt+3 for programmer mode, which lets you use base-10, base-16, base-8 and base-2.

It doesn't flip the convention around back-to-front when you select base-2.

 

Your family member must have worked with a bunch of systems / in a job where binary was used pragmatically, without taking mathematical conventions into consideration. They might be correct with regards to their own experience.

i.e. they might have worked with some strange computer system that used this convention... but yes, this is weird.

 

Also, the left-shift and right-shift operations would be backwards in your family member's world.

In any system I've ever used, 2<<1 == 4 (in binary: 10 << 1 == 100).

But using this alternate convention, 2<<1 == 1  (in binary: 01 << 1 == 1)... which is just plain wrong, and confusing.
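That shift behaviour is easy to verify; a minimal Python check:

```python
# Left shift moves digits toward the more significant (left) end,
# multiplying by two, exactly as the MSB-left convention predicts:
assert 2 << 1 == 4
assert format(2, 'b') == '10'
assert format(2 << 1, 'b') == '100'
```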


Edited by Hodgman, 20 August 2013 - 01:33 AM.


#5 Cornstalks   Crossbones+   -  Reputation: 6966


Posted 20 August 2013 - 01:45 AM

42.

Now, did I just write forty-two or twenty-four?

Similarly in binary, 10.

Did I just write two, or one?

People write numbers with the most significant digit on the left. Your friend is just weird.

It's possible he's thinking about endianness. An individual byte has no endianness, so we always write it most significant digit first. Endianness only concerns the ordering of multi-byte sequences. Sometimes, when writing out a little-endian byte sequence, people will write the bits within each byte LSB-first as well, to keep the whole thing consistent.
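One way to see the bytes-vs-bits distinction concretely, using Python's standard struct module:

```python
import struct

# Pack the 32-bit value 0x12345678 in both byte orders.
little = struct.pack('<I', 0x12345678)
big    = struct.pack('>I', 0x12345678)

# Endianness reorders the *bytes*, not the bits within a byte:
assert little == b'\x78\x56\x34\x12'
assert big    == b'\x12\x34\x56\x78'
```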

Sometimes, though, we write bits in streams. For example, some binary formats or communication protocols are defined as streams of bits. In these situations, some numbers may be least significant digit first. However, this depends on the specific context and specification.

In other words, unless you're following a specific specification in a specific context, binary is written most significant digit first, just like every other number in every other base.

For additional ammo, consider binary literals in code. These are always written most significant digit first. You can try it in C++14, Java 7, Python, and other languages.
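A quick check in Python (shown here because it is easy to run interactively; the 0b prefix works the same way in C++14 and Java 7):

```python
# Binary literals are written most significant digit first:
assert 0b101 == 5
assert 0b00000101 == 5    # leading zeros don't change the value
assert bin(5) == '0b101'  # and Python prints them MSB-first too
```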
[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

#6 Malabyte   Members   -  Reputation: 583


Posted 20 August 2013 - 01:53 AM

Ok, thanks for the replies, guys. However nitpicky this might be for someone like me who should only be worrying about Java atm, I always hate it when something or someone gives conflicting information. It's an awkward predicament that gnaws at my brain and that I can't put away, haha. Well, thankfully it's resolved now. Thanks again.


Edited by Malabyte, 20 August 2013 - 01:54 AM.

- Awl you're base are belong me! -

- I don't know, I'm just a noob -


#7 BGB   Crossbones+   -  Reputation: 1545


Posted 20 August 2013 - 03:24 AM

actually, I don't think it is quite so clear cut...

 

 

while it makes the most sense to always write numbers starting with the MSB on the left, this isn't always the most applicable for things like bitstreams.

 

there are basically two different ways bitstreams have been conventionally done:

big-big: MSB first starting at MSB;

little-little: LSB first starting at LSB.

 

if you start writing out the bit patterns for a big-big bitstream, the MSB goes on the left. then everything comes out in a "sane" ordering.

 

but, if you do this for a little-little bitstream, then something ugly happens:

for every byte, the bits are essentially "flipped", and byte-boundaries need to be taken into account.

 

so, by convention, people start with the LSB on the left for these.

 

ex: (3)5, (8)0x42, (5)0xC (i.e. a 3-bit 5, an 8-bit 0x42, and a 5-bit 0xC)

 

MSB-left, big-big: 101,01000010,01100

MSB-left, little-little: 00010,101-01100,010

LSB-left, little-little: 101,01000010,00110

 

while this isn't a great example, you can see in the middle case that things get broken up weirdly across byte boundaries.

in the latter case, at least all the bits group together nicely.
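fwiw, the little-little packing above can be sketched roughly like this (Python; pack_lsb_first is a hypothetical helper name, not from any library), reproducing the byte pattern of the middle line of the example:

```python
def pack_lsb_first(fields):
    """Pack (value, width) fields into bytes, LSB-first ("little-little").

    The first field's bit 0 lands in bit 0 of the first output byte.
    """
    acc = 0      # accumulated bits, low bits filled first
    nbits = 0    # how many bits are currently in acc
    out = []
    for value, width in fields:
        acc |= (value & ((1 << width) - 1)) << nbits
        nbits += width
        while nbits >= 8:          # flush completed bytes
            out.append(acc & 0xFF)
            acc >>= 8
            nbits -= 8
    if nbits:
        out.append(acc & 0xFF)     # final partial byte, zero-padded
    return bytes(out)

# The example fields: a 3-bit 5, an 8-bit 0x42, a 5-bit 0xC.
data = pack_lsb_first([(5, 3), (0x42, 8), (0xC, 5)])

# Written MSB-on-the-left per byte this is 00010101 01100010, i.e. the
# "flipped-looking" middle line of the example above.
assert [format(b, '08b') for b in data] == ['00010101', '01100010']
```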

 

flipping the bits also makes reasoning about the workings of the bitstream nicer, since it generally makes more sense to think in terms of the logical clumping of bits rather than the actual numerical values (observe, for example, that the first and last cases have the same grouping despite the bit order being reversed).

 

 

this applies to some extent to both hardware (such as in a serial-communication line or bus), and in some cases when dealing with things like bit-oriented file-formats or data compression.

 

but, a lot of this mostly depends on the specific hardware and/or bitstream-format in question...

 

 

FWIW, little-little apparently dominates in serial-communications hardware.

so, this could be part of where your family member is coming from...

 

it is also fairly common in compression formats, partly because the bit manipulations can generally be implemented very slightly more cheaply (big-big tends to require occasionally computing things like "16-n" or "32-n" where little-little would simply use 'n', though there are workarounds). saving a few clock cycles here and there can make a noticeable impact on codec performance, more so for things like reading/writing sequences of bits, which can sometimes become a bottleneck...

 

or such...





