why 8 bits in a 'byte'?

Does anyone know why there are 8 bits in a 'byte'? From my understanding, historically, punch cards had 64 possible representations (6 bits) (I might be horribly wrong), and the ASCII table originally had 128 characters (7 bits), so why did the world seem to settle on 8-bit bytes? (Read: why did everyone make computers with 8-bit register sizes?) Why not 7, 9, 10, or more?

ASCII was 7 data bits plus 1 parity bit. Then people realised you could use the 8th bit as an extra data bit and have 'extended ASCII' to make DOS programs look less crap. Ooooh oooh. Then the whole power-of-two thing took over, although it's still not universal - the Ubicom SX family of microcontrollers has 12-bit words, but each bit is individually addressable, so by the "smallest addressable unit" definition each bit is technically a byte.
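To make the parity scheme concrete, here is a minimal C sketch (the function names add_parity and parity_ok are invented for illustration, not taken from any real serial library): the transmitter XORs the 7 data bits together and stores the result in the 8th bit, so the receiver can detect any single flipped bit.

#include <stdio.h>

/* Even parity over classic 7-bit ASCII: the 8th bit is set so that
   the whole byte has an even number of 1 bits. (Illustrative sketch;
   these helper names are invented for the example.) */
unsigned char add_parity(unsigned char c)
{
    unsigned char bits = c & 0x7F;   /* the 7 ASCII data bits */
    unsigned char parity = 0;
    for (int i = 0; i < 7; i++)
        parity ^= (bits >> i) & 1;   /* XOR of all data bits */
    return bits | (parity << 7);     /* parity goes in the 8th bit */
}

/* Receiver side: a byte passes even parity iff the XOR of all 8 bits is 0. */
int parity_ok(unsigned char c)
{
    unsigned char x = 0;
    for (int i = 0; i < 8; i++)
        x ^= (c >> i) & 1;
    return x == 0;
}

int main(void)
{
    unsigned char sent = add_parity('C');  /* 'C' = 0x43, odd bit count -> 0xC3 */
    printf("sent 0x%02X, ok=%d\n", sent, parity_ok(sent));
    printf("after a bit flip, ok=%d\n", parity_ok(sent ^ 0x04));
    return 0;
}

A single flipped bit changes the XOR, so the corrupted byte fails the check; two flips would cancel out and pass, which is why parity only catches odd numbers of errors.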

Teh Googol Knoez All! That's gotta be worth caek.

Seeing as computers are based on the binary number system, storing ASCII codes in a 7-bit variable makes no sense, since 7 is not a power of 2; the logical step was to use 8 bits to store ASCII characters, since 8 is a power of 2.
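Whatever the historical accuracy of that, there is a practical kernel in it: on byte-addressable hardware, fetching the Nth value from a densely packed 7-bit buffer takes shift-and-mask work across byte boundaries, while 8-bit values are a plain array index. A minimal C sketch (function names invented; assumes LSB-first packing and one spare byte of padding at the end of the buffer):

#include <stdint.h>
#include <stddef.h>

/* Fetch the n-th 7-bit code from a densely packed buffer. A code can
   straddle a byte boundary, so read a 16-bit window and shift.
   (Sketch only: assumes LSB-first packing and one spare padding byte
   at the end of buf.) */
uint8_t get_packed7(const uint8_t *buf, size_t n)
{
    size_t bit = n * 7;                  /* absolute bit offset */
    uint16_t window = buf[bit / 8] | ((uint16_t)buf[bit / 8 + 1] << 8);
    return (window >> (bit % 8)) & 0x7F;
}

/* Fetch the n-th 8-bit code: a single aligned load, no arithmetic. */
uint8_t get_byte8(const uint8_t *buf, size_t n)
{
    return buf[n];
}

The packed form saves one bit in eight, but the byte-aligned form is what the addressing hardware gives you for free, and that trade has generally won.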

quote:
Original post by pinacolada
Now I have a question - why'd they spell it "byte" instead of "bite"?


Wiki says: "The word was coined by mutating the word bite so it would not be accidentally misspelled as bit."

I like pie.

quote:
Original post by Spudder
Seeing as computers are based on the binary number system, storing ASCII codes in a 7-bit variable makes no sense, since 7 is not a power of 2; the logical step was to use 8 bits to store ASCII characters, since 8 is a power of 2.


you're making that up.

quote:
Original post by petewood
quote:
Original post by Spudder
Seeing as computers are based on the binary number system, storing ASCII codes in a 7-bit variable makes no sense, since 7 is not a power of 2; the logical step was to use 8 bits to store ASCII characters, since 8 is a power of 2.


you're making that up.


That came from my uni lecturer; I was just passing the info on.

quote:
Original post by petewood
quote:
Original post by Spudder
Seeing as computers are based on the binary number system, storing ASCII codes in a 7-bit variable makes no sense, since 7 is not a power of 2; the logical step was to use 8 bits to store ASCII characters, since 8 is a power of 2.

you're making that up.

According to the article you linked to:
quote:
It seemed reasonable to make a universal 8-bit character set, handling up to 256. In those days my mantra was "powers of 2 are magic". And so the group I headed developed and justified such a proposal

Programming for any length of time tends to instill a certain fondness for powers of 2 in people. It's not surprising that 8 was picked - 4 is too small and 16 seemed too big at the time (Unicode would have seemed very wasteful given the amount of memory they had to deal with in those days). Even the thought of a 7- or 9-bit byte just feels wrong to me; I've probably spent too long staring at hex.
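The hex comfort has a concrete basis: one hex digit encodes exactly 4 bits, so an 8-bit byte is always exactly two hex digits, whereas a 7- or 9-bit unit never lines up with the notation. A quick C sketch:

#include <stdio.h>

int main(void)
{
    unsigned char b = 0xA7;              /* one byte */
    unsigned high = (b >> 4) & 0x0F;     /* upper nibble: 0xA */
    unsigned low  = b & 0x0F;            /* lower nibble: 0x7 */
    printf("0x%02X splits into 0x%X and 0x%X\n", b, high, low);
    return 0;
}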

You know you've been programming too much when...
...the various powers of two hold special significance to you and you can't help but laugh inwardly at some private joke when you find one being used in a non-programming-related situation.
...you know the powers of two by heart up to the 16th (at least!!)
...you feel more at ease working with numbers like 32 than numbers like 30. Base 10 is inefficient and for primitive lifeforms anyway. You can count on your fingers so much further if you use binary anyhow.
...you can readily think of more examples in this vein, as well as anecdotes to support them.

quote:
Original post by mattnewport
Programming for any length of time tends to instill a certain fondness for powers of 2 in people.

yah, it has done wonders for my OCD

quote:
Original post by amag
So why are there 4 wheels on a car?


Smallest number of wheels for which you get really good stability. Yeah, yeah, there are all those 3-wheeled cars in Europe. People were also used to making 2-axled carriages & carts & whatnot at the time, and it just seemed like a logical extension to carry that format over to the first "horseless carriages".

And your question was facetious, but I don't care.

-me

Guest Anonymous Poster
I know this is a couple of days late, but here is your answer, with a little Wikipedia love: http://en.wikipedia.org/wiki/Byte

The term byte was coined by Werner Buchholz in 1956 during the early design phase for the IBM Stretch computer. Originally it was described as one to six bits; typical I/O equipment of the period used six-bit units. The move to an eight-bit byte happened in late 1956, and this size was later adopted and promulgated as a standard by the System/360. The word was coined by mutating the word bite so it would not be accidentally misspelled as bit.
