Archived

This topic is now archived and is closed to further replies.

Daishim

Why Hex?

Recommended Posts

I've seen a lot of things, particularly in the Win32 SDK, that are referenced with hex values. I've done a little research into hex, but still can't find out exactly why it's used. Is there a particular reason? ... or just because they can? I know only that which I know, but I do not know what I know.

Computers deal in binary. Writing out binary is tedious and error prone, and decimal doesn't map cleanly onto it, so hex is an easy, compact way to represent binary values.

Slow and steady wins the race.

Hexadecimal has the handy property of converting easily to most of the other commonly used numerical bases (with the exception of decimal). One hex digit is equivalent to exactly four binary digits, and converting octal to hex isn't too much of a stretch.

It's convenient, more compact than binary and less error prone.
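The four-bits-per-hex-digit correspondence is easy to see in code; here's a small Python sketch (the example value 0xC3 is arbitrary):

```python
value = 0xC3  # 1100 0011 in binary

# Each hex digit maps to exactly one group of four bits (a nibble).
print(format(value, "08b"))        # -> 11000011
print(format(value >> 4, "04b"))   # high nibble: 0xC -> 1100
print(format(value & 0xF, "04b"))  # low nibble:  0x3 -> 0011

# Going the other way: any 4 binary digits fit in one hex digit.
print(hex(int("1100", 2)))         # -> 0xc
```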

quote:
Original post by Oluseyi
It's convenient, more compact than binary and less error prone.


I have bad memories of typing endless listings of hex out of computer magazines to get whatever the game of the month was...

Hehe, and you probably all recall the "good old days" when we programmed machine code on our Commodore 64 by placing hex values directly in memory, after looking up your hand-written assembly program in the opcode table... < shivers >

>> It's much better than decimal in almost all respects...

If we were to change counting base, I would vote for base 12 instead. Although that gives 3 bits, I think the advantage in daily life is greater with base 12, since 12 is divisible by a lot of small integers: 1, 2, 3, 4, and 6. By using 10 or even 16 we lose divisibility by 3, which is a shame. I hate having to split the cost of something between three people when you're out buying something and not being able to do the split evenly in our inferior base-10 system.

quote:
Original post by felonius
>> It's much better than decimal in almost all respects...

If we were to change counting base, I would vote for base 12 instead. Although that gives 3 bits, I think the advantage in daily life is greater with base 12, since 12 is divisible by a lot of small integers: 1, 2, 3, 4, and 6. By using 10 or even 16 we lose divisibility by 3, which is a shame. I hate having to split the cost of something between three people when you're out buying something and not being able to do the split evenly in our inferior base-10 system.


I concede that point. Plus, base 12 means we'd have a much smaller multiplication table than 16.


codeka.com - Just click it.

Guest Anonymous Poster
quote:
Original post by felonius
I would vote for base 12 instead. Although that gives 3 bits

No, base-12 requires 4 bits to represent (octal is 3 bits), and base-16 would (IMO) be preferable in the world of computers, since it's also 4 bits and all bit combinations are valid base-16 digits. IRL base-12 could have its advantages, though.
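The bit counts in this exchange are easy to check: one digit of base b needs ceil(log2(b)) bits. A quick Python sketch:

```python
import math

def bits_per_digit(base: int) -> int:
    """Bits needed to store one digit of the given base."""
    return math.ceil(math.log2(base))

# Octal digits fit in 3 bits; decimal, base-12 and hex digits all need 4.
for base in (8, 10, 12, 16):
    print(base, bits_per_digit(base))  # 8->3, 10->4, 12->4, 16->4
```

Only base 16 uses all 16 patterns of its 4 bits, which is the point being made above.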

quote:
Original post by felonius
Hehe, and you probably all recall the "good old days" when we programmed machine code on our Commodore 64 by placing hex values directly in memory, after looking up your hand-written assembly program in the opcode table... < shivers >


You're damn right! And we didn't need all your fancy general purpose registers. Two index registers and an accumulator was all we needed. Young folk today have it easy; too few addressing modes is their problem.

quote:
Original post by Michalson
Young folk today have it easy, too few addressing modes is their problem.


The 68k had what... 14 addressing modes (including post-increment and pre-decrement)?

Documents [ GDNet | MSDN | STL | OpenGL | Formats | RTFM | Asking Smart Questions ]
C++ Stuff [ MinGW | Loki | SDL | Boost | STLport | FLTK | ACCU Recommended Books ]

I read once that before bits were grouped by 8 in memory, they were grouped by 4, so using hexadecimal representation you could represent such a memory unit with only one digit.

The only thing I don't like about hex (apart from the fact that my brain doesn't auto-convert hex to decimal and vice versa, except for some special values) is that keyboards are not designed for typing hex.

It would be great if someone could make a hex numeric pad


----
David Sporn AKA Sporniket

quote:

It's much better than decimal in almost all respects... Check this out: http://www.intuitor.com/hex/switch.html



You can't use hex as our everyday number system, for the simple fact that you'd get confused. They'd need different symbols for the values 10-15 instead of A-F if it was to work.

Consider the numbers FACE, CAFE, BEEF, ACE. That doesn't work too well when mixed in with words.

Also, from that article, dividing a line into segments has absolutely no relation to doing mathematical calculations with numbers.

Oh yeah, and you can't count to 16 on your fingers!


[edited by - cgoat on June 7, 2002 2:25:46 PM]

quote:
Original post by felonius
Hehe, and you probably all recall the "good old days" when we programmed machine code on our Commodore 64 by placing hex values directly in memory, after looking up your hand-written assembly program in the opcode table... < shivers >


I know C compilers were made in assembly, but now I know what the assembler was made in!

The only x86 opcodes I remember now are EB (short jump), E9 (long jump), 90 (nop) and B8 (mov eax, const). Wonder what I use them for?

>> I know C compilers were made in assembly, but now I know what the assembler was made in!

But thank god we had hex. The people before us had to work in raw binary digits. The first real computers (such as ENIAC from WWII; the mechanical analytical engines from the 19th century don't count) had large boards with a switch for each bit. You then programmed the machine by flipping switches. From that perspective, hex coding is very efficient and easy. On the other hand, memory wasn't so large, so it wasn't that big of a problem. Well, physically ENIAC *was* large, 80 feet long and 8.5 feet wide, but it had only about 18,000 vacuum tubes. Anyway, people found out that they could do this more efficiently by creating a large stencil with holes where the switches should *not* be pushed, and then pressing it over all the switches at once. The first "instant" software loader was invented

But seriously, C/C++ compilers are no longer written in pure assembler, and assemblers are not written in raw opcodes. That was just the first ones. Today we use cross-compilation and bootstrapping to avoid having to dig into the low levels, so all of them are written in C or C++ (or some other higher-level language).

[edited by - felonius on June 7, 2002 3:35:45 PM]

Guest Anonymous Poster
quote:
Original post by cgoat
Oh yeah, and you can't count to 16 on your fingers!


Yes you can! In fact, you can do it on one finger. Furthermore, you can count to 1,023 on both hands, and 1,048,575 using your toes as well.

Just treat each of your fingers as a binary digit (no pun intended).

Right thumb = 1
Right index = 2
Right middle = 4
Right ring finger = 8
Right pinky = 16
Left pinky = 32
Left ring finger = 64
Left middle = 128
Left index = 256
Left thumb = 512

Ta da. Your hands are a 10-bit number.
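The finger-to-bit mapping above is just positional binary; here's a quick Python sketch of the same table (the finger names are of course just labels):

```python
# Each finger is one bit; its value is a power of two, as listed above.
fingers = [
    ("right thumb", 1), ("right index", 2), ("right middle", 4),
    ("right ring", 8), ("right pinky", 16),
    ("left pinky", 32), ("left ring", 64), ("left middle", 128),
    ("left index", 256), ("left thumb", 512),
]

# With all ten fingers raised you get the maximum 10-bit value.
print(sum(value for _, value in fingers))  # -> 1023

# To show a number on your hands, raise the fingers whose bits are set.
def fingers_for(n: int) -> list:
    return [name for name, value in fingers if n & value]

print(fingers_for(16))  # -> ['right pinky']  (16 really is one finger)
print(fingers_for(3))   # -> ['right thumb', 'right index']
```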

Guest Anonymous Poster
quote:
Original post by felonius
The people before us had to work in binary digits. The first real computers (such as ENIAC from WWII - the mechanical analytical engines from the 19th century do not count) had large boards with a switch for each bit. You then programmed it by pressing switches.


The ENIAC may have been the basis for all modern computing machinery, but it WAS in fact a Base 10 system -- one of its many problems. For each digit you had to have ten vacuum tubes, one for each possible value for 0-9. Using binary in electronics was not so patently obvious in the early days. Otherwise your point stands.

quote:
Original post by Anonymous Poster
The ENIAC may have been the basis for all modern computing machinery, but it WAS in fact a Base 10 system -- one of its many problems. For each digit you had to have ten vacuum tubes, one for each possible value for 0-9. Using binary in electronics was not so patently obvious in the early days. Otherwise your point stands.



Using the scheme you suggest would be stupid indeed, and they did NOT do it. They do state that they used decimal digits, but that is not the same as saying they used that odd scheme.

I looked it up on a site about ENIAC's history, and it said:

"
The core storage unit, the first operational unit of its kind, was built by the Burroughs Corporation. The Binary coded decimal, excess three, system of number representation was used. It was operated successfully three days after its arrival at BRL and continued in service until the ENIAC was retired.
"

Binary coded decimal (BCD) uses 4 bits to store one decimal digit. If I recall correctly, the x86 processors still have support for BCD numbers; they only went out of fashion many years after ENIAC was retired. (Excess-3 means that you add 3 to each digit before storing it.)
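As a sketch of the encoding being described (excess-3 BCD: add 3 to each decimal digit, then store it in its own 4-bit nibble):

```python
def excess3_encode(n: int) -> str:
    """Encode a non-negative integer as excess-3 BCD, one 4-bit nibble per digit."""
    return " ".join(format(int(d) + 3, "04b") for d in str(n))

def excess3_decode(bits: str) -> int:
    """Decode a space-separated string of excess-3 nibbles back to an integer."""
    digits = [int(nibble, 2) - 3 for nibble in bits.split()]
    return int("".join(str(d) for d in digits))

# Digits 1,9,4,6 are stored as 4,12,7,9:
print(excess3_encode(1946))                  # -> 0100 1100 0111 1001
print(excess3_decode("0100 1100 0111 1001"))  # -> 1946
```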

So, AP, I guess we both are partially right. It is both binary and decimal.

Guest Anonymous Poster
quote:
Original post by felonius
So, AP, I guess we both are partially right. It is both binary and decimal.


I swear I had read it that way. Then again, there's lots of misinfo about ENIAC, so I shouldn't be surprised, I guess. Then there's the whole issue of who actually came up with what, and the bitterness between M & E and von Neumann and Atanasoff, etc. And don't even get the Colossus people started.

Yes, x86 has instructions for BCD.

However, they are not extended to 64 bits in x86-64. This is because most people use compilers now, and compilers have little use for BCD. Plus, if your PC can do math with 64-bit integers quickly, there's no need to store currency and whatnot in BCD.

I do know that The Legend of Zelda for the SNES uses BCD for some of its numbers. I stumbled upon this when trying to find cheat codes, and numbers would jump from 9 to 16!!
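That 9-to-16 jump is exactly what you see when a BCD byte is read as plain binary: decimal 10 is stored as the nibbles 0001 0000, i.e. the byte 0x10, which misreads as 16. A hypothetical sketch (the packing function is illustrative, not the game's actual code):

```python
def to_bcd_byte(n: int) -> int:
    """Pack a two-digit decimal number into one BCD byte, one digit per nibble."""
    assert 0 <= n <= 99
    return ((n // 10) << 4) | (n % 10)

# A cheat-code search reading raw bytes sees the BCD value as plain binary:
print(to_bcd_byte(9))   # -> 9   (0x09: same value either way)
print(to_bcd_byte(10))  # -> 16  (0x10: reads as 16 in plain binary)
```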

We should change our DNA to have 16 fingers so that we can switch to hex. Coming up with 6 new numeric symbols wouldn't be hard. Or we could not count on our thumbs and use octal, but I prefer hex (except when doing long division).

--TheMuuj

Guest Anonymous Poster
quote:
Original post by TheMuuj
We should change our DNA to have 16 fingers so that we can switch to hex. Coming up with 6 new numeric symbols wouldn't be hard. Or we could not count on our thumbs and use octal, but I prefer hex (except when doing long division).


...or just count in the way previously suggested.

You can even go one better and count to 2^20 (a meg) on your hands by using the "half-states" of your fingers as additional bits.
