Why hexadecimal?

Started by asdqwe
2 comments, last by Zahlman 16 years, 8 months ago
2 questions:
1) Why are so many variables in C++ #define'd as hex (0x...) instead of integers?
2) How can I increment a hex (like 0x23 to 0x24)?
Hexadecimal is just a different way of writing values. They're all stored internally as binary, so this is strictly a human convenience.

Hex is commonly used to improve the readability of certain numbers. It's especially useful for numbers that are combined with bitwise operations (AND, OR, XOR), because each hex digit corresponds directly to 4 binary digits. This makes certain operations or sets of values much easier to understand, especially when you need to know the binary values of the numbers involved. Converting decimal to binary isn't nearly as easy.
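For example, hex makes bit flags easy to read at a glance. A minimal sketch (the FLAG_* names are made up for illustration):

#include <cstdio>

// Each flag occupies its own bit, which is easy to see in hex:
// 0x1 = 0001, 0x2 = 0010, 0x4 = 0100.
const unsigned FLAG_VISIBLE  = 0x1;
const unsigned FLAG_SOLID    = 0x2;
const unsigned FLAG_ANIMATED = 0x4;

int main()
{
    unsigned flags = FLAG_VISIBLE | FLAG_ANIMATED; // OR combines flags: 0x5
    if (flags & FLAG_ANIMATED)                     // AND tests a single flag
        std::printf("animated\n");
    flags &= ~FLAG_ANIMATED;                       // clear the flag: back to 0x1
    std::printf("flags = 0x%X\n", flags);          // prints: flags = 0x1
    return 0;
}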

As for incrementing the values: since they're all stored internally the same way, it doesn't matter. ints don't know about "hex", "binary", "decimal", or "octal"; these are just different ways of telling the compiler which numeric value we want. Assuming the value has been stored in an integral variable i, we can simply write i++; or ++i; to increment it.
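A quick sketch of that (just standard C++; std::hex only changes how the value is printed):

#include <iostream>

int main()
{
    int i = 0x23;                        // the same value as the decimal literal 35
    ++i;                                 // now 36, i.e. 0x24
    std::cout << i << '\n';              // prints 36 (decimal output by default)
    std::cout << std::hex << i << '\n';  // prints 24 (hex is just output formatting)
    return 0;
}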
Ra
To be slightly more precise: Some of the fundamental units of memory are the byte, word, dword and qword.

A byte can store 2^8 different values, a word 2^16, a dword 2^32 and a qword 2^64. It's therefore convenient to specify things in hexadecimal. Take a byte for example: 2^8 values is equivalent to 2 hex characters (0x00 - 0xFF). A word is 4 hex chars (0x0000 - 0xFFFF), a dword 8 hex chars, a qword 16 hex chars. So it's really (mostly) for convenience.

Internally, a number is still just a number though.
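To make the digit counts concrete, a minimal sketch (assuming the fixed-width types from <cstdint> are available):

#include <cstdint>
#include <cstdio>

int main()
{
    uint8_t  b  = 0xFF;                   // byte:  2 hex digits cover 2^8 values
    uint16_t w  = 0xFFFF;                 // word:  4 hex digits cover 2^16 values
    uint32_t dw = 0xFFFFFFFF;             // dword: 8 hex digits cover 2^32 values
    uint64_t qw = 0xFFFFFFFFFFFFFFFFULL;  // qword: 16 hex digits cover 2^64 values

    // The same maximum values written in decimal don't line up with anything:
    std::printf("%u %u %u %llu\n",
                (unsigned)b, (unsigned)w, (unsigned)dw,
                (unsigned long long)qw);  // 255 65535 4294967295 18446744073709551615
    return 0;
}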
Quote:Original post by asdqwe
2 questions:
1) Why are so many variables in C++ #define'd as hex (0x...) instead of integers?
2) How can I increment a hex (like 0x23 to 0x24)?


First off, anything that is #define'd is not, in any way, a "variable". A #define directive is a text substitution hack that has no place being used to represent a constant. For constants, use real constants, i.e. variables that are marked as 'const'.
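For example (MAX_SPRITES and MaxSprites are made-up names for illustration):

#include <cassert>

// Text substitution: the preprocessor pastes 0x10 into the source wherever
// it sees MAX_SPRITES; there is no type, no scope, and nothing for a
// debugger to show.
#define MAX_SPRITES 0x10

// A real constant: typed, scoped, and visible to the compiler and debugger.
const int MaxSprites = 0x10;

int main()
{
    assert(MAX_SPRITES == MaxSprites); // both are the number sixteen
    return 0;
}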

Anyway, numbers are just numbers. Hexadecimal is a way of representing numbers, just like decimal is. When I say "twenty-three", I refer to the number 23, but not the *digits* 23; that's only how the number is represented in decimal. (Which is why it's no problem to say, for example, "twelve" even though that doesn't *name* the decimal digits 12.) Please be sure to distinguish the thing from the representation.

int x = 0x17;
int y = 23;
assert(x == y); // no problem: both are "twenty-three".
x += 1; // the number doesn't care how you initialized it.
x + y; // similarly.


As for why we write things in hex sometimes, it's because it highlights the bit pattern. Every hex digit corresponds to four bits.

int x = 0x5555; // we can easily tell this "looks like" 0101 0101 0101 0101.
int y = 0xAAAA; // and similarly that this "looks like" 1010 1010 1010 1010.
x | y; // must therefore be 1111111111111111, which is 0xFFFF, or 65535.
x + y; // is the same, this time, because none of the bits overlap. But it's
       // not quite as easy to tell, is it?
int a = 21845;
int b = 43690;
a | b; // the same thing, because those are 0x5555 and 0xAAAA written in decimal.
       // But you can't just look at the decimal values and figure out
       // that there aren't any overlapping bits, like you could in hex.
a + b; // On the other hand, anyone who passed elementary school can check this
       // result pretty easily. :)
