Archived

This topic is now archived and is closed to further replies.

Daishim

unsigned char?


I've seen this used all over, but none of the C/C++ books cover what an unsigned character is. What is it?

Guest Anonymous Poster
It's an integer type that can hold values from 0 (zero) to 255, assuming the usual 8-bit char.

The definition of unsigned is that no bit is devoted to the sign (+ or -), so it can only store values from 0 upward. Signed variables reserve one bit for positive or negative.


byte (default signed): -128 to 127 signed, 0 to 255 unsigned

char (default unsigned): same as byte

short (default signed): about -32,000 to about 32,000 signed, 0 to about 64,000 unsigned

int (default signed): about -2 billion to about +2 billion signed, 0 to about 4 billion unsigned

long (default signed): same as int

long long (default signed): -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 signed, 0 to 18,446,744,073,709,551,615 unsigned

Note that I think long long is not supported by all compilers and is slower than an ordinary int, so there's no real reason to use it for ordinary computations. These values apply to the IA-32 (x86) line of processors.

Edited by - gph-gw on June 27, 2001 7:28:08 PM

Edited by - gph-gw on June 28, 2001 2:28:09 AM

Note that 'byte' is not the name of a type in C/C++, and whether a 'char' is signed or unsigned by default depends on the compiler being used. In my experience the most common arrangement is for all integral types, including 'char', to be signed by default (but as I said, it is compiler-dependent).

Both 'signed char' and 'unsigned char' are bytes; which one you use depends only on whether you want them to hold both negative and positive numbers or positive numbers only. If you just use them to hold characters (like 'a', 'b', etc.), then it doesn't matter whether it is signed or unsigned; just use the default ('char').

'long long' is a GCC extension (perhaps other compilers support it too; I don't know for sure). To get the same result in MS Visual C++ you can use a type called '__int64'.

Guys, we mustn't forget that neither C nor C++ makes any guarantees about the size of any type (signed or unsigned), except for the following relationships (this comes verbatim from The C++ Programming Language):

1 == sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
1 <= sizeof(bool) <= sizeof(long)
sizeof(char) <= sizeof(wchar_t) <= sizeof(long)
sizeof(float) <= sizeof(double) <= sizeof(long double)
sizeof(N) == sizeof(signed N) == sizeof(unsigned N)


The important bit is this: "sizes of...objects are expressed in terms of multiples of the size of a char, so by definition the size of a char is 1." This means that a char could be, say, 9 or 16 bits rather than the usual 8. In fact, there are some machines out there where a "byte" and a char are both 16 bits.

In other words, no assumptions can be made about type sizes, which is why C's limits.h and C++'s limits exist. Of course, most of you are only targeting x86 machines, so you're pretty safe with assumptions there. But what about when you're writing stuff for IA-64? The type sizes will be slightly different, and your old code might break if you've made assumptions beyond what I listed above. So you should never assume.

So, here are the ranges for char when using C (using the macros in limits.h):
signed char : SCHAR_MIN to SCHAR_MAX
unsigned char : 0 to UCHAR_MAX
char: CHAR_MIN to CHAR_MAX

And here's the C++ equivalent (using the templates in limits):
signed char : numeric_limits<signed char>::min() to numeric_limits<signed char>::max()
unsigned char: numeric_limits<unsigned char>::min() to numeric_limits<unsigned char>::max()
char: numeric_limits<char>::min() to numeric_limits<char>::max()

That is the true answer to the original question.


Edited by - merlin9x9 on June 27, 2001 8:40:37 PM

Whoops. 'byte' is a Java data type. As for the IA-64, it's compatible with x86 machines; x86 code will run on IA-64 processors. Why would they change the data type sizes, much less lower the number of bits? But, heh, never assume. Thanks for reminding me.

Actually, IA-64 is a whole new architecture, and is not really compatible with IA-32. It does have an emulation mode (kind of like how the 386 and above have 16-bit emulation). However, without knowing all that much about the IA-64 myself, I think it's safe to say the sizes of the various datatypes will change, especially sizeof(int). The int datatype is meant to be optimized for speed; that's why it's 32-bit on the 386 and above (with a 16-bit compiler, int will likely be 16 bits and long would be your only 32-bit integer). So my guess is that it'll be 64-bit on IA-64. Also, sizeof(void *) will be 8, simply because the IA-64 has a 64-bit address space.

Never, ever, ever make assumptions about the size of data types. Always use sizeof(int) or sizeof(void *) in your calculations. It won't slow your program down, since the compiler replaces it with the appropriate value at compile time. Plus, it adds to the readability of your code (i.e. no "magic numbers").


War Worlds - A 3D Real-Time Strategy game in development.

Guest Anonymous Poster
'long long' is not a GCC extension. It's part of the ANSI C specification. Visual C++'s failure to support it is one of the major reasons that VC isn't an ANSI compiler.

Not true. I'm pretty sure long long is not ANSI C yet; I'm positive it wasn't when MSVC 6.0 was released. And MSVC IS an ANSI compiler. It uses some Microsoft extensions, which make it not necessarily compatible with all ANSI compilers right out of the box, but it supports all ANSI data types. If you use the /Za switch on the compiler, it becomes a true ANSI C compiler and the Microsoft extensions generate errors or warnings.

Seeya
Krippy

IA-64 was a bad example, I admit. However, the principle I was trying to demonstrate is correct: we must never make assumptions, as Dean Harding reinforced.
