Archived

This topic is now archived and is closed to further replies.

Signed/Unsigned Variable types (C Question)


Recommended Posts

What's the difference between signed and unsigned datatypes, i.e. what's the difference between:

char s; unsigned char s; signed char s;

And:

int s; unsigned int s; signed int s;

I understand that an unsigned char can store a value from 0 to 256. Is this the space that's needed to hold a single character? And does the amount that an unsigned char can hold increase when you create an array?

Thanks a lot ~StevE~

An unsigned char can hold values from 0 to 255; a signed char can hold values from -128 to 127. A reason why a char holds less than an int is because a char is made up of 8 bits and an integer 16 bits. This could vary from language to language. For each bit we have there are 2^bits possible values: a char has 8 bits, so 2^8 = 256 values, giving the range 0-255. If we have a signed char, one of those bits serves as the sign bit, leaving 7 value bits. That splits the 256 values into 128 non-negative ones (0 to 127) and 128 negative ones (-128 to -1). If the sign bit is 0, the value in the 7 bits is read as positive; if it is 1, the value is negative (on virtually all machines the negative values are encoded in two's complement, which is how -128 fits). I may be off on a little of the math here, but the general idea is there.

Also I am pretty sure that when you just say:

char s
or
int s

you are by default creating unsigned variables of the type . . . but I could be wrong . . .

Edited by - taybrin on September 5, 2001 1:44:58 PM

quote:
Original post by taybrin
A reason why a char holds less than an int is because a char is made up of 8 bits and an integer 16 bits. This could vary from language to language.


I know you put the size disclaimer there, but here's a small correction: it actually varies depending on the compiler, platform, operating system, and other factors. That said, every 32-bit compiler I've seen has an int set to be 32 bits, not 16.

[Resist Windows XP's Invasive Product Activation Technology!]

Thanks for the heads up. I was thinking in Java,

short, int, long --> 16 32 64 . . . I believe

my mistake

that long above should be double

Edited by - taybrin on September 5, 2001 4:24:03 PM

Most often (always?), if you don't specify 'signed' or 'unsigned' the variable will be signed by default (not unsigned, as someone said above).

#include <stdio.h>

int main(void)
{
    unsigned int ui = -1;  /* the -1 wraps around to UINT_MAX (4294967295) */
    int i = -1;

    if ( ui < 0 ) printf("ui less zero\n");  /* never prints: unsigned is never negative */
    if ( i < 0 ) printf("i less zero\n");    /* prints */

    return 0;
}

can lead to problems in your code: assigning -1 to an unsigned variable doesn't store a negative number, it wraps around to the largest value the type can hold.

Guest Anonymous Poster
quote:
Original post by taybrin
Thanks for the heads up. I was thinking in Java,

short, int, long --> 16 32 64 . . . I believe

my mistake

that long above should be double

Edited by - taybrin on September 5, 2001 4:24:03 PM


int is the same size as long on many compilers.

so:
short = 16
int = 32
long = 32
__int64 = 64 (MSVC's 64-bit type)

Yep, that's how it is on my compiler. Just to let you know, if you are using MSVC++ you can use __int8, __int16, __int32, and __int64 for more portable code.

Abs

Umm, portable between different versions of Windows or what? (Win32, Win64)

__int8, __int16, etc. aren't standard types, they are Microsoft extensions, and therefore shouldn't be used in code that is supposed to be portable.

The best way to ensure that you are using types of a particular size is to use typedefs and conditional compilation for different platforms/compilers.

Umm, they are actually synonyms for ANSI standards so they ensure you are using the right type. Or to quote the SDK "useful for writing portable code that behaves identically across multiple platforms". Anyway, the only reason I bring it up is because I have had occasion recently to use the __int64 type while writing the timing code for my game (the high performance counters all return 64 bit integers). I happened to notice in the SDK that you can use this syntax for all types. I have only used __int64 myself, but I can see how the others might be helpful so I thought I would point them out. Look it up yourself if you don't believe me.

Oh yeah, one last thing, I don't think littering your code with #ifdef's is the best way to go for cross-platform compilations, but that's for another thread.

Abs

Thanks for the posts guys!!

So an unsigned integer has 2^32 = 4294967296 possible values, from 0 to 4294967295.

A signed integer also has 2^32 possible values, split around zero: it can store values from -2147483648 to 2147483647, where one of the bits is the sign bit.

These values can vary from compiler to compiler.

And short ints are half these values, as they are only 16 bits long.

Is this correct?

Thanks a lot for the previous posts!!

~StevE~


Edited by - steveharper101 on September 6, 2001 6:41:36 PM

>>So an unsigned integer has 2^32 = 4294967296 possible values, from 0 to 4294967295.<<

You can't always be certain, e.g. 64-bit computers are becoming more common, and an int there may be 64 bits wide, giving 2^64 possible values.

quote:
Original post by steveharper101
And short ints are half these values, as they are only 16 bits long

Is this correct?



16-bit ints have half the number of bits, and so the number of possible values is the square root of the number of values of a 32-bit int.

int: 2^32 = ~4,000,000,000
short: 2^16 = ~65,000


codeka.com - Just click it.

quote:
Original post by Absolution
Umm, they are actually synonyms for ANSI standards so they ensure you are using the right type. Or to quote the SDK "useful for writing portable code that behaves identically across multiple platforms". Anyway, the only reason I bring it up is because I have had occasion recently to use the __int64 type while writing the timing code for my game (the high performance counters all return 64 bit integers). I happened to notice in the SDK that you can use this syntax for all types. I have only used __int64 myself, but I can see how the others might be helpful so I thought I would point them out. Look it up yourself if you don't believe me.

Oh yeah, one last thing, I don't think littering your code with #ifdef's is the best way to go for cross-platform compilations, but that's for another thread.

Abs

Yes, synonyms for ANSI types, not ANSI types per se. My point is that those synonyms are defined by Microsoft and not in the standard (at least this is how it used to be, if they are currently supported by other compilers, or perhaps even the standard, then please correct me).
I believe that what the SDK docs mean is that using the __int8/16/32/64 types will ensure portability between different windows platforms (x86/alpha, win32/win64), not portability in the broader sense.

Currently the "most portable" way to write portable code is to use #ifdefs and #defines for some things, the reason being that there are many (small) incompatibilities in the implementation of different compilers/platforms/run-time libraries.

A small example:
In MSVC the 64-bit integer type is called "__int64"; in GCC it's called "long long" (on a 32-bit platform).



Edited by - Dactylos on September 17, 2001 4:34:52 PM
