Archived

This topic is now archived and is closed to further replies.

steveharper101

Signed/Unsigned Variable types (C Question)


Recommended Posts

What's the difference between signed and unsigned datatypes? i.e. what's the difference between:

char s;
unsigned char s;
signed char s;

And:

int s;
unsigned int s;
signed int s;

I understand that an unsigned char can store a value from 0 to 255. Is this the space that's needed to hold a single character? And does the amount that an unsigned char can hold increase when you create an array?

Thanks a lot

~StevE~

An unsigned char can hold values from 0 to 255; a signed char can hold values from -128 to 127.

A reason why a char holds less than an int is that a char is made up of 8 bits while an integer is often 16 or 32 bits. This can vary from compiler to compiler. For each bit we have, there are 2^bits possible values. For a char we have 8 bits, so 2^8 = 256 values, which gives the range 0-255.

If we have a signed char, then one of our bits needs to be the sign bit. Now we have 1 sign bit and 7 value bits, so there are 2^7 = 128 values that are zero or positive, and 128 values that are negative. If the sign bit is 1, the value is negative; if the sign bit is 0, we treat the value in the 7 bits as positive. (In two's complement, the usual representation, the negative range runs from -1 down to -128.) I may be off on a little of the math here, but the general idea is there.

Also I am pretty sure that when you just say:

char s
or
int s

you are by default creating unsigned variables of the type . . . but I could be wrong . . .

Edited by - taybrin on September 5, 2001 1:44:58 PM

quote:
Original post by taybrin
A reason why a char holds less than an int is because a char is made up of 8 bits and an integer 16 bits. This could vary from language to language.


I know you put the size disclaimer there, but here's a small correction: it actually varies depending on the compiler, platform, operating system, and other factors. That said, every 32-bit compiler I've seen has int set to be 32 bits, not 16.

[Resist Windows XP's Invasive Production Activation Technology!]

Thanks for the heads up. I was thinking of Java, where

short, int, long --> 16, 32, 64 . . . I believe

my mistake

that long above should be double

Edited by - taybrin on September 5, 2001 4:24:03 PM

Most often (always?), if you don't specify 'signed' or 'unsigned' the variable will be signed by default (not unsigned, as someone said above).

#include <stdio.h>

int main(void)
{
    unsigned int ui = -1; /* -1 wraps around to UINT_MAX, the largest unsigned value */
    int i = -1;

    if ( ui < 0 ) printf("ui less zero\n"); /* never prints: an unsigned value is never below zero */
    if ( i < 0 ) printf("i less zero\n");   /* prints */
    return 0;
}

can lead to problems in your code (most compilers will warn that the first comparison is always false)

Guest Anonymous Poster
quote:
Original post by taybrin
Thanks for the heads up. I was thinking in Java,

short, int, long --> 16 32 64 . . . i believe

my mistake

that long above should be double

Edited by - taybrin on September 5, 2001 4:24:03 PM


int is an alias for long, depending on the compiler.

so:
short = 16
int = 32
long = 32
__int64 = 64

Yep, that's how it is on my compiler. Just to let you know, if you are using MSVC++ you can use __int8, __int16, __int32, and __int64 for more portable code.

Abs

Umm, portable between different versions of Windows or what? (Win32, Win64).

__int8, __int16, etc. aren't standard types, they are Microsoft extensions, and therefore shouldn't be used in code that is supposed to be portable.

The best way to ensure that you are using types of a particular size is to use typedefs and conditional compilation for different platforms/compilers.
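A minimal sketch of that typedef approach; the names int8 through int64 are made up for illustration, and the non-Microsoft branch assumes a typical modern compiler where short, int, and long long are 16, 32, and 64 bits:

```c
/* Pick fixed-size typedefs per compiler, so the rest of the
 * code can use one set of names everywhere. */
#if defined(_MSC_VER)
typedef __int8  int8;
typedef __int16 int16;
typedef __int32 int32;
typedef __int64 int64;
#else
/* Assumes a typical 32/64-bit Unix compiler; adjust per platform. */
typedef signed char int8;
typedef short       int16;
typedef int         int32;
typedef long long   int64;
#endif
```

Compilers that support C99 also ship <stdint.h>, whose int8_t through int64_t types solve the same problem without hand-rolled typedefs.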

Umm, they are actually synonyms for the ANSI standard types, so they ensure you are using the right size. Or to quote the SDK: "useful for writing portable code that behaves identically across multiple platforms". Anyway, the only reason I bring it up is that I recently had occasion to use the __int64 type while writing the timing code for my game (the high-performance counters all return 64-bit integers). I happened to notice in the SDK that you can use this syntax for all sizes. I have only used __int64 myself, but I can see how the others might be helpful, so I thought I would point them out. Look it up yourself if you don't believe me.

Oh yeah, one last thing: I don't think littering your code with #ifdefs is the best way to go for cross-platform compilation, but that's for another thread.

Abs
