
-1 < 6 ?? comparing negatives with positives = weird

This topic is 4593 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

I had this in my code:
unsigned size = 6;
for (int i = -1; i < size; ++i)
    // some stuff

The for loop didn't run at all; it behaved as though i were >= size. As soon as i was changed to 0, however, it worked fine. How come?

Guest Anonymous Poster
i is promoted to unsigned. I guess it becomes something like 2^31. If size were signed, it should work.

I'm not sure. I would have thought the compiler would know not to promote that way, given the inaccuracy, and would instead "demote" size to signed. Unless the anonymous poster is right, I can't see why it wouldn't work. One solution is to use all signed variables, though I'm sure the original was written that way out of curiosity.

I still don't really know what was going on, so for that reason I just avoid using unsigned variables in comparisons. The only ones I use are those required by an API such as OpenGL or the Win32 API; otherwise my class members and game variables are all signed.

Guest Anonymous Poster
I tried it with MSVC.NET, and assigning -1 to an unsigned int without a cast results in 2^32. So I believe that's what was going on.

I think it's undefined behaviour. The same thing comes up if you try to do
for (unsigned int i = 10; i < 11; --i)
in MSVC 6. It relies on integer overflow having a specific behaviour.

Quote:
Original post by Anonymous Poster
I tried with MSVC.NET and assigning -1 to unsigned int without type-casting results in 2^32. So I believe that was going on.

I'm sure you meant 2^32-1 since there's no way to store 2^32 in a 32 bit variable ;)

I'm 99.99% sure that your compiler is converting i to an unsigned int. Just change the other variable to a plain int and you should be fine. By the way, you almost never need unsigned ints; signed ints generally hold enough.
Quote:

I'm sure you meant 2^32-1 since there's no way to store 2^32 in a 32 bit variable ;)

Maybe he has a screwy 64-bit CPU that wraps ints around to 2^32 for no apparent reason :P


