Say you have a char in binary:
1111 1111
Now when this is upcast to an int do we get zeros appended to the left?:
0000 0000 1111 1111
Is that right? So now we can left-shift 8 more places?
I've been trying to combine the four chars representing my red, green, blue, and alpha values into a 32-bit number (I'm using long long). For some reason, even when I cast the chars up to long long before shifting them, it doesn't work properly, but if I declare the values as long long instead of char from the start, it works fine. I'm assuming the cast isn't doing what I thought it was. Can anyone offer any insight?
binary, bitwise etc
Quote:Original post by nomichi
Now when this is upcast to an int do we get zeros appended to the left?:
Depends on whether you are working with signed or unsigned values. Unsigned values will be 0-extended, while signed values will be sign-extended.
Quote: Original post by Fruny
Quote: Original post by nomichi
Now when this is upcast to an int do we get zeros appended to the left?:
Depends on whether you are working with signed or unsigned values. Unsigned values will be 0-extended, while signed values will be sign-extended.
ok I tested with an unsigned char and it converts correctly. I'm not sure what sign-extended means though. I guess when working with bits I should always use unsigned values?
I managed to get my program working without using unsigned. It was a pain but here it is. I'm gonna go back and see how much easier I can do it with unsigned chars.
```cpp
#include <ctime>
#include <cstdlib>
#include <iostream>
#include <iomanip>
using namespace std;

typedef long long int32;

void PackedColor( int32& val )
{
    char r = rand() % 256;
    char g = rand() % 256;
    char b = rand() % 256;
    char a = rand() % 256;

    cout << "Generated RGBA numbers: " << endl;
    cout << "R:" << (r & 0xFF) << ' '
         << "G:" << (g & 0xFF) << ' '
         << "B:" << (b & 0xFF) << ' '
         << "A:" << (a & 0xFF) << endl << endl;

    val = ( 0xFFFFFFFF & (r & 0xFF) << 24 |
            0xFFFFFFFF & (g & 0xFF) << 16 |
            0xFFFFFFFF & (b & 0xFF) << 8  |
            0xFFFFFFFF & (a & 0xFF) );
}

void OutputBin( int32& data )
{
    for ( int i = 31; i >= 0; i-- )
    {
        if ( data & (int32(1) << i) )
            cout << '1';
        else
            cout << '0';

        if ( i % 4 == 0 )
            cout << ' ';
    }
    cout << endl << endl;
}

int main()
{
    srand( time(0) );

    int32 rgba = 0; // 32-bit value to hold our packed RGBA

    cout << "Verify our 32bit datatype is empty: " << endl;
    OutputBin(rgba);

    PackedColor(rgba);
    cout << "Color packed into 32bit datatype: " << endl;
    OutputBin(rgba);

    // unpack our color values into separate 8-bit chars
    char red   = ((rgba >> 24) & 0xFF);
    char green = ((rgba >> 16) & 0xFF);
    char blue  = ((rgba >> 8) & 0xFF);
    char alpha = (rgba & 0xFF);

    cout << "Values unpacked from binary: " << endl;
    cout << "R:" << (red & 0xFF) << ' '
         << "G:" << (green & 0xFF) << ' '
         << "B:" << (blue & 0xFF) << ' '
         << "A:" << (alpha & 0xFF) << endl << endl;

    unsigned char test = 255;
    int32 test2 = test;
    cout << test2 << endl;

    return 0;
}
```
Another question I had while converting to the easier method...
Is this:
val = ( 0xFFFFFFFF & (r & 0xFF) << 24 |
0xFFFFFFFF & (g & 0xFF) << 16 |
0xFFFFFFFF & (b & 0xFF) << 8 |
0xFFFFFFFF & (a & 0xFF) );
faster than this?
val = static_cast<int32>(r) << 24 |
static_cast<int32>(g) << 16 |
static_cast<int32>(b) << 8 |
static_cast<int32>(a);
I'm assuming it is either faster or uses less memory? When you do a cast does it essentially make that variable take up the amount of memory that what you cast to would normally use? Even if it's temporary I would imagine all that overhead adds up in big projects.
Quote:Original post by nomichi
ok I tested with an unsigned char and it converts correctly. I'm not sure what sign-extended means though. I guess when working with bits I should always use unsigned values?
If the (signed) number is negative, i.e. if its sign bit is 1, the extra space will be filled with ones. This may or may not be what you need, depending on the actual operation you want to perform. Sometimes an unsigned value makes sense, sometimes only a signed value will do.
unsigned 00000000 → 0000000000000000   zero-extend
unsigned 11111111 → 0000000011111111   zero-extend
  signed 00000000 → 0000000000000000   sign-extend
  signed 11111111 → 1111111111111111   sign-extend
[Edited by - Fruny on May 10, 2006 1:00:25 AM]
Quote: Original post by Fruny
Quote: Original post by nomichi
ok I tested with an unsigned char and it converts correctly. I'm not sure what sign-extended means though. I guess when working with bits I should always use unsigned values?
If the (signed) number is negative, i.e. if its sign bit is 1, the extra space will be filled with ones. This may or may not be what you need, depending on the actual operation you want to perform. Sometimes an unsigned value makes sense, sometimes only a signed value will do.
unsigned 00000000 → 0000000000000000   zero-extend
unsigned 11111111 → 0000000011111111   zero-extend
  signed 00000000 → 0000000000000000   sign-extend
  signed 11111111 → 1111111111111111   sign-extend
Oh I see. That explains why the signed char converted OK sometimes. That sign bit must have been 0 in those cases. Thanks for the explanation. :)
This topic is closed to new replies.