[quote]
int is 32 bits.
char is 8 bits.
short is 16.
[/quote]
Actually, char is 8 bits in 99% of all environments, and from there the standard only guarantees sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long), with minimum widths of 8, 16, 16 and 32 bits respectively. You will most likely find short to be 16 bits and long to be 32 bits (though it's 64 bits on LP64 platforms like 64-bit Linux), while int is the "natural" size for your platform: 16 bits on some old 16-bit machines, and 32 bits on practically every current 32-bit and 64-bit platform. Never make assumptions about the size of types; that's why there's a stdint.h header with all the int8_t typedefs (and a ton of #ifdefs).
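If your code does depend on a particular width, make that assumption explicit instead of silent. A minimal sketch (just an illustration, not from any of the posts above):

#include <cstdint>
#include <climits>
#include <cstdio>

// fail at compile time instead of misbehaving at runtime
static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
static_assert(sizeof(int) >= 4, "this code assumes int is at least 32 bits");

int main()
{
    std::printf("short=%zu int=%zu long=%zu bytes\n",
                sizeof(short), sizeof(int), sizeof(long));
}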
[quote name='Nanoha']
unsigned int colour = 0;
colour |= (red << 24);
colour |= (green << 16);
colour |= (blue << 8);
colour |= alpha;
[/quote]
You realize that you can let the compiler do that kind of thing? (Including a somewhat silly use of a union for interchangeable access.)
union Color
{
    uint32_t color;
    struct    // anonymous struct: standard in C11, a common extension in C++
    {
        // bit-field layout within the word is implementation-defined,
        // so don't rely on a particular channel order across compilers
        uint32_t red   : 8;
        uint32_t green : 8;
        uint32_t blue  : 8;
        uint32_t alpha : 8;
    };
};
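Used like this (Color is just the name I gave the union above; note that writing one member and then reading the other is type punning, which is fine in C but technically undefined in C++, even though every major compiler supports it):

Color c;
c.color = 0;      // clear all four channels at once
c.red   = 0xff;   // the compiler generates the shift and mask for you
c.alpha = 0x80;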
Though I honestly see little point in doing that over using one uint8_t per channel in the first place.
While I would stick with vector<bool>, it wouldn't be that hard to use an array and get the bits yourself. Depending on the type used (unsigned int might be the most efficient), all you have to do is take your bit index and derive the word and the bit within that word: i / (8 * sizeof(unsigned)) and i % (8 * sizeof(unsigned)). Note that sizeof gives bytes, not bits, hence the factor of 8 (or CHAR_BIT). Somebody will point out that "it's teh fasta using bit fiddling", ie i >> 5 and i & 31 for 32-bit words, but I haven't seen a compiler in a while that wasn't smart enough to turn / and % into bit operations for power-of-two operands.
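For reference, a minimal sketch of such a bit array (BitArray and its member names are made up for illustration; this is not a drop-in replacement for vector<bool>):

#include <cstdint>
#include <cstddef>
#include <vector>

// one bit per element, packed into 32-bit words
class BitArray
{
public:
    explicit BitArray(std::size_t bits)
        : words((bits + 31) / 32, 0) {}

    void set(std::size_t i)        { words[i / 32] |=  (std::uint32_t(1) << (i % 32)); }
    void clear(std::size_t i)      { words[i / 32] &= ~(std::uint32_t(1) << (i % 32)); }
    bool test(std::size_t i) const { return (words[i / 32] >> (i % 32)) & 1; }

private:
    std::vector<std::uint32_t> words;  // /32 and %32 compile down to shifts and masks
};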