Okay first off I found this:
bool endianness() {
    int i = 1;
    char *ptr;
    ptr = (char*) &i;
    return (*ptr);   // low-address byte is 1 => little-endian, 0 => big-endian
}
I like the way this is done; it's clever, quick, and gets the job done. Except it only really works if an int is at least twice the size of a char. As I understand it, the standard doesn't fix the sizes of these types; each 'larger' type only has to be at least as big as the next smaller one. Since endianness is architecture-specific, and type sizes also vary between architectures, this seems like an unsafe assumption.
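One idea I had (just a sketch, assuming a C++11 compiler so static_assert is available, and the function name is only a placeholder) is to keep the trick but make it refuse to compile on any platform where int isn't actually wider than a char, so it fails loudly instead of silently returning nonsense:
// Same trick as above, but guarded: if int occupies only one char,
// this won't compile rather than give a meaningless answer.
bool endianness_checked() {
    static_assert(sizeof(int) > 1, "int is not wider than char; this test cannot work here");
    int i = 1;
    char *ptr = (char *) &i;
    return *ptr != 0;   // non-zero low-address byte => little-endian
}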
There's a lot of garbage information out there on endianness but from what I've gathered (and please correct me if I'm wrong):
char isn't guaranteed to be 8 bits; it seems to usually be the smallest unit the processor can address, which on some architectures is 16 bits
int can be the same size as char on some architectures, which would break this code (see the little probe after this list)
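To at least see what a given machine does, I've been running a tiny probe like this (just a sketch; it only reports the platform it runs on, it doesn't settle anything in general):
#include <climits>
#include <cstdio>

int main() {
    // CHAR_BIT is the number of bits in a char (the C++ notion of a "byte")
    std::printf("bits per char: %d\n", CHAR_BIT);
    std::printf("sizeof(short): %zu chars\n", sizeof(short));
    std::printf("sizeof(int):   %zu chars\n", sizeof(int));
    return 0;
}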
I also have one question: is endianness byte ordering always based on 8-bit bytes, or would it be ordered in 16-bit units on architectures with 16-bit chars?
I'm trying to accomplish something like this (a rough sketch of what I have in mind):
#include <cstdint>
#include <cstring>

// NOTE: some systems define LITTLE_ENDIAN / BIG_ENDIAN as macros in <endian.h>,
// so these names might need changing in real code.
enum ENDIANNESS { UNKNOWN, LITTLE_ENDIAN, BIG_ENDIAN };

ENDIANNESS getEndian()
{
    ENDIANNESS e = UNKNOWN;
    std::uint16_t value = 1;               // a 2-byte numeric data type set to 1
    unsigned char first;
    std::memcpy(&first, &value, 1);        // grab the byte at the lowest address
    if (first == 0)
    {
        e = BIG_ENDIAN;                    // most significant byte stored first
    }
    else if (first == 1)
    {
        e = LITTLE_ENDIAN;                 // least significant byte stored first
    }
    return e;
}
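And then I'd call it from something like this (assuming the enum and function above are in scope; the output is just for illustration):
#include <cstdio>

int main() {
    switch (getEndian()) {
        case LITTLE_ENDIAN: std::puts("little-endian"); break;
        case BIG_ENDIAN:    std::puts("big-endian");    break;
        default:            std::puts("unknown");       break;
    }
    return 0;
}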
I'm just not sure of the best way to handle this, especially since I'm unclear on whether byte ordering is always in terms of 8-bit bytes (seems unlikely) or in terms of whatever size a char is on the system running the code (seems more likely).
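The closest I've come to answering that myself is dumping every char-sized unit of a wider integer and looking at the order; this is only a sketch (it assumes std::uint32_t exists on the platform), but it at least shows the full storage order in whatever unit the machine actually uses:
#include <cstdint>
#include <cstdio>

int main() {
    // Store a recognisable pattern and print each char-sized unit in
    // address order; the printed order is the storage order.
    std::uint32_t value = 0x01020304u;
    const unsigned char *p = (const unsigned char *) &value;
    for (unsigned i = 0; i < sizeof(value); ++i)
        std::printf("offset %u: 0x%02x\n", i, (unsigned) p[i]);
    return 0;
}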
Any help is greatly appreciated :)