As has also been mentioned above, there is a risk that you are falling into a common trap here. Data (or protocols) can be little-endian or big-endian, but you should never have to worry about the byte order of the machine your code is currently running on.
Please read The byte order fallacy, which I think summarizes it all perfectly!
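For anyone who hasn't read the article yet, its core technique is to express the *data's* byte order in arithmetic, so the same code works on any host. A minimal sketch (the function name is mine, not from the article):

```c
#include <stdint.h>

/* Reads a 32-bit little-endian value from a byte buffer.
   Works identically on little- and big-endian hosts, because it
   never asks what the host's byte order is -- the shifts encode
   the file format's byte order, not the machine's. */
static uint32_t read_u32_le(const uint8_t *p)
{
    return (uint32_t)p[0]
         | (uint32_t)p[1] << 8
         | (uint32_t)p[2] << 16
         | (uint32_t)p[3] << 24;
}
```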
I was also going to link to that article, but you beat me to it. For the use-case being described here, I think it's pretty accurate. However, as much respect as I have for all the Bell Labs folks, there is more to the story. In an ideal world where computers are infinitely fast, reading and writing every multi-byte value one byte at a time may be great. For those of us making games that need to load 100s of MBs with minimal delay -- not so much. I'd love to ignore the endianness of each platform we support and serialize all data in one consistent format. But then we'd be iterating over all of that data at load time, instead of loading up a block of memory and doing pointer fixup.
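To make the pointer-fixup idea concrete, here's a hedged sketch of the general pattern (the struct and names are hypothetical, not from any particular engine): the file stores offsets relative to the start of the blob, and a single cheap pass after loading turns each offset into a real pointer, with no per-field deserialization.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical on-disk record. The file stores an offset from the
   start of the loaded blob instead of a pointer; after the whole
   blob is read into memory, fixup converts the offset to a pointer. */
typedef struct {
    uint32_t name_offset;  /* byte offset into the blob, as written to disk */
    const char *name;      /* valid only after fixup_record() runs */
} Record;

static void fixup_record(Record *r, uint8_t *blob_base)
{
    r->name = (const char *)(blob_base + r->name_offset);
}
```

This only works when the file was written in the target platform's byte order in the first place, which is exactly why the compile-time endianness declaration below matters.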
All of that said, our solution is exactly as frob has indicated: we just #define it at compile time. We already need to declare lots of things about each platform we support; endianness is simply one more item on the list. It's really not hard. The #if/#elif chain for each of those decisions always looks like:
#if PLATFORM_WIN64
#define LITTLE_ENDIAN
#elif PLATFORM_PS3
#define BIG_ENDIAN
#else
#error Need to specify endianness for new platform!
#endif
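Once that define exists, load-time conversion helpers can be selected at compile time rather than with a runtime endianness check. A minimal sketch of what that might look like, assuming the on-disk format is always little-endian (the macro and function names here are mine, chosen to avoid clashing with the LITTLE_ENDIAN/BIG_ENDIAN macros some system headers already define):

```c
#include <stdint.h>

/* Compile-time platform selection, mirroring the #if chain above.
   PLATFORM_PS3 / PLATFORM_WIN64 are the hypothetical platform
   defines from that chain. */
#if defined(PLATFORM_PS3)
#define HOST_IS_BIG_ENDIAN 1
#else
#define HOST_IS_BIG_ENDIAN 0
#endif

/* Converts a 32-bit value from the (little-endian) file format to
   host order. On little-endian hosts this compiles to a no-op; on
   big-endian hosts it byte-swaps. */
static uint32_t from_file_u32(uint32_t v)
{
#if HOST_IS_BIG_ENDIAN
    return (v >> 24)
         | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u)
         | (v << 24);
#else
    return v;  /* host order already matches the file format */
#endif
}
```

In practice you'd wrap all the fixed-width sizes (16/32/64-bit) the same way, and most compilers will recognize the swap pattern and emit a single bswap-style instruction.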