


  1. SiCrane

    Writing Efficient Endian-Independent Code in C++

    Well, while the difference between arithmetic right shift and whatever other behavior an implementation chooses for signed right shift can matter in some situations, it doesn't really come up in this case, because whenever you right shift for serialization you don't actually care what gets filled in at those top bits. If you want to worry about implementation-specific details regarding signed/unsigned, I'd start with the fact that casting an integer to a smaller signed type has an implementation-defined value if its value can't be represented in that smaller type, which is not the case for a smaller unsigned type. For example, casting the uint16_t 0xff00 to uint8_t will always give you 0x00. Cast it to a signed 8-bit type, and you can get any value the implementation feels like. Fortunately for the intended audience, while platforms exist where this can be an issue, you're unlikely to run into them when doing application programming for any gaming platform I can think of. I suppose you could also worry about the weirdness that happens when converting negative numbers between different integer sizes if the platform doesn't use two's complement, but that's probably even more academic for the target audience. There's also the fact that left shifting a negative signed integer is explicitly undefined behavior under C++11. That's right, undefined, not implementation defined. Again, in practice, on the platforms that game programmers work with, it's not an issue and works more or less like you expect.

    However, what I was thinking about is (mostly) platform independent: sign extension when signed bytes are converted to larger integer types. So if you're deserializing two bytes a and b into a 16-bit number, the straightforward way to combine them is either (a << 8) + b or (a << 8) | b. If a contains the bit pattern 0x01 and b contains the bit pattern 0xff, then you're going to get the bit pattern 0x01ff out of either computation with unsigned types. With a and b as signed chars, you'll probably get 0x00ff out of the addition version and 0xffff out of the bit-or version (the "probably" referring to the fact that I'm assuming two's complement).
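    The sign-extension difference described above can be sketched like this (a minimal example, assuming a two's complement platform as the post does):

    ```cpp
    #include <cassert>
    #include <cstdint>

    int main() {
        // The byte pair 0x01, 0xff, first as unsigned bytes, then as signed ones.
        unsigned char ua = 0x01, ub = 0xff;
        signed char   sa = 0x01;
        signed char   sb = static_cast<signed char>(0xff); // -1 on two's complement

        // Unsigned bytes: both combinations yield 0x01ff.
        assert(static_cast<uint16_t>((ua << 8) + ub) == 0x01ff);
        assert(static_cast<uint16_t>((ua << 8) | ub) == 0x01ff);

        // Signed bytes: sb is promoted to the int -1 (all bits set) before the
        // operation. Addition subtracts 1 from 0x0100; bit-or smears 1s over
        // the high byte.
        assert(static_cast<uint16_t>((sa << 8) + sb) == 0x00ff);
        assert(static_cast<uint16_t>((sa << 8) | sb) == 0xffff);
        return 0;
    }
    ```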
  2. SiCrane

    Writing Efficient Endian-Independent Code in C++

    One thing I would emphasize is the importance of using unsigned types when doing these kinds of integer operations. In particular, deserializing from a signed byte array may give weird results when any of the bytes are negative.
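    A minimal sketch of the kind of helper this advice suggests (the function name is mine, not from the article): reading a big-endian 16-bit value through unsigned bytes, so that no sign extension can corrupt the high byte.

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Hypothetical helper: combine two bytes as a big-endian uint16_t.
    // Each unsigned char widens to a non-negative int before the shift,
    // so negative byte values can never sign-extend into the result.
    uint16_t read_be16(const unsigned char* p) {
        return static_cast<uint16_t>((p[0] << 8) | p[1]);
    }

    int main() {
        unsigned char buf[] = { 0x01, 0xff };
        assert(read_be16(buf) == 0x01ff); // not 0xffff
        return 0;
    }
    ```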