Sorry for not making that clear. In some ways I am living in the future of the language and compilers, not the present.
Nearly all of today's x64 "64-bit compilers" are actually 32-bit compilers with 64-bit extensions bolted on. Fundamentally they are designed around 32-bit architecture, 32-bit integers, and 32-bit optimizations.
Some systems, notably a large number of ARM compilers, have already transitioned to fully 64-bit designs. A growing number of x64 compilers are following. Among the x64 toolchains, gcc in particular has been pushing for fully 64-bit designs rather than "32-bit with extensions".
Some of us work on cross-platform games that target both 32-bit and (true) 64-bit systems.
Again, apologies if that was unclear. The C++11 standard adds long long, an integer type of at least 64 bits, as a standard integer type. For some code that is a terrifying chasm to be crossed. For some code that is like having shackles removed. Relatively few programmers are in the latter camp.
The reason I posted the example was to show valid code that produces one result under a C++03 compiler and a different result under a C++11 compiler.
The addition of 64-bit integer types and the changes to enumerations are two cases where the behavior is not "fully backwards compatible."
Here's the relevant bit of the old standard:
That doesn't sound right. 0x89abcdef is an int literal (not unsigned int as you suggest in the myEnum comment) and should always be whatever an int is with your compiler (which may be 32 or 64 bit, but that isn't new).
On a compiler where int is 32 bits, 0x89abcdef would not fit in an int but would fit in an unsigned int, so as a literal 0x89abcdef would be an unsigned int. The rules in C++11 are substantially the same except that the list grows to int, unsigned int, long int, unsigned long int, long long int, unsigned long long int. So for a compiler where int is 32 bits, 0x89abcdef is still going to be an unsigned int. In the case where int is 16 bits and long is 32 bits, under both standards 0x89abcdef would be an unsigned long. In either of these cases 0x89abcdef would be an unsigned 32-bit integer. (More exotic architectures where int and long are different bit sizes are left as exercises for the reader.) So frob's comment that the type of the literal could be something other than u32 under C++11 is incorrect. (Again, assuming typical bit sizes for integers. The relevant bits of the standard are 2.13 in the old version and 2.14 in the current version if anyone wants to follow along.)
The type of an integer literal depends on its form, value, and suffix.... If it is octal or hexadecimal and has no suffix, it has the first of these types in which its value can be represented: int, unsigned int, long int, unsigned long int.
Correct about the assumption of systems with 32-bit int. The cross-platform game targets a game system that happens not to follow the 32-bit int used on PC and current-gen consoles. The assumption is increasingly invalid.
As for when enums are mixed in, there are many fun and exciting rules involved since C++11 introduced "fixed" enums. A fixed enum has its underlying type specified explicitly. If no type is given, several rules determine the underlying type. But something interesting happens when you specify values directly in a non-fixed enum. Under 7.2, an enumerator can have a different type than the enumeration itself: "If the underlying type is not fixed, the type of each enumerator is the type of its initializing value". The actual underlying type is implementation defined. There are a few more details, but a careful reading (and, if you follow standards notes, a longstanding frustration) shows that it allows a few fun quirks, such as an enumeration's underlying type being signed while an individual enumerator is unsigned. That is why the example of an enumerator with an initializing value of 0x89abcdef becomes interesting. Many programmers do not think about implementation-defined behavior.
Now that the standard permits 64-bit types, promotions, and conversions, code is expected to conform to it. Much code will behave badly when moving forward to 64-bit compilers (which I had the joy of experiencing), just as much code was similarly troublesome in the 16-bit to 32-bit migration.
The key difference is that the old standard was limited to 32-bit promotions and conversions. The new standard adds 64-bit types as standard integer types, along with the corresponding promotion and conversion rules. These rules are very nearly, but not completely, backwards compatible. They have the potential to break your code when you transition to C++11.
However, what frob is probably referring to is the effective type after integral promotions. When a binary operator is applied to two operands of integral type, both sides are promoted to a common type before being operated on. The rules are sufficiently complex that I'm not going to summarize them here, but under the old standard they were pretty predictable, while under the current standard they are a mess of implementation-specific choices. (If you want to follow along, the relevant parts are section 5 paragraph 9 and section 4.5 in both standards, plus 4.13 in the current standard.) frob makes some assumptions about bit sizes of integer types that the standards don't mandate, but is otherwise substantially correct when talking about the effective type of the literal rather than the actual type. (I'm not sure that his characterization of foo == MAGIC_NUMBER possibly being s32 is correct, but I can see u32, s64, and u64 all happening.)
That is the second thing I was referring to, yes.
The same preferences apply, first signed, then unsigned. On a compiler where int is 64 bits, s32 op u32 can result in both operands being promoted to s64, since it can represent all the values of both source types.
These are just some of the subtle little gotchas that moving to a 64-bit enabled standard provides.
If you are using what amounts to a 32-bit compiler with 64-bit extensions, that code can generate very different implementation defined results from a true 64-bit compiler. Both results are legal.
The point of all this, if you look back, is that C++11 is not fully backwards compatible with C++03, as an earlier post in the thread questioned.
Some code that worked one way under C++03 can work slightly differently under C++11 in ways that can break your program. Some behavior (a small amount) that was specified in C++03 now falls under slightly different specifications. The larger your programs, the greater the risk of breakage. You cannot replace a C++03 compiler with a C++11 compiler and blindly expect everything to just work.