MarkS

List of C++11 compliant compilers?


SiCrane    11839

I don't think anyone would argue that "printf" is part of the C language

I would. It's defined in section 7.21 of the current C standard, which is entitled "Programming languages - C", not "The C programming language and various bits not actually part of the language". Similarly, std::cout is defined in 27.4.2 of the current C++ standard.

MJP    19755

It would be nice if there was some real effort to get Clang on Windows. I'd be real tempted to switch to it from GCC considering what I've seen with error messages and C++11/14 support.

Yeah, I've used clang on non-Windows platforms and it's really great. I know one guy was maintaining a pre-compiled version of clang for Windows along with a VS add-in that could swap out MSVC++ for clang, but I don't know if he's still actively working on it. For the VS add-in to actually be workable, someone would have to figure out how to generate .PDBs so that the VS debugger would work with it, and that's probably a difficult task.

Ravyne    14300

 


OP, have you tried the VS2013 preview? It supports more of C++11 -- indeed all, or nearly all, of the stuff you'd actually care to use. Again, it's just a preview, but the real deal is just around the corner.

 

Far from it. Implicit move generation is pretty important, and ref qualifiers are very helpful for correctness. And no proper constexpr. And no user-defined literals. All delayed to post-VC2013 CTPs (so not for production code). The saddest part? On the far right (lowest priority/farthest in the future) there are still two C++98 bullet points. Admittedly not that important, but that standard is now 15 years old, Visual Studio still isn't there yet, and I suspect that C++11/14 will take just as long.

 

Visual Studio is lagging extremely far behind.

 

As for MinGW, I usually get my builds from http://sourceforge.net/projects/mingwbuilds/ -- those are very up to date.

 

 

I don't disagree that it's behind, and I've perhaps made the mistake of overstating what it supports -- it does lack features both important and not-so-important. The point I was pushing at is that C++11 adds a lot of stuff, and VS2013 will contain the majority of it by the numbers, and those numbers include the most important, most game-changing additions -- things like auto, lambdas, r-value references, move semantics, perfect forwarding, variadic templates, etc. -- that actually change the way we build software, rather than just making things nicer. Certainly there are still missing things that are critical for some people and nice-to-haves for a majority, but for a lot of what remains there are reasonable workarounds, or just making do with what we have now is not so terrible. Perhaps qualifying that as "nearly all of the stuff you'd actually care to use" was worded poorly -- if your goal is to use one of those unsupported features, or to have a play with all of C++11, then VS2013 won't deliver for you. It's far from perfect; I'm just saying they have most things, and that the things they have provide the most value to the most people in their work.
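To make that concrete, here's a minimal sketch (my own illustration, not from any codebase mentioned here; make_squares is a made-up function) showing a few of those headline features -- auto, a lambda, and move semantics -- all of which VS2013 shipped:

#include <algorithm>
#include <utility>
#include <vector>

std::vector<int> make_squares(int n)
{
    std::vector<int> v(n);
    int i = 0;
    // Lambda with an explicit return type (required in C++11 when the
    // body is more than a single return statement).
    std::generate(v.begin(), v.end(), [&i]() -> int { ++i; return i * i; });
    return v;  // Move semantics: returning by value no longer deep-copies.
}

int main()
{
    auto squares = make_squares(10);  // auto: deduced as std::vector<int>
    auto moved = std::move(squares);  // std::move: explicit ownership transfer
    return moved.empty() ? 1 : 0;
}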

 

You also have to keep in mind that Microsoft's compiler has a very different market in mind than open-source solutions like GCC or Clang -- the bulk of Microsoft's customers are corporations building and maintaining in-house tools. These corporations change at a glacial pace and are entirely entrenched in the Microsoft platform. Microsoft could ship a markedly different dialect of C++ (as some have argued it did in the past, say with VC++6), and while that would not be a good thing, it would matter little to the majority of their core customers. In fact, new standards changing behaviors in small ways is often more painful for those customers than any benefit it provides -- especially if they have critical code on life-support, where they haven't the budget or interest to update it wholesale, but it still needs to compile when they fix a bug. That's why MS maintains a number of switches that restore some of that non-standard behavior (like the for-loop index variable scoping from VC6).

 

People who rely on open-source compilers typically (or at least more so than the customers I describe above) keep their code moving along to current standards (sometimes as a rewrite, sometimes as a consequence of the software being open source), and when it's not kept up to date, they have the option of going back to whatever branch of GCC or Clang last worked for them, and they get the benefit of whatever bug fixes have been made to that branch even long after mainline development has moved on to the next point (or even whole-number) version. For Microsoft's compiler, you only get critical bug fixes until the support lifetime is up.

 

There's also the fact that many of the challenges in implementing new features in Microsoft's compiler stem from the compiler's legacy going back to pre-standard C++ (before C++98), which is certainly not true of Clang. It's true of GCC as well, but GCC has had the benefit of essentially rolling releases. Everyone who uses GCC upgrades because doing so only costs time, and it's typically very incremental. MS's releases are more spaced out, and often more drastic, so updating code for a new release is a huge deal in a corporate environment, and code may languish in old features and old ways of doing things for a very long time as a result.

 

It's absolutely reasonable to criticize Microsoft for being behind, but such criticism should be tempered by understanding the realities the product faces. They're doing what best serves their customers, most of whom are not standards wonks. There's a lot of work ahead to be sure, but I doubt it will be that long before C++11 and 14 support is fully conformant. Probably late 2015 at the earliest would be my guess, but not languishing for 5, 10, or 15 years like a couple parts of C99 have.

swiftcoder    18432

The saddest part? On the far right (lowest priority/farthest in the future) there are still two C++98 bullet points

I don't know which list you are looking at, but I'll give even odds that one of those C++98 features is template export.

Nobody (with the partial exception of Comeau) ever implemented template export - it's widely regarded as a feature that should never have been added to the standard in the first place.

I'd hardly call it fair to ding Microsoft for that one, since both GCC and Clang also fail that test...

MarkS    3502

...snip...


I think I get what you're saying, but I have a question. Isn't C++11 fully backwards compatible with C++98? Couldn't I write a fully C++98-conformant program and have it compile on a C++11 compiler? If I could, then there is no excuse not to fully implement C++11. If the implementation will not break existing code bases, then failure to implement C++11 boils down to some other reason.

My understanding is that the new standards add features, not break existing features.

SiCrane    11839

Plus, C++11 added some keywords, so C++03 code using them as identifiers would break -- though of those, the only one I've seen in real code is nullptr.
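For instance (a contrived sketch of my own, not code from anywhere in particular), this is legal C++03 but won't compile as C++11:

// Valid C++03: 'nullptr' is just an ordinary identifier here.
// Invalid C++11: 'nullptr' is now a reserved keyword, so this
// translation unit fails to compile.
int main()
{
    int nullptr = 0;
    return nullptr;
}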

frob    44911

 


Isn't C++11 fully backwards compatible with C++98?

Nope. There are a handful of C++98 features that were removed from C++11, and a few more areas where the semantics changed between C++98 and C++11.
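One example of a silent semantic change (a sketch I'm adding for illustration, not something cited above): destructors became implicitly noexcept in C++11, so a destructor that throws now terminates the program instead of propagating the exception.

#include <cstdio>

struct Throws
{
    // C++03: a caller could catch this exception.
    // C++11: destructors are implicitly noexcept, so throwing here
    // calls std::terminate() instead. Declaring the destructor
    // noexcept(false) restores the old behavior.
    ~Throws() { throw 42; }
};

int main()
{
    try {
        Throws t;
    } catch (int) {
        std::puts("caught (C++03 behavior, unreachable under C++11)");
    }
    return 0;
}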

 

 

There are also some subtle breaking changes. One that bit us was that integral promotions now go up to 64 bits. Add to that the fact that the signedness of some promotions is either undefined or implementation-defined when enums are involved.

 

Example:

enum myEnum {  // Variables of this type are s32
    smallnum      = 0x01234567,  // This is s32
    signedvalue   = -1,          // This is s32
    bignum        = 0x89abcdef,  // Used to be a u32 that could be silently reinterpreted as s32. Today the interpretation is compiler-specific: it may be a u32, an s64, or reinterpreted in some alternate compiler-specific length.
};

// PROGRAMMERS BEWARE! Under C++03 this was u32. Under C++11 this can be (but is not guaranteed to be) an s64. Generally when used in integral operations the other value will silently be promoted to a compatible integer type, probably s64. In some expressions (such as those involving enums) it can also be a u32. It should probably have either a "UL" or "LL" suffix at the end.
#define MAGICNUMBER (0x89abcdef)

myEnum foo = bignum; /* foo has underlying type of s32. The value of bignum is not fixed, and may be interpreted as either s32 (-1985229329) or u32/s64 (2309737967) depending on context. The compiler is okay with this assignment due to enum rules; the value is silently reinterpreted as an s32 regardless of how it interprets the value elsewhere. */
if( foo == smallnum )  ...  /* This is a signed 32-bit comparison */
if( foo == bignum ) ... /* This is probably a signed 32-bit comparison. However, we found a few compilers that expanded it and treated it as a 64-bit comparison */
if( foo == MAGICNUMBER ) ... /* BUG! Compiler-specific functionality! The generated assembly can legally be s32, u32, s64, or u64. It is up to the compiler which one is chosen, and whether any sign-extension is used. The result of this expression is false with some compilers, true with others. Caveat emptor. */

We were bitten by this on one console's compiler.
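For what it's worth, one way to sidestep this entirely (a sketch using C++11's fixed underlying types; the suffixed macro is my own variation, not the original code) is to pin down every type explicitly so nothing is left to promotion rules:

#include <cstdint>

enum myEnum : std::int64_t {   // Underlying type is now fixed: s64.
    smallnum    = 0x01234567,
    signedvalue = -1,
    bignum      = 0x89abcdef,  // Unambiguously s64, on every compiler.
};

#define MAGICNUMBER (0x89abcdefLL)  // 'LL' suffix: always s64.

static_assert(bignum == MAGICNUMBER, "comparison is now well-defined");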

 

 

 

Just like the 32-bit transition in the early '90s, there will be some growing pains in moving to the new 64-bit-aware standard.


l0calh05t    1796
...

 

 

That doesn't sound right. 0x89abcdef is an int literal (not unsigned int as you suggest in the myEnum comment) and should always be whatever an int is with your compiler (which may be 32 or 64 bit, but that isn't new).

l0calh05t    1796

 

The saddest part? On the far right (lowest priority/farthest in the future) there are still two C++98 bullet points

I don't know which list you are looking at, but I'll give even odds that one of those C++98 features is template export.

...snip...

 

Nope, template export isn't even on the list (and it was removed in C++11 anyway). The missing C++98 points still to be implemented, according to the roadmap, are correct two-phase lookup (template instantiation) and some preprocessor features.

SiCrane    11839

That doesn't sound right. 0x89abcdef is an int literal (not unsigned int as you suggest in the myEnum comment) and should always be whatever an int is with your compiler (which may be 32 or 64 bit, but that isn't new).

Here's the relevant bit of the old standard:

The type of an integer literal depends on its form, value, and suffix.... If it is octal or hexadecimal and has no suffix, it has the first of these types in which its value can be represented: int, unsigned int, long int, unsigned long int.

On a compiler where int is 32 bits, 0x89abcdef would not fit in an int, but would fit in an unsigned int, so as a literal 0x89abcdef would be an unsigned int. The rules in C++11 are substantially the same except that the list goes int, unsigned int, long int, unsigned long int, long long int, unsigned long long int. So for a compiler where int is 32 bits, 0x89abcdef is still going to be an unsigned int. In the case where int is 16 bits and long is 32 bits, under both standards 0x89abcdef would be an unsigned long. In either of these cases 0x89abcdef would be an unsigned 32-bit integer. (More exotic architectures where int and long are different bit sizes are left as exercises for the reader.) So frob's comment about the type of the literal being possibly something other than u32 under C++11 is incorrect. (Again assuming typical bit sizes for integers. Relevant bits of the standard are 2.13 in the old version and 2.14 in the current version if anyone wants to follow along.)
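You can verify this with any compiler handy (a quick sketch I'm adding; it assumes the common case of 32-bit int):

#include <type_traits>

// 0x12345678 fits in int, so the literal is int.
static_assert(std::is_same<decltype(0x12345678), int>::value,
              "hex literal that fits in int is int");

// 0x89abcdef does not fit in a 32-bit int but does fit in unsigned
// int, the next type on the list, so the literal is unsigned int --
// under C++03 and C++11 alike.
static_assert(std::is_same<decltype(0x89abcdef), unsigned int>::value,
              "hex literal that overflows int is unsigned int");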

However, what frob is probably referring to is the effective type after integral promotions. When a binary operator is applied to two operands of integral type, both sides are promoted to a common type before being operated on. The rules are sufficiently complex that I'm not going to bother to summarize them, but while under the old standard they were pretty predictable, under the current standard they are a mess of implementation-specific choices. (If you want to follow along, the relevant parts are section 5 paragraph 9 and section 4.5 in both standards, plus 4.13 in the current standard.) frob makes some assumptions about bit sizes of integer types that the standards don't mandate, but is otherwise substantially correct when talking about the effective type of the literal rather than the actual type. (I'm not sure that his characterization of foo == MAGICNUMBER being possibly s32 is correct, but I can see u32, s64, and u64 all happening.)

l0calh05t    1796

 

That doesn't sound right. 0x89abcdef is an int literal (not unsigned int as you suggest in the myEnum comment) and should always be whatever an int is with your compiler.

...snip... So frob's comment about the type of the literal being possibly something other than u32 under C++11 is incorrect. (Again assuming typical bit sizes for integers.)

 

First off, +1 for naming the relevant sections of the standard. I didn't know that hex literals could be considered unsigned or signed.

 

As far as I can tell from 4.5 §3, the enum would have to evaluate to an s64 (on a typical int = long = s32, long long = s64 compiler), since an s32 cannot represent 0x89abcdef and a u32 cannot represent -1. And the comparison would have to be performed as s64, and under no circumstances as s32 (the number itself isn't representable as s32).
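A quick way to check that reading (a sketch I'm adding, assuming a conforming compiler with a 64-bit long or long long): unary plus triggers the integral promotion of 4.5, and neither int nor unsigned int can represent both -1 and 0x89abcdef.

enum myEnum {
    signedvalue = -1,
    bignum      = 0x89abcdef,
};

// Unary + applies the integral promotion of 4.5 §3. The promoted type
// is long on LP64 systems and long long on LLP64 systems, but either
// way it is a signed 64-bit type, so the comparison cannot be s32.
static_assert(sizeof(+bignum) == 8, "promotes to a 64-bit type");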

SiCrane    11839
4.5 isn't the section used to determine the underlying type of a non-explicitly typed enum; it's 7.2 (paragraph 5 in the old standard and paragraph 6 in the new standard). Both versions of the standard contain the wonderfully non-specific line "It is implementation-defined which integral type is used as the underlying type except that the underlying type shall not be larger than int unless the value of an enumerator cannot fit in an int or unsigned int." Note that it says the value of an enumerator, not that the range of all enumerators must simultaneously fit within an int or unsigned int.
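If you want to see what your compiler actually picks, std::underlying_type will tell you (a sketch I'm adding; the output varies by implementation, which is rather the point):

#include <iostream>
#include <type_traits>

enum Mixed {
    neg = -1,          // fits in int
    big = 0x89abcdef,  // fits in unsigned int, but not in int
};

int main()
{
    // Each enumerator fits in int OR unsigned int individually, but no
    // 32-bit type holds both values, which is exactly the wording
    // problem in the quoted sentence.
    std::cout << "underlying type: "
              << sizeof(std::underlying_type<Mixed>::type) * 8 << " bits, "
              << (std::is_signed<std::underlying_type<Mixed>::type>::value
                      ? "signed" : "unsigned")
              << std::endl;
    return 0;
}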

frob    44911

Sorry for not making that clear.  In some ways I am living in the future of the language and compilers, not the present.

Nearly all of today's x64 "64-bit compilers" are actually 32-bit compilers with 64-bit extensions bolted on. Fundamentally they are designed around a 32-bit architecture, 32-bit integers, and 32-bit optimizations.

Some systems, notably a large number of ARM compilers, have already transitioned to fully 64-bit designs. A growing number of x64 compilers are transitioning to full 64-bit. Notably among the x64 systems, gcc has been pushing for fully 64-bit systems rather than "32-bit with extensions".

Some of us work on cross-platform games that target both 32-bit and (true) 64-bit systems.

 

Again, apologies if that was unclear. The C++11 standard has been expanded to support 64-bit integers as standard integer types. For some code that is a terrifying chasm to be crossed. For some code that is like having shackles removed. Relatively few programmers are in the latter camp.

 

 

 

 

I posted the example to show a case where valid code that worked one way under a C++03 compiler gives different results under a C++11 compiler.

 

The addition of 64-bit integer types and the changes to enumerations are two cases where the behavior is not "fully backwards compatible."

 

 

 

That doesn't sound right. 0x89abcdef is an int literal (not unsigned int as you suggest in the myEnum comment) and should always be whatever an int is with your compiler.

...snip... So frob's comment about the type of the literal being possibly something other than u32 under C++11 is incorrect. (Again assuming typical bit sizes for integers.)

 

 
Correct about the assumption of systems with 32-bit int. The cross-platform game targets a game system that happens not to follow the 32-bit int used on PC and current-gen consoles. That assumption is increasingly invalid.

 

As for when enums are mixed in, there are many fun and exciting rules involved now that C++11 has introduced "fixed" enums. A fixed enum has an explicitly specified underlying type. If no type is given, there are several rules to determine the underlying type. But something interesting happens when you specify values directly in a non-fixed enum. Under 7.2, an enumerator can have a different type than the enumeration itself: "If the underlying type is not fixed, the type of each enumerator is the type of its initializing value". The actual underlying type is implementation-defined. There are a few more details, but a careful reading (and, if you follow standards notes, a longstanding frustration) is that this allows a few fun quirks, such as an enumeration's underlying type being signed while an individual enumerator is unsigned. That's why the example of an enumerator with an initializing value of 0x89abcdef becomes interesting. Many programmers do not think about implementation-defined behavior.
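To illustrate the quirk (a sketch I'm adding, assuming 32-bit int): inside the enumerator list each enumerator takes the type of its initializer, so mixed signedness can appear within a single enumeration before the closing brace fixes everything to the enum's type.

enum Quirk {
    neg = -1,            // Prior to the closing brace, this has type int.
    big = 0x89abcdef,    // Prior to the closing brace, type unsigned int.
    // Arithmetic between enumerators here would mix signed and
    // unsigned operands, even though both belong to the same enum.
};
// After the closing brace, neg and big both have type Quirk, whose
// (implementation-defined) underlying type must represent both values.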

 

 

Now that the standard permits 64-bit types, conversions, and promotions, code is expected to conform to them. There is much code that will behave badly when moving forward to 64-bit compilers (which I have had the joy of experiencing), just as much code was similarly troublesome in the 16-bit to 32-bit migration.

 
 
 
The key difference is that the old standard was limited to 32-bit promotions and conversions. The new standard includes 64-bit integer types, along with their promotion and conversion rules, as standard. These rules are very nearly, but not completely, backwards compatible. They have the potential to break your code when you transition to C++11.
 
 

However, what frob is probably referring to is the effective type after integral promotions. ...snip...

 

 

That is the second thing I was referring to, yes.

 

The same preferences apply: first signed, then unsigned. Mixing an s32 and a u32 (by way of an enum's implementation-defined promotion) can result in both being promoted to s64, since s64 can represent all the values of both source types.
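The signed-first preference is easy to see with the wider types C++11 guarantees (a sketch I'm adding, assuming 32-bit int and 64-bit long long):

#include <type_traits>

// When the signed type can represent every value of the unsigned
// operand, the usual arithmetic conversions choose the signed type:
static_assert(std::is_same<decltype(1u + 1LL), long long>::value,
              "u32 op s64 -> s64, since s64 holds all u32 values");

// But between int and unsigned int (equal rank), unsigned still wins,
// which is the classic pre-C++11 gotcha:
static_assert(std::is_same<decltype(1 + 1u), unsigned int>::value,
              "s32 op u32 -> u32");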
  

These are just some of the subtle little gotchas that moving to a 64-bit enabled standard provides.
 
If you are using what amounts to a 32-bit compiler with 64-bit extensions, code can generate very different implementation-defined results than it does under a true 64-bit compiler. Both results are legal.

 

 

 

The point of all this, if you look back, is that C++11 is not fully backwards compatible with C++03, as an earlier post in the thread questioned.  

 

Some code that worked one way under C++03 can work slightly differently under C++11 in ways that can break your program.  Some behavior (a small amount) that was specified in C++03 now falls under slightly different specifications. The larger your programs, the greater the risk of breakage.  You cannot replace a C++03 compiler with a C++11 compiler and blindly expect everything to just work.

l0calh05t    1796

4.5 isn't the section used to determine the underlying type of a non-explicitly typed enum; it's 7.2 (paragraph 5 in the old standard and paragraph 6 in the new standard). ...snip...

Well, 4.5§3 states

A prvalue of an unscoped enumeration type whose underlying type is not fixed (7.2) can be converted to a prvalue of the first of the following types that can represent all the values of the enumeration (i.e., the values in the range bmin to bmax as described in 7.2): int, unsigned int, long int, unsigned long int, long long int, or unsigned long long int. If none of the types in that list can represent all the values of the enumeration, a prvalue of an unscoped enumeration type can be converted to a prvalue of the extended integer type with lowest integer conversion rank (4.13) greater than the rank of long long in which all the values of the enumeration can be represented. If there are two such extended types, the signed one is chosen.

which would seem to apply to enums, no?

7.2 §5 states

Each enumeration defines a type that is different from all other types. Each enumeration also has an underlying type. The underlying type can be explicitly specified using enum-base; if not explicitly specified, the underlying type of a scoped enumeration type is int. In these cases, the underlying type is said to be fixed. [...]

But this isn't a scoped enum (enum class / enum struct), so the type is not fixed, and 7.2 §6 states

For an enumeration whose underlying type is not fixed, the underlying type is an integral type that can represent all the enumerator values defined in the enumeration. [...]

Which would seem to lead to the same result as 4.5 §3.

 

In any case, yes, C++11 is not backwards compatible with C++98. In fact, even the auto keyword used to have a different meaning.
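A two-line illustration of the auto change (a sketch I'm adding; each line compiles under exactly one of the two standards):

void demo()
{
    auto int x = 5;  // C++03: legal; 'auto' is a (redundant) storage-class
                     // specifier. C++11: ill-formed; the keyword was repurposed.
    auto y = 5;      // C++11: legal; y is deduced as int. C++03: ill-formed,
                     // since implicit int is not allowed in C++.
}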

frob    44911

 

For an enumeration whose underlying type is not fixed, the underlying type is an integral type that can represent all the enumerator values defined in the enumeration. [...]

Which would seem to lead to the same result as 4.5 §3

 

 

See also language defects #1618 (Gratuitously-unsigned underlying enum type) and #1636 (Bits required for negative enumerator values).

 

 

 

 

Enumerations and enumerators in cross-platform code are an implementation-defined mess. They mostly work okay, and they generally work as programmers expect, but their actual inner workings play fast and loose with the rules. Compilers differ on some rather important implementation-defined details. Usually if you stick to a tight family of compilers the functionality is the same, but when you cross broader platform boundaries the behavior becomes difficult to rely on. Sharing enumeration-using utility code between PC and PlayStation? Not a problem. Sharing enumeration-using utility code between PC, 3DS, and Android? Heaven help you.

 

C++98 was intentionally vague, leaving almost everything up to the implementation. By C++03 the committee knew about many of the issues but intentionally didn't address them, since it was meant to be a minor update.

 

For C++11, the committee notes and discussions indicate that they tried their best to tighten the specification, and they closed almost all of the open issues, but they again left much of the inner workings to implementation-defined behavior.

