

List of C++11 compliant compilers?



#21 MarkS   Prime Members   -  Reputation: 882


Posted 04 August 2013 - 02:15 PM

What exactly do you feel is missing that you would like to use? While Visual Studio is missing quite a bit of C++11, it has a decent amount of it implemented, especially the more important bits. The chunks and pieces that are missing aren't particularly important in most cases, though that doesn't mean you won't miss them on the rare occasion.
 
There is a great deal to C++11, and no current compilers are "standards compliant," although clang makes a decent go at it.
 
 

That is minor in comparison to what is missing from Visual Studio. I have a feeling that gcc will be far better at implementing the standards than Microsoft.

I would consider the lack of a regular expressions library, part of the new standard library, to be a rather significant chunk of the C++ language to be missing, especially since regular expressions tend to creep in and find their way into so many different places you wouldn't expect them to.


I don't consider the STL to be part of the actual language, seeing as it is a library built on the language for the purpose of extending it. To that end, I have known of the existence of regex, but I had no idea exactly what it was until last week. It isn't something that I have ever used or needed. Now that I know what it is, I can find uses for it when gcc implements it, but it will not be missed in the interim. My only gripe with VS's STL implementation is that it doesn't properly implement emplace (emplace_back, et al.). All emplace does in VS is call push_back. I can do that myself!
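
(For reference, a minimal sketch of the difference being described; the Widget type and its values are made up for illustration. A conforming emplace_back constructs the element in place from the forwarded arguments, whereas push_back builds a temporary first.)

#include <string>
#include <vector>

struct Widget {
    std::string name;
    int id;
    Widget(std::string n, int i) : name(n), id(i) {}
};

int main() {
    std::vector<Widget> widgets;

    // push_back: a temporary Widget is built first, then moved/copied into the vector.
    widgets.push_back(Widget("axe", 1));

    // A conforming emplace_back forwards the arguments and constructs the
    // Widget directly inside the vector's storage, with no temporary.
    widgets.emplace_back("sword", 2);
}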

The lists that Matt-D posted highlight some of the things that I cannot do and want to do in VS: initializer lists, variadic templates, defaulted and deleted functions (you cannot properly do move constructors without them) and user-defined literals, to name a few. These are not part of the STL, and I cannot even try to use them in VS 2012.
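
(To make that concrete, here is a small sketch using two of those features; the "_km" literal suffix is made up for illustration. Neither declaration compiles under VS 2012.)

#include <vector>

// Brace-initializer list -- one of the features listed above.
std::vector<int> primes = {2, 3, 5, 7, 11};

// User-defined literal: a made-up "_km" suffix that converts kilometres to metres.
long double operator"" _km(long double value) { return value * 1000.0L; }

long double distance_in_metres = 1.5_km;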

Yeah, Clang really is pulling quite far ahead of gcc now; heck, even if you ignore the C++11/C99/C11 support, Clang's warnings and error messages alone are a good enough reason to use it over gcc (although gcc is catching up in that area as well now).


I looked at clang before choosing gcc. There seems to be trouble coupling it with Code::Blocks (I haven't tried it myself yet) and what I forgot to add in my original post was the need for a graphical IDE. I have no allegiance to gcc, other than it can be effortlessly integrated with Code::Blocks. If someone knows how to integrate clang with Code::Blocks or if someone knows of an adequate IDE for clang, I have no problem switching.

Edited by MarkS, 04 August 2013 - 02:21 PM.



#22 Chris_F   Members   -  Reputation: 2441


Posted 04 August 2013 - 02:22 PM

It would be nice if there was some real effort to get Clang on Windows. I'd be real tempted to switch to it from GCC considering what I've seen with error messages and C++11/14 support.



#23 Washu   Senior Moderators   -  Reputation: 5369


Posted 04 August 2013 - 02:27 PM


I don't consider the STL to be part of the actual language, seeing as it is a library built on the language for the purpose of extending it. To that end, I have known of the existence of regex, but I had no idea exactly what it was until last week. It isn't something that I have ever used or needed. Now that I know what it is, I can find uses for it when gcc implements it, but it will not be missed in the interim. My only gripe with VS's STL implementation is that it doesn't properly implement emplace (emplace_back, et al.). All emplace does in VS is call push_back. I can do that myself!

"STL" doesn't exist.

 

The standard library is part of the language in any non-standalone implementation of the language. As per the language standard.

 


The lists that Matt-D posted highlight some of the things that I cannot do and want to do in VS: initializer lists, variadic templates, defaulted and deleted functions (you cannot properly do move constructors without them) and user-defined literals, to name a few. These are not part of the STL, and I cannot even try to use them in VS 2012.

 

As I noted previously, there are some things missing from VS that are useful (variadic templates are pretty high on my list); however, it's not actually something I find myself needing or even wanting to use in a large number of cases. You can handle move constructors perfectly fine without the use of defaulted or deleted functions. It might not be as simple as just tacking on a default or delete, but you can do it quite easily.
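
(A minimal sketch of that approach, assuming a compiler such as VS 2012 that has rvalue references but lacks = default/= delete; Buffer is a made-up example type.)

#include <utility>
#include <vector>

class Buffer {
public:
    Buffer() {}

    // Hand-written move constructor -- roughly what "= default" would generate.
    Buffer(Buffer&& other) : data_(std::move(other.data_)) {}

    // Hand-written move assignment.
    Buffer& operator=(Buffer&& other) {
        data_ = std::move(other.data_);
        return *this;
    }

private:
    // Pre-C++11 idiom standing in for "= delete": declare the copy operations
    // private and leave them undefined, so copies from outside the class fail to compile.
    Buffer(const Buffer&);
    Buffer& operator=(const Buffer&);

    std::vector<int> data_;
};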


In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.
ScapeCode - Blog | SlimDX


#24 MarkS   Prime Members   -  Reputation: 882


Posted 04 August 2013 - 02:41 PM

The standard library is part of the language in any non-standalone implementation of the language. As per the language standard.


When I say "part of the language", what I mean is sitting down and writing code without including any headers. Anything that you can write in this case is "the language". Anything else is an add-on.

I do get that the STL is part of the design standard and considered part of C++; however, this breaks what I have always considered a computer language. I don't think anyone would argue that "printf" is part of the C language; it is part of the standard IO library. Yet somehow std::cout is considered part of C++. This is the case even though you can write entire programs and never touch the STL. Granted, doing so is akin to shooting yourself in the foot just for the heck of it.

#25 Matt-D   Crossbones+   -  Reputation: 1467


Posted 04 August 2013 - 03:54 PM

It would be nice if there was some real effort to get Clang on Windows. I'd be real tempted to switch to it from GCC considering what I've seen with error messages and C++11/14 support.

 

There is one, although it's non-free: Embarcadero C++ Builder XE3, http://isocpp.org/blog/2012/12/embarcadero-c-builder-xe3

Haven't had any experience with it, though.

 

 

The standard library is part of the language in any non-standalone implementation of the language. As per the language standard.


When I say "part of the language", what I mean is sitting down and writing code without including any headers. Anything that you can write in this case is "the language". Anything else is an add-on.

I do get that the STL is part of the design standard and considered part of C++; however, this breaks what I have always considered a computer language. I don't think anyone would argue that "printf" is part of the C language; it is part of the standard IO library. Yet somehow std::cout is considered part of C++. This is the case even though you can write entire programs and never touch the STL. Granted, doing so is akin to shooting yourself in the foot just for the heck of it.

 

 

Technically, what you're referring to is called "the C++ core language" or just "core C++" (standard C++, the core language).

Parts of STL have become incorporated into the C++ standard library (standard C++, the standard library).

This division is reflected in the ISO C++ standardization committee itself, with "core working group" (CWG) and "library working group" (LWG):

http://stackoverflow.com/questions/13221593/what-does-core-language-mean

// However, there are also WGs such as Evolution (aka EWG) or Library Evolution (LEWG) and further (sub)divisions: http://isocpp.org/std/the-committee

 

[Image: wg21-structure.png (diagram of the WG21 working group structure)]

 

Taken together, these define the C++ programming language as specified in the C++ standard (and a compliant implementation, whether just a compiler or an entire IDE, has to ship both the core language and the standard library in order to be considered a compliant C++ implementation).

 

Hope this clears things up ;-)


Edited by Matt-D, 04 August 2013 - 04:01 PM.


#26 SiCrane   Moderators   -  Reputation: 9630


Posted 04 August 2013 - 05:10 PM

I don't think anyone would argue that "printf" is part of the C language

I would. It's defined in section 7.21 of the current C standard, which is entitled "Programming languages - C", not "The C programming language and various bits not actually part of the language". Similarly, std::cout is defined in 27.4.2 of the current C++ standard.

#27 MJP   Moderators   -  Reputation: 11620


Posted 05 August 2013 - 12:11 AM

It would be nice if there was some real effort to get Clang on Windows. I'd be real tempted to switch to it from GCC considering what I've seen with error messages and C++11/14 support.

Yeah, I've used clang on non-Windows platforms and it's really great. I know one guy was maintaining a pre-compiled version of clang for Windows along with a VS add-in that could switch out MSVC++ for clang, but I don't know if he's still actively working on it. For the VS add-in to actually be workable, someone would have to figure out how to generate .PDBs so that the VS debugger would work with it, and that's probably a difficult task.



#28 Ravyne   GDNet+   -  Reputation: 7885


Posted 05 August 2013 - 01:48 PM

 


OP, have you tried the VS2013 preview? It supports more of C++11 -- indeed all, or nearly all, of the stuff you'd actually care to use. Again, it's just a preview, but the real deal is just around the corner.

 

Far from it. Implicit move generation is pretty important, and ref qualifiers are very helpful for correctness. And no proper constexpr. And no user-defined literals. All delayed to post-VC2013 CTPs (so not for production code). The saddest part? On the far right (lowest priority/farthest in the future) there are still two C++98 bullet points. Admittedly not that important, but that standard is now 15 years old, Visual Studio still isn't there yet, and I suspect that C++11/14 will take just as long.

 

Visual Studio is lagging far behind.

 

As for MinGW, I usually get my builds from http://sourceforge.net/projects/mingwbuilds/; those are very up to date.

 

 

I don't disagree that it's behind, and I've perhaps made the mistake of overstating what it supports -- it does lack features both important and not-so-important. The point I was pushing at is that C++11 adds a lot of stuff, and VS2013 will contain the majority of it by the numbers, and those numbers are made up of the most important, most game-changing additions -- things like auto, lambdas, r-value references, move semantics, perfect forwarding, variadic templates, etc. -- that actually change the way we build software, rather than just making things nicer. Certainly there are still things to add that are critical for some people and nice-to-haves for even a majority, but for a lot of what remains there are reasonable workarounds, or making do with what we have now is not so terrible. Perhaps qualifying that as "nearly all of the stuff you'd actually care to use" was worded poorly -- if your goal is to use one of those unsupported features, or to have a play with all of C++11, then VS2013 won't deliver for you. It's far from perfect; I'm just saying they have most things, and that the things they have provide the most value to the most people in their work.

 

You also have to keep in mind that Microsoft's compiler has a very different market in mind than open-source solutions like GCC or Clang -- the bulk of Microsoft's customers are corporations building and maintaining in-house tools. These corporations change at a glacial pace, and are entirely entrenched in the Microsoft platform. Microsoft could ship a markedly different dialect of C++ (as some have argued it did in the past, say with VC++ 6), and while that would not be a good thing, it would matter little to the majority of their core customers. In fact, adopting new standards or changing behavior in small ways is often more painful for those customers than the benefit it might provide -- especially if they have critical code that's on life support, where they haven't the budget or interest to update it wholesale but it still needs to compile when they fix a bug. That's why MS maintains a number of switches that restore some of that non-standard behavior (like the for-loop index variable scoping from VC6).

 

People who rely on open-source compilers typically (or, more so than the customers I describe above) keep their code moving along with current standards (sometimes as a rewrite, sometimes as a consequence of the software being open source), and when it's not kept up to date, they have the option of going back to whatever branch of GCC or Clang last worked for them, and they get the benefit of whatever bug fixes have been made to that branch even long after mainline development has moved on to the next point (or even whole-number) version. For Microsoft's compiler, you only get critical bug fixes until the support lifetime is up.

 

There's also the fact that many of the challenges in implementing new features in Microsoft's compiler come from the compiler's legacy going back to pre-standard C++, which is certainly not true of Clang. It's true of GCC as well, but GCC has had the benefit of essentially rolling releases. Everyone who uses GCC upgrades, because doing so only costs time and is typically very incremental; MS's releases are more spaced out, and often more drastic, so updating code for a new release is a huge deal in a corporate environment, and code may languish in old features and ways of doing things for a very long time as a result.

 

It's absolutely reasonable to criticize Microsoft for being behind, but such criticism should be tempered by understanding the realities the product faces. They're doing what best serves their customers, most of whom are not standards wonks. There's a lot of work ahead to be sure, but I doubt it will be that long before C++11 and 14 are fully conformant. Probably late 2015 at the earliest would be my guess, but not languishing for 5, 10, 15 years like a couple of parts of C99 have.



#29 swiftcoder   Senior Moderators   -  Reputation: 10242


Posted 05 August 2013 - 02:18 PM

The saddest part? On the far right (lowest priority/farthest in the future) there are still two C++98 bullet points

I don't know which list you are looking at, but I'll give even odds that one of those C++98 features is template export.

Nobody (with the partial exception of Comeau) ever implemented template export - it's widely regarded as a feature that should never have been added to the standard in the first place.

I'd hardly call it fair to ding Microsoft for that one, since both GCC and Clang also fail that test...


Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#30 MarkS   Prime Members   -  Reputation: 882


Posted 05 August 2013 - 02:27 PM

...snip...


I think I get what you're saying, but I have a question: isn't C++11 fully backwards compatible with C++98? Couldn't I write a fully C++98 program and have it compile on a C++11 compiler? If so, then there is no excuse not to fully implement C++11. If the implementation will not break existing code bases, then failure to implement C++11 boils down to some other reason.

My understanding is that the new standards add features, not break existing features.

#31 swiftcoder   Senior Moderators   -  Reputation: 10242


Posted 05 August 2013 - 02:42 PM


Isn't C++11 fully backwards compatible with C++98?

Nope. There are a handful of C++98 features that were removed from C++11, and a few more areas where the semantics changed between C++98 and C++11.
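
(Two small examples of the kind of change being described, both accepted by a C++98/03 compiler and rejected in C++11 mode; this is not an exhaustive list.)

void f() {
    // "auto" was a storage-class specifier in C++98/03; in C++11 it means type
    // deduction, so this declaration no longer compiles.
    auto int x = 5;

    // Converting a string literal to a non-const char* was deprecated in C++03
    // and removed outright in C++11.
    char* greeting = "hello";
}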


Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#32 MarkS   Prime Members   -  Reputation: 882


Posted 05 August 2013 - 02:46 PM

Ah. Thanks.

#33 SiCrane   Moderators   -  Reputation: 9630


Posted 05 August 2013 - 03:47 PM

Plus, C++11 added some new keywords, so C++03 code using those as identifiers would break; though of those keywords, the only one I've seen in real code is nullptr.
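
(For example, a project-local nullptr constant, which was legal C++03 code, stops compiling once nullptr becomes a keyword; the snippet below is hypothetical.)

// Fine in C++03, where "nullptr" was just an ordinary identifier.
// In C++11 it is a keyword, so this declaration is a syntax error.
const int nullptr = 0;

void clear_pointer(int*& p) {
    if (p != nullptr) {   // In C++03 this compared against the constant above.
        p = 0;
    }
}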



#34 frob   Moderators   -  Reputation: 22294


Posted 05 August 2013 - 04:03 PM

 


Isn't C++11 fully backwards compatible with C++98?

Nope. There are a handful of C++98 features that were removed from C++11, and a few more areas where the semantics changed between C++98 and C++11.

 

 

There are also some subtle breaking changes.  One that bit us was that integral promotions now go up to 64 bits.  Add to that the fact that the signedness of some promotions is either undefined or implementation-defined when enums are involved.

 

Example:

enum myEnum {   // Variables of this type are s32
    smallnum      = 0x01234567,  // This is s32
    signedvalue   = -1,          // This is s32
    bignum        = 0x89abcdef   // Used to be a u32 that could be silently reinterpreted as s32. Today the interpretation is compiler specific: it may be a u32, an s64, or reinterpreted in some alternate compiler-specific length.
};

// PROGRAMMERS BEWARE! Under C++03 this was u32. Under C++11 this can be (but is not guaranteed to be) an s64. Generally, when used in integral operations, the other value will silently be promoted to a compatible integer type, probably s64. In some expressions (such as those involving enums) it can also be a u32. It should probably have either a "UL" or "LL" suffix at the end.
#define MAGICNUMBER (0x89abcdef)

myEnum foo = bignum; /* foo has the underlying type of the enum (s32). The value of bignum is not fixed: it may be interpreted as a u32 or an s64 (2309737967) or reinterpreted as an s32 (-1985229329) depending on context. The compiler is okay with this assignment due to enum rules; the value is silently reinterpreted as an s32 regardless of how it interprets the value elsewhere. */
if( foo == smallnum )  ...  /* This is a signed 32-bit comparison */
if( foo == bignum ) ... /* This is probably a signed 32-bit comparison. However, we found a few compilers that expanded it and treated it as a 64-bit comparison */
if( foo == MAGICNUMBER ) ... /* BUG! Compiler-specific behavior! The generated comparison can legally be s32, u32, s64, or u64. It is up to the compiler which one is chosen, and whether any sign extension is used. The result of this expression is false with some compilers, true with others. Caveat emptor. */

We were bitten by this on one console's compiler.

 

 

 

Just as with the 32-bit transition in the early '90s, there will be a bit of growing pain in moving to the new 64-bit standard.


Edited by frob, 05 August 2013 - 04:21 PM.
minor change for clarity and formatting.

Check out my book, Game Development with Unity, aimed at beginners who want to build fun games fast.

Also check out my personal website at bryanwagstaff.com, where I write about assorted stuff.


#35 l0calh05t   Members   -  Reputation: 800


Posted 06 August 2013 - 03:49 AM

...

 

 

That doesn't sound right. 0x89abcdef is an int literal (not an unsigned int, as you suggest in the myEnum comment) and should always be whatever an int is with your compiler (which may be 32 or 64 bits, but that isn't new).



#36 l0calh05t   Members   -  Reputation: 800


Posted 06 August 2013 - 03:55 AM

 

The saddest part? On the far right (lowest priority/farthest in the future) there are still two C++98 bullet points

I don't know which list you are looking at, but I'll give even odds that one of those C++98 features is template export.

Nobody (with the partial exception of Comeau) ever implemented template export - it's widely regarded as a feature that should never have been added to the standard in the first place.

I'd hardly call it fair to ding Microsoft for that one, since both GCC and Clang also fail that test...

 

Nope, template export isn't even on the list (and it was removed in C++11 anyway). The missing C++98 points that are still to be implemented, according to the roadmap, are correct two-phase lookup (template instantiation) and some preprocessor features.



#37 SiCrane   Moderators   -  Reputation: 9630


Posted 06 August 2013 - 09:10 AM

That doesn't sound right. 0x89abcdef is an int literal (not an unsigned int, as you suggest in the myEnum comment) and should always be whatever an int is with your compiler (which may be 32 or 64 bits, but that isn't new).

Here's the relevant bit of the old standard:

The type of an integer literal depends on its form, value, and suffix.... If it is octal or hexadecimal and has no suffix, it has the first of these types in which its value can be represented: int, unsigned int, long int, unsigned long int.

On a compiler where int is 32-bits 0x89abcdef would not fit in an int, but would fit in an unsigned int, so as a literal 0x89abcdef would be an unsigned int. The rules in C++11 are substantially the same except that the list goes int, unsigned int, long int, unsigned long int, long long int, unsigned long long int. So for a compiler where int is 32-bits, 0x89abcdef is still going to be an unsigned int. In the case where int is 16-bits and long is 32-bits, under both standards 0x89abcdef would be an unsigned long. In either of these cases 0x89abcdef would be an unsigned 32-bit integer. (More exotic architectures where int and long are different bit sizes are left as exercises for the reader.) So frob's comment about the type of the literal possibly being something other than u32 under C++11 is incorrect. (Again, assuming typical bit sizes for integers. The relevant bits of the standard are 2.13 in the old version and 2.14 in the current version if anyone wants to follow along.)

However, what frob is probably referring to is the effective type after integral promotions. When a binary operator is applied to two operands of integral type, both sides are promoted to a common type before being operated on. The rules are sufficiently complex that I'm not going to bother to summarize them, but while under the old standard they were pretty predictable, under the current standard they are a mess of implementation-specific choices. (If you want to follow along, the relevant parts are section 5 paragraph 9 and section 4.5 in both standards, plus 4.13 in the current standard.) frob makes some assumptions about bit sizes of integer types that the standards don't mandate, but is otherwise substantially correct when talking about the effective type of the literal rather than the actual type. (I'm not sure that his characterization of foo == MAGICNUMBER being possibly s32 is correct, but I can see u32, s64 and u64 all happening.)
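
(A quick way to check the literal's type on a given compiler; this needs C++11 itself, since it uses decltype and static_assert, and the assertion is written for an implementation where int is 32 bits.)

#include <type_traits>

// On an implementation with a 32-bit int, 0x89abcdef does not fit in int but
// does fit in unsigned int, so that is the type it takes -- under both the
// C++03 and C++11 rules quoted above.
static_assert(std::is_same<decltype(0x89abcdef), unsigned int>::value,
              "0x89abcdef has type unsigned int on this implementation");

int main() {}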

#38 l0calh05t   Members   -  Reputation: 800


Posted 06 August 2013 - 11:57 AM

 

That doesn't sound right. 0x89abcdef is an int literal (not an unsigned int, as you suggest in the myEnum comment) and should always be whatever an int is with your compiler (which may be 32 or 64 bits, but that isn't new).

Here's the relevant bit of the old standard:

The type of an integer literal depends on its form, value, and suffix.... If it is octal or hexadecimal and has no suffix, it has the first of these types in which its value can be represented: int, unsigned int, long int, unsigned long int.

On a compiler where int is 32-bits 0x89abcdef would not fit in an int, but would fit in an unsigned int, so as a literal 0x89abcdef would be an unsigned int. The rules in C++11 are substantially the same except that the list goes int, unsigned int, long int, unsigned long int, long long int, unsigned long long int. So for a compiler where int is 32-bits, 0x89abcdef is still going to be an unsigned int. In the case where int is 16-bits and long is 32-bits, under both standards 0x89abcdef would be an unsigned long. In either of these cases 0x89abcdef would be an unsigned 32-bit integer. (More exotic architectures where int and long are different bit sizes are left as exercises for the reader.) So frob's comment about the type of the literal possibly being something other than u32 under C++11 is incorrect. (Again, assuming typical bit sizes for integers. The relevant bits of the standard are 2.13 in the old version and 2.14 in the current version if anyone wants to follow along.)

However, what frob is probably referring to is the effective type after integral promotions. When a binary operator is applied to two operands of integral type, both sides are promoted to a common type before being operated on. The rules are sufficiently complex that I'm not going to bother to summarize them, but while under the old standard they were pretty predictable, under the current standard they are a mess of implementation-specific choices. (If you want to follow along, the relevant parts are section 5 paragraph 9 and section 4.5 in both standards, plus 4.13 in the current standard.) frob makes some assumptions about bit sizes of integer types that the standards don't mandate, but is otherwise substantially correct when talking about the effective type of the literal rather than the actual type. (I'm not sure that his characterization of foo == MAGICNUMBER being possibly s32 is correct, but I can see u32, s64 and u64 all happening.)

 

First off, +1 for naming the relevant sections of the standard. I didn't know that hex literals could be considered unsigned or signed.

 

As far as I can tell from 4.5 §3, the enum would have to evaluate to an s64 (on a typical int = long = s32, long long = s64 compiler), since an s32 cannot represent 0x89abcdef and a u32 cannot represent -1. And the comparison would have to be performed as s64, and under no circumstances s32 (the number itself isn't representable as an s32).



#39 SiCrane   Moderators   -  Reputation: 9630


Posted 06 August 2013 - 12:28 PM

4.5 isn't the section used to determine the underlying type of a non-explicitly typed enum; it's 7.2 (paragraph 5 in the old standard and paragraph 6 in the new standard). Both versions of the standard contain the wonderfully non-specific line "It is implementation-defined which integral type is used as the underlying type except that the underlying type shall not be larger than int unless the value of an enumerator cannot fit in an int or unsigned int." Note that it says the value of an enumerator, not that the range of all the enumerators must simultaneously fit within an int or unsigned int.

#40 frob   Moderators   -  Reputation: 22294


Posted 06 August 2013 - 12:58 PM

Sorry for not making that clear.  In some ways I am living in the future of the language and compilers, not the present.

Nearly all of today's x64 "64-bit compilers" are actually 32-bit compilers with 64-bit extensions bolted on. Fundamentally they are designed around 32-bit architecture, 32-bit integers, and 32-bit optimizations.

Some systems, notably a large number of ARM compilers, have already transitioned to fully 64-bit designs. A growing number of x64 compilers are transitioning to full 64-bit. Notably among the x64 systems, gcc has been pushing for fully 64-bit systems rather than "32-bit with extensions".

Some of us work on games that are cross-platform that include both 32-bit and (true) 64-bit systems.

 

Again, apologies if that was unclear.  The C++11 standard has been expanded to include 64-bit integers among the standard integer types. For some code that is a terrifying chasm to be crossed.  For some code it is like having shackles removed.  Relatively few programmers are in the latter camp.

 

 

 

 

The reason I posted the example was to show a case where valid code that worked one way under a C++03 compiler gives different results under a C++11 compiler.

 

The addition of 64-bit integer types and the changes to enumerations are two cases where the behavior is not "fully backwards compatible."

 

 

 

That doesn't sound right. 0x89abcdef is an int literal (not an unsigned int, as you suggest in the myEnum comment) and should always be whatever an int is with your compiler (which may be 32 or 64 bits, but that isn't new).

Here's the relevant bit of the old standard:

The type of an integer literal depends on its form, value, and suffix.... If it is octal or hexadecimal and has no suffix, it has the first of these types in which its value can be represented: int, unsigned int, long int, unsigned long int.

On a compiler where int is 32-bits 0x89abcdef would not fit in an int, but would fit in an unsigned int, so as a literal 0x89abcdef would be an unsigned int. The rules in C++11 are substantially the same except that the list goes int, unsigned int, long int, unsigned long int, long long int, unsigned long long int. So for a compiler where int is 32-bits, 0x89abcdef is still going to be an unsigned int. In the case where int is 16-bits and long is 32-bits, under both standards 0x89abcdef would be an unsigned long. In either of these cases 0x89abcdef would be an unsigned 32-bit integer. (More exotic architectures where int and long are different bit sizes are left as exercises for the reader.) So frob's comment about the type of the literal possibly being something other than u32 under C++11 is incorrect. (Again, assuming typical bit sizes for integers. The relevant bits of the standard are 2.13 in the old version and 2.14 in the current version if anyone wants to follow along.)

 

 
Correct about the assumption of systems with a 32-bit int. The cross-platform game targets a game system that happens not to follow the 32-bit int used on PCs and current-gen consoles.  The assumption is increasingly invalid.

 

As for when enums are mixed in, there are many fun and exciting rules involved now that C++11 has introduced "fixed" enums. A fixed enum has an underlying type specified explicitly. If no type is given, there are several rules to determine the underlying type. But something interesting happens when you specify values directly with a non-fixed enum. Under 7.2, an enumeration can have a different type than its enumerated values: "If the underlying type is not fixed, the type of each enumerator is the type of its initializing value". The actual type is implementation-defined. There are a few more details, but a careful reading (and, if you follow standards notes, a longstanding frustration) is that it allows a few fun quirks, such as an enumeration's underlying type being signed while an individual enumerator is unsigned. That's why the example of an enumerator with an initializing value of 0x89abcdef becomes interesting.  Many programmers do not think about implementation-defined behavior.
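
(A sketch of the C++11-only fix this points toward, reusing the names from the earlier example: giving the enum a fixed underlying type, and giving the macro constant an explicit suffix, pins the types down instead of leaving them implementation-defined.)

#include <cstdint>

// "Fixed" enum: the underlying type is specified, so every enumerator has a
// well-defined 64-bit signed type on every conforming compiler.
enum myEnum : std::int64_t {
    smallnum    = 0x01234567,
    signedvalue = -1,
    bignum      = 0x89abcdef   // unambiguously an int64_t value now
};

// An explicit suffix removes the ambiguity on the macro side as well.
#define MAGICNUMBER (0x89abcdefLL)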

 

 

Now that the standard permits 64-bit conversions and promotions and types, code is expected to conform to it.  There is much code that will behave badly when moving forward to 64-bit compilers (which I had the joy of experiencing), just as much code was similarly troublesome in the 16-bit to 32-bit migration.

 
 
 
The key difference is that the old standard was limited to 32-bit promotions and conversions.  The new standard adds 64-bit types to the standard integer types, along with the corresponding promotion and conversion rules.  These rules are very nearly, but not completely, backwards compatible. They have the potential to break your code when you transition to C++11.
 
 

However, what frob is probably referring to is the effective type after integral promotions. When a binary operator is applied to two operands of integral type, both sides are promoted to a common type before being operated on. The rules are sufficiently complex that I'm not going to bother to summarize them, but while under the old standard they were pretty predictable, under the current standard they are a mess of implementation-specific choices. (If you want to follow along, the relevant parts are section 5 paragraph 9 and section 4.5 in both standards, plus 4.13 in the current standard.) frob makes some assumptions about bit sizes of integer types that the standards don't mandate, but is otherwise substantially correct when talking about the effective type of the literal rather than the actual type. (I'm not sure that his characterization of foo == MAGICNUMBER being possibly s32 is correct, but I can see u32, s64 and u64 all happening.)

 

 

That is the second thing I was referring to, yes.

 

The same preferences apply: first signed, then unsigned.  Having s32 op u32 can result in both being promoted to s64, since it can represent all the values of the source types.
  

These are just some of the subtle little gotchas that moving to a 64-bit enabled standard provides.
 
If you are using what amounts to a 32-bit compiler with 64-bit extensions, that code can generate very different implementation defined results from a true 64-bit compiler. Both results are legal.

 

 

 

The point of all this, if you look back, is that C++11 is not fully backwards compatible with C++03, as an earlier post in the thread questioned.  

 

Some code that worked one way under C++03 can work slightly differently under C++11 in ways that can break your program.  Some behavior (a small amount) that was specified in C++03 now falls under slightly different specifications. The larger your programs, the greater the risk of breakage.  You cannot replace a C++03 compiler with a C++11 compiler and blindly expect everything to just work.


Check out my book, Game Development with Unity, aimed at beginners who want to build fun games fast.

Also check out my personal website at bryanwagstaff.com, where I write about assorted stuff.




