why c++ is the most widely used language for professional game development?

Started by
34 comments, last by jpetrie 9 years, 9 months ago

You're actually all missing the key point of C++: it was mainly created to deal with larger and larger code bases. That's why classes were created, for reusable code, and it evolved into a standardised language from there.

The compiler translates all your C/C++ into assembly language during the code generation stage anyway.

You should really download the GCC source code and have a look through it; you will see that it supports many CPUs.


You're actually all missing the key point of C++: it was mainly created to deal with larger and larger code bases. That's why classes were created, for reusable code, and it evolved into a standardised language from there.

The problem is, C++ is a terrible language for dealing with large codebases.

Its insane compilation model leads to terrible build times, and completely breaks down in large codebases. A Windows build of Qt5 takes >2 hours on a quad-core system with an SSD. A comparable codebase in C# would take less than 10 minutes (my system manages ~100 kloc of C# per second, which would build the whole 10 Mloc of Qt5 in 1.5 minutes).

Its lack of garbage collection means that long-running applications will suffer from memory fragmentation issues unless special measures are taken.

The lack of an ABI means that it's impossible to create reusable components in a truly portable manner.

The complexity of the specification means that no two compilers implement the language in a compatible manner. Portable code is littered with "#if _MSC_VER > 1700" and similar line noise.

C++ may have been an improvement over C when it was created (debatable), but right now the main reason for its popularity is inertia.

[OpenTK: C# OpenGL 4.4, OpenGL ES 3.0 and OpenAL 1.1. Now with Linux/KMS support!]

Its insane compilation model leads to terrible build times, and completely breaks down in large codebases. A Windows build of Qt5 takes >2 hours on a quad-core system with an SSD. A comparable codebase in C# would take less than 10 minutes (my system manages ~100 kloc of C# per second, which would build the whole 10 Mloc of Qt5 in 1.5 minutes).

Its lack of garbage collection means that long-running applications will suffer from memory fragmentation issues unless special measures are taken.

The complexity of the specification means that no two compilers implement the language in a compatible manner. Portable code is littered with "#if _MSC_VER > 1700" and similar line noise.

C++ may have been an improvement over C when it was created (debatable), but right now the main reason for its popularity is inertia.

Those are, in many ways, the strengths of the language.

The reason build times are insanely huge is because of the compilation and linking model. Pull everything in. Inline everything you possibly can. Optimize and precompute everything possible, perform every optimization possible, restructure everything from the biggest algorithms to the smallest pigeon-hole to be cache friendly, OOO-core friendly, branch predictor friendly, lookahead table friendly, and more.

Lack of garbage collection means less memory used; it is well established in academia that GC-style systems generally require 1.5x the memory requirements to maintain similar performance. Yes it requires more brain power for the humans to manage the object lifetime, but when you are on a console device or mobile device with memory measured in megabytes taking a 1/3 reduction in effective memory just to use automatic garbage collection is an unwise tradeoff.

The ability to have system-specific improvements means you can take advantage of features rather than relying on the most generic or completely portable features. If some hardware or compiler offers a feature you can take advantage of, then take advantage of it. You don't say things like "This hardware offers parallel processing, SIMD, lots of cores, and hardware acceleration for 3D graphics, but I'm going to stick with the basics. None of those fancy instructions or libraries, it is pure C++98, no threading, and all graphics will be done with direct hardware interaction." No, instead you look for features on the system and take advantage of them.


The language is less productive than many newer languages, but we don't use C++ for productivity reasons. We use it because the compilation model makes for incredible optimizations, because it allows programmers to control everything, because unlike other languages you only pay for features when you use them (with only two exceptions, exception handling and RTTI, and those are frequently disabled). We use it because it is trivially compatible with everything else. We use it because there is an enormous library of functionality to rely on. We use it because it allows extensions to take advantage of hardware. And when it is not the right language, we build scripting systems or exposed interfaces or can otherwise leverage high-productivity languages when performance is not key.

For systems-level work, C#, Java, and other languages leave much to be desired. C++ is great for systems-level work. It is more productive than its predecessors, and flexible enough for all your low-level needs.

The problem is, C++ is a terrible language for dealing with large codebases.

Its insane compilation model leads to terrible build times, and completely breaks down in large codebases. A Windows build of Qt5 takes >2 hours on a quad-core system with an SSD. A comparable codebase in C# would take less than 10 minutes (my system manages ~100 kloc of C# per second, which would build the whole 10 Mloc of Qt5 in 1.5 minutes).

There are ways to mitigate this with good design. But yes, the C compilation model which C++ uses doesn't encourage it. That's why the C++ committee is looking into modules.

Its lack of garbage collection means that long-running applications will suffer from memory fragmentation issues unless special measures are taken.


Some people would consider the lack of a garbage collector a huge benefit, especially in the gaming realm where you can't afford random framerate drops due to collections you can't control. Also, garbage collectors only manage memory; they do nothing to help you with other resources you must manage.

The lack of an ABI means that it's impossible to create reusable components in a truly portable manner.


It's certainly a flaw, and various solutions have been proposed, including CORBA and COM, and a paper for a C++ ABI is in the works to be submitted to the C++ committee.

The complexity of the specification means that no two compilers implement the language in a compatible manner. Portable code is littered with "#if _MSC_VER > 1700" and similar line noise.

C++ may have been an improvement over C when it was created (debatable), but right now the main reason for its popularity is inertia.


Compilers are very good at following the standard now; this is not usually a problem anymore unless you're using an old version or some really new C++11/14 features.

Seems like you got burned by old C++ a while ago. I suggest checking it out again with a good compiler :)
Just to note something:
* a lot of big console games aren't written in C++.
* most console game engines are written in C++.

i.e. When doing "systems programming", C and C++ are actually productive choices.
When doing gameplay programming, Lua/C#/etc are more productive choices.

You can do systems programming in C#, but the code will be uglier than the equivalent C++ code.
You can do gameplay programming in C++, but the code will be uglier than the equivalent C# code.

The last 4 console games I've worked on have used C++ for the engine and complex/high-performance gameplay systems, and LuaJIT for the rest.
The Lua GC did cause a lot of problems with per-frame performance and memory fragmentation though... So despite being 'easier', it became a huge challenge. We also miss C++'s RAII when moving on to other languages :(

There is nothing worse than starting out on a Java or .NET project only to realize that there is no up-to-date binding available for a technology I plan to use (which is extremely likely to provide a C API). That sinking feeling I get when I need to start messing about with JNI or .NET DLLImport code. What a waste of time when I could just be working on the game instead.

Whereas for C++ I can use the C libraries directly since C++ by design is an extension of C (so long as I am careful with smart pointers and deleter functions I can often get away without any abstraction layers at all).

I think this very reason is why most commercial software written today is still in C++. A lot of C# developers on these forums seem to forget that C# is only easy and safe because Unity has already done the actual hard work of writing the engine, binding the libraries and porting the .NET runtime to the many platforms (same with OpenTK and the XNA-likes).

As for the language itself I am not too bothered. C++, Java and C# are similar enough so long as you do not use the old crusty parts. I do however find Microsoft's C++/CLI with /clr:safe very interesting: basically a merge of C# and C++, so you get all the RAII and memory-management goodness of C++ but also the safety of C#. Looking at the generated IL, Microsoft really could add RAII to C#, and if it could also compile to native machine code (so it didn't need such a complex-to-port runtime VM), it would likely dominate the world.

http://tinyurl.com/shewonyay - Thanks so much for those who voted on my GF's Competition Cosplay Entry for Cosplayzine. She won! I owe you all beers :)

Mutiny - Open-source C++ Unity re-implementation.
Defile of Eden 2 - FreeBSD and OpenBSD binaries of our latest game.

You're actually all missing the key point of C++: it was mainly created to deal with larger and larger code bases. That's why classes were created, for reusable code, and it evolved into a standardised language from there.

The problem is, C++ is a terrible language for dealing with large codebases.

C++ is still the best tool for dealing with large projects if you need proper memory management for performance reasons. The only actual alternative is C, which provides fewer features and is less friendly with large projects (note that less friendly doesn't mean you can't use C instead of C++ to obtain the same result; the same applies to assembly).

Its insane compilation model leads to terrible build times, and completely breaks down in large codebases. A Windows build of Qt5 takes >2 hours on a quad-core system with an SSD. A comparable codebase in C# would take less than 10 minutes (my system manages ~100 kloc of C# per second, which would build the whole 10 Mloc of Qt5 in 1.5 minutes).

The insane compilation model comes basically from C, since C compatibility is a feature of the language. Modern IDEs with modern compilers, librarians and linkers mitigate the compilation issue; still, some drastic changes would be appreciated, and hopefully the introduction of modules in C++17 will cut compilation times. Note also that a portion of the compilation time goes to code analysis, debug information and low-level micro-code optimizations (most of which are not applicable to languages such as Java and C#; then again, Microsoft is working on .NET Native, so maybe they will learn something about speeding up compilation). Speaking of a real-world large piece of software: today I got the UE 4.3 sources, and compiling every project in the solution (engine and tools) from scratch (i.e. no PCH) takes something like 45 minutes with VS2013 on a machine over 4 years old (an i5 750).

Its lack of garbage collection means that long-running applications will suffer from memory fragmentation issues unless special measures are taken.

The lack of a garbage collector is not an issue in a native programming language where manual memory management is a centrepiece of the language. If you build a system in C++ where memory fragmentation is an issue, you are doing it wrong (guess: you are directly using the new and delete operators everywhere without defining a custom allocator, or at least a proper heap container such as a pool).

The lack of an ABI means that it's impossible to create reusable components in a truly portable manner.

C++11 provides a definition of what a GC can do if one is used, and an API to help control its actions. C++ is a native programming language, and imposing a fully strict ABI would be infinitely stupid: it would destroy the freedom to implement the language in the best way for every single architecture, and it would destroy a good part of the possible optimizations offered by the language. The lack of a solid ABI and GC doesn't prevent you from making a project where C++ is used only where needed (games are a practical example of different pieces of code written in different languages coexisting with C/C++).

The complexity of the specification means that no two compilers implement the language in a compatible manner. Portable code is littered with "#if _MSC_VER > 1700" and similar line noise.

The differing C++11 conformance status across compilers is related to the long path that C++11 took to be developed; a long path due to the stupidity of the ISO committee, which waited 13 years (!) to define a new standard.

Portable code is about what the code does when it runs. Compiler-version macros such as "#if _MSC_VER > 1700" are intended to be used to make code portable (YES!) across different compilers (especially older versions) with different levels of language support. In other languages such as C# and Java, you cannot target different compilers and language versions in a single source file unless you restrict yourself to the lowest common denominator of supported features and libraries.

C++ may have been an improvement over C when it was created (debatable), but right now the main reason for its popularity is inertia.

Inertia? Maybe, but there is still no valid alternative... And no, C#, Java, Go and D are not alternatives to C and C++: managed programming languages are not alternatives to native programming languages; they are different tools for different purposes. The problem is not the lack of an ABI or a GC; the problem is why you would use a programming language without a GC and an ABI if you need them in your work.

This funny but truthful image shows the crux of the matter: http://global3.memecdn.com/If-programming-languages-were-tools_o_32267.jpg

"Recursion is the first step towards madness." - "Skeggǫld, Skálmǫld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/
As Mr Stroustrup himself said,

There are only two kinds of languages: the ones people complain about and the ones nobody uses


C++ is a language that gives you control of how things work. This means you can do a lot of things, but you also need to handle all the steps, which includes managing your own memory (to name an easy one).

Being lower level than most languages makes it perfect for lower-level work. That's why lots of games are done in C++ while they have a scripting layer. That makes a lot of sense: give ourselves full control over the performance-sensitive parts while using a cleaner language for the content and higher-level portions.

Garbage collection, one of the most praised features of some languages, is perfectly possible in C++. Not only that, you have plenty of options to choose from, from Boost to Boehm, though I have personally only seen them in use a couple of times. I am personally proud of my own memory pool implementation.

It is almost like using a screwdriver on screws and a hammer on nails. There'll be some times where you'll do things better in C++ than you could with Lua, and there'll be the opposite. You can try to attach the screwdriver to the hammer's handle, but what if you need a spoon?

Most people I know who have this completely negative view of C++ are the ones who got frustrated with it at some point and formed their opinion based on that initial experience. But well, I got frustrated with Java; I sincerely think it is an awful language that is just being kept alive by Android and inertia. Still, that's just my opinion too, and I don't have any trustworthy data to back it up.

Well, is C++ still being used due to inertia? In part, yes. Why? Because there's still no force strong enough to change that.

I can program well enough in C, C++, C# and Prolog. I have done some for-hire work with PHP and MySQL. I think I could risk my hand at Pascal, Ruby, Lua and maybe Octave. And I think I can choose well between these.

The golden brick road is: don't over-engineer, don't underestimate.
But if I ever discover how to consistently do that, I certainly wouldn't tell you; I'd be your per-hour consultant.

Inertia plays a big part, but the other reason is simply that C++ is currently the best, most widely available language we have at our disposal when you need to touch the hardware. At the lowest levels, doing the kinds of things that runtimes, drivers, and operating systems do, basically everyone speaks C or C++. There are other systems languages with pockets of support, but in consumer software and consumer electronics, C and C++ are it. It stands to reason, then, that if you're writing any kind of high-performance app that needs to get the most out of those runtimes, drivers, and operating systems, using the same language will give you the fastest, most authentic, most transparent path. Stepping back one space: to achieve high performance in code that isn't so much touching those underlying elements, you still need to employ the same kinds of low-level techniques that those elements use themselves. For example, you might write a memory manager, or implement certain parts of the code using intrinsics or inline assembly, perhaps making use of instruction sets like SSE or AVX that aren't available to the VM models of languages like Java or C#. Again, C and C++ are the natural choice.

I think ultimately it comes down to the fact that C and C++ are among very few languages, and by far the most popular, that give full trust and confidence to the programmer that they are doing precisely what you want. C and C++ will let you shoot yourself in the foot, sure, but when you pull a gun, point it at the ground and shout "Watch this!", C and C++ will almost never second-guess that you know what you're doing unless you ask them to; and even when they think better of it and object, it's not hard to convince them that you do, in fact, know what you're doing. This is a perilous but powerful relationship to have with a compiler. Other languages are such hypochondriacs that not only will they prevent you from shooting yourself in the foot, they will also stand in your way when you want to undertake all kinds of unorthodox but supremely mundane activities, regardless of how well thought out a plan you try to convince them you have.

That said, those other do-gooder languages are often fast enough and flexible enough for a great many things; used well, and combined with a little arcane knowledge of their supporting runtimes, they're even suitable for games. At some level underneath, there's a hidden sea of C and C++ doing all the things that C and C++ do to make stuff go fast, but those other languages are suited well enough for directing its current.

C++ is popular in games because it started doing a job that needed to be done before anyone else was ready to do it, and because nothing since has done that job so much better as to make a switch worthwhile. There are other interesting systems languages aimed at the consumer software space, D and Rust come to mind, but they don't now, and aren't likely to, have the necessary industrial support to turn the tide any time soon.

throw table_exception("(╯°□°)╯︵ ┻━┻");

The reason build times are insanely huge is because of the compilation and linking model. Pull everything in. Inline everything you possibly can. Optimize and precompute everything possible, perform every optimization possible, restructure everything from the biggest algorithms to the smallest pigeon-hole to be cache friendly, OOO-core friendly, branch predictor friendly, lookahead table friendly, and more.

Cache-friendly C++ code? That's a good one.

C++ may be more cache-friendly than Java, but that's it. If you've ever done any high-performance DSP programming, it's C and assembly all the way. C++ compilers generate code that is simply too inefficient to compete (and that's *after* you disable exceptions and RTTI and ditch the STL).

Keep in mind that these optimizations can be applied for pretty much any statically-typed language.

Compilers are very good at following the standard now; this is not usually a problem anymore unless you're using an old version or some really new C++11/14 features.

Seems like you got burned by old C++ a while ago. I suggest checking it out again with a good compiler :)

We have a large C++11 codebase (medical imaging) and I'm the one who keeps it running on Windows. Linux/G++ and Mac/clang++ are great; as soon as you move to any other compiler it's a frigging crapshoot. It's 2014 and a major compiler vendor can't properly implement a 4-year-old specification. Wow.

Fun tidbit: I attempted to use TI's C++98 compiler to develop a hard real-time application on 2x6-core C6472 DSPs. My word of advice: don't. Even if the compiler followed the spec (and it doesn't), the generated code is simply... bad (to put it mildly). The same vendor has a perfectly functional C compiler that produces excellent code for this platform; is their C++ compiler to blame, or is it that the C++ specification makes it very difficult to write a well-performing compiler?

[OpenTK: C# OpenGL 4.4, OpenGL ES 3.0 and OpenAL 1.1. Now with Linux/KMS support!]

