Why is C++ the most widely used language for professional game development?


On the subject of exceptions, I tend not to find them a problem for me personally. I acknowledge that they have a fairly high cost, that this cost tends to be inappropriate for high-performance code, and that the default state of affairs has many people paying this cost without realizing it or benefiting from it. (I also have relatively little sympathy for those whose lack of understanding of their tools leads to this situation.)


The runtime cost isn't their biggest problem. Their real problems are that it's effectively impossible to write truly exception-safe code in any non-trivial app - even with all the RAII smart pointers and such - and that even _mostly_ exception-safe code is extremely difficult to write, even for true experts.

A great many algorithms and data structures simply cannot recover to a valid state if basic operations like moving a value fail - which, with exceptions, they might. Many others can be put back into a valid state, but only if you're extremely careful to recognize every expression that might throw and every way the compiler can legally make safe-looking code unsafe (see the safety of make_unique vs. unique_ptr(new T), for one easy example), and then add a ton of otherwise-unnecessary recovery code.
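
For instance, here's a minimal sketch of that make_unique pitfall (the types and functions are made up for illustration):

	#include <memory>
	#include <stdexcept>

	struct Widget { };
	int MayThrow() { throw std::runtime_error("oops"); }
	void Consume(std::unique_ptr<Widget>, int) { }

	void Dangerous()
	{
		// Pre-C++17, the compiler may evaluate the arguments as:
		//   1. new Widget   2. MayThrow()   3. unique_ptr constructor
		// If MayThrow() throws at step 2, the Widget from step 1 leaks.
		Consume(std::unique_ptr<Widget>(new Widget), MayThrow());
	}

	void Safe()
	{
		// make_unique completes allocation and ownership transfer inside a
		// single argument expression, so there is no window in which to leak.
		Consume(std::make_unique<Widget>(), MayThrow());
	}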

Judicious use of noexcept, plus compile-time assertions (or SFINAE) that required operations are noexcept, can avoid the worst of these problems in C++11 - but then we're back to writing tons of extra code to guard against what is supposed to be an exceptional (and hence rare) event. That's a common problem with C++: it has great tools and features, but a lot of them are awkward to use or have the wrong defaults due to backwards-compatibility concerns (much of it going back as far as the esteemed C).
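
As a rough sketch of that C++11 technique (the function here is invented for illustration):

	#include <new>
	#include <type_traits>
	#include <utility>

	template <typename T>
	void RelocateInto(T* dst, T* src)
	{
		// Refuse to compile unless T's move constructor is noexcept; that
		// way the algorithm needs no recovery path for a half-moved state.
		static_assert(std::is_nothrow_move_constructible<T>::value,
		              "T must be nothrow move constructible");
		new (dst) T(std::move(*src));
		src->~T();
	}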

Exceptions are an experiment of the '90s, and the CS community has far better runtime error-handling tools at its disposal these days - almost all of which can be used easily in C++, should you choose to replace the STL with a better container/algorithm library.

Sean Middleditch – Game Systems Engineer – Join my team!

You don't seem to be aware of "placement new".

Thank you for the hint, but you are mistaken. I explicitly mentioned this option - I simply rejected it due to the technical debt it carries. Another nail in the coffin of C++ for this specific project, if you will.

Besides, a robust placement new implementation that can deal with 4 separate heaps (LL2, SL2, DDR normal / bucket multiheap) across 6 processors would take a good amount of time for close-to-zero benefit. (And woe to the person who would have to maintain such a monstrosity once the original developers left the project.)
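
For reference, even the single-heap version of the plumbing looks roughly like this (heap names as above; the allocator entry point is invented, and a real implementation would route to the right memory region instead of malloc):

	#include <cstddef>
	#include <cstdlib>
	#include <new>

	enum class Heap { LL2, SL2, DDR };

	// Invented entry point; malloc stands in for the real per-heap backing store.
	void* HeapAlloc(Heap /*heap*/, std::size_t size) { return std::malloc(size); }

	// Heap-tagged placement-new overload.
	void* operator new(std::size_t size, Heap heap) { return HeapAlloc(heap, size); }
	// Matching delete, called only if a constructor throws mid-new.
	void operator delete(void* p, Heap /*heap*/) noexcept { std::free(p); }

	// Usage: Widget* w = new (Heap::SL2) Widget(...);

Now multiply that by four heap types and six processors, and you get the idea.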

Embedded systems and DSPs have always been a different world. Outside of maybe GPU compute, any game console of the last 10 years looks reasonably close to a PC - even the handhelds.

PS3 / Cell is pretty much a 7-core DSP taped onto a weak PPC core. All previous consoles, bar the original Xbox, were pretty much a collection of DSPs that had to be programmed individually.

It isn't until the current crop of x86 consoles that you can treat them as general purpose computers.

As Hodgman said,


Game-developers generally avoid exceptions and RTTI (the compilers for game consoles often have these features disabled by default, with a command line option to turn them on!!), and often avoid the parts of the STL that deal with memory allocation, because embedded hardware requires much more care in that area.

[...]

On that topic, many game console OS's don't even provide malloc / new out of the box. It's often up to you to implement these routines yourself, using the raw OS functions for allocating physical memory ranges, allocating virtual address ranges, choosing page sizes, and binding the physical and virtual allocations together.
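
To give a feel for it, here's a toy sketch of that bring-up (the OS entry points are invented stand-ins for whatever the console SDK actually provides):

	#include <cstddef>
	#include <new>

	// Invented stand-ins for the console OS's raw memory calls.
	extern "C" void* os_alloc_physical(std::size_t bytes, std::size_t page_size);
	extern "C" void* os_reserve_virtual(std::size_t bytes);
	extern "C" void  os_map(void* virt, void* phys, std::size_t bytes);

	static unsigned char* g_cursor;
	static unsigned char* g_end;

	void HeapInit(std::size_t bytes)
	{
		const std::size_t kPageSize = 64 * 1024; // choosing the page size is up to you
		void* phys = os_alloc_physical(bytes, kPageSize);
		void* virt = os_reserve_virtual(bytes);
		os_map(virt, phys, bytes);
		g_cursor = static_cast<unsigned char*>(virt);
		g_end = g_cursor + bytes;
	}

	// A trivial bump allocator standing in for a real malloc/new.
	void* operator new(std::size_t size)
	{
		unsigned char* p = g_cursor;
		g_cursor += (size + 15) & ~std::size_t(15); // keep 16-byte alignment
		if (g_cursor > g_end) throw std::bad_alloc();
		return p;
	}
	void operator delete(void*) noexcept { /* a bump allocator never frees */ }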

[...]

If something is so performance-critical that you'd resort to lovingly hand-crafting ASM, usually just using intrinsics from C/C++ is enough (and perhaps even portable across CPUs).
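
e.g. a small SSE sketch of the intrinsics route (the function is made up):

	#include <xmmintrin.h> // SSE intrinsics, usable from both C and C++

	// Adds two float arrays four lanes at a time. Assumes 'count' is a
	// multiple of 4 and all pointers are 16-byte aligned.
	void AddArrays(float* dst, const float* a, const float* b, int count)
	{
		for (int i = 0; i < count; i += 4)
		{
			__m128 va = _mm_load_ps(a + i);
			__m128 vb = _mm_load_ps(b + i);
			_mm_store_ps(dst + i, _mm_add_ps(va, vb));
		}
	}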

Hey, game consoles sound surprisingly like embedded programming 101! :)

In any case, C++ is fine provided you have a good, modern compiler and a powerful enough CPU to hide the costs of its higher-level features (exceptions, RTTI, virtual inheritance, the STL, etc.) The great thing is that you *can* choose to disable these features to gain performance, at the expense of convenience. It's pretty much the only general purpose language that gives you this amount of flexibility.

On the other hand, it has well-documented pitfalls that make it less appealing than it could be. Some of these are inherent in the language design and cannot be fixed. Some will probably *be* fixed by C++17, and supported by compilers by 2020 or so... In the meantime, Microsoft will have released its own systems programming language which might finally bring some long-awaited competition in this area.

[OpenTK: C# OpenGL 4.4, OpenGL ES 3.0 and OpenAL 1.1. Now with Linux/KMS support!]

PS3 / Cell is pretty much a 7-core DSP taped onto a weak PPC core. All previous consoles, bar the original Xbox, were pretty much a collection of DSPs that had to be programmed individually.
It isn't until the current crop of x86 consoles that you can treat them as general purpose computers.

The original Xbox used a Pentium III, DDR RAM, a normal HDD -- all regular consumer parts. [edit] Oops, I read "bar the Xbox" as "even the Xbox"... :/ [/edit]

I guess it depends on how you define "general purpose computer", but the PS3/360 were pretty much regular PCs too, just with PPC CPUs instead of x86 (plus, on the PS3, the Cell's SPE co-processors, which are similar to the consumer-available Xeon Phi co-processor -- also, you can compile regular, ugly, way-too-virtual C++ code for the Cell and it will just work(tm), inefficiently!). Nintendos aren't that different, using general-purpose PPC or ARM processors. Until Apple recently switched to x86 for their Macs, PPC CPUs were a viable choice for a desktop CPU as well ;)

For the most part, programming games for them is the same as programming games for PCs -- except on consoles there'll be a few small parts of your engine that do some low-level hardware communication, which on PC is done by your device drivers. On the Microsoft consoles, they're even nice enough to give you analogues of many PC APIs that you're used to, so you can port a lot of Windows code without any changes at all. If you're not working on the guts of the engine, it's basically the same as working on a PC game.

For the most part, the biggest change is just that most of the console OS's don't implement virtual memory paging to disk (even though they still use regular virtual address spaces), so you've got to be very careful not to run out of RAM.

In any case, C++ is fine provided you have a good, modern compiler and a powerful enough CPU to hide the costs of its higher-level features (exceptions, RTTI, virtual inheritance, the STL, etc.) The great thing is that you *can* choose to disable these features to gain performance, at the expense of convenience. It's pretty much the only general purpose language that gives you this amount of flexibility.

And that's why it's popular for game engines 8) It bridges the divide all the way from C's style of systems/hardware programming almost all the way up to 'modern' languages like Java/C#. Plus, it's fully compatible with C -- practically every other language is also "compatible with C" via some foreign-function interface, but in C++ you can actually write C code inside C++ files, link C and C++ files together, etc... If you ever need to, it's simple to mix some C99 code into your project :D

You can write stuff that looks like driver code, and you can also write lambda-based game event systems... You can then also control the way those high level systems work under the hood, optimizing them to be cache friendly or to use SIMD instructions, or multiple cores, etc...
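
To make that concrete, here's a minimal sketch of a lambda-based event channel (all the names are made up):

	#include <functional>
	#include <utility>
	#include <vector>

	struct DamageEvent { int entity; int amount; };

	class DamageChannel
	{
	public:
		void Subscribe(std::function<void(const DamageEvent&)> fn)
		{
			listeners.push_back(std::move(fn));
		}
		void Publish(const DamageEvent& e)
		{
			for (auto& fn : listeners) fn(e);
		}
	private:
		std::vector<std::function<void(const DamageEvent&)>> listeners;
	};

	// Usage:
	//   channel.Subscribe([&](const DamageEvent& e) { hp[e.entity] -= e.amount; });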

However, the "higher level" features have zero cost when compared to equivalent C code.
e.g. comparing something like polymorphism in C++:


	#include <memory>

	int g_state = 0;

	class IFoo
	{
	public:
		virtual void Stuff(int i) = 0;
		virtual ~IFoo() {}
	};
	class Foo : public IFoo
	{
	public:
		Foo() : m(1337) {}
		~Foo() { g_state *= m; }
		int m;
		void Stuff(int i) { g_state += i*m; }
	};

	void Test()
	{
		std::unique_ptr<IFoo> base( new Foo );
		base->Stuff(42);
	}

The equivalent C code is:


	#include <stdlib.h>

	int g_state = 0;

	typedef void (FnStuff)( void*, int );
	typedef void (FnShutdown)( void* );
	typedef struct IFoo_vtable
	{
		FnStuff* pfnStuff;
		FnShutdown* pfnShutdown;
	} IFoo_vtable;
	typedef struct IFoo
	{
		IFoo_vtable* vtable;
	} IFoo;
	void IFoo_Stuff(IFoo* self, int i) { (*self->vtable->pfnStuff)(self, i); }
	void IFoo_Shutdown(IFoo* self) { (*self->vtable->pfnShutdown)(self); }

	typedef struct Foo
	{
		IFoo parent;
		int m;
	} Foo;
	void Foo_Stuff(Foo* self, int i) { g_state += i*self->m; }
	void Foo_Shutdown(Foo* self) { g_state *= self->m; }
	IFoo_vtable Foo_vtable = { (FnStuff*)&Foo_Stuff, (FnShutdown*)&Foo_Shutdown };
	void Foo_Init(Foo* self) { self->parent.vtable = &Foo_vtable; self->m = 1337; }

	void Test()
	{
		Foo* derived = (Foo*)malloc(sizeof(Foo));
		Foo_Init(derived);
		IFoo* base = (IFoo*)derived;
		IFoo_Stuff(base, 42);
		if( base )
		{
			IFoo_Shutdown(base);
			free(base);
		}
	}

The kinds of people who are doing systems programming in C++ should be able to do this translation in their head at all times, so they're aware of what they're actually asking the compiler to do for them. Both of the above snippets run at the same speed on my PC 8) Same goes for vectors/lists/etc - most of the time there's a performance issue, people are just writing silly C++ code, where the equivalent C code would look absolutely horrible and make their mistakes obvious.

If, for whatever reason, you needed to write code using polymorphism, the C code makes the costs involved obvious (and makes a few micro-optimisation opportunities more obvious), but the C++ code is much easier to read, write and maintain - and again, there's no performance difference between them.

As well as "shortcut" features such as the above, there are plenty of completely free features that are simply designed to support decent software engineering practices - such as enforcing invariants, detecting errors at compile time, etc. At my last job we used a template for pointers which acted like (and had the same cost as) a raw pointer, but in development builds it would alert us to cases where it had been used while uninitialized, or where it had been leaked / not cleaned up. That kind of template has absolutely zero cost in the shipping build, but simply enhances your engineering practices.
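
Roughly, the idea was something like this sketch (from memory - the names and build macro are invented, and the real one also tracked leaks):

	#include <cassert>

	// SHIPPING_BUILD is an invented configuration macro.
	template <typename T>
	class CheckedPtr
	{
	public:
	#if defined(SHIPPING_BUILD)
		// Shipping: exactly a raw pointer - no extra state, no checks.
		CheckedPtr(T* p = nullptr) : p_(p) {}
		T* operator->() const { return p_; }
		T& operator*() const  { return *p_; }
	private:
		T* p_;
	#else
		// Development: remember whether we were ever assigned.
		CheckedPtr() : p_(nullptr), set_(false) {}
		CheckedPtr(T* p) : p_(p), set_(true) {}
		T* operator->() const { assert(set_ && "uninitialized pointer used"); return p_; }
		T& operator*() const  { assert(set_ && "uninitialized pointer used"); return *p_; }
	private:
		T* p_;
		bool set_;
	#endif
	};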

Plus, it's fully compatible with C -- practically every other language is also "compatible with C" via some foreign-function interface, but in C++ you can actually write C code inside C++ files, link C and C++ files together, etc... If you ever need to, it's simple to mix some C99 code into your project :D

[...]

However, the "higher level" features have zero cost when compared to equivalent C code.

Even if the generated code were identical, there is still a non-obvious impact - which makes it all the more deadly.

Consider the following (real) scenario: there is a hardware vendor that ships a device with closed-source binary drivers written in C++ and compiled with VS2010. The codebase for the parent project requires C++11 / VS2013. How do you make this work when you don't have access to the original source? (Answer: you write an IPC shim to isolate the driver process from your application.)

Were the drivers written in C, chances are this shim would have been unnecessary in the first place.

You might think this is an academic concern. In that case, I implore you to visit the download page of any popular C++ library and look at its binaries: they have to ship separate versions for each major compiler release. Gigabytes of waste, all because of a supposedly "efficient" language! :)

There are workarounds, which boil down to either rebuilding your whole dependency tree using the same compiler (which can easily take hours) or pinning your compiler version (in which case, you'd probably still be using VS2008 or VS2010 without C++11 support.) Or you could be using a better language, which wouldn't suffer from this problem in the first place.

Same goes for vectors/lists/etc - most of the time there's a performance issue, people are just writing silly C++ code, where the equivalent C code would look absolutely horrible and make their mistakes obvious.

Which is actually one of the issues of C++: every problem can be solved in multiple ways, and the *obvious* way is often the *wrong* way. It takes years of experience to learn the right way, and you must be a very lucky person indeed if your team consists solely of people who can tell good from bad C++ code. (In which case, I'd love to work with your team. Drop me a line :P )

At my last job we used a template for pointers which acted like (and had the same cost as) a raw pointer, but in development builds it would alert us to cases where it had been used while uninitialized, or where it had been leaked / not cleaned up. That kind of template has absolutely zero cost in the shipping build, but simply enhances your engineering practices.

A reasonable person might counter-argue that (a) uninitialized pointers shouldn't even compile, and (b) if they did, your toolchain should at least be able to inform you of the error when you compile with the, I don't know, "CC --inform-me-of-memory-leaks" option.

Of course, nothing can ever be that reasonable in C++, so the solution is:

(a) to force every project to re-implement "template<typename T> class my::Ptr<T>" from scratch, because that's a good programming exercise;

(b) destroy their build-times in the process, by recompiling every single instance of Ptr<T> and discarding the compiled code during link time;

(c) destroy any hope of interoperability because every project now has a different, incompatible "template<typename T> class your::Ptr<T>" implementation.

Bonus points if your project redefines primitive types and has its own string class, too.

Everything wrong with C++, condensed into a single concrete example. But yeah, "zero cost in the shipping build" indeed. Cheers!

Edit: whitespace always comes out wrong in Firefox, this is weird.

[OpenTK: C# OpenGL 4.4, OpenGL ES 3.0 and OpenAL 1.1. Now with Linux/KMS support!]

Consider the following (real) scenario: there is a hardware vendor that ships a device with closed-source binary drivers written in C++ and compiled with VS2010. The codebase for the parent project requires C++11 / VS2013. How do you make this work when you don't have access to the original source? (Answer: you write an IPC shim to isolate the driver process from your application.)


A closed-source driver with a C++ interface is madness. Regardless of what the driver is written in, it needs a pure C interface. Offering an optional C++ interface on top of that might be fine, but I would flatly say a C++ interface to anything you cannot recompile yourself is completely useless, whether we are talking about a library or a driver.

Were the drivers written in C, chances are this shim would have been unnecessary in the first place.

What the drivers are written in is irrelevant. As long as they supply a C interface, they can write the driver in Brainfuck for all I care.
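
That C interface is just the usual opaque-handle pattern - sketched here with invented names:

	/* driver_api.h - the only thing the vendor needs to ship. Plain C,
	   so the ABI is stable across compiler versions and C++ runtimes. */
	#ifdef __cplusplus
	extern "C" {
	#endif

	typedef struct DriverHandle DriverHandle; /* opaque to the client */

	DriverHandle* driver_open(const char* device);
	int           driver_read(DriverHandle* h, void* buf, unsigned len);
	void          driver_close(DriverHandle* h);

	#ifdef __cplusplus
	}
	#endif

Internally, driver_open is free to new up whatever C++ object it likes and hand it back as the opaque handle; the client never sees a C++ type, so compiler and runtime mismatches never come up.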

You might think this is an academic concern. In that case, I implore you to visit the download page of any popular C++ library and look at its binaries: they have to ship separate versions for each major compiler release. Gigabytes of waste, all because of a supposedly "efficient" language! :)

I don't see the problem. Generating the builds is pretty much a no-brainer, since it can be automated quite easily. Apart from that, I see no way around it. Even with C you have to deal with library boundaries (usually that means opaque pointers and pairs of alloc/free functions). C++ allows you to ignore all that, provided you can make guarantees about the runtimes and compilers involved. Usually you don't switch compilers or add libraries very frequently. The cost of downloading the right build, or building a library yourself, is vanishingly small compared to dealing with each library's resource management by hand all the time. I'd rather take C++ and proper RAII (or similar concepts, like Qt's QObject ownership semantics) over doing all of that manually every time.

I don't see a way around that either. We could move everything into some kind of virtual machine (like C# or Java, for example), but that comes with its own problems, and since you are so concerned with every last bit of performance C++ might cost you, it does not seem to be a viable solution.
We could also consider attaching a cleanup function to every non-trivial piece of data we hand across library boundaries. Of course, that would cost at least one extra function pointer per object, and we would have to do something like a virtual function call whenever such a piece of data is destroyed.

There are workarounds, which boil down to either rebuilding your whole dependency tree using the same compiler (which can easily take hours) or pinning your compiler version (in which case, you'd probably still be using VS2008 or VS2010 without C++11 support.) Or you could be using a better language, which wouldn't suffer from this problem in the first place.

As I said, picking C won't really help, because it simply shifts a lot of work that could be done automatically onto the developer. It does not really solve the problem either, because C has the same library-boundary problems C++ suffers from.

Same goes for vectors/lists/etc - most of the time there's a performance issue, people are just writing silly C++ code, where the equivalent C code would look absolutely horrible and make their mistakes obvious.


Which is actually one of the issues of C++: every problem can be solved in multiple ways, and the *obvious* way is often the *wrong* way. It takes years of experience to learn the right way, and you must be a very lucky person indeed if your team consists solely of people who can tell good from bad C++ code. (In which case, I'd love to work with your team. Drop me a line :P )

I don't see how being able to solve problems in multiple ways is a bad thing. There are always general solutions and highly specific solutions. For example, std::shared_ptr is a very general solution: it can manage any pointer you hand it (provided you are happy with the default deleter, or specify a correct one). If you just need to manage the lifetime of something, it does the job extremely simply and well. If everything everywhere is an std::shared_ptr, you are probably in for some problems (cycles, the cost of copy-constructing/destroying std::shared_ptr instances all the time), and you should instead have written something that fits your specific problem.
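
For example, with a custom deleter it will happily manage a C resource (the file name here is made up):

	#include <cstdio>
	#include <memory>

	void ReadHeader()
	{
		// The deleter runs on every exit path, exceptions included.
		std::shared_ptr<std::FILE> file(std::fopen("data.bin", "rb"),
		                                [](std::FILE* f) { if (f) std::fclose(f); });
		if (!file)
			return; // fopen failed; the deleter tolerates the null
		char header[16];
		std::fread(header, 1, sizeof header, file.get());
	}
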
As always, good and experienced developers are a limited resource. As always, junior programmers need to be mentored/supervised and/or work under good coding guidelines watched over by a senior programmer. Otherwise, junior programmers will be a threat to a project in any language.



At my last job we used a template for pointers, which acted like (and had the same cost as) a raw pointer, but during development builds it would alert of cases where it had been used when uninitialized, or where it had been leaked / not cleaned up. That kind of template has absolutely zero cost in the shipping build, but simply enhances your engineering practices.


A reasonable person might counterargue that (a) uninitialized pointers shouldn't even compile and (b) if they did, your runtime should at least be able to inform you of this error when you compile with the, I don't know, "CC --inform-me-of-memory-leaks" option.

How is that supposed to work? C (and by extension C++) allows you to do all kinds of weird stuff with a pointer - for example, converting it into a uintptr_t, storing it in some structure, and passing the structure to a (completely opaque) API (like the Win32 API, for example). The pointer could then be retrieved at any arbitrary point in time by a different API call and freed. How do you track that automatically at compile time?
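
A self-contained sketch of what I mean (the opaque API is invented; think of something like Win32's SetWindowLongPtr / GetWindowLongPtr user-data slot):

	#include <cstdint>

	// Invented "opaque" API; in real life this storage lives in foreign code.
	static std::uintptr_t g_slot;
	void api_store_user_data(std::uintptr_t v) { g_slot = v; }
	std::uintptr_t api_load_user_data() { return g_slot; }

	struct Resource { int payload; };

	void Stash()
	{
		// The pointer disappears into an integer held by opaque code. No
		// compile-time analysis of this translation unit can prove whether
		// it is ever freed.
		api_store_user_data(reinterpret_cast<std::uintptr_t>(new Resource{42}));
	}

	void RetrieveAndFree()
	{
		delete reinterpret_cast<Resource*>(api_load_user_data());
	}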

Of course, nothing can ever be that reasonable in C++, so the solution is:
(a) to force every project to re-implement "template<typename T> class my::Ptr<T>" from scratch, because that's a good programming exercise;
(b) destroy their build-times in the process, by recompiling every single instance of Ptr<T> and discarding the compiled code during link time;
(c) destroy any hope of interoperability because every project now has a different, incompatible "template<typename T> class your::Ptr<T>" implementation.

(a) If something turns out to be very useful, it can be found in the standard library, in popular libraries like Boost, or in the company's private libraries.
(b) I work in a quite big codebase, and while there is obviously a cost, I would not call it significant. That aside, a second or two of extra time per build would be acceptable if it helps prevent a bug that could take days of annoying debugging to find. Of course, my link times for non-invasive changes are smaller than that...
(c) I see no reason for something like that ever to be in the public interface of a project.

Bonus points if your project redefines primitive types and has its own string class, too.

Everything wrong with C++ condensed in a single concrete example. But yeah, "zero cost in the shipping build" indeed. Cheers!

I can see a lot of good reasons why you should not use C++ for everything (there is a reason, after all, that other languages still exist). You, however, seem to be obsessed with turning everything in the language into a lemon, biting into it, and then sucking it dry.
Personally, I enjoy working with C++ for my hobby projects (despite the fact that the majority of my work also deals with the language). That was not always the case. I have looked at quite a few other languages in the meantime - worked with a few quite a bit, and got at least some understanding of others. But after a lot of time, I returned to C++ for my hobby projects, and I actually enjoy it. With nearly 20 years of experience in the language, modern compilers and proper library support, it can be fun.
I'm not saying everyone should work in C++. I'm not saying every project has to be done in C++. But I also object to the way you are completely ignoring everything good about the language and turning it into something bad.

This sort of bullshit is way off-topic here; closed.

Please go start a thread in the General Programming forum if you want to argue this matter further.

This topic is closed to new replies.
