rbsupercool

Why is C++ the most widely used language for professional game development?

35 posts in this topic

Most professional game engines, like Unreal and CryEngine, use C++. Why is C++ the best language for game development? I ask because I want to be a professional game programmer in the future, and the answer will motivate me to keep learning C++. Please tell me: is it really a good idea to become an expert in C++ for the sake of getting a job as a game developer? I know Java and a bit of C#; should I learn C++ in more depth, and what engine should I use in the future? Currently I think it is a good idea to use Unity, as I'm a beginner. Thanks a lot, I really need your advice.


Most people using C++ do it because they have no choice. C++ is an old beast with tons of cumbersome issues; it's just that there are compilers for every single platform ever, so if you want to support every single platform ever (tm), then your only realistic choice is C++. Most engines are written in C++ but incorporate some form of scripting language, which is then used for the creation of the actual games.


Most modern, higher-level languages, like Java and C#, run inside a runtime (an engine-like program). So if there is no runtime for your particular platform, you can't deploy your game/program on that platform; you are dependent on others.

C and C++ are compiled into machine code, which can run without any additional program. Furthermore, as others have already stated, you have almost complete freedom to do anything in C and C++. You can really mess around with pointers and your memory, which often results in very fast code. So the benefit is speed, which is very important in engines and games.
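A rough sketch of the kind of direct memory access meant here (the Particle struct and its fields are made up for illustration, not from any particular engine):

```cpp
#include <cstddef>

// Hypothetical per-frame data; names are illustrative only.
struct Particle { float x, y, vx, vy; };

// Walk one contiguous buffer through a raw pointer: no runtime, no GC,
// no bounds checks unless we choose to write them ourselves.
void update(Particle* p, std::size_t count, float dt) {
    for (Particle* end = p + count; p != end; ++p) {
        p->x += p->vx * dt;
        p->y += p->vy * dt;
    }
}
```

Nothing stands between the loop and the memory, which is exactly the freedom (and the risk) being described.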


In a typical game, the performance-critical areas of the code include resource and memory management. "High-level" (managed or interpreted) programming languages can have a negative impact in exactly those areas.

Since in "AAA" games every millisecond usually matters, it is often a reasonable trade to sacrifice the high productivity of certain programming languages for the performance-critical portions of the project; if you need to squeeze out every possible millisecond, then native programming languages (like C and C++) are the tools you need.

 

C++ provides low-level programming and supports a good range of programming paradigms (compared to C).

These two main "features" made C++ the "best fit" for the performance-critical code of AAA games, and they continue to drive the development of solid, well-performing compilers, libraries, and frameworks (which are themselves additional strong incentives).

 

Modern C and C++ are also more portable than most people think, even (and especially) compared to languages sold on the "write once, run anywhere" myth.

Edited by Alessio1989

Historically, games have been very performance sensitive, and few languages really target that demographic.  Prior to C++, a lot of people used C.  I suspect they jumped to C++ by and large simply because... well... at least it's better than C.

 

It's still used today largely due to inertia.  There are competing languages like Rust and D, but none of them are stable enough that I'd use them for a multimillion dollar project like a game, and none of them have the library/tooling infrastructure in place that C++ does.


inertia

 

Frankly that's it.

 

It's the most popular because it was the most popular. There are some technical reasons, but this trumps them all. It's cause and effect, too: due to its popularity, tools exist, and because tools exist, it stays popular. It became the most popular because it was backward compatible with the previous most popular language.

 

Also, when Game Company X starts game Z after finishing Y in C++, which language are they going to use: the one their staff are proficient in and have a large existing code base for, or something completely new with arbitrary advantages?

 

If Java or C# had come out binary- and source-compatible with C++, just with modern syntax, type safety, better generics, etc., the industry would have switched ages ago. Well, if they were allowed to, that is. Both Java and C# owe their existence to a single master, and that is another advantage (and disadvantage) of C++. C++ is run by committee, meaning no corporate lock-in, but a glacial development cycle.

Edited by Serapth

You're actually all missing the key point of C++: it was mainly created to deal with larger and larger code bases. That's why classes were created, for reusable code, and the language evolved into a standardised one from there.

 

The compiler translates all your C/C++ into assembly language before the final code-generation stage anyway.

 

You should really download the GCC source code and have a look through it; you will see that it supports many CPUs.

Edited by rAm_y_

You're actually all missing the key point of C++: it was mainly created to deal with larger and larger code bases. That's why classes were created, for reusable code, and the language evolved into a standardised one from there.

The problem is, C++ is a terrible language for dealing with large codebases.

 

Its insane compilation model leads to terrible build times, and completely breaks down in large codebases. A Windows build of Qt5 takes >2 hours on a quad-core system with an SSD. A comparable codebase in C# would take less than 10 minutes (my system manages ~100 Kloc of C# per second, which would build the whole 10 Mloc of Qt5 in 1.5 minutes).

 

Its lack of garbage collection means that long-running applications will suffer from memory fragmentation issues unless special measures are taken.

 

The lack of a stable ABI means that it's impossible to create reusable components in a truly portable manner.

 

The complexity of the specification means that no two compilers implement the language in a compatible manner. Portable code is littered with "#if _MSC_VER > 1700" and similar line noise.

 

C++ may have been an improvement over C when it was created (debatable), but right now the main reason for its popularity is inertia.


There is nothing worse than starting out on a Java or .NET project only to realize that there is no up-to-date binding available for a technology I plan to use (which is extremely likely to provide a C API). That sinking feeling I get when I need to start messing about with JNI or .NET DLLImport code. What a waste of time when I could just be working on the game instead.

 

Whereas with C++ I can use C libraries directly, since C++ is by design an extension of C (so long as I am careful with smart pointers and deleter functions, I can often get away without any abstraction layers at all).
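A minimal sketch of that smart-pointer-plus-deleter pattern, using the standard C stdio API (FileHandle and open_file are names invented for this example):

```cpp
#include <cstdio>
#include <memory>

// Wrap a plain C FILE* in RAII: the unique_ptr's deleter calls fclose
// automatically when the handle goes out of scope.
using FileHandle = std::unique_ptr<std::FILE, int (*)(std::FILE*)>;

FileHandle open_file(const char* path, const char* mode) {
    return FileHandle(std::fopen(path, mode), &std::fclose);
}
```

No binding layer, no marshalling: the C library is consumed directly, with C++ supplying only the cleanup guarantee.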

 

I think this very reason is why most commercial software written today is still in C++. A lot of C# developers on these forums seem to forget that C# is only easy and safe because Unity has already done the actual hard work of writing the engine, binding the libraries, and porting the .NET runtime to the many platforms (same with OpenTK and the XNA-likes).

 

As for the language itself, I am not too bothered. C++, Java and C# are similar enough so long as you do not use the old crusty parts. I do however find Microsoft's C++/CLI (/clr:safe) very interesting: basically a merge of C# and C++, so you get all the RAII and memory-management goodness of C++ but also the safety of C#. Looking at the generated IL, Microsoft really could add RAII to C#, and if it could also compile to native machine code (so it didn't need such a complex-to-port runtime VM), it would likely dominate the world.

Edited by Karsten_

 

You're actually all missing the key point of C++: it was mainly created to deal with larger and larger code bases. That's why classes were created, for reusable code, and the language evolved into a standardised one from there.

The problem is, C++ is a terrible language for dealing with large codebases.

C++ is still the best tool for large projects if you need proper manual memory management for performance reasons. The only actual alternative is C, which provides fewer features and is less friendly for large projects (note that "less friendly" doesn't mean you can't use C instead of C++ to obtain the same result; the same applies to assembly).

 

Its insane compilation model leads to terrible build times, and completely breaks down in large codebases. A Windows build of Qt5 takes >2 hours on a quad-core system with an SSD. A comparable codebase in C# would take less than 10 minutes (my system manages ~100 Kloc of C# per second, which would build the whole 10 Mloc of Qt5 in 1.5 minutes).

The insane compilation model comes basically from C, since C compatibility is a feature of the language. Modern IDEs with modern compilers, librarians and linkers mitigate the compilation issue; however, yes, some drastic changes would be appreciated, and hopefully the introduction of modules with C++17 will cut compilation times. Note also that a portion of the compilation time goes into code analysis, debug information, and low-level micro-optimizations (most of which do not apply to languages such as Java and C#... oh, and Microsoft is working on .NET Native, so maybe they will learn something about speeding up compilation). Speaking of a real-world large piece of software: today I got the UE 4.3 sources, and it takes something like 45 minutes to compile from scratch (i.e. no PCH) all the projects in the solution (engine and tools) with VS2013 on a machine over 4 years old (i5 750).

 

Its lack of garbage collection means that long-running applications will suffer from memory fragmentation issues unless special measures are taken.

The lack of a garbage collector is not an issue in a native programming language where manual memory management is a centrepiece of the language. If you build a system in C++ where memory fragmentation is an issue, you are doing it wrong (my guess: you are using the new and delete operators directly everywhere, without defining a custom allocator or at least a proper heap container such as a pool).
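A minimal sketch of the kind of pool meant here (Pool and its interface are invented for illustration; it assumes sizeof(T) >= sizeof(void*), so a freed slot can store the free-list link):

```cpp
#include <cstddef>

// Fixed-size pool: freed slots go on an intrusive free list and are
// recycled first, so churning allocations never fragment the heap.
template <class T, std::size_t N>
class Pool {
    alignas(T) unsigned char storage_[N * sizeof(T)];
    void* free_list_ = nullptr;   // singly linked list threaded through freed slots
    std::size_t used_ = 0;        // high-water mark into storage_
public:
    void* allocate() {
        if (free_list_) {                       // reuse a freed slot first
            void* p = free_list_;
            free_list_ = *static_cast<void**>(p);
            return p;
        }
        if (used_ < N) return storage_ + (used_++) * sizeof(T);
        return nullptr;                         // pool exhausted
    }
    void deallocate(void* p) {                  // push the slot onto the free list
        *static_cast<void**>(p) = free_list_;
        free_list_ = p;
    }
};
```

Every allocation is O(1) and stays inside one contiguous block, which is the "special measure" the fragmentation complaint above is really about.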

 

The lack of an ABI means that its impossible to create reusable components in a truly portable manner.

C++11 provides a definition of what a GC may do if one is used, and an API to help control its actions. C++ is a native programming language, and imposing a full, strict ABI would be infinitely stupid: it would destroy the freedom to implement the language in the best way for every single architecture, and with it a good part of the optimizations the language makes possible. The lack of a stable ABI and of a GC doesn't prevent you from making a project where C++ is used only where needed (games are a practical example of pieces of code written in different languages coexisting with C/C++).

 

The complexity of the specification means that no two compilers implement the language in a compatible manner. Portable code is littered with "#if _MSC_VER > 1700" and similar line noise.

The differing C++11 conformance statuses across compilers are related to the lengthy path C++11 took to be developed, a long path due to the stupidity of the ISO committee, which waited 13 years (!) to define a new standard.

Portable code is about what the code does when it runs. Compiler version macros such as "#if _MSC_VER > 1700" are intended to make code portable (yes!) across different compilers (especially older versions) with different levels of language support. In languages such as C# and Java, you cannot target different compilers and language versions in a single source file unless you restrict yourself to the lowest common denominator of supported features and libraries.
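A sketch of that kind of version guard (MY_NOEXCEPT and answer are invented names; _MSC_VER 1700 corresponds to Visual Studio 2012, which did not yet implement the noexcept keyword):

```cpp
// One source file targeting compilers with different C++11 coverage.
#if defined(_MSC_VER) && _MSC_VER <= 1700
    // VS2012 and earlier: fall back to the C++03 exception specification.
    #define MY_NOEXCEPT throw()
#else
    #define MY_NOEXCEPT noexcept
#endif

int answer() MY_NOEXCEPT { return 42; }
```

The "line noise" cost is real, but it is also what lets one codebase span compiler generations at all.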

 

C++ may have been an improvement over C when it was created (debatable), but right now the main reason for its popularity is inertia.

Inertia? Maybe, but there is still no valid alternative... And no, C#, Java, Go and D are not alternatives to C and C++: managed programming languages are not alternatives to native programming languages; they are different tools for different purposes. The problem is not the lack of an ABI or of a GC; the problem is why you would use a programming language without a GC and an ABI if you need them for your work.

 

This funny but truthful image shows the crux of the matter: http://global3.memecdn.com/If-programming-languages-were-tools_o_32267.jpg

Edited by Alessio1989

The reason build times are insanely huge is because of the compilation and linking model. Pull everything in. Inline everything you possibly can. Optimize and precompute everything possible, perform every optimization possible, restructure everything from the biggest algorithms to the smallest pigeon-hole to be cache friendly, OOO-core friendly, branch predictor friendly, lookahead table friendly, and more.

 

Cache-friendly C++ code? That's a good one.

 

C++ may be more cache-friendly than Java, but that's it. If you've ever done any high-performance DSP programming, it's C and assembly all the way. C++ compilers generate code that is simply too inefficient to compete (and that's *after* you disable exceptions, RTTI and ditch the STL.)

 

 

Keep in mind that these optimizations can be applied for pretty much any statically-typed language.

 

Compilers are very good at following the standard now, this is not usually a problem anymore unless you're using an old version or some really new C++11/14 features.

Seems like you got burned by old C++ a while ago. I suggest checking it out again with a good compiler :)

 

We have a large C++11 codebase (medical imaging) and I'm the one who keeps it running on Windows. Linux/g++ and Mac/clang++ are great; as soon as you move to any other compiler it's a frigging crapshoot. It's 2014 and a major compiler vendor can't properly implement a 4-year-old specification. Wow.

 

Fun tidbit: I attempted to use TI's C++98 compiler to develop a hard real-time application on 2x6-core C6472 DSPs. My word of advice: don't. Even if the compiler followed the spec (and it doesn't), the generated code is simply... bad (to put it mildly). The same vendor has a perfectly functional C compiler that produces excellent code for this platform. Is their C++ compiler to blame, or does the C++ specification make it very difficult to write a well-performing compiler?



Game engines typically avoid "typical C++ bullshit" that you might find in academic code-bases.

 

Man... it always depresses me when you use that link, because to this day I still have no idea what's going on.

 


That's indeed something that C and C++ suck at compared to modern languages... but 2 hours? Really? I've worked on 1M+ LOC C++ projects and never suffered that. IME, a full build of commercial console-game size project is about 5-10 minutes compilation, and a few minutes in linking if LTCG is enabled, per config / per platform. At my last job, for each commit to the VCS, a poor-man's non-distributed build server would build, run and test 3 platforms x 2 configurations in ~20 minutes.

 

It took me probably longer than 2 hours to compile the Linux kernel. At work we use a distributed build system, but even then compiling the codebase takes about 10 minutes (and I suspect that would grow to ~2 hours on a single machine). We generally follow most common compile-time optimization practices, like forward-declaring instead of including in header files when you can.


 

Cache-friendly C++ code? That's a good one.
C++ may be more cache-friendly than Java, but that's it. If you've ever done any high-performance DSP programming, it's C and assembly all the way. C++ compilers generate code that is simply too inefficient to compete (and that's *after* you disable exceptions, RTTI and ditch the STL.)

Writing cache-friendly code just requires that struct does what you think it will do, that POD exists, and that you can use real hardware pointers (virtual address space) and allocations without a middleman adding extra indirection, plus optionally the ability to use intrinsics.
What does C let you do in this arena that C++ doesn't?

C gives you access to the "restrict" keyword which can result in a measurable performance improvement on DSPs; it lacks templates that bloat your generated code; ditto for implicit copy constructors and other "helpful" compiler-generated bloat; finally, C99 designated initializers are pretty useful in practice.
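For reference, the restrict benefit can be approximated in C++ through the non-standard __restrict extension, which GCC, Clang and MSVC all accept (scale is an invented example, not from the poster's DSP code):

```cpp
// Promising the compiler that out and in never alias lets it vectorize
// the loop and keep values in registers instead of reloading them.
void scale(float* __restrict out, const float* __restrict in, int n, float k) {
    for (int i = 0; i < n; ++i) out[i] = in[i] * k;
}
```

Standard C++ has no restrict keyword, so portable code has to rely on this kind of compiler extension, which supports the point that C99 gets it for free.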

 

You *can* emulate the efficiency of C in C++, provided you do not use any C++ feature. However, without templates, exceptions and the STL what's the point of using C++ in the first place? You simply get worse compilation times and lose C ABI compatibility in exchange for... namespaces? Pretty weak.

 

Edit: even worse, C++ new/delete cannot take advantage of multiple heaps unless you write your own allocator (good luck), and they tend to fail horribly when called during an interrupt. Once you lose the ability to call "new Foo", a whole range of C++ constructs becomes impossible. And since everything has to be a POD, you can just use C and be done with it.

 

(Yes, this is not your run-of-the-mill, out-of-order, branch-predicting x86_64 environment that will swallow all kinds of inefficiencies without complaint.)

 

 


Even if the compiler followed the spec and (it doesn't) the generated code is simply... bad (to put it mildly.) The same vendor has a perfectly functional C compiler that produces excellent code for this platform - is their C++ compiler to blame, or is it that the C++ specification makes it very difficult to write a well-performing compiler?

How different were the C and C++ implementations? Is the problem that C++ gives you a lot more tools than C, which tempts people into writing bad code? Or were you compiling the exact same code with a C and a C++ compiler, using the same settings?
e.g. the gross verbosity of manually implementing 'virtual' in C means that people really think twice (or 10 times) before using that construct, whereas in C++ it's only 7 letters away...

 

They were functionally identical, but the C++ used a couple of STL constructs whereas the C version used hand-rolled implementations. Result: the C++ code could no longer fit into the L2 alongside the SYS/BIOS6 kernel, resulting in a dramatic performance drop. No virtual inheritance, exceptions or anything fancy - just good old std::vector and std::list.

 

Yes, I could rewrite the C++ code to avoid the STL - but then there's simply no point in using C++ in the first place.
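As an aside, the "manually implementing 'virtual' in C" mentioned earlier amounts to an explicit function-pointer table per "class" (Shape and ShapeVtbl are illustrative names; this is C-style code that also compiles as C++):

```cpp
struct Shape;                                        // forward declaration
struct ShapeVtbl { float (*area)(const Shape*); };   // hand-rolled vtable
struct Shape { const ShapeVtbl* vtbl; float w, h; };

static float rect_area(const Shape* s) { return s->w * s->h; }
static const ShapeVtbl rect_vtbl = { &rect_area };

// The "virtual" call: one explicit indirection through the table.
float shape_area(const Shape* s) { return s->vtbl->area(s); }
```

All this boilerplate is what the C++ keyword hides, which is exactly why it gets used so much more casually in C++.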

 

 


Its insane compilation model leads to terrible build times, and completely breaks down in large codebases. A Windows build of Qt5 takes >2 hours on a quad-core system with an SSD. A comparable codebase in C# would take less than 10 minutes (my system manages ~100 Kloc of C# per second, which would build the whole 10 Mloc of Qt5 in 1.5 minutes).

That's indeed something that C and C++ suck at compared to modern languages... but 2 hours? Really? I've worked on 1M+ LOC C++ projects and never suffered that. IME, a full build of commercial console-game size project is about 5-10 minutes compilation, and a few minutes in linking if LTCG is enabled, per config / per platform. At my last job, for each commit to the VCS, a poor-man's non-distributed build server would build, run and test 3 platforms x 2 configurations in ~20 minutes.
10 minutes is still terrible in my opinion ;) but if you're on a 2h project, I'd subtly recommend this to the project lead.

 

Have you *ever* tried to compile Qt? If not, try it once; it's an interesting experience. :) Its copy of WebKit alone takes the better part of an hour to compile.

 

A full rebuild of our dependency tree takes slightly less than 4 hours on a i7 with 16GB ram and a SSD. (This includes Qt5, VTK, ITK, DCMTK, OpenCV, Coin3d and half a dozen other libraries. Times are slightly better on linux/gcc and quite a bit better on mac/clang.)

 

Btw, 1M LOC of C# compile in roughly 10 seconds on my system. 10 *seconds*. How does that sound for a productivity boost? ;)

Edited by Fiddler

You *can* emulate the efficiency of C in C++, provided you do not use any C++ feature. However, without templates, exceptions and the STL what's the point of using C++ in the first place? You simply get worse compilation times and lose C ABI compatibility in exchange for... namespaces? Pretty weak.

 

Most of the relative "slowness" of C++ compilation in comparison to C is because of templates.  I see no reason why a C++ compiler would be slower than a C compiler (which is already pretty slow anyway) when you're not using them.

 

For that matter, template bloat is mostly a solved problem. Unless you're working on an obscure compiler, you're going to get reasonable template expansion. However, a crap compiler is not a language problem (except as loosely correlated with language complexity).

 


Btw, 1M LOC of C# compile in roughly 10 seconds on my system. 10 *seconds*. How does that sound for a productivity boost? ;)

 

1M LOC of C# compiles in 10 seconds because it's not compiling. At least not to native code. It's compiling to MSIL and amortizing the rest of that work at runtime with a JIT. That's a lot faster because it's fundamentally a different process, and you're not comparing apples to apples. .NET Native is supposedly a thing now (or at least will be), so you should try that for a better comparison.

This topic is now closed to further replies.