ynm

What kind of optimization makes C++ faster than C#?

Recommended Posts

Hodgman    51237
In Windows 8 you can program in C++; however, it is an interesting dialect of C++. There are no naked pointers. Instead they have what is really just a shared_ptr, the ^ operator.

This means there are absolutely no memory leaks in their version of C++. You can do this without using Windows 8 if you wish. So you get the best part of C# in C++. It is rather genius, I might add. And there is no need for garbage collection; everything is deleted properly. There is no garbage collector, and everything runs very fast.

The "^" operator isn't equivalent to a shared_ptr (if it were, you would have a ton of leaks, because shared_ptrs can't cope with circular references and require the programmer to explicitly choose between them and weak_ptrs...); it's exactly equivalent to C# references!
The dialect you're talking about is called C++/CLI, and it is compiled to MSIL -- the same intermediate "bytecode" language that C# is compiled to, and it runs on the same VM and uses the same Garbage Collector that C# does!

Scratch that!

The OP (and many others) have simply assumed that C# is slower or more cumbersome. I heard the same thing in the '90s: that C++ was bloated and more cumbersome than C. I read the same arguments in the '80s: that C was painfully slow and could never replace the skilled assembly-writing artisan.


This is true, but what changed? Computers got faster, much faster, and the cost of the more expensive language features became affordable.

What changed is that C compilers got smarter and smarter, to the point where they can now write assembly as well as experts can (when given sensible inputs). Likewise, C++ compilers got smarter and smarter, to the point where they matched C compilers.
Following the same tradition, C# compilers are also getting smarter.
Take note that because C# is a lot more restrictive on the programmer, operating on a completely different kind of 'abstract machine' than C/C++, the compiler actually has a lot more information available to it when making optimisation decisions.
e.g. C++ code can look pretty innocent, but still cause a lot of confusion for the compiler. As a straw-man, take this function:

void World::FindPlayerWithMinHealth( int& outHealth, int& outIndex )
{
    outIndex = -1;
    outHealth = 100;
    for( int i = 0; i != m_players.size(); ++i )
    {
        if( m_players[i].health <= outHealth )
        {
            outIndex = i;
            outHealth = m_players[i].health;
        }
    }
}

And then I call it with:

world.FindPlayerWithMinHealth( world.m_players.m_size, world.m_players.m_size );

When the compiler is compiling the function, it has to do it in such a way that the stupid line above will behave as expected. So, looking at it again:

void World::FindPlayerWithMinHealth( int& outHealth, int& outIndex )
{//these two lines could change any bit of memory anywhere in the system!!
    outIndex = -1;
    outHealth = 100;
    for( int i = 0; i != m_players.size(); ++i )//m_players.size() has to be re-called every iteration; perhaps its value has changed.
    {
        if( m_players[i].health <= outHealth )//have to fetch m_players.begin every iteration; perhaps its value has changed
        {//these two lines could change any bit of memory anywhere in the system!!
            outIndex = i;
            outHealth = m_players[i].health;//have to fetch m_players[i].health again, even though it just appeared above.
        }
    }
}

RDragon1    1205
In Windows 8 you can program in C++; however, it is an interesting dialect of C++. There are no naked pointers. Instead they have what is really just a shared_ptr, the ^ operator.

This means there are absolutely no memory leaks in their version of C++. You can do this without using Windows 8 if you wish. So you get the best part of C# in C++. It is rather genius, I might add. And there is no need for garbage collection; everything is deleted properly. There is no garbage collector, and everything runs very fast.

The "^" operator isn't equivalent to a shared_ptr (if it were, you would have a ton of leaks, because shared_ptrs can't cope with circular references and require the programmer to explicitly choose between them and weak_ptrs...); it's exactly equivalent to C# references!
The dialect you're talking about is called C++/CLI, and it is compiled to MSIL -- the same intermediate "bytecode" language that C# is compiled to, and it runs on the same VM and uses the same Garbage Collector that C# does!

 

If he's talking about Windows 8, he's probably talking about C++/CX, which looks basically identical to C++/CLI except that it doesn't compile to .NET code. In C++/CX, the ^ hat symbol *is* used to denote something akin to a shared_ptr (it uses refcounting), and it suffers from the circular-reference problem (they introduced WeakReference to deal with this).

 

http://en.wikipedia.org/wiki/C%2B%2B/CX

frob    44920
What you say about speed is just not accurate; C# and Java are significantly slower when executing high workloads. There is a reason they develop their environments in C/C++. And there is a reason that hardware manufacturers write their drivers in C. These languages are faster, and are less wasteful in memory as well.

Let's not forget the context of a game engine, shall we?

I'd love to see some actual research showing that "c# and java are significantly slower when executing high work loads." Major companies like IBM and Oracle have neatly debunked that for Java. If you are looking for corporate database work you'll be doing it in Java. Microsoft did a fairly good job of debunking that for C# back in 2005, but it hasn't really sunk in yet.

You certainly can do many things in c++ that you cannot do in the other languages. Pointer manipulation and knowledge of the hardware can give you a performance advantage. This is much like the highly skilled assembly artisan of yesteryear.

However, let's consider how C won out over assembly. It certainly isn't faster to execute; at best it can match what the expert artisans produced. No, that isn't it. C took over because of its ability to develop software much more rapidly and in a way that didn't require expert artisans. The computers were faster and could spare the extra cycles that might be introduced by translating into machine code; the main cost of software was all the gruntwork, and that could be dramatically reduced with C. Those few places that needed to be hand-tuned by artisans could still be tuned, while the bulk of the work was done more rapidly and with fewer bugs (read: cheaper) than the assembly counterpart. It also helped that you only needed to port a very tiny portion of code to move to new hardware, but in practice that wasn't much of a selling point.

Over the years, new hardware features were exposed that were only available to assembly developers, but gradually those were made available to C programmers through libraries and later through compiler-specific features and options.

When C started to lose to C++, many of these same arguments appeared. It was bloated, it couldn't match what the expert artisans produced, it forced extra overhead with virtual tables, name resolution was a mess, and on and on and on. Yet the same factors caused its victory: the computers were faster and could spare the extra cycles that might be introduced in translation; the main cost of software was all the gruntwork, and that could be dramatically reduced with C++. Those few places that needed to be hand-tuned by artisans could still be tuned, while the bulk of the work was done more rapidly and with fewer bugs (read: cheaper) than the C-based counterpart.

Over the years, new hardware features were exposed that were only available through limited means, but gradually they were promoted to intrinsic operations and other compiler-specific features.

We are now seeing exactly the same thing with C#. Much of the business world saw this with Java, but games did not make that move. It is certainly true that an artisan in C++ or C can often craft slightly faster bits of code with those languages than with C#. However, computers are faster and can spare the extra cycles; the main cost of software is still all the gruntwork, and it can be done both faster and safer with C#.

The C++ artisan can do amazing things. Templates were often derided as a source of bloat, and then when template metaprogramming came around these artisans could produce code that compiled neatly, but at a cost: build times for some projects skyrocketed. The biggest pile of heavily optimized C++ I've ever worked on required nearly two full days on a build farm to compile. Sure, the code was nice and fast, but when the turnaround takes that long, the cost of development suffers.

Is there a place for these people in games? Certainly! There is a segment of the engine that needs heavily optimized code. But as we are discovering at my studio and others around the globe, that portion of the engine is extremely small and rapidly vanishing.

Something else is subtly different with C#. Java very nearly did this, but Sun couldn't quite manage it. Microsoft did: they constrained the language to work on a platform-agnostic virtual machine. They forced developers to live with some very tight constraints that many people don't like. But the constraints are more like the rules of art that enable more creative works to come from them: the constraints give us parallelizing compilers that can automatically split the work, balance it between processors, and do so just as well as the artisans of today can, with fewer bugs. The constraints enable code to re-write and re-optimize itself at runtime with no effort from the developer. We (and many other studios) have found that Mono on consoles is a great thing; we have ported to X360, PS3, Wii, WiiU, and even 3DS, amazingly enough, and managed to do it for less cost than we expected because the core of the game targeted the C# virtual machine.

Tools developers have taken notice. Middleware is now touted as being CLR-friendly. Many developers are actively pushing Sony and the other console makers for direct C# support. A rapidly increasing number of studios want access to hardware-specific functionality in C# compilers. And it is only a matter of time before that happens.

There are still features missing from the language, but that is true of all languages be it C++, Python, or Ruby. Sometimes you still need that tiny little boost a bit of assembly will give you. But those times are extremely rare.




Concluding this rather wordy post:

Talk with recruiters and headhunters, pay attention to the jobs and want ads, and you will notice that the only C++ programmers in demand are the artisans. Sure, there is still gruntwork to be done in the language, but it is much diminished. The gruntwork of game development is rapidly moving away from C++. Sure, the language has many great things, and I've made years of my living from being a language lawyer myself; but that time has passed. Our studio -- and many others -- paid the cost of porting to C#, and I can personally attest to the improvements it has made in what we can do. Not because the language generates magically different executables, but because the gruntwork can be done cheaper.

And when it comes to game development, cheaper will always win.

Rattenhirn    3114
this rather wordy post
 
Excellent post!
 
I'd like to point out a few things that might not be accurate:
When C started to lose to c++ many of these same arguments appeared. It was bloated, it couldn't match what the expert artisans produced, it forced extra overhead with virtual tables, name resolution was a mess, and on and on and on.

Ignoring the name resolution mess, which has not much to do with the performance debate, there's still a difference between the additional features that C++ adds over C compared to what C# adds to C++.

First, let's look at a few things that C++ brought to the table:
OOP (including runtime polymorphism via "virtual"), overloading, RTTI, exceptions, and templates.

All of these are concepts that were possible to implement in C; C++ integrated them into the language while retaining the "do not pay for what you do not use" approach. Everything is optional, and some things are even cost free: for example overloading, all of OOP except runtime polymorphism, and, for the most part, templates. C++ introduced a multi-paradigm approach to programming, which seems to confuse a lot of programmers but is, in my opinion, essential to picking the best tool for a certain job more often than not.

Now, let's see what C++ took away from C:
Nothing! Well, not strictly true, but nothing that really affects anyone.

One might argue that it should have removed more stupid stuff from C, but then again, remember the "do not pay for what you do not use" approach, which makes all leftovers, by definition, benign.

Moving on to the differences between C++ and C#/Java.
What did they add?
Fully automatic memory management through GC; a VM for standardization of data types and interpretation or JIT compilation of "byte code"; full reflection.

Since they are not really an evolution of C++ or C, the comparison is a bit trickier. All of the improvements come with their penalties, and they are not optional. Every dereference operation has to go through an extra indirection to find out where the memory actually is; the compiler can't really throw stuff away, because the program can still access everything that was in the source code through reflection, including the names; and the VM makes access to system resources that are not already supported tedious or even impossible.

What has it taken away?
Free functions, templates (except for what generics cover), manual memory management, multiple inheritance (except interfaces), and more.

No, I'm not saying these are all good or useful things to have, but I'd rather have the options and not be forced into a behaviour that suits the mindset of the creators of the language. Java was created when OOP was kinda new to the mainstream and "teh future", but now it's an old thing and there's a tendency to go toward data-driven programming, for instance.

Some things that it takes away are downright bad:
The RAII idiom is no longer possible, because there's no way to tell when exactly objects are destroyed (C# works around that somewhat with "using"); the GC is unpredictable and introduces nasty pauses when not managed very carefully; and there's no way to manage memory manually _at all_ (like, *gasp*, putting it on the stack).

The attempt to push the vast subject of programming into a tight set of rules is what upsets me most about these "new" programming languages. I'd hope that something will come along that adds new goodies but keeps the old ones.
 
Tools developers have taken notice. Middleware is now touted as being CLR-friendly. Many developers are actively pushing Sony and the other console makers for direct C# support. A rapidly increasing number of studios want access to hardware-specific functionality in C# compilers. And it is only a matter of time before that happens.

You know, popularity is not a good indicator of how good something is at something. Take a look at Cobol or, ironically, C or C++. ;)
 
 
Not because the language generates magically different executables, but because the gruntwork can be done cheaper.

Not sure what exactly you mean by gruntwork, but if it's something un- or lesser-skilled people can do, then it should be automated, regardless of the language used. Remember, work smart, not hard! ;) j/k

Tribad    981
C++ operates at a lower level of abstraction than C#. In C++, you could (if you wanted to) trivially fill an array with machine code, cast it to a function and run that code (it would no longer have any amount of portability at that point).

This last worked under Windows 95. Execution of data segments is not allowed in any OS available for the broad market.

It may work with OSes designed for the embedded market, but normally this no longer works.

 

I last did that for a Windows 3.11 application written in C, and at that time no one thought of C++.

What changed is that C compilers got smarter and smarter, to the point where they can now write assembly as well as experts can (when given sensible inputs). Likewise, C++ compilers got smarter and smarter, to the point where they matched C compilers.

I'd phrase it even more extremely:
What changed is that C compilers got smarter and smarter, to the point where they can now write assembly better than experts can (when given sensible inputs). Likewise, C++ compilers got smarter and smarter, to the point where they outperform C compilers.

Neither assembler programmers nor C programmers will like this, I'm sure. But the stunning truth is that that's just what has observably happened. Take for example this statement from the Nedtrie home page:

... there are competing implementations of bitwise tries. TommyDS contains one, also an in-place implementation, which appears to perform well. Note that the benchmarks he has there compare the C version of nedtries which is about 5-15% slower than the C++-through-the-C-macros version

This sounds like total bollocks, but it is what it is; go ahead and try for yourself.

For some obscure reason (stricter aliasing rules? RVO? move semantics? ...whatever?) a C++ wrapper around unmodified C code is sometimes not "just as fast" but faster than the original C code.

The same is true for C/C++ versus hand-written assembler code. I've been writing assembler since 1983, but I regularly find it hard to match (match, not outperform!) optimized C++ code nowadays. If anything, I use intrinsic functions now, but writing "real" assembler code is a total waste of time and an anti-optimization for all I can tell. The compiler does at least as well, and usually better. You may be able to work out an example where you gain a few cycles over the compiler, but on average you'll be totally annihilated.

Nypyren    12065
Execution of data segments is not allowed in any OS available for the broad market.

This is true, but it is not the entire story. In Windows OSes you can simply call VirtualProtect to allow execution of runtime generated code.


frob    44920
This is true, but it is not the entire story. In Windows OSes you can simply call VirtualProtect to allow execution of runtime generated code.

I have no idea what this has to do with the topic at hand: Why C++ is chosen over C# as a game development language.

The same is true for C/C++ versus hand-written assembler code. I've been writing assembler since 1983, but I regularly find it hard to match (match, not outperform!) optimized C++ code nowadays. If anything, I use intrinsic functions now, but writing "real" assembler code is a total waste of time and an anti-optimization for all I can tell. The compiler does at least as well, and usually better. You may be able to work out an example where you gain a few cycles over the compiler, but on average you'll be totally annihilated.

That is exactly it. When C++ compilers started taking over, it wasn't because they generated the fastest code. Now because of popularity and investment, they have been tuned to do it.

Java's had 15 years to mature, and is now the de-facto standard language for over half of all tablets and smartphones. That is in spite of it originally being derided as slow, bulky, and non-portable, the opposite of its promise.

C#'s had 10 years to mature, and it is hard to find a tools programming job without knowing the language.

It is not about what generates the fastest executable, but what executable gets generated the fastest. Sure, an expert assembly artisan was able to write a game like RollerCoaster Tycoon that runs on a 486, but it took four years to develop; clones appeared within a year that ran on modern hardware and played just as well. They didn't take four work-years, and they were profitable.

If you talk to recruiters and headhunters and tell them you know C++, they'll ask you what else you know. It isn't because C++ generates faster code at games, it is because studios have learned to be more productive with C#, Lua, Python, Ruby, and assorted other languages.

Productivity, not language performance, is the key feature.

EddieV223    1839
Productivity, not language performance, is the key feature.

No, this is not accurate. It depends on the application domain. For some projects performance is absolutely key; for others, not so much.

Also, productivity is what has improved with C++11 in a BIG way. I'll admit, though, that it's still not quite C#'s level of productivity, but it's much better.
I'd phrase it even more extremely:
What changed is that C compilers got smarter and smarter, to the point where they can now write assembly better than experts can (when given sensible inputs). Likewise, C++ compilers got smarter and smarter, to the point where they outperform C compilers.

Neither assembler programmers nor C programmers will like this, I'm sure. But the stunning truth is that that's just what has observably happened. Take for example this statement from the Nedtrie home page:
That is exactly it. When C++ compilers started taking over, it wasn't because they generated the fastest code. Now because of popularity and investment, they have been tuned to do it.
It is true that modern optimizing compilers do a very good job these days. However, the biggest gains in performance since C/C++ started are not from optimizing but from the evolution of computer hardware.

"Moore's law is the observation that over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The period often quoted as "18 months" is due to Intel executive David House, who predicted that period for a doubling in chip performance (being a combination of the effect of more transistors and their being faster)."

While this law may not be followed exactly, it does show that performance has been doubling nearly every two years; no level of optimizing can do that. So that is the biggest reason: C became affordable over assembly, and C++ over C, and so on, even though the modern features of the languages, when they were new, cost a performance overhead. The reason people would sometimes pay that price was productivity: speed of execution versus programming productivity.

Today's computers are so fast that most typical desktop applications can easily pay for languages such as Java/C# in exchange for that extra productivity. However, some cannot, or do not want to, because the project domain calls for absolute speed.

However, C++ is being upgraded now. C is still basically the same language as it was in the '80s. (There is a new standard, but it doesn't fundamentally change the language.)

This is what C++ now has over C and assembly, and why it competes with C#: modern features have been added and many more are on the way. C# has many modern features, and that is a top reason it is so productive to program with. So now that C++11 is back in the game, expect the competition to heat up even more in the following years, as C++ will receive frequent updates with lots of new modern features.


In Windows 8 you can program in C++; however, it is an interesting dialect of C++. There are no naked pointers. Instead they have what is really just a shared_ptr, the ^ operator.

This means there are absolutely no memory leaks in their version of C++. You can do this without using Windows 8 if you wish. So you get the best part of C# in C++. It is rather genius, I might add. And there is no need for garbage collection; everything is deleted properly. There is no garbage collector, and everything runs very fast.

The "^" operator isn't equivalent to a shared_ptr (if it were, you would have a ton of leaks, because shared_ptrs can't cope with circular references and require the programmer to explicitly choose between them and weak_ptrs...); it's exactly equivalent to C# references!
The dialect you're talking about is called C++/CLI, and it is compiled to MSIL -- the same intermediate "bytecode" language that C# is compiled to, and it runs on the same VM and uses the same Garbage Collector that C# does!
 
 


 
If he's talking about Windows 8, he's probably talking about C++/CX, which looks basically identical to C++/CLI except that it doesn't compile to .NET code. In C++/CX, the ^ hat symbol *is* used to denote something akin to a shared_ptr (it uses refcounting), and it suffers from the circular-reference problem (they introduced WeakReference to deal with this).
 
http://en.wikipedia.org/wiki/C++/CX
 
 
 


That's what I was talking about, thank you.

MichaBen    481

I see people talking about programmer productivity a lot, but I really wonder whether the difference is noticeable. The typical comparison people make to suggest this is building a simple GUI application in [language X] versus using raw Win32 in C++. Of course that increases productivity, but not because of [language X]; it is because of using a proper GUI library. Anything other than pure Win32 will increase the productivity of GUI development; a C++ GUI toolkit like Qt probably achieves the same productivity boost. The language is probably a minor factor here; most of the productivity is determined by the libraries and/or engines you are using.

 

Some aspects that do make a difference in productivity are better error detection at compile time and at runtime, which can decrease the time needed to debug nasty bugs. How big a difference this makes I can't really tell, but in my experience the majority of debugging time is spent on logic errors, which are very hard to detect by any automatic system and therefore will not be much different in another language. Type and memory safety would be nice to have; for high-performance applications it would be nice for it to be optional, so you can enable it only for debugging. That way you can still benefit from it to decrease debugging time. Garbage collector systems and shared pointers are not suitable for this, however, as those systems do not support being disabled.

 

But the main problem with things like garbage collectors and shared pointers is the fact that they don't solve problems, only symptoms. For example, you have an NPC referencing another NPC as a target, and this NPC is then deleted. With manual memory management it would crash because of a dangling pointer; with 'automatic' memory management it would not collect the object and therefore not delete the NPC, even though your intention was to delete it. So in the end, you removed a crash but kept the bug. A system with the possibility to assert that no references are left when you try to delete the object would really increase productivity here, as it tells you where the problem is, rather than crashing sometime later (C++) or obscuring the symptoms of the bug (C#). Also, a system like that can actually be disabled in final release mode, if there are worries about its performance. With a slightly customized version of the C++ unique pointer you can achieve this, significantly decreasing debugging time of pointer errors while keeping 100% performance, since you can fully disable it in release mode.

 

I think the main problem with C++ is programmers who refuse to use modern and safer features. In my office it's still a crime to use the word "exception", and you will be frowned upon for suggesting templates instead of macros. Once in a while I still see raw memory being allocated for a temporary array that is deleted again in the same function, while std::vector would do the exact same thing in a safer way. In that case moving to a different language does increase productivity, but mostly because people are then forced to drop their old habits from the C era.

Rattenhirn    3114
Productivity, not language performance, is the key feature.


Since this seems to be your primary argument now, I've spent some time looking into that.

Productivity is very difficult to quantify objectively, and in my experience the main productivity gain you get from C# is the excellent library that comes with it, especially for GUI programs. Hence its high usefulness for tools.

But I'd like to hear more about those productivity gains!

kop0113    2453
This is a hard one to argue because Microsoft and Sun (/ Oracle) have spent lots of money to discredit the "myths" that C# and Java are slower than native languages as part of their marketing techniques.

If you take a look at some academic journals on compiler design and language implementation, it will become very clear to you that there is no possible way that a JIT-compiled VM language (produced by your C# compiler) can be faster than native machine code (produced by most C++ compilers). After all, this isn't magic or voodoo. However much you would like to believe it, it just can never be true ;)

Since academic journals aren't very interesting for bedtime reading, I have some slightly more straightforward examples...

Unity (big pusher of C# for obvious reasons) admits that .NET is slower than C++.

http://docs.unity3d.com/Documentation/ScriptReference/index.Script_compilation_28Advanced29.html

It [.NET code] is around 20 times faster than traditional javascript and around 50% slower than native C++ code
And the following is a comparison between C# and C++ with some nice-looking graphs.

http://www.codeproject.com/Articles/212856/Head-to-head-benchmark-Csharp-vs-NET

This uses the default Microsoft C++ compiler, and although C++ is still the clear winner in these tests, it would absolutely thrash C# if specialized C++ compilers for the task, such as Intel's or Codeplay's, were used.

But to be honest, there is simply no convincing .NET developers... Hopefully the enlightenment this post gives them will be worth all the downvotes I am going to get ;)

frob    44920
This is a hard one to argue because Microsoft and Sun (/ Oracle) have spent lots of money to discredit the "myths" that C# and Java are slower than native languages as part of their marketing techniques.

If you take a look at some academic journals on compiler design and language implementation, it will become very clear to you that there is no possible way that a JIT-compiled VM language (produced by your C# compiler) can be faster than native machine code (produced by most C++ compilers). After all, this isn't magic or voodoo. However much you would like to believe it, it just can never be true ;)
And there, I feel, we must disagree.

It does not matter when it is compiled. Ultimately it does get compiled down to hardware-native code. The code is not run through an interpreter; it is compiled and run.


http://www.codeproject.com/Articles/212856/Head-to-head-benchmark-Csharp-vs-NET

This uses the default Microsoft C++ compiler, and although C++ is still the clear winner in these tests, it would absolutely thrash C# if specialized C++ compilers for the task, such as Intel's or Codeplay's, were used.

But to be honest, there is simply no convincing .NET developers... Hopefully the enlightenment this post gives them will be worth all the downvotes I am going to get ;)

Yes, he has lots of graphs and pictures. Did you actually read his conclusion?

Among them, he wrote: "I believe a performance-conscious C# programmer can write programs whose performance is comparable to similarly well-written C++ code."

That is hardly a damning accusation against performance.

LordJulian    151

Well, since the original topic went straight to hell and since everyone is throwing their hat in the ring, here I come as well.

For me (a developer of 6.5 years at a huge game company, having worked on a few AAA titles you might just have heard of (Assassin's Creed, anyone?) on all the major consoles out there), the deal is like this:

 

GAME DEVELOPERS (because this was the original context of the question) are choosing to build their engines in C/C++ because (choose any/some/all of the below):

 

- tradition: engines have been written in C/C++ since forever; whatever came before that couldn't really be called an engine

 

- 1st/3rd party libraries: there are literally millions of libraries and APIs written in C/C++ out there. Sure, most of them are junk, but you simply cannot build a complete engine without some of them. Also, you can access them from almost any other language, but why should you? Plus, any translation layer is likely to cost you.

 

- platform support: even though this is basically the previous reason, it deserves repetition: any platform owner (game consoles, mainly, for our purposes) will deliver an SDK that WILL target C/C++. That's it. If you want to use it from a different language, a wrapper is a must.

 

- the promise of gold at the end of the rainbow: raw memory access, the way you like it. Even though you don't need it at the beginning, when push comes to shove and the framerate just isn't high enough, you WILL want to fiddle with memory layout and all the other tricks in the book that yield better performance. Don't mistake this point, it is for ninja-tier programmers, but if you want it, it is there. I've witnessed some very nice, very low-level trickery done by a colleague of mine on a PS3 particle implementation that was already heavily optimized on the lead platform. The result was mind-blowing: we got the particles almost "for free" on PS3, while they were a major strain on the lead platform. To summarize: given a good C# programmer and an average C++ programmer, the C# programmer will probably produce faster code on most tasks; but given excellent programmers in both languages, my money is on the C++ one, every time, anytime. He just has more wiggle room and the promise of total memory-layout freedom.

 

- rock-solid compilers. C++ compilers usually carry 20+ years of effort spent on delivering very, very fast code. The things a C++ compiler does with your code are just amazing. The other compilers are catching up quickly, so this is fast becoming a non-reason, but still, C++ compilers (and linkers, specifically) are geared towards maximum-speed output, given the proper switches. Granted, with a certain compiler I was once able to write a VERY simple program that gave WRONG results in a certain configuration, but that was a solitary issue; they fixed it and we never spoke of it since.

 

Well, there are a few more reasons, but basically this is it. And now some advice for you: if you want to make a game and you're sure you can do it in C#, GO AHEAD. It is a lovely language with quite a few tricks up the compiler's sleeve. If you want to make an engine... do it, for fun, in C++. You will never finish it on your own in reasonable time with production-ready features, but it's a very nice exercise and you will gain a lot from it.

 

Have fun!

3Ddreamer    3826

Which is faster, C++ or C#, is largely a matter of circumstances.

 

To answer the question directly, C++ is faster, especially where heavy compilation pays off, such as the data-intensive, thread-hungry needs of juggling a game engine's many areas: 3D objects, shading, sound, the scene graph, and more. In other words, C++ allows the extremely crowded subway, so to speak, to be organized for streaming flow.

 

C# can in some cases actually be "faster" in situations where game source code competes with game engine source code for memory allocation and threading; in such cases the threading and memory cache behavior can be unpredictable, which may make the managed (automatic) memory of C# ideal for game scripting. It would be like an art fair and its crowd, so to speak: with C# the data flows automatically, without the stream-like micro-management that C++ allows.


What I write here is really an over-simplification, because there are ways to extend both C++ and C# toward the native strength of the other language through libraries. C++ has unmanaged memory as its native strength, while C# emphasizes managed memory. Specifically, C++ libraries can give you managed-style allocation, and C# libraries are increasingly allowing unmanaged-style control, too. Managed versus unmanaged memory affects stuttering more than speed in general.

 

When handling huge data demands, C++ wins the speed contest through ahead-of-time compilation combined with manual memory management, but the size of the advantage depends largely on the variety of data to be handled. By contrast, if you only had to handle one data stream, C++'s advantage over C# would be smallest; it is in the many-threaded situations that C++ pulls ahead.


Summary:

 

C# is faster when the demands in terms of memory allocation and threading are mostly or completely unpredictable, as is often the case with heavily scripted, dynamic game source code that benefits from managed (automatic) memory management.

 

C++ is faster when a wide variety of data streams must be organized under generally known demands, which enables memory and hence thread management, such as in game engine source code.

 

Over the next 5, 10, or 20 years, the gap between C++ and C# will continue to close, with C# and its libraries, being younger, evolving much as C++ did before them. In other words, in terms of languages and libraries, C++ is the mature adult and C# the very young adult growing to close the gap in capability.


Clinton

SimonForsman    7642
samoth wrote: "The same is true for C/C++ versus hand-written assembler code. I've been writing assembler since 1983, but I regularly find it hard to match (match, not outperform!) optimized C++ code nowadays. If anything, I use intrinsic functions now, but writing 'real' assembler code is a total waste of time and an anti-optimization for all I can tell. The compiler does at least as good, and usually better. You may be able to work out an example where you gain a few cycles over the compiler, but on the average you'll be totally annihilated."

It's not just compilers getting better; hardware has also become far more complex. The 286, for example, had no cache and a 1-stage pipeline: all instructions were executed in the order they were written, so you could take any section of your code, check the manual, and tell exactly how long it would take to execute. Optimizations could be done on paper without any problems. Today's CPUs are complex and scary monsters; the speed at which a given piece of code runs depends heavily on the context in which it runs, so optimizing it can require quite extensive analysis.

Our brains aren't improving at any noticeable rate, so unless we stop increasing CPU and software complexity, we will have to push more of the grunt work over to the machines that are improving.

That doesn't mean we have to abandon the ability to fiddle at a lower level when necessary, though. C++ offers a fairly good balance between high-level functionality and low-level access, and since it is reasonably easy to integrate Lua or Python as scripting languages in a C++ application, the lack of higher-level functionality becomes less of a problem. (I'd personally avoid using C++ when possible, but it is still a language worth learning.) Edited by SimonForsman

kop0113    2453

Over the next 5, 10, or 20 years, the gap between C++ and C# will continue to close,

 

This brings me onto my biggest issue with C#.

 

It won't be around in 5 years. It is not a continuation of C. It is a product from Microsoft and, like many of their products before it, it will be dropped once the next newest thing comes out.

 

They are already advising people to develop using C++/CX rather than .NET in many of their talks.

 

Anyone remember Visual Basic? It used to be as popular as C# is today. Since they dropped it and then emulated it in .NET, it is now a niche (teaching) language.

 

Anyone remember J#? Microsoft used it to compete with Java in the short term and then dropped it in about a week. Sure, C# is meant to compete with Java in the long term, but it will be dropped like a sack of potatoes just as quickly.

 

Anyone remember Microsoft Managed C++? That was the last time I wasted effort learning a non-standard C++ extension. Now I have code that is effectively useless without a serious amount of time and money spent porting it. What is worse, the only platform it runs on is now EOL.

 

So I advise developers to stop messing around with novelty languages, and use the standard C++ language so your customers can still play your games in a few years time once platforms have changed. Otherwise I find it a tad careless tbh...

Productivity, not language performance, is the key feature.

No, this is not accurate. It depends on the application domain. For some projects performance is absolutely key. For others, not so much.
Unluckily, this is exactly true. I don't like it any more than you do, but it is true. The biggest advantage of C++ over C (and C#/Java over C++) is that you can hire a mediocre programmer to do the same thing that a very expensive highly skilled programmer could do otherwise, and in 1/2 to 2/3 of the time.
Don't get me wrong, I'm not saying that C# programmers as such are inferior in any way. What I'm saying is that someone at considerably lower skill using C# can outperform someone at higher skill using C++ time- and cost-wise (replace C# with Java if you will). C# and Java come with huge standard libraries that are not only very complete, but also very easy to grok. Plus, automatic memory management.
That means that a programmer needs to have a lot less skill (and needs to use less time) to produce "something that works". Maybe not the best possible thing (this still requires someone with skill!), but something that works.

A lot of browser games are of embarrassingly poor quality, and consume embarrassing amounts of resources to deliver something ridiculous in comparison. Who cares?
It takes a moderately skilled team 3 weeks to puke out something that sells. On the other hand, it takes a highly skilled team 3 years to produce something really good that also sells, but only 3 years later after all competitors have already sold theirs. From a business perspective, which one is better?

Quality and performance do not matter as much as you think. As long as it sells, all is good. Did you ever wonder why every incarnation of [insert any software title] gets more bloated and slower without adding real value?

A WYSIWYG text processor / DTP program used to fit on a floppy disk and run on an 8 MHz processor with 512 kB of RAM in the mid-1980s. Written in C, on an operating system written in C, by the way. Computers at that time were entirely capable of performing well with C.

A program that does (apart from greatly improved but still nowhere near perfect spellchecking) exactly the same today runs on a computer with about 3000 times as much CPU power and about 8000 times as much main memory. And, it doesn't truly work "better" or faster in any observable way.
Such a program typically has a working set upwards of 100 MiB just for showing an empty window, reserves upwards of 300 MiB of address space, and takes anywhere from 300 to 900 MiB on your harddisk.

So what is the conclusion? Software companies deliberately produce bad software to force people into buying bigger and more expensive computers? Of course not.

It is just much, much better for business. As long as people keep buying, you're only reducing profit by doing better. The good horse only jumps as high as it needs to. It isn't worth hiring a team of highly skilled people for something a low-wage guy can do, even if that means it's 30% slower (as long as people still buy).
Moore's law [...]
Moore's law was initially a 10 year extrapolation of some observation made by an Intel founder based on (questionable) data. It however turned out being a very clever marketing strategy followed ever since, and that is all Moore's "Law" really is. Marketing.

C and C++ were very affordable on 15, 20, or 30 year old hardware, even with compilers of that time. A lot of very serious, good programs on the Atari ST and Amiga were written in GFA BASIC, which offered both a bytecode interpreter and a compiler. The performance of the GFA BASIC compiler was entirely sufficient for 99% of anything that you'd ever want to write at that time.

Every software running on the BeBox in the mid-90s was written using the Metrowerks C++ compiler (initially you had to cross-compile from Mac, what a joy!). Compared to today's compilers, MW C++ was embarrassingly poor. However, this was never an issue. Comparing my old dual-CPU 66MHz BeBox to my modern 4-core 2.8GHz Windows system, I see no substantial improvement in the "general feel" of most programs.
C is still basically the same language as it was in the 80s.
Well, yes and no. It is of course "basically" the same language, but that is true for C++ or Java too.

C has, over the years, gone a long way to make many things easier, more explicit and efficient, less ambiguous, and safer (headers like inttypes/stdint, restrict pointers, threading support, bounds checking, alignment, static assertions). In some way, if you compare C11 to, say, C89 or C90, or to K&R's original invention, it is "some completely different language".
The same is true for C++ (and probably Java, I wouldn't know... have not used Java since around 2003).
...it will become very clear to you that there is no possible way that a JIT compiled VM language (produced by your C# compiler) can be faster than native machine code
This is a very obvious truth, which should be clear even without reading academic papers.

JIT-compiled code may, in some situations, and depending on the programmer's skill, perform better. A poor C# programmer may easily outperform a poor C programmer, simply because the C# standard library is well optimized, and a poor C programmer might not be able to implement a competitive algorithm properly. However, the same is not true when comparing skilled programmers.

In the end, anything that comes out of a JIT compiler is executed as native machine code, so assuming proper input (i.e. an equally skilled programmer) it can only ever be equally fast, never faster. However, unlike a normal compiler, a JIT compiler labors under a very hefty constraint: it has to run in "almost realtime". The end user expects something to happen more or less instantly when launching a program. Nobody wants to wait a minute or two. Or ten. Caching does help, but only to some extent.

A normal optimizing compiler runs offline on the developer's machine, and this happens just once. It does not matter that much whether a release build runs in 15 seconds or 45 minutes or 4 hours (build times for non-release are a different story). It also doesn't really matter whether compiling takes 2 or 6 or 10 gigabytes of RAM, because the developer's machine will have that much -- the end user doesn't care.

Therefore, the compiler has a lot of opportunities and a lot of freedom in what it can do that a JIT simply cannot afford. With that in mind, JIT can, in general, not be faster than a normal compiler either. It just isn't realistic, no matter how clever JIT gets.

Think of playing chess against Anatoly Karpov, except Karpov only has 2 seconds for every move, and is allowed to look at only half of the board. You, on the other hand, can take any amount of time you like, use a chess computer, and may consult any amount of experts you want. He may be the best chess player in the world, but it is astronomically unlikely that he will win. Edited by samoth

Telastyn    3777
I see people talking about programmer productivity a lot, but I really wonder if this is really noticeable.

It is absolutely noticeable. Despite my reputation as a C++ hater, I spent about a decade using it as my primary language. Just switching to C# provided me about an order of magnitude productivity increase.
For example you have an NPC referencing another NPC as target, and this NPC is then deleted.

Sure, you have to deal with that anyway, but more often than not that scenario isn't your real problem. The problem is juggling the code so that you're sure to delete the things that need deleting, and so that every pointer to them finds out about it. That overhead is not trivial.

That is certainly one part of the productivity gains. Another is the ability to have a large, well-written and modern standard library.

But what really takes the cake is tooling. C++'s design is so antithetical to partial-evaluation that you can't even get decent error messages out of the thing, let alone intellisense or refactoring tools.
It wont be around in 5 years. It is not a continuation of C. It is a product from Microsoft and like many of their products before them, they will be dropped once the next newest thing comes out.

Have you looked around recently? Java's neglect and the universal distrust of Oracle have neutered its use in new development that isn't on Android. Scala hasn't gained a foothold, due to its dependence on the JVM (and hence Oracle) and its over-complexity. C++ hasn't been used for business development in more than a decade (and no, C++/CX isn't going to help that, since Windows 8 is being adopted by few people in mobile, and far fewer on the desktop). What else is there? Python? Not for enterprise development. Objective-C? Not outside of iOS.

C# might not be popular in 5 years (and I expect it will be waning by then), but that will be because something superior has come to replace it. Until then, even Microsoft doesn't have that much clout.

3Ddreamer    3826

The industry, and specifically Microsoft, has development cycles which affect what is happening. The standard business model is to overlap and stagger the cycles where possible to smooth the income flow.

 

This brings me onto my biggest issue with C#.

It won't be around in 5 years. It is not a continuation of C. It is a product from Microsoft and, like many of their products before it, it will be dropped once the next newest thing comes out.

 

C# and libraries are simply far too good to drop any time in the foreseeable future.

 

Microsoft and millions of companies and individuals use C# worldwide.  In business applications, C# use is accelerating (even in non-USA markets), as is the case also with several major languages, especially in scripting.

 

Karsten, why do you believe that C# is so popular?  Do you think it is because Microsoft promotes it or does the use of C# grow because it is a fantastic language with great existing support?  C# and libraries continue to evolve and keep pace with technology mostly independent of Microsoft investment, so why would that be different in 5 or more years?

 

C# is not only the .NET Framework standard which Microsoft created, but one of the ECMA standard languages which anyone can and does use independently of Microsoft support. C# development has truly taken on a life of its own.

 

 

So I advise developers to stop messing around with novelty languages, and use the standard C++ language so your customers can still play your games in a few years time once platforms have changed. Otherwise I find it a tad careless tbh...


No single standard language exists across the whole field of development. C# dominates in some segments of the development industry and C++ does in others, while another language may dominate in a narrower niche.

 

C# is widely accepted, widely used, and massively invested in, to the tune of billions of dollars. C# is no novelty language.

 

These languages will be widely used for many years: C, C++, C#, Java, Python, and Lua. It would not surprise me at all if they stayed in common use 10 or 20 years from now, after the next big language is introduced, just as happened when C# was published.


C# and its libraries are evolving to reach both higher and lower in coding, but with far less convoluted growth than C++ before it. Much of this is due to tighter standardization of C# than occurred over C++'s earlier lifetime. Industry cooperation and standards made this possible, though many people aren't aware of that stewardship.

 

C# improvements follow directly from the industry associations' commitment to those standards.


so your customers can still play your games in a few years time once platforms have changed.


Cross-platform implementations of C# exist which will allow current games made with it to be played years in the future on new systems, and also to remain playable on older ones. Both non-Microsoft and Microsoft APIs exist which allow this. The same is true of all the other major languages, by the way.

 

The language is not the obstacle to being cross-platform, backwards compatible, and forwards compatible: the programmer's skill in the use of APIs is the core of such cross-platform implementation.

 

Take the same game source code, appropriately written, and the developer can use APIs to make it run on any framework for that language. Lawsuits against Microsoft, Apple, Sony, Google, and other companies, brought by governments and private parties, have ensured that this will remain the situation for many years to come.

 

To say that oranges will disappear because the next hybrid apple has appeared is nonsense.


Clinton

Hodgman    51237

I see people talking about programmer productivity a lot, but I really wonder if this is really noticeable.

It is absolutely noticeable. Despite my reputation as a C++ hater, I spent about a decade using it as my primary language. Just switching to C# provided me about an order of magnitude productivity increase.

It depends on the type of work you're doing.
For my engine's tool-chain, I use C#, because it really is easy to just get stuff done(tm) with it, but for the engine runtimes (which are more "systems programming" than "application programming", to use a shaky generalization) I'm more productive using C++, because C# code gets really ugly when doing systems-level tasks, while C++ makes it easy (or, is just as ugly as usual (-;)
e.g. I just posted some C++ code in a thread about optimized renderers -- writing that same code in C# would be a ton uglier and would take me a lot longer to write.

Telastyn    3777
Enh, that code wouldn't be uglier in C#. You would still have the structs for device/command and still have the array for the variable behavior. The issue would be that C# delegates don't have the same performance characteristics as the function pointer, meaning you don't gain your cache benefits.

That said, that sort of virtual dispatch optimization is right in the wheelhouse for things that JIT'ed languages can optimize that C++ can't.

3Ddreamer    3826

frob wrote:

The OP (and many others) have just made the assumption that C# is slower or more cumbersome. I heard the same thing in the 90s that C++ was bloated and more cumbersome than C. I read the same arguments in the 80's that C was painfully slow and could never replace the skilled assembly-writing artisan.

The question has never been "will C++ be replaced", but "when". I believe we passed the tipping point a few years ago. It is now more difficult to find a seasoned C++ developer than a seasoned C# developer, and the latter is more productive overall than that C++ developer.

 

I agree totally here.

 

Skill of the developer/programmer has the greatest effect on performance of any aspect of game development. Since C# programmers are growing in numbers, we can see where this is going. The more programmer-friendly nature of C# means that the number and experience of C# developers will continue to increase. Hardware advances will seal the deal in regard to C# matching or outperforming C++ development, once C# experience is combined effectively with the hardware performance increases of the coming years.

 

We are already seeing hardware and systems architecture take into account the advantages of managed languages and their increase in popularity, for example with "auto-threading" and "auto-caching".

 

Clinton

Hodgman    51237
Enh, that code wouldn't be uglier in C#. The issue would be that C# delegates don't have the same performance characteristics as the function pointer, meaning you don't gain your cache benefits.
Then it's not *the same* code. I'm talking about writing the same kind of low-level code where you're manually optimizing for cache-misses and load-hit-stores and branch-mispredictions and whatnot. Modern versions of C# have the tools to do this, but it's quite a deviation from the typical C# style.

If the task at hand is concerned with these kinds of details, then C++ is a more productive language to be writing in. Edited by Hodgman

