GC is another topic that always pops up in this kind of discussion... IMO it's pure nonsense. GC won't trigger if you don't "new" stuff, and will be VERY fast if you don't have objects with medium life expectancy. It's just a matter of taking some time to understand how the system works and how you can make it work for you. It's much easier to learn to deal with .NET's GC than to learn proper memory management in C++, whether plain or through the 6-7 "smart" pointer types available.
Just as you try to avoid new and delete in your game loop in C++, avoid newing class objects in C# and GC won't cause any trouble.
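The same idea in Java terms (the advice applies to any GC'd runtime): preallocate everything up front and reuse objects, so the steady-state loop never allocates and the collector has nothing to do. A minimal sketch; the `Bullet`/`BulletPool` names are made up for illustration:

```java
import java.util.ArrayDeque;

// Hypothetical example: a simple object pool so the game loop
// never allocates per frame, and the GC stays quiet.
final class Bullet {
    float x, y, vx, vy;
    void reset(float x, float y, float vx, float vy) {
        this.x = x; this.y = y; this.vx = vx; this.vy = vy;
    }
}

final class BulletPool {
    private final ArrayDeque<Bullet> free = new ArrayDeque<>();

    BulletPool(int capacity) {
        // All allocation happens once, up front.
        for (int i = 0; i < capacity; i++) free.push(new Bullet());
    }

    Bullet acquire(float x, float y, float vx, float vy) {
        // Fall back to allocation only if the pool was sized too small.
        Bullet b = free.isEmpty() ? new Bullet() : free.pop();
        b.reset(x, y, vx, vy);
        return b;
    }

    void release(Bullet b) { free.push(b); }
}
```

The point isn't the pool itself but the pattern: objects that would otherwise be short-lived garbage become long-lived and recycled, which is exactly the allocation profile a generational GC handles best (or never sees at all).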
Dynamic memory management by definition incurs a certain performance penalty. No matter what system is used, these penalties can be managed.
However, a language that forces you to use one single tool for dynamic memory management, the garbage collector, limits your flexibility in dealing with issues that come up quite a lot.
This is why languages that allow manual memory management will always have an edge in performance potential. Whether that potential is used is up to the programmers involved.
I don't think general GCs will ever get so good that manual memory management won't matter any more. After all, GCs also need to be implemented somehow. ;)
So what will happen (and is happening already, if you look closely enough) is that manual and automatic memory management will be mixed.
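That mix already exists in today's managed runtimes: off-heap memory that the GC never scans or moves, living alongside the normal managed heap. A sketch in Java using NIO direct buffers (the `ParticleStore` class is a hypothetical example):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: particle positions stored outside the garbage-collected heap.
// The GC never scans or relocates this memory; its layout and lifetime
// are managed "manually" by the programmer, while the rest of the
// program keeps using the managed heap as usual.
final class ParticleStore {
    private static final int FLOATS_PER_PARTICLE = 2; // x, y
    private final ByteBuffer buf;

    ParticleStore(int count) {
        buf = ByteBuffer.allocateDirect(count * FLOATS_PER_PARTICLE * Float.BYTES)
                        .order(ByteOrder.nativeOrder());
    }

    void set(int i, float x, float y) {
        int base = i * FLOATS_PER_PARTICLE * Float.BYTES;
        buf.putFloat(base, x);
        buf.putFloat(base + Float.BYTES, y);
    }

    float x(int i) { return buf.getFloat(i * FLOATS_PER_PARTICLE * Float.BYTES); }
    float y(int i) { return buf.getFloat(i * FLOATS_PER_PARTICLE * Float.BYTES + Float.BYTES); }
}
```

C#'s `unsafe`/`stackalloc` and struct arrays play a similar role on the .NET side: hot data opts out of GC management while the application as a whole stays managed.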
You don't seem to understand how the C# runtime works at all, so your claims are as wrong as they get.
Every single C# function gets compiled to native code by the JIT the first time it is invoked; from that point on, that function is running native code, period. So the "more work to do for every instruction" claim is just... uninformed and uninformative.
This has been the case for ages, since Java started doing it a loooong time ago.
It's important to know that not all platforms allow emitting native code, because you either can't write to executable pages, can't change the executable flag on pages or the platform will only execute code signed with a secret key. This is especially true for the platforms we're usually dealing with in gamedev (consoles, smartphones, tablets).
In all of these cases, there's no (allowed) way to avoid using runtime interpretation of byte code.
It is possible to "pre-JIT" byte code in some languages, but at that point you're basically back to a standard compiled language with a worse compiler.
Additionally, thanks to the LLVM project (and others like Cint or TCC), it's possible to JIT or interpret C and C++ source or byte code, closing this particular gap even more.
What remains is that "cafe based" languages (Java, .NET) need to assume a virtual machine to work properly. So runtime performance can only ever be as good as the match between this virtual machine and the real machine used, causing more and more trouble as the virtual machine ages and real machines progress.
Therefore, all other things being equal, one will always pay a performance penalty when using languages targeting virtual machines. The question is how big this gap is. In my opinion, this performance penalty will shrink to almost zero over time, as JIT and regular compilers converge (again, see LLVM).