

Rattenhirn

Posted 27 December 2012 - 07:57 AM

kunos said:
GC is another topic that always pops up in these kinds of discussions... IMO it's pure nonsense. The GC won't trigger if you don't "new" stuff, and it will be very fast if you don't have objects with medium life expectancy. It's just a matter of taking some time to understand how the system works and how you can make it work for you; it's much easier to learn to deal with .NET's GC than to learn proper memory management in C++, whether with plain pointers or through the 6-7 "smart" pointers available.
Just as you try to avoid new and delete in your game loop in C++, avoid newing class objects in C# and the GC won't cause any trouble.


Dynamic memory management incurs, by definition, a certain performance cost. No matter what system is used, that cost can be managed.

However, a language that forces you to use one single tool for dynamic memory management, the garbage collector, limits your flexibility in dealing with issues that come up quite a lot.

This is why languages that allow manual memory management will always have an edge in performance potential. Whether that potential is actually used is up to the programmers involved.

I don't think general-purpose GCs will ever become so good that manual memory management no longer matters. After all, GCs also need to be implemented somehow. ;)

So what will happen (and is already happening, if you look closely enough) is that manual and automatic memory management will be mixed, as sketched below.
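As a minimal sketch of that mix (the Bullet class and the pool size are hypothetical, not from any particular engine): the pool below manages a fixed set of objects by hand inside a garbage-collected language, so the per-frame game loop allocates nothing and gives the GC nothing new to collect.

    using System.Collections.Generic;

    // Hypothetical pooled game object; allocated once, reused forever.
    class Bullet
    {
        public float X, Y, VelX, VelY;
        public bool Active;
    }

    class BulletPool
    {
        private readonly Stack<Bullet> free = new Stack<Bullet>();

        public BulletPool(int capacity)
        {
            // All allocation happens up front, outside the game loop.
            for (int i = 0; i < capacity; i++)
                free.Push(new Bullet());
        }

        public Bullet Spawn(float x, float y)
        {
            // Reuse an existing object instead of "newing" one per shot.
            if (free.Count == 0)
                return null; // pool exhausted
            Bullet b = free.Pop();
            b.X = x; b.Y = y; b.Active = true;
            return b;
        }

        public void Despawn(Bullet b)
        {
            b.Active = false;
            free.Push(b); // back to the pool, never to the GC
        }
    }

Spawning and despawning bullets during the game loop then just pops and pushes pool entries; the only "new" calls happen once, at load time.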
 

kunos said:
You don't seem to understand how the C# runtime works at all, so your claims are as wrong as it gets.
Every single C# function gets compiled to native code by the JIT the first time it is invoked; from that point on, that function is running native code, period. So the "more work to do for every instruction" claim is just... uninformed and uninformative.
This has been the case for ages, ever since Java started doing it a long time ago.
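On a desktop runtime, the quoted behaviour is easy to observe directly. Here's a minimal C# sketch (timings are illustrative and vary by machine and runtime): the first call to a method pays the one-time JIT cost, while the second call runs the already-compiled native code.

    using System;
    using System.Diagnostics;

    static class JitDemo
    {
        static double Work(double x)
        {
            double acc = 0;
            for (int i = 0; i < 10000; i++)
                acc += Math.Sqrt(x + i);
            return acc;
        }

        static void Main()
        {
            var sw = Stopwatch.StartNew();
            Work(1.0);  // first call: the JIT compiles Work to native code
            Console.WriteLine("first call:  {0} ticks", sw.ElapsedTicks);

            sw.Restart();
            Work(1.0);  // second call: runs the cached native code
            Console.WriteLine("second call: {0} ticks", sw.ElapsedTicks);
        }
    }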


It's important to know that not all platforms allow emitting native code at runtime: you either can't write to executable pages, can't change the executable flag on pages, or the platform will only execute code signed with a secret key. This is especially true for the platforms we usually deal with in gamedev (consoles, smartphones, tablets).

In all of these cases, there's no (allowed) way to avoid using runtime interpretation of byte code.

It is possible to "pre-JIT" byte code in some languages, but at that point you're basically back to a standard compiled language with a worse compiler.
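Concretely, both major .NET runtimes ship such a pre-compilation step; it looks roughly like this (MyGame.exe is a placeholder assembly name):

    ngen install MyGame.exe    # Microsoft .NET: compile to native code and cache the result
    mono --aot MyGame.exe      # Mono: emit an ahead-of-time compiled image alongside the assembly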

Additionally, thanks to the LLVM project (and others like CINT or TCC), it's possible to JIT or interpret C and C++ source or byte code, closing this particular gap even more.

What remains is that "cafe based" languages (Java, .NET) need to assume a virtual machine to work properly. So runtime performance can only ever be as good as how well this virtual machine matches the real machine it runs on, causing more and more trouble as the virtual machine ages and real machines progress.

Therefore, all other things being equal, one will always pay a performance penalty when using languages targeting virtual machines. The question is how big this gap is. In my opinion, this penalty will shrink to almost zero over time, as JIT and regular compilers converge (again, see LLVM).

