lycaonos

Why isn't managed code faster than usual?


I was feeling kind of stupid since I didn't fully understand the meaning of managed code, and started looking around. I had always assumed it meant something like Java, which I discovered is obviously wrong. After learning about just-in-time compilation I am faced with a different question: why isn't managed code faster than unmanaged? If it is bytecode optimally compiled for the CPU it is running on, it should if anything be much faster than unmanaged code that doesn't involve the CLR or CLI at all. So why isn't it?

p.s. Here is the site that inspired me to post this; it seems pretty unbiased, but what do I know :P http://www.grimes.demon.co.uk/dotnet/man_unman.htm

That is really a meaningless question. You can write really inefficient code in assembly if you don't know what you are doing. What are you comparing against? How are you measuring performance?

The notion that JIT compiled code "could be faster" because it can take advantage of deployment-specific optimizations is mostly theoretical. It doesn't happen much, if at all, in practice.

JIT compilers are limited by the need to balance compilation speed against the speed and quality of the generated code; aggressive optimization frequently takes more time than a JIT can afford, so it cannot be pursued.

For example, the highly processor-specific instructions that can be generated -- SIMD-type stuff -- are frequently misunderstood from a performance perspective. They offer great speed gains when used in bulk, not in small bits. While a JIT compiler might be able to determine that its target platform supports SSE and generate SSE instructions, it's not going to be able to reparallelize the algorithm and data to be efficient for batch computation, which is where SIMD really excels. So generating some SSE for a simple floating-point add isn't worthwhile at all.
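To make that point concrete, here's a minimal Java sketch (class and method names are my own, purely illustrative). SIMD pays off on the bulk-loop shape, and the programmer has to provide that shape; a JIT can't invent it from a lone scalar add.

```java
public class SimdShape {
    // A single isolated add: even if the JIT emitted an SSE instruction
    // here, there is no batch of work for it to speed up.
    static float addOne(float a, float b) {
        return a + b;
    }

    // A bulk, data-parallel loop: this is the shape SIMD rewards, and
    // a JIT can auto-vectorize it -- but only because the data was
    // already laid out as contiguous arrays by the programmer.
    static void addAll(float[] dst, float[] a, float[] b) {
        for (int i = 0; i < dst.length; i++) {
            dst[i] = a[i] + b[i];
        }
    }
}
```

The JIT can turn `addAll` into SSE code mechanically; turning a program full of scattered `addOne` calls into `addAll` is an algorithmic restructuring no compiler does for you.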

That article draws some eyebrow-raising conclusions based on their one simple (contrived) experiment, which doesn't sound particularly smart to me. I'm sorry, but "managed code is faster than unmanaged code because it ran some Fourier algorithm faster" just doesn't cut it.

I'm certainly no expert on the topic, but like most things, I would guess the real truth is much more complicated than they're making it out to be. For example, what happens in situations where controlling memory access and allocation is critical, such as on embedded systems and consoles? How much of the performance gap should we attribute to programmer skill? Someone more experienced than I am could probably come up with dozens of scenarios where either might have the advantage.

Actually, under the hood Java does the exact same thing .NET does: it has bytecode that is JIT-compiled too.

There are several reasons why managed code isn't much faster than unmanaged code. First, offline compilers can do more optimization because they have more time. Second, run time now includes compile time: if JIT compilation took 2 seconds out of a 20-second run, that's 10% of the measured run time lost to compiling on the fly.

Third, the example they use is quite poor. A Fourier transform is quite math-heavy, and since they used the built-in library functions, I bet 90% of the time their examples run they are executing identical code from Microsoft's standard library, which isn't reimplemented for each language. Also, the Fourier transform, by their own admission, isn't optimized code; it was originally written in C++, and so more than likely plays to C++'s strengths, not managed code's.

Finally, they use the same code base, so there isn't much the compiler can do. If I tell the compiler to add two numbers together, it has to produce the CPU's add opcode; if I lay the code out right it can try some small tricks, but at the end of the day it has to do exactly what I told it to do.
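The second point -- JIT time counting against measured run time -- is easy to see with a crude warm-up experiment. This is a sketch of the idea only (a real benchmark would use a proper harness, and the names here are mine):

```java
public class WarmupSketch {
    // Some deterministic numeric work for the JIT to chew on.
    static long work(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) {
            acc += i * 31L;
        }
        return acc;
    }

    public static void main(String[] args) {
        // Cold: the first timed call may include interpretation
        // and JIT-compilation cost.
        long t0 = System.nanoTime();
        work(10_000);
        long cold = System.nanoTime() - t0;

        // Warm up: repeated calls give the JIT time to compile
        // and optimize the hot method.
        for (int i = 0; i < 20_000; i++) {
            work(10_000);
        }

        // Warm: steady state, measuring only the compiled code.
        long t1 = System.nanoTime();
        work(10_000);
        long warm = System.nanoTime() - t1;

        System.out.println("cold ns: " + cold + ", warm ns: " + warm);
    }
}
```

A fair managed-vs-native comparison has to decide whether the cold number or the warm number is the one being reported.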

I think the question of why managed code isn't faster than native code is actually a valid one, and obviously the CLR people at Microsoft, as well as the JVM people at Sun, are working hard on making that happen.

If you look at what the more advanced runtimes for the Self programming language do with dynamic optimization, it becomes quite obvious that managed code doesn't have to be slower than native code. In fact, because these runtimes constantly profile the code they run and gradually optimize it using data gathered during the execution, the machine code they generate is usually extremely efficient and often more efficient than statically optimized code.

Another thing (which isn't restricted to managed code, though) is the use of garbage collectors. Automatic garbage collection obviously comes at a price, but it can also provide some performance benefits. A compacting garbage collector will, for example, allow you to do heap allocations at memory-bandwidth speed.
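That speed comes from bump-pointer allocation: because a compacting collector keeps the free region contiguous, allocating is just advancing a pointer. A toy sketch of the idea (my own illustrative names, not any real VM's internals):

```java
public class BumpArena {
    private final byte[] heap;
    private int top = 0; // offset of the next free byte

    BumpArena(int size) {
        heap = new byte[size];
    }

    // Allocation is one bounds check plus one add -- no free-list
    // search. Compaction is what keeps this valid: it slides live
    // objects together so free space stays one contiguous block.
    int alloc(int n) {
        if (top + n > heap.length) {
            return -1; // a real VM would trigger a collection here
        }
        int addr = top;
        top += n;
        return addr;
    }
}
```

Compare that with a typical `malloc`, which may have to search free lists and handle fragmentation on every call.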

Here's the thing, though: I've read a lot of articles and benchmarks claiming that the latest HotSpot JVMs are actually faster than statically compiled C code. That, of course, is complete bullsh*t. There are some situations where a static compiler will produce very inefficient machine code while a JIT compiler will produce more efficient code. Also, there are some cases where automatic garbage collection will actually be faster than new/delete or malloc/free. However, all the advantages managed code gives you come at a price: JIT compilation has a significant overhead, automatic garbage collection has a significant overhead, and so on.
I'm sure we'll see extremely fast VMs in the future, and the gap between native and managed performance will keep narrowing. But there will always be cases where statically compiled code is faster, just as there will always be cases where managed code is faster.

Here's a great paper discussing the performance implications of garbage collection: http://lambda-the-ultimate.org/node/2552.

They came up with a method of taking the same algorithm, running it either with a GC or with (the equivalent of) explicit memory management, and comparing the performance. The GC version was a little slower, used more memory, and did severely worse when it needed to page memory to disk.

Reflection.

VMs are the same as running C++ code in debug mode. Now compare performance between those two, and you'll see that VMs are incredibly fast: they do everything a debug build does and more, yet run at most about 2x slower than a release build. Remove reflection from those languages, and you could JIT them down to the equivalent of C++.

The second reason is dynamic allocation. Most of the time VMs force you to allocate things on the heap when that isn't necessary, often simply out of convenience. Memory allocation is incredibly expensive.
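In Java terms, the classic example of convenience-driven heap allocation is boxing: summing through a `List<Integer>` puts every element on the heap as an object, while a primitive array allocates nothing per element. A small sketch (illustrative names of my own):

```java
import java.util.List;

public class BoxingCost {
    // Every element of the list is a heap-allocated Integer object,
    // unboxed on each iteration.
    static long sumBoxed(List<Integer> xs) {
        long acc = 0;
        for (Integer x : xs) {
            acc += x;
        }
        return acc;
    }

    // Primitives live inline in the array: no per-element
    // allocation, no unboxing, better cache behavior.
    static long sumPrimitive(int[] xs) {
        long acc = 0;
        for (int x : xs) {
            acc += x;
        }
        return acc;
    }
}
```

Both compute the same sum; the boxed version simply pays a heap object per element for the convenience of the collections API.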

The third reason is that a 2x loss in performance buys something like a 5x gain in productivity. In today's world that trade-off is so worth it that nobody has looked back.

Then again, there are people who would argue that C++ is horribly inefficient and that one needs to optimize things manually in assembly. In a way that's true, but again, at a huge productivity loss.

Also, the most popular applications today are written in: *drum roll* JavaScript.

Performance of this type doesn't matter. Today it's all about scalability, and that's a completely different class of algorithms.

It should also be noted that specialized algorithms (multimedia, SIMD) are a different class altogether and can't even be considered when evaluating VMs. That is not what VMs are for.

Quote:

Second reason is dynamic allocations. Most of the time VMs force you to allocate things on heap when that's not necessary, often simply due to convenience. Memory allocation is incredibly expensive.


Both the Sun JVM and the CLR allow you to do heap allocation at stack speed because they use copying and compacting garbage collectors. Memory allocation in these languages is incredibly fast... reclaiming the memory is a whole different story.

