JIT compiled code vs native machine code


Hey all, I have been trying to figure out which approach is optimal: using a language like C++, which compiles to native machine code, or a language like C#, which is JIT compiled. You might think it's obvious that pre-compiled programs run faster, but upon closer investigation, JIT compiling, in theory (also in practice?) would be the optimal way to go. The reason I say this is that JIT compiling gives you the advantage that the machine code it generates is optimized for the specific processor the code runs on, while code that is compiled straight to machine code has no way of knowing the exact processor it will run on (apart from general knowledge, like that it targets the x86 architecture).

I'm really curious about this because, as many of you who program out there might agree, C++ is a p.i.t.a. to work with sometimes, while languages like C# have some nice features and are a lot easier to work with. Have there been many studies done on this? What do you think? Any experiences that have you leaning one way or the other?
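To make the processor-specific point concrete, here is roughly what native code has to do by hand to get CPU-specific code paths: runtime dispatch. A minimal sketch, assuming a newer GCC (__builtin_cpu_init and __builtin_cpu_supports are GCC builtins); the two sum variants are hypothetical stand-ins:

#include <cstdio>

// Two hand-written variants of the same routine. A JIT can simply emit the
// best instruction sequence for the host CPU at load time; a native binary
// has to ship every variant and pick one at runtime.
void sum_scalar(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

void sum_sse(const float* a, const float* b, float* out, int n) {
    // imagine a hand-vectorized SSE version here
    for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

int main() {
    __builtin_cpu_init(); // queries CPUID once
    void (*sum)(const float*, const float*, float*, int) =
        __builtin_cpu_supports("sse2") ? sum_sse : sum_scalar;

    float a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1}, out[4];
    sum(a, b, out, 4);
    printf("%f\n", out[0]);
    return 0;
}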

JIT compilation is potentially far better. It has access to more of the program and to processor-specific features [like you said], and it can perform analysis that just plain can't be performed in a statically compiled but dynamically linked environment.

That said, performing this sort of analysis is computationally expensive. The result is that corners are cut in favor of getting the program up and running as fast as possible, whereas compilation time is largely irrelevant to a statically compiled program [not really, but a day-long compile is tolerable in that environment, whereas JIT compilers are expected to work in a fraction of a second]. There is simply more that can be done in a JIT compiled environment, but this added effort is very rarely taken because of the real-world time it takes. Hopefully, multi-core machines will begin to influence this, as JIT compiled code can also be compiled incrementally, but such a thing is pretty rare to see used in practice currently [at least not to the degree that it could be]. Without a shadow of a doubt, though, more can be done in a JIT compiler than in a static compiler.

While it's important to wring the most that you can from your code, the binary/ISA level is really the last thing you should be worrying about. Ask yourself honestly if you're going to be in a position to spend 50% of your time getting another 10-15% performance. Are you going to be making a game comparable to Rage or Crysis any time soon? Unless you're on that level, you can build just about any game with more straightforward techniques.

Things like good algorithms, data structures, and memory usage are far more important for performance, and none of those things can really be done better in one language than another, provided the languages offer comparable facilities.
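For instance, the gap between a poor algorithm and a good one swamps any native-vs-JIT difference. A quick sketch (the task is just for illustration):

#include <unordered_set>
#include <vector>

// Same task, same language: does the vector contain a duplicate?

// O(n^2): fine at n = 100, hopeless at n = 1,000,000 -- in any language.
bool has_dup_quadratic(const std::vector<int>& v) {
    for (size_t i = 0; i < v.size(); ++i)
        for (size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// O(n) expected: the version worth writing, native or managed.
bool has_dup_linear(const std::vector<int>& v) {
    std::unordered_set<int> seen;
    for (size_t i = 0; i < v.size(); ++i)
        if (!seen.insert(v[i]).second) return true;
    return false;
}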


In theory, yes, JIT code is able to compile for the actual hardware it's running on; however, I don't believe this is exploited to the fullest at this point. Code optimization isn't a simple thing, and even high-end, expensive, stand-alone compilers aren't able to do much with things like auto-vectorization. Native code, for now, generally wins.

In theory, JIT compiling is superior. In practice, it's more or less a wash.

Regardless, unless you are talking about doing distributed computing across huge numbers of nodes, or something similar, the speed difference is completely and utterly irrelevant.

Programmer productivity is infinitely more important than a few milliseconds.

Quote:
Original post by zenprogrammer
You might think it's obvious that pre-compiled programs run faster, but upon closer investigation, JIT compiling, in theory (also in practice?) would be the optimal way to go.

In theory: Yes.
In reality: [lol] [lol] [lol] [lol]

Static compilation technology has had a good four decades or so to evolve, much of it focused on compiling C and later C++. A modern C++ compiler is beastly when it comes to generating lean, fast machine code. Even the bad ones, like GCC. Modern techniques like whole-program analysis and link-time code generation, while expensive, provide fantastic results.
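As a concrete illustration, link-time code generation lets the toolchain optimize across translation units that were compiled separately. A minimal sketch, assuming GCC's -flto flag (MSVC spells it /GL at compile time plus /LTCG at link time):

// math_utils.cpp -- a separate translation unit
int square(int x) { return x * x; }

// main.cpp
extern int square(int);

int main() {
    int total = 0;
    for (int i = 0; i < 1000; ++i)
        total += square(i); // cross-TU call: opaque to a classic compiler
    return total & 0xff;
}

// Classic build: square() cannot be inlined into main(), the call stays:
//   g++ -O2 -c math_utils.cpp main.cpp && g++ math_utils.o main.o
// With LTO the "linker" sees both bodies and can inline, constant-fold,
// and even collapse the whole loop:
//   g++ -O2 -flto -c math_utils.cpp main.cpp
//   g++ -O2 -flto math_utils.o main.o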

JIT, while theoretically way better, performs quite poorly in reality. Never mind runtime JIT; even ahead-of-time compilation ("Ngen" in the .NET world) doesn't perform at nearly the same level. I'm not entirely clear on why this is the case, although I'm sure there are good reasons. But the current crop of JIT engines is poor at basic tasks like inlining, let alone tasks like autovectorization that are difficult even in static compilation. (Why Microsoft is 3.5 versions in and hasn't bothered to provide something like Mono.Simd yet is beyond me.)

It wasn't that long ago we were still coding assembly to really get the last mile. Quake 2's software renderer comes to mind -- that was only a bit over 11 years ago.

Sooo...wait until 2020 and we're golden?

Of course, while JIT itself is fairly awful, JIT-based runtimes have a number of very interesting properties, primarily centered around memory usage, that make them absolutely fabulous for performance in some scenarios. They can match or outrun a competently designed, comparable C++ program, with a fraction of the development time. While the C++ code could theoretically match this, it'd basically involve a wholesale reimplementation of the runtime engine powering the managed language. So don't center your attention on just the JIT when it comes to performance.

P.S. C++ vs C# is still not allowed and I will still be deleting posts on a whim.

I'm by no means an expert on the subject, but in this talk one of the developers of XNA likens machine code created by the JIT to the output of a 1990s C++ compiler. That is, the machine code generated by the JIT is poorly optimized, just as machine code from C++ compilers was in the '90s, but it still _is_ machine code. So if you know what you are doing you can, theoretically, get very good performance out of C#; you just have to do the optimizing yourself.

Quote:
Original post by Promit
P.S. C++ vs C# is still not allowed and I will still be deleting posts on a whim.


Oh snap! I wasn't aware it came across like that. Really, the question isn't aimed at a particular language; I'm mostly concerned about compilation methods.

I think some of the new features introduced in Visual Studio can really affect this comparison. The addition of profile-guided optimization brings the best of both worlds: you get the native speed of static code with the additional performance gains you'd get from run-time optimizations.

Although generating the profile data is a total pain and the whole process is rather cumbersome, the outcome is sometimes really significant.
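For reference, the same three-step workflow with GCC's equivalent flags (Visual Studio uses /GL plus /LTCG:PGINSTRUMENT for the training build and /LTCG:PGOPTIMIZE for the final link); the function here is just an illustrative stand-in:

#include <cstdlib>

// A function whose hot path the compiler cannot guess statically. With
// profile data, the common case lands on the fall-through path, hot code
// gets inlined, and cold code is moved out of the way.
int process(int category, int value) {
    switch (category) {
        case 0:  return value * 2; // say, 95% of real-world calls
        case 1:  return value / 3;
        default: return std::abs(value);
    }
}

int main() {
    int total = 0;
    for (int i = 0; i < 1000000; ++i)
        total += process(i % 100 ? 0 : 1, i); // almost always category 0
    return total & 0xff;
}

// 1. Instrumented build:  g++ -O2 -fprofile-generate pgo_demo.cpp -o app
// 2. Training run:        ./app   (on representative input)
// 3. Optimized rebuild:   g++ -O2 -fprofile-use pgo_demo.cpp -o app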

I find worrying about this sort of thing, although an interesting discussion, just gets in the way of writing applications (and getting actual work done).

Quote:
Original post by AAA
I find worrying about this sort of thing, although an interesting discussion, just gets in the way of writing applications (and getting actual work done).


Before one writes an application, there are many important things to consider in the app's architecture. Surely, if we are to be GOOD Software Engineers, our systems must be well planned BEFORE any code is written. Savvy?

Quote:

Before one writes an application, there are many important things to consider in the app's architecture. Surely, if we are to be GOOD Software Engineers, our systems must be well planned BEFORE any code is written. Savvy?

There are more important factors to consider -- those that can actually have an impact on design and architecture -- than whether the program will be at its heart native-compiled or JIT'd.

Quote:
Original post by Drigovas
JIT compilation is potentially far better. It has access to more of the program and to processor-specific features [like you said], and it can perform analysis that just plain can't be performed in a statically compiled but dynamically linked environment.


I also thought that, until I tried it out.
There is a simple raytracer on this page:
http://ompf.org/forum/viewtopic.php?f=6&t=1124
Compiled with GCC 4.1.2, it took approx. 4 seconds to run on my 64-bit Linux machine.

Being interested to know how Sun's JDK 1.6 would perform on the same task, I ported the raytracer to Java. Memory allocations are performed up front, so there is no new in an inner loop. Now it took 8 seconds to run...

Maybe with some improvements/tricks one could reach the same speed as the C++ version, but I doubt the Java version would ever be faster than the C++ one. Someone might try the test with C#?

So, as was already stated, in _theory_ JIT is superior, but in _practice_ it's inferior at this time.

Quote:
Original post by nmi
Maybe with some improvements/tricks one could reach the same speed as the C++ version, but I doubt the Java version would ever be faster than the C++ one. Someone might try the test with C#?
Benchmarks of one particular piece of software, tested with two specific implementations, at one arbitrary point in time, on one particular hardware configuration, are barely even relevant at the time they're conducted, let alone being any indication of future performance.

I'm surprised no one has mentioned LLVM yet. A full C compiler has been built with it, and C++ support is growing. It is backed by Apple (LLVM is used in Apple's OpenGL stack) and it is "non-restrictive" open source (admittedly, I am not the biggest fan of Apple, but that's a matter of POV).

It has a damn lot of optimization features, and its API can be used to build a jitter for <insert language of choice (*)>. It also supports eager compilation (and in the long term it is looking forward to implementing lifelong optimization (**); for that, I refer you to their publications page: http://llvm.org/pubs/ -- for a basic overview, see the 2002 papers).
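To make the "build a jitter" point concrete, here is roughly what JITting a trivial function through the LLVM C API looks like. Treat it as a sketch only: the exact set of initialization/link-in calls varies between LLVM versions.

#include <llvm-c/Core.h>
#include <llvm-c/ExecutionEngine.h>
#include <llvm-c/Target.h>
#include <cstdio>

int main() {
    // Build IR for: int add(int a, int b) { return a + b; }
    LLVMModuleRef mod = LLVMModuleCreateWithName("jit_demo");
    LLVMTypeRef params[2] = { LLVMInt32Type(), LLVMInt32Type() };
    LLVMValueRef fn = LLVMAddFunction(mod, "add",
        LLVMFunctionType(LLVMInt32Type(), params, 2, 0));
    LLVMBuilderRef b = LLVMCreateBuilder();
    LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(fn, "entry"));
    LLVMBuildRet(b, LLVMBuildAdd(b, LLVMGetParam(fn, 0),
                                    LLVMGetParam(fn, 1), "sum"));

    // JIT the module and call the freshly generated machine code.
    // (Depending on the LLVM version, you may also need LLVMLinkInJIT()
    // or LLVMLinkInMCJIT() plus the native asm printer.)
    LLVMInitializeNativeTarget();
    LLVMExecutionEngineRef ee;
    char* err = 0;
    if (LLVMCreateExecutionEngineForModule(&ee, mod, &err)) {
        fprintf(stderr, "EE error: %s\n", err);
        return 1;
    }
    LLVMGenericValueRef args[2] = {
        LLVMCreateGenericValueOfInt(LLVMInt32Type(), 2, 0),
        LLVMCreateGenericValueOfInt(LLVMInt32Type(), 40, 0)
    };
    printf("%d\n", (int)LLVMGenericValueToInt(
        LLVMRunFunction(ee, fn, 2, args), 0));
    return 0;
}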

A demo is available here, including a complete description of all CFG analysis passes.

Some performance comparisons [benchmark charts in the original post] look very promising.
(*) That is, for most existing languages you would have to write a parser yourself, as LLVM is not a compiler but a backend. To JIT away, you basically build up a very, very simple AST (AFAIR, there are fewer than 35 instructions in LLVM assembly).
(**) But I am not going to rely on such a feature until it is implemented [smile]

The only reason that C or C++ compiled code runs faster is the optimization work that has had more than four decades of constant improvement and refinement.
In theory, JIT code should outperform static machine code, because it is possible to dynamically recompile code on the fly as it runs, taking into account statistical information gathered by the running program.
In practice, however, the code optimizers available for Java, C#, etc. are just not on a par with their C counterparts.
This will change eventually, though. Back in the early nineties, fast code still had to be written in hand-optimized assembly. Nowadays it's very rare that you need such tedious optimization; indeed, it's very unlikely that hand-coded assembly would run faster than compiled C or C++ code.

Quote:
In theory, JIT code should outperform static machine code, because it is possible to dynamically recompile code on the fly as it runs, taking into account statistical information gathered by the running program.

I propose a law (like Moore's law, say): JIT will never be faster than precompiled stuff in average programs.

Java is 14 years old. How long will it take until these mythical performance characteristics of JIT methods are realized? I'm way too busy doing actual work to worry about these things, but once they come to fruition I'll be the first in line.

Quote:
Original post by zedz

I propose a law (like Moore's law, say): JIT will never be faster than precompiled stuff in average programs.


On-the-fly compilation and self-modifying code have the theoretical potential to be faster. Just consider a JIT that evaluates branches and eliminates them.

In theory, run-time optimizations can offer considerable gains, because they adapt to the data. For streaming operations, static optimizations are, in theory, better.
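A small sketch of that branch-elimination idea: a JIT that observes a flag never changing can recompile the loop without the test, whereas in static code you have to write (or template) the specialization by hand. The names here are illustrative:

#include <vector>

// Generic loop: the flag is tested on every iteration.
void scale(std::vector<float>& v, float k, bool clamp) {
    for (size_t i = 0; i < v.size(); ++i) {
        v[i] *= k;
        if (clamp && v[i] > 1.0f) v[i] = 1.0f; // branch evaluated n times
    }
}

// The hand-written equivalent of what an optimizing JIT can do after
// observing that 'clamp' is effectively a runtime constant: specialize
// the loop body and hoist the test out entirely.
template <bool Clamp>
void scale_fixed(std::vector<float>& v, float k) {
    for (size_t i = 0; i < v.size(); ++i) {
        v[i] *= k;
        if (Clamp && v[i] > 1.0f) v[i] = 1.0f; // folded away when Clamp is false
    }
}

void scale_dispatch(std::vector<float>& v, float k, bool clamp) {
    clamp ? scale_fixed<true>(v, k) : scale_fixed<false>(v, k);
}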


Unfortunately, we're stuck with either/or zealotry here. There are classes of problems better suited to run-time optimization, and others better suited to static optimization.

The reason why no attention is paid to this is that, as of right now, no technique is available that would make it worthwhile. One class of optimizations would need to offer O(n) improvements over the other, something that just isn't feasible in most cases.

Then there's also middle ground: caching. Profile-guided optimization is a hybrid between run-time and static optimization. It is claimed that in the case of Firefox, PGO improves run-time performance by 15-20% (probably needs a citation).

But none of the languages in question here offers adequate context information to improve performance beyond what is expected today. For a compiler to drastically improve run-time performance, it would need to be aware of the problem domain, the context, and the input. Historically, proposals for such languages have not been adopted in practice.

Also, this is all just academic. To this day, one of the biggest stumbling blocks in the adoption of Java remains that it does not produce .exe files, which means it's "broken" for desktop applications.

The mythical and popular x.y% performance differences in synthetic benchmarks simply aren't relevant except in a small fraction of cases. There are always completely different issues to worry about.

Here's my favorite example: an application which takes 1 minute 20 seconds to start on a multi-core machine with gigabytes of RAM, RAID drives, etc. The fact that it's written in C# is completely unrelated to why it's so comically slow; written in C++, it would not be any faster.

But since the market has no problem accepting such software, it seems to be good enough.

In my experience, an idiomatic approach to coding for popular VMs vs. straight C++ results in code that is about 5-10x "slower".

If the idiomatic approach is abandoned in favor of performance, then VMs can be brought close to idiomatic C++. With custom optimized allocations (at a factor of 10-50 longer development time), C++ will again outperform perfectly optimized managed code by a similar factor.

The important thing here is that producing optimal VM-based code takes an hour; in C++, it takes a week of profiling, designing, and optimizing, quite literally.

Concurrent code, however, needs to be measured for latency, and unless one truly relies on very specific response rates, there will be effectively no difference between managed and unmanaged code; the difference will again come down to memory allocation strategies.
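Since memory allocation strategy is the recurring theme, the kind of "custom optimized allocation" meant above is, for example, a bump/arena allocator that turns thousands of general-purpose allocations into pointer arithmetic. A bare-bones sketch (the names are illustrative):

#include <cstddef>
#include <cstdlib>
#include <new>

// Bump allocator: one upfront block, each allocation is a pointer bump,
// and everything is released at once. No per-object bookkeeping, no
// fragmentation, no GC pauses.
class Arena {
    char* base;
    size_t offset, capacity;
public:
    explicit Arena(size_t bytes)
        : base(static_cast<char*>(std::malloc(bytes))),
          offset(0), capacity(bytes) {}
    ~Arena() { std::free(base); }

    void* alloc(size_t bytes) {
        size_t aligned = (bytes + 15) & ~static_cast<size_t>(15); // 16-byte align
        if (offset + aligned > capacity) return 0; // out of scratch space
        void* p = base + offset;
        offset += aligned;
        return p;
    }
    void reset() { offset = 0; } // "free" everything, e.g. once per frame
};

struct Particle { float x, y, z, life; };

int main() {
    Arena frame(1 << 20); // 1 MB of per-frame scratch
    for (int f = 0; f < 60; ++f) {
        // placement-new into the arena instead of a general-purpose new
        Particle* p = new (frame.alloc(sizeof(Particle))) Particle();
        p->life = 1.0f;
        frame.reset(); // end of frame: all scratch objects gone at once
    }
    return 0;
}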

I am quite surprised at how many people support JIT now, considering that a few years ago, before C# or .NET even existed, people were bashing Java to no end about how slow it was.

Quote:
Original post by Momoko_Fan
I am quite surprised at how many people support JIT now, considering that a few years ago, before C# or .NET even existed, people were bashing Java to no end about how slow it was.


On Java:
- from 1.4 on, performance has been vastly improved compared to previous versions
- much of the "slowness" came from a usability perspective
-- string manipulation was slow (it uses StringBuffer internally now)
-- Swing was, partly due to poor design choices, a memory hog
-- most of the UI was indeed slow, for various reasons
-- OpenGL is used to improve performance in some cases
- garbage collection has been extensively studied and improved
- recent VMs are aware of the generally high thread count in Java applications and take that into consideration

On the practical side:
- Computers have improved. Five years ago, some people were still using five-year-old computers. What did a middle-range PC look like in 1998?
- Idioms have become better known, and general knowledge of dos and don'ts has spread
- Quality libraries emerged which solve common tasks efficiently
- After analyzing existing applications, commonly recurring patterns and code paths in the standard library, as well as in the JVM, were optimized
- People stopped trying to use Java on the client side and left it on servers, where no UI is needed

So yes, Java was "slow", and so was C#, but for many reasons other than just the VM. Both have matured, and each of them has settled into its niche, for which it is actively being optimized.

Another thing to look at is JavaScript engines. Almost all have seen factor 30-100 (that is, up to 100 times) improvements in just the last year.

Quote:
Original post by Buster2000
The only reason that C or C++ compiled code runs faster is the optimization work that has had more than four decades of constant improvement and refinement.

I know that I could be dead wrong here, but couldn't some of the knowledge obtained over those four decades be put to use in a JIT compiler?

OK, I'm no expert on this subject, but I think that one of the reasons applications made with JIT compilers are a little slower might not be the fault of the compiler but of the features of the language the application was written in. What I mean is that when languages use features like bounds checking for arrays, garbage collection, and similar things, they may be inserting code that has too many variables to optimize out.

How can any compiler (JIT or native) safely optimize a check like this away?

if (element >= 0 && element < maxelements)
    array[element] = data;

Now I know that this is something that in most cases doesn't matter, and it is probably the best way to go, but it will affect performance in cases like this.

Correct me if I’m wrong on any of this.

Quote:
Original post by Promit
Benchmarks of one particular piece of software, tested with two specific implementations, at one arbitrary point in time, on one particular hardware configuration, are barely even relevant at the time they're conducted, let alone being any indication of future performance.


I'm glad somebody was paying attention when they taught the scientific method. [smile]

Quote:
Original post by jpetrie
There are more important factors to consider -- those that can actually have an impact on design and architecture -- than whether the program will be at its heart native-compiled or JIT'd.


It depends on what you are designing. This is definitely an important performance issue, especially in game/physics engine design.

Quote:
Original post by asp_
Java is 14 years old. How long will it take until these mythical performance characteristics of JIT methods are realized? I'm way too busy doing actual work to worry about these things, but once they come to fruition I'll be the first in line.


Who said anything about Java?

Quote:
Original post by helpmenow
I know that I could be dead wrong here, but couldn't some of the knowledge obtained over those four decades be put to use in a JIT compiler?
Can and do. Everything a static compiler does, a JIT compiler takes a crack at; a JIT compiler can do everything a static compiler can. The customers of a JIT compiler, though, are simply not willing to wait around for a program to be compiled the way that's tolerated in a static compilation process. The result is that a JIT compiler drops optimizations that aren't really simple to perform, in order to hit that fraction-of-a-second compile time [I'm actually compiling a C++ program right now that will take nearly 2 hours to compile into a debug build. A release build takes about 3 times longer, without doing whole-program optimization. This just plain isn't tolerated in a JIT compiler].

Surely you remember big-O notation from your algorithms classes. A static compiler routinely does work in O(n) to O(n^3) time, where n is the size of the compilation unit with respect to certain characteristics. A JIT compiler has to abandon these in favor of optimizations at function scope rather than compilation-unit or program scope, and it uses a lot of O(1) heuristics for inter-function optimizations such as inlining. The result is a less-than-optimal compilation, due to corners cut to get the program up and running *right now* rather than 8-10 hours from now.
Quote:
Original post by helpmenow
OK, I'm no expert on this subject, but I think that one of the reasons applications made with JIT compilers are a little slower might not be the fault of the compiler but of the features of the language the application was written in. What I mean is that when languages use features like bounds checking for arrays, garbage collection, and similar things, they may be inserting code that has too many variables to optimize out.
These are language characteristics. A JIT compiled language does not have to be a managed language, but pretty much all of the big-name languages you see these days that are JIT compiled are also managed. These are two parallel but distinct trends.
Quote:
Original post by helpmenow
How can any compiler (JIT or native) safely optimize a check like this away?

if (element >= 0 && element < maxelements)
    array[element] = data;

Now I know that this is something that in most cases doesn't matter, and it is probably the best way to go, but it will affect performance in cases like this.

Correct me if I’m wrong on any of this.
This sort of work is done by examining programs for invariants, through something called "logical abstract interpretation". For example, consider this:
int *array = new int[n + 3]; // not shown: set each element of array to something
for (int i = 0; i < n; i++)
{
    array[i] = array[i + 1] * array[i + 2];
}
This is a pretty simple case, but it works something like this: the compiler sees an assignment of 0 to i. It follows through the loop and finds that each iteration results in i = i + 1. It finds that i is monotonically increasing, and drops the lower-bound checks on arrays indexed by i, since 0 [the lowest value of i] is known to be a valid index. Furthermore, the loop is bounded by n, which makes the highest index reached anywhere (n - 1) + 2 = n + 1. This is known to be inside the array, because the array is allocated with size n + 3 [valid indices 0 through n + 2]. Thus the upper-bound checks are dropped as well.

How this is actually done is a big mess, and it isn't a simple thing, but it works. It works by following program flow to find functional dependencies between values, using that data to establish program invariants, and then using those invariants to check for things like bounds, reachability, etc., which in turn feeds effective loop unrolling, removal of checks that always evaluate to true or false, constant propagation, dead code removal, and many other things.

This sort of thing is done in Java, C++, everything...

I never really cared for Java or other JIT compiled languages because of the gobs of memory they eat up. This is because they have a garbage collector, and it doesn't handle memory as efficiently as a person could in a language like C++. Furthermore, why trust that a program will be compiled into native code? Languages like Java might only do this occasionally. I'd say stick to native code languages like C and C++ for game development. They appear to be far more portable than languages like Java and C#.

