
JIT vs compile-time optimization


Can a JIT optimizer do everything a compile-time optimizer can? In my mind it seems like a compile-time optimizer has an advantage in that it can be more thorough, whereas the JIT has to balance between time taken and total benefit. Obviously, a JIT compiler knows a great deal more than a compile-time optimizer does about the operating environment, but some guys at work mentioned MS might work on highly specific JIT optimizers for platforms (think: WinXP with Athlon XP processor) and the result could be managed code that is faster than native code. I don't buy it, personally.

JIT code's one disadvantage is that it must be compiled on the target machine anyway. This may be time-consuming.
Also, the 'source' must go with the program at distribution.
In addition, there is the CLR overhead to consider.

However, the .NET CLR compiles the code on the first run of the program. Then it can theoretically be as fast as native code.
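To make the "compiled on first run" point concrete, here is a hedged Java sketch (HotSpot's JIT plays the same role as the CLR here: bytecode is compiled on first use, and hot methods get optimised further). The first timed call includes interpretation and compilation work; the warmed-up call runs compiled code. Actual timings vary by machine, so the sketch only prints them rather than claiming numbers.

```java
public class WarmupDemo {
    // A small method for the JIT to compile; repeated calls make it "hot".
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        // First call: may run interpreted and trigger JIT compilation.
        long t0 = System.nanoTime();
        long result = sum(100_000);
        long cold = System.nanoTime() - t0;

        // Warm the method up, then time it again on compiled code.
        for (int i = 0; i < 1_000; i++) sum(100_000);
        long t1 = System.nanoTime();
        sum(100_000);
        long warm = System.nanoTime() - t1;

        System.out.println("result=" + result
                + " cold=" + cold + "ns warm=" + warm + "ns");
    }
}
```

The compilation cost is paid once per process, which is the trade-off being discussed: slower first run, native-speed runs afterwards.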

So, if the executable size is not a problem, I would use managed code.

[edited by - Nik02 on August 5, 2003 11:04:24 AM]

So much wrong information.

::JIT code's one disadvantage is that it must be compiled on the
::target machine anyway.

How is this a disadvantage? You own a P4, I own an Athlon, and on both machines the code is compiled optimally. Distribute a regular app and this does not happen.

::Also, the 'source' must go with the program at distribution.

Nice that you put "source" in quotation marks. What source? Java bytecode is a type of assembler, as is MS IL.

::In addition, there is the clr overhead to consider.

Not necessarily, and the "CLR overhead" has nothing really to do with JITing. The "CLR overhead" pays for services that are painfully missing otherwise.
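One example of such a service, sketched in Java since its managed runtime provides the same guarantees as the CLR: every array access is bounds-checked by the runtime, and a violation becomes a catchable exception instead of silent memory corruption.

```java
public class SafetyDemo {
    // Returns a description of what happens on a given array access.
    static String probe(int[] a, int i) {
        try {
            return "value=" + a[i];
        } catch (ArrayIndexOutOfBoundsException e) {
            // The runtime inserted this bounds check for us; in native
            // C++ the same out-of-range access is undefined behaviour.
            return "out of range";
        }
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30};
        System.out.println(probe(data, 1));  // prints "value=20"
        System.out.println(probe(data, 7));  // prints "out of range"
    }
}
```

Garbage collection, type safety, and verification are further services in the same "overhead" bucket.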

::However, the .net clr compiles the code on the first run of
::the program. Then it can theorethically be as fast as native
::code.

Learn your logic, please. Once the CLR has compiled the code, it IS as fast as native code, because it IS native code. The real question is whether it is as well optimised as native code generated by a C++ compiler.

::So, if the executable size is not a problem, i would use
::managed code.

Au contraire - the executable is much smaller for a C# application than for a comparable C++ application. NOW - you could say "but there is the huge runtime", but then I say "it is not the executable, stupid". See, the .exe is smaller. The runtime provides the services.

If the size of the INSTALLER is not a problem, then go for managed code :-) The EXECUTABLE is smaller in most cases, due to the RUNTIME, which, though, is not part of the executable.

Now, at any rate, some information from tests:

* Quake 2, reworked into a managed C++ application (with no code changes beyond those necessary to get it compiled - call this one of the worst uses of managed code, possibly), is 15% slower than the native version :-)
* Other tests show results ranging from that up to a 10% INCREASE in performance under certain conditions.

The JIT has the advantage of being able to optimise for a platform, WHILE the compiler has more time for optimisations (and uses it - at the moment the Managed C++ bytecode is much better than the C# bytecode, in terms of what the compiler optimises). Normally it should not make a difference.
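The platform-specific part of that trade-off is easy to see: the JIT runs on the end user's machine, so it can query the actual hardware, while an ahead-of-time compiler only knows the target it was pointed at during the build. A hedged Java sketch of the information available at run time:

```java
public class PlatformDemo {
    public static void main(String[] args) {
        // A JIT can base code-generation decisions on facts like these,
        // none of which an ahead-of-time compiler knows at build time.
        System.out.println("os.arch = " + System.getProperty("os.arch"));
        System.out.println("os.name = " + System.getProperty("os.name"));
        System.out.println("cpus    = "
                + Runtime.getRuntime().availableProcessors());
        System.out.println("max heap = "
                + Runtime.getRuntime().maxMemory() + " bytes");
    }
}
```

In practice, JITs such as HotSpot use exactly this kind of information to pick instruction-set extensions and tune thresholds per machine.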

Yeah I saw that Quake 2 port, very cool stuff.

It looks like a mix of the two approaches would be best, although I wouldn't be surprised if .NET already does this. It seems like the compile-time optimizer could do the work that is platform-agnostic and takes some time, whereas the JIT optimizer could then tailor the result for the specific computer and even the particular operating environment (low/high memory, server vs. workstation, etc.).

Actually, my stuff is so reliant on templates that I can't really use managed code at the moment. I know you can emulate them in various ways, but I'll put up with the Win32 API for this project. I was just curious about how well JIT optimizers stacked up against compile-time ones.

quote:
Original post by thona
::JIT code's one disadvantage is that it must be compiled on the
::target machine anyway.

How is this a disadvantage? You own a P4, I own an Athlon, and on both machines the code is compiled optimally. Distribute a regular app and this does not happen.



I mean that the compiling takes time. However, this is not a bad disadvantage, since it only needs to be done once. It is an enormous advantage in the long run!
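The "pay once, reuse forever" pattern is the same one you see elsewhere in managed runtimes. As a hedged analogy in Java: a regular expression is compiled once up front, and every later call reuses the compiled form, just as JITted code is compiled once and then re-entered at native speed.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CompileOnceDemo {
    // Pay the compilation cost once, up front...
    static final Pattern WORD = Pattern.compile("[A-Za-z]+");

    // ...then reuse the compiled form on every call, like JITted code.
    static int countWords(String text) {
        Matcher m = WORD.matcher(text);
        int n = 0;
        while (m.find()) n++;
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countWords("jit vs aot, round two"));  // prints 5
    }
}
```

The up-front cost only hurts if the compiled result is used once; amortised over a long-running program it disappears.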

quote:

::Also, the 'source' must go with the program at distribution.

Nice that you put "source" in quotation marks. What source? Java bytecode is a type of assembler, as is MS IL.



Have you looked at some managed code with ildasm?
The methods are clearly separated, for example.
You can easily disassemble a program, change something, and assemble it right back, resulting in a fully working assembly, only with your changes.
This is not an easy feat with native code.
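The Java side has the same property, which is easy to check. A hedged sketch (assuming a JDK 9+ with the `javap` tool available via `java.util.spi.ToolProvider`): disassembling any class shows each method as a cleanly separated, named listing, which is exactly what makes the disassemble-edit-reassemble round trip practical.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.spi.ToolProvider;

public class DisasmDemo {
    // Disassemble a class with javap, the Java counterpart of ildasm.
    static String disassemble(String className) {
        ToolProvider javap = ToolProvider.findFirst("javap")
                .orElseThrow(() -> new IllegalStateException("javap not found"));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        javap.run(new PrintStream(out), System.err, "-c", className);
        return out.toString();
    }

    public static void main(String[] args) {
        // Method names and boundaries survive in the bytecode listing.
        String listing = disassemble("java.lang.Math");
        System.out.println(listing.contains("max("));
    }
}
```

On the .NET side the round trip is the `ildasm` / `ilasm` pair the post refers to; on the Java side reassembly needs a third-party assembler, but the disassembly is equally readable.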

quote:

::In addition, there is the clr overhead to consider.

Not necessarily, and the "clr overhead" has not really anything to do with JITing. The "clr overhead" is for services provided that are painfully missing otherwise.



On second thought, I agree with you on that.

quote:

::However, the .NET CLR compiles the code on the first run of
::the program. Then it can theoretically be as fast as native
::code.

Learn your logic, please. Once the CLR has compiled the code, it IS as fast as native code, because it IS native code. The real question is whether it is as well optimised as native code generated by a C++ compiler.

The JIT has the advantage of being able to optimise for a platform, WHILE the compiler has more time for optimisations (and uses it - at the moment the Managed C++ bytecode is much better than the C# bytecode, in terms of what the compiler optimises). Normally it should not make a difference.



I was not clear with my terms here.
I meant theoretically as fast as finely-tuned machine code.
The compiler has a deeper understanding of the program, and more time to optimize it, when compiling on the developer's machine. Also, this is not usually a huge problem.


I generally use managed code myself, and find it very productive.
Thanks, thona, for clarifying some things for me too; I now see I'm not a Managed pro yet.

kind rgds, Nik
