
Viro

[java] Java vs C benchmarks


Hi, I wonder if anyone here remembers the benchmark done by Chris Rijk a couple of years ago at Aceshardware? In brief, it compares the performance of Java against C, and the results are quite astonishing. The original article can be found here. However, a lot has changed on the Java front over the past 2 years. Java 1.1 was used in the benchmark, and if you're like me, you'd be curious to know how the current crop of Java VMs would perform. So I've rerun the benchmarks, using j2sdk1.4.1_01 (ClientVM and ServerVM) and gcc-3.2. The results can be found here. I've refrained from making any conclusions because I'm technically unqualified to do so, thus I'll leave you to draw your own. Do send me an email or post here with your comments.
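For anyone who wants a feel for how the timed runs are set up without digging into the linked sources, here is a minimal sketch of a timed benchmark in Java. The work() method and the warm-up pass are my own placeholders, not the actual benchmark code, and only 1.4-era APIs are used:

// Minimal sketch of a timed benchmark run (illustrative only; the real
// benchmark sources are in the linked article). Java 1.4-era APIs only.
public class TimedRun {
    // Placeholder workload standing in for one of the benchmark kernels.
    static double work(int n) {
        double sum = 0.0;
        for (int i = 1; i <= n; i++) {
            sum += Math.sqrt(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        work(1000000); // warm-up pass so HotSpot has compiled the hot code

        long start = System.currentTimeMillis();
        double result = work(10000000);
        long elapsed = System.currentTimeMillis() - start;

        System.out.println("result = " + result + ", elapsed = " + elapsed + " ms");
    }
}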

My results are broadly similar, except that C did better overall. This could be caused by a lot of things, though. I exported the Open Office spreadsheet and graphs to HTML here (tarballed to 45 KB). The fonts came out poorly from Open Office, but I couldn't get Gnumeric to let me export the images (which looked much nicer); oh, well.

My system that I tested on:
Linux 2.4.19-k7
1200 MHz Athlon T-Bird (266 MHz FSB)
256 MB 2100 DDR SDRAM
ABit KG-7 Mobo (AMD-761 Chipset)
GCC 3.2.1 and Sun's Java 2 JRE (1.4.1_01)

I didn't recompile the Java files, but I did recompile the C files based upon what you said you used in your document. I made sure enough memory was free for the Java VM to be happy (150+ MB, that's more than enough), and I tested non-first runs each time.

I experimented with the Fibonacci one, but I couldn't get it to speed up more than a little in GCC. I guess the Java VM is optimizing something a lot better than GCC in that case (very likely something with the recursion; I'll experiment with recursion optimizations in GCC some other time).
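For context, the fib test is (as far as I can tell) the classic doubly recursive Fibonacci, so the run time is dominated by call overhead rather than arithmetic, which is exactly the kind of thing HotSpot's inlining handles well. This is my reconstruction of the kernel, not the original source:

// Assumed shape of the fib benchmark: naive double recursion, so
// performance is dominated by method-call overhead. Reconstruction only,
// not the original benchmark source.
public class Fib {
    static long fib(int n) {
        if (n < 2) {
            return n;
        }
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println("fib(35) = " + fib(35));
    }
}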

Well, I just got MS Office XP a week ago, and I'm hitting everything with my shiny new 'hammer'.

I could export it in HTML format, but then Word doesn't do HTML well, does it?

Those results you got for Java look like the results from the ClientVM. From the docs at Sun, the ClientVM emphasizes fast startup times, and forgoes some of the nice optimizations, like loop unrolling and instruction scheduling.

Invoke the ServerVM by using the command java -server class.

That would probably give Java a much better showing.
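If you want to confirm which VM a given run actually picked up, the java.vm.name system property reports it. A quick check (standard 1.4 system properties, nothing benchmark-specific):

// Prints which HotSpot VM is running. With Sun's 1.4 JDK this reports
// something like "Java HotSpot(TM) Client VM" by default and
// "Java HotSpot(TM) Server VM" when launched with -server.
public class WhichVM {
    public static void main(String[] args) {
        System.out.println(System.getProperty("java.vm.name"));
        System.out.println(System.getProperty("java.vm.version"));
    }
}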

Viro dropped me an email about re-running the benchmarks. Thanks.

Firstly, a quick comment about running the benchmarks. If you pass a
parameter when calling them, the benchmarks will be run for 10x
longer. E.g., if you do "java life x" (or "java life -heavy x"), then
it'll take 10x longer to run. This makes the results a bit more
accurate (and means you don't actually get 0 scores for some of the
larger tests). The same applies to the C benchmarks.
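The argument handling itself is nothing exotic; something along these lines, where the extra command-line argument simply scales up the iteration count (the names here are illustrative placeholders, not the actual benchmark source):

// Illustrative handling of the "run 10x longer" switch described above.
// BASE_ITERATIONS and runBenchmark() are placeholders, not the real code.
public class HarnessSketch {
    static final int BASE_ITERATIONS = 1000;

    static void runBenchmark(int iterations) {
        // ... timed benchmark body would go here ...
    }

    public static void main(String[] args) {
        int iterations = BASE_ITERATIONS;
        if (args.length > 0) {
            iterations *= 10; // e.g. "java life x" runs 10x longer
        }
        runBenchmark(iterations);
    }
}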

This can also make a difference to the infinilife benchmark, because
it's the only one that does dynamic memory allocation during the timed
part of the benchmark. The memory pools for data with Sun's JVM will
adjust themselves to better optimise for a particular pattern of
memory usage, so the longer you run them, the more time they have to
adjust.
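If you're curious, you can watch this adjustment happen by sampling the heap while a long run is in progress; Runtime exposes the current totals. This is just a diagnostic aid I'm suggesting, not part of the benchmarks:

// Diagnostic aid (not part of the benchmarks): sample the heap while a
// long run is in progress to see the VM resizing its memory pools.
public class HeapWatch {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 10; i++) {
            long total = rt.totalMemory();
            long free = rt.freeMemory();
            System.out.println("heap total = " + (total / 1024) + " KB, "
                    + "in use = " + ((total - free) / 1024) + " KB");
            Thread.sleep(1000); // sample once a second
        }
    }
}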

The results don't actually surprise me at all. Incidentally, a few
days ago, I had an email from a Sun engineer working on the server
version of the JVM. (Side note: he gave me these results a few hours
after I emailed him a copy of the code. It's not like he had time to
add optimisations specifically for the code.) He'd run some tests with
their 1.4.2 JVM - which is currently in beta inside Sun and will
become publicly available (as a beta) in a few months:

Sun's C compiler (-fast -xO4 -xspace):
Averages: 157,152,154,155,136,113.1,110.9,247,246,284,0

JDK1.4.2-b04 (-server):
Averages: 203,201,202,202,189,186,166,398,412,517,0

This was the "life" benchmark run on a 750MHz UltraSPARC-III (a CPU
Sun designed). Sun''s own compiler is pretty darn good - much better
than GCC (on SPARC). The above test doesn''t make use of profiling or
other optimisation options, but is pretty realistic for "real world
code" - Sun''s JVM itself is compiled with something similar.

I don't know if there's anything new in the 1.4.2 JVM that'd make such
a difference - i.e. I don't know how 1.4.1 would compare. I hope to
find out.


An interesting technical report I came across the other day is from
Sun's research arm (Sun Labs):
http://research.sun.com/techrep/2002/abstract-114.html

They basically investigated performance for a text-to-speech engine,
originally written in C (based on an open source implementation).


One noticeable difference between the above comparison and my original
comparison is that mine was more of a base language comparison, and
made minimal use of libraries (this was deliberate). For pure language
comparisons, it's clear to me that Sun's JVM is improving faster than
the C compilers are, and by a fairly significant amount. By the time
Sun gets to their 1.5 JVM (18 months or so), for my set of benchmarks,
maybe the server version of Sun's JVM will beat (or come close to) GCC
in all tests, no matter what options you use for GCC. There are still
the commercial compilers, of course...


PS: I do plan to do a follow-up to my "binaries Vs byte-codes"
article. However, I probably won't be starting it until next year.
It'll be more of a "low level" (and back to basics) article
though.

Updated the benchmark results. Ran all of them 10 times each, which took about 2 hours to complete. Also, Chris's comments were added.

Time to post this on the Javalobby.

quote:
Original post by Viro
Those results you got for Java look like the results from the ClientVM. From the docs at Sun, the ClientVM emphasizes fast startup times, and forgoes some of the nice optimizations, like loop unrolling and instruction scheduling.

I just ran them with /opt/jre/bin/java class, so I guess that would make it the client VM. I didn't know how to access the server VM (and I was too lazy to find out at that time, heh).

I wanted to get back to using my CPU for something else (something that tries to use 100% CPU while running, so I couldn't run it while testing, even though I've niced it to a low priority), or I might have spent more time playing around.

One test environment I'd be interested in seeing these tests run in is one with low RAM (is there a Java VM specialized for low RAM settings? Not to imply that the 'main' one is bad at it, since I've never tested it). I have all of my RAM on a single module, so I couldn't pull any of it out for such a test. Maybe when I get the VGA converter for the old Mac Performa on the other side of the room I can try such a test.

quote:
Original post by Chris Rijk
One noticeable difference between the above comparison and my original comparison is that mine was more of a base language comparison, and made minimal use of libraries (this was deliberate).

From the short glimpse I took at Viro's code, it appeared that he doesn't really use the C runtime for timed code. I haven't looked over the Java code yet. That's why I didn't previously mention that my C runtime is only optimized for i386.

Just because C did better in my test, please don't assume I'm trying to suggest that GCC is doing a better job than Sun's VM. I'm running the tests again including the -server switch, which does appear to help the Java VM a lot.

Also, I uploaded a PDF of your document here, for all to view.

I've put up a copy of the executables compiled with Visual Studio .NET. The command-line I used to compile them was:

cl.exe /Ox /G6 /Fefft.exe fft.c (etc)

and you can download them here: http://www.codeka.com/tmp/vsNETExes.zip

I'm running the tests myself as I type this, but since I've got a dual Athlon 1800MP, it'd probably be better if you did it yourself, Viro (assuming you've got a Windows machine). I'll post my results as soon as they're done. I'll also get a copy of the cygwin compiler so I can compare with GCC, but that'll have to wait until later...

There's another test I'd be interested in running, and that's a multi-threaded one. I might also port the tests to C# and see how that compares.
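For the multi-threaded variant, the obvious way to split the work on a 1.4 VM is plain java.lang.Thread (java.util.concurrent only arrives with 1.5). A rough sketch of how I'd partition a kernel across two CPUs - work() here is a stand-in, not one of the actual benchmarks:

// Rough sketch of a two-way threaded benchmark using plain java.lang.Thread
// (1.4-era APIs). work() is a placeholder for one of the benchmark kernels.
public class ThreadedBench {
    static double work(int start, int end) {
        double sum = 0.0;
        for (int i = start; i < end; i++) {
            sum += Math.sqrt(i);
        }
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        final double[] partial = new double[2];
        final int n = 20000000;

        Thread t0 = new Thread() {
            public void run() { partial[0] = work(0, n / 2); }
        };
        Thread t1 = new Thread() {
            public void run() { partial[1] = work(n / 2, n); }
        };

        long begin = System.currentTimeMillis();
        t0.start();
        t1.start();
        t0.join();
        t1.join();
        long elapsed = System.currentTimeMillis() - begin;

        System.out.println("sum = " + (partial[0] + partial[1])
                + ", elapsed = " + elapsed + " ms");
    }
}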

If I had my way, I'd have all of you shot!


codeka.com - Just click it.

Here are my results: results.gif.

There are a few things to note about this. The most obvious is that fib.c didn't seem to work properly with Visual C++.NET and I got really weird results...

All the other tests had VC++.NET performing similarly to how GCC did in Viro's tests, except my Life under heavy load had VC++.NET keeping up with the Server VM a lot better. In the light load one, the Server VM did a lot better than in Viro's tests, and I'm not really sure why that is...

If I had my way, I'd have all of you shot!


codeka.com - Just click it.
