D?

Speaking of large-scale benchmarking, a good point of reference is always Java 6 vs. C++ as measured on the Alioth open-source benchmark. It runs a lot of different tests in order to cover the many kinds of computation performed in resource-intensive medium/large-scale programs.

BUT we cannot assume the results are the pure truth, considering the entry in their FAQ about the dynamic and JIT compilation of Java modules.
--"Low level programming is good for the programmer's soul" -- John Carmack
Quote:Original post by Hnefi
If you are benchmarking the cost of an algorithm, you wouldn't benchmark different languages to begin with!


Well, d'oh! My point is that people pick algorithms out of a hat and use microbenchmark results as proof that one language is faster or slower than another. Often, it's pointless.

Quote:The entire point of benchmarking different languages and comparing the results is to see which language (or language implementation) is faster for actual use. You can't just arbitrarily shave off bits and pieces of the execution routine for comparisons like that.


Microbenchmarks tell you very little about the real-world performance of a language. And in Java, the performance of the same benchmark will vary depending on the point in the program's lifecycle at which it is executed, as well as across different VMs. What, then, do these microbenchmarks actually measure? If you are benchmarking an algorithm during the JVM warmup phase, is that indicative of Java's performance? Or would it be more valid to measure performance during the runtime analysis phase? How about after runtime compilation, when that particular block of code is no longer interpreted? It's not an arbitrary shaving off of bits and pieces.
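To make the warmup point concrete, here's a minimal sketch (my own illustration, not from any of the benchmarks mentioned above) that times the same Java method repeatedly in a single process. The class name, workload, and iteration counts are arbitrary choices; the point is only that the first run or two, executed while the JVM is still interpreting and profiling the code, will usually report very different numbers than the later runs, after the JIT has compiled the hot method.

// WarmupDemo.java - illustrative only; workload and run count are arbitrary.
public class WarmupDemo {
    // A deliberately simple hot method; the JIT compiles it once it has
    // been called (and profiled) enough times.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i; // cast avoids int overflow
        }
        return total;
    }

    public static void main(String[] args) {
        long sink = 0; // keep the result live so the work isn't optimized away
        for (int run = 1; run <= 10; run++) {
            long start = System.nanoTime();
            sink += sumOfSquares(5000000);
            long elapsedMicros = (System.nanoTime() - start) / 1000;
            System.out.printf("run %2d: %,d us%n", run, elapsedMicros);
        }
        System.out.println("(checksum, ignore) " + sink);
    }
}

Which of those ten runs you choose to report decides whether you've measured interpreted, profiling, or JIT-compiled execution of exactly the same code, which is why a single microbenchmark number says so little on its own.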

Quote:
Microbenchmarks are useful for benchmarking the performance for microprograms. They tell us how good Java would be for implementing programs like ls or grep. If we want to know how efficient Java is at large-scale programs, we use large-scale benchmarks.


This I can agree with. But more often than not, people aren't considering the performance of a Java or C++ grep when they hold these benchmarks up to support their claims, one way or the other. They're using them as proof of a language's overall performance. It's a pointless exercise, but if people are going to do it they need to understand exactly what they're benchmarking.

