
# my C++ / D / C# benchmark!

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

71 replies to this topic

### #41Kambiz  Members

Posted 07 October 2006 - 11:06 PM

### #42Anonymous Poster  Guests

Posted 07 October 2006 - 11:49 PM

Quote:
 Original post by Kambiz: I thought that JIT compilation can produce more efficient code because it can optimize for the target machine.
But it has less time to compile. Obviously it can't spend a minute optimizing some small program that would run in 20 seconds without full optimizations. But C compilation can take as long as necessary to make the best output code.

### #43h3r3tic  Members

Posted 07 October 2006 - 11:55 PM

Kambiz, I encourage you to add the GDC D compiler to the charts :)
http://gdcwin.sourceforge.net/

### #44PlayerX  Members

Posted 08 October 2006 - 01:02 AM

Out of curiosity I implemented it in i386 machine code - it ended up 2 seconds slower than the C++ compiled version. At a guess (apart from shoddy code of course) I'd say it's because the compiler was more willing to unroll loops than I was. :)

### #45cody  Members

Posted 08 October 2006 - 01:49 AM

Nice benchmark, but of course it doesn't cover all performance-relevant aspects.

I would like to see a memory allocation benchmark, with lots of news and deletes. I guess the managed version will be better there.

### #46Kambiz  Members

Posted 08 October 2006 - 03:40 AM

Quote:
 Original post by h3r3tic: Kambiz, I encourage you to add the GDC D compiler to the charts :) http://gdcwin.sourceforge.net/
I cannot compile using gdmd; I get this error message:
object.d: module object cannot read file 'object.d'
I have never used those GNU compilers. Can you tell me the correct command line arguments?
I have tried “gdmd pi.d -O -release” with “C:\MinGW\include\d\3.4.5” added to the PATH variable.
I have to work now; I will try this again later.

Quote:
 Original post by codynice benchmark. but of course it doesnt cover all performance relevant aspects.i would like to see a memory allocation benchmark. with lots of news and deletes. i guess the managed version will be better there.
I agree.

### #47h3r3tic  Members

Posted 08 October 2006 - 05:28 AM

Quote:
 Original post by Kambiz: I cannot compile using gdmd; I get this error message: object.d: module object cannot read file 'object.d'. I have never used those GNU compilers, can you tell me the correct command line arguments? I have tried “gdmd pi.d -O -release” with “C:\MinGW\include\d\3.4.5” added to the PATH variable. I have to work now; I will try this again later.

The error message means that it cannot find gphobos, the GDC standard library. It could be a MinGW or GDC installation problem, but I'm not sure exactly what in this case. You could try adding 'C:\MinGW\include\d\3.4.5' to your path. I have mostly used DMD, so I can't really help much here. On a last note, maybe it's a problem with GDC based on GCC 3.4.5? I've got the one based on 3.4.2 and it seems to work fine.
As for the command line arguments, both:
> gdmd -inline -release -O pi.d
and
> gdc -frelease -finline -O3 pi.d
worked for me

### #48Stachel  Members

Posted 08 October 2006 - 05:56 AM

Quote:
 Original post by cody: Nice benchmark, but of course it doesn't cover all performance-relevant aspects.
Here's a benchmark showing that low level code generation doesn't always matter, sometimes higher level features matter much more:

http://www.digitalmars.com/d/cppstrings.d

Scroll down to see the benchmark.

### #49clayasaurus  Members

Posted 08 October 2006 - 06:46 AM

I believe you mean http://digitalmars.com/d/cppstrings.html

### #50DaveJF  Members

Posted 08 October 2006 - 07:29 AM

Quote:
 Original post by Kambiz: I'm just surprised a little: I thought that D shouldn't be much slower than C++ and I thought that C# would be much faster. Maybe there is some optimization option I have not used? What do you think about the results?

Which version of DMD? With v0.169 of DMD (the latest just posted) and the .NET 2.0 runtime I got:

Intel 2.2 GHz (times in seconds):
CPP 2.28 (cl /Ox /TP)
D 2.26 (dmd -O -inline -release)
C# 4.52 (csc /optimize+ /checked- /unsafe+)

AMD 3200+:
CPP 1.43
D 1.46
C# 3.25

Check this for some other comparisons:

Computer Language Shootout

### #51DaveJF  Members

Posted 08 October 2006 - 07:43 AM

Quote:
 Original post by Promit: Lastly, realize that if someone were to break out the MMX intrinsics, everything except C would be seriously screwed. You'd be looking at times for the C code that were about 1/4 to 1/3 what they are now. DMD, C#, and Java wouldn't be able to touch that -- and we can't even do autovectorization like that statically, let alone in a JITter.

D could do the hand-tuned MMX (in a compiler- and OS-portable way) as well:

D inline assembler

### #52DaveJF  Members

Posted 08 October 2006 - 08:01 AM

Quote:
 Original post by Kambiz: I thought that JIT compilation can produce more efficient code because it can optimize for the target machine. Maybe I should just wait for the next version of those compilers. C/C++ is a mature language and it is not surprising that there are excellent compilers available for it.

Ahhh, the marketing folks have been at it again... That's been promulgated for years by Sun and the Java lobby, and for a few micro-benchmarks, it may even be true <g>

Real-life code (especially game code) usually doesn't seem to follow.

Here's even a set of micro-benchmarks that Java still sucks in:

Shootout Benchmarks

Probably several 10's or 100's of millions of $'s and 10 years have been spent trying to make Java run like C++, and the different language semantics shouldn't prevent that for the most part. Yet it just isn't happening. Given the emphasis on Java performance, I figure there have to be some computer science reasons behind it (besides available time to optimize -- Sun's HotSpot -server takes all the time it needs), and I'd be willing to bet that statically compiled code and languages will be around for a few more decades because of it. From what I've seen, I think D kind of builds a bridge between C++ and Java.

### #53Rockoon1  Members

Posted 08 October 2006 - 03:11 PM

Quote:
 Original post by DaveJF: Probably several 10's or 100's of millions of $'s and 10 years have been spent trying to make Java run like C++, and the different language semantics shouldn't prevent that for the most part.

We can agree that the code representation (i.e., the language) isn't an issue in most cases, including Java, C#, and VB.NET.

Quote:
 Original post by DaveJF: Yet it just isn't happening. Given the emphasis on Java performance, I figure there have to be some computer science reasons behind it

While this may be true, this is the first time I have heard the idea mentioned. I don't see why the act of JITing is any different from regular compilation.

What I do see as different is that all these JIT languages use an intermediate language designed with porting in mind. Most compilers do compile to an intermediate language as a first stage; what could easily be different is that in those cases, porting isn't a consideration.

I am uncertain whether the drive for portability has constrained the design of the intermediate languages used in JITs. I am completely unfamiliar with Java's, and I am only minorly self-educated in MSIL. Both Java bytecode and MSIL are stack based, while perhaps the 'good' stand-alone compilers do not use a stack-based IL.

The issue in question, however, does not seem to be related at all. It simply seems that the .NET JIT neglects to make an "obvious" optimisation specific to the x86-style div instruction (in that it returns the modulus as well) - clearly the case at hand isn't a deficiency of the language or of JITing in general.

### #54Aldacron  GDNet+

Posted 08 October 2006 - 06:52 PM

Quote:
 Original post by DaveJF: Ahhh, the marketing folks have been at it again... That's been promulgated for years by Sun and the Java lobby, and for a few micro-benchmarks, it may even be true. Real-life code (especially game code) usually doesn't seem to follow.

Not true at all. You can write Java benchmarks and run them differently to see the effects of JIT compilation. Even running with different versions of the JVM will show improvements. The problem is that most people who write Java benchmarks are benchmarking the wrong thing.

JIT compilation doesn't happen instantly. The JVM first runs code in interpreted mode. It will only start compiling after a certain number of executions. Benchmarks that don't take this into account are flawed. If you run a timed loop for a few thousand iterations, you might be getting only interpreted mode -- which is nowhere near being a realistic benchmark. Maybe you will get a mix of interpreted mode and compiled mode, but in that case your results will be skewed by the compilation time.

### #55Stachel  Members

Posted 08 October 2006 - 07:53 PM

Quote:
Original post by Aldacron
Quote:
 Original post by DaveJF: Ahhh, the marketing folks have been at it again... That's been promulgated for years by Sun and the Java lobby, and for a few micro-benchmarks, it may even be true. Real-life code (especially game code) usually doesn't seem to follow.
Not true at all. You can write Java benchmarks and run them differently to see the effects of JIT compilation. Even running with different versions of the JVM will show improvements. The problem is that most people who write Java benchmarks are benchmarking the wrong thing.
I'm not so sure of that. It's not about writing a benchmark that favors the way Java works. It's about writing a benchmark that functionally resembles a real application.

Quote:
 JIT compilation doesn't happen instantly. The JVM first runs code in interpreted mode. It will only start compiling after a certain number of executions. Benchmarks that don't take this into account are flawed. If you run a timed loop for a few thousand iterations, you might be getting only interpreted mode -- which is nowhere near being a realistic benchmark. Maybe you will get a mix of interpreted mode and compiled mode, but in that case your results will be skewed by the compilation time. Read more about how to properly benchmark Java in this article and this article.
I read the article. What I inferred from it is that Java benchmark results are erratic and often unpredictable, and that the erratic and unpredictable parts always seem to result in slower programs. Only under the very best of ideal conditions can Java hope to approach static compilation results, and for that the Java app has to run for hours first.

Even worse than what that does to app speed is the corollary that the programmer is going to have a hard time optimizing his algorithms, because he can't get repeatable timings from one run to the next. He can't tell if his algorithm changes are making things better or worse.

### #56Anonymous Poster  Guests

Posted 08 October 2006 - 09:00 PM

Quote:
 Original post by Stachel Only under the very best of ideal conditions can Java hope to approach static compilation results, and for that the Java app has to run for hours first.
Um. Kambiz ran his test for 15 seconds and it was as fast as the statically compiled C version (the C version's time was shown without decimals, so its result could be anywhere in the range 15-15.99). So much for "approaching" the results or "very best ideal conditions" (just a random test written originally for D[*]) or even "running for hours first"...

[*] And slightly altered by Promit for .NET

### #57Stachel  Members

Posted 08 October 2006 - 09:26 PM

Quote:
Original post by Anonymous Poster
Quote:
 Original post by Stachel Only under the very best of ideal conditions can Java hope to approach static compilation results, and for that the Java app has to run for hours first.
Um. Kambiz ran his test for 15 seconds and it was as fast as the statically compiled C version (the C version's time was shown without decimals, so its result could be anywhere in the range 15-15.99). So much for "approaching" the results or "very best ideal conditions" (just a random test written originally for D[*]) or even "running for hours first"...

[*] And slightly altered by Promit for .NET

The running for hours bit comes from the article cited: "Timing measurements in the face of continuous recompilation can be quite noisy and misleading, and it is often necessary to run Java code for quite a long time (I've seen anecdotes of speedups hours or even days after a program starts running) before obtaining useful performance data." The author, Brian Goetz, is an expert in the field.

Secondly, my cited post was not about that particular benchmark, but about Java benchmarking in general, and it was based on the Goetz article. I stated that quite clearly.

### #58MaulingMonkey  Members

Posted 08 October 2006 - 09:55 PM

Quote:
Original post by Stachel
Quote:
Original post by Anonymous Poster
Quote:
 Original post by Stachel Only under the very best of ideal conditions can Java hope to approach static compilation results, and for that the Java app has to run for hours first.
Um. Kambiz ran his test for 15 seconds and it was as fast as the statically compiled C version (the C version's time was shown without decimals, so its result could be anywhere in the range 15-15.99). So much for "approaching" the results or "very best ideal conditions" (just a random test written originally for D[*]) or even "running for hours first"...

[*] And slightly altered by Promit for .NET

The running for hours bit comes from the article cited: "Timing measurements in the face of continuous recompilation can be quite noisy and misleading, and it is often necessary to run Java code for quite a long time (I've seen anecdotes of speedups hours or even days after a program starts running) before obtaining useful performance data." The author, Brian Goetz, is an expert in the field.

Secondly, my cited post was not about that particular benchmark, but about Java benchmarking in general, and it was based on the Goetz article. I stated that quite clearly.

The key here is that your original statement was overgeneralized. It does not matter that you were not referring to that specific benchmark, because it is contained within the boundaries of the set to which the statement purportedly applies ("the Java app", referring to any arbitrary application of Java).

And while Brian Goetz may be an expert, he certainly didn't make this overgeneralization himself. Unless I've inadvertently missed it, he in fact makes absolutely no reference to either compilation method resulting in faster programs than the other. The "often hours" figure comes from reaching optimals within that language, which are never compared to the static version. For all you know, this is significantly faster than the equivalent C/C++ due to optimizations based on input data which absolutely could not be made in the equivalent statically compiled program (because, as that first article mentions, Java is able to make profile-guided assumptions, even when those assumptions may later prove invalid for the general case!). For all you know, it's been running faster than the static equivalent for all those hours!

Not that I think this is the general case, but I'm not even going to state that, as I would be woefully under-evidenced, to the point of making such an undereducated assertion completely worthless.

### #59Stachel  Members

Posted 09 October 2006 - 08:52 AM

Quote:
 Original post by MaulingMonkey: For all you know, this is significantly faster than the equivalent C/C++ due to optimizations based on input data which absolutely could not be made in the equivalent statically compiled program (because, as that first article mentions, Java is able to make profile-guided assumptions, even when those assumptions may later prove invalid for the general case!). For all you know, it's been running faster than the static equivalent for all those hours!
I read about the profile-guided assumptions, and the theory that JITs can therefore produce faster code. But the results always seem to be missing in action.

Here's one set of benchmarks comparing Java with C++: http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=java&lang2=gpp

Score 1 out of 17 for Java.

### #60dbzprogrammer  Members

Posted 09 October 2006 - 09:18 AM

Some seem to be attached to certain languages more than others.

I really enjoyed it.

IMO, nothing in life is fair. It's true that there are some flaws in the benchmark, but so many comparisons made today aren't fair...

Anyways, there's a more interesting point to it all.

What can we conclude from all this? What am I really seeing? What should you be seeing?

C/C++ is the best tool for intense calculations.

D is not, along with C#.

What are they good for?

Building larger projects that don't require so much computation. And also decreasing development time...

The key is that you should use the best tool for the project, something that meets your needs.

(I'm not implying any sarcasm through this whole thing, just an FYI.)
