Managed DirectX and Business Game Development

I have just watched the .NET Show. Some of the guys on it said that managed code still has a performance hit: it is still slower than native code, and no developer would use it in an FPS game. However, that episode of the .NET Show was produced in 2003. These days the .NET Framework 2.0 has been released and DX9 has improved a lot. So, is Managed DirectX efficient enough for commercial game development? Are there any games actually developed with Managed DirectX?
You can find a list of a few commercial titles developed using the .NET Framework over at http://www.thezbuffer.com.

Right now, lots of developers are using the .NET Framework for tools, and resistance to using it on the production side is slipping, but the lack of compatible copy-protection systems and of decent, performant obfuscation and encryption is definitely hurting.
Michael Russell / QA Manager, Ritual Entertainment
I used to play SimCity on a 1:1 scale.
Managed DirectX reminds me of D3D retained mode. Don't use it because it sucks.
Got it! I have seen the website and actually found some of them. I still think Managed DirectX is a little bit slower, because when I ran the DirectX SDK examples, the same program written in C# with Managed DirectX was about 20 percent slower in FPS (frames per second) than the native C++ version on my machine (NVIDIA GeForce MX400, P4 1.7 GHz, old enough, right?). The DirectX expert on the .NET Show said that code he wrote in Managed DirectX could achieve 98% of native performance. Judging by the results on my machine, I find that hard to believe. Maybe my video card is just too old and that widens the performance gap.
Don't trust the numbers you get from SDK samples and other "tests." Any performance figure over about 85-90 FPS is going to be useless in terms of measuring real performance. Consider the difference between 300 and 270 FPS, for example. At 300 FPS, it takes about 3.33 milliseconds to render a frame. At 270 FPS, it takes 3.7 milliseconds. The difference is 0.37 milliseconds, about 370 microseconds. There are 1 million microseconds in 1 second. Think about that for a minute.
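To put concrete numbers on that, here is a minimal C++ sketch (my own illustration, not taken from any SDK sample) that converts FPS figures into frame times:

#include <cstdio>

// Convert a frames-per-second figure into milliseconds per frame.
double FrameTimeMs(double fps)
{
    return 1000.0 / fps;
}

int main()
{
    const double fastMs = FrameTimeMs(300.0); // ~3.33 ms per frame
    const double slowMs = FrameTimeMs(270.0); // ~3.70 ms per frame

    std::printf("300 FPS = %.2f ms/frame\n", fastMs);
    std::printf("270 FPS = %.2f ms/frame\n", slowMs);
    std::printf("Difference = %.0f microseconds per frame\n", (slowMs - fastMs) * 1000.0);
    return 0;
}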

The upshot of this is that performance testing is only meaningful under the kinds of real-world load you see in actual games; simple tests actually run too fast to provide reliable indicators of how a language/platform will perform in real use.

Unfortunately, I don't have any offhand figures for how MDX compares to vanilla DX, but judging from some of the games out there using MDX, the performance difference is effectively meaningless.


Developer time is infinitely more valuable than CPU time; saving 8 months of work at the "cost" of 10 FPS is an utter no-brainer decision. However, this is a two-way street; developers with a very large codebase in, say, C++ face a very expensive change if they decide to move to MDX, because they have to recreate their codebase. (Of course there are stopgaps like C++/CLI, but there's really not much point to C++/CLI and MDX - it's just adding an extra hoop to jump through at no real benefit.)

This - combined with issues like copy protection and obfuscation as mentioned earlier - is what's keeping MDX from taking hold faster. It has basically nothing to do with the merits of MDX itself.


There's also D3D10 to consider; with the massive improvements coming in the next release of DirectX, it would be foolish to invest in retooling a development studio to use MDX, only to have to do it all over again to take advantage of D3D10. A lot of shops will make the shift at the same time in order to minimize downtime - again, assuming they're willing to take the hit on copy protection and the risk of having their game decompiled.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Quote:Consider the difference between 300 and 270 FPS, for example. At 300 FPS, it takes about 3.33 milliseconds to render a frame. At 270 FPS, it takes 3.7 milliseconds. The difference is 0.37 milliseconds, about 370 microseconds. There are 1 million microseconds in 1 second. Think about that for a minute.
(or 60 million microseconds)
The above, whilst true, is not the right way to measure performance. I remember a web page some MVP wrote detailing this; unfortunately, the logic above is flawed. Time is not the indicator you should use, but percentage.

Quote:The DirectX expert on the .NET Show said that code he wrote in Managed DirectX could achieve 98% of native performance
Well, in that case you can take it for granted that it's slower (after all, he's going to want to portray .NET in the most positive light).

Quote:Developer time is infinitely more valuable than CPU time; saving 8 months of work at the "cost" of 10 FPS is an utter no-brainer decision
I agree completely in this scenario. Then again, you mention D3D10, which is going to have lower CPU overhead than D3D9, so that 10 FPS could be 50 FPS.
Quote:Original post by zedzeek
Quote:Consider the difference between 300 and 270 FPS, for example. At 300 FPS, it takes about 3.33 milliseconds to render a frame. At 270 FPS, it takes 3.7 milliseconds. The difference is 0.37 milliseconds, about 370 microseconds. There are 1 million microseconds in 1 second. Think about that for a minute.
(or 60 million microseconds)
The above, whilst true, is not the right way to measure performance. I remember a web page some MVP wrote detailing this; unfortunately, the logic above is flawed. Time is not the indicator you should use, but percentage.



Says who? Percentage of what?

Every competent programmer I know measures performance in terms of time-to-complete-operations. For games, the most common metric is time-per-frame. This is in direct contrast to the incorrect (but popular) use of frames-per-second, which can very easily be used to produce misleading or useless statistics.
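For reference, a bare-bones way to measure time-per-frame on Windows looks something like the sketch below (QueryPerformanceCounter is the usual high-resolution timer; RenderFrame is a hypothetical stand-in for the real per-frame work):

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER frequency;
    LARGE_INTEGER lastTime;
    LARGE_INTEGER currentTime;

    QueryPerformanceFrequency(&frequency); // counter ticks per second
    QueryPerformanceCounter(&lastTime);

    for (int frame = 0; frame < 100; ++frame)
    {
        // RenderFrame();  // hypothetical: the real update/render work goes here

        QueryPerformanceCounter(&currentTime);
        const double frameMs = 1000.0 *
            static_cast<double>(currentTime.QuadPart - lastTime.QuadPart) /
            static_cast<double>(frequency.QuadPart);
        lastTime = currentTime;

        std::printf("frame %d: %.3f ms\n", frame, frameMs);
    }
    return 0;
}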


If the MVP page you're referring to happens to be Tom's DirectX FAQ, he agrees with me.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Quote:Original post by ApochPiQ
This - combined with issues like copy protection and obfuscation as mentioned earlier - is what's keeping MDX from taking hold faster. It has basically nothing to do with the merits of MDX itself.


This has very little to do with adoption in the world of AAA titles. For any cross-platform project, the decision is made before you even get past the fact that .NET is not currently available on any console. In a world of economies of scale, any efficiency gained on one project by using .NET is more than cancelled out by the loss of that economy of scale.

In addition, given the lofty quality requirements of the console manufacturers, there's no chance in hell any publisher is going to use a VM without a guaranteed real-time garbage collector; otherwise, the game has very little hope of getting approved, as the pauses caused by the garbage collector will almost certainly result in failure on a QOI test.
Quote:If the MVP page you're referring to happens to be Tom's DirectX FAQ, he agrees with me.


Found it here:
http://www.mvps.org/directx/articles/fps_versus_frame_time.htm

Percentage difference is what should be compared, not time (BTW, FPS is intrinsically linked to time anyway).

What's more impressive?

A/ Changing code so that instead of taking 1 hour it takes 59 minutes to complete (a saving of a massive 1 minute)
B/ Changing code so that instead of taking 1 sec it takes 1 msec to complete (a saving of only 0.999 secs)

I think you're misunderstanding the point made by that article. The author does mention percentage differences at the end, but as a way of illustrating why FPS is a bad metric. He closes with:
Quote:
So take that as food for thought if you are currently using an FPS counter as a measure of your performance. If you want a quick indication of performance between profiling sessions, use frame time instead.


Emphasis mine. In other words, he is not advocating the use of percentage difference directly. Percentage difference is often awkward as a metric because a pure percentage lacks any kind of contextual information. The bottom line of any kind of performance measuring is determining the duration that an end-user will have to sit and wait for your algorithm to complete. You can use a percentage difference (assuming it is a difference of a valid direct, linear measurement, i.e., time-per-frame and not frames-per-second), but that doesn't really tell you what you want to know. If I make a 15% improvement to my main game loop, how much time did I actually save?
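As a quick illustration with made-up baseline numbers, the same 15% improvement saves very different amounts of time depending on the frame budget it comes out of:

#include <cstdio>

// How much time a percentage improvement saves depends entirely on the baseline.
double TimeSavedMs(double baselineMs, double improvementFraction)
{
    return baselineMs * improvementFraction;
}

int main()
{
    // Hypothetical baselines: a heavy 33 ms frame (about 30 FPS)
    // and a light 4 ms frame (about 250 FPS).
    std::printf("15%% off a 33 ms frame saves %.2f ms\n", TimeSavedMs(33.0, 0.15));
    std::printf("15%% off a 4 ms frame saves %.2f ms\n", TimeSavedMs(4.0, 0.15));
    return 0;
}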

Quote:
( btw fps is intrinsically linked to time anyways )


So is percentage difference. FPS is not a linear measure, which is why it's a poor metric. Percentage difference is noncontextual, which is why it can be a poor metric. Direct timing of an operation is the general-case ideal metric because it can be intuitively converted to pretty much any other metric.

Quote:
What's more impressive?

A/ Changing code so that instead of taking 1 hour it takes 59 minutes to complete (a saving of a massive 1 minute)
B/ Changing code so that instead of taking 1 sec it takes 1 msec to complete (a saving of only 0.999 secs)


Neither. This is a contrived example designed to make it seem like time-based measurements are invalid. The first example saves a "massive one minute" of real time, but that is only about a 1.7% reduction in run time. The second saves "only" 0.999 seconds, but that is a 99.9% reduction. They're not the same algorithm, though, so comparing their performance metrics as percentage differences is just as invalid as comparing their time-based metrics. Apples to oranges.
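For completeness, here is the arithmetic behind those two figures, sketched out with the numbers from the example above:

#include <cstdio>

int main()
{
    // Scenario A: 60 minutes down to 59 minutes.
    const double savedA   = 60.0 - 59.0;           // 1 minute saved
    const double percentA = savedA / 60.0 * 100.0; // roughly 1.7% of the original time

    // Scenario B: 1 second down to 0.001 seconds.
    const double savedB   = 1.0 - 0.001;           // 0.999 seconds saved
    const double percentB = savedB / 1.0 * 100.0;  // 99.9% of the original time

    std::printf("A: saved %.0f minute  (%.1f%% of the original time)\n", savedA, percentA);
    std::printf("B: saved %.3f seconds (%.1f%% of the original time)\n", savedB, percentB);
    return 0;
}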

[Edited by - jpetrie on July 31, 2006 9:44:54 AM]

