future of .NET

Just a general question: the next release of Windows is supposed to include the .NET Framework already integrated into the OS, right? So I'm assuming that, performance-wise, we should see an increase (maybe just slight) in the speed at which .NET programs execute? The reason I ask is that I am planning on possibly developing my 3D game in C#, and I have seen a lot of naysayers about doing this. But by the time my game is done, Microsoft's Longhorn should be out, with its integrated .NET runtime (they might even have 2.0 embedded by then), so I assume there should be some performance increases and it should not be so bad to have a full 3D game developed for .NET. Opinions, predictions, etc. welcomed.

If you're making Doom 3 or Half-Life 2 then stick with C/C++ (for now). For smaller arcade games that don't require so much power, I suggest you use C# as the performance tradeoffs are well worth the productivity gains.

Guest Anonymous Poster
The problem in my opinion is not the performance of managed code per se, because the rewards can arguably be worth any slight degradation.

The bigger problem is the non-deterministic nature of the CLR's garbage collection. Personally, I think a non-deterministic GC has absolutely no place in the architecture or codebase of any potentially commercial title. That means absolutely not in my engine code.

Others may differ on this extreme view and point out that, with a lot of care, it can be worked around. Personally, I see this as a waste of time; any paranoid developer should want to avoid wondering when a GC will kick in in the middle of gameplay, causing animation to stutter.

If forthcoming versions of .NET can offer deterministic GC, then I think you can be much closer to a viable .NET based commercial game architecture.

Well, I knew that games like Doom 3 and Half-Life 2 are very much beyond the capabilities of the current .NET framework, which is understandable. I was just curious to see whether future versions of .NET would even get halfway to being able to handle such projects.

As for the garbage collection, I knew that it was there; I just didn't know that it was prone to jump in and work whenever it wants to. It would be very annoying for that to occur in a 3D app. I had not thought of that.

Good comments!

Quote:
The bigger problem is the non-deterministic nature of the CLR's garbage collection. Personally, I think a non-deterministic GC has absolutely no place in the architecture or codebase of any potentially commercial title. That means absolutely not in my engine code.


I think you're drastically overestimating the frequency with which the GC runs and the amount of resources it uses while running. Both numbers are quite low (though admittedly not insignificant). If your GC is running frequently, then there are definitely other factors at work.

Quote:
Original post by Anonymous Poster
Others may differ on this extreme view and point out that, with a lot of care, it can be worked around. Personally, I see this as a waste of time; any paranoid developer should want to avoid wondering when a GC will kick in in the middle of gameplay, causing animation to stutter.


I hate to say you're wrong.. but..

Considering there are already commercial games out there running on the .NET framework, plus the fact that I have first-hand experience in commercial C# game development, I can say that your worries about the GC are legit, but you vastly overstate the issue.

Sure, if you have bad code, then you can bog down the GC, but well-written code shouldn't bog it down unless you are doing some very heavy object creation. We haven't run into a problem at all; honestly, I would really like to know where you are bogging down and hitting the GC so heavily.

Sure, you wouldn't write Doom 3 or Half-Life 2 using the framework... but you wouldn't do that without a company of 15+ and existing tools/basecode anyway. Using the .NET framework to write commercial games is perfectly viable, and it only gets better as Longhorn is released, the JITer starts performing better optimizations, and the GC is improved.

Actually, I've run into the opposite problem: sometimes it does not run often enough :-)

ChiefArmstrong - as with evaluating any technology, I suggest you prototype. Try to cook up a simple renderer that uses something close to your expected polygon/texture budget, maybe a big room with a bunch of bouncing cubes, and make the cubes bounce off each other to simulate simple collision detection. See if you can get it to run smoothly with your target technology. If you can, then add more complexity to the renderer or to other resources that are critical for your game, and iterate.

Pretty quickly you should be able to tell whether the technology will be able to support your game. If it can't, you can decide whether to continue with the technology or change your game to allow you to keep using it.

I would not count on .NET getting much faster in the future. It may, but I would not count on it. Given the current state of Microsoft's plans, it looks like WinXP is going to be around for quite a bit longer, with several Longhorn technologies grafted onto it. So if you're planning to produce a .NET game in the next few years, I'd target .NET 2.0 on WinXP as your main platform.

It amazes me that people are okay with the GC pausing their apps while it collects.

Totally unacceptable.

Quote:
Original post by antareus
It amazes me that people are okay with the GC pausing their apps while it collects.

It amazes me that people are okay with the OS pausing their apps while executing other tasks.

As far as the garbage collector goes, there are really only two collections that you need to worry about: GC(0) and GC(2).

GC(0) is the most common collection in the framework. It occurs frequently and sweeps out recently allocated objects that are no longer referenced. Its cost is roughly comparable to that of a page fault.

GC(2) is rarely called. If your application is minimized, system memory is running out, or you're doing a ton of large allocations, GC(2) will be called. A GC(2) can take a couple of seconds, depending on how many objects are "live" in your system.

As a side note, any object over 85 kilobytes (85,000 bytes) in size is automatically allocated on the large object heap and is collected only during a GC(2).

So, easy GC performance tips: create your large objects at application startup and maintain references to them, and drop references to small objects quickly so that they'll stay in GC(0).
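Those tips are easy to demonstrate. A minimal sketch (the buffer size and all names here are invented for illustration, not taken from any engine mentioned in this thread):

```csharp
using System;

class PreallocDemo
{
    // A large buffer (well over 85 KB) allocated once at startup and kept
    // referenced for the life of the app, so the GC never churns it.
    public static readonly byte[] FrameScratch = new byte[256 * 1024];

    public static void Main()
    {
        int gen2Before = GC.CollectionCount(2);

        // Per-frame work uses only short-lived small objects; they die
        // young and are reclaimed by cheap GC(0) collections.
        for (int frame = 0; frame < 10_000; frame++)
        {
            var tmp = new int[16]; // small and dropped immediately
            tmp[0] = frame;
        }

        Console.WriteLine("Gen 2 collections during loop: " +
                          (GC.CollectionCount(2) - gen2Before));
    }
}
```

On a typical run the loop triggers few or no gen 2 collections, since the only long-lived object was allocated up front.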

For now, unless GC is optional, I'll stay away from the framework. It's like having a compulsory maid! I'm more than capable of doing my own house-cleaning.

First, equating the OS's multitasking with the GC pausing your app is inaccurate. The time devoted to the latter can, and often will, be longer. Furthermore, if a gamer is running any significantly taxing processes other than the game, they are inviting the perf hit and can resolve it by shutting down unneeded apps. I don't know many gamers that play Half-Life 2 with active apps in the background.

Secondly, many of you here who in one sentence say "you're overestimating the GC perf hit" go on to say in the next breath "sure, I wouldn't use .NET for a Half-Life 2 or Doom 3 engine". Well, you've made the original detractor's point exactly. There is a caliber of engine technology that cannot afford the possibility of the GC affecting its runtime framerate.

I know from speaking with one of the leads on the Xbox Amped snowboarding title that even with the suggestions prescribed here (hold onto your refs, allocate pre-level, etc.), you are still just asking for trouble. You end up having to go through all your code to make sure you're not missing anything, and that costs you time.

Another thing to consider here is that the word "commercial" is subjective. There are commercially downloadable puzzle games that will not be held to the standards of consoles or of games like Half-Life 2; for those, .NET is by all means appropriate. But again, if you're building the foundation of your technology for a high-caliber game, I prefer to have as near to 100% control over these types of issues as possible, and this includes allocations. You cannot override the way .NET does allocations, you cannot control your own memory address space, and so on. Granted, these are advanced techniques, but they become relevant when writing technology that has to work on multiple systems (Xbox, with unified memory limited to 64MB) and when every ounce of performance and determinism is critical.

I find Half-Life 2 locks up for a second or two quite a bit while it loads resources like sounds (if they haven't fixed it by now); it's usually right after a loading screen or when going into a new room.

Saying you shouldn't make a Half-Life 2 in C# is like saying you shouldn't win the Olympic gold medal in ski jumping: most people couldn't win the gold medal if they tried. Most people here are not going to be able to make a Half-Life 2 anyway; they might use all the CPU and GPU power Half-Life 2 does, but it would probably be used wastefully and inefficiently.

Quote:
Original post by Trap
Quote:
Original post by antareus
It amazes me that people are okay with the GC pausing their apps while it collects.

It amazes me that people are okay with the OS pausing their apps while executing other tasks.

Existing applications don't have to suspend all threads while dealing with resources, so why should the GC get a pass on this? The UI should *never* be blocked for more than 0.1 seconds!

I find it amusing that the .NET GC is really only good for memory, while more precious resources (database connections, for example) are easier to leak than with a RAII wrapper. IDisposable/using is a poor substitute for RAII; it would have been better if they'd made use of the auto keyword, like D does.
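For concreteness, here is the pattern being compared to RAII; a minimal sketch with a hypothetical `FakeConnection` standing in for a real resource like a database connection:

```csharp
using System;

// Hypothetical resource standing in for a database connection.
class FakeConnection : IDisposable
{
    public static bool Open;
    public FakeConnection() { Open = true; }
    public void Dispose() { Open = false; } // deterministic, no GC involved
}

class UsingDemo
{
    public static void Main()
    {
        using (var conn = new FakeConnection())
        {
            Console.WriteLine("Inside block, open: " + FakeConnection.Open);
        } // Dispose() runs here, even if an exception is thrown
        Console.WriteLine("After block, open: " + FakeConnection.Open);
    }
}
```

The cleanup itself is deterministic; the complaint above is that, unlike RAII, nothing forces you to remember the `using` block in the first place.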

Only a full collect can stop the app for longer than 0.1 s, and if your app does full collects, it's your fault. In non-GC apps you have to manage freeing memory; in GC apps you have to manage allocating memory. No full collects will be necessary if you don't allocate memory after loading your data.
The GC is more forgiving, though: it only stops your app for some time, while a missing free/delete will kill your app over time.
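In practice, "managing allocation" usually means pooling: allocate everything during load and recycle it, so steady-state gameplay triggers no collections. A minimal sketch (the `Particle` type and pool size are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

class Particle { public float X, Y; public bool Alive; }

// Fixed-size pool: every Particle is allocated during load, so gameplay
// performs no heap allocations and generates no garbage.
class ParticlePool
{
    readonly Stack<Particle> free = new Stack<Particle>();

    public ParticlePool(int capacity)
    {
        for (int i = 0; i < capacity; i++) free.Push(new Particle());
    }

    public Particle Spawn() { var p = free.Pop(); p.Alive = true; return p; }
    public void Despawn(Particle p) { p.Alive = false; free.Push(p); }
    public int FreeCount { get { return free.Count; } }
}

class PoolDemo
{
    public static void Main()
    {
        var pool = new ParticlePool(1024); // allocate once, at "load time"
        var p = pool.Spawn();              // no allocation here
        pool.Despawn(p);                   // ...and none here
        Console.WriteLine(pool.FreeCount); // prints 1024
    }
}
```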

What's wrong with IDisposable and using?

It's not so much the speed that keeps me from using C# but the following:

1) C# and .NET are still young technologies. The next version of C# is bringing a lot of new changes to the language.

2) It still isn't available on many machines and having to include a .NET installer is something I don't want to deal with.

3) Most of my code is already established in C and C++.

In about 5 more years I will probably completely switch to C# for everything :-)

Quote:
Original post by Trap
Only a full collect can stop the app for longer than 0.1 s, and if your app does full collects, it's your fault. In non-GC apps you have to manage freeing memory; in GC apps you have to manage allocating memory. No full collects will be necessary if you don't allocate memory after loading your data.
The GC is more forgiving, though: it only stops your app for some time, while a missing free/delete will kill your app over time.

What's wrong with IDisposable and using?


Like I said, it's not just the GC, it's the ability to control allocation and address space.

.NET would have done well to include an option for reference counting, done unobtrusively, unlike AddRef in COM.

Quote:
Original post by bnf
I know from speaking with one of the leads on the Xbox Amped snowboarding title that even with the suggestions prescribed here (hold onto your refs, allocate pre-level, etc.), you are still just asking for trouble. You end up having to go through all your code to make sure you're not missing anything, and that costs you time.


You're comparing apples and oranges. You're comparing a console title running exclusively in ring 0 with a PC title running preemptively in ring 3. You're comparing a native code application with a custom allocator running a modified version of Lua on a platform with no virtual memory system to a .NET application that's JIT'ting each method (or running an NGEN image) and where a page fault is more expensive than a gen 0 GC.

On Amped and Amped 2, they knew exactly how much memory they had. They knew that the frame buffer would take X megabytes. They knew that their executable took Y megabytes. They knew that the XTLs would ask for a little more. After all that, they knew that they had Z amount of memory left, so they allocated it and let the Lua scripts do their thing in that memory space.

Even with that, there were still minor hiccups during LOD transitions, GUI transitions, etc. The biggest pauses were during song transitions, and that's just because you can't work around DVD seek times.

On the PC, we've got to do a hell of a lot more than on a fixed system like the Xbox to ensure that everything runs and is stable. If that means that on level load, prior to allowing interactivity, I have to do a full GC, so be it. It's infinitely better than wading through logs dumped to the hard drive by your custom allocator so that you can find the one allocation that's leaking 32 bytes per minute.

[edit: left a sentence incomplete]
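The "full GC behind the load screen" idea is a couple of lines; a sketch of it (`LoadLevel` is just a stand-in that churns garbage the way asset loading would):

```csharp
using System;

class LevelLoader
{
    static void LoadLevel()
    {
        // Stand-in for real asset loading; creates lots of short-lived garbage.
        for (int i = 0; i < 1000; i++) { var junk = new byte[1024]; }
    }

    public static void Main()
    {
        LoadLevel();

        // Pay for the expensive full collection now, behind the load screen,
        // so the heap is clean before gameplay starts.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        Console.WriteLine("Level ready");
    }
}
```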

Quote:
Original post by MikeyO
I find Half-Life 2 locks up for a second or two quite a bit while it loads resources like sounds (if they haven't fixed it by now); it's usually right after a loading screen or when going into a new room.


What does loading a resource from the hard drive have to do with automatic garbage collection, which happens in RAM?

Quote:
Original post by MikeyO
Saying you shouldn't make a Half-Life 2 in C# is like saying you shouldn't win the Olympic gold medal in ski jumping: most people couldn't win the gold medal if they tried. Most people here are not going to be able to make a Half-Life 2 anyway; they might use all the CPU and GPU power Half-Life 2 does, but it would probably be used wastefully and inefficiently.


Is it just me or does that not make much sense?

Quote:

Even with that, there were still minor hiccups during LOD transitions, GUI transitions, etc. The biggest pauses were during song transitions, and that's just because you can't work around DVD seek times.


No, not "even with that"; the point is that it took them a lot of time and sweaty, paranoid evenings to make sure that the biggest problems were during song transitions. The dev I spoke to bemoaned the GC in Lua for that reason. So if it's a problem in a controlled environment, I'm failing to see how it gets easier to deal with on a PC.

But again, you are clinging to the GC issue and avoiding the points made about address space management and custom allocation. Look, if your app doesn't need it, then that's fine. Moreover, I think the common-sense approach here is that if you are not shipping a high-caliber title, then .NET is fine for now. It will probably get better.

But it's definitely a matter of opinion whether it's "infinitely better" than tracking down memory leaks. I know that I don't have to work very hard anymore to avoid memory leaks in C++, with all of the tools and techniques available. It's easy to start doing things in .NET where allocations happen behind your back, and this rule is harder to enforce.

Judging by the reactions to this thread, opinion is mixed. So go and use what works for you.

Quote:
Original post by MENTAL
Quote:
Original post by MikeyO
I find Half-Life 2 locks up for a second or two quite a bit while it loads resources like sounds (if they haven't fixed it by now); it's usually right after a loading screen or when going into a new room.


What does loading a resource from the hard drive have to do with automatic garbage collection which happens in RAM?

Quote:
Original post by MikeyO
Saying you shouldn't make a Half-Life 2 in C# is like saying you shouldn't win the Olympic gold medal in ski jumping: most people couldn't win the gold medal if they tried. Most people here are not going to be able to make a Half-Life 2 anyway; they might use all the CPU and GPU power Half-Life 2 does, but it would probably be used wastefully and inefficiently.


Is it just me or does that not make much sense?


#1: Agreed. I fail to see the connection between disk access times for HL2 and GC in .NET.

#2: It's not you.

Quote:
But again, you are clinging to the GC issue and avoiding the points made about address space management and custom allocation.


No, I'm not missing the point at all. After all, I also worked on Amped 2.

On a fixed system where you are utilizing every single byte of memory like they did on Amped and Amped 2, a fixed allocator is definitely preferred. Not because of the performance, mind you, but because you need to know exactly where (and when) each allocation and deallocation is going to take place.

On the PC, you don't. Hell, even when you allocate memory on the PC, you aren't actually getting all the memory you ask for right then and there. You're just telling Windows, "Hey, I need this much memory," and you actually get it when you write to it, thanks to Windows' demand-paged virtual memory system. If that memory isn't available right then, sorry, but your write will just have to wait until Windows can swap other memory out to disk, and your thread will stall.

I've worked on over a dozen shipping titles, and up until recently, the Xbox titles were always the hardest...precisely because we were pushing that black box to the limits. If we weren't pushing it as hard, we would have had more of a memory buffer, and the Lua GC would not have been the minor frustration that it was. And trust me, that GC was minor compared to some of the other headaches we had...

Don't be surprised to see generational garbage collectors and the like in more engines and on the newer consoles. Not because they make things faster, but because they reduce development time. This industry is trying everything it can to reduce costs, because a $20 million bet to roll the dice in hopes of being one of the 20% of titles that make money is rapidly becoming less and less worth it.

Quote:
Original post by Anonymous Poster
The problem in my opinion is not the performance of managed code per se, because the rewards can arguably be worth any slight degradation.

The bigger problem is the non-deterministic nature of the CLR's garbage collection. Personally, I think a non-deterministic GC has absolutely no place in the architecture or codebase of any potentially commercial title. That means absolutely not in my engine code.

Others may differ on this extreme view and point out that, with a lot of care, it can be worked around. Personally, I see this as a waste of time; any paranoid developer should want to avoid wondering when a GC will kick in in the middle of gameplay, causing animation to stutter.

If forthcoming versions of .NET can offer deterministic GC, then I think you can be much closer to a viable .NET based commercial game architecture.


A friend of mine wrote a game engine in C#, and at first he saw the GC problem. However, once he moved everything to structs, that problem disappeared.
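That works because instances of a struct stored in an array live inline in one allocation rather than as thousands of separate heap objects the collector has to trace. A minimal sketch (the `Vec3` type is invented for illustration):

```csharp
using System;

// A value type: an array of these is a single contiguous allocation,
// not 100,000 individually collectible objects.
struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
}

class StructDemo
{
    public static void Main()
    {
        var positions = new Vec3[100_000];    // one heap allocation total
        for (int i = 0; i < positions.Length; i++)
            positions[i] = new Vec3(i, 0, 0); // no per-element heap allocation

        Console.WriteLine("Elements: " + positions.Length);
    }
}
```

The flip side is copy semantics: structs are copied on assignment, so this trick fits small, plain data like vectors and particles.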

The only place where a GC would make sense in my situation is the GDI API. I use MFC, and it catches my new/delete mismatches, but it doesn't catch GDI memory leaks, and those are disastrous. At least the STL I use cuts down on my memory leaks, which is nice. So I've been leak-free for a while, other than one GDI leak I had; now I'm more careful about GDI coding.

Guest Anonymous Poster
Quote:
Original post by RomSteady
Quote:
But again, you are clinging to the GC issue and avoiding the points made about address space management and custom allocation.


No, I'm not missing the point at all. After all, I also worked on Amped 2.


Well, I cannot debate you on Amped if you worked on it, other than to say that the individual I spoke with expressed a much deeper resentment toward the ill effects of the Lua GC on the first title in that franchise. Not that I don't believe you, but let's put that aside for now.

Quote:

On a fixed system where you are utilizing every single byte of memory like they did on Amped and Amped 2, a fixed allocator is definitely preferred. Not because of the performance, mind you, but because you need to know exactly where (and when) each allocation and deallocation is going to take place.

...

...the Xbox titles were always the hardest...precisely because we were pushing that black box to the limits. If we weren't pushing it as hard, we would have had more of a memory buffer, and the Lua GC would not have been the minor frustration that it was.


I agree. That was part of my list of reasons for avoiding .NET, if you see my points above. I think we are in "violent agreement" on this point, as the saying goes.

Quote:
Don't be surprised to see generational garbage collectors and the like in more engines and on the newer consoles. Not because they make things faster, but because they reduce development time.

With the newfound complexity in systems such as the next Xbox, the PS3's architecture, and even Revolution, I still don't see something like the CLR being the main runtime for games any time soon. I can see it being used for one thing or another at times, and maybe even for certain types of titles. These new systems all have vastly different architectures, and a lot remains to be seen in terms of how to optimally take advantage of them. I am skeptical that any one vendor could implement something like the CLR to run efficiently and predictably on all 3 systems, which would almost be a prerequisite for many mainstream titles that rely on middleware.

I think it will be at least another generation of consoles before veterans begin to trust various runtimes to do things on their behalf. Best-case determinism is still the holy grail of soft real-time systems like games, and the tools and checks for ensuring solid native code are constantly improving, to the point where I don't feel a need for a GC in my C++ code tree to help me out. As someone else pointed out above, the CLR GC does nothing to help avoid resource leaks, and memory is only one part of this equation. Smart pointers, the STL, and C++ metaprogramming have taken the vast majority of pain out of native code development in my own experience.

Quote:
This industry is trying everything it can to reduce costs, because a $20 million bet to roll the dice in hopes of being one of the 20% of titles that make money is rapidly becoming less and less worth it.

Well, I think that using C++/native code is not the main reason this industry has to make $20 million bets. That's a whole other topic that isn't even technical, and I'm not going to get into it here. In the end, the pro-.NET-for-games folks in this thread have made some fair points, I think, and some silly ones, but I still believe it's something to be evaluated on a case-by-case basis, and for most high-caliber titles or engine foundations, it's to be avoided for the time being. I'll leave it at that.
