Crayz92

C# Game Server, should I keep fighting the garbage collector?


My game server is written in .NET 3.5 (Unity3d) with Lidgren.  I've been trying to write it in a way that allocates little to no garbage, but at this point some allocation seems inevitable.  In my application one server can host multiple games, and each game can host multiple zones.  Each game runs in its own dedicated thread, and trying to pool/re-use objects between threads is probably going to cause a lot of headaches; I'm not sure it's worth it.  Up until now I've been re-using objects quite often.  The lifetime of these objects can put them in generation 1 or 2, but they only live as long as the game (or zone) is open, and games (or zones) will be opening and closing often.

 

So my question is: should I keep fighting the garbage collector, or should I make an effort to keep objects short-lived?  My server executes logic at 30 steps per second.  Will it be a problem if it allocates megabytes' worth of short-lived objects each step?  I'm shooting for the best performance I can possibly get.

 

Another option I'm considering is compiling my servers as standalone applications so that I have access to a more modern .NET runtime and garbage collector, but this would complicate my workflow substantially.


Improving performance means reducing computations, and memory management is computations.  But if you know that you only need 30 steps per second, you have a well-defined budget for computations:  You have 33ms worth of time to do whatever you want, and if your memory allocations and GCs still fit within that budget, then you're fine.
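That budget is easy to check directly. A minimal sketch (the names here are illustrative, not from the thread) that times one step's worth of allocating work and counts how many gen-0 collections it triggered:

```csharp
using System;
using System.Diagnostics;

class StepBudget
{
    const double BudgetMs = 1000.0 / 30.0; // ~33.3 ms per step at 30 Hz

    static void Main()
    {
        int gen0Before = GC.CollectionCount(0);
        Stopwatch watch = Stopwatch.StartNew();

        // Stand-in for one step of game logic that generates garbage.
        for (int i = 0; i < 1000; i++)
        {
            byte[] temp = new byte[1024];
            temp[0] = (byte)i;
        }

        watch.Stop();
        int gen0Collections = GC.CollectionCount(0) - gen0Before;
        Console.WriteLine("step: {0:F2} ms of {1:F2} ms budget, {2} gen-0 GC(s)",
                          watch.Elapsed.TotalMilliseconds, BudgetMs, gen0Collections);
    }
}
```

If the elapsed time (GC pauses included) stays under the budget step after step, the allocations are paid for.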

That said... megabytes per update seems a bit high to me.  If you look in Unity's profiler, what are the majority of those allocations?


I'm also concerned about the allocations. Find out where they are first, before deciding if they need to be removed.

C#'s collector is excellent if you're not being wasteful. The metrics can be tricky to interpret, because even though collection takes time, it may be time your program wasn't using anyway.

I've worked with many games that have relied on garbage collection in several languages. Unless you trigger some of the bad "stop the world" situations, C# garbage collection runs in the background. (C++ and other languages tend to clean up objects in the middle of work, unless you take steps to stop it.) It only interrupts processing if you have exhausted your memory pools with garbage, and that's a situation you can generally avoid.  

So even if you can see garbage collection takes a few microseconds, those tend to be microseconds your process is idle anyway.
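If you do have a latency-sensitive stretch, you can also hint at this explicitly. A sketch using GCSettings.LatencyMode, which was added in .NET 3.5 (note this applies to Microsoft's CLR; Unity's old Mono runtime uses the Boehm collector and may ignore it, so treat it as a request, not a guarantee):

```csharp
using System;
using System.Runtime;

class LatencyWindow
{
    static void Main()
    {
        GCLatencyMode previous = GCSettings.LatencyMode;
        GCSettings.LatencyMode = GCLatencyMode.LowLatency; // discourage blocking gen-2 collections
        try
        {
            // ... latency-sensitive part of the game step goes here ...
        }
        finally
        {
            GCSettings.LatencyMode = previous; // always restore the old mode
        }
    }
}
```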

Also, be careful with pooled objects. Sometimes it is better to create and destroy objects often so they don't survive into older generations. Object pools sometimes help and sometimes hurt; you need to measure to be sure.
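To make that trade-off concrete, here is a minimal pool of the kind being discussed (a sketch; `Projectile` and the pool API are illustrative names). The point is that `Rent` after `Return` hands back the same instance instead of allocating, which is exactly what keeps the object alive long enough to reach older generations:

```csharp
using System;
using System.Collections.Generic;

class Pool<T> where T : class, new()
{
    private readonly Stack<T> _free = new Stack<T>();

    public T Rent()
    {
        return _free.Count > 0 ? _free.Pop() : new T();
    }

    public void Return(T item)
    {
        _free.Push(item);
    }
}

class Projectile { public float X, Y; }

class Demo
{
    static void Main()
    {
        Pool<Projectile> pool = new Pool<Projectile>();
        Projectile a = pool.Rent();               // allocates: pool is empty
        pool.Return(a);
        Projectile b = pool.Rent();               // re-uses: no allocation
        Console.WriteLine(ReferenceEquals(a, b)); // True
    }
}
```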


The megabytes per step was just hypothetical; I'm curious what the practical limit on memory allocations per step might be.  At the moment my only allocations occur when generating zones and spawning heroes/monsters/items (up to 10 MB per zone).  All (or most) AI and update logic is processed without generating garbage.

That 33 ms buffer would be a good time to clean up.  From all the reading I've done, manually calling the GC is frowned upon.  Would it be okay to call it manually if I know it's during that 33 ms of downtime?
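The pattern implied here would look something like the sketch below (illustrative names; whether the forced collection actually fits in the slack has to be measured, since `GC.Collect` is a blocking, full-heap collection on this runtime):

```csharp
using System;
using System.Diagnostics;

class StepLoop
{
    const double BudgetMs = 1000.0 / 30.0;

    // Called at the end of each step, with the stopwatch started at step begin.
    static void EndOfStep(Stopwatch stepTimer)
    {
        double remainingMs = BudgetMs - stepTimer.Elapsed.TotalMilliseconds;
        if (remainingMs > 5.0) // only collect when there is comfortable slack
        {
            GC.Collect();                  // forced, blocking, full collection
            GC.WaitForPendingFinalizers(); // drain the finalizer queue as well
        }
    }
}
```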

I read an answer on Stack Exchange stating that their game server allocated 50 MB/s under heavy load.  If 50 MB/s is viable then I must be overthinking this.

Edited by Crayz92


One thing to watch out for is the LOH (Large Object Heap), which holds objects of 85,000 bytes or more.  Unlike the other heaps it is not compacted. This can lead to fragmentation, and eventually you may run into an OutOfMemoryException, which is pretty fatal in .NET land.
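The threshold is easy to observe: the runtime reports large-object-heap allocations as generation 2 from the moment they are created. A small sketch:

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        byte[] small = new byte[1000];   // small object heap
        byte[] large = new byte[100000]; // over 85,000 bytes: large object heap

        Console.WriteLine(GC.GetGeneration(small)); // typically 0 (freshly allocated)
        Console.WriteLine(GC.GetGeneration(large)); // 2: the LOH is collected with gen 2
    }
}
```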

We've had this happen at work when processing many large text files over and over with the end result being an OutOfMemoryException even though the application had not come anywhere near an actual out of memory situation.

You can manually compact the LOH, but it's not recommended (especially since it involves GC.Collect) and it's incredibly slow.

Of course, this is only a problem in specific scenarios and it may not apply to your situation. But, it's always good to be aware of these kinds of issues, especially in a server environment.


First thought - if each game runs in its own thread, and there's very little shared between them, could you perhaps use separate processes instead? That way, when a game finishes, the memory returns to the OS.

If you really do need a single multithreaded process, then I would echo the advice above - profile, then act if necessary. You're already coding with this problem in mind so you're 90% of the way there.

22 hours ago, Crayz92 said:

The megabytes per step was just a theoretical, I'm curious what the limit could be in memory allocations per step.

Memory bandwidth on modern x86-64 class hardware is usually a few gigabytes/sec.  But keep in mind that the more of that bandwidth you use for allocations, the less you can use for actual work.

I've seen 'surprising' things that on the surface don't LOOK like they would use a lot of memory bandwidth in C#, such as concatenating strings with the + operator.  I ran into one case where someone was creating a 100K string one character at a time and estimated that the combined memory access was in the dozens of gigabytes.  The operation was mostly RAM-speed-limited due to this and took about 10 seconds.  Changing to a StringBuilder instead limited the operation to less than a meg of memory access and took less than a millisecond.
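The effect is easy to reproduce at a smaller scale. Each `+=` below allocates a brand-new string and copies everything accumulated so far (O(n²) bytes moved in total), while StringBuilder appends into a growing buffer:

```csharp
using System;
using System.Text;

class ConcatDemo
{
    static void Main()
    {
        const int n = 10000;

        string slow = "";
        for (int i = 0; i < n; i++)
        {
            slow += "a"; // new string + full copy, every iteration
        }

        StringBuilder sb = new StringBuilder(n);
        for (int i = 0; i < n; i++)
        {
            sb.Append('a'); // amortized constant-time append
        }
        string fast = sb.ToString();

        Console.WriteLine(slow == fast);     // True: identical result
        Console.WriteLine(fast.Length == n); // True: very different cost, though
    }
}
```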


Memory bandwidth on modern x86-64 class hardware is usually a few gigabytes/sec

Xeon E5 memory bandwidth is 59 GB/s, assuming you can make use of entire cache lines when you read them (streaming, or aligned objects, or such.) One of the benefits of compacting garbage collecting allocators is that they actually tend to allocate objects that are used together, together in memory, which is good for cache and TLB. (For desktop enthusiasts, rather than server folks, the Core i9 with 2666 RAM touches 80 GB/s I'm told.)

concatenating strings with the + operator

That's the classic O(n²) newbie trap, yes :-) It's a great example of why algorithmic understanding is a necessity for programmers, even with modern languages and runtimes to help them.

The question of "how do I think about performance and resource limitations for servers for games" is actually quite deep.

In general, though, there are two options:

  1. Pool all the things! Never cause the garbage collector to run. In fact, pre-allocate the maximum amount of objects/memory you will ever need, and if you ever run out, decline accepting the new load. (Disconnect the user, deny the spell cast, or whatever.) This is the "hard guarantee" model of system thinking, and it has the benefit that, once you account for ALL the resources that are scarce, you can actually make guarantees, and stick to them. The draw-back is that it's a lot of work, and you will on average run at less than full system utilization, because you always budget for the worst case.
  2. Let the runtime take care of it! It's useful to reduce and re-use objects where it makes sense -- don't use wanton allocation without any thought -- but live in the managed heap world, and live with the garbage collection overhead. Ideally, you can stop-and-collect with a known frequency, such as once per tick (!) or once per second or whatever. If you collect once per second, the server will essentially run 30 ticks in the time of 29 ticks, and then collect, and then repeat, so you end up with an additional latency because of the collector of one tick or so. When it comes to budgets, keep pushing things in until you blow up. Most of the time, you will run at high system utilization. Once in a while you'll guess wrong, and some memory won't be there when you need it, which means you will have to blow up some entire instance and let the players re-connect again.
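Option 1 in miniature (a sketch; `Session` and `TryAccept` are illustrative names, not from the thread): every slot is allocated up front, and when they run out the server declines the new load rather than allocating more:

```csharp
using System;
using System.Collections.Generic;

class Session { public int Id; }

class FixedCapacityServer
{
    private readonly Stack<Session> _free = new Stack<Session>();

    public FixedCapacityServer(int maxSessions)
    {
        for (int i = 0; i < maxSessions; i++)
        {
            _free.Push(new Session { Id = i }); // all memory reserved up front
        }
    }

    // Returns null when the budget is exhausted: deny, don't allocate.
    public Session TryAccept()
    {
        return _free.Count > 0 ? _free.Pop() : null;
    }

    public void Release(Session s)
    {
        _free.Push(s);
    }
}

class Demo
{
    static void Main()
    {
        FixedCapacityServer server = new FixedCapacityServer(2);
        Console.WriteLine(server.TryAccept() != null); // True
        Console.WriteLine(server.TryAccept() != null); // True
        Console.WriteLine(server.TryAccept() == null); // True: third connection declined
    }
}
```

Because the capacity is fixed at construction, the guarantee is exact; the cost, as noted above, is budgeting for the worst case.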

If you're a pragmatic developer who mainly cares about delivering "good enough" performance to consumers with minimum amount of work, option 2 is your choice.

If you're a systems programmer or out of the real-time community or just care about knowing that you deliver exactly what you say you will deliver, then option 1 should be your way.

 


For us, the main problem with our C# game server has been memory leaks. We build server instances with Unity3d, and they gradually eat more and more memory )=

