# Scaling server-side questions


## Recommended Posts

I have a bit of an issue deciding how to code my server side so it can easily scale if/when my game starts attracting players. My game is a turn-based trading card game. Right now my architecture looks like this.

The client connects and logs in to my main server. The server then sends back the addresses of the other services so the client can initiate extra connections, for example to the chat server. This allows me to run the chat server on a separate machine or even on an entirely different network.
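A minimal sketch of what such a login reply could look like. This is not the poster's actual protocol; the JSON shape, hostnames, and ports are all made up for illustration:

```python
import json

# Hypothetical login reply: the main server tells the client where
# the other services live, so e.g. chat can run on its own machine
# (or network) without the client hard-coding any addresses.
def build_login_reply(session_token):
    return json.dumps({
        "token": session_token,
        "services": {
            "chat": {"host": "chat.example.com", "port": 6000},
            "game": {"host": "game1.example.com", "port": 7000},
        },
    })

reply = json.loads(build_login_reply("abc123"))
chat = reply["services"]["chat"]
# the client now opens a second connection to chat["host"]:chat["port"]
```

Because the client only ever learns service addresses from this reply, moving chat to another box is a server-side config change, not a client patch.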

The main server handles trading, card buying, matchmaking and some other things, but nothing requiring major bandwidth or CPU. The actual games run on my game servers, which players connect to when starting a new game.

So far it seemed like it would scale nicely, because the two major CPU/bandwidth consumers will be the chat server and the actual games, and they are handled separately. However, what if my main server could no longer support all the traffic for, say, 10,000 players? I thought about moving the matchmaking or trading section to its own service, but I feel that only postpones the problem.

So I'm wondering what games usually do. I wanted to have only one server/service so that every player can see/talk to/play against every other player. Splitting players into different "realms" would defeat this feature.

##### Share on other sites
Trading and matchmaking shouldn't be a problem even with 100,000 players, as neither takes much time (most of those players won't be hogging the matchmaking server; they'll be playing against someone). Keeping matchmaking and trading separate might make it easier to move one of them to its own server if needed, though, and will also make it easier to distribute those two tasks across different cores on the same server.

For matchmaking you'll mostly be CPU bound; the size of the pool of waiting players is the primary factor affecting CPU load there, but using something like a sorted binary tree can speed this up a lot, and you don't really need a big pool for efficient matchmaking (100-200 or so should be a sufficient size to make good matches). If you don't have a ranking system you don't even need a pool of players at all: just assign players to each other in the order they connect.
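The sorted-pool idea above can be sketched with a sorted list and binary search (Python's `bisect`; a balanced tree would behave the same asymptotically). The class name and rating numbers are invented for illustration:

```python
import bisect

class MatchmakingPool:
    """Waiting players kept sorted by rating, so the closest
    opponent is found with a binary search (O(log n))."""

    def __init__(self):
        self._ratings = []   # sorted ratings of waiting players
        self._players = []   # player ids, parallel to _ratings

    def add(self, player_id, rating):
        i = bisect.bisect_left(self._ratings, rating)
        self._ratings.insert(i, rating)
        self._players.insert(i, player_id)

    def match(self, rating):
        """Pop and return the waiting player whose rating is
        closest to `rating`, or None if the pool is empty."""
        if not self._ratings:
            return None
        i = bisect.bisect_left(self._ratings, rating)
        # the nearest rating is just below or just above the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self._ratings)]
        best = min(candidates, key=lambda j: abs(self._ratings[j] - rating))
        self._ratings.pop(best)
        return self._players.pop(best)

pool = MatchmakingPool()
pool.add("alice", 1200)
pool.add("bob", 1500)
opponent = pool.match(1480)   # closest to 1480 is bob (1500)
```

With a pool of only 100-200 players, even the O(n) `insert` cost here is negligible; the binary search only starts to matter if the pool grows far beyond that.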

##### Share on other sites
> However, what if my main server could no longer support all the traffic for, say, 10,000 players?

10,000 players at a typical conversion rate of 1% means 100 paying customers. Let's say you charge $5 per month. That's $500/month.
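The back-of-the-envelope arithmetic, spelled out:

```python
players = 10_000
conversion_rate = 0.01      # assumed: 1% of players become paying customers
price_per_month = 5         # assumed: $5/month per paying customer

paying = int(players * conversion_rate)   # 100 paying customers
revenue = paying * price_per_month        # $500 per month
```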

There are many choices, but at the lowest end, $500/month buys you:

- seven i7-2600 quad cores with 3 TB storage and 16 GB RAM each
- a total of 7 x 15 TB (105 TB) of traffic per month

So it's not really something to worry about.

##### Share on other sites
You guys just removed a heavy burden from my shoulders. I was thinking about all the work I would need to run multiple processes on different machines to handle tasks, and the synchronization needed for all of this. For now I won't bother with it and will keep my current architecture, as it will not be a problem until I reach a very high player count.

On a side note, I am developing my game on a Windows machine (the only thing I have). Should I think about moving to Linux, at least for the server side? I heard it is hard to find a Windows server to rent to run my dedicated servers. I'm a beginner with Linux, but I know I can manage it; I'm just wondering if it's really worth it. The extra development time could be better spent on the game itself.

Thanks for your answers.

##### Share on other sites
> I heard it is hard to find a Windows server to rent to run my dedicated servers.

It's not. It just costs ~$20/month extra. Just about every dedicated server provider offers it.

Linux is preferred for various reasons, mostly lower entry-level costs (there's also the cost of client-side Windows, Visual Studio, possibly scaling up MSSQL, ...).

The Windows ecosystem is mostly annoying because of licensing. While MS doesn't exactly hunt down every Tom, Dick, and Harry, there's just extra hassle in trying to remain fully compliant with the licenses. For example, your current license might not allow running a copy of Windows inside a VM, making it difficult to test. Or it might not allow concurrent logins to the server. Or it might be locked down to four processors (or whatever the restrictions are these days).

It's just extra friction.

Of course, Linux comes with its own problems.

##### Share on other sites
> the two major CPU/bandwidth consumers will be the chat server and the actual games

Do you have any evidence of this? A profile or measurement of some sort?

Don't get me wrong; I think vertically partitioning services (like chat, trade, etc.) and horizontally partitioning per-player things (like game rooms) will prepare you for some of the scaling if your game takes off. But the bottleneck you run into is very seldom the bottleneck you actually think it will be -- prepare to be surprised :-)

Also, I assume that you have your game servers ask the trade server for the card inventory of each player each time a new game starts. This might end up being at least as much load as the card trading itself.

Finally: make sure you have great back-ups, and that you can actually restore a working system from them! You should test this on a regular basis, and the best way of testing is to do it in production. Keep two machines: restore a back-up onto one and swap it in as the "active" machine while you rotate the other machine out. Do this every midnight, and you'll know it will work when you need it. (This assumes you have perfect up-to-the-minute backups -- or some other, equivalent kind of replication/redundancy.)

##### Share on other sites

> the two major CPU/bandwidth consumers will be the chat server and the actual games

> Also, I assume that you have your game servers ask the trade server for the card inventory of each player each time a new game starts. This might end up being at least as much load as the card trading itself.

This could probably be solved by mirroring the card inventories across multiple servers (the trade server updates the master inventory, which in turn updates the mirrors that the game servers query).
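A minimal sketch of that master/mirror split, assuming simple synchronous push-on-write (a real system would replicate asynchronously over the network; all class and method names here are invented):

```python
class MirrorInventory:
    """Read-only copy that a game server can query locally."""

    def __init__(self):
        self._cards = {}  # player_id -> {card_type_id: count}

    def apply(self, player_id, card_type_id, count):
        self._cards.setdefault(player_id, {})[card_type_id] = count

    def get(self, player_id):
        return dict(self._cards.get(player_id, {}))

class MasterInventory:
    """Authoritative inventory owned by the trade server; every
    write is pushed out to the registered mirrors."""

    def __init__(self, mirrors):
        self._cards = {}
        self._mirrors = mirrors

    def set_count(self, player_id, card_type_id, count):
        self._cards.setdefault(player_id, {})[card_type_id] = count
        for mirror in self._mirrors:   # replicate the write
            mirror.apply(player_id, card_type_id, count)

mirror = MirrorInventory()
master = MasterInventory([mirror])
master.set_count("player1", 42, 3)   # trade happens on the trade server
inventory = mirror.get("player1")    # game server reads its local mirror
```

The point of the design is that game-start reads never touch the trade server, only the local mirror; the trade server pays the replication cost once per trade instead of once per game.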

##### Share on other sites

> > the two major CPU/bandwidth consumers will be the chat server and the actual games
>
> Do you have any evidence of this? A profile or measurement of some sort?
>
> Don't get me wrong; I think vertically partitioning services (like chat, trade, etc.) and horizontally partitioning per-player things (like game rooms) will prepare you for some of the scaling if your game takes off. But the bottleneck you run into is very seldom the bottleneck you actually think it will be -- prepare to be surprised :-)
I'll keep this in mind. I did get some numbers on how much bandwidth I would need for an average game, but they were approximate. I'll have to take more precise measurements later on, when I get closer to release and start optimizing.

> Also, I assume that you have your game servers ask the trade server for the card inventory of each player each time a new game starts. This might end up being at least as much load as the card trading itself.

I doubt this is an issue: the main server sends the game server only the cards the player will use for this single game. That's an int32 for the card type id and 1 byte for the card count, so in the worst case I need to send about 200 bytes of data per player, per game (plus overhead) at startup. That is negligible considering players start a game once per 10+ minutes.
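As a sanity check on that wire format (int32 card type id + 1 byte count = 5 bytes per entry, so 40 distinct card types come to 200 bytes), a sketch using Python's `struct` module; the card ids are made up:

```python
import struct

# Each deck entry on the wire: uint32 card type id + uint8 count = 5 bytes,
# little-endian, no padding.
ENTRY = struct.Struct("<IB")

def pack_deck(deck):
    """deck: list of (card_type_id, count) tuples -> bytes."""
    return b"".join(ENTRY.pack(card_id, count) for card_id, count in deck)

def unpack_deck(data):
    """bytes -> list of (card_type_id, count) tuples."""
    return [ENTRY.unpack_from(data, off)
            for off in range(0, len(data), ENTRY.size)]

deck = [(101, 4), (205, 2), (317, 1)]   # hypothetical card ids
wire = pack_deck(deck)                  # 3 entries -> 15 bytes
```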

##### Share on other sites
> That's an int32 for the card type id and 1 byte for the card count, so in the worst case I need to send about 200 bytes of data per player, per game (plus overhead) at startup. That is negligible considering players start a game once per 10+ minutes.

The question is not how much bandwidth it is, but how much load it puts on your database and CPU. Reading all the necessary data from disk to calculate what to send is likely to run out of oomph much sooner than the available bandwidth. But again, prepare to be surprised :-)

The best thing you can do is insert a *lot* of instrumentation, so you can easily tell whether you're CPU limited, I/O limited, bandwidth limited, RAM limited, ... and also *why*. High-resolution graphing of what's going on in your application is as important as having that application actually running in the first place, once you're at operational scale. Luckily, there are lots of solutions out there: Zabbix, Munin, Graphite, OpenTSDB, ...
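The instrumentation can start very small. A sketch of in-process counters and timers (the metric names are invented; a real deployment would ship these numbers to one of the tools above rather than keep them in memory):

```python
import time
from collections import defaultdict

class Metrics:
    """Minimal in-process counters and timers."""

    def __init__(self):
        self.counters = defaultdict(int)     # name -> count
        self.timings = defaultdict(list)     # name -> list of seconds

    def incr(self, name, n=1):
        self.counters[name] += n

    def timed(self, name):
        """Context manager that records how long a block took."""
        metrics = self

        class _Timer:
            def __enter__(self):
                self.t0 = time.perf_counter()

            def __exit__(self, *exc):
                metrics.timings[name].append(time.perf_counter() - self.t0)

        return _Timer()

metrics = Metrics()
with metrics.timed("db.load_inventory"):
    pass  # the database query being measured goes here
metrics.incr("games.started")
```

Once every suspicious code path is wrapped like this, "which resource am I actually limited by?" becomes a question you answer by reading a graph instead of guessing.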

##### Share on other sites
Thanks, I'll definitely check those solutions for profiling.
