Simple Server design

12 comments, last by hplus0603 14 years, 5 months ago
Hi, I've made posts here a few times and you guys have always been extremely helpful. Currently my server is structured this way (WinSock, C++):

- Check if any new connection has been made (non-blocking)
- Every frame, call recv() (non-blocking) on each player socket
- Update players and nearby NPCs
- Update any other housekeeping stuff

This obviously uses 100% of the computer processor 100% of the time. I'd like to do this a better way. From what I have read, a good alternative is to do the updates/housekeeping only when I receive data. In addition, the client would send 'heartbeat' packets every second or so, which would force an update regardless of whether players were simply standing around.

Now my issues come up: how do I do this effectively? Should I look into IOCP? Would that meet my requirements (minimal threads, low CPU usage when there are few players)? Thanks for your time :)
Quote:
This obviously uses 100% of the computer processor 100% of the time.

And why is this a bad thing? If you were running another heavy process, then it would take some of that time. But otherwise, why would your OS not give your application all the processor time?

Without a very good reason, I wouldn't move towards something like IOCP when I already have something working well. I am not sure whether IOCP would make a huge difference for a small number of players (fewer than 100).
The problem is in "every frame". If you simply process frames ASAP, then it will always run at 100%.

Usually, select() is given a timeout parameter, so it waits a little while if nothing happens.

Regardless of which networking API is used, the main loop will look something like this:
while (running) {
    readNetworkPackets();
    while (lastTick < currentTime) {
        updateState();
        lastTick += dt;
    }
    sendNetworkPackets();
}
The only difference is then in how the read and write methods are implemented.
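
To make the read step concrete, here is a minimal WinSock sketch of one way readNetworkPackets() could look, using select() with a short timeout. The names playerSockets and packetQueue, the 5 ms wait and the 4 KB buffer are arbitrary example values, not code from this thread, and WSAStartup()/connection handling are assumed to happen elsewhere.

#include <winsock2.h>   // link with ws2_32.lib; WSAStartup() assumed done elsewhere
#include <vector>
#include <queue>
#include <string>

std::vector<SOCKET> playerSockets;    // placeholder: one connected socket per player
std::queue<std::string> packetQueue;  // placeholder: raw data handed to the simulation later

void readNetworkPackets()
{
    if (playerSockets.empty())
        return;  // on Windows, select() needs at least one socket in a set

    fd_set readSet;
    FD_ZERO(&readSet);
    for (SOCKET s : playerSockets)
        FD_SET(s, &readSet);

    timeval timeout = { 0, 5000 };   // block for up to 5 ms (tv_sec, tv_usec)
    int ready = select(0, &readSet, nullptr, nullptr, &timeout);
    if (ready <= 0)
        return;                      // timeout expired or error: nothing to read

    char buffer[4096];
    for (SOCKET s : playerSockets)
    {
        if (!FD_ISSET(s, &readSet))
            continue;
        int received = recv(s, buffer, (int)sizeof(buffer), 0);
        if (received > 0)
            packetQueue.push(std::string(buffer, received));  // defer processing to the tick
        // received == 0 or SOCKET_ERROR would be treated as a disconnect here
    }
}

The point is simply that the call blocks while nothing is arriving, so an idle server sits inside select() instead of spinning.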
Thank you for the quick response. The reason I want to change models is that I don't LIKE it using 100% of the CPU. I am running a server for a game I am developing, which is very often empty. This wastes power (I could be wrong, but a computer running the processor flat out uses more energy than an idling one, right?) and doesn't give me a good sense of how much CPU I am actually using for a given number of players.

@Antheus:
I am obviously doing some things 'wrong' (though it has also been working for me). I process recv() calls immediately for each socket, and send data back to them immediately as well. This has not caused problems so far, but I can see where it could cause extra overhead.

In the while loop you presented, it seems the server would still consume 100%, unless I am using select() (which I guess is what you meant?). So, to make sure I understand this: select() would poll for data on a list of sockets and wait, say, 500 ms, and then update the game state to 'catch up'. So if no one is sending data, there will be a 500 ms delay between updates? Of course the 500 ms is just an arbitrary number I am using.
Both select() and GetQueuedCompletionStatus() have a timeout parameter, which means they will block until either the timeout expires or some event occurs.

On the passive side (the server), this means that the read would block for some sensible time. For a 100 Mbit network, 5 ms is a reasonable value: it is roughly the time to transfer 64 kB (64 kB * 8 is about 0.5 Mbit, which takes around 5 ms at 100 Mbit/s), and 64 kB is a common maximum size for a single socket buffer.

The rest of the loop is there to ensure consistent logical time, so that the state is updated 10 times per second. The inner time check is there so the state is not updated after every single event, but in batches. So the read simply tries to get data from the network stack ASAP, then stores it in a queue where it can be reordered or otherwise organized for the simulation tick.
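
As a sketch of that batching idea (runTicks, applyPacketToState and packetQueue are illustrative names, not from the thread; the 100 ms step matches the 10 updates per second mentioned above):

#include <chrono>
#include <queue>
#include <string>

std::queue<std::string> packetQueue;  // placeholder: filled by the read step

void applyPacketToState(const std::string& /*packet*/) { /* game-specific */ }
void updateState()                                      { /* advance the simulation one tick */ }

void runTicks()
{
    using clock = std::chrono::steady_clock;
    const auto dt = std::chrono::milliseconds(100);  // 10 logical updates per second
    static auto lastTick = clock::now();

    // Nothing runs if we woke up early because a packet arrived;
    // several ticks run back to back if the server fell behind.
    while (lastTick + dt <= clock::now())
    {
        while (!packetQueue.empty())  // feed the queued input into this tick
        {
            applyPacketToState(packetQueue.front());
            packetQueue.pop();
        }
        updateState();
        lastTick += dt;
    }
}

Waking up because a packet arrived only queues the data; the simulation itself still advances on its own fixed schedule.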
You can always throw in a Sleep(1) call too. That will force your server to yield some CPU time to other processes.
Quote:Original post by Windryder
You can always throw in a Sleep(1) call too. That will force your server to yield some CPU time to other processes.


The timeout parameter in the functions mentioned above fulfills this role, and has the advantage of returning immediately when something happens on the network.
Quote:
And why is this a bad thing? If you were running another heavy process then it would take some of that time.


Because running a CPU at 100% when it's not actually accomplishing any useful task with those cycles causes a massive increase in power consumption over what you need.

Many data centers charge users based on power consumption. Electrical companies bill households for their power consumption. Laptops have batteries with limited amounts of power. No matter where this game server is going to run, wasting power is going to cost a lot of money that wouldn't need to be spent if the app didn't throw away CPU cycles for absolutely no good reason because it was written incompetently.

There's a reason that AMD and Intel are competing more on power consumption than performance these days, even in the server space. Power consumption and its cost is starting to break a lot of companies, and in today's economy, a lot of home users as well.

Do NOT be lazy and waste cycles you don't need to. You are being actively harmful to your users if your games or servers busy-loop like that. Games have absolutely no need to run faster than the refresh rate (triple buffering solves any concerns about skipped frames) and servers have absolutely no need to run faster than they're receiving actual network events. The OP is totally on track in wanting to get rid of the busy-loop in his code.
I put a sleep(1) in the servers I have coded. It drops the CPU utilization from a constant ~90% to about 1% when idle with no connections. Unless you are making an MMO or a game with extreme server-side physics, the average dedicated game server doesn't need much CPU. If you have a separate match-making server app that just sends/receives open game sessions, you can make it sleep(100) to be even nicer to the CPU. =)
+1 for Antheus' response. I also just use the timeout parameter in the select() call to throttle CPU. You get a stable server update cycle, and even a moderately busy server (many connections on multiple maps) uses very little CPU.

Crazyfool: in my case, if select() returns immediately, I'll process the activity right away and then call it again with a reduced timeout. So if there is a lot of socket read/write activity I'll process it immediately, and only block for the whole timeout if there is nothing to read or write.

You can see my server loop here: SourceForge link, line 664. Line 798 is where select() is (ultimately) called, and you can see I'm passing in a timeout value.
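
A rough sketch of that reduced-timeout pattern, not taken from the linked source (liveSockets, the 100 ms tick and the loop structure are assumptions for illustration):

#include <winsock2.h>   // link with ws2_32.lib; WSAStartup() assumed done elsewhere
#include <algorithm>
#include <chrono>
#include <vector>

std::vector<SOCKET> liveSockets;  // placeholder: would normally also contain the listen
                                  // socket, so the set passed to select() is never empty

void serverLoop(bool& running)
{
    using clock = std::chrono::steady_clock;
    const auto tickInterval = std::chrono::milliseconds(100);  // assumed update rate
    auto nextTick = clock::now() + tickInterval;

    while (running)
    {
        // Wait no longer than the time remaining until the next scheduled update.
        auto remaining = std::chrono::duration_cast<std::chrono::microseconds>(
            std::max(clock::duration::zero(), nextTick - clock::now()));
        timeval timeout = { static_cast<long>(remaining.count() / 1000000),
                            static_cast<long>(remaining.count() % 1000000) };

        fd_set readSet;
        FD_ZERO(&readSet);
        for (SOCKET s : liveSockets)
            FD_SET(s, &readSet);

        // Returns early if any socket becomes readable; the next pass through the
        // loop recomputes a smaller remaining timeout automatically.
        if (select(0, &readSet, nullptr, nullptr, &timeout) > 0)
        {
            // recv() from the ready sockets and queue the data here
        }

        // Once the deadline has passed, run however many updates are due.
        while (clock::now() >= nextTick)
        {
            // updateState();
            nextTick += tickInterval;
        }
    }
}

The effect is what hplus0603 describes: heavy traffic gets serviced immediately, while an idle server blocks for whatever time remains until the next scheduled update.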

This topic is closed to new replies.
