Maximising server output

Started by
1 comment, last by hplus0603 11 years, 7 months ago
I mean as in supporting players and the map. Say I had 10 integers containing position, health, etc., and they use an average of 10 mathematical operations per frame. And then say it runs at 30 fps (as in packets per second). Would that use 3,000 CPU cycles per second per player? And if each number averages 3 bytes, would that send 30 bytes per packet, and therefore 900 bytes a second?

Am I barking up the wrong tree? Thank you for your time.
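For concreteness, the arithmetic in the question works out like this; a sketch where every figure is one of the question's own assumptions (and note the replies below: operations are not the same thing as cycles):

```cpp
#include <cstdio>

int main() {
    // Every figure below is an assumption taken from the question itself.
    const int valuesPerPlayer  = 10;  // position, health, etc.
    const int opsPerValue      = 10;  // average math operations per frame
    const int packetsPerSecond = 30;  // "30 fps", as packets per second
    const int bytesPerValue    = 3;   // average encoded size of one value

    std::printf("ops/player/sec:   %d\n", valuesPerPlayer * opsPerValue * packetsPerSecond);   // 3000
    std::printf("bytes/packet:     %d\n", valuesPerPlayer * bytesPerValue);                    // 30
    std::printf("bytes/player/sec: %d\n", valuesPerPlayer * bytesPerValue * packetsPerSecond); // 900
    return 0;
}
```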
CPU cycles are a moot point unless you get into a very large number of players. Things that don't run per player (AI, for example) would cost more. Code performance is more sensitive to your design than to the actual number of operations: cache misses, critical sections, and poorly optimised floating-point libraries, for example.

As for bandwidth usage, yeah. You take how much data you need to send per player and the send frequency, and that gives you a rough estimated bandwidth cost. Don't forget the overheads either, like the UDP header on every packet you send, which in practice costs a fair chunk of your bandwidth as your send frequency increases (sometimes more than the actual game data). TCP also has an extra associated cost if you end up having to deal with a lot of packet loss (meaning resends).
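A back-of-the-envelope version of that estimate, as a sketch: the 28-byte constant below is the IPv4 (20 bytes) + UDP (8 bytes) header size, and the payload and rate are whatever your game actually sends.

```cpp
#include <cstdio>

// Rough per-client bandwidth estimate, including per-packet UDP overhead.
double bytesPerSecond(int payloadBytes, int sendHz) {
    const int kUdpIpv4Overhead = 28;  // IPv4 + UDP headers on every packet
    return static_cast<double>(payloadBytes + kUdpIpv4Overhead) * sendHz;
}

int main() {
    // 30 bytes of game data, 30 packets per second:
    std::printf("%.0f B/s\n", bytesPerSecond(30, 30));  // 1740 B/s, nearly half of it header overhead
    return 0;
}
```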

Everything is better with Metal.

No, cycles don't work like that, for several reasons.

The most important one is that math is pretty much "free" these days; the cost is how fast you can get the needed data into and out of cache where the CPU can get at it.
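As a minimal sketch of why layout matters more than the math (PlayerState and its fields here are hypothetical): keep per-player state in one contiguous array, and the update loop streams through memory in order, which is the part that actually costs.

```cpp
#include <vector>

// Hypothetical per-player state, packed contiguously so the update loop
// walks memory in order. The adds and multiplies below are effectively
// free next to the cost of pulling this data through cache.
struct PlayerState {
    float x, y, z;     // position
    float vx, vy, vz;  // velocity
    int   health;
};

void integrate(std::vector<PlayerState>& players, float dt) {
    for (PlayerState& p : players) {  // sequential access: cache-friendly
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
    }
}
```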

Another important reason is that the algorithms you use will typically grow on the order of n-squared in the number of players involved. If you have 10 players, then each of 10 players needs to check collision against 10 other players, and you also need to send data about 10 players to 10 players, so 100 data "items" to send. If you have 100 players, then each of 100 players needs to check collision against 100 other players, and you also need to send data about 100 players to 100 players, so 10,000 data "items" to send.
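In loop form, that quadratic growth is just a nested pair loop; a sketch, where checkPair is a hypothetical stand-in for whatever per-pair work you do:

```cpp
#include <cstddef>

struct Player { /* position, health, etc. */ };

// Hypothetical stand-in for the per-pair work
// (collision test, interest management, replication).
void checkPair(Player&, Player&) {}

// n players => roughly n*n pair interactions per tick:
// 10 players -> ~100 "items", 100 players -> ~10,000.
void updatePairs(Player* players, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            if (i != j)
                checkPair(players[i], players[j]);
}
```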

The final important reason is that it's unlikely for a server to have more than 100 Mbit of Internet bandwidth to itself. Yes, there are bigger pipes to be had (we have them at work :-) but those pipes are generally served by a larger number of servers. So, with 100 Mbit of Internet bandwidth, that's about 10 Mbyte per second of data you can send and/or receive. With 50 Gbyte per second of memory bandwidth in a typical server, you can touch each byte 5,000 times for each time you send it on the network.
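That ratio falls straight out of the round figures above (10 Mbyte/s of network versus 50 Gbyte/s of memory bandwidth):

```cpp
// Round figures from the paragraph above.
constexpr double netBytesPerSec = 10e6;  // ~100 Mbit/s Internet link
constexpr double memBytesPerSec = 50e9;  // typical server memory bandwidth

// Times you can touch a byte in RAM for each time you send it on the wire:
constexpr double touchesPerSend = memBytesPerSec / netBytesPerSec;
static_assert(touchesPerSend == 5000.0, "5,000 memory touches per byte sent");
```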

When it comes to performance tuning and capacity planning, the most important thing is to characterize your load, typically using measurements. Build what you need to build, then measure it under different kinds of load, along a number of axes (CPU usage, memory usage, network usage, latency, power, etc.). Once you have that data, you can do suitable capacity planning for whatever you're trying to accomplish.
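A minimal sketch of such a measurement pass; the probe functions are hypothetical stand-ins for whatever your platform actually exposes (e.g. /proc counters on Linux):

```cpp
#include <cstdio>

// Hypothetical probes: replace the stub bodies with real readings
// (e.g. /proc/self/stat and /proc/net/dev on Linux).
double cpuSeconds()        { return 0.0; }
double residentMegabytes() { return 0.0; }
double netBytesSent()      { return 0.0; }

// Sample each axis before and after a synthetic load run, so the
// capacity plan rests on measured numbers instead of guesses.
void measureUnderLoad(void (*runSyntheticLoad)()) {
    const double cpu0 = cpuSeconds();
    const double mem0 = residentMegabytes();
    const double net0 = netBytesSent();
    runSyntheticLoad();
    std::printf("cpu=%.2fs mem=%.1fMB net=%.0fB\n",
                cpuSeconds() - cpu0,
                residentMegabytes() - mem0,
                netBytesSent() - net0);
}
```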

If you have to design for a particular cost or performance target, then it's important to specify that target along all axes up front. Then plan for what you can deliver within that performance goal (this part typically requires lots of experience to get right). Then continually measure yourself against that target using realistic benchmarks: after each code checkin and build, run a target system at the planned load (typically synthetically generated) and make sure it performs according to the target specifications (number of physics steps per second, amount of RAM consumed, amount of network bandwidth used, etc.).
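The post-checkin gate itself can be as simple as comparing the measured numbers against the planned budget; a sketch, with made-up placeholder axes and figures:

```cpp
#include <cstdio>

// Made-up target budget for illustration; yours comes from the up-front plan.
struct Targets  { double minStepsPerSec, maxRamMb, maxNetKbps; };
struct Measured { double stepsPerSec, ramMb, netKbps; };

// Returns false (fail the build) if any axis misses its planned target.
bool meetsTargets(const Measured& m, const Targets& t) {
    bool ok = true;
    if (m.stepsPerSec < t.minStepsPerSec) { std::puts("FAIL: physics steps/sec"); ok = false; }
    if (m.ramMb       > t.maxRamMb)       { std::puts("FAIL: RAM");               ok = false; }
    if (m.netKbps     > t.maxNetKbps)     { std::puts("FAIL: network bandwidth"); ok = false; }
    return ok;
}
```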
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
