Maximising server output
Members - Reputation: 220
Posted 11 September 2012 - 02:24 PM
Am I barking up the wrong tree? Thank you for your time.
Crossbones+ - Reputation: 2193
Posted 11 September 2012 - 10:33 PM
As for bandwidth usage, yeah. You take how much data you need to send per player and the send frequency, and that gives you a rough estimated bandwidth cost. Don't forget overheads either, like the UDP header overhead on every packet you send, which in practice costs a fair chunk of your bandwidth as your send frequency increases (sometimes more than the actual game data). TCP also has an extra associated cost if you end up having to deal with a lot of packet loss (meaning, resends).
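To make that estimate concrete, here's a minimal sketch. The payload size and tick rate are made-up example numbers; the 28-byte figure is the standard UDP-over-IPv4 overhead (8-byte UDP header plus 20-byte IP header), ignoring link-layer framing:

```python
def bandwidth_per_client(payload_bytes, ticks_per_second, overhead_bytes=28):
    """Bytes per second sent to one client, including per-packet overhead."""
    return (payload_bytes + overhead_bytes) * ticks_per_second

# Example: 64 bytes of game state at 20 packets per second.
total = bandwidth_per_client(64, 20)  # (64 + 28) * 20 = 1840 bytes/s
```

Note that at a 64-byte payload the headers alone are roughly 30% of what goes on the wire, which is why packing several updates into one packet pays off at high send rates.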
Everything is better with Metal.
Moderators - Reputation: 10337
Posted 12 September 2012 - 09:53 PM
The most important one is that math is pretty much "free" these days; the cost is how fast you can get the needed data into and out of cache where the CPU can get at it.
Another important reason is that the algorithms you use will typically grow on the order of n-squared in the number of players involved. If you have 10 players, then each of the ten players needs to check collision against the 10 other players, and you also need to send data about 10 players to 10 players, so, 100 data "items" to send. If you have 100 players, then each of the 100 players needs to check collision against the 100 other players, and you also need to send data about 100 players to 100 players, so, 10,000 data "items" to send.
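The counting above can be written down directly; a 10x increase in player count means a 100x increase in work:

```python
def updates_to_send(n_players):
    """State updates per tick: each player's state goes to every player."""
    return n_players * n_players

assert updates_to_send(10) == 100
assert updates_to_send(100) == 10_000  # 10x the players, 100x the items
```

This is why large-scale servers use interest management (only sending each player the subset of players near them) to knock the effective n down before the squaring happens.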
The final important reason is that it's unlikely for a server to have more than 100 Mbit of Internet bandwidth for itself. Yes, there are bigger pipes to be had (we have them at work :-) but those pipes are generally shared by a larger number of servers. So, with 100 Mbit of Internet bandwidth, that's about 10 Mbyte per second of data you can send and/or receive. With 50 Gbyte per second of memory bandwidth in a typical server, you can touch each byte 5,000 times for each time you send it on the network.
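Checking that ratio with the rounded numbers from above (100 Mbit rounded down to 10 Mbyte/s; 50 Gbyte/s is an assumed typical figure, not a measurement):

```python
net_mbyte_per_s = 10            # ~100 Mbit/s of Internet bandwidth
mem_gbyte_per_s = 50            # typical server memory bandwidth

# How many times the CPU can touch a byte for each time it is sent.
touches = (mem_gbyte_per_s * 1000) / net_mbyte_per_s  # 5000
```

In other words, the network, not the math, is almost always the bottleneck that determines player capacity.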
When it comes to performance tuning and capacity planning, the most important thing is to characterize your load, typically using measurements. Build what you need to build, then measure it under different kinds of load, along a number of axes (CPU usage, memory usage, network usage, latency, power, etc.) Once you have that data, you can do suitable capacity planning for whatever you're trying to accomplish.
If you have to design for a particular cost or performance target, then it's important that you specify the target along all axes up front. Then plan for what you can deliver within that performance goal (this typically requires lots of experience to get right.) Then continually measure yourself against that target using realistic benchmarks. After each code check-in and build, run a target system at the planned target load (typically synthetically generated,) and make sure it performs according to target specifications (number of physics steps/second, amount of RAM consumed, amount of network bandwidth used, etc.)
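A per-build performance gate of that kind can be sketched like this. All the target numbers are invented for illustration, and the `measured` dict stands in for whatever your synthetic-load harness actually reports:

```python
# Hypothetical target specifications, set up front along each axis.
TARGETS = {
    "min_physics_steps_per_sec": 60,
    "max_ram_mb": 2048,
    "max_net_mbit": 80,
}

def check_against_targets(measured, targets=TARGETS):
    """Return the names of any targets the measured run failed to meet."""
    failures = []
    if measured["physics_steps_per_sec"] < targets["min_physics_steps_per_sec"]:
        failures.append("min_physics_steps_per_sec")
    if measured["ram_mb"] > targets["max_ram_mb"]:
        failures.append("max_ram_mb")
    if measured["net_mbit"] > targets["max_net_mbit"]:
        failures.append("max_net_mbit")
    return failures
```

Wiring this into the build so that a non-empty failure list fails the build catches performance regressions the same way unit tests catch functional ones.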