
Server emulation


Over the past few days, I've been performing a network stress test, by emulating a server and a lot of clients connecting to it. In the screenshot above, you can see the test window as well as many statistics, which I'll explain in a short while.

The whole server is centered around two components. The first one is, of course, the network module. I decided to code my own a year ago instead of using an established one ( like RakNet: I heard it didn't support a massive amount of connections, due to its per-packet overhead ). The result is INetwork, an implementation of the RDP protocol ( reliable UDP ) with a few custom optimizations. It has low CPU processing overhead, automatically determines the quality of each connection in the system ( the "ping" ), can concatenate packets together to save header space, and can even integrate acknowledgment packets into data packets. It supports many levels of reliability, from fully unreliable ( pure UDP packets ) to fully reliable packets ( much like TCP ), with reliable but out-of-order packets in between. I/O completion ports are experimental under Windows, but I found that the library's performance was still excellent without them.
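To make the header-saving tricks concrete, here is a minimal sketch of concatenating several small packets into one datagram with a piggybacked acknowledgment; the wire format and names are hypothetical, not INetwork's actual ones:

```python
import struct
from enum import IntEnum

class Reliability(IntEnum):
    """The reliability levels described above."""
    UNRELIABLE = 0          # pure UDP semantics
    RELIABLE_UNORDERED = 1  # delivered, but possibly out of order
    RELIABLE_ORDERED = 2    # much like TCP

def build_datagram(payloads, ack=None):
    """Concatenate several small packets into a single datagram,
    optionally piggybacking an acknowledgment, so that one UDP/IP
    header is paid instead of one per packet."""
    header = struct.pack("!BH", 1 if ack is not None else 0, len(payloads))
    ack_part = struct.pack("!I", ack) if ack is not None else b""
    body = b"".join(struct.pack("!H", len(p)) + p for p in payloads)
    return header + ack_part + body
```

A 3-byte shared header plus a 2-byte length prefix per packet is far cheaper than paying a full 28-byte UDP/IP header for each small update.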

An interesting thing affecting the server performance is the quality of the network card. I already mentioned it in a previous entry, around 6 months ago, but some systems simply do not seem to be capable of handling a large number of clients; the CPU usage rises extremely quickly, and starts to struggle at around 100 connections. Fortunately, my test computer seems to have a good network card, and I went up to 2500 connections without too much trouble.

The second "core" component in the server is called IClustering. This is a small library responsible for determining which entities "see" which others. An entity is defined by a position in space and a radius. This radius determines the distance within which the entity is visible to other entities. For example, the entity could be a tree with a visibility radius of 1 km, which means that any other entity ( like a human ) closer than 1 km can "see" the tree. Note that the converse isn't necessarily true: if the human has a visibility radius of only 500 meters, the tree will not "see" him. Each entity can be in one of 3 states: fully static, semi-static and dynamic. Only semi-static and dynamic entities can be moved inside the space ( or change their visibility radius ). Only dynamic entities are informed about what they see or stop seeing. Technically, the implementation is based on a recursive regular grid, made of NxNxN cells ( each cell being a sub-grid ).
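The visibility rule is asymmetric and easy to get backwards, so here is a tiny sketch of the test as described above ( the function name and signature are mine, not IClustering's API ):

```python
import math

def can_see(observer_pos, target_pos, target_radius):
    """An observer 'sees' a target when it is inside the *target's*
    visibility radius; the observer's own radius is irrelevant here."""
    dx, dy, dz = (observer_pos[i] - target_pos[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= target_radius

# The tree/human example from above ( distances in meters ):
tree, human = (0.0, 0.0, 0.0), (800.0, 0.0, 0.0)
human_sees_tree = can_see(human, tree, 1000.0)  # tree radius: 1 km
tree_sees_human = can_see(tree, human, 500.0)   # human radius: 500 m
```

`human_sees_tree` comes out true while `tree_sees_human` comes out false, matching the tree/human example.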

So, with these two components, I "emulated" an MMO server. To do that, I used a few concepts:
- zones ( approximately 250 ): entities in different zones cannot see or interact with each other
- places of interest ( the green circles in the screenshot ): these are areas that attract most of the clients. In a fantasy MMO, these would be cities, buildings or dungeons. In Infinity, they represent planets or stations.
- the clients: the yellow or cyan dots on the screen. They can be in an active or idle state.

How it works:
- the server initially creates a number of zones ( 250 in my test ), with a set of places of interest in each one. The places can be recursively formed around other places ( to simulate moons orbiting planets in Infinity, or buildings in cities in other games ).
- a client emulator connects a defined number of clients per second ( between 10 and 100 in my test... yeah, that's a lot of connections per second, but the goal is to stress it after all ).
- each connecting client is assigned to a zone. Zones have different probabilities of "interest" ( to simulate zones busier than others ). The most active zones can gather ~30% of all the clients.
- the client is set into the active state and chooses a destination point. There are several probabilities for that: around 30% to choose an existing place of interest, around 50% to warp into another zone, and around 20% to choose a location where an existing client is. These numbers are for Infinity, where warping into different systems happens often.
- when a client arrives at its destination, it is set into "idle" mode. If it is warping into another zone, that zone is chosen based on the zone probabilities, and the client is moved to a random point in that zone.
- when a client is in the idle state, it has a 0.5% probability to be awakened and to choose a new destination point.
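Put together, the destination-choice logic above can be sketched as follows ( the structure, names and zone representation are mine, not the emulator's actual code ):

```python
import random

def choose_destination(rng, places, other_clients, zones):
    """Pick a destination for a client that just woke up:
    ~30% an existing place of interest, ~50% a warp into another
    zone ( weighted by zone 'interest' ), ~20% the location of
    an existing client."""
    roll = rng.random()
    if roll < 0.30 and places:
        return ("place", rng.choice(places))
    if roll < 0.80 and zones:
        # busier zones are proportionally more likely to be chosen
        weights = [z["interest"] for z in zones]
        return ("warp", rng.choices(zones, weights=weights)[0])
    if other_clients:
        return ("client", rng.choice(other_clients))
    return ("place", rng.choice(places))
```

Each idle client then gets a 0.5% chance per wake-up check to call something like this and go active again.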

- there are normally 2 "update" packets per second ( to simulate sending position + orientation ), each around 200 bytes ( this is randomized a bit too ). These are sent in unreliable mode, to all the clients that see the entity ( another client ).
- if the client is in idle mode, there are no updates.
- if the client is not seen by anybody, the update rate is lowered by a factor of 4 ( so one update every 2 seconds ).
- two updates per second can look low, but that's what Guild Wars uses. And do not forget that in Infinity, ships cannot change velocity quickly. This should be more than enough even during fights, thanks to dead reckoning.
- when a client starts or stops seeing another entity, a "creation" or "destruction" packet is sent in reliable mode ( ~300 bytes ).
- the clustering visibility computations are updated 10 times per second.
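The update-rate rules above boil down to a small decision function ( my paraphrase of the rules, not the server's code ):

```python
def update_interval(idle, observer_count, base_rate=2.0):
    """Return the seconds between position updates for a client,
    or None when no updates should be sent at all:
    - idle clients send nothing
    - unseen clients send at 1/4 of the base rate ( every 2 s )
    - everyone else sends at the base rate ( 2 updates/s )"""
    if idle:
        return None
    rate = base_rate if observer_count > 0 else base_rate / 4.0
    return 1.0 / rate
```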

Some results:
- the numbers are currently a bit biased, because I have a dual-core machine, and the client emulator takes away a lot of resources from the server.
- some zones get a high number of clients ( up to 150 per zone ), while most zones get a low number ( 0-5 ), the average being around 20 clients.
- the visibility radius of the clients is very small compared to the size of the zones and places. As a consequence, most of the time packets are only sent when some clients are in the same place.
- for 1000 clients, the upload bandwidth is around 100 KB/s, and the download bandwidth around 200 KB/s. On average, 40% of the clients are idle. The average bandwidth sent by a client to the server is 400 bytes/s, and the server returns ~500 to 1500 bytes/s. When many clients ( 10-15 ) are in the same place, this can rise to 5000-8000 bytes/s.

This post is becoming long; I'll add more information/results later.


Recommended Comments

The visibility radius is interesting - are you considering other sensory mechanisms ( like scanners ) or effectors, such as warp points? In Frontier you could track people by examining their warp points and in some cases ( if you had a fast military drive ) beat them to their destination... If you can keep your load/generation times very short and artificially lengthen them based on drive specifications, this would add a nice 'tense' feeling to travelling when someone's chasing you.

I've got a thread rambling on about sensory ( event ) propagation and attenuation lying around in the Multiplayer & Network forum... but it seems the general response is 'huh? wtf?'. ;-p

Also are you planning on shifting your zone representations around a grid to load balance? If you're expecting disparity between zone populations - likely along 'known trade routes' this is probably worth considering.

Did you emulate the planets yet? Because using a visibility sphere is trivial, but determining the visibility of 20 ships in a canyon... that's not so easy.

Yike... that's a nasty problem.

From a server point of view, I think it's probably prohibitively expensive to generate dpvs planes based on the aabb of a moving ship and an arbitrary canyon edge. Client wise you might get time to do this, but I suspect not. Occlusion in this instance is therefore likely to be left to the z or stencil buffers, depending on how the terrain is drawn.

The problem is not technically the occlusion - since you are dealing effectively with aircraft, there will often not be anything other than the curvature of the planet itself to provide occlusion (especially if Very Large ships are allowed).
The cases where terrain could provide occlusion can probably be taken as so rare as to not be the main point of contention.

Assuming that complete free-flight is allowed (it's what the graphics engine is designed for) we're left with the problem of potentially many visible entities that will require updating. Reducing update frequencies at range seems to be the only viable method of reducing network traffic within a zone. This may make large scale, close range dogfights between many combatants lag fests, if a fixed zone definition is used, with other craft in it.

A totally flexible clustering method could be a solution, with the ability to pass off sections of a zone arbitrarily to less loaded servers - most likely on a simple criterion of client population density, referenced against range. This is a similar mechanism to that described in the journal entry. The issues I can see causing difficulties with it are the rapid generation of a specific portion of the game universe on the receiving server ( box ), or the transfer of the generated data, and the informing of other servers in the grid of the change of authoritative server for the transferred region.

For a single planet, I've heard of this kind of problem being approached (you know who you are) by a modified quadtree representing 'segments' of the planetary surface - this might tie in with how the planetary surface patches are generated. The problem is you have an unknown number of 'loaded' planets at any time within your grid. Quadtrees would therefore need to be generated along with the planets, which might prove prohibitive. However, the principle behind adapting the quadtree to a sphere is a sound one that can potentially be used in this case.

Given that we have a large, central occluding body, I'd consider the use of polar coordinates coupled with altitude to specify player positions, relative to the planetary body they are in low orbit around or in the atmosphere of. We can calculate relatively quickly the polar spread of their sensory range, and also the minimum altitude at other polar positions that is not occluded. Where polar spreads overlap ( 2D circle-circle collision on an edge-wrapping area ) you have the basis for considering a cluster shift.
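For what it's worth, that overlap test could be sketched like this - a great-circle version of the 2D circle-circle check, so it wraps correctly around the sphere ( names and units are made up for illustration ):

```python
import math

def angular_separation(lat1, lon1, lat2, lon2):
    """Great-circle angle ( radians ) between two polar positions."""
    cos_angle = (math.sin(lat1) * math.sin(lat2) +
                 math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2))
    return math.acos(max(-1.0, min(1.0, cos_angle)))  # clamp float error

def spreads_overlap(pos1, spread1, pos2, spread2):
    """Two sensory 'polar spreads' ( angular radii ) overlap when
    the great-circle angle between their centers is smaller than
    the sum of the radii - the trigger for considering a cluster shift."""
    return angular_separation(*pos1, *pos2) < spread1 + spread2
```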

I'll give this some more thought, as my current project is intended to handle all this sort of stuff for people.

Keep in mind that the server will also have to generate the canyon, and generate it dynamically.
He said that he will cache some of the data, but I still suspect that this will not be enough, because the logical thing to do if a ship is out there to get you is to go to the nearest planet and hide in canyons and the like. So big dogfights can happen very close to the surface, in some cases on different sides of a planet. It will take a lot of memory and CPU time to generate those features dynamically and store them in the cache.

I don't think that at the level of individual canyons it will matter a lot. It's a trade-off between accuracy and memory, or a few more KB in bandwidth.

Yeah, hence my suggesting polar spreads as the basis for segregating groups of clients. Local geometry would still need to be generated on the receiving server. Depending on the complexity of the generating algorithms, directly transferring partially generated information may be a better alternative.


To the above - no, not quite. You have to bear in mind that the clustering also performs the task of aggregating relevancy checks - if the entire cluster is out of range no data needs to be sent to any member of it. This can save a potentially large amount of bounds checking. CPU usage is also freed by offloading congregations to lower-population servers. Memory isn't the issue, and bandwidth tends to be limited by the external pipe to the server-grid, not the internal network itself.

You can't let the client do the checks; the server has to do them, or else you get wall hacks and, even worse, the possibility to fire through mountains, enter mountains and so on.
Client-side anti-cheating methods are useless, as someone can put a proxy in between and reroute the traffic that way.
The only way to prevent cheating is to do it on the server. There is no other way.

I will be doing the checks on the server, of course, but at a much lower rate than the client. If performance is a problem, the checks will only be performed on a subset of the clients, chosen randomly.

Performing visibility tests on the clients ( I mean, randomly, to see if they are cheating ) is impossible.
The only check you can do on the clients ( randomly ) is to see if they disregard a mountain or something while firing at another ship. Or if they go astray and hide inside mountains and such.

Looks a lot like the sphere tree based network load tool for Alloy we used when I was with Silver Platter Software working on MMO middleware.

How about this....

Have the server continue to poll each individual user for their location, but have players in areas close to each other update each user located within the viewing radius, rather than the server... then, to add an extra layer, have the server randomly be the one jumping in and sending a specific player everyone's coordinates, to avoid exploits... Then, for larger battles, have very specific radii, so players 2 km from each other update their position every second, but players 15 km from each other update every 3 seconds...

Sooo, for a 500-player battle you have the server only tracking 500 user positions, entity info, yada yada, like they could be anywhere... All the players in that location realise that they are all located in the same area and send out their coordinates to 500 users, and receive 500 different coordinates, rather than receiving it from the server ( unless it's that one random check )... So 500*200 bytes is 100 KB/second, but then reduce that using some form of radius-to-update weighting, and also throw in the variable of who you're shooting/targeting ( they need to be updated the most )...

So you're shooting at one player and 20 players are directly near you; you could say 200 B every 1 s, then 200 B*20 every 3 s... then let's say there's 400 players within the next radius step, have the updates per second randomized over 3 to 8 seconds, then the final chunk of 79 updated every 8 to 10 seconds... So the number of players to update every second could be 1 + (20 / 3) + (400 / (8s - 3s)) + (79 / (10s - 8s))... which = 51 kb/s upload for having a battle of 500 users around you.

Obviously some major tweaking could be done to make it drastically lower... all this with close to the same bandwidth on the server... this could work in general too, I would think.

Urm, no. Never, never, never let an MMO client determine combat.
And NEVER let more than one client think they're resolving combat. You'll end up with contradictory updates coming IN to the server and all sorts of other horrible things. These could result in anything from false hits (a minor problem) to outright cheating, to bringing down the server if incoming information isn't validated properly at every point.

Allowing the client to determine occlusion for graphical purposes (HSR) in the case of canyons etc. is one thing, but anything that should be authoritative to other clients is a no-no.

In some network applications with large numbers of packets being moved around, you can get killed by the sheer number of interrupts the NIC has to make to get the CPU's attention that it has sent/received the latest packet. Smarter NICs/drivers are able to do a certain amount of interrupt coalescing to mitigate this. Some particularly crappy combinations also do not filter based on the target IP, and you will end up with an interrupt for every single packet that goes by on the wire, regardless of whether it's meant for your machine or not.

I didn't say let them determine combat; the server would still handle players shooting at each other. I mean for tracking positioning... and the way it would work ( in my idea ) is that every client should still get a pass ( a list of locations of other players nearby ) sent to them by the server, instead of by other players, every 40 seconds or less anyway...

Unfortunately, in this case positioning and combat equate to the same thing - given that the flight model is freeform, the server must know authoritative positions, as must clients. The frequency of updates is a factor of the maneuverability of the craft, or more precisely the predictability of movement, in a very similar way to an FPS. If the ships are nimble, the frequency must be high, otherwise you'll end up with players shooting at stuff that isn't where they think it is.

The NIC interrupt issue is another factor to consider, not only in how many updates to send, but in which machine in the grid should send them. If there's a single gateway server rather than a hardware router, this becomes a major problem. Assuming that each box in the grid effectively has its own external 'pipe', and that the network hardware ( switch/router ) can handle the player load, this issue becomes one of load balancing.

