hplus0603 - Moderator, Multiplayer and Network Programming
Member since 03 Jun 2003 - Redwood City, CA
Posted by hplus0603 on 20 August 2014 - 01:43 PM
However, that doesn't actually matter. What you want to know is "how early should I send commands so that they arrive at the server at server time T?" and "when I receive the state of an object stamped at time T, at what time should I display it?" Both of these questions can be answered with relative measurements, rather than trying to absolutely pin down the server/client send/receive latency. And both of those answers typically include some amount of latency compensation (a de-jitter buffer) that makes an absolute measurement less useful anyway.
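As a concrete illustration, here is a minimal sketch of steering the command send lead purely from relative feedback -- the server only has to report which tick a command actually arrived at. All names and the adjustment policy are invented for illustration, not taken from any real engine:

```python
class RelativeClock:
    """Track 'how early must I send so commands reach the server at tick T'
    using only relative feedback, never an absolute clock offset."""

    def __init__(self, margin=2):
        self.margin = margin   # de-jitter safety margin, in ticks
        self.lead = margin     # current send lead, in ticks

    def on_ack(self, target_tick, arrival_tick):
        """Server reports the tick at which our command actually arrived."""
        slack = target_tick - arrival_tick   # > 0: arrived early; < 0: late
        if slack < 0:
            # we were late: jump the lead up quickly, plus the safety margin
            self.lead += -slack + self.margin
        elif slack > self.margin:
            # consistently too early: creep the lead back down slowly
            self.lead -= 1

    def send_target(self, estimated_server_tick):
        """Tick to stamp on the next outgoing command."""
        return estimated_server_tick + self.lead
```

The asymmetry (jump up fast, creep down slowly) is the usual de-jitter trade-off: a late command is lost gameplay, while an early one only costs a little extra buffering.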
Posted by hplus0603 on 18 August 2014 - 01:10 PM
Given that your network throughput (10 kB/s?) is about 10,000 times less than your disk throughput (100 MB/s?), it's unlikely there will be any perceptible impact from the logging. Make sure you use a buffered I/O mechanism. fwrite() is fine, or collect into your own 4 kB buffer and flush when full. The kernel will in turn perform the actual disk write asynchronously, so there will be no stall in the writing thread.
Setting it up so that you can also play back the full stream is the pay-off for that logging -- this will let you very easily reproduce rare bugs.
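A minimal sketch of such a buffered, length-prefixed packet log with playback; the file format and names here are made up for illustration:

```python
import struct

class PacketLog:
    """Append-only log of received packets, buffered so the game thread
    never waits on the disk."""

    def __init__(self, path, bufsize=64 * 1024):
        # large userspace buffer; the kernel flushes to disk asynchronously
        self.f = open(path, "wb", buffering=bufsize)

    def record(self, recv_time_ms, payload):
        # length-prefixed records make playback trivial
        self.f.write(struct.pack("<IH", recv_time_ms, len(payload)))
        self.f.write(payload)

    def close(self):
        self.f.close()

def playback(path):
    """Yield (recv_time_ms, payload) pairs back out of the log, so rare
    bugs can be reproduced deterministically from a recorded session."""
    with open(path, "rb") as f:
        while header := f.read(6):
            t, n = struct.unpack("<IH", header)
            yield t, f.read(n)
```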
Posted by hplus0603 on 18 August 2014 - 09:14 AM
A typical game runs physics at a fixed tick rate (30-120 Hz, 60 Hz typical), runs graphics as fast as it can (with inter-/extrapolation from physics, although 60 Hz frame-locked is common), and runs networking at another fixed tick rate (10-50 Hz, 20 Hz typical.)
All messages that need to be sent to a specific client from the server, or to the server from a client, are put in a queue, and that queue is drained and a single packet sent each time the network tick comes around.
If you send so much data that the queue or network stack backs up, then the client can't keep up, and you need to either drop that client, or optimize your network traffic so that you send less data. If you send more than 10 kB/second, you're likely doing it wrong (many action FPS games get by with 2 kB/second or so per client.)
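The tick decoupling above can be sketched roughly like this; the rates match the typical numbers quoted, and the function names are illustrative:

```python
PHYSICS_DT = 1 / 60   # 60 Hz fixed-step simulation
NET_DT = 1 / 20       # 20 Hz network send rate

def run(frame_times, simulate, send_queued):
    """Drive fixed physics and network ticks from variable render frames.
    frame_times is the real elapsed time of each render frame."""
    phys_acc = net_acc = 0.0
    for dt in frame_times:
        phys_acc += dt
        net_acc += dt
        # catch up on as many fixed physics steps as real time allows
        while phys_acc >= PHYSICS_DT:
            simulate(PHYSICS_DT)
            phys_acc -= PHYSICS_DT
        # drain the outgoing message queue: one packet per network tick
        if net_acc >= NET_DT:
            send_queued()
            net_acc -= NET_DT
```

Rendering is simply "once per loop iteration," so it runs as fast as the machine allows while physics and networking stay on their fixed schedules.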
Other than that, there are two main approaches to networking: Use lock-step simulation (everyone simulates the same thing,) or use predictive display with server tie-breaking of differences. Lock-step is very robust once you get it working, and uses a very small amount of data on the network, but introduces a lag time between command and effect that you need to hide somehow. It also de-syncs if you have any kind of consistency bug. Predictive display allows you to immediately show an action on the local client, and isn't as sensitive to de-sync, but ends up with lag-induced problems like "how could you shoot me when I just dove behind this cover?"
Posted by hplus0603 on 17 August 2014 - 01:41 PM
Each entity has some type and can be programmatically created; each entity is introspectable so that all properties can be extracted.
This system can also be used for save-games and level editors, btw.
Additionally, events should similarly be introspectable and observable. Any destination for an event should use some kind of identity that's not a pointer. And, in general, sub-components of entities and of systems should generally be identified by ID, not pointer, to make talking about them on the network easier.
Then, the network system can install itself as an observer on the entity set. When an entity is created, it observes this, introspects the entity to extract its type and properties, and sends an appropriate "create this entity" message to all listeners.
You can also implement a route for events. This means that if the client wants to create an entity, the "create an entity" endpoint object in the client would route that message to the server, rather than do it locally.
Additionally, the "set" operation for properties may need to support injection of behavior. "Set" for "Position" on a locally mirrored entity might want to do over-time interpolation, rather than straight-jump, for example. A sufficient amount of template metaprogramming and smart defaults for "client" versus "server" behaviors can make this simple to express for the entity creation, but the underlying systems need pretty careful attention to detail and implementation for it to work out.
You then need to express your entire game logic in terms of these introspectable, observable properties and types. That's a lot of re-factoring if you already have a game that uses the more traditional "big struct of stuff" approach. The observable interface can be layered on top, though, by having each object register its observable properties using pointer-to-data-storage.
When you have all of this set up, you can choose whether to use networking or not (and what kind of networking) by selecting what you observe and what you inject.
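A bare-bones sketch of introspectable, observable entities plus a network observer; all class names, message shapes, and the observer protocol are invented for illustration:

```python
class Entity:
    """Entity identified by ID (not pointer), with a type and a set of
    properties that can be fully dumped (introspection) and watched
    (observation). All writes go through set()."""

    def __init__(self, entity_id, type_name, **props):
        self.id, self.type_name = entity_id, type_name
        self._props = dict(props)
        self._observers = []

    def properties(self):
        # introspection: extract every property for a "create" message
        return dict(self._props)

    def set(self, name, value):
        self._props[name] = value
        for obs in self._observers:
            obs.on_property_set(self.id, name, value)

    def observe(self, observer):
        self._observers.append(observer)

class NetworkReplicator:
    """Observer that turns entity lifecycle and property changes into
    wire messages, drained by the network tick."""

    def __init__(self):
        self.outgoing = []

    def on_entity_created(self, entity):
        self.outgoing.append(("create", entity.id, entity.type_name,
                              entity.properties()))
        entity.observe(self)

    def on_property_set(self, entity_id, name, value):
        self.outgoing.append(("set", entity_id, name, value))
```

The same set() hook is where a client-side mirror would inject over-time interpolation for "Position" instead of a straight jump.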
For what it's worth, most games don't actually do this. They instead use the "Blob" design pattern (also known as "The Big Ball of Mud") where each entity knows about the network, and Does What It Takes to make it all work. And, honestly, most games have a simple enough object model that I don't blame them; that may very well be the right choice for many such cases.
Posted by hplus0603 on 15 August 2014 - 12:29 PM
Game networking is the part that happens on top, where your particular game rules collide with the realities of the internet.
It largely has to do with planning your game rules very carefully to work around the fact that there will be latency, which is sometimes unpredictable.
Example player statements that show the problem:
"How could you shoot me; I had jumped behind that wall!"
"What do you mean the monster hit me before I cast the Freeze spell; I could see on my screen that I cast it first!"
There isn't really a library that does game rules at this level over a network, because how you implement this depends very much on how you choose to deal with the problem of latency.
Various game engines will have some kind of networking built in -- Unreal Engine 4, C4 Engine, Torque 3D, Source Engine, etc, all make different choices to various degrees.
DirectPlay is at least 10 years out of date, and was generally a bad idea even back when Microsoft still supported it.
The most common library used in networked games is likely the RakNet library, which just recently was open sourced -- yay!
You might want to start there. However, RakNet still doesn't solve the "how do I make my game rules work in a network with significant, variable latency" problem. That's up to you.
Posted by hplus0603 on 13 August 2014 - 09:45 AM
1) They use a native download client (written in something C-like, perhaps with a scripting language like Lua or Python added) and they use plain TCP sockets for communicating game data to/from the servers. Note that Unity plug-in games also fall in this camp, because the plug-in itself is written in something C-like and doesn't use the surrounding web stack.
2) They use a technology like Flash or HTML5, and use a more relaxed "MOG" definition, where immediate real time is not needed. Examples include anything from Farmville to Neptune's Pride. These kinds of games use HTTP batch-style communications, although with proper double-buffering, you can get latencies down to 100-200 milliseconds or so using this method.
Websockets aren't yet the "major" use case that's optimized the most -- but they're certainly up and coming. We use them at IMVU for some real-time interaction, and they work okay (not as well as straight TCP, but better than HTTP.)
What you want is a nice library for websockets on the server side, because the negotiation and set-up of the server side is a little bit fiddly. We used some websockets library for our Erlang based server, although it had significant challenges for us and we ended up changing a lot of that code.
For Java servers, a quick Google says that TooTallNate's Java-Websocket library is the top hit (in addition to some native Java websocket support in the JDK from Oracle.) I have no idea whether they're any good or not.
Posted by hplus0603 on 05 August 2014 - 09:47 AM
-Jitter and packet loss causing character movement to not look fluent
-High latency causing situations similar to the infamous shooting-around-the-corner problem
-Connection loss causing players to be disconnected
-Latency spikes / short periods of no packets coming through
In some sense, "welcome to the internet." :-(
For the last week, I've had extreme connection problems between my machine and the Guild Wars 2 servers, with frequent disconnects and client crashes. This is a major title, with reasonably skilled operators, using controlled servers, connecting from a major metro area (Silicon Valley.) Once you introduce user hosts (many of them, for peer-to-peer,) the problem just gets worse.
The fact that you think it's a problem that high latency causes the shot-around-the-corner problem means that you are thinking about the design in terms of local LAN multiplayer. Unfortunately, the internet is nothing like that. You may very well not be able to achieve what you want through technical means. You seem to be looking for "some way to make the network less terrible," rather than "some way to make the game and its design more robust in the face of a terrible network." I totally understand the urge, but I'm not optimistic you will make great strides in that direction.
If I had to look in that direction, I would attempt to set the QoS field of the packets, and perhaps switch to a UDP port that's frequently used for other real-time needs (such as IP telephony or IPTV,) and see if some networks pay attention to these things. Chances are, most users would be unaffected, and perhaps some home gateways would barf on the QoS bits.
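For what it's worth, setting the QoS bits on a UDP socket is a one-liner; this sketch assumes the DSCP EF (Expedited Forwarding) class that IP telephony commonly uses. Whether any network on the path actually honors the marking is entirely up to that network:

```python
import socket

# DSCP occupies the top six bits of the old IPv4 TOS byte.
# EF (Expedited Forwarding) is DSCP 46, the class VoIP traffic usually gets.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
# ... send game packets as usual; routers that care may prioritize them
```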
The real solution to the problem of the internet is more likely to include some combination of:
- visualization of detected quality and problems to users in real time
- education of users for what they can do, and what they should expect
- change the game design to be more tolerant of latency (yes, this makes some particular game designs impossible!)
- change the technical implementation to be more tolerant of changing network conditions (adaptive de-jitter buffer, smoothing in position display, tolerant of temporary black holes, etc)
Posted by hplus0603 on 04 August 2014 - 04:11 PM
1) The rendering frame rate.
2) The simulation/physics frame rate.
3) The network packet frame rate.
These are typically all de-coupled from each other. You can separately use multiple mechanisms, such as "display latest" or "interpolate between two latest" or "extrapolate from latest," to go from network to simulation, or simulation to display.
For each network tick, you will typically collect all the player input that has been seen for all simulation ticks since the last network tick. You may also collect the latest state dump of objects being updated, and/or events that have been generated since the last network tick. Then send all of those messages in a single packet (with appropriate per-message timing information.)
The network packet frame rate and the simulation frame rate are typically fixed, whereas the display frame rate depends on the performance of the computer running the particular game instance.
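The "interpolate between the two latest" option above can be sketched like so; the snapshot format is illustrative, and positions are plain scalars to keep it short:

```python
def interpolated_state(snapshots, render_time):
    """Linearly interpolate between the two snapshots bracketing
    render_time. snapshots is a time-ordered list of (time, position)."""
    older = newer = snapshots[0]
    for snap in snapshots:
        if snap[0] <= render_time:
            older = snap
        else:
            newer = snap
            break
    else:
        # render_time is past the newest snapshot: hold the last value
        # (an extrapolating variant would project forward instead)
        return older[1]
    t0, p0 = older
    t1, p1 = newer
    if t1 == t0:
        return p0   # render_time earlier than every snapshot
    alpha = (render_time - t0) / (t1 - t0)
    return p0 + alpha * (p1 - p0)
```

In practice render_time is the current time minus a small de-jitter delay, so two bracketing snapshots are normally available.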
Posted by hplus0603 on 04 August 2014 - 09:04 AM
our hypothesis that sending less might create new problems is actually true then
In general, you should see lower server load, and more players able to play well, when using less data and fewer packets.
The one area where fewer/less may hurt you is if you previously had redundancy, and now you don't, and you see occasional-but-not-infrequent packet loss.
In general, most well-designed router and transmission systems I know of would perform equally-or-better with fewer/smaller packets.
However, there may be all kinds of ill-conceived devices in the way between a player and the packet destination, not least of which is the player's wifi router or internet gateway. That may be a 10-year-old device whose designers thought prioritizing bigger streams would lead to better throughput benchmark numbers -- who knows?
If you can quantify what the packets are, and how you use them, and how much "loss" or other "degradation" you are seeing, it would be easier to make a better judgment on what's going on.
Btw: 50 to 120 packets per second is still a whole lot. I presume that you send one packet per command, instead of bundling multiple messages into a single packet sent on a fixed schedule?
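A sketch of that bundling approach, with the wire format and names invented for illustration (JSON stands in for whatever encoding you would actually use):

```python
import json

class NetChannel:
    """Queue messages and flush them as one packet per fixed network tick,
    instead of sending one packet per command."""

    def __init__(self, send_packet, tick_interval=0.05):  # 20 Hz
        self.send_packet = send_packet
        self.tick_interval = tick_interval
        self.queue = []
        self.last_flush = 0.0

    def post(self, message):
        self.queue.append(message)   # no I/O here, just enqueue

    def maybe_flush(self, now):
        """Call once per frame; sends at most one packet per tick interval."""
        if now - self.last_flush >= self.tick_interval and self.queue:
            self.send_packet(json.dumps(self.queue).encode())
            self.queue.clear()
            self.last_flush = now
```

With a 20 Hz tick, even a burst of commands costs at most 20 packets per second, and each packet amortizes its UDP/IP header over many messages.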
Posted by hplus0603 on 30 July 2014 - 06:51 PM
1) Batch services -- login, inventory, player stats, etc
2) Matchmaking services -- lobby, finding another player to play with, etc
3) Game services -- actual real-time gameplay data exchange while playing
All services in option 1) should probably be built as a horizontally scalable webapp. Use a round-robin load balancer in front of the web servers, and spin up as many web service instances as you need. Amazon has Elastic Load Balancer as a service. Or you could spin up a single instance or two of HAProxy or similar, and call that good (but beware the lack of redundancy!) Typically, all such services talk to the same database back-end, and that back end is horizontally sharded. Using in-RAM databases (like Redis,) and/or key/value stores (like Redis or Cassandra,) and/or separate caching (like memcache) may improve scalability. Just don't use MongoDB. Friends don't let friends use MongoDB :-)
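A toy sketch of the two scaling ideas in that paragraph -- round-robin across interchangeable stateless web instances, and a stable hash to pick the database shard for a given user (all names invented):

```python
import hashlib

class ShardedBackend:
    """Pick a web server round-robin, and a database shard by user id.
    Because every web instance is stateless, any of them can serve any
    request; because the shard choice is a stable hash, a user's data
    always lives on the same shard."""

    def __init__(self, web_servers, db_shards):
        self.web_servers, self.db_shards = web_servers, db_shards
        self._next = 0

    def pick_web_server(self):
        server = self.web_servers[self._next % len(self.web_servers)]
        self._next += 1
        return server

    def pick_db_shard(self, user_id):
        # sha1 rather than hash() so the mapping is stable across processes
        h = int(hashlib.sha1(str(user_id).encode()).hexdigest(), 16)
        return self.db_shards[h % len(self.db_shards)]
```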
Services in options 2 and 3 may need to live in the same instance, if you're going to be doing advanced things. Or you can simply have the matchmaker spin up a new process on one of a number of "game" servers, identify those "game" servers by IP-address-plus-port, and give each participating player an identification token along with the ip+port information. Each of the two players then connects to the same ip-address-plus-port and presents their identifying token, and the game server knows that they go together. (In fact, you can have multiple player pairs in the same process -- you shouldn't need more than one process per core, and one port number per process.)
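A sketch of that token handoff; all names are illustrative, and a real version would add token expiry and signing:

```python
import secrets

class Matchmaker:
    """Pair two players on a game server and hand both the same
    (host, port) plus a per-player token the server can verify."""

    def __init__(self, game_servers):
        self.game_servers = game_servers   # list of (host, port)
        self.expected = {}                 # token -> (match_id, player_id)
        self.next_match = 0

    def make_match(self, player_a, player_b):
        host, port = self.game_servers[self.next_match % len(self.game_servers)]
        match_id = self.next_match
        self.next_match += 1
        tickets = []
        for player in (player_a, player_b):
            token = secrets.token_hex(8)   # unguessable per-player token
            self.expected[token] = (match_id, player)
            tickets.append({"host": host, "port": port, "token": token})
        return tickets

    def verify(self, token):
        """Called by the game server when a client connects; pop so the
        token is single-use."""
        return self.expected.pop(token, None)
```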
If you want large numbers of players (over tens of thousands) chatting/matchmaking at the same time, you will need a scalable solution for that. If all you're doing is chat and matchmaking for up to 10,000 players at a time, a single instance will be significantly simpler and cheaper. Beware the single point of failure in this matchmaker, though: you may want to have a second copy already running but idle, and be prepared to swap over to it if the first one dies.
And, if you're in Amazon, you may want to replicate all customer data between two different availability zones, and have all server roles running in two different availability zones, to avoid the problems that happen when one particular zone (or data center) dies, which seems to happen about once a year.
Finally, the matchmaker service, and the game services, can end up calling your web service back-end for things like verifying game tokens, granting players special things, etc.
If you build the system like this, you can start with very small amounts of server hardware (maybe even only in a single availability zone, if you're "small and indie.") Then you can add hardware instances as needed when and if you grow.
Posted by hplus0603 on 26 July 2014 - 06:25 PM
Thus, with 1000 players and 1000 static objects, and 32 bytes per snapshot, and 120 snapshots total (for 2 seconds at 60 Hz,) I get that to less than 8 MB.
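The arithmetic, spelled out (all the numbers are the assumptions from the post above):

```python
entities = 1000 + 1000    # players + static objects
snap_bytes = 32           # bytes per entity per snapshot
snapshots = 120           # 2 seconds of history at 60 Hz

total = entities * snap_bytes * snapshots
assert total == 7_680_000               # about 7.3 MB
assert total < 8 * 1024 * 1024          # comfortably under 8 MB
```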
Additionally, you only need to snapshot the states that are actually sent as network ticks, and 60 Hz network ticks is not a good idea for games with 1000 players in the same area.
Also, if 1000 players fight each other, you're going to have other problems than snapshot memory cropping up much earlier. Physics, collision detection, rendering frame rate, client video RAM (assuming 3D), size of each packet you send, ...
Posted by hplus0603 on 24 July 2014 - 06:59 PM
if the packet gets lost on its way to the server, and the server then skips the jump-action, the player will on his client snap back to the ground where he left and, because he held forward, walk off a cliff he meant to jump over
Welcome to the Internet!
should i just send the jump and anything alike separately from the normal movement packets (and then reliable) ?
You will get the same kind of problem with any kind of movement. Perhaps the player dodged to the side to avoid a bullet? Perhaps the player moved forward to avoid a falling piano? If you lose the command, you lose the command.
You can make it less likely that the command is lost by sending an RLE-compressed set of command bits for the last X steps in each packet. This will not add much space, because movement commands typically RLE-compress very well. Then, if one packet is lost, you have to rewind and re-play physics from that point.
This will of course cause snapping of your player on the screen of all other players, and if another player somehow had you in the cross hairs, they will now be annoyed at the lag causing that snapping.
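A sketch of that redundant-input idea: include the last few command bitfields, run-length encoded, in every packet, so a single lost packet loses no input (the window size and format are invented for illustration):

```python
def encode_inputs(history, window=8):
    """Run-length encode the last `window` command bitfields. Held keys
    repeat the same bitfield for many ticks, so this compresses very well."""
    runs = []   # list of (bitfield, repeat_count)
    for bits in history[-window:]:
        if runs and runs[-1][0] == bits:
            runs[-1] = (bits, runs[-1][1] + 1)
        else:
            runs.append((bits, 1))
    return runs

def decode_inputs(runs):
    """Expand back to one bitfield per tick; the server fills in any ticks
    it missed from an earlier lost packet."""
    out = []
    for bits, count in runs:
        out.extend([bits] * count)
    return out
```

Because each packet carries the whole window, the server only loses input if `window` consecutive packets all vanish.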
You can also make it client authoritative -- let the client tell the server where the client is and what it's doing. Similarly, when a client shoots another client, let it tell the server whether it hit or not. This, of course, opens holes for cheating, so the server needs to be able to re-create any client-viewed state ("at time T0 I shot player X at time T1, when my last update from him was at time T2") which ends up being the Source network model. This is still cheat-able, but you now have a slider for how much time tunneling you will allow (which will effectively exclude any player with more lag than that from the game.)
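A sketch of the rewind check such a server needs (names invented; positions are scalars for brevity). The rewind window is exactly the "slider" mentioned above:

```python
import bisect

class LagCompensator:
    """Keep a short, time-ordered position history per player so the
    server can check a shot against where the target was at the
    shooter's view time. Shots older than max_rewind_ms are refused,
    which effectively excludes players with more lag than that."""

    def __init__(self, max_rewind_ms=200):
        self.max_rewind_ms = max_rewind_ms
        self.history = {}   # player_id -> [(time_ms, position)], ascending

    def record(self, player_id, time_ms, position):
        # call once per server tick, in increasing time order
        self.history.setdefault(player_id, []).append((time_ms, position))

    def position_at(self, player_id, time_ms, now_ms):
        if now_ms - time_ms > self.max_rewind_ms:
            return None   # too far in the past: refuse to rewind
        samples = self.history[player_id]
        # latest sample at or before the requested time
        i = bisect.bisect_right(samples, (time_ms, float("inf"))) - 1
        return samples[max(i, 0)][1]
```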
Posted by hplus0603 on 24 July 2014 - 02:49 PM
For Amazon and other virtual-server providers, if you rent a virtual server with Windows OS, you can run an EXE on their server, and expose that to the internet with a static IP, and it will work fine. (You also have control over firewall rules.)
Other providers of virtual servers include interserver.net ($8/mo), rackspace.com (pricing is hard to tell), azure.net ($15/mo), and many others.
Note that a Windows virtual server will cost more than a Linux virtual server, because of the OS licensing cost.
Also, if your game is highly latency sensitive (FPS-type games, not RPG-type games,) then you will likely find that virtualization introduces some amount of scheduling jitter that may be a problem for players. In that case, you need to rent an actual physical server, known as "root server hosting" or "self-managed server hosting." Prices for that start at $50/mo for low-end servers, and go up. Check serverbeach.com, interserver.net, or many others for this option.
Posted by hplus0603 on 24 July 2014 - 02:43 PM
1) Either speculate on what the player is doing, and be prepared to temporarily show something that's not true (such as falling off a cliff) and then correct it.
2) Or wait until you know for sure that you have accurate information, and only display that, which means there will be a delay before you display the movement/action.
Note that even with the speculation, when a player first starts moving (or stops moving,) there's no way you can speculate that they will do that, so you will have to snap/correct that kind of display no matter what.
In general, option 2 has far fewer technical problems, although many developers and some players really do not like the idea that the display of an action's results has to be delayed.
Posted by hplus0603 on 21 July 2014 - 08:56 PM
The mechanisms you'd choose and the systems you'd need to build are different.
Also, what technology are you currently using in the game? How far have you gotten on the game? Have you already structured things like encounters, loot, and character progression as server authoritative?