About hplus0603

  • Rank
    Moderator - Multiplayer and Network Programming

  1. Forge Networking Sync Animations

    I assume every animation has a "current play position" parameter, to allow seeking? When animations start playing, and while they're playing, you should probably keep sending packets of "animation X is playing, at position Y when the game time is T." On the client, receive those, and make sure the animations sync up to animation-position / game-time. When animations stop playing, of course send a packet saying they aren't playing anymore (and, if you're using lossy packets, keep sending until you're sure the client knows.)
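As a sketch of the idea above (the function and field names here are made up for illustration, not Forge Networking's API): given a received "animation X is playing, at position Y when the game time is T" packet, the client can compute where the animation should be right now.

```python
# Hypothetical sketch: compute the play position a client should seek to,
# given a sync packet of (position-at-send, game-time-at-send).

def synced_position(pos_at_send, game_time_at_send, client_game_time, play_speed=1.0):
    """Position the client should display now, assuming the animation
    kept playing since the packet was sent."""
    elapsed = client_game_time - game_time_at_send
    return pos_at_send + elapsed * play_speed

# Packet says: position 1.5 s at game time 100.0; client game time is now 100.25.
pos = synced_position(1.5, 100.0, 100.25)   # -> 1.75
```

The key point is that the packet carries game time, not just position, so late or re-sent packets still produce the correct seek target.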
  2. Client WebSocket libraries for Unity and mobile-native may not be as robust as those built into a browser. The only real reason to use WebSockets is that it's hard to get real-time communications out of browsers in other ways. When the platform is not a browser, other options open up. If you're writing native code, how about using nodejs, but with UDP sockets? They are fully supported in nodejs. Or, if you don't want to build some kind of replication protocol on top of UDP, you could use plain TCP sockets (although on lossy networks their latency behavior is worse than UDP's.)
  3. The question seems like it is entirely answered in this thread. What is it that is not working for you? Can you post the code for setting the linger option and closing the socket? Can you describe what you expect to happen in the code? Can you describe what you observe actually happening, and why you think that's not what should happen? We can't read your mind, and thus can't answer your question without information.
  4. Server battle predictions?

    Easiest is to run the simulation on the server, once, and record the movement and actions of all entities. Then let players view this recording, a bit like a videotape. You can hide the fact that the outcome is already "determined" through UI.

Second easiest is to develop the simulation code such that it is deterministic. The exact same inputs, simulated through the same simulation engine/code, should lead to the exact same output. Initialize all random number generators with known seeds. Make sure all inputs are provided in the exact same order. Step the simulation the exact same way, with the exact same simulation step size. Ideally, use integer math rather than float/double math. This will let you start the simulation, and then view it, as many times as you want. To run it "fast" on the server, simply step it forward as fast as you can. On the clients, you'd just step it forward once per render frame, to show a "real time" playback.

Regarding "random" numbers used in games (and the default random library,) they really aren't random. If you seed a random number generator with a specific value, it will generate the same sequence of numbers in order, every time you re-seed it. If all other simulation bits are the same (the same objects, taking the same damage, going through the same code paths, ...) then the random number generator will generate the same random numbers.

Perhaps a good resource for this is the "1,500 archers on a 28.8 kbps modem" article from Gamasutra. It talks about how Age of Empires implemented deterministic gameplay to support input-synchronous gameplay for an RTS. It sounds like your simulation engine has very similar requirements, with the added simplification of not allowing any real user input after the simulation starts.
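The seeded-RNG point above can be demonstrated in a few lines of Python (the `simulate_damage_rolls` name is illustrative; any deterministic code path works the same way):

```python
import random

# Two simulation runs seeded identically produce identical "random" sequences,
# which is the basis of deterministic replay.

def simulate_damage_rolls(seed, n):
    rng = random.Random(seed)           # private generator with a known seed
    return [rng.randint(1, 20) for _ in range(n)]

server_run = simulate_damage_rolls(12345, 5)
client_run = simulate_damage_rolls(12345, 5)
assert server_run == client_run         # same seed, same code path -> same rolls
```

Using a private `random.Random` instance per simulation, instead of the module-level functions, also keeps unrelated code (UI effects, etc.) from consuming numbers and desyncing the sequence.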
  5. Snapshot Interpolation

    You might want to look at the Entity Position Interpolation Code library: https://github.com/jwatte/EPIC
  6. Accounting for lost packets?

    You can still establish whatever tick rate you want, as long as you realize that some ticks may have zero inputs, or more than one input. Simply keep your own timer, and advance it by 0, 1, or more ticks each time through the main loop, based on what the time actually is, and what your tick rate is. Note that you can't send inputs for tick T from the client to the server until the client sees the beginning of tick T+1, because otherwise the next time through the main loop may still be within the time period of tick T. The other drawback, if the vsync is not well matched to your desired tick rate, is that you will get some jitter. Not all screens are 60 Hz. Some screens are even variable-rate (G-Sync comes to mind.)
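The "keep your own timer, advance it by 0, 1, or more ticks" loop above can be sketched like this (names are illustrative, not from any particular engine):

```python
import time

TICK_RATE = 60                          # simulation ticks per second
TICK_DT = 1.0 / TICK_RATE

def ticks_to_run(now, last_tick_time):
    """Whole simulation ticks elapsed since the last one run: 0, 1, or more."""
    return int((now - last_tick_time) / TICK_DT)

def main_loop_step(state):
    # Called once per render frame, whatever rate vsync gives us.
    now = time.monotonic()
    for _ in range(ticks_to_run(now, state["last_tick_time"])):
        state["tick"] += 1              # advance the simulation one fixed tick
        state["last_tick_time"] += TICK_DT
```

Advancing `last_tick_time` by exact `TICK_DT` increments (rather than setting it to `now`) keeps the tick grid fixed, so a slow frame produces extra ticks instead of silently stretching time.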
  7. Typically, when a client first connects to a socket, the server doesn't know who that client is (what account/user.) After the client/server agree on software and protocol versions, typically the first thing that happens is that the client authenticates to the server. This typically means sending a username ("this is who I claim to be") and a password ("this is how you know that I am actually who I say I am.") The server then looks up name/password in a database (typically after hashing the password using scrypt() or bcrypt() or PBKDF2) and if the database comes back with "this player exists," then the server knows that the given connection has the given player on the other end.

For TCP sockets, it's common to let the authentication be good for the duration of the TCP session. If the client needs to disconnect and re-connect, name and password can be sent again. There are also various means by which you can first log in once, and get issued a secret token (known only by the server and you) which is good for some amount of time, identifying you as who you say you are. This allows the client to forget the password the user typed in sooner, which is slightly safer if the client gets hacked somehow.

For UDP sockets, there is no "session," so you typically have to build your own session handling on top of the protocol. Using the remote IP address and port number of the UDP packet is a good start; typically the server will also issue a secret value/id ("strong random session id") to the client, who will then include that ID at the head of each UDP packet to keep proving that the packet is part of the given session.
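The UDP session-id scheme described above might look like this minimal sketch (the 8-byte framing is an assumption for illustration, not a standard format):

```python
import os
import struct

# Hypothetical framing: the server issues a 64-bit strong random session id
# after login; the client prepends it to every UDP payload it sends.

def issue_session_id():
    return struct.unpack("!Q", os.urandom(8))[0]   # cryptographically random

def frame_packet(session_id, payload):
    return struct.pack("!Q", session_id) + payload

def parse_packet(data, known_sessions):
    if len(data) < 8:
        return None                     # too short to even hold the id: drop
    session_id, = struct.unpack("!Q", data[:8])
    if session_id not in known_sessions:
        return None                     # unknown or expired session: drop
    return data[8:]                     # the actual game payload
```

In practice you would also check that the packet's source IP/port matches the session, and expire ids after some time.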
  8. Accounting for lost packets?

    Some amount of de-jitter delay on the server is often a good thing. 100 ms seems a bit much, but might be OK on your system.

What's important is to make sure that you simulate in discrete "ticks." A "tick" might be 1/100th of a second, 1/60th of a second, 1/30th of a second, or whatever. For each simulation tick on the client, there is some player input (which may just be nothing, or a repeat of what the previous tick was, or may be a new input state.) You need to send these inputs, in that order, to the server, so the server knows the relative spacing of those inputs. The server should then attempt to apply the inputs in the same order, at the same relative tick number. When something's lost, you can ignore those inputs, and the client will get corrected. When something arrives late, that's the same as "lost," although you should have a little bit of de-jitter buffer to account for this. Also, it's common to pack multiple input/simulation ticks into a single network packet -- simulate at 60 Hz, network at 15 Hz, packing four ticks' worth of inputs per packet. The packet size, transmission delay, and receive buffering will all add to input delay between client and server; this is unavoidable.

Then, when the server sends state back to players, the players always accept this state, with the caveat that the state may be "old" for the player character (because the player has already simulated ahead since it sent the inputs.) Thus, you typically want to keep a log of old player state, so you can compare against what you get from the server, and apply the delta when you receive the updates. On screen, you can choose to smoothly lerp between positions, or just "cut/jump" the character to the new spot, if the difference is too big. But the simulation state, itself, must be derived from what you get from the server at all times.

Some additional bits: To support a simulation rate that is not an even multiple of the frame rate, you may wish to support interpolation for rendering. Or you can quantize to the "nearest" simulation step at each render frame. If you simulate at 200 Hz, and render at 60 Hz, that can work OK, for example, but with simulating at, say, 100 Hz, and rendering at 60, there will be jitter that some players will notice during movement.

Snapshots should never be sent from client to server, only from server to client. The server can also forward the inputs for each other player more often than snapshots, if you want to save bandwidth. Each client can re-simulate the state of each other player based on the snapshot data and subsequent control inputs. It's not uncommon to simply send snapshots on a rotating basis; spread all entities over, say, a 3 second window, and snapshot them all during that time. If you send 15 Hz packets, that's 45 packets to send snapshots in, so if you only have 5 players, there are 8 packets without a snapshot for each packet with a snapshot. The packets also contain other-player input, as well as particular game events (say, explosions, or somesuch, that can affect gameplay for the player directly.) When there are NPCs or more players, there will be more snapshots to send.

You also typically want to send a snapshot when a player/player interaction happens, even if it's "out of order" with the scheduled updates. When two players collide on the server, you know that they will not have seen the same thing on each client, so it's best to send immediate updates to each of the affected players to make the delta time between collision and adjustment as small as possible.
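The "simulate at 60 Hz, network at 15 Hz, four inputs per packet" idea above can be sketched as a wire format (this exact layout is made up for illustration; real games usually bit-pack and delta-encode):

```python
import struct

# Hypothetical packet layout: first tick number (uint32), input count (uint8),
# then one input byte per simulation tick.

def pack_inputs(first_tick, inputs):
    return struct.pack("!IB", first_tick, len(inputs)) + bytes(inputs)

def unpack_inputs(data):
    first_tick, count = struct.unpack("!IB", data[:5])
    return first_tick, list(data[5:5 + count])

# Four 60 Hz ticks of input bitmasks, sent in one 15 Hz packet:
packet = pack_inputs(120, [0b0001, 0b0001, 0b0011, 0b0010])
assert unpack_inputs(packet) == (120, [1, 1, 3, 2])
```

Carrying the first tick number in every packet is what lets the server place each input at the right relative tick, and lets lost packets be detected as gaps in the tick sequence.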
  9. Accounting for lost packets?

    There are two ways to do networked games:

1) You (the client) are provided an initial state, and then, in order, each command given by each other player (including when the command is to be executed.) Your code is written to be deterministic; every player will run the exact same simulation with the exact same input, and will derive the exact same end state. This is not particularly common in FPS games, but very common in RTS games, and somewhere in between for RPGs. The main drawback is that debugging de-sync is a pain, and there is significant command latency between giving a command, and seeing the result (because you need all players' inputs for time T to actually show the output at time T.) RTS games hide this latency behind the "yes, sir!" animation.

2) You (the client) are provided a stream of object events -- "start showing object," "object updated," and "stop showing object." The server figures out which objects are important to you, and tells you about them. It also figures out how wide your network pipe is, and updates changing state about objects every so often. The client then does what it can to display a "good enough" state of the world (this may include speculatively simulating physics for the objects.) However, when the server gives you an update that is different from what you speculated, you need to "fix" the state of the object to be consistent with what the server says. For small changes, this is easy to hide using interpolation and such. For bigger changes -- either in time, or in things like "did I get shot or not" or "did I trigger the bomb or not" -- this may be perceived as "lag" by the player.

Your implementation sounds like it's a variant of 2). Yes, you will de-sync, almost all the time, but usually very little. Your job on the client is to try to hide the small corrections, and at least make the game still possible to play when you get big corrections.
  10. about mmo persistent player data

    Databases aren't necessarily faster than file systems. Instead, they provide features file systems don't, like block coalescing for small data items, and multiple indexing, and transparent management of millions of items, and transactional semantics. Try creating a directory with a million files in it. Most file systems will choke and slow down tremendously.

Also, to actually get a transactional update of a file, you need to:

1) write to a temp file
2) call fsync() and fdatasync() and close()
3) call rename() to replace the old file with the temp file
4) call sync()

And sync() is asynchronous, so you don't REALLY know that the rename was actually persistently committed. But at least you'll be in one of two states: old file, or new file, without any half-files. (This is on UNIX -- Windows has similar system calls available.)
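The steps above, expressed in Python on a POSIX system (Python's `os.replace` wraps `rename()`; the directory fsync at the end is the closest user-space equivalent of making the rename durable):

```python
import os
import tempfile

def atomic_write(path, data):
    """Write bytes to path so that readers see either the old file or the
    new file, never a half-written one. POSIX only: the directory-fsync
    step at the end does not apply on Windows."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)   # temp file on the same filesystem
    try:
        os.write(fd, data)
        os.fsync(fd)                  # flush file contents to disk
    finally:
        os.close(fd)
    os.replace(tmp, path)             # atomic rename over the old file
    dir_fd = os.open(dirname, os.O_RDONLY)
    try:
        os.fsync(dir_fd)              # persist the directory entry update
    finally:
        os.close(dir_fd)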
  11. And what do you do when the object that the ID references, no longer exists? Storing an ID may prevent the dereference of a stale pointer, but it doesn't prevent a bug, and in many ways, the crash from a bad pointer is better than the silent failure of a stale ID, because finding the former so you can fix it, is easier!
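To make the contrast concrete, here is a toy illustration (names made up): an ID lookup that raises on a stale ID fails loudly at the source, like a bad-pointer crash, while a `.get()`-style lookup fails silently somewhere downstream.

```python
# Toy entity table keyed by ID.
entities = {}

def spawn(entity_id, data):
    entities[entity_id] = data

def get_entity(entity_id):
    # entities[entity_id] raises KeyError on a stale ID -- loud, easy to find.
    # entities.get(entity_id) would return None silently -- the bug surfaces
    # later, far from its cause.
    return entities[entity_id]

spawn(7, {"hp": 100})
del entities[7]                        # the object no longer exists
try:
    get_entity(7)
except KeyError:
    pass                               # stale ID caught immediately at the source
```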
  12. The core problem is that "players disconnecting" is non-deterministic. You don't have control over that. What you can do in your code is have a clear interface to the non-deterministic parts of the program (player connection/disconnection and player commands) and funnel between the non-deterministic and deterministic domains (code modules) in your program at pre-determined times (typically, once every main "tick" or simulation step.)

In many ways "player X disconnected" is no different from "player X walked forward," and is another event that your system needs to react to and do the right thing for. Typically, your network connection handling code will forward events from the connection, which includes things like "player gave a command," and thus also "player disconnected," and the rest of the code simply reacts to those commands. Your world, in turn, typically will emit events such as "bomb B exploded at position C" or "player Q picked up health pack H" or whatever. This will also include "player X disconnected" so that other players and entities who need to know this find out about it.

Yes, this means that you have to sequence the shutdown of players, because other players will be referencing the original player entity after the network has disconnected but before everything is properly torn down. Typically, you solve this by having "network connection" be different from "player entity" in your object model. And, typically, you actually end up with an even finer granularity, because different objects need different lifetimes. For example, "player controller objects" are often a thing, as are "player world effects" (spells, etc.) Each of these objects has a carefully managed, well defined lifetime, and the role of the game engine is to make sure that objects live during their lifetime, emit events to those who need to see those events, and then reap the objects when they are supposed to go away.
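The funnel between the non-deterministic and deterministic domains can be sketched as a queue drained once per tick (all names here are illustrative):

```python
from collections import deque

incoming = deque()                     # filled by network callbacks at any time

def on_network_event(event):
    # Non-deterministic side: just record the event; don't touch the world.
    incoming.append(event)

def simulation_tick(world):
    # Deterministic side: drain the queue at one well-defined point per tick,
    # so "player disconnected" is handled like any other command.
    while incoming:
        event = incoming.popleft()
        if event["type"] == "command":
            world["commands"].append(event)
        elif event["type"] == "disconnect":
            # Mark for teardown; the entity is reaped later in sequence,
            # after other entities have seen the departure event.
            world["departed"].append(event["player"])
```

A real engine would also timestamp events on arrival so they are applied at the correct tick, but the single funnel point is the essential part.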
  13. Hardening a production server

The question came up: How would I go about "hardening" the host server? In theory, hardening is a simple mixture of good configuration, good software, and good operational practice.

Don't run any services you don't need to. Typically, this means just running an SSH server and your specific game server. Other services, like a local caching DNS server, are generally not a good idea!

Don't allow connections to management interfaces from the greater internet. For example, if you have a private network, don't allow SSH connections from public addresses, only from private addresses. Enforce access rules both in configuration files (assuming your servers have configurations for which addresses/interfaces they bind to) as well as using host-level filtering (iptables on Linux, Windows Firewall on Windows.)

Be fast about patching as soon as security updates are available for your OS and for servers you didn't write yourself.

For software you write, make sure that you check the size and validity of each packet. If a field of a UDP packet says "there are 312 bytes of user name" but you only received 270 bytes total, that's a bad packet. Beware signed values (char is signed and may be negative!) and test all code with all extreme edge case values.

Religiously read the logs from your server. When there is a new kind of log message you haven't seen before, research what it is and why you see it. Classify log messages into exactly two classes: "can be ignored at runtime, but useful for post-mortem debugging" and "needs human attention pronto." If some log is not useful and doesn't indicate a problem, AND you know what it means, then configure/filter that specific message out.

Keep a constant eye on metrics such as CPU load, RAM usage, network ingress/egress, CPU temperature, disk space, disk I/O rate, swap usage, and so on. Set alerts for when they fall outside normal bands.

Inside your private network, still enforce access with SSH keys, use TLS where possible, etc. Only allow cross-host connections that make sense. For example, if you have application servers, and game servers, and database servers, the game servers should not be allowed to access the database servers, but should have to go through the application servers. Enforce these rules with network filtering. Another good example is SSH command line access; typically you have a "bastion" host that you can SSH into, and SSH to other hosts is only allowed from that host and some spare/backup host. SSH directly from a game server host to an application server host would indicate non-standard use, which should be disallowed.

Keep track of what traffic you see internally. If there is a new protocol, or a new pair of hosts talking, that you haven't seen before and don't expect, alert and investigate!

When you actually get to implement this in practice, you will find that certain practices or requirements will conflict -- maybe the ability to quickly patch servers means that your servers need to be able to SSH outside of your network to grab those patches, but you don't generally want to allow SSH out from the servers to the greater internet. Or maybe there is a traditional "features versus security versus schedule" conflict. How you deal with those determines your culture. I can say that starting to do these things early is, in the long run, much easier than trying to come from behind later. If you have a mess of internal connections and millions of unaudited log messages per day, trying to bring order is a massive undertaking. On the other hand, if your game sucks because you spent all your time on hardening and not on gameplay and polish, then all that hardening work was wasted (other than as a learning exercise.)
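The "check the size and validity of each packet" advice above, including the 312-bytes-claimed/270-bytes-received example, looks like this in practice (the login-packet layout here is hypothetical):

```python
import struct

MAX_NAME = 64                          # sanity cap on claimed user-name length

def parse_login(data):
    """Validate a hypothetical login packet: uint16 name length, then name.
    Returns the name bytes, or None for any malformed packet."""
    if len(data) < 2:
        return None                    # too short to hold the length field
    name_len, = struct.unpack("!H", data[:2])   # unsigned: can't be negative
    if name_len > MAX_NAME:
        return None                    # claimed length is unreasonable
    if len(data) < 2 + name_len:
        return None                    # claims more bytes than were received
    return data[2:2 + name_len]

assert parse_login(struct.pack("!H", 5) + b"alice") == b"alice"
assert parse_login(struct.pack("!H", 312) + b"x" * 270) is None  # lies about size
```

Decoding the length as unsigned (`!H`) sidesteps the signed-char pitfall mentioned above; checking it against both a sanity cap and the actual received size rejects malicious packets before any copy happens.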
  14. For games like Farmville, or Backyard Monsters, or similar, yes, you can calculate everything you need to know when the player connects and makes the request "what is the current status?" Typically, anything that grows or evolves, will have some "grow duration" and some "start time." The game state simply consists of all the items, with their stats, and start time. Once the current time is > start time + grow duration, the object goes from "growing" to "completed." This can easily be calculated both client- and server-side. To work with time zones and clocks out of sync, you'll want the server to respond with both "this is the time the thing started" and "this is the time the server thinks it is now," and then have the client calculate a delta between what it thinks the time is, and what the server thinks the time is. If you need real time interactions between players (attacker and defender both make moves that react to each other) then you will need something with more permanent connections than the request-based PHP programming model, but for the basic games, resolving everything only when the player asks for state is totally fine.
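The clock-delta calculation above can be written out like this (the function and field names are made up for illustration):

```python
# The server replies with both "when the thing started" and "what my clock
# reads now"; the client notes its own clock at that moment, and from then on
# converts its local time into estimated server time.

def is_completed(start_time, grow_duration,
                 server_now_at_reply, client_now_at_reply, client_now):
    delta = server_now_at_reply - client_now_at_reply  # server clock minus client clock
    server_now = client_now + delta                    # client's estimate of server time
    return server_now >= start_time + grow_duration

# Server said: crop started at t=1000, takes 60 s, "my clock reads 1030 now";
# the client's clock read 5030 at that moment.
assert is_completed(1000, 60, 1030, 5030, 5065) is True   # est. server time 1065 >= 1060
assert is_completed(1000, 60, 1030, 5030, 5050) is False  # est. server time 1050 < 1060
```

Because only the delta is used, this works regardless of time zones or how far the two clocks are out of sync; the server re-checks the same condition with its own clock before granting the harvest.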
  15. This is why you want to do it with a web interface, perhaps on top of web sockets!