Community Reputation

211 Neutral

About sufficientreason

  1. State Snapshot Delta Compression and "Slippy Floats"

The downside is that the client then needs to keep a rolling history of the snapshots it has received in order to apply the delta. This is something I can currently avoid doing, at the cost of some extra bandwidth.
  2. State Snapshot Delta Compression and "Slippy Floats"

If you know what you sent in the last packet acknowledged by the client, then just use a regular "!=" against the current value. If your simulation is doing something to the position (sliding along a wall, slightly slipping down a slope, or whatever), then you should probably send that update.

    Yeah, you and Zipster are right. I'm overthinking it. Going ahead with bitwise float comparison, which should fix the problem. Thanks everyone!
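The bitwise comparison mentioned above can be sketched in a few lines. This is an illustrative Python translation of the idea, not the thread's actual C# code; the helper names are my own:

```python
import struct

def bits(f: float) -> int:
    # Reinterpret the 32-bit float's raw bit pattern as an unsigned integer.
    return struct.unpack("<I", struct.pack("<f", f))[0]

def changed(old: float, new: float) -> bool:
    # Bitwise inequality: any change at all, however small, triggers a send.
    # No epsilon, so a slowly damping value can never "slip" under it.
    return bits(old) != bits(new)
```

Because the comparison is exact, the epsilon-slipping problem from the other thread cannot occur: either the stored bits match or the value is resent.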
  3. State Snapshot Delta Compression and "Slippy Floats"

      Right, that's what I'm doing. I'm sending full values, not offsets. The problem stems from determining equality between snapshots using epsilons for floats.
  4. State Snapshot Delta Compression and "Slippy Floats"

Yeah, this is probably closest to the solution I'll end up with. If I pre-discretize the floats before storing them in the server-side snapshots (say, by converting them to 24/8 fixed point), then I can do exact comparisons from that point onward, and I won't have this problem. It does remove the dead-reckoning-style benefit of only sending values when they've changed beyond a certain point, though.

    I'm not sure it's quite that easy in this case. Comparing against the last sent value doesn't give me information about what the client actually has. I could do both: if the authoritative value is different enough from either the client's last acked value or our last sent value, then we send the new one. This is still susceptible to packet loss, though. Also, I was trying to avoid having to store an extra snapshot per client if I could.
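The pre-discretization idea above can be sketched like this (a Python illustration of the 24/8 fixed-point suggestion; the quantum and example values are hypothetical):

```python
FRAC_BITS = 8  # 24.8 fixed point, as suggested in the post

def quantize(x: float) -> int:
    # Round to the nearest 1/256 before storing in the snapshot, so all
    # later snapshot comparisons are exact integer comparisons.
    return round(x * (1 << FRAC_BITS))

# Two floats that differ by less than half a quantum compare equal after
# quantization; a larger difference does not.
a = quantize(10.000)
b = quantize(10.001)   # difference below 1/256 ~ 0.0039
c = quantize(10.010)   # difference above 1/256
```

The quantization happens once, at snapshot-store time, so the epsilon never re-enters the per-tick delta comparison.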
5. I'm using a traditional Quake-style delta compression system for my state snapshots. The general process is this:

  • The server keeps track of the past N world snapshots.
  • The server keeps track of the tick-stamp of the last snapshot the client received (sent by the client).
  • When the server wants to send a snapshot to a client, it:
    1. Retrieves the snapshot at the tick last acked by the client (if it can; otherwise it sends a full snapshot)
    2. Compares the latest snapshot to that retrieved snapshot
    3. Sends a delta

  This is pretty straightforward, and it's nice because all the client has to send back is the tick of the last snapshot it received. It works because we can assume that, even though we're only sending deltas, the client has the latest data for all values at the tick it has acknowledged.

  ...Except with floats. After months of successfully using this system, I've now run into a problem with floats that change only very slightly from tick to tick, such as damping velocities. Because I can't use strict equality with floats, I'm running into the following problem on low-latency connections:

  • Client acked at 48, server sends 49: abs(floatValue@48 - floatValue@49) < epsilon, don't send.
  • Client acked at 49, server sends 50: abs(floatValue@49 - floatValue@50) < epsilon, don't send.
  • Client acked at 50, server sends 51: abs(floatValue@50 - floatValue@51) < epsilon, don't send.

  And so on. Because the float changes so little between ticks, the value is never sent, even though after a while it can get very out of date due to this kind of epsilon-slipping. At high latency this is much rarer, because instead of comparing ticks 49 and 50 I'm comparing something like ticks 32 and 50, and the difference is big enough to overcome the approximation epsilon and be sent.

  Anyone have any ideas for how to fix this problem? I'd like to keep it so the client is only sending an ack tick.
I've thought about periodically forcing a "keyframe" snapshot with full data, but that could be expensive for many players. Wondering if any of you have encountered this problem before with delta compression. Thanks!
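The epsilon-slipping failure mode described above can be reproduced in a few lines. This is a Python sketch with hypothetical damping and epsilon values, not code from the actual game:

```python
EPSILON = 0.01   # hypothetical "changed enough to send" threshold
DAMPING = 0.995  # hypothetical per-tick velocity damping

server = 1.0     # authoritative value, damped each tick
client = 1.0     # value the client last received

# Low latency: each tick the server compares against the immediately
# preceding (acked) tick, and the per-tick change never exceeds epsilon.
for tick in range(200):
    previous = server
    server *= DAMPING                      # tiny per-tick change
    if abs(server - previous) >= EPSILON:  # epsilon delta test
        client = server                    # "send" the update

# The value was never resent, so the client has drifted far out of date
# even though every individual tick-to-tick delta was "too small to send".
drift = abs(server - client)
```

After 200 ticks the server value has decayed substantially, yet the client still holds the original value, which is exactly the slippage the post describes.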
  6. C# UDP Socket Receive - Identifying Sender via EndPoint?

Right now I'm focusing on making a fun game first; I'll worry about malicious activity a little later. One thing I should add is that I'll be using the PlayFab matchmaker, which gives you an authentication token that needs to be passed to the server on connection anyway to validate the user's identity. Of course, it would be possible to hijack a player's session if I sent that matchmaker token in plaintext. A rough sketch of what I will eventually do is as follows:

    1) Client connects independently to PlayFab via HTTPS, receives token.
    2) Client connects to my server via either HTTPS or TLS.
    3) Server provides some sort of fast key (symmetric?) to the client over HTTPS/TLS.
    4) Client disconnects from HTTPS/TLS with the key.
    5) Client begins sending UDP packets to the server, including the PlayFab auth token.
    5a) These UDP packets begin with a 4-byte CRC of the packet's header/data bytes and the network protocol version hashed together, followed by a header and the packet data.
    5b) The client uses its key to encrypt the packet+CRC and sends it to the server, including one packet with the PlayFab auth token.
    6) The server does the same for any replies.

    I'm not exactly sure of the best way to do the initial key exchange, but I'm some way away from having to worry about that. Ideally I'd just do it over UDP, but I'm not so sure how easy that will be vs. HTTPS/TLS. What this scheme should give me is tamper protection (via the CRC), secure client identification (if the CRC isn't right when the packet is decrypted, the client has the wrong key), version checking (to make sure an outdated client isn't connecting), and overall encryption, for just 4 bytes.
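Step 5a above (the 4-byte CRC over the packet bytes mixed with the protocol version) can be sketched like this. A Python illustration, with a made-up protocol version; the real scheme would run inside the encrypted envelope described in 5b:

```python
import zlib

PROTOCOL_VERSION = 3  # hypothetical version number

def _crc(packet: bytes) -> int:
    # Seed the CRC with the protocol version, then run it over the packet,
    # so an outdated client produces a mismatching CRC automatically.
    seed = zlib.crc32(PROTOCOL_VERSION.to_bytes(4, "little"))
    return zlib.crc32(packet, seed)

def seal(packet: bytes) -> bytes:
    # Prepend the 4-byte CRC; encryption of (CRC + packet) would follow.
    return _crc(packet).to_bytes(4, "little") + packet

def open_packet(data: bytes):
    # Verify and strip the CRC; a mismatch means tampering, corruption,
    # the wrong key, or a protocol-version mismatch -- drop the packet.
    crc, packet = int.from_bytes(data[:4], "little"), data[4:]
    return packet if crc == _crc(packet) else None
```

This gives the tamper/version checking described in the post for exactly 4 bytes of overhead per packet.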
  7. C# UDP Socket Receive - Identifying Sender via EndPoint?

    Okay. It looks like I won't need to attach an ID to my packets in this case. If/when I start looking into encrypting/checksumming my packets I'll get unique client identification anyway based on the keys used.
  8. C# UDP Socket Receive - Identifying Sender via EndPoint?

That 0 isn't for binding the socket; it's just an empty initialization. TryReceive will fill out that object with both the IP and port received. The socket has already been bound by the time I call that.

    In my system, the first thing a new client sends to a server is a ConnectRequest packet, which it repeats at some frequency until it gives up or receives a ConnectAccept. Until it receives a ConnectAccept, it won't respond to ping/heartbeat packets from the server. Similarly, if the server receives a ConnectRequest packet from a client it thinks is already connected, it will ignore it.

    In the situation you describe, the client would drop and stop responding to server data or heartbeats. The server would ignore the restarted client's connect attempts until it timed out the old client connection. Then the client could re-establish, and both client and server would know that this was a fresh start.

    However, is it possible for two users behind the same NAT gateway to have the same IPEndPoint?
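The server-side half of the handshake described above can be sketched as follows. A minimal Python illustration: the packet names follow the post, but the set-based connection table and method names are my own assumptions:

```python
class ConnectionTable:
    # Tracks which remote endpoints the server considers connected.
    def __init__(self):
        self.connections = set()

    def on_connect_request(self, endpoint):
        if endpoint in self.connections:
            return None            # already connected: ignore the request
        self.connections.add(endpoint)
        return "ConnectAccept"     # fresh endpoint: accept the connection

    def on_timeout(self, endpoint):
        # Old connection timed out; a restarted client at the same endpoint
        # can now re-establish, and both sides know it's a fresh start.
        self.connections.discard(endpoint)
```

The key property is that a restarted client's ConnectRequest is ignored until the stale connection times out, exactly as the post describes.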
9. Hi there,

  I'm reading from a UDP socket in C# as follows:

    EndPoint endPoint = new IPEndPoint(IPAddress.Any, 0);
    int receivedBytes =
        this.socket.ReceiveFrom(
            this.receiveBuffer,
            this.receiveBuffer.Length,
            SocketFlags.None,
            ref endPoint);

  This gives me an EndPoint for the packet's sender. Does this EndPoint object uniquely identify the sender? If not, what would be a good scheme for doing so? A naive approach would be for a connecting client to generate a random uint and include it in all of its packets (handling the very, very rare chance of collision), but perhaps there's a smarter way?
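The "naive approach" from the question (random uint per client, with collision handling) is simple to sketch. A Python illustration; the function name and set-based bookkeeping are assumptions for the example:

```python
import secrets

def assign_client_id(taken: set) -> int:
    # Draw random 32-bit ids until one is unused. With 2^32 possibilities
    # and a handful of players, a retry is astronomically rare.
    while True:
        cid = secrets.randbits(32)
        if cid not in taken:
            taken.add(cid)
            return cid
```

Including this id in every packet identifies the sender independently of its source address, which survives NAT rebinding in a way a raw EndPoint comparison does not.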
  10. Calculating Packet Loss

Just wanted to post an update on what I decided to do.

    My packet header is now 8 bytes, containing the following:

      private NetPacketType packetType; // 4 bits
      private ushort processTime;       // 12 bits
      private byte sequence;            // 8 bits
      private ushort pingStamp;         // 16 bits
      private ushort pongStamp;         // 16 bits
      private float loss;               // 8 bits

    Ping:

    Each peer sends its latest stamp in each message and stores the last ping it received from the other end (a.k.a. the pong), as well as the local time that pong was received. I only store a pong in this way if it's new, since packets may arrive out of order, so our stored pong is always the highest timestamp we've yet received. Ping/pong values are in milliseconds, stored in 16 bits, which gives us a little over a minute of rollover.

    When it's time to send, we compute the process time (the time now minus the time we last received a pong) and include that and the pong time in the packet. The recipient then computes the RTT (current time minus pong time) and subtracts the process time to get the true ping RTT. The process time is stored in 12 bits, giving us a four-second cap -- if that isn't enough, there are bigger problems elsewhere.

    Packet Loss:

    Each peer sends an 8-bit sequence id with every packet. On the recipient end we keep a sliding 64 (+1 latest) bit window of all received sequence ids. The packet loss is then computed as the percentage of unset bits in that window. If we haven't received a message in [2] seconds, then we report 100% packet loss. This gives you the packet loss for received packets. When it's time to send, we also include our own packet loss (compressed from a float to a byte) so the remote peer can know the packet loss for their sent packets.

    Note that you could use these sequence windows to detect duplicate packets and prevent processing them, but I handle that at a higher level, so this low-level implementation doesn't address that.
I don't personally trust the 8-bit sequence id and a 64-bit history bitvector to be robust enough to decide when to drop or keep packets.

    If you're interested in seeing how this works in detail, the low-level UDP library I'm working on is here. Still in progress, but reaching maturity (with an intentionally small feature set). Thanks for the help!
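The sliding receive window described above can be sketched like this. A Python illustration of the 64(+1 latest)-bit scheme; the exact bit-packing and class layout are my own assumptions, not the library's code:

```python
WINDOW = 64  # history bits kept behind the latest sequence id

class LossTracker:
    def __init__(self):
        self.latest = None  # highest 8-bit sequence id seen so far
        self.window = 0     # bit i set => sequence (latest - 1 - i) arrived

    def on_receive(self, seq: int):
        if self.latest is None:
            self.latest = seq
            return
        delta = (seq - self.latest) & 0xFF  # 8-bit wraparound distance
        if delta == 0:
            return                          # duplicate of the latest packet
        if delta < 128:                     # newer: shift history forward
            self.window = (((self.window << 1) | 1) << (delta - 1)) \
                          & ((1 << WINDOW) - 1)
            self.latest = seq
        else:                               # older packet arrived late
            back = (self.latest - seq) & 0xFF
            if 1 <= back <= WINDOW:
                self.window |= 1 << (back - 1)

    def loss(self) -> float:
        # Fraction of the last WINDOW slots that never arrived.
        return 1.0 - bin(self.window).count("1") / WINDOW
```

Late (out-of-order) packets still set their bit, so reordering alone does not register as loss, only genuinely missing sequence ids do.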
  11. Calculating Packet Loss

That works. I could bin by the second, recording the number of packets received in each second in the appropriate bin of a cyclic buffer. Then a smooth, more or less level graph would indicate good connection quality, while a fluctuating graph would indicate poor connection quality. I could also easily get an average ping per second using more or less the same bins.

        Unfortunately, "wasn't available by the time X was needed" is actually complicated in my system. I do a per-entity dejitter, with different send rates for each entity, so knowing whether an arbitrary packet arrived with its data just in time is actually a per-entity decision and not really something I could express in a quality meter.

        EDIT: I'm actually going to try another approach. Will update once that's done.
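The per-second binning idea above can be sketched with a small cyclic buffer. A Python illustration; the bin count and timestamps are arbitrary example values:

```python
from collections import deque

class PacketRateBins:
    # Cyclic buffer of per-second packet counts for a quality graph.
    def __init__(self, seconds: int = 10):
        self.bins = deque([0] * seconds, maxlen=seconds)
        self.current_second = None

    def on_packet(self, now_seconds: int):
        if self.current_second is None:
            self.current_second = now_seconds
        while now_seconds > self.current_second:
            self.bins.append(0)        # roll over to a fresh bin
            self.current_second += 1
        self.bins[-1] += 1             # count the packet in this second

    def rates(self):
        # A level graph suggests a steady connection; spikes suggest jitter.
        return list(self.bins)
```

The same structure could accumulate ping samples per bin to get the average-ping-per-second graph mentioned in the post.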
  12. UDP - Custom Checksum or Built-In?

As in, I shouldn't rely on it?

    So far I wasn't planning on encrypting my game state packets, since I didn't figure it was worth the computational cost. If anyone has any resources on what to do with a C# byte array in terms of protecting my packets, though, I'd love some reading material. Everything I've read so far is very conflicting.
  13. Calculating Packet Loss

The one problem I'm having with that is that I'm using a dejitter buffer, so an out-of-order packet isn't necessarily bad and could still be safely processed at a higher level. Is there an alternative that takes this into account? In that snippet, a newSeq of 132 vs. a lastSeq of 134 would return 0, but both could safely be passed to the dejitter buffer and processed in time.

      I basically just want this as a way of reporting to the player why things might be behaving poorly if they're playing on Starbucks wifi next to a microwave. It doesn't have to be very precise, but I'd like it to be as accurate as possible.
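The 132-vs-134 case above comes down to wraparound-aware sequence comparison. A Python sketch of the usual serial-number trick for 8-bit ids (function names are my own):

```python
def seq_newer(new: int, last: int) -> bool:
    # 8-bit serial-number comparison: a wraparound difference under half the
    # sequence space (128) means "newer", so 2 is newer than 250.
    diff = (new - last) & 0xFF
    return diff != 0 and diff < 128

def seq_distance_behind(new: int, last: int) -> int:
    # How far behind the latest id an older packet is; a dejitter buffer
    # can accept small distances instead of rejecting the packet outright.
    return (last - new) & 0xFF
```

With this, 132 vs. 134 is not "newer", but it is only 2 behind, so the dejitter buffer can still accept it while the statistics layer separately counts it as received rather than lost.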
  14. Calculating Packet Loss

Tried doing some searches here, but I couldn't actually find an answer to this. Let's say I'm putting a 1-byte rolling sequence number in each of my packets. On the receiving end, what's the cheapest way to calculate packet loss purely for statistics reporting (I don't care about reliability, etc.)?
  15. UDP - Custom Checksum or Built-In?

I'm thinking about the checksum mostly for hardware issues along the way; malicious tampering would be beyond the scope of the checksum's protection. In the case of a bad checksum, I would just drop the packet (since I have a fairly robust reliability layer). I'd rather drop the packet (repeatedly if necessary, ultimately resulting in that user timing out) than allow strange values to corrupt my simulation.

      I wasn't aware the checksum only covered the header. However, if corruption in the wild is extremely rare, I won't worry about it. Eventually, if I want to add shared-key encryption or something (is this actually practical for most moment-to-moment game data?), I'll probably add an encrypted checksum as part of the protection.

      Are there any other data safety measures I should be adding to my packets? Currently I send them more or less "naked". My five header bytes are just the message type, two bytes for a ping timestamp, and two bytes for a pong timestamp. The rest is just raw game state data. Eventually I'll have an initial handshake packet that checks against a known hail message and probably includes a protocol version number, once I'm a little more stable.
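The five-byte header described above (message type, ping stamp, pong stamp) packs cleanly with no padding. A Python sketch of the layout; field order and endianness are assumptions for illustration:

```python
import struct

def pack_header(msg_type: int, ping: int, pong: int) -> bytes:
    # 1 byte message type + 2-byte ping timestamp + 2-byte pong timestamp,
    # little-endian, no alignment padding => exactly 5 bytes.
    return struct.pack("<BHH", msg_type, ping & 0xFFFF, pong & 0xFFFF)

def unpack_header(data: bytes):
    # Returns (msg_type, ping, pong) from the first 5 bytes of a packet.
    return struct.unpack("<BHH", data[:5])
```

The 16-bit timestamps wrap, so the receiver has to compare them with wraparound-aware arithmetic rather than plain subtraction.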