

Member Since 21 Mar 2003
Offline Last Active May 25 2016 07:16 AM

#5292782 Understanding Peer 2 Peer

Posted by on 21 May 2016 - 02:29 PM

You just need better authentication / encryption.


Basically, never work with raw sockets and IP addresses directly if you can avoid it. Negotiate secure P2P connections, preferably via a secure server, which can also help with NAT punch-through and packet reflection.


Even then, don't trust the data sent by clients. But that's more about game hacks and exploits, and how your game handles cheating, than a straight-up security issue.


Here's roughly how it looks in APIs such as Xbox Live, PSN...


1) player A creates a game session. That game session then resides on a secure server. 


2) player A registers himself with the game session (probably part of the game session creation process anyway).


3) all communications between game server and player A are uniquely encrypted and secure.


4) player B searches for the game session.


5) player B finds the game session, then registers himself with the game session.


6) all communications between game server and player B are uniquely encrypted and secure.


7) player A gets notified by the game server that player B is registered with the game session. He also receives security information on how to connect to player B.


8) player B gets notified by the game server that player A is registered with the game session. He also receives security information on how to connect to player A.


9) Player A, or player B, or both then attempt to connect to each other. Their communications will be encrypted, and only decipherable by them (ideally). 


and so on...
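The registration steps above can be sketched roughly like this. Everything here (the SessionServer class, the token calls, the tuple layout) is made up for illustration; it is not any real Xbox Live / PSN / Steam API:

```python
import secrets

class SessionServer:
    """Toy stand-in for the secure matchmaking server (hypothetical API)."""
    def __init__(self):
        self.sessions = {}  # session_id -> list of (player, key)

    def create_session(self, player):
        # steps 1-2: player creates the session and registers himself with it.
        session_id = secrets.token_hex(4)
        self.sessions[session_id] = []
        self.register(session_id, player)
        return session_id

    def register(self, session_id, player):
        # Hand the new player a key (stands in for real key negotiation, steps 3/6)
        # and notify existing members how to reach him (steps 7-8).
        key = secrets.token_hex(16)
        notifications = [(other, player, key)
                         for other, _ in self.sessions[session_id]]
        self.sessions[session_id].append((player, key))
        return key, notifications

server = SessionServer()
sid = server.create_session("playerA")           # steps 1-3
key_b, notes = server.register(sid, "playerB")   # steps 4-8
# notes tells playerA how to connect to playerB; step 9 happens out of band.
```

In a real implementation the "key" would come out of a proper key exchange between the server and each player, and the actual P2P connect (step 9) is a separate NAT punch-through dance.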


The game layer sees secure addresses being established under the hood, with payloads coming in. The game code doesn't really deal with raw IPs any more (on Steam, it's just player Steam IDs). Which makes it a little bit tricky as far as debugging is concerned, but hey.

#5285876 Syncing Client/Server

Posted by on 08 April 2016 - 12:52 PM

A coarser grain is detrimental. After all, you need to send those 'bitfield headers'.

Assuming you do the delta on single bytes, for the sake of argument, the gain will be 8:1 compression at best, since the bitfield header alone costs one bit per byte.

A vector is usually atomic: if one component changes, it's safe to assume the other components also changed to some degree.

If you run into long runs of bitfield headers (lots of little components changing all over the place), it can mean you need to group those components based on their frequency and times of change, thus reducing the header size and the frequency of delta transmissions.

You can even write meta-data statistics about it, and estimate how you could improve your traffic by bundling up components more efficiently.
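As a concrete sketch of what such a 'bitfield header' delta looks like (the component layout and the one-byte mask limit are hypothetical):

```python
def delta_encode(prev, curr):
    """Delta-encode curr against prev (lists of equal-sized component byte
    strings). Emits a change bitmask followed by only the changed components."""
    mask = 0
    payload = b""
    for i, (old, new) in enumerate(zip(prev, curr)):
        if old != new:
            mask |= 1 << i
            payload += new
    return bytes([mask]) + payload  # 1-byte header handles up to 8 components

def delta_decode(prev, packet, sizes):
    """Rebuild the current state from the previous state plus a delta packet."""
    mask, offset = packet[0], 1
    out = []
    for i, old in enumerate(prev):
        if mask & (1 << i):
            out.append(packet[offset:offset + sizes[i]])
            offset += sizes[i]
        else:
            out.append(old)
    return out

prev = [b"\x01\x02", b"\x00\x00", b"\xff\xff"]
curr = [b"\x01\x02", b"\x09\x00", b"\xff\xff"]
packet = delta_encode(prev, curr)  # only component 1 changed
```

With one header byte for up to 8 components, sending the single changed component costs 3 bytes here instead of 6 for the full state; grouping components that tend to change together is what keeps those mask runs short.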

Something I actually experimented with before: the delta-encoding was on fixed-length byte buffers.

I ran a crude statistical analysis on the frequency of byte changes, then remapped the byte order by frequency of change. That gave me runs of contiguous bytes that would change, and not change, at roughly the same time, with the static data that never changed up front.

Then I ran a simple RLE to build up my delta packets. That gave quite decent compression. Not as optimal as a more application-aware delta-compression algorithm, but hey.
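A rough reconstruction of that experiment; this is not the original code, just the idea (remap bytes so the static ones sit up front, then RLE the changed/unchanged runs):

```python
from collections import Counter

def build_remap(snapshots):
    """Order byte indices by how often they changed across successive
    snapshots, static bytes up front, so changing bytes cluster at the end."""
    changes = Counter()
    for a, b in zip(snapshots, snapshots[1:]):
        for i, (x, y) in enumerate(zip(a, b)):
            if x != y:
                changes[i] += 1
    return sorted(range(len(snapshots[0])), key=lambda i: changes[i])

def rle_delta(prev, curr, remap):
    """Run-length encode the delta as (changed?, run_length, bytes) tuples,
    walking the buffer in remapped order."""
    runs, i = [], 0
    while i < len(remap):
        changed = prev[remap[i]] != curr[remap[i]]
        j = i
        while j < len(remap) and (prev[remap[j]] != curr[remap[j]]) == changed:
            j += 1
        payload = bytes(curr[remap[k]] for k in range(i, j)) if changed else b""
        runs.append((changed, j - i, payload))
        i = j
    return runs

snaps = [bytes([5, 10, 7, 20]), bytes([5, 11, 7, 21]), bytes([5, 12, 7, 22])]
remap = build_remap(snaps)   # static bytes 0 and 2 end up first
runs = rle_delta(snaps[1], snaps[2], remap)
```

The remapped buffer collapses into two runs here: one "unchanged" run covering the static bytes, one "changed" run carrying the two volatile bytes.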

#5285287 Syncing Client/Server

Posted by on 05 April 2016 - 11:07 AM

The server shouldn't be building one global cache. To be more precise, it should build a cache for each client. So: no client, no cache.


1) client connects.

2) The cache for that client is empty. Therefore the delta against an empty cache is a full state, so the server sends the full state.

- NOTE : the server keeps sending full states until the client starts acknowledging something.

- NOTE 2 : you can have a blocking full state: send the first, full state reliably, then wait for the client to ack it before sending him delta updates.

3) From then on, the server can send deltas to that client, since it has received an ACK from the client.
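The per-client cache logic above, as a minimal sketch. Class and field names are made up, and a real server would keep one snapshot per unacknowledged SQN rather than just the last acked state:

```python
class ClientView:
    """Server-side cache for one connected client (hypothetical names)."""
    def __init__(self):
        self.acked_state = None  # nothing acknowledged yet: empty cache

    def make_update(self, world_state):
        if self.acked_state is None:
            return ("FULL", world_state)   # step 2: empty cache -> full state
        delta = {k: v for k, v in world_state.items()
                 if self.acked_state.get(k) != v}
        return ("DELTA", delta)            # step 3: delta against the acked cache

    def on_ack(self, acked_world_state):
        self.acked_state = acked_world_state

view = ClientView()
world = {"p1": (0, 0), "p2": (5, 5)}
kind, payload = view.make_update(world)    # no ACK yet -> FULL, every time
view.on_ack(world)                         # client acknowledges that state
world = {"p1": (1, 0), "p2": (5, 5)}
kind2, payload2 = view.make_update(world)  # now only p1's change is sent
```

Disconnect the client, throw the ClientView away, and the next connection naturally starts from a full state again.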


It's easier to see the client as an observer of the game state. What you send to the client is his view of the game, and subsequent updates are changes to that view. The SQN / ACK numbers are only there for 1-to-1 transmission, and are not a global number used by all clients. You can do that too, but from experience it's unnecessary and quite messy.

#5280393 Issues with client prediction

Posted by on 09 March 2016 - 11:05 AM

Here's some quick pseudo-code:

It's basically a dumbed-down version of the standard host-authoritative client prediction.

struct Vector
    float x, y;

struct Inputs
    int mouse_vx;
    int mouse_vy;
    int mouse_buttons;

struct Player
    Vector pos;
    Vector vel;

    void update(Inputs inputs)
        // .....
        // ....
        // ... 
        // ..

// inputs for each frame.
struct InputsPacket
    Inputs inputs;
    int    frameId;

// player updates on server.
struct SnapshotPacket
    Player player;
    int    frameId;

struct Client
    // list of inputs since last server update.
    std::list<InputsPacket> inputsQueue;
    // predicted player position on client.
    Player player;

    // the current frame id.
    int frameId;

    // initialise to a player position.
    void reset(Player _player, int _frameId)
        player = _player;
        frameId = _frameId;

    // update for frame.
    void update(Inputs inputs)
        // increment frame id.
        frameId++;

        // create input packet.
        InputsPacket inputPacket(inputs, frameId);

        // cache input packet (newest first, to match the erase/replay below).
        inputsQueue.push_front(inputPacket);

        // update player.
        player.update(inputs);

        // send latest input packet to the server.
        sendToServer(inputPacket);

    // remove inputs from frame id, down to the oldest. 
    bool acknowledgeFrameId(int frameId)
        // iterate inputs frames that haven't been acknowledged yet.
        for(std::list<InputsPacket>::iterator it = inputsQueue.begin(); it != inputsQueue.end(); ++it)
            // found matching inputs frame.
            if(it->frameId == frameId)
                // erase older inputs. We won't need them anymore.
                inputsQueue.erase(it, inputsQueue.end());
                return true;

        // couldn't find the inputs for that frame. 
        return false;

    // host sent us a player update for that frame id.
    void receiveFromServer(SnapshotPacket snapshotPacket)
        // remove inputs up to the frame id.
        if(acknowledgeFrameId(snapshotPacket.frameId) == false)
            // frame id is invalid. don't do anything.
            return;

        // rewind player to the server frame id.
        player = snapshotPacket.player;

        // replay inputs that haven't been acknowledged, from oldest to newest
        // (the queue is stored newest-first, hence the reverse iteration).
        for(std::list<InputsPacket>::reverse_iterator it = inputsQueue.rbegin(); it != inputsQueue.rend(); ++it)
            // update player.
            player.update(it->inputs);

1. The client stores the input for each frame and sends the input along with the frame number.

2. When the client receives a world update from the server, it sets the client's frame id to what the update says (provided it's the most recent update; otherwise the client ignores it).

3. The client rewinds the player position to what the update says, and removes old frames from the cache, since we won't need them any more.

4. The client replays all inputs from after the server frame id, up to the latest.

And that's about it, basically. No messing about with latency and whatnot. It's all taken care of automatically, you'll just have more frames to replay when the latency increases. At least, I don't think you have to do anything.

The rest of the standard client-side predictions are just optimisations, not really affecting the general algorithm.

EDIT : BTW, if your transport is limited to UDP (i.e. not reliable), instead of sending a single inputs packet per update from the client, send the entire inputsQueue, which is in effect the list of inputs that haven't been acknowledged by the server. Or just ignore packet loss altogether and send single frames, or a couple of frames at a time.
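A sketch of what "send the entire inputsQueue" can look like on the wire, assuming an input is (frame id, mouse deltas, buttons) as in the pseudo-code above. The struct format is just an example:

```python
import struct

ENTRY = "<Ihhh"  # frame_id (u32), mouse_vx, mouse_vy, buttons (s16 each)

def pack_inputs_queue(inputs_queue):
    """Pack every un-acked (frame_id, vx, vy, buttons) input into one datagram,
    newest first. The redundancy makes the stream tolerant of packet loss."""
    payload = struct.pack("<H", len(inputs_queue))
    for frame_id, vx, vy, buttons in inputs_queue:
        payload += struct.pack(ENTRY, frame_id, vx, vy, buttons)
    return payload

def unpack_inputs_queue(payload):
    (count,) = struct.unpack_from("<H", payload, 0)
    offset, out = 2, []
    for _ in range(count):
        out.append(struct.unpack_from(ENTRY, payload, offset))
        offset += struct.calcsize(ENTRY)
    return out

queue = [(12, 3, -1, 0), (11, 2, 0, 1), (10, 0, 0, 1)]  # frames 12..10, newest first
datagram = pack_inputs_queue(queue)
```

Three redundant frames here cost 32 bytes; any single lost datagram is covered by the next one, with no reliability layer needed.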


EDIT 2: bugs!

#5280302 Multiplayer without port forwarding

Posted by on 09 March 2016 - 12:33 AM

If it's a Steam game and you're using the Steam SDK, it's all done for you. Steam does the tunneling and packet reflection for you; it uses libjingle for NAT punch-through, plus dedicated relay servers.

#5279763 Slow seeming Replication and how to accurately test networking latency

Posted by on 05 March 2016 - 08:25 PM

Nope, not that one. You'll probably have better luck searching for the Network Emulation Toolkit.

I think I last downloaded it from here :


As I said, it's not exactly easy to find. I believe it can be part of Visual Studio, but I always used (and preferred) the standalone version.

Wireshark :


#5279645 Slow seeming Replication and how to accurately test networking latency

Posted by on 05 March 2016 - 06:00 AM

I would expect the lag to be around half the RTT, up to the full RTT in some conditions. Not half a second.

What are the send rates? You also have to account for those. Usually around 20 Hz, which will add a ballpark 50 milliseconds of perceived lag (i.e. on top of the RTT measured at the socket).
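As a quick sanity check on those numbers (a hypothetical helper; 60 ms RTT and the 20 Hz rate are just ballpark figures):

```python
def perceived_lag_ms(rtt_ms, send_rate_hz):
    """Rough perceived lag: one-way trip plus worst-case wait for the next
    send at the given rate. Ignores frame timing and interpolation delay."""
    send_interval_ms = 1000.0 / send_rate_hz
    return rtt_ms / 2.0 + send_interval_ms

lag = perceived_lag_ms(rtt_ms=60, send_rate_hz=20)  # nowhere near 500 ms
```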

Maybe Unreal is throttling your traffic internally to some ridiculously low level, meaning that the send rates and update rates will decrease to reduce bandwidth usage.

I still use NEWT for cheap internet simulation. It's an old Xbox 360 SDK tool. Can be hard to find, but it's standalone.

Can also use wireshark to analyse your traffic (measure bandwidth, and do all sorts of analysis).

#5158340 Network Game Design / Architecture Question

Posted by on 05 June 2014 - 04:26 AM

The simple way :

- Client sends inputs to the host.

- host processes inputs.

- host calculates the game state (positions of objects).

- host sends game state to clients.


Example : The original Quake networking (for LAN games).

Advantages : simple.

Disadvantages : lag becomes a serious issue. Round-trip latencies will make the client input laggy.




The way it is corrected :

- client reads his inputs, and calculates a local prediction of the player position.

- client sends inputs to the host, as well as his local prediction calculation. The input+prediction packet is marked with a unique number (aka Sequence Number, aka SQN).

- client also stores the input+prediction packet in a buffer.

- Host receives the client's inputs+prediction+SQN.

- host calculates the real player position, using those inputs.

- host then compares the player location with the client prediction.

- if a difference is detected (beyond an arbitrary margin of error), the host sends a correction packet back to the client (containing corrected_position+SQN).

- client receives the correction packet.

- client looks up the original input+prediction packet in his buffer, using the SQN.

- client substitutes the predicted position with the corrected position.

- client then replays his input stack from that point onwards, to adjust his current prediction with the server correction.

Example : Counter-Strike, Quake 3, Unreal Tournament, Team Fortress...

Advantages : Nullifies the input latency, by allowing the client to locally predict the effect of the player's inputs.

Disadvantages : the client will experience jitter and rubber-banding when a discrepancy between the client prediction and the server calculation is detected (i.e. two players jumping on top of each other).
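The host-side check in the corrected scheme might look like this sketch (the margin, the `simulate` callback and the packet layout are all hypothetical):

```python
MARGIN = 0.5  # arbitrary error tolerance, in world units

def server_check(inputs, predicted_pos, sqn, simulate):
    """Host side: recompute the position from the client's inputs, compare it
    with the client's prediction, and return a correction packet
    (corrected_position + SQN) if the error exceeds the margin, else None."""
    real_pos = simulate(inputs)
    error = ((real_pos[0] - predicted_pos[0]) ** 2 +
             (real_pos[1] - predicted_pos[1]) ** 2) ** 0.5
    if error > MARGIN:
        return (real_pos, sqn)
    return None

def move(inputs):
    # stand-in for the real player simulation
    return (inputs["dx"], inputs["dy"])

ok = server_check({"dx": 1.0, "dy": 0.0}, (1.0, 0.1), sqn=7, simulate=move)
bad = server_check({"dx": 1.0, "dy": 0.0}, (4.0, 0.0), sqn=8, simulate=move)
```

On receiving `bad`'s correction, the client would look up SQN 8 in his buffer, substitute the position, and replay the newer inputs on top.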

#5152313 Why is it that some say that open source multiplayer games are less secure?

Posted by on 08 May 2014 - 08:05 AM

Open source makes finding exploits a lot easier. If you play it fast and loose, it'll be a lot quicker for hackers to find exploits in your code, or even in your design assumptions. If your game isn't particularly secure but the code cannot be reverse-engineered easily, it can take more time. Ultimately, if your game is popular, you will be found out either way.

#5150331 Your most valuable debugging techniques for Networked games?

Posted by on 29 April 2014 - 07:37 AM

- Make sure you also do some unit testing.


More stuff, maybe :


- Internet conditions simulation. Either basic, for a quick test, or pro-level stuff, with dedicated hardware. Will not replace real-world conditions but I find it always useful.


- Since you're dealing with binary data (most likely), some form of checksum / type-checking / range-checking verification. This mostly applies to debugging things like serialisation / deserialisation and marshaling.


- Some form of binary -> readable text conversion on packets. Usually packets are tightly packed binary streams and virtually unreadable. So, some form of analyzing packet content can be useful if you run into trouble with your packets turning up as garbage.


- Recording packets, game states, inputs, and replay features. Pretty high level, and a lot of work in itself, so you may want to skip that. 


Checksums and replays are especially useful if you intend to run a deterministic lock-step engine.
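For the checksum and binary-to-readable-text points above, a minimal sketch (CRC32 as the checksum; a real lock-step engine would also checksum game state, not just packets):

```python
import zlib

def checksum_packet(payload):
    """Append a CRC32 so the receiver can detect corrupted/garbled packets."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def verify_packet(packet):
    """Return the payload if the checksum matches, else None."""
    payload, crc = packet[:-4], int.from_bytes(packet[-4:], "little")
    return payload if zlib.crc32(payload) == crc else None

def dump_packet(payload):
    """Binary -> readable text, for eyeballing packets that turn up as garbage."""
    return payload.hex(" ")

pkt = checksum_packet(b"\x01\x00\x2a")
readable = dump_packet(b"\x01\x00\x2a")  # "01 00 2a"
```

A flipped byte anywhere in the payload makes `verify_packet` reject the packet, which is usually the fastest way to localise a serialisation bug.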


At a game level : 


- A basic versioning system. If your packet / message format isn't backward compatible, you'll end up with garbage data. 


- Dependencies check. Making sure whoever connects to your server has all the custom content needed to be able to play the game: the right version, the map data, etc...

- A replay system can also help you catch strange game exploits. And if your game supports replays anyway, then why not.

#5148952 Your most valuable debugging techniques for Networked games?

Posted by on 23 April 2014 - 07:26 AM

Logging everything.

#5148567 Fast-moving 2D collisions?

Posted by on 21 April 2014 - 12:22 PM

to summarise :


1) approximate to a point and do a raycast test. 

2) use a swept sphere test (solving a second-order equation).

3) use sub-step collision detection. Divide your frame timestep into smaller timesteps (based on the speed and size of the object), and do multiple checks along the path.


1) can be a good approximation for 'bullet hell' type games: do a segment intersection from the bullet's start position to its end position, frame to frame.

2) is probably the most accurate, but more math-intensive. You can also do more complex swept tests with polygons, test fast-moving objects against each other, and do proper physics as well (points of contact, accurate deflection and collision response, etc...).

3) is the easiest to do (since it's just multiple collision detection tests), but not the cheapest computationally, or the most accurate. It can also help with fast-spinning objects, like blades.
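Option 2 for circles, as a sketch (a hypothetical helper; positions and velocities are relative between the two objects over one frame):

```python
import math

def swept_circles_hit(p_rel, v_rel, radius_sum):
    """Swept-sphere (circle) test in 2D. p_rel/v_rel are the relative position
    and velocity over one frame; returns the first time of impact in [0, 1],
    or None. Solves |p + v*t| = radius_sum: a second-order equation in t."""
    a = v_rel[0] ** 2 + v_rel[1] ** 2
    b = 2.0 * (p_rel[0] * v_rel[0] + p_rel[1] * v_rel[1])
    c = p_rel[0] ** 2 + p_rel[1] ** 2 - radius_sum ** 2
    if a == 0.0:
        return 0.0 if c <= 0.0 else None   # no relative motion this frame
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                        # paths never come close enough
    t = (-b - math.sqrt(disc)) / (2.0 * a) # earlier root = first contact
    return t if 0.0 <= t <= 1.0 else None

# Bullet closing 10 units per frame on a target 8 units away, combined radius 1:
t = swept_circles_hit(p_rel=(8.0, 0.0), v_rel=(-10.0, 0.0), radius_sum=1.0)
```

No matter how fast the bullet moves, the contact time comes out of the equation directly; there's no step size to tune, unlike option 3.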

#5146782 Catching illegal movement with partial data

Posted by on 13 April 2014 - 06:13 PM

What you have isn't really server-authoritative, and therefore isn't secure. You need to send all the inputs of the clients to the server, as well as the client's predicted position every now and then, so the server can correct the client if needed. If you send partial inputs, you leave yourself open to exploits. If you send all the inputs, then you know exactly what the client tried to do, where it landed, and whether you need to send a correction. Then the client can 'rewind' his input stack and correct himself.




Client->server bandwidth is at a premium but luckily, if the client only has to send inputs, commands, and the occasional position, that bandwidth usage is actually very low, even at 100 Hz. Do the maths and see how much data that actually is, e.g. 100 fps x 100 bytes per input = 10 kbytes/sec = 80 kbits/sec (plus TCP packet overheads). Secondly, since you will send your inputs reliably, you can use compression techniques to further reduce that bandwidth.
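The maths above, in helper form (a hypothetical function, ignoring the TCP/IP packet overheads mentioned):

```python
def input_bandwidth_kbits(send_rate_hz, bytes_per_input):
    """Client->server input bandwidth in kbit/s, before protocol overhead."""
    bytes_per_sec = send_rate_hz * bytes_per_input
    return bytes_per_sec * 8 / 1000.0

bw = input_bandwidth_kbits(100, 100)  # the post's worst case
```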


And since you use TCP, if you start sending too much too fast, the TCP stream will stall (i.e. the send() call will not complete, or will block). But in any case, short of a network issue, you should easily be within the bandwidth limitations of home broadband. What you can do, in case your TCP stream starts breaking up, is pause the game on the client side to let the stream recover.

#5145294 How we solved the infamous sliding bug

Posted by on 08 April 2014 - 05:23 AM


FWIW: That's basically what GGPO does, as well as various other networked physics engines. It's not that hard to build if you can either store a log of physics state for each object, or you can allocate all needed physics state in a single block of RAM. You just have to make sure your engine is set up for it ahead of time, rather than trying to retro-fit it.

Does GGPO also do full determinism? I know it does the rewind/resimulate thing, but I've never heard that it also does full determinism.



In general, if you think you need perfect determinism, you may instead be able to adapt your algorithms to use looser tolerances.


What I'm thinking of :


1) some error tolerance on your objects (position, velocity, etc...), beyond which a de-sync is detected.


2) rollback of the objects affected by the de-sync, re-applying the fix that reduces or eliminates the error.


3) replay of those de-synced objects up to the current time.
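Steps 1-3 in sketch form, on a one-dimensional "position" to keep it short (the tolerance, the names and the `step` function are all made up):

```python
TOLERANCE = 0.25  # step 1: arbitrary per-object error tolerance

def detect_and_fix(history, inputs_by_tick, authoritative, tick, now, step):
    """If the state we recorded at `tick` drifted past TOLERANCE from the
    authoritative value, roll back to it and replay inputs up to `now`.
    `history` maps tick -> position; `step` advances the simulation one tick."""
    if abs(history[tick] - authoritative) <= TOLERANCE:
        return history[now]                # step 1: within tolerance, no de-sync
    pos = authoritative                    # step 2: rollback + apply the fix
    for t in range(tick + 1, now + 1):     # step 3: replay up to current time
        pos = step(pos, inputs_by_tick[t])
        history[t] = pos
    return pos

step = lambda pos, dx: pos + dx            # trivial stand-in simulation
history = {0: 0.0, 1: 1.0, 2: 2.0}         # what we simulated locally
inputs = {1: 1.0, 2: 1.0}                  # inputs applied at each tick
fixed = detect_and_fix(history, inputs, authoritative=0.5, tick=0, now=2, step=step)
```

Only the de-synced object pays the rewind cost; everything inside the tolerance is left alone, which is the whole point of trading perfect determinism for loose tolerances.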

#5144844 How we solved the infamous sliding bug

Posted by on 06 April 2014 - 04:53 PM

It's a very difficult problem, actually, especially peer-to-peer. Lag is the enemy, even a minute amount of it. That's why most games run through an authority (the server) that controls the game state, while clients merely predict locally what's happening and get corrected by the host. Which causes jumping, re-syncs, etc. when two characters collide (see the original Counter-Strike).


There are possibilities. Some of them involve using loose tolerances for collisions, making them softer and more forgiving, and letting the collisions resolve themselves on each client. Which can generate rubber-banding physics when two players collide. Players will lose sync as well: what you see on one machine will not match what you see on the other, and will take a while to resync. Not nice at all.


Something I always wanted to try was using in-sync, RTS-style game states. Every player has the same global, deterministic view of the game, with their inputs running at the same ticks and sequence-numbered. Every tick, the game state is saved on each machine, and you sort of 'predict' each remote player's input ahead of time, while you run the local player's inputs ahead. Once you receive a remote player's inputs, you can rewind the entire game state and replay the game from that point on to arrive at a new locally simulated game state. This means you will have a deterministic game on every machine, and consistent physics across the game. The problem now becomes: what happens when the prediction diverges from the local simulation? And that, I don't know until I can run an experiment myself.


It's a bit like how RTS games handle networking. Think of it like Braid, but in a multiplayer context (except Braid, of course, breaks the time continuum as part of the game mechanics).


You can also look up Gaffer's networked physics articles, but in general, that kind of problem is often resolved ad hoc, with hacks, basically breaking your internal laws of physics (turning collisions off) for a short while. That's one reason I hate the peer-to-peer topology.


It's still valid, we're making games, not shooting rockets at Mars, but it's a bit 'dirty'. Latency is, in general, nasty and a difficult problem. That's why we have all sorts of techniques to hide lag as best we can: client prediction, server correction, lag compensation. And even then, it's still, in general, crap. You can't bend time; unless you have consistent sub-frame latency, you'll always suffer lag.