
simonlourson

Member Since 27 Mar 2008
Offline Last Active Jul 23 2013 03:13 AM

Topics I've Started

How to handle object creation while replaying past gamestates

25 June 2013 - 06:13 AM

Hi.

 

I'm developing a fast-paced networked action game. In my game, the server keeps a circular buffer of past gamestates, so I can replay client input in the past (for client-side prediction).

 

My question is: how do I handle object creation in the circular buffer?

 

For example, let's say the server receives an input from client A at frame 10, telling it to fire bullets (my bullets are ballistic, not instantaneous). Three bullets are created, and I send the creation of these objects to each of the clients.

 

When I replay my circular buffer during the next timestep, those bullets will be re-created, and not necessarily with the same IDs. That is a problem, because I don't want to resend the creation of these bullets to each client!

 

Is my question clear enough? How can I handle the fact that some actions need to be definitive (object creation), while at the same time replaying my circular buffer of past gamestates against new user input?
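To make the problem concrete, here is a minimal C# sketch (all names hypothetical, not from my actual code). With a global counter, replaying frame 10 hands out different IDs on every replay; deriving the ID deterministically from the frame, shooter, and bullet index would make replays reproduce the same ID:

```csharp
using System;

// Minimal sketch of the id problem (hypothetical names).
class IdExample
{
    static int nextId = 0;

    // Naive: the id depends on how many objects were ever created,
    // so replaying the same frame yields a different id each time.
    public static int NaiveBulletId() => nextId++;

    // Deterministic: the same input replayed at the same frame
    // always produces the same id.
    public static int DeterministicBulletId(int frame, int shooter, int index)
        => (frame << 16) | (shooter << 8) | index;
}
```

This is only a sketch of one possible direction, not a claim that packing bits like this is the right scheme for a real game.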


How to implement a circular buffer of previous gamestates

14 June 2013 - 08:54 AM

Hi!

 

I am trying to develop a fast-paced networked action game. I've read some articles and tutorials on the subject, and I've hit a problem in my implementation.

 

I've successfully implemented client side prediction, but I've run into a performance issue.

 

Both my server and my clients maintain a circular buffer of previous gamestates.

 

The server has this buffer so it can replay client inputs that arrive late because of lag.

 

The client has this buffer so it can replay its own inputs (the ones that have not yet been acknowledged).

 

Now here comes the performance issue: I am coding in C#, and the way I went about implementing my circular buffer is this:

 

Each GameState is stored in a LinkedList. The first item in the LinkedList is the most recent; the last is the oldest.

 

When I need to replay part of my circular buffer, I do it like this:

 

1) I clone the last GameState.

2) I store the clone into Last.Previous.

3) I update Last.Previous.

 

This "works": client-side prediction works flawlessly, my simulation is deterministic, and the entity I control never "snaps".

 

However, with all the cloning going on, the C# garbage collector is on its knees, and it's starting to show!

 

Is there a best practice (more efficient than what I am doing currently) for implementing a circular buffer of past gamestates and replaying them?
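For reference, here is a rough C# sketch (hypothetical names, much simpler state than my real game) of one common alternative to cloning: a fixed-size ring where every GameState is allocated once up front, and advancing the buffer copies data into an existing slot instead of allocating a new object, so the garbage collector has nothing to collect:

```csharp
using System;

// Hypothetical minimal gamestate: just an array of positions plus a frame number.
class GameState
{
    public float[] Positions = new float[64];
    public int Frame;

    // Overwrite this state with another one, without allocating.
    public void CopyFrom(GameState other)
    {
        Array.Copy(other.Positions, Positions, Positions.Length);
        Frame = other.Frame;
    }
}

// Fixed-size ring buffer: all states allocated once in the constructor, then reused.
class StateRing
{
    readonly GameState[] slots;
    int head; // index of the most recent state

    public StateRing(int capacity)
    {
        slots = new GameState[capacity];
        for (int i = 0; i < capacity; i++) slots[i] = new GameState();
    }

    public GameState Newest => slots[head];

    // Advance one tick: the oldest slot is recycled as the new "newest" state,
    // seeded with a copy of the previous state (no allocation after startup).
    public GameState Push()
    {
        int next = (head + 1) % slots.Length;
        slots[next].CopyFrom(slots[head]);
        head = next;
        return slots[next];
    }
}
```

The trade-off is that CopyFrom must be kept in sync with whatever the gamestate contains, but all the allocation happens once at startup rather than every replayed tick.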


Interpolation vs Client side prediction

09 August 2010 - 03:04 AM

Hi.

I am developing an action game to be played over the internet.

Here's what I have so far:

My basic client architecture:

Clients send their inputs to the server at 10 Hz; the server sends updates to the clients at 10 Hz as well.

The client interpolates between two frames using cubic splines, and I'm pleased with the results, except that I have to wait a full RTT before an object starts to move when the player presses a button.

Now I wanted to implement client-side prediction, so I implemented the technique from the FAQ's link to gafferongames.org, which is to store an array of the player's last inputs and replay them when I receive a server update. So for the object controlled by the player I use client-side prediction, and for the other objects I use cubic splines.
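As a sketch of that technique (hypothetical names, 1D movement only, not my actual code): the client keeps the inputs the server hasn't acknowledged yet, applies them locally right away, and re-applies the still-pending ones on top of each authoritative state it receives:

```csharp
using System;
using System.Collections.Generic;

// Minimal 1D sketch of input replay for client-side prediction.
class Predictor
{
    // Inputs not yet acknowledged by the server, oldest first.
    readonly Queue<(int seq, float move)> pending = new();
    public float PredictedPos;

    public void ApplyLocalInput(int seq, float move)
    {
        pending.Enqueue((seq, move));
        PredictedPos += move; // predict immediately, no waiting a full RTT
    }

    // Server says: after simulating input 'ackSeq', the position was 'serverPos'.
    public void OnServerUpdate(int ackSeq, float serverPos)
    {
        // Drop everything the server has already simulated...
        while (pending.Count > 0 && pending.Peek().seq <= ackSeq)
            pending.Dequeue();

        // ...then rebuild the prediction from the authoritative state.
        PredictedPos = serverPos;
        foreach (var (_, move) in pending)
            PredictedPos += move;
    }
}
```

In this sketch the authoritative position simply replaces the predicted one, which is exactly where the "snap" described below comes from when the server disagrees with the prediction.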

The problem is that when the object controlled by the player takes an "unexpected turn" (e.g. because of an unpredictable collision), this object will "snap", because I'm no longer using the cubic-spline smoothing algorithm...

I can't quite get my head around somehow mixing those two algorithms.

Any insight?

[Edited by - simonlourson on August 9, 2010 10:32:04 AM]

Additive distortion mapping [Pixel Shaders] [2D] [Post Processing]

23 June 2010 - 01:38 AM

Hello all.

A prettier copy of this post is located here:
http://forums.xna.com/forums/p/55922/340732.aspx#340732

I have found an interesting way to store displacement data in a texture, so I'm posting it here in case it helps anyone.

What I mean by "displacement data" is a texture I use to store my 2D distortion effects (like a shock wave, heat haze, or bubbles, in which you take the color of one pixel and put it in another).

Here is my "test" grid : http://img18.imageshack.us/img18/9602/gridq.png

And here is the grid distorted using the pixel shader I'll be using : http://img682.imageshack.us/img682/5864/griddistorted.png

Now, the way I used to do this was to store the x and y displacement coordinates in the red and green values of my texture, 0.5 in each channel meaning no displacement at all.

Here is what the displacement map looks like when I use this method : http://img37.imageshack.us/img37/9108/shadersourcerg.png

Now, there are a number of problems with this, the number 1 problem being that I have found no easy way of blending two of these distortion maps together (should two shock waves occur near each other, for example).

This is because I'm forced to use 0.5 as my "no distortion" color, since with this method I can have negative displacement coordinates.

The solution I have found is to use three axes instead of two.

This is a picture explaining the coordinate system: http://img295.imageshack.us/img295/6601/trigo.png

As you can see, with my 3-axis (RGB) coordinate system, I don't have negative values anymore!

Look at the end of this post to find the HLSL code to convert from xy coordinates to rgb coordinates.

Here is what the displacement map for the previous shader looks like when I use this method : http://img156.imageshack.us/img156/1906/shadersourcergb.png

Now for the advantages of this method over the previous one:

* The "no distortion" color is now black, which means I can blend two distortion images with a simple SpriteBatch, using additive blending. I can now have two overlapping shock waves.

* Because the "no distortion" color is now black, I can modulate the power of my effect by tinting the texture when drawing it. If I multiply the texture by Color.Black, there will be no distortion at all. If I multiply it by new Color(0.5f, 0.5f, 0.5f), the distortion will be half as powerful.

* Because I can dynamically modulate the "power" of the distortion, I can also scale down my effects' size. (If I scale down an effect's size without scaling down its power, it will not look the same, because the effect will be smaller but the displacement coordinates will stay the same.)

* I can create complex effects using Photoshop and a red / green / blue brush. For an example, click here for the texture map, and here for the resulting distortion:

http://img707.imageshack.us/img707/5725/shadersourcesmudge.png
http://img824.imageshack.us/img824/7334/griddistortedsmudge.png

Now, on with the HLSL code:

First we need a function that takes u and v axis vectors and an "oldCoord" vector. The float2 it returns is the coordinates of the oldCoord vector in the coordinate system spanned by u and v:

float2 ChangerRepere(float2 u, float2 v, float2 oldCoord)
{
    // This is standard matrix inversion, I won't be explaining it here
    float a = u.x;
    float b = v.x;
    float c = u.y;
    float d = v.y;

    float div = (a * d) - (b * c);

    float inv_a = d / div;
    float inv_b = -b / div;
    float inv_c = -c / div;
    float inv_d = a / div;

    float2 r;

    r.x = (oldCoord.x * inv_a) + (oldCoord.y * inv_b);
    r.y = (oldCoord.x * inv_c) + (oldCoord.y * inv_d);

    return r;
}

Now, to transform our xy coordinates into rgb coordinates:

// The offset, in screen coordinates, of the pixel we want to take the color from.
// All my distortion shaders use this, so it's a good value to store.
float2 difference = ...;

// The vectors representing our new axes.
// The values for these vectors are obtained using simple trigonometry.
float2 r = float2(0, 1);
float2 g = float2(-0.86602540378443864676372317075294, -0.5);
float2 b = float2(0.86602540378443864676372317075294, -0.5);

// The coordinates of "difference" along the red and green axes
float2 rg = ChangerRepere(r, g, difference);
// The coordinates of "difference" along the green and blue axes
float2 gb = ChangerRepere(g, b, difference);
// The coordinates of "difference" along the blue and red axes
float2 br = ChangerRepere(b, r, difference);

// Used to multiply the output color so it is visible
float colorMult = 5;

float4 color = float4(0, 0, 0, 1);

// If the coordinates along the red-green axes are both positive, the point is in the upper left quadrant
if (rg.x >= 0 && rg.y >= 0)
{
    color.r = rg.x * colorMult;
    color.g = rg.y * colorMult;
    color.b = 0;
}
// If the coordinates along the green-blue axes are both positive, the point is in the bottom quadrant
else if (gb.x >= 0 && gb.y >= 0)
{
    color.r = 0;
    color.g = gb.x * colorMult;
    color.b = gb.y * colorMult;
}
// If the coordinates along the blue-red axes are both positive, the point is in the upper right quadrant
else if (br.x >= 0 && br.y >= 0)
{
    color.r = br.y * colorMult;
    color.g = 0;
    color.b = br.x * colorMult;
}

return color;

The inverse operation, i.e. transforming a color from the texture back into an offset we can use, is much simpler:

// Our axes. They need to be the same axes used before.
float2 r = float2(0, 1);
float2 g = float2(-0.86602540378443864676372317075294, -0.5);
float2 b = float2(0.86602540378443864676372317075294, -0.5);

// The color multiplier. This also needs to be the same value as before.
float colorMult = 5;

// Our offset, in screen coordinates.
// distMap is the sampler containing the texture where we stored our distortion map.
// Tex is the coordinates of the current pixel being processed by the shader.
float2 difference =
    r * tex2D(distMap, Tex.xy).r / colorMult +
    g * tex2D(distMap, Tex.xy).g / colorMult +
    b * tex2D(distMap, Tex.xy).b / colorMult;
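To convince myself that the two conversions really are inverses, here is the same math transcribed into plain C# (the HLSL above is the real shader; this is just a check, with hypothetical helper names standing in for ChangerRepere and the shader bodies):

```csharp
using System;

static class RgbOffset
{
    // The three axes, 120 degrees apart, same values as in the shader.
    static readonly (double x, double y) R = (0, 1);
    static readonly (double x, double y) G = (-0.86602540378443865, -0.5);
    static readonly (double x, double y) B = (0.86602540378443865, -0.5);
    const double ColorMult = 5;

    // Same as ChangerRepere: coordinates of p in the (u, v) basis,
    // i.e. apply the inverse of the 2x2 matrix whose columns are u and v.
    static (double x, double y) ChangeBasis(
        (double x, double y) u, (double x, double y) v, (double x, double y) p)
    {
        double div = u.x * v.y - v.x * u.y;
        return ((v.y * p.x - v.x * p.y) / div,
                (-u.y * p.x + u.x * p.y) / div);
    }

    // xy offset -> (r, g, b): pick the axis pair where both coordinates
    // are non-negative, matching the if/else chain in the shader.
    public static (double r, double g, double b) Encode((double x, double y) d)
    {
        var rg = ChangeBasis(R, G, d);
        var gb = ChangeBasis(G, B, d);
        var br = ChangeBasis(B, R, d);
        if (rg.x >= 0 && rg.y >= 0) return (rg.x * ColorMult, rg.y * ColorMult, 0);
        if (gb.x >= 0 && gb.y >= 0) return (0, gb.x * ColorMult, gb.y * ColorMult);
        return (br.y * ColorMult, 0, br.x * ColorMult);
    }

    // (r, g, b) -> xy offset, same weighted sum as the inverse shader snippet.
    public static (double x, double y) Decode((double r, double g, double b) c)
        => (R.x * c.r / ColorMult + G.x * c.g / ColorMult + B.x * c.b / ColorMult,
            R.y * c.r / ColorMult + G.y * c.g / ColorMult + B.y * c.b / ColorMult);
}
```

Decoding an encoded offset recovers the original exactly, because only two channels are ever non-zero and the decode is just the weighted sum of those two axes.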

I would gladly hear your comments, if there is a more efficient way of doing this, etc...

Client / Server Lag compensation

28 April 2010 - 01:30 AM

Hello all. I have designed and developed a small number of singleplayer games using C# and XNA, and I now want to take my hobby to the "next level": multiplayer.

After a "naïve" implementation of a small client-server game (the server sends all object positions to all clients each frame, and the clients send their input each frame to the server), I realized this was NOT a clever way to go. The game played fine over a LAN, but was downright unplayable over the internet. And by unplayable, I mean each player-controlled object would seem to move every other second on the clients.

So I have done a bit of research, and I have come across some good articles about lag compensation, both client- and server-side. The following is what I have gathered from those articles, and how I plan on implementing my game so it can be playable over the internet. I am posting it here because it is my first attempt, and I hope to receive constructive criticism from you.

I will be testing these concepts on a VERY simple 2D multiplayer game: 2 square boxes controlled by players using WASD. Each press on a key adds velocity to the boxes. Boxes can bounce off walls, each other, and level geometry.

== Architecture ==

Client / Server: multiple clients will connect to a single server. The clients will send only their input to the server, and the server will send back useful info to the clients, so they can draw the scene for the player.

The server will update its game state 30 times per second. It will send the world info to the clients 10 times per second. The server will not send ALL the world info each time. Static objects, such as the world layout, will only be sent once, reliably, unless they change (delta compression). The server will send the following info about each player-controlled object:

* current position
* current velocity

== Lag compensation ==

Let's assume there is 200 ms of round-trip lag between the server and the client.
I.e., the server receives the client's input 100 ms after it is sent, and the client sees the feedback of its actions 200 ms after the button press. Obviously, this is not acceptable. Because of this, we have to compensate for lag, both client-side and server-side.

=== Client side prediction ===

To compensate for lag client-side, the client must be able to do more than just read the objects' positions. The client receives from the server the position and velocity of the objects it must draw. Now, the problem is, when the client receives this info it is already outdated, because it is what was happening on the server 100 ms ago.

Because the client knows its round-trip ping to the server (200 ms), it knows approximately when the values were sent (100 ms ago). The client also knows how many updates per second the server runs (30), so it can compute an approximate position for the objects by adding the velocity to the position 3 times (because in the 100 ms the data takes to get from the server to the client, the server will have run 3 more updates).

To be able to do this, the client must be able to reproduce the code executed by the server. To achieve this I plan on passing a parameter to all my update methods: a flag, either "SERVER" or "CLIENT". If the parameter is "SERVER", the method runs the full simulation, i.e. adding velocity, checking collisions, firing weapons, etc. If the flag is "CLIENT", the method only adds velocity.

Of course this is only an approximation, so when the client receives new values for the objects' positions, it should replace its computed values with the "real" ones sent by the server. The quickest way to do this is to simply overwrite the values. But if I do that, the objects will seem to "snap" around. So I will be interpolating between the "real" values and my computed ones.
There is a great article on this site about cubic splines, so that is what I'll be using. With this method, we are able to calculate the position of all moving objects. But for the client controlling his own player object, we can do much better, because the client knows what keys are pressed in real time! So instead of blindly taking the position and velocity sent by the server, we can apply the user input to them directly.

That's it for client-side lag compensation. But it doesn't change the fact that our input still takes 100 ms to get to the server... That's where server-side lag compensation comes in.

=== Server side lag compensation ===

As I said, the input from the clients arrives at the server 100 ms after it was sent. To do things right, the server should apply the actions sent by the client to the world state as it was 100 ms ago, in server time. The way to do this is simple: we store the world states in an array a.

* a[0] is the present world state.
* a[1] is the world state one update ago (33 ms ago, if our server runs at 30 Hz).
* a[2] is the world state two updates ago (66 ms ago, if our server runs at 30 Hz).
* ... and so on.

The more updates we store, the "further back" we are able to apply changes, but the more CPU and memory it takes. An example: the server knows its ping to each of the clients, so when it receives a command, it knows how long ago the command was sent. In our case, when the command arrives 100 ms after it was sent, the server would apply the command to a[3], then a[2], then a[1], to obtain the current position of the object -> a[0].

*** That's all I got. If I apply all these concepts to my game, I'm fairly confident it'll run smoothly at acceptable pings. It would be awesome if you guys could correct anything wrong I've said, because I haven't implemented these concepts yet.
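The a[0..n] rewind-and-replay described above could be sketched like this in C# (1D positions only, hypothetical names; a real server would store full world states and run the full simulation when re-applying ticks):

```csharp
using System;

// Minimal sketch of server-side rewind: one (position, velocity) pair per
// stored tick. history[0] is "now", history[k] is k updates ago.
class ServerHistory
{
    readonly (float pos, float vel)[] history;

    public ServerHistory(int ticks, float pos, float vel)
    {
        history = new (float, float)[ticks];
        for (int i = 0; i < ticks; i++) history[i] = (pos, vel);
    }

    public float CurrentPos => history[0].pos;

    // Normal update: shift everything one slot back, then integrate one tick.
    public void Tick()
    {
        for (int i = history.Length - 1; i > 0; i--) history[i] = history[i - 1];
        history[0].pos += history[0].vel;
    }

    // A command that arrived 'ticksLate' updates late (e.g. 3 for 100 ms at
    // 30 Hz): apply it to the old state, then re-simulate forward to a[0].
    public void ApplyLateCommand(int ticksLate, float addedVel)
    {
        history[ticksLate].vel += addedVel;
        for (int k = ticksLate - 1; k >= 0; k--)
        {
            history[k].vel = history[k + 1].vel;
            history[k].pos = history[k + 1].pos + history[k + 1].vel;
        }
    }
}
```

With the buffer above, a command that is 2 ticks late changes the stored state 2 slots back and then ripples forward, so the present state a[0] ends up where it would have been had the command arrived on time.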
