

Ollhax

Member Since 22 Jan 2012

Topics I've Started

Figuring out the cause of jitter

12 May 2013 - 12:19 PM

Hi there! I'm having some trouble dealing with jitter in my networked game. I've created a simplified example that should explain the problem:

 

Client A updates its position, P, each frame. For simplicity, say that P is incremented by one each frame. The game updates 60 frames per second, and every 3 frames (20 updates / second) the client sends this position to Client B.

 

B receives the position and uses this value. However, since we expect less than 60 updates per second from A, B will try to predict the increment to P by adding one to it each frame. This works just fine - most of the time.
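For concreteness, B's per-frame handling of P boils down to something like this (a simplified sketch; predictedP and TryReadLatestUpdate are just stand-ins for my actual code):

// Sketch of client B's per-frame update for the remote position P:
// snap to A's value whenever an update arrives, otherwise predict the +1 increment.
void UpdateRemoteP()
{
    int? received = TryReadLatestUpdate(); // null on frames with no packet from A
    if (received.HasValue)
        predictedP = received.Value;       // adopt A's authoritative value
    else
        predictedP += 1;                   // predict the increment locally
}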

 

What I'm seeing is some jittery movement on B's end, caused by packets arriving at an uneven pace. For example, here's a printout of P at the end of each frame:

 

P: 4
P: 5
P: 6  <- Got update from A, value = 6. All is well since we expect one update every three frames.
P: 7
P: 8
P: 9  <- Got update from A, value = 9.
P: 10
P: 11
P: 12 <- Missed an update!
P: 12 <- Got update from A, value = 12. Since it was one frame late, P stands still and we get jitter.
P: 13
...

 

I guess I can't expect packets to arrive exactly on time, even though I'm running the test on a single computer with all clients at 60 FPS. But what should I do about it? The obvious answer is smoothing, but I'd need a lot of it to cover up the jitter - the difference is often more than a frame or two. Are these results to be expected, or would you suspect that something is wrong here?
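The kind of smoothing I have in mind would be a small de-jitter buffer: hold received values for a couple of packet intervals and sample them a fixed delay behind the newest data, interpolating in between, rather than applying them the moment they arrive. A rough sketch (class and constant names are just placeholders, not code from my game):

using System.Collections.Generic;

// Rough de-jitter buffer sketch: store received (frame, value) samples and
// sample them DelayFrames behind the current frame, interpolating between
// the two surrounding samples. Old samples could be pruned; omitted here.
class JitterBuffer
{
    private readonly SortedList<int, float> samples = new SortedList<int, float>();
    private const int DelayFrames = 6; // roughly two packet intervals of slack

    public void Add(int frame, float value)
    {
        samples[frame] = value;
    }

    public float Sample(int currentFrame)
    {
        int target = currentFrame - DelayFrames;
        int prevFrame = int.MinValue, nextFrame = int.MaxValue;
        float prevValue = 0f, nextValue = 0f;

        foreach (KeyValuePair<int, float> s in samples) // iterates in ascending key order
        {
            if (s.Key <= target) { prevFrame = s.Key; prevValue = s.Value; }
            else { nextFrame = s.Key; nextValue = s.Value; break; }
        }

        if (prevFrame == int.MinValue) return nextValue; // nothing old enough yet
        if (nextFrame == int.MaxValue) return prevValue; // no newer sample: hold the last value

        float t = (target - prevFrame) / (float)(nextFrame - prevFrame);
        return prevValue + t * (nextValue - prevValue);
    }
}

The trade-off is that B always displays A a few frames in the past, which is usually far less noticeable than the stutter.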


Handling inputs server-side

06 May 2012 - 11:37 AM

Hi there,

I'm working on a 2D platform game using the Quake 3 networking model. The client sends inputs 20 times a second, i.e. three inputs per packet. To avoid problems with "bursty" connections (see here), I process the received inputs directly on the server during a single frame. Since the inputs as well as the physics (gravity etc.) of the game affect the player entities, I essentially run the entire physics update for the specific entity three times in one frame when the server gets the packet.
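In code terms, that burst processing looks roughly like this (simplified; PlayerEntity, ApplyInput and StepPhysics stand in for my actual entity update):

// Simplified sketch of the burst processing described above: when a packet
// with several inputs arrives, the entity is stepped once per input within
// the same server frame.
void OnInputPacket(PlayerEntity entity, IEnumerable<InputState> inputs)
{
    foreach (InputState input in inputs)
    {
        entity.ApplyInput(input);            // movement caused by the input itself
        entity.StepPhysics(FixedDeltaTime);  // gravity and the rest of the physics tick
    }
}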

Now, to my problem. The above model worked very well before I added gravity to the mix, but since then I've realized that I need to update the player entity on the server even when there aren't any player inputs queued up. Otherwise, lagging players would just hang in the air, as they do in my current implementation. Running physics without inputs has proven to be troublesome because, depending on the timing of when inputs are received, the server may be a few physics frames ahead of or behind the client. It may start off from a different starting point, causing a misprediction when returning the updated position.

I've read a lot of articles and many dance around this subject, for example:
http://gafferongames...worked-physics/
- The physics is only updated when input is received, ignoring the problem of what to do when inputs are missing for a longer period.

http://www.gabrielgambetta.com/?p=22
- No physics is applied in this example; all state change depends on input so the problem does not exist.

I see some alternatives:
1. Keep track of when the server starts "extrapolating" the entity, i.e. runs updates without fresh client input. When new inputs arrive, reset to the original position and re-simulate the entity with the new inputs for the duration in which it was previously extrapolated.
2. The server stops updating the entity if it runs out of client inputs. Instead, the other clients go on to extrapolate the movement of the client that is missing inputs.
3. Something entirely different.

Number 1 is attractive since it seems the "correct" way to go about this, but I'm having trouble getting it exactly right because of the jitter; i.e. I can't fix the mispredictions entirely. Also, I feel it's somewhat over-complicated.
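To make alternative 1 concrete, the shape I have in mind is roughly this (all names are hypothetical):

// Sketch of alternative 1: capture the entity's state when the server starts
// extrapolating without fresh input, then rewind and replay once real inputs arrive.
void ServerTick(PlayerEntity entity)
{
    if (!entity.HasQueuedInputs)
    {
        if (!entity.IsExtrapolating)
        {
            entity.SavedState = entity.CaptureState(); // remember where extrapolation began
            entity.IsExtrapolating = true;
        }
        entity.StepPhysics(FixedDeltaTime); // gravity only, no input
    }
}

void OnClientInputs(PlayerEntity entity, List<InputState> newInputs)
{
    if (entity.IsExtrapolating)
    {
        entity.RestoreState(entity.SavedState); // throw away the extrapolated frames
        entity.IsExtrapolating = false;
    }
    foreach (InputState input in newInputs)
    {
        entity.ApplyInput(input);
        entity.StepPhysics(FixedDeltaTime);
    }
}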

Number 2 is nice since it's basically the model I have, but with the additional extrapolation added on. A problem with this model is that the clients would see a very old position of the lagging player, and since all hit detection etc. is done server-side, the laggers would become ghosts.

Anyone got a number 3? How do you usually solve this?

EDIT: Actually, I realized while writing this that #2 is a pretty acceptable solution to this. I'll keep the post up in case someone has a better idea or someone wants to read this for reference.

Player prediction across clients, minimizing latency

13 April 2012 - 02:25 PM

Hi there! I'm working on my network code and I've run into a minor snag that someone perhaps can help me with. I went with the Quake 3 network model and I've used Gabriel Gambetta's excellent blog posts as a reference when implementing the finer points such as server reconciliation.

However, there's a situation that occurs when there's more than one client in the picture. Lengthy example ahead:

Say that Client A runs three frames, then decides to send an update to the server, i.e. three input structures are sent to the server.

The server receives the update after a few moments, say SendTime(Client A) or ST(A) time units later. The three inputs are queued for Client A for three server update ticks in the future, meaning that the server will be fully up to date with the client after ST(A) + 3 ticks. This is all fine and dandy, since Client A's prediction and server reconciliation will hide all this latency for Client A.

What bothers me is when Client B enters the picture. One could argue that B should be able to know Client A's newest position after ST(A) + ST(B) time units, but if the system is implemented exactly as described above, the input may not show until ST(A) + ST(B) + 3 ticks. This is because the server has to advance the state before the input's effect shows. Exactly how much delay Client B experiences also depends on how often the server sends updates.

My question is: do I have a fault in this design, or is this how it usually works? One improvement I can see would be for the server to send A's remaining inputs to B when doing an update, letting B predict some of A's "future" inputs as well. Another thing to try would be to let Client B extrapolate A's previous input until the server sends fresher updates. Any more takes on this?
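The first improvement could look something like this on the server (Snapshot, EntityState and the other names are made up for the example):

// Sketch: when building a snapshot for client B, include any inputs from A
// that the server has received but not yet simulated, so B can predict A
// a little further ahead instead of waiting the extra ticks.
Snapshot BuildSnapshotFor(Client receiver)
{
    Snapshot snapshot = new Snapshot { Tick = currentTick };
    foreach (Client other in clients)
    {
        if (other == receiver)
            continue;

        snapshot.Entities.Add(new EntityState
        {
            Id = other.EntityId,
            Position = other.Entity.Position,
            PendingInputs = new List<InputState>(other.InputQueue) // queued but not yet applied
        });
    }
    return snapshot;
}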

Custom Vector, Matrix, etc classes in XNA - A dilemma

04 February 2012 - 02:05 PM

I'm having trouble deciding how to solve a tricky situation, and I'm hoping for some input. Perhaps I'm missing something.

My situation is that I'm implementing script support in my game through C# code compiled on the fly. The scripts are not to be trusted, so I'm sandboxing them in a separate AppDomain. The problem is, this AppDomain cannot access the XNA assemblies since XNA demands full trust, and I need at least Vectors, Matrices and a few other base types from XNA.

So, I clearly need my own versions of these classes, and no worries there. I'm just not sure how far to use these classes.

My first idea was to get rid of all of XNA's versions of the classes and use my own across my whole code base. The idea is sound, and the main benefit is that I could use my math libs, helper functions etc. in the scripts too. I'd also get rid of some annoying dependencies on XNA. However, there is one big caveat: I need to convert them into their XNA counterparts every time I use XNA's functionality. This comes at a small performance cost, and a (IMO) huge ugliness cost.
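If I go that route, the conversion boilerplate could at least be hidden behind implicit operators; a sketch of what my own Vector2 might look like (relying only on XNA's Vector2 having X/Y fields and a (float, float) constructor):

using Xna = Microsoft.Xna.Framework;

// Sketch of a script-safe Vector2 that converts implicitly to and from the
// XNA type, so the conversions at the XNA boundary mostly disappear from view.
public struct Vector2
{
    public float X;
    public float Y;

    public Vector2(float x, float y)
    {
        X = x;
        Y = y;
    }

    public static implicit operator Xna.Vector2(Vector2 v)
    {
        return new Xna.Vector2(v.X, v.Y);
    }

    public static implicit operator Vector2(Xna.Vector2 v)
    {
        return new Vector2(v.X, v.Y);
    }
}

This doesn't remove the per-call conversion cost, but it keeps most of the ugliness out of the call sites.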

The other option is to use the custom Vector, Matrix, etc. only for the scripts and convert to the XNA counterparts as soon as I get back the script output. The ugliness is there too, but it's contained to the interface around the sandbox. I'd have to duplicate any helper functions that I've created - which is in a way nice, since I'd keep the things exposed to the script strictly separated.

Méh. Any takes on this?

Different script solutions, performance and security

22 January 2012 - 03:00 PM

Hi there! As the title of the thread suggests, I'm currently trying out different ways of getting scripts to run in my game. The idea is to let users build their own game scenarios and such, similar to how Natural Selection 2 uses Lua for all its gameplay. I have some requirements:
  • The game is built with C# and XNA and the script should work with this setup.
  • The script should not be able to screw up the player's computer. I.e. no file access and such.
  • The script should run fast enough to not hinder my 60 fps target.
  • Hot-reloading the script would be nice. Not a 100% requirement, but it would help.
  • Good support for debuggers.
So, the main contenders so far are compiling C# on the fly and running Lua scripts. I'm not completely satisfied with either one.

Running C# scripts (i.e. just having .cs files in a directory that are compiled on the fly when starting the game) sounded like the perfect solution at first. I like working in C#, and some quick testing revealed that the Visual Studio debugger can easily attach to the script, the performance is great (equal to the rest of the code) and everything is peachy. Until the moment I realized that requirement #2 is violated: you can do anything from the scripts.

The solution I researched is to make a separate AppDomain, set minimal rights and run the script there. The problem is, the cross-domain performance is pretty awful. In my profiling tests, I get about 3500 calls to the script before 16 ms has passed.
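For reference, the sandbox setup is along these lines (ScriptProxy is a placeholder for the MarshalByRefObject wrapper around the compiled script):

using System;
using System.Security;
using System.Security.Permissions;

// Sketch of the sandbox: a second AppDomain that only gets execution rights,
// with the script reached through a MarshalByRefObject proxy. Every call on
// the proxy crosses the domain boundary, which is where the overhead comes from.
static ScriptProxy CreateSandboxedScript()
{
    var setup = new AppDomainSetup
    {
        ApplicationBase = AppDomain.CurrentDomain.BaseDirectory
    };

    var permissions = new PermissionSet(PermissionState.None);
    permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

    AppDomain sandbox = AppDomain.CreateDomain("ScriptSandbox", null, setup, permissions);

    return (ScriptProxy)sandbox.CreateInstanceAndUnwrap(
        typeof(ScriptProxy).Assembly.FullName,
        typeof(ScriptProxy).FullName);
}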

So, I tried Lua. Using LuaInterface, I made the same kind of setup as before. My initial runs were positive, getting about 8x the performance of the C#/AppDomain scripts. Then I realized that I had to sandbox the Lua calls too to make the tests fair, and after that the performance evened out considerably.

Stats when running 1,000,000 method calls (release build, of course). The tests are:
  • simple: just a call to a function that returns a string.
  • noDomain: calling an identical function from a C# script. Most of the overhead is probably the reflection used to invoke the function.
  • domain: doing the same thing as above, but this time the script is run in a separate AppDomain. I.e. the real test for the C# script.
  • lua: similar functionality for Lua instead.
Results in seconds:
  • simple: 0.0025514
  • noDomain: 0.6140648 (simple * 240.6776)
  • domain: 4.819073 (simple * 1888.795)
  • lua: 3.304015 (simple * 1294.981)
(1 / 60) / (domain's time / 1,000,000) ≈ 3458 calls during 16.6 ms. Compare with ~5k calls for Lua.
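The per-frame figures fall out of the per-call cost; a harness of roughly this shape gives those numbers (a simplified reconstruction, not the exact test code; proxy.Simple() stands in for the call under test):

using System.Diagnostics;

// Time 1,000,000 calls and derive how many calls fit in a 60 fps frame.
const int CallCount = 1000000;

Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < CallCount; i++)
{
    proxy.Simple(); // the cross-domain (or Lua) call being measured
}
sw.Stop();

double perCall = sw.Elapsed.TotalSeconds / CallCount;
double callsPerFrame = (1.0 / 60.0) / perCall; // ≈ 3458 for the "domain" case above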

Question: Do these results seem reasonable? On average, how many script calls do you guys make per frame?
