fholm

Member Since 13 Aug 2011
Offline Last Active Jun 17 2014 11:35 PM

Topics I've Started

I just released my own networking solution, Bolt, for the Unity3D game engine.

31 May 2014 - 11:03 AM

Figured I would post a quick notice in here also, considering all of the asking and talking I've done here over the past two years or so. :) I just released my own networking solution, called Bolt, for Unity3D. You can find it over at http://www.boltengine.com

Simulation in time vs. frames (steps/ticks)

16 March 2014 - 07:23 AM

So, like always, I have been working on my FPS game. A while back I switched the networking from using time (as in seconds) to using frames (as in simulation frames/steps/ticks/whatever you want to call them). All state that is sent, from both server to client and client to server, is stamped with the frame the state is valid for, all actions which are taken (like "throw a grenade") are stamped with the frame they happened on, etc.

 

This makes for a very tidy model: there are no floating-point inaccuracies to deal with, everything always happens at an explicit frame, and it's very easy to keep things in sync and looking proper compared to when using time (as in seconds).
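
To make that concrete, something along these lines (a simplified sketch, names entirely made up for the example):

#include <cstdint>

// Hypothetical message layouts: every piece of replicated data carries the
// integer simulation frame it belongs to, so both peers agree on exactly
// when it applies and no floating-point time is ever sent over the wire.
struct Command {
    uint32_t frame;    // simulation frame the command was issued on
    uint8_t  buttons;  // e.g. bit 0 = move forward, bit 1 = throw grenade
};

struct StateSnapshot {
    uint32_t frame;    // simulation frame this state is valid for
    float    x, y, z;  // avatar position at that frame
};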

 

Now, the one thing that popped into my head recently is what happens when you deal with explicitly numbered frames and the client starts lagging behind (or for some reason runs faster than) the server, basically when you get into a state where the client needs to re-adjust its local estimate of the server frame (server time, if you are using time instead). When dealing with "time" you can adjust/drift time at a very small level (usually in milliseconds) over several frames to re-align your local estimate on the client with the server, which gives next to no visible artifacts.

 

But when you are dealing with frames, your most fine-grained level of control is a whole simulation frame, which causes two problems. First, if you are only drifting a tiny bit (a few ms per second or so), the re-adjustment when dealing with "time" is practically invisible, as you can correct the tiny drift instantly and there is no time for it to accumulate; with frames you won't even notice the drift until it has grown to an entire simulation frame's worth of time. Second, like I said, when you do re-adjust the client's estimate you have no smaller unit than "simulation frame" to work with, so there is no way of interpolating: you just have to "hard step" an entire frame up/down, which leaves visual artifacts.
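
To illustrate the difference, here are the two correction styles side by side (simplified sketch, the constants are just made-up examples):

// Time-based correction: drift can be absorbed a few milliseconds at a time.
double clientTime = 0.0;           // local estimate of server time, in seconds
double serverTimeEstimate = 0.0;   // derived from the latest packet + latency

void StepTime(double dt) {
    double error = serverTimeEstimate - clientTime;
    clientTime += dt + error * 0.05;   // nudge a tiny fraction toward the server each step
}

// Frame-based correction: the smallest possible adjustment is a whole simulation frame.
int clientFrame = 0;
int serverFrameEstimate = 0;

void StepFrame() {
    clientFrame += 1;
    int error = serverFrameEstimate - clientFrame;
    if (error != 0)
        clientFrame += (error > 0) ? 1 : -1;   // "hard step" a full frame up/down
}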

 

All of this obviously only applies when you are doing small adjustments, but those are the most common ones. If you get huge discrepancies in time between the client and server, every solution is going to give you visual artifacts, snapping, slowdown/speedup, etc. But that is pretty rare.

 

So, what's the solution here? I have been thinking about a model which keeps most of my original design with frames, but where the frame numbers are transformed into actual simulation time values when the state/actions are received on the remote end; then, as things are displayed on remote machines, you can nudge the time back/forward as needed. I'm just looking for ideas and input on how people have solved this.
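
Something along the lines of this (again, just a sketch with made-up names):

#include <cstdint>

// Sketch of the hybrid idea: frame numbers stay on the wire, but the remote end
// converts them to seconds for display, where sub-frame nudging is possible.
const double kFixedDeltaTime = 1.0 / 60.0;   // 60 Hz simulation step

double FrameToDisplayTime(uint32_t frame, double interpolationDelay, double nudge) {
    // nudge can be a tiny fraction of a frame, so the displayed timeline can be
    // adjusted smoothly even though the simulation only advances in whole frames
    return frame * kFixedDeltaTime - interpolationDelay + nudge;
}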

 

Edit: But then again, maybe I am over-thinking this, because "time" will also be measured with the granularity of your simulation step size. So maybe there is no difference when you compare the two solutions? For example, if you nudge your local time -1 ms to "align" better with the server, but this pushes you over a frame boundary, then everything will be delayed by a whole frame anyway...

 

Edit 2: Or actually, I suppose the case above, where you nudge time to get closer to the server and it happens to cross a frame boundary, would require you to have drifted more than one frame's worth of time without detecting it (most likely from a ping spike or packet drop), because if you just drift a tiny bit away from/towards the server and then re-adjust back closer towards it, you would never cross the frame boundary.

 

Edit 3: Also, thinking about this more, maybe the reason I am confusing myself is that currently the updating of remote entities is done in my local simulation loop, when maybe it should be done in the render loop instead? I mean, I have no control over the remote entities whatsoever anyway, since they are... well, remote. On the other hand, it feels a lot cleaner (in my mind) to step all remote entities at the same interval as they are stepped on the peer that controls them. Then again, maybe this is exactly why it's getting convoluted: I'm mixing up my local simulation of the entities I control with the display of the remote entities.

 

As you can probably tell, I am pretty ambivalent about which route to go here, and I'm really looking for some input.


De-jitter buffer on both the client and server?

13 January 2014 - 10:35 AM

Is it common practice to place a de-jitter buffer on both ends of a server<->client connection? I have a de-jitter buffer in place on the client where it receives data, to smooth out packet delivery so I can de-queue packets on time, every time (unless there is a lot of packet drop/ping fluctuation, but that's just something you have to live with).
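
For reference, the client-side buffer is conceptually something like this (simplified sketch, not the actual code):

#include <cstdint>
#include <map>
#include <vector>

// Packets are held until the local tick catches up, which smooths out
// network jitter at the cost of a small fixed delay.
struct Packet {
    uint32_t tick;               // simulation tick the data is valid for
    std::vector<uint8_t> data;
};

class DejitterBuffer {
public:
    void Add(Packet p) { buffer_[p.tick] = std::move(p); }

    // Returns true and fills 'out' if a packet for this tick (or older) is waiting.
    bool TryDequeue(uint32_t localTick, Packet& out) {
        auto it = buffer_.begin();
        if (it == buffer_.end() || it->first > localTick) return false;
        out = std::move(it->second);
        buffer_.erase(it);
        return true;
    }

    size_t Count() const { return buffer_.size(); }

private:
    std::map<uint32_t, Packet> buffer_;  // ordered by tick
};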

 

My question is: is it also common to put a de-jitter buffer on each connection on the server, i.e. for the data which is sent from the client to the server? The reason I am asking is because of this case:

 

* Client A produces "MOVE FORWARD" commands every frame to move its avatar through the world by holding down the W key.

 

* The server receives these commands and processes them as they come in. Sometimes they are exactly on time, sometimes a little bit early, and sometimes a bit late; when the move commands come in late, the server sometimes simulates a step without moving the client.

 

* Client B receives the updated positions of Client A's avatar. As long as the move commands from Client A to the server arrive early/on time, there's no problem. But when they are late, Client B sees the movement of Client A's avatar become a bit "snappy"; it is very marginal, but if you look hard enough you can see the speed of Client A's avatar vary slightly for a few packets.

 

I have noticed the same networking artifact in AAA titles as well (BF4, for example), so is this a case that is generally just ignored? Or are there games which apply a de-jitter buffer on the server as well?
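
To make the question concrete, what I am considering on the server side is roughly this (sketch only, made-up names, and not a claim about what any shipped game actually does):

#include <cstdint>
#include <deque>

// Buffer each connection's move commands and consume one per simulation tick,
// so a command that arrives slightly late but is already buffered does not
// cause the server to step without moving that client.
struct MoveCommand {
    uint32_t tick;
    uint8_t  buttons;
};

struct ClientConnection {
    std::deque<MoveCommand> pending;   // per-connection command buffer

    void OnCommandReceived(const MoveCommand& cmd) { pending.push_back(cmd); }

    // Called once per server simulation tick for this client.
    bool PopCommandForTick(MoveCommand& out) {
        if (pending.empty()) return false;   // genuinely late/lost: step without a move
        out = pending.front();
        pending.pop_front();
        return true;
    }
};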


Checking sanity of my tick synchronization

10 January 2014 - 11:07 AM

So I have been working on my FPS game and its networking lately, and I have been thinking about a case I have not really considered yet. My server sends data to the clients on every third simulation frame, which ends up being 20 times per second with my 60 Hz simulation rate. Every packet has the server's simulation frame number (tick) attached to it.

 

Currently the client synchronizes its own tick number to the first packet received from the server like this:

int clientTick = serverTick - 3;

And then it increases its own tick for every simulation frame which passes locally after this. The reason I subtract 3 from the server tick is that I buffer the packets locally on the client in a de-jitter buffer, and then de-queue them from the buffer based on the client's local tick number. So we basically "rewind" 3 ticks in time to set up this local delay to handle packet jitter.

 

The end result is that we expect to have 1-2 packets in this buffer on the client, and that if we have > 2 packets in the buffer we have drifted behind the server more than intended and need to speed up our local simulation a bit. Now, this seems to be working and is by far the best solution to this problem I have come up with.
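
In code, the whole scheme is roughly this (simplified, made-up names; "speeding up" is shown here as running an extra tick, which is just one way to do it):

#include <cstddef>

const int    kBufferTicks = 3;        // local delay, in ticks, to absorb jitter
const size_t kExpectedBuffered = 2;   // we expect 1-2 packets waiting in the buffer

int  clientTick = 0;
bool synced = false;

void OnFirstServerPacket(int serverTick) {
    clientTick = serverTick - kBufferTicks;   // "rewind" to create the de-jitter delay
    synced = true;
}

// Called once per local simulation frame.
void StepClientTick(size_t packetsInBuffer) {
    clientTick += 1;
    if (packetsInBuffer > kExpectedBuffered)
        clientTick += 1;   // drifted behind the server: run an extra tick to catch up
}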

 

Is this a sane way to solve the problem? It seems sane to me, and it seems to be working, but I just want to get some feedback.

 


Using several PhysX scenes at the same time?

05 November 2013 - 08:01 AM

I have been reading through both the manual and the API docs for PhysX 3.x, and I can't find any reference on how to set PhysX up so you can have several scenes at once in your app.

 

Do you need to duplicate the PxFoundation, the PxPhysics and the Scene object for each scene, or can you just call createScene() on the PxPhysics object as many times as you want? Do you need a separate dispatcher per scene, or can they share dispatchers?
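
For reference, the setup I am imagining (based on the SDK samples, so please correct me if this is wrong) would look something like this:

#include <PxPhysicsAPI.h>

using namespace physx;

// Assumed setup: one PxFoundation and one PxPhysics for the whole app,
// with createScene() called once per scene and the dispatcher shared.
static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main() {
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxDefaultCpuDispatcher* dispatcher = PxDefaultCpuDispatcherCreate(2);

    PxSceneDesc desc(physics->getTolerancesScale());
    desc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    desc.cpuDispatcher = dispatcher;                      // shared between both scenes here
    desc.filterShader  = PxDefaultSimulationFilterShader;

    PxScene* sceneA = physics->createScene(desc);
    PxScene* sceneB = physics->createScene(desc);

    // ... simulate and fetch results for each scene independently ...

    sceneA->release();
    sceneB->release();
    dispatcher->release();
    physics->release();
    foundation->release();
    return 0;
}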

