
Angus Hollands

Member Since 06 Apr 2012
Last Active Dec 06 2014 07:56 PM

Topics I've Started

Concern over input jitter

11 May 2014 - 09:17 AM

Hi everyone. I recently moved over to a sort of dejitter buffer, which has been working wonderfully in LAN play tests; due to slight timing problems, those same tests weren't so happy with my last implementation.

consume_move = self.buffer.popleft

# When we run out of moves, wait until we have enough
buffer_length = len(self.buffer)

# Ensure we don't over-fill due to network jitter
buffer_limit = self.buffer_length + self.buffer_padding

# If we have no moves in the buffer, start refilling
if not buffer_length:
    print("Waiting for enough inputs ...")
    self.buffer_filling = True

# Prevent too many items filling the buffer,
# otherwise we slowly drift into the past and it causes long-term issues
elif buffer_length > buffer_limit:
    print("Received too many inputs, dropping ...")
    for _ in range(buffer_length - self.buffer_length):
        consume_move()

# Clear the buffer-filling status once we have enough
if self.buffer_filling:
    if len(self.buffer) < self.buffer_length:
        return
    self.buffer_filling = False

    # New debug feature, needs clamping at a safe maximum.
    # Ping is the RTT; we add 1 to ensure the delay is at least 1 tick
    self.buffer_length += WorldInfo.to_ticks(self.info.ping / 2) + 1

try:
    buffered_move = self.buffer[0]

except IndexError:
    print("Ran out of moves!")
    return

move_tick = buffered_move.tick

# Run the move at the present time (it's valid)

The above is my update code, from the main update function for a player (controller).


When running over the internet (connecting to my NAT's external IP from the same machine), it isn't happy with 100 ms of dejittering. After I added the simple increment in the "if self.buffer_filling" branch, it settles at around 13-16 ticks, which is around 400 ms. Surely this doesn't seem reasonable?


This seems far too high for a) my internet connection and b) most internet connections. I have some reason to suspect my provider, as they're not the best in my opinion, but it seems unusual that so many packets are delayed, given that each is sent individually.

I printed out the number of items in the buffer each tick and it would read something like:

No moves!

Also, I notice that every so often a packet is dropped. What would be an expected packet-loss figure for a 4 Mb/s, 60 ms latency internet connection in the UK?


I'm trying to determine if it is some deeper level network code issue (in my codebase) or just life.

Network Tick rates

18 January 2014 - 03:00 PM

I have a few questions regarding the principle of operating a network tick rate.
Firstly, I do fix the tick rate on clients. However, I cannot guarantee that the client will actually run at that frame rate if the animation code or other system code takes too long.
To calculate the network tick, I would simply use "(self.elapsed - self.clock_correction) * self.simulation_speed", assuming the correction is the server time plus the downstream latency.

However, if the client runs at 42 fps and the simulation speed is 60 fps (or the server runs at 42 and the simulation at 60), I will eventually calculate the same frame several times in a row if I round the result of that equation. (This was an assumption; it seems unlikely in practice, but it will still occur when I correct the clock.) How should one handle this?
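To make the duplicate-tick problem concrete, here is a minimal sketch of the calculation described above. The names mirror those in the post; tracking the last tick evaluated is one assumed way to skip repeats when rounding (or a clock correction) yields the same tick twice:

```python
# Sketch only: derive a network tick from a corrected clock, skipping
# duplicates.  'elapsed', 'clock_correction' and 'simulation_speed'
# (ticks per second) follow the names used in the post.
def network_tick(elapsed, clock_correction, simulation_speed, last_tick):
    tick = round((elapsed - clock_correction) * simulation_speed)
    if tick <= last_tick:
        # Same (or earlier) tick as last frame: skip rather than re-run it
        return None, last_tick
    return tick, tick
```

Whether skipping is the right policy (versus running the simulation step twice) is exactly the open question here; this just shows where the decision would live.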
Furthermore, should the simulation speed be the same as the server fixed tick rate, for simplicity?


One last question;

If I send an RPC call (or simply the input state as it would be) to the server every client tick (which I shall guarantee to run at a lower or equal tick rate to the server), then I believe I should simply maintain an internal "current tick" on the server entity, and every time the game tick is the same as (last_run_tick + (latest_packet_tick - last_packet_tick)) on the server I pop the latest packet and apply the inputs. This way, if the client runs at 30 fps, and the server at 60, then it would apply the inputs on the server every 2nd frame. 

However, if the client's packet arrives late for whatever reason, what is the best approach? Should I introduce an artificial buffer on the server? Or should I perform a rewind (which is undesirable for me, as I am using Bullet Physics: I would have to "step physics, rewind other entities", and hence any collisions between rigid bodies and client entities would be cleared)? If I do not handle this and use the aforementioned model, I will eventually build up an accumulation of states, and the server will drift behind the client.
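The scheduling rule described above (apply a packet when the server tick reaches last_run_tick + (latest_packet_tick - last_packet_tick)) can be sketched as a per-client queue. All names here are illustrative assumptions, not from any real engine:

```python
from collections import deque

# Hedged sketch of the server-side scheme described in the post: client
# inputs are queued in arrival order, and each is applied once the server
# tick reaches last_run_tick + (packet_tick - last_packet_tick).
class InputQueue:
    def __init__(self):
        self.pending = deque()        # (client_tick, inputs), in order
        self.last_packet_tick = None
        self.last_run_tick = None

    def push(self, client_tick, inputs):
        self.pending.append((client_tick, inputs))

    def pop_due(self, server_tick):
        """Return the inputs due on this server tick, or None."""
        if not self.pending:
            return None
        packet_tick, inputs = self.pending[0]
        if self.last_packet_tick is None:
            due_tick = server_tick    # first packet: apply immediately
        else:
            due_tick = self.last_run_tick + (packet_tick - self.last_packet_tick)
        if server_tick < due_tick:
            return None               # not yet time for this packet
        self.pending.popleft()
        self.last_packet_tick = packet_tick
        self.last_run_tick = server_tick
        return inputs
```

Note that if a packet arrives late, pop_due simply applies it on the next tick and reschedules from there, which is one source of the state accumulation mentioned above; a cap on the queue length would be one way to bound the drift.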
Regards, Angus

Algorithms / techniques to determine available bandwidth

10 December 2013 - 03:33 PM

Hi everyone,

As the question states, I was wondering how best to determine the available bandwidth of a connection. I intend to determine the update frequency of a networked entity based upon its network priority and the available bandwidth.
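As a starting point for the priority-plus-bandwidth idea, here is a deliberately simple sketch. The bandwidth budget, entity priorities, and per-update size are all assumed inputs; in practice the budget itself would come from congestion feedback (e.g. backing off on packet loss), which isn't shown here:

```python
# Illustrative only: split an assumed bandwidth budget between entities in
# proportion to their network priority, yielding an update frequency each.
def update_rates(entities, bandwidth_bps, bytes_per_update=64):
    total_priority = sum(priority for _, priority in entities)
    updates_per_second = bandwidth_bps / 8 / bytes_per_update
    return {
        name: updates_per_second * priority / total_priority
        for name, priority in entities
    }
```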

Any ideas?



Where to render object positions and animations

15 May 2013 - 09:01 AM

Hey everyone,


I'm a little indecisive when choosing how to render the gamestate on the client. 

I can forward extrapolate the other clients to where they would be on the server, but then things like playing animations would need to be forwarded to compensate. As a result, the animation would "jump" into a frame; such that if the latency was half a second and the animation framerate was 60 frames per second, 30 frames would be skipped. 


How should I deal with the timing issues?

  1. Extrapolate forwards by the downstream latency to get the current position of the object
  2. Render at time of receipt but still run client itself ahead using prediction

Forgetting about the jitter buffer, I'm concerned about hit detection: the first option would potentially be out by some margin, but the latter option would technically be incorrect, as players would be aiming at other clients' old positions. Any thoughts?

Unreal Networking design questions

10 February 2013 - 06:38 AM

Hey everyone.
I'm redesigning my multiplayer architecture into a more manageable structure. At present, the client and server classes are defined separately and ultimately share little in common. Simple functions that interface with additional data must be separated for the sake of perhaps one function call, and everything is explicitly serialised. Anyway, that's mostly networking detail. What I wish to understand is how Unreal (the solution I'm building from) handles the entry points for server and client routines.

For the most part, there is little difference between how the server and client actors run: certain functions are suppressed because they're not client-side or server-side, and certain functions are simply invoked at the other end. However, the fundamental difference exists somewhere in the loop. I understand that both the server and client have predefined events, such as onCollision, that are called for them.


My first question is: how are server and client-side functions separated if they both need to be called but do different things? For example, collision events would do different things on the server and client. If the client-side actor was autonomous it would play a sound before the server called playSound, but on the server it would tell all nearby actors except the current one to hearSound. How is this achieved?


Secondly, how are events sent to the server? They seem to be sent in an RPC after the client's autonomous actor runs the simulation client-side. But if that is the case, how does the simulation get called? I know of a playerTick function, but how does this choose which function to run? Is it a simple if-switch: if ROLE == ROLE_AUTONOMOUS: server_func() else: client_func()?


Lastly, simulated functions. I know this question has been asked many times, but I would like to clarify their purpose.

I'm writing this solution in Python, so I have a lot of duck-typing possibilities I can exploit. I intend to use decorators to define when functions should be called on the remote side (@remote(world_info, target=SERVER)). I therefore assume that any function without a remote call can be called by either side. I think once I clarify the above points I may answer my own question, but until then what does the Simulated keyword actually mean for the client and server?


Many thanks for your time,