# Conceptual: Synchronizing complex objects

## Recommended Posts

dirtyminuth    123
Hi everyone, first post here, long-time lurker, etc. Onto the issue! Some friends and I are working on a small game for fun. Nothing serious, taking things easy, just enjoying ourselves. We've run into a networking issue and wouldn't mind insight / comments on it.

Our game centers around tile groups (think multiple instances of a 25x25 array of tiles). Tiles can be dynamically added and removed from groups during gameplay. We need to synchronize the state of each tile group between the server and any clients. There are two basic options: sending the absolute state of each tile group, or sending deltas. Each is explained below.

Absolute: Having the server send the absolute state of each tile group might be infeasible, as this would require a minimum (ignoring compression) of 625 bits (25x25 tiles * 1 bit) per server tick; only 1 bit per tile is needed (0 = tile present, 1 = tile not present). Assuming 20 server ticks per second, this comes to about 1.53 kB/s for a single tile group, and we may have around 10-20 groups at any given time. However, using optimizations like potentially visible sets, we could reduce bandwidth requirements significantly. Sending absolute states is desirable anyway, as we can handle a lossy connection with more ease than when sending deltas (see below).

Deltas: An alternative to sending absolute states is for the server to send tile creation / removal events instead. A client can take the initial state of a tile group, apply these events to it in order, and reconstruct the server state. This would certainly require less bandwidth, but requires each client to receive a complete list of all events; missing even a single event could cause the server and client to lose synchronization. Moreover, if a new player wants to join the game halfway into a round, they would have to receive all the events the server has accumulated thus far. Given the possible length of a single round, this could also be infeasible.
So, given these two approaches to synchronizing data-heavy objects, which might be the best option? Are there other optimizations that could help? Would a hybrid absolute / delta solution work well? Are we hosed, Tommy? Any comments would be appreciated! -dirtyminuth

##### Share on other sites
hplus0603    11347
If your game is strategic (turn based, or like an RTS) then use just deltas and TCP to make sure nobody misses any data.

##### Share on other sites
dirtyminuth    123
Thanks hplus - I should have stated that this game is not turn-based or an RTS - it's fast-paced like an FPS. One more point (for additional info): any player in the game is likely going to create 2 to 3 delta events per second.

##### Share on other sites
Drigovas    509
The need to consider data loss is only there if you're using a protocol that has a tendency to lose data, which sounds like the only thing that would make you choose 'absolute' over 'deltas'. 'Deltas' is clearly the better choice [it sounds like, if I am understanding you correctly] in terms of what you'd do in an ideal world with perfect connections that never lose data, so how about this:

If you use TCP, you will never lose data, and can use deltas without fear.

If you use UDP, include a small counter [even a few bits would be more than sufficient for this kind of use] and use it as a sort of time stamp. Each packet sent receives a counter equal to the previous packet's counter plus 1. Request packets to be re-transmitted if they are received out of order, and you've ensured no packets get dropped.

Either way, the overhead is less than using the 'absolute' method. I'd advise TCP, just because it's easier and you don't have to write the packet-stamping stuff [since that's effectively what it does internally].

##### Share on other sites
Ozymandias43    158
Another option would be to send deltas from a known good point. At the start of the game, the server and the client will know the initial state. As the game plays, the server will compute the difference between the current state and the INITIAL state. The client regularly sends acknowledgments back as to which states it knows about. The server continues sending deltas from the last state it knows the client knows about. This technique gets you the benefits of deltas without the expense of the full state information.

You'll have to think carefully about how you encode the tiles delta, by the way. If you just list all the missing tiles, a packet expressing an empty tile grid will be much larger than the full data update would be.

##### Share on other sites
Antheus    2409
Quote:
 Original post by Ozymandias43
 Another option would be to send deltas from a known good point. At the start of the game, the server and the client will know the initial state. As the game plays, the server will compute the difference between the current state and the INITIAL state. The client regularly sends acknowledgments back as to which states it knows about. The server continues sending deltas from the last state it knows the client knows about. This technique gets you the benefits of deltas without the expense of the full state information.
 You'll have to think carefully about how you encode the tiles delta, by the way. If you just list all the missing tiles, a packet expressing an empty tile grid will be much larger than the full data update would be.

```
class Observable {
    void subscribe( Client c ) {
        c.send( pack_state() );
        subscribers.add(c);
    }

    void unsubscribe( Client c ) {
        c.send( destroy_message() );
        subscribers.remove(c);
    }

    void send_delta() {
        Message delta = pack_delta();
        for (i : every subscriber) {
            subscriber[i].send( delta );
        }
    }

    Message pack_state()      { /* store baseline state */ }
    Message pack_delta()      { /* write what has changed into message, and clear delta flags */ }
    Message destroy_message() { /* notify client the object has been removed */ }
}
```

On each game loop:
- run through client requests, unsubscribe those that need to be unsubscribed
- iterate through all observables, call send_delta()
- parse through client requests, and subscribe those that need to be subscribed

This might be too much overhead depending on your structure and number of tiles, but I think each of your 25x25 tile sets could easily be one observable.

The subscribe here is called when a client gets into range of a tile set, not when it logs into the game (well, then too, but only because it enters the world and sees some tile sets).

This is intended for dynamic POV updates - the basic publish/subscribe mechanism.

[Edited by - Antheus on May 10, 2007 9:03:08 AM]

##### Share on other sites
oliii    2196
why not just do both and use delta compression?

here's an example I've been playing with :

```cpp
struct C_GameState
{
    // whatever is in there
    // .......
};

struct C_Sequence
{
    C_GameState m_State;
    int         m_sequenceId;
};

struct C_RemoteAddress
{
    in_addr m_Address;
};

struct C_ClientProxy
{
    bool            m_valid;
    int             m_sequenceId;
    C_RemoteAddress m_Address;
};

struct C_DataPacket
{
    char m_Data[128 * 1024]; // max of 128 kb of data.
    int  m_DataSize;

    // Full game state
    bool SendFullSequence(const C_RemoteAddress& ToAddress, C_Sequence* pSequence);
    bool ReceiveFullSequence(C_Sequence* pNewSequence);

    // delta game state
    bool SendDeltaSequence(const C_RemoteAddress& ToAddress, C_Sequence* pNewSequence, C_Sequence* pBaseSequence);
    bool ReceiveDeltaSequence(C_Sequence* pNewSequence, C_Sequence* pBaseSequence);

    // receive a new packet. if it's a full update, baseSequenceId will be -1, else it will be
    // the last acknowledged sequence number seen by the server.
    bool PollForNewPacket(const C_RemoteAddress& ToAddress, int& newSequenceId, int& baseSequenceId);
    bool SendAck(const C_RemoteAddress& ToAddress, int sequenceId);
};

struct C_SequenceBuffer
{
    C_Sequence m_Buffer[32];
    int m_sequenceId;

    C_SequenceBuffer() { Clear(); }

    void Clear()
    {
        m_sequenceId = -1;
        for(int i = 0; i < 32; i++)
            m_Buffer[i].m_sequenceId = -1;
    }

    C_Sequence* FindSequence(int id)
    {
        if(id < 0) return NULL;
        int index = id & 31;
        if(m_Buffer[index].m_sequenceId != id)
            return NULL;
        return &m_Buffer[index];
    }

    // From clients.
    // Clients try to allocate a new slot.
    // If the sequence number received is older, don't bother. old packet.
    C_Sequence* AllocSequence(int id)
    {
        if(id <= m_sequenceId) return NULL;
        m_sequenceId = id;
        int index = id & 31;
        m_Buffer[index].m_sequenceId = id;
        return &m_Buffer[index];
    }

    C_Sequence* GetLatestSequence()
    {
        return FindSequence(m_sequenceId - 1);
    }

    bool BuildNewSequence()
    {
        // The game state hasn't changed. No point serialising it again
        if(!TheGame().IsDirty())
            return true;

        // first serialise.
        if(m_sequenceId < 0) m_sequenceId = 0;
        int index = m_sequenceId & 31;
        C_GameState& state = m_Buffer[index].m_State;

        // [GNT - 10.5.2007] - failed to serialise
        if(!TheGame().Serialise(state))
        {
            Clear();
            return false;
        }
        m_Buffer[index].m_sequenceId = m_sequenceId;
        m_sequenceId++;
        return true;
    }
};

struct C_Server
{
    C_SequenceBuffer m_Buffer;
    C_ClientProxy m_Clients[64]; // max 64 clients
    C_DataPacket m_Packet;

    void Update()
    {
        // build
        if(!m_Buffer.BuildNewSequence())
            return;

        // get latest game state
        C_Sequence* pLatestSequence = m_Buffer.GetLatestSequence();
        if(!pLatestSequence)
            return;

        for(int i = 0; i < 64; i++)
        {
            if(!m_Clients[i].m_valid)
                continue;

            // get client game state
            C_Sequence* pClientSequence = m_Buffer.FindSequence(m_Clients[i].m_sequenceId);

            // client is up to date
            if(pClientSequence == pLatestSequence)
                continue;

            // client state unknown. Send full state
            if(!pClientSequence)
            {
                m_Packet.SendFullSequence(m_Clients[i].m_Address, pLatestSequence);
            }
            // client state is known. send delta
            else
            {
                m_Packet.SendDeltaSequence(m_Clients[i].m_Address, pLatestSequence, pClientSequence);
            }
        }
    }
};

struct C_Client
{
    C_DataPacket     m_Packet;
    C_SequenceBuffer m_Buffer;
    C_RemoteAddress  m_ServerAddress;

    void Update()
    {
        // receive packets from server
        int newSequenceId, baseSequenceId;
        while(m_Packet.PollForNewPacket(m_ServerAddress, newSequenceId, baseSequenceId))
        {
            C_Sequence* pNewSequence  = m_Buffer.AllocSequence(newSequenceId); // new slot to store the new state
            C_Sequence* pBaseSequence = m_Buffer.FindSequence(baseSequenceId); // the base for the delta

            // failed to allocate a new slot
            if(!pNewSequence)
                continue;

            if(!pBaseSequence)
            {
                // Expecting a full sequence
                if(!m_Packet.ReceiveFullSequence(pNewSequence))
                {
                    m_Buffer.Clear(); // buggered. clear everything
                    continue;
                }
            }
            else
            {
                // Expecting a delta packet
                if(!m_Packet.ReceiveDeltaSequence(pNewSequence, pBaseSequence))
                {
                    // error. Tell server to resend everything.
                    m_Buffer.Clear(); // buggered. clear everything
                    continue;
                }
            }

            // and deserialise the game state.
            TheGame().Deserialise(pNewSequence->m_State);
        }

        // Acknowledge the latest packet
        m_Packet.SendAck(m_ServerAddress, m_Buffer.m_sequenceId);
    }
};
```

##### Share on other sites
hplus0603    11347
Quote:
 any player in the game is likely going to create 2 to 3 delta events per second.

That's not fast paced -- that's the typical command rate of a skilled RTS player.

The biggest question, when using delta, is whether you can accept the command latency that typically comes with that implementation.

##### Share on other sites
dirtyminuth    123
Wow! Thanks for everyone putting in the time to help. I'd like to address each response:

Quote:
 Drigovas(Using TCP)

TCP is the easiest in terms of implementing endpoint logic (you don't have to code retries, etc.), but data would be much less timely. I was considering a hybrid UDP/TCP system, using UDP for things small enough to send as absolute states, like player pos/vel/acc, and using TCP for these tile group events. There might be a problem with merging UDP and TCP updates into a single client state, though.

Quote:
 Ozymandias43(Sending updates based on server knowledge of client states)

That's a pretty interesting idea! It would keep the overall size of delta event small. But how would the server maintain the client 'good' states? For example, consider the following scenario: The client's last 'good' server state is at server time t0, and the server is now generating the authoritative state at time t10 - does this mean the server must maintain all game states between t0 and t10, as the client might later inform the server that his latest 'good' state is t5 (or t6, t7, etc.)?

Quote:
 Antheus(Publish / Subscribe mechanism)

This will definitely reduce bandwidth requirements. What would happen if a tile group is out of a client's subscription for a long time and then comes into view? I can imagine having to send a large update, and this could also lead to the server keeping track of a lot of past game states so it knows how the client can 'catch up'.

Quote:
 oliii(Hybrid Full Sequence / Delta Sequence)

Definitely considering this - but would this also lead to the server having to maintain a large number of past states (see issues above)?

Quote:
 hplus0603(Event Rate)

I hope clients can handle the latency - there's more than just these delta events taking place in the game, and we might be able to implement some client-side prediction of these events.

These ideas have my mind racing, in a good way!

-dirtyminuth

##### Share on other sites
oliii    2196
Quake-style delta-compression is awesome for any fast-paced, data-driven networked game.

In the case I pointed out, the memory overhead would be 32x a server state, for a cache 32 deep - not much in terms of data (if you just want to sync tiles, that's 32 x roughly 80 bytes, at 625 bits per group). Say you update at 16 fps: that gives you a two-second window (if the game state got 'dirty' every frame), which isn't that deep.

If you want to further increase bandwidth savings and do it on a per-observer basis, then it can get pretty taxing: it's 32 states x 32 players. But the data cached should be a lot smaller - you don't need to cache the whole world, only what each player 'observes' at the time. Then you need a mechanism to fragment the world into observable atoms (your tiles), ways of referring to them (guids, hashed coordinates), and so on.

##### Share on other sites
Will Vale    175

A slightly different idea to add to the mix: have you thought about run-length encoding the grid and sending entire compressed states?

It won't always offer predictable bandwidth use (but then deltas don't exactly do this either) and could be vulnerable to pathological cases, but it could be worth a try. It depends on the kind of tile patterns you expect.

Cheers,

Will

##### Share on other sites
Antheus    2409
Quote:
 This will definitely reduce bandwidth requirements. What would happen if a tile group is out of a client's subscription for a long time and then comes into view? I can imagine having to send a large update, and this could also lead to the server keeping track of a lot of past game states so it knows how the client can 'catch up'.

State is state.

If someone is playing a game of chess, and observers A and B join at time 0 and 20,000 respectively, the full state of the board is always 8x8 tiles, and up to 32 pieces. Regardless of when they join.

This is the difference between delta and baseline.

Baseline is full current state of an object. Delta is what has changed since the *last* update, not since beginning.

It's not like an undo mechanism or something similar, where you need to keep track of all actions since the beginning.

##### Share on other sites
oliii    2196
Yeah, think about using Huffman encoding or other lossless data compression schemes. That works well if your data has lots of repeating information.

For bug-free delta compression, you do need to cache previous states and exchange sequence numbers; this is to overcome packet-loss problems and such. What happens if you delta-compress strictly from the previous known state and a packet is dropped? I've only used strictly-from-previous-state delta compression in a replay system, where you cannot lose packets - on a network you'd need TCP for that.

There is also another, simpler way to do it, which is a mix of delta and baseline: send baselines every so often (say, every 4 seconds), and send deltas from that baseline. If a packet is lost, you will potentially have discrepancies until the next full state is sent.

##### Share on other sites
hplus0603    11347
oliii,

The delta compression system they are discussing (the "Quake model") is where the server saves the last X updates, together with their sequence numbers. The client will acknowledge each received update, so the server knows one sequence number that the client has seen. The server will send deltas based on that sequence number, until a new acknowledgment is received. Btw: The delta frame needs to also include which sequence number it's based off of, so the client knows how to reconstruct it, and the client needs to keep some amount of backlog, too.

##### Share on other sites
dirtyminuth    123
Thanks for the replies, everyone. I've been thinking about the suggestions and think I've come up with a good method:

During normal operation, the server will send all delta events, time-tagged with the server time, to all clients. The clients will process delta events in order and, if they miss any, will send a retransmission request to the server. Developing a system for delta states isn't worth the work, as I see it; sending delta events with retransmission requests should be a low-bandwidth, latency-tolerant system (and basically accomplishes what delta states do, just at a smaller granularity).

When a player joins, the server will send an absolute state of the game. Moreover, if a client is deemed too far out of synch with the server, an absolute state will be sent. These states will (ideally) be few and far between (less than one absolute state per client per minute).

Compression has been considered and will likely be implemented. RLE and Huffman encoding are options, and either will involve assumptions about what a "normal" tile group state looks like.

This is, of course, an ideal situation. We'll see what happens!

-dirtyminuth

##### Share on other sites
Ozymandias43    158
Quote:
 Original post by oliii
 Quake-style delta-compression is awesome for any fast-paced, data-driven networked game.

What does the quake server store as far as previous states? Does it remember the last N states for compression purposes?

##### Share on other sites
oliii    2196
Quote:
 Original post by hplus0603
 oliii,
 The delta compression system they are discussing (the "Quake model") is where the server saves the last X updates, together with their sequence numbers. The client will acknowledge each received update, so the server knows one sequence number that the client has seen. The server will send deltas based on that sequence number, until a new acknowledgment is received. Btw: The delta frame needs to also include which sequence number it's based off of, so the client knows how to reconstruct it, and the client needs to keep some amount of backlog, too.

Yes, I know that. Quake 3 goes further, I think, and caches the 'viewable' states for each client. That increases the memory requirements.

Quote:
Original post by Ozymandias43
Quote:
 Original post by oliii
 Quake-style delta-compression is awesome for any fast-paced, data-driven networked game.

What does the quake server store as far as previous states? Does it remember the last N states for compression purposes?

Yes, so it can diff with the last acknowledged state for each client.

##### Share on other sites
Ozymandias43    158
Quote:
Original post by oliii
Quote:
 Original post by Ozymandias43
 What does the quake server store as far as previous states? Does it remember the last N states for compression purposes?

Yes, so it can diff with the last acknowledged state for each client.

Hrm... not to hijack the thread, but now that I think about it: what's the preferred data type for a large game state? Game states can be kind of large and, worse, their size can change. If you need to keep the last N states in memory, I suppose you'd want to store the game states in a circular buffer of some sort.

Let's say the only dynamic values are players, missiles, and buffs. A player can have an unlimited number of buffs, and there can be an unlimited number of missiles in the air at any one time.

What I'm thinking right now is that I'd create a circular buffer of N elements. Each element would be big enough to store a good amount of missiles and buffs and players. Every time a new missile or buff or player showed up, I'd check to see if the size of a gamestate needed to be increased.

There's gotta be a better way, though, I hope?

##### Share on other sites
hplus0603    11347
There should be no "unlimited" anythings in your design, or you will see problems with resource consumption (maybe even exploits).

If you want to support the worst case, you have to set aside memory for the worst case. There is no other way!

##### Share on other sites
matt_j    106
One thing I recently did on the server end was to store only the binary delta data that was generated when the server had to send delta information to the client. That way you save a lot of memory (my game state is over 50k, but the delta data in the packets was only about 1k max).

Store these binary strings in a linked list. When you get a sequence ack from the client, you know which delta info in memory you can apply to the known client game state. As stated above you could also store a history of what the client knows, but that might be overkill.

Also, use a function that both the client and the server can use for applying the delta information. Mine looks something like:

UncompressDelta(snapshot_t *targetData, delta_t *d)

So again, whether data is coming in packets on the client side or the server is simply updating the known client state, you can use one function for this.