Conceptual: Synchronizing complex objects

Started by dirtyminuth · 18 comments, last by matt_j 16 years, 11 months ago
Hi everyone, first post here, long time lurker, etc. Onto the issue!

Some friends and I are working on a small game for fun. Nothing serious, taking things easy, just enjoying ourselves. We've run into a networking issue, and wouldn't mind insight / comments on it. Our game centers around tile groups (think multiple instances of a 25x25 array of tiles). Tiles can be dynamically added to and removed from groups during gameplay. We need to synchronize the state of each tile group between the server and any clients. There are two basic options: sending the absolute state of each tile group, or sending deltas. Each is explained below.

Absolute
Having the server send the absolute state of each tile group might be infeasible, as this would require a minimum (ignoring compression) of 625 bits (25x25 array * 1 bit) per server tick. Only 1 bit per tile is needed (0 = tile present, 1 = tile not present). Assuming 20 server ticks per second, this equals, for a single tile group, about 1.5 kB/s, and we may have around 10-20 groups at any given time. However, using optimizations like potentially visible sets, we could reduce bandwidth requirements significantly. Sending absolute states is desirable anyway, as we can handle a lossy connection with more ease than when sending deltas (see below).

Deltas
An alternative to sending absolute states is for the server to send tile creation / removal events instead. A client can take the initial state of a tile group, apply these events, in order, to the tile group, and reconstruct the server state. This would certainly require less bandwidth, but it requires each client to receive a complete list of all events; missing even a single event could cause the server and client to lose synchronization. Moreover, if a new player wants to join the game halfway into a round, it would have to receive all events the server has accumulated thus far. Given the possible length of a single round, this could also be infeasible.
So, given these two options to synchronize data-heavy objects, what might be the best option? Are there other optimizations that could help? Would a hybrid absolute / delta solution work well? Are we hosed, Tommy? Any comments would be appreciated! -dirtyminuth
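For concreteness, the 625-bit absolute snapshot described above could be bit-packed along these lines (a sketch only; the helper names are illustrative, not from the game). 625 bits round up to 79 bytes per snapshot, which at 20 ticks per second is roughly the 1.5 kB/s per group quoted above:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

const int GRID  = 25;
const int BITS  = GRID * GRID;     // 625 bits, one per tile
const int BYTES = (BITS + 7) / 8;  // 79 bytes per snapshot

// Pack a 25x25 grid of tile-present flags into a byte buffer.
void pack(const bool tiles[GRID][GRID], uint8_t out[BYTES])
{
    std::memset(out, 0, BYTES);
    for (int y = 0; y < GRID; ++y)
        for (int x = 0; x < GRID; ++x)
        {
            int bit = y * GRID + x;
            if (tiles[y][x])
                out[bit / 8] |= (uint8_t)(1u << (bit % 8));
        }
}

// Reverse: expand the byte buffer back into the grid.
void unpack(const uint8_t in[BYTES], bool tiles[GRID][GRID])
{
    for (int y = 0; y < GRID; ++y)
        for (int x = 0; x < GRID; ++x)
        {
            int bit = y * GRID + x;
            tiles[y][x] = (in[bit / 8] >> (bit % 8)) & 1;
        }
}
```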
If your game is strategic (turn based, or like an RTS) then use just deltas and TCP to make sure nobody misses any data.
enum Bool { True, False, FileNotFound };
Thanks hplus - I should have stated that this game is not turn-based or RTS - it's fast paced like an FPS. One more point (for additional info): any player in the game is likely going to create 2 to 3 delta events per second.
The need to consider data loss is only there if you're using a protocol that has a tendency to lose data, which sounds like the only thing that would make you choose 'absolute' over 'deltas'. 'Deltas' is clearly the better choice [it sounds like, if I am understanding you correctly] in terms of what you'd do in an ideal world with perfect connections that never lose data, so how about this:

If you use TCP, you will never lose data, and can use deltas without fear.

If you use UDP, include a small counter [even a few bits would be more than sufficient for this kind of use], and use it as a sort of time stamp. Each packet sent receives a counter equal to the previous packet's counter, plus 1. Request packets to be re-transmitted if they are received out of order, and you've ensured no packets get dropped.

Either way, the overhead is less than using the 'absolute' method. I'd advise TCP, just because it's easier, and you don't have to write the packet stamping stuff [since that's effectively what it does internally].
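The counter stamping described above could look something like this (a hedged sketch; the 8-bit width and the `IsNewer` / `Receiver` names are illustrative assumptions, not from the post). The key detail is the wraparound-aware comparison, so the counter can stay small:

```cpp
#include <cassert>
#include <cstdint>

typedef uint8_t SeqId;  // small wrapping sequence number (255 -> 0)

// True if 'a' is newer than 'b', accounting for wraparound: the modular
// distance from b to a must be less than half the counter's range.
bool IsNewer(SeqId a, SeqId b)
{
    return (SeqId)(a - b) < 128;
}

struct Receiver
{
    SeqId m_expected = 0;

    // Returns true if 'incoming' is the next in-order packet. A false
    // return for a newer id means a gap: request retransmission of
    // m_expected before applying further deltas.
    bool Accept(SeqId incoming)
    {
        if (incoming == m_expected)
        {
            ++m_expected;
            return true;
        }
        return false;
    }
};
```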
Another option would be to send deltas from a known good point. At the start of the game, the server and the client will know the initial state. As the game plays, the server will compute the difference between the current state and the INITIAL state. The client regularly sends acknowledgments back as to which states it knows about, and the server continues sending deltas from the last state it knows the client knows about. This technique gets you the benefits of deltas without the expense of the full state information.

You'll have to think carefully about how you encode the tile deltas, by the way. If you just list all the missing tiles, a packet expressing an empty tile grid will be much larger than the full data update would be.
Quote:Original post by Ozymandias43
Another option would be to send deltas from a known good point. At the start of the game, the server and the client will know the initial state. As the game plays, the server will compute the difference between the current state and the INITIAL state. The client regularly sends acknowledgments back as to which states it knows about, and the server continues sending deltas from the last state it knows the client knows about. This technique gets you the benefits of deltas without the expense of the full state information.

You'll have to think carefully about how you encode the tile deltas, by the way. If you just list all the missing tiles, a packet expressing an empty tile grid will be much larger than the full data update would be.


class Observable
{
  void subscribe( Client c )
  {
    c.send( pack_state() );
    subscribers.add(c);
  }

  void unsubscribe( Client c )
  {
    c.send( destroy_message() );
    subscribers.remove(c);
  }

  void send_delta()
  {
    Message delta = pack_delta();
    for (Client subscriber : subscribers) {
      subscriber.send( delta );
    }
  }

  Message pack_state()      { /* store baseline state */ }
  Message pack_delta()      { /* write what has changed into message, and clear delta flags */ }
  Message destroy_message() { /* notify client the object has been removed */ }
}


On each game loop:
- run through client requests, unsubscribe those that need to be unsubscribed
- Do your mojo, advance the game, etc., the fun stuff
- iterate through all observables, call send_delta()
- parse through client requests, and subscribe those that need to be subscribed


This might be too much overhead depending on your structure and number of tiles, but I think that each of your 25x25 tile sets could easily be one observable.

The subscribe here is called when a client gets into range of a tile set, not when it logs into the game (well, then too, but only because it enters the world and sees some tile sets).

This is intended for dynamic POV updates - the basic publish/subscribe mechanism.

[Edited by - Antheus on May 10, 2007 9:03:08 AM]
why not just do both and use delta compression?

here's an example I've been playing with :


struct C_GameState
{
	// whatever is in there
	// .......
};

struct C_Sequence
{
	C_GameState m_State;
	int         m_sequenceId;
};

struct C_RemoteAddress
{
	in_addr m_Address;
};

struct C_ClientProxy
{
	bool            m_valid;
	int             m_sequenceId;
	C_RemoteAddress m_Address;
};

struct C_DataPacket
{
	char m_Data[128 * 1024]; // max of 128 kb of data.
	int  m_DataSize;

	// Full game state
	bool SendFullSequence(const C_RemoteAddress& ToAddress, C_Sequence* pSequence);
	bool ReceiveFullSequence(C_Sequence* pNewSequence);

	// delta game state
	bool SendDeltaSequence(const C_RemoteAddress& ToAddress, C_Sequence* pNewSequence, C_Sequence* pBaseSequence);
	bool ReceiveDeltaSequence(C_Sequence* pNewSequence, C_Sequence* pBaseSequence);

	// receive a new packet. if it's a full update, baseSequenceId will be -1, else it will be
	// the last acknowledged sequence number seen by the server.
	bool PollForNewPacket(const C_RemoteAddress& ToAddress, int& newSequenceId, int& baseSequenceId);
	bool SendAck(const C_RemoteAddress& ToAddress, int sequenceId);
};

struct C_SequenceBuffer
{
	C_Sequence m_Buffer[32];
	int m_sequenceId;

	C_SequenceBuffer() { Clear(); }

	void Clear()
	{
		m_sequenceId = -1;
		for(int i = 0; i < 32; i++)
			m_Buffer[i].m_sequenceId = -1;
	}

	C_Sequence* FindSequence(int id)
	{
		if(id < 0) return NULL;
		int index = id & 31;
		if(m_Buffer[index].m_sequenceId != id)
			return NULL;
		return &m_Buffer[index];
	}

	// From clients.
	// Clients try to allocate a new slot.
	// If the sequence number received is older, don't bother. old packet.
	C_Sequence* AllocSequence(int id)
	{
		if(id <= m_sequenceId) return NULL;
		m_sequenceId = id;
		int index = id & 31;
		m_Buffer[index].m_sequenceId = id;
		return &m_Buffer[index];
	}

	C_Sequence* GetLatestSequence()
	{
		return FindSequence(m_sequenceId - 1);
	}

	bool BuildNewSequence()
	{
		// The game state hasn't changed. No point serialising it again
		if(!TheGame().IsDirty())
			return true;

		// first serialise.
		if(m_sequenceId < 0) m_sequenceId = 0;
		int index = m_sequenceId & 31;
		C_GameState& state = m_Buffer[index].m_State;

		// [GNT - 10.5.2007] - failed to serialise
		if(!TheGame().Serialise(state))
		{
			Clear();
			return false;
		}
		m_Buffer[index].m_sequenceId = m_sequenceId;
		m_sequenceId++;
		return true;
	}
};

struct C_Server
{
	C_SequenceBuffer m_Buffer;
	C_ClientProxy    m_Clients[64]; // max 64 clients
	C_DataPacket     m_Packet;

	void Update()
	{
		// build
		if(!m_Buffer.BuildNewSequence())
			return;

		// get latest game state
		C_Sequence* pLatestSequence = m_Buffer.GetLatestSequence();
		if(!pLatestSequence)
			return;

		for(int i = 0; i < 64; i++)
		{
			if(!m_Clients[i].m_valid)
				continue;

			// get client game state
			C_Sequence* pClientSequence = m_Buffer.FindSequence(m_Clients[i].m_sequenceId);

			// client is up to date
			if(pClientSequence == pLatestSequence)
				continue;

			// client state unknown. Send full state
			if(!pClientSequence)
			{
				m_Packet.SendFullSequence(m_Clients[i].m_Address, pLatestSequence);
			}
			// client state is known. send delta
			else
			{
				m_Packet.SendDeltaSequence(m_Clients[i].m_Address, pLatestSequence, pClientSequence);
			}
		}
	}
};

struct C_Client
{
	C_DataPacket     m_Packet;
	C_SequenceBuffer m_Buffer;
	C_RemoteAddress  m_ServerAddress;

	void Update()
	{
		// receive packets from server
		int newSequenceId, baseSequenceId;
		while(m_Packet.PollForNewPacket(m_ServerAddress, newSequenceId, baseSequenceId))
		{
			// placeholders
			C_Sequence* pNewSequence  = m_Buffer.AllocSequence(newSequenceId); // new slot to store the new state
			C_Sequence* pBaseSequence = m_Buffer.FindSequence(baseSequenceId); // the base for the delta

			// failed to allocate a new slot
			if(!pNewSequence)
				continue;

			if(!pBaseSequence)
			{
				// Expecting a full sequence
				if(!m_Packet.ReceiveFullSequence(pNewSequence))
				{
					m_Buffer.Clear(); // buggered. clear everything
					continue;
				}
			}
			else
			{
				// Expecting a delta packet
				if(!m_Packet.ReceiveDeltaSequence(pNewSequence, pBaseSequence))
				{
					// error. Tell server to resend everything.
					m_Buffer.Clear(); // buggered. clear everything
					continue;
				}
			}

			// and deserialise the game state.
			TheGame().Deserialise(pNewSequence->m_State);
		}

		// Acknowledge the latest packet
		m_Packet.SendAck(m_ServerAddress, m_Buffer.m_sequenceId);
	}
};

Everything is better with Metal.

Quote:any player in the game is likely going to create 2 to 3 delta events per second.


That's not fast paced -- that's the typical command rate of a skilled RTS player.

The biggest question, when using delta, is whether you can accept the command latency that typically comes with that implementation.
enum Bool { True, False, FileNotFound };
Wow! Thanks for everyone putting in the time to help. I'd like to address each response:

Quote:Drigovas
(Using TCP)

TCP is the easiest in terms of implementing endpoint logic (you don't have to code retries, etc.), but data would be much less timely. I was considering a hybrid UDP/TCP system, using UDP for things small enough to send as absolute states, like player pos/vel/acc, and using TCP for these tile group events. There might be a problem with merging UDP and TCP updates into a single client state, though.

Quote:Ozymandias43
(Sending updates based on server knowledge of client states)

That's a pretty interesting idea! It would keep the overall size of delta event small. But how would the server maintain the client 'good' states? For example, consider the following scenario: The client's last 'good' server state is at server time t0, and the server is now generating the authoritative state at time t10 - does this mean the server must maintain all game states between t0 and t10, as the client might later inform the server that his latest 'good' state is t5 (or t6, t7, etc.)?

Quote:Antheus
(Publish / Subscribe mechanism)

This will definitely reduce bandwidth requirements. What would happen if a tile group is out of a client's subscription for a long time and then comes into view? I can imagine having to send a large update, and this could also lead to the server keeping track of a lot of past game states so it knows how the client can 'catch up'.

Quote:oliii
(Hybrid Full Sequence / Delta Sequence)

Definitely considering this - but would this also lead to the server having to maintain a large number of past states (see issues above)?

Quote:hplus0603
(Event Rate)

I hope clients can handle the latency - there's more than just these delta events taking place in the game, and we might be able to implement some client-side prediction of these events.


These ideas have my mind racing, in a good way!

-dirtyminuth
Quake-style delta compression is awesome for any fast-paced, data-driven networked game.

In the case I pointed out, the memory overhead would be 32x a server state, for a cache 32 deep - not much in terms of data (if you just want to sync tiles, that's 32 x 80 bytes, at 640 bits per state). Say you update at 16 fps; that gives you a two-second window (if the game state got 'dirty' every frame), which isn't that deep.

If you want to further increase bandwidth savings and do it on a per-observer basis, then it can get pretty taxing: it's 32 states x 32 players. But the data cached should be a lot smaller - you don't need to cache the whole world, only what each player 'observes' at the time. Then you need a mechanism to fragment the world into observable atoms (your tiles), ways of referring to them (guids, hashed coordinates), and so on.

Everything is better with Metal.

