Flow Control, Prioritizing Data for Packets, and Thread Communication

I'm designing a top-down twitchy space shooter a la Subspace/Continuum, and am in the process of doing research to plan my server architecture (built in C# with lidgren-gen3, communicating with a Unity Web Client for the game client -- very slick stuff). I'm primarily basing it off the FPS model, with UDP packets sent at regular intervals to players (20-30 Hz). I have a number of questions regarding these packets.

In particular, I studied David Aldridge's Halo Reach GDC talk, where he talks about flow control and the server model, and I want to adopt something a little simpler to begin with. My simpler system goes a little something like this:

Client:
- Sends control information to the server
- Control information is immediately realized locally
- Other entities in the world are interpolated based on past received state unless new state contradicts it (dead reckoning, etc.)
- (Possible expansion: Receive other players' control input and use that for prediction)

Server:
- Receives control information from clients and modifies the world accordingly
- Each tick creates a new immutable state from the last, to avoid errors caused by update order
- Dispatches delta-compressed state updates to each client regularly, based on their last acknowledged received state (see the sketch after this list)
- If bandwidth allows, dispatches events to justify change in state (weapon fired, bomb exploded, etc.)
- (Possible expansion: Distribute other players' control input to clients for prediction)
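For the delta-compression step mentioned above, here's a minimal sketch of what I have in mind -- the types and field layout are made up for illustration and not tied to lidgren. The server keeps the last state each client acknowledged and only writes the fields that changed; the client applies the delta on top of the same acked baseline:

```csharp
// Minimal delta-compression sketch (hypothetical types, not lidgren-specific).
using System.IO;

public struct ShipState
{
    public float X, Y, Rotation;
}

public static class Delta
{
    // Writes a bitmask of changed fields, followed by only those fields.
    public static void WriteDelta(BinaryWriter w, ShipState baseline, ShipState current)
    {
        byte mask = 0;
        if (current.X != baseline.X)               mask |= 1 << 0;
        if (current.Y != baseline.Y)               mask |= 1 << 1;
        if (current.Rotation != baseline.Rotation) mask |= 1 << 2;

        w.Write(mask);
        if ((mask & (1 << 0)) != 0) w.Write(current.X);
        if ((mask & (1 << 1)) != 0) w.Write(current.Y);
        if ((mask & (1 << 2)) != 0) w.Write(current.Rotation);
    }

    // The client reconstructs the new state from the baseline it acknowledged.
    public static ShipState ReadDelta(BinaryReader r, ShipState baseline)
    {
        ShipState result = baseline;
        byte mask = r.ReadByte();
        if ((mask & (1 << 0)) != 0) result.X = r.ReadSingle();
        if ((mask & (1 << 1)) != 0) result.Y = r.ReadSingle();
        if ((mask & (1 << 2)) != 0) result.Rotation = r.ReadSingle();
        return result;
    }
}
```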

I plan on running at least two threads. These will be:

1 Communication thread:
- Handles sending data, and dispatching received data to wherever it needs to go

1..n Arena threads:
- Simulate a single arena's world model and state, receiving information from a communication thread

The arena threads might be multiple threads to manage different sectors of an arena, but let's assume that one arena is one thread.

Those are the relevant details; now for my question. The Halo Reach server paradigm has a Flow Controller, which tells the game logic manager "I have this much room for a packet to send to player X, how would you like to fill it?" The game logic manager then prioritizes state updates and events, and returns a packing of that packet space based on what information is most important and relevant for player X. If I'm understanding that incorrectly, stop me here.
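If I'm reading it right, that packing step boils down to something like "sort the candidate messages by priority (and relevance to that player) and greedily take whatever fits in the byte budget." A rough sketch of just the selection part, with purely illustrative names:

```csharp
// Hypothetical sketch of the "fill N bytes by priority" step.
// Candidates are sorted by priority and greedily packed until the
// byte budget for this player's packet runs out.
using System.Collections.Generic;
using System.Linq;

public class Candidate
{
    public int Priority;       // higher = more important for this player
    public byte[] Payload;     // already-serialized message
}

public static class PacketPacker
{
    public static List<Candidate> Pack(IEnumerable<Candidate> candidates, int byteBudget)
    {
        var chosen = new List<Candidate>();
        int used = 0;
        foreach (var c in candidates.OrderByDescending(m => m.Priority))
        {
            if (used + c.Payload.Length > byteBudget)
                continue;      // doesn't fit; maybe a lower-priority, smaller message will
            chosen.Add(c);
            used += c.Payload.Length;
        }
        return chosen;
    }
}
```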

This is complicated with threads, since I'm guessing all of my communication will be through C# lockless queues. Receiving messages and telling the Arena thread about them is fine. The problem is, how do I say the following things (let's pretend this is a chat log between the Flow Controller and the Arena thread):

---

<Flow_Controller>: Arena thread, here's a packet for you of size N, time to fill it.
(i.e. How do I tell the Arena thread that it's time to send out messages, OR should the Arena thread constantly be sending out messages by priority?)

---

<Arena_Thread>: Flow controller, remember that low-priority state update I sent you a minute ago that you haven't sent over the network yet? Well, it isn't relevant now, so forget about it.
(i.e. Once a state update is already in the queue, how can the Arena thread invalidate it so it won't be sent out?)

---

<Arena_Thread>: Flow controller, remember that low-priority state update I sent you a minute ago that you haven't sent over the network yet? Well, here's a new state update about that same entity that's now really important, so forget about the original update and send this one.
(i.e. A state update that was previously enqueued as a very low-priority message is now really important and should be sent out ASAP.)

---

There are two ways of approaching this, as I see it:

1) 20-30 times a second, the Flow Controller tells the Arena thread that it's time to send a packet, and we can send x bytes for player X, and y bytes for player Y, etc. Once the Arena thread gets that message, it pauses simulating, picks the best messages to send to each player based on those size restrictions, enqueues them for the Flow Controller, and goes back to simulating.
Problem: There will be a delay between when the Flow Controller asks for the packet, and the Arena thread gets around to filling it. The Arena thread also has to stop for a period of time (that grows with each connected player).

2) The Arena thread is constantly, asynchronously outputting messages with priorities, maintaining, updating, and deleting old messages so that at any time the Flow Controller wants it, there's a nice up-to-date list of messages to send out with correct priorities based on the current (or very recent) world state.
Problem: Standard mutual exclusion nightmare. How does the Arena thread constantly update and maintain this list while the Flow Controller is reading it and consuming messages? This sounds like a lot of locks and a performance compromise. The Arena thread would also have to maintain one of these lists for each player.



So, am I completely off-track here? I really like the idea of prioritizing messages and selecting the highest-priority messages to send to each player based on bandwidth. I'd like to implement it if possible, but if this sounds completely crazy, I guess I could just try the Quake3/Valve model. Any ideas on how to do this packet-packing problem with inter-thread communication would be quite wonderful. Thanks!
I think you're better off with the arena thread just simulating, period. It then emits the state of all objects after each simulation step. Some separate entity is responsible for keeping clients up to date. This entity inspects the known data about the game, and about the client, and sends the appropriate messages in packets.
You can model this using real messages -- arena sends "entity 3 changed property 12 to value 77.4" -- or you can save RAM by sharing a pointer into an entity object, and just taking that property value out of the object when time comes to send it. Which option you choose depends on implementation strategy, how much you feel you need to lock, etc.
If you go full messaging, you could even hoist the "send data to clients" work into another process/node, for some additional bit of robustness and scalability, but in practice, that gain is not super big.
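To make the messaging option concrete, here's a minimal sketch of that kind of property-change message handed from the arena to the sender through a thread-safe queue -- the names are just for illustration:

```csharp
// Sketch of the "entity 3 changed property 12 to value 77.4" message form,
// handed from the arena thread to the sender through a thread-safe queue.
using System.Collections.Concurrent;

public struct PropertyChanged
{
    public int EntityId;
    public int PropertyId;
    public float Value;
}

public class ArenaOutput
{
    public readonly ConcurrentQueue<PropertyChanged> Changes =
        new ConcurrentQueue<PropertyChanged>();
}

// Arena thread, after a simulation step:
//   output.Changes.Enqueue(new PropertyChanged { EntityId = 3, PropertyId = 12, Value = 77.4f });
// Sender thread:
//   while (output.Changes.TryDequeue(out var change)) { /* pack into the next packet */ }
```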
enum Bool { True, False, FileNotFound };
Okay, that's a good way to look at it. So the way I envision the system working is this:

Each game tick, the Arena thread creates a new world state by applying rules on the current immutable world state, without modifying that original world state. After the new world state is created, it's locked and read-only. The Arena thread emits that read-only timestamped world state. Also, the Arena thread, in a separate channel, produces a stream of events (this bomb exploded, that ship fired a laser) to justify changes in the world state.

Periodically (20-30 Hz), the Flow Controller takes the most recent world state (discarding any old states it missed), determines the relevance and priority of each state update to send for each client, and packs some of that client's outgoing packet with the highest-priority world state changes (or the entire world state, delta-compressed, if there are enough changes). The Flow Controller also picks as many events as it has room for, also prioritized, and stuffs them into the outgoing packet. It sends off that packet and moves on to the next client in round-robin fashion. The Flow Controller always sends the most recent state info to each client: if we have 4 clients A-D, A and B might get tick 122, then C gets tick 123, then D and A get tick 124, B and C get tick 125, etc.

Does that sound like a good way to approach it, then? I envision the Arena thread with two output queues, a state queue and an event queue. The Flow Controller will just burn through everything but the most recent state in the state queue whenever it wants to send out state info, but will try to send out as many events as it can. Events, of course, will time out if the Flow Controller couldn't send them out soon enough for them to actually be relevant.

Since the states in the state queue will be immutable, I can just pass references out. I'll have to figure out how many old world states to save and keep around, but that's a minor question.
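Here's a rough sketch of those two output channels, assuming the snapshots really are immutable so passing references around is safe (the types are made up): the Flow Controller drains the state queue down to the newest snapshot and discards the rest, but tries to deliver every non-expired event.

```csharp
// Sketch of the two output channels: keep only the newest world state,
// but try to deliver every (non-expired) event.
using System.Collections.Concurrent;

public class WorldState { public int Tick; /* immutable snapshot of the arena */ }
public class GameEvent  { public int Tick; public byte[] Payload; }

public class ArenaChannels
{
    public readonly ConcurrentQueue<WorldState> States = new ConcurrentQueue<WorldState>();
    public readonly ConcurrentQueue<GameEvent>  Events = new ConcurrentQueue<GameEvent>();

    // Flow Controller: burn through everything but the most recent state.
    public WorldState TakeLatestState()
    {
        WorldState latest = null;
        while (States.TryDequeue(out var s))
            latest = s;               // older snapshots are simply discarded
        return latest;                // null if the arena hasn't published since last call
    }
}
```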
You can have multiple msg/event queues with different 'quantums' of priority (the highest gets sent before lower ones are even considered -- some can even be absolute, 'always send immediately'). Anything put into queue X is assumed equal in priority, so messages just get pulled from it linearly.
Some transitory information (i.e. position updates) gets escalated to a higher priority when it hasn't been sent within some time threshold
(that is, the data set (i.e. SHIP1234.posx & .posy), not a particular timestamped instance of it -- the latest instance, of course, would go out).

This requires you to maintain the current 'update' state (last confirmed sent) for all the data sets, along with the last sent values
(if you are doing delta compression you do this anyway). Bandwidth per client session is obviously controlled, with throttling now possible when external things begin to choke or that client session is experiencing network conniptions.

That may be a lot of processing, which might best be moved to an intermediary thread (we seem to have a lot more cores these days) or a separate session server (for big scalability) that determines the outbound content, so the network thread is pure send/recv.

Lockless FIFO buffers can be used between these threads to avoid the usual lock hell.



Queue Quantums you might want:

Absolute - always sent immediately
High - events usually sent every time
Low - msgs can wait but will be bumped up if the wait is too long
Advisory - gets sent if nothing else goes (usually for keep-alives and game-irrelevant extras)
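A rough sketch of those quantums with escalation (the names and the 250 ms threshold are made up): each tier is its own queue, the sender drains higher tiers first, and anything that has waited too long in the Low tier gets promoted.

```csharp
// Sketch of tiered outbound queues with escalation for messages that
// have waited too long (names and thresholds are illustrative).
using System;
using System.Collections.Generic;

public enum Tier { Absolute, High, Low, Advisory }

public class Outbound
{
    public Tier Tier;
    public DateTime Enqueued;
    public byte[] Payload;
}

public class TieredQueues
{
    private readonly Dictionary<Tier, Queue<Outbound>> _queues = new Dictionary<Tier, Queue<Outbound>>
    {
        { Tier.Absolute, new Queue<Outbound>() },
        { Tier.High,     new Queue<Outbound>() },
        { Tier.Low,      new Queue<Outbound>() },
        { Tier.Advisory, new Queue<Outbound>() },
    };

    private static readonly TimeSpan LowWaitLimit = TimeSpan.FromMilliseconds(250);

    public void Enqueue(Outbound msg) => _queues[msg.Tier].Enqueue(msg);

    // Promote Low-tier messages that have waited past the threshold.
    public void Escalate(DateTime now)
    {
        var low = _queues[Tier.Low];
        int count = low.Count;
        for (int i = 0; i < count; i++)
        {
            var msg = low.Dequeue();
            if (now - msg.Enqueued > LowWaitLimit)
                _queues[Tier.High].Enqueue(msg);
            else
                low.Enqueue(msg);     // keep waiting, order among kept messages preserved
        }
    }

    // Pull in strict tier order until the byte budget runs out.
    public List<Outbound> Drain(int byteBudget)
    {
        var result = new List<Outbound>();
        foreach (Tier tier in new[] { Tier.Absolute, Tier.High, Tier.Low, Tier.Advisory })
        {
            var q = _queues[tier];
            while (q.Count > 0 && q.Peek().Payload.Length <= byteBudget)
            {
                var msg = q.Dequeue();
                byteBudget -= msg.Payload.Length;
                result.Add(msg);
            }
        }
        return result;
    }
}
```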
Ratings are Opinion, not Fact
