Flow Control

Started by
28 comments, last by Antheus 13 years, 1 month ago
We are making an RTS.

Is it a good idea to send a user command without delay instead of waiting for the next packet send event? My teammate feels very strongly about sending the commands right away instead of only every 30 ms. He feels there shouldn't be a delay between the player issuing commands to a unit and sending them to the server. I feel that delay isn't even noticeable, and sending immediately introduces burstiness into the internet traffic because packets aren't restricted to a steady send interval.

Here's a little description of the client-side code. We decided not to go with a peer-to-peer connection and are doing client-server.

So on the client I queue up user input for the RTS. I communicate with the server every 30 ms, or whatever the packet rate is for the current user's connection, and I put the queued unit commands into every packet I send. When the server replies with an acknowledgment for those commands, the units execute them. If the server doesn't acknowledge them, the commands are resent along with any new commands queued up by the next packet send event. So every 30 ms the client keeps sending commands until they are acknowledged, to ensure the server receives them. There is no client-side prediction, since this is an RTS and doesn't really need it.
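Roughly, the client side ends up looking something like this (a simplified sketch, not our actual code; all the names here are made up for illustration):

#include <cstdint>
#include <deque>

struct Command {
    uint32_t sequence = 0;  // increases monotonically; the server ACKs by sequence
    // ... unit id, order type, target position, etc.
};

struct CommandSender {
    std::deque<Command> unacked;   // sent but not yet acknowledged
    uint32_t nextSequence = 0;
    double sendTimer = 0.0;
    double sendInterval = 0.030;   // ~30 ms between packets, tuned per connection

    // Called the moment the player issues an order; nothing goes on the wire yet.
    void queueCommand(Command c) {
        c.sequence = nextSequence++;
        unacked.push_back(c);
    }

    // Called once per frame with the frame time in seconds.
    void update(double dt) {
        sendTimer += dt;
        if (sendTimer >= sendInterval) {
            sendTimer -= sendInterval;
            // Every packet carries *all* unacknowledged commands, so a lost
            // packet is automatically retried at the next send event.
            sendPacket(unacked);
        }
    }

    // Called when the server acknowledges everything up to 'sequence'.
    void onAck(uint32_t sequence) {
        while (!unacked.empty() && unacked.front().sequence <= sequence) {
            unacked.pop_front();
        }
    }

    void sendPacket(const std::deque<Command>& cmds) {
        // serialize 'cmds' and hand the buffer to the UDP socket (omitted here)
        (void)cmds;
    }
};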
It looks like you are queuing up data before sending it, then sending it in large batches. How big is that block of data?

It looks like your friend wants to send every bit of data immediately. How small is that block of data?


Recall that there is overhead to every packet. Anything less than an MTU (generally 1500 bytes) may be held for a moment internally anyway. It may be held at your modem. It may be held by a border router momentarily. You have no control over what every other machine along the line will do; many will buffer it themselves to prevent your chatty interface from breaking the hundreds of other ISP users.

It is generally best to follow the easiest path: let the sockets code decide how much to buffer, and give it everything immediately. It will make it through the wire eventually.
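Concretely, "give it everything immediately" can be as simple as this (a sketch with made-up names; error handling omitted, and the socket and address are assumed to be set up elsewhere):

#include <netinet/in.h>
#include <sys/socket.h>
#include <cstddef>

// Hand each command to the UDP socket the moment it exists and let the OS,
// the modem, and the routers along the way decide how to buffer and coalesce it.
void sendCommandNow(int sock, const sockaddr_in& serverAddr,
                    const void* data, size_t len)
{
    sendto(sock, data, len, 0,
           reinterpret_cast<const sockaddr*>(&serverAddr), sizeof(serverAddr));
}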
So your options are as follows:
1) Several small packets throughout a frame.
2) One larger packet at a specific point in a frame.

#1 sounds like it's just adding to the overall networking traffic for a possible latency improvement of up to one frame on an indeterminate number of packets each frame.
#2 sounds more efficient and simpler.
"He feels there shouldn't be a delay between the player issuing commands to a unit and sending to the server."

Well, there'll be a round-trip delay anyway. So you're going to have to handle delays.

The usual trick here is to have "wind up" animations. You click a soldier and say "go there". He starts to do stuff like pick up his weapon and turn around... and then the order from the server arrives saying he's to do the move because it's been OKed. Your player never notices that the other players' figures spend less time picking up their weapons... :-)


The player gets immediate feedback, your system gets to handle delays. Job done.
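A rough sketch of what that can look like client-side (hypothetical names, just to show the idea):

// Give the player instant feedback with a "wind up" animation, and only start
// the real move once the server has confirmed the order.
enum class UnitState { Idle, WindingUp, Moving };

struct Unit {
    UnitState state = UnitState::Idle;

    // Called immediately when the player clicks "go there".
    void onLocalOrder() {
        state = UnitState::WindingUp;   // pick up weapon, turn around, ...
        // the order itself is queued/sent to the server elsewhere
    }

    // Called when the server's confirmation for this order arrives.
    void onServerConfirm() {
        state = UnitState::Moving;      // now actually start the move
    }
};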

So your options are as follows:
1) Several small packets throughout a frame.
2) One larger packet at a specific point in a frame.

#1 sounds like it's just adding to the overall networking traffic for a possible latency improvement of up to one frame on an indeterminate number of packets each frame.
#2 sounds more efficient and simpler.



Actually, with option 1 the packets are always just as big as they would be with option 2, so option 1 no longer even sounds as good in that regard.

And we realize there is a round-trip delay. He just feels that the added delay from waiting to send the queued-up commands adds to that round trip as well. I can't convince him that the added delay isn't a problem because it's unnoticeable to human players, and I can't convince him that sending packets at a rate faster than the connection can handle will induce lag spikes, meaning that not having that delay before sending will actually make the round trip even longer.
Hi, this is his teammate. Thank you so much for helping us out with this. My perspective is this:

If two packets are sent at nearly the same time (within "Sd" milliseconds of each other), the first packet will cause a slight delay ("Rd" milliseconds) on the second. I would guess that how close together they are ("Sd") affects the delay ("Rd").

To prevent a delay of Rd, we are sending packets at an interval of F, which is the frame duration.

So since we have an interval of F, that means the average time we wait for the next frame is F / 2.

In theory, ill's technique is amazingly cool and makes sense, but only if the average delay that we would have gotten, Rd, is greater than the average delay we are inducing ourselves, F / 2.

So my question is this: for a game that sends a maximum of 15 packets a second, of maximum 64 bytes each, can you guys give me some sort of reasonable estimate of whether or not F / 2 < Rd?
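(To put rough numbers on that, using assumptions of my own rather than anything measured: 30 fps, and roughly 28 bytes of UDP/IPv4 header per packet.)

constexpr double frameMs   = 1000.0 / 30.0;        // F   ~ 33.3 ms at 30 fps
constexpr double avgWaitMs = frameMs / 2.0;        // F/2 ~ 16.7 ms self-induced delay
constexpr double wireBytes = 15.0 * (64.0 + 28.0); // ~ 1380 bytes/s on the wire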

The problem I've been having with ill is that he keeps citing other games, which are all much more complex than our RTS, probably send much more data than 15 * 64 bytes/s, and would therefore have a much bigger Rd to begin with. So my question is: do you think the F / 2 delay would be less than Rd for our game?

Thanks so much,

- Evan

Actually, with option 1 the packets are always just as big as they would be with option 2, so option 1 no longer even sounds as good in that regard.
You've got to differentiate between a packet at your game's level and a packet at the TCP/UDP level. Multiple "game packets" can be sent in one "TCP/UDP packet".
This means option 1 sends more data, as you're paying the TCP/UDP overhead on each game packet instead of on each group of game packets.
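Roughly, option 2 means something like this (made-up names and framing, just to illustrate):

#include <cstdint>
#include <vector>

// Pack every game-level message generated this frame into one UDP datagram, so the
// ~28 bytes of UDP/IP header are paid once per frame instead of once per command.
std::vector<uint8_t> buildDatagram(const std::vector<std::vector<uint8_t>>& gamePackets)
{
    std::vector<uint8_t> datagram;
    for (const auto& gp : gamePackets) {
        const uint16_t size = static_cast<uint16_t>(gp.size());
        // simple length prefix so the receiver can split the messages apart again
        datagram.push_back(static_cast<uint8_t>(size & 0xFF));
        datagram.push_back(static_cast<uint8_t>(size >> 8));
        datagram.insert(datagram.end(), gp.begin(), gp.end());
    }
    return datagram; // hand this one buffer to the socket once per frame
}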

The reason I see doing things at a rate above the frame rate as wasteful is that your simulation is being advanced in discrete steps of one-over-frame-rate.
If a frame takes 33 ms, then whether a packet arrives 1 ms, 5 ms, or 20 ms into a frame, it's still processed (i.e. affects the simulation) at the same time: the next frame.

Because of this frame-based simulation, you can pretty much treat one-over-frame-rate as the Planck time for most systems.
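In other words, the receiving side effectively looks like this (a sketch; recvPending and applyToSimulation are placeholders, not a real API):

#include <cstdint>
#include <vector>

struct Datagram { std::vector<uint8_t> bytes; };

std::vector<Datagram> recvPending();        // non-blocking drain of the UDP socket
void applyToSimulation(const Datagram& d);  // decode commands and apply them
void advanceSimulationOneStep();            // one fixed 1/frame-rate step

// Whether a datagram arrived 1 ms or 20 ms into the previous frame, it only
// affects the game state here, at the next tick.
void frameTick()
{
    for (const Datagram& d : recvPending()) {
        applyToSimulation(d);
    }
    advanceSimulationOneStep();
}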

So my question is this: for a game that sends a maximum of 15 packets a second, of maximum 64 bytes each
I assume that your frame rate is higher than 15 Hz, so you're not going to be sending more than one packet per frame anyway... which kinda makes the discussion pointless, because no matter when you decide to issue the send command, you'll only be sending at most once per frame no matter what :P

If two packets are sent at the same time (within "Sd" milliseconds), the first packet will cause a slight delay ("Rd" milliseconds). I would guess that how close they are together ("Sd") affects the delay ("Rd").
To prevent a delay of Rd, we are sending packets at an interval of F, which is the frame duration.
Can you explain a bit more what "Rd" represents? And how only issuing send commands once per frame changes the nature of "Rd"?
We are using UDP, not TCP.

Also, it's already obvious that you get no benefit from sending the packets sooner, since they have a good chance of arriving on the other side at the same time anyway due to the routers buffering them up, sort of like TCP/IP does. That is one reason to just do things my way.
Thanks for the reply!
Ah, I see what you're saying, but on the server our game logic is not always processed frame-by-frame; almost everything happens in between. For example, on the server I receive a unit-move command and immediately do things like pathfinding and AI. I believe the only thing we will do frame-by-frame is combat, and even that's only a possibility. So packets received can be processed immediately.

So Rd... let's say packet A is sent, and 5 ms later packet B is sent. In a perfect world, they would be received 5 ms apart. But because the first one slows the second one down, there's an additional delay of 2 ms on the second packet (I don't know if 2 is realistic, I just threw that out there). In this case, Rd is the 2 ms.

Does Rd depend on the proximity of the packets? The frequency? Size? If it's possible, I'd like to get a better idea of what factors into this delay, so I can know if the delay will really happen in our game.

Ah, I see what you're saying, but on the server our game logic is not always processed frame-by-frame; almost everything happens in between. For example, on the server I receive a unit-move command and immediately do things like pathfinding and AI. I believe the only thing we will do frame-by-frame is combat, and even that's only a possibility. So packets received can be processed immediately.


So you're not doing the deterministic, input-synchronous simulation thing? Do you send regular snapshots of each object on the server to the other players to stay in sync?
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
