Flow Control

We are making an RTS.

Is it a good idea to send a user command without delay, instead of waiting for the next packet send event? My teammate feels very strongly about sending commands right away instead of only every 30 ms; he feels there shouldn't be any delay between the player issuing a command to a unit and it being sent to the server. I feel that the delay isn't even noticeable, and that sending immediately introduces burstiness into the internet traffic, because packets are no longer restricted to a steady send interval.

Here's a short description of the client-side code. We decided against a peer-to-peer connection and are doing client/server.

So on the client I queue up user input. I communicate with the server every 30 ms, or whatever the packet rate is for the current user's connection, and include the queued unit commands in every packet. When the server replies with an acknowledgment for those commands, the units execute them. If the server doesn't acknowledge them, the commands are resent, along with any new commands queued up since, at the next packet send event. So every 30 ms the client keeps sending commands until they are acknowledged, to ensure the server receives them. There is no client-side prediction, since this is an RTS and doesn't really need it.
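In code, that flow looks roughly like this; a minimal sketch with made-up names, and with the actual packet I/O stubbed out:

[code]#include <algorithm>
#include <cstdint>
#include <vector>

struct Command { uint32_t unitId; float x, y; };        // a queued unit order
struct Pending { uint32_t firstSentIn; Command cmd; };  // order not yet acked

static std::vector<Pending> pending;   // everything the server hasn't acked
static uint32_t nextSequence = 1;

// Stub: serialize cmds into one datagram tagged with `sequence` and send it.
void sendPacket(uint32_t sequence, const std::vector<Pending>& cmds);

// Runs every packet send event (~30 ms): new input joins the queue, and
// everything still unacked rides along in the same datagram.
void onSendTick(const std::vector<Command>& inputThisTick) {
    for (const Command& c : inputThisTick)
        pending.push_back({nextSequence, c});
    sendPacket(nextSequence, pending);
    ++nextSequence;
}

// Runs when the server acks a packet: drop every command it had carried.
void onAck(uint32_t ackedSequence) {
    pending.erase(std::remove_if(pending.begin(), pending.end(),
                      [&](const Pending& p) { return p.firstSentIn <= ackedSequence; }),
                  pending.end());
}[/code]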

It looks like you are queuing up data before sending it, then sending it in large batches. How big is that block of data?

It looks like your friend wants to send every bit of data immediately. How small is that block of data?


Recall that there is overhead on every packet. Anything less than an MTU (generally 1500 bytes) may be held for a moment internally anyway: it may be held at your modem, or held briefly by a border router. You have no control over what the machines along the path will do; many will buffer your traffic themselves to keep a chatty sender from degrading service for the hundreds of other users on the ISP.

It is generally best to follow the easiest path: let the sockets code decide how much to buffer, and give it everything immediately. Eventually it will make it through the wire.

So your options are as follows:
1) Several small packets throughout a frame.
2) One larger packet at a specific point in a frame.

#1 sounds like it's just adding to overall network traffic for a possible latency improvement of up to one frame, on an indeterminate number of packets each frame.
#2 sounds more efficient and simpler.

"He feels there shouldn't be a delay between the player issuing commands to a unit and sending to the server."

Well, there'll be a round-trip delay anyway. So you're going to have to handle delays.

The usual trick here is "wind-up" animations. You click a soldier and say "go there". He starts to do things like pick up his weapon and turn around... and then the order from the server arrives saying the move has been OKed, and off he goes. Your player never notices that other players' figures spend less time picking up their weapons... :-)


The player gets immediate feedback, your system gets to handle delays. Job done.
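If it helps, the client-side state machine is tiny; this is a hypothetical sketch of the idea, not anyone's real implementation:

[code]// Instant cosmetic feedback locally; the gameplay-affecting move
// starts only once the server has OKed the order.
enum class OrderState { None, WindingUp, Confirmed };

struct Soldier {
    OrderState order = OrderState::None;
};

void onPlayerOrder(Soldier& s) {
    s.order = OrderState::WindingUp;  // play weapon-raise/turn animation now
    // ...send the command to the server as usual...
}

void onServerOk(Soldier& s) {
    s.order = OrderState::Confirmed;  // round trip done; begin the real move
}[/code]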

[quote name='Hodgman' timestamp='1298270251' post='4776937']
So your options are as follows:
1) Several small packets throughout a frame.
2) One larger packet at a specific point in a frame.

#1 sounds like it's just adding to overall network traffic for a possible latency improvement of up to one frame, on an indeterminate number of packets each frame.
#2 sounds more efficient and simpler.
[/quote]


Actually, with option 1 the packets are always just as big as they would be with option 2, so option 1 no longer even sounds good in that regard.

And we realize there is a round-trip delay. He just feels that the added delay from waiting to send the queued-up commands adds to the round-trip delay as well. I can't convince him that the added delay isn't a problem, because it's unnoticeable to human players. I also can't convince him that sending packets faster than the connection can handle will induce lag spikes, and that dropping the delay before sending will actually make the round trip even longer.

Hi, this is his teammate. Thank you so much for helping us out with this. My perspective is this:

If two packets are sent close together (within "Sd" milliseconds of each other), the first packet will delay the second by some amount ("Rd" milliseconds). I would guess that how close together they are (Sd) affects the delay (Rd).

To prevent a delay of Rd, we are sending packets at an interval of F, the frame duration.

So with an interval of F, the average time a command waits for the next send is F / 2.

In theory, ill's technique is amazingly cool and makes sense, but only if the average delay we would otherwise incur, Rd, is greater than the average delay we are inducing ourselves, F / 2.

So my question is this: for a game that sends at most 15 packets a second, each at most 64 bytes, can you give me some reasonable estimate of whether F / 2 < Rd?
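(For concreteness, assuming I have the numbers right: at 15 packets per second, F ≈ 1000 / 15 ≈ 67 ms, so our self-induced average delay would be F / 2 ≈ 33 ms; at a 30 Hz send rate it would be F ≈ 33 ms and F / 2 ≈ 17 ms.)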

The problem I've been having with ill is that he keeps citing other games, which are all much more complex than our RTS, probably send much more data than 15 × 64 bytes per second, and would therefore have a much bigger Rd to begin with. So my question is: do you think the F / 2 delay would be less than Rd for our game?

Thanks so much,

- Evan

[quote name='ill' timestamp='1298326162' post='4777248']
Actually, with option 1 the packets are always just as big as they would be with option 2, so option 1 no longer even sounds good in that regard.[/quote]You've got to differentiate between a packet at your game's level and a packet at the TCP/UDP level. Multiple "game packets" can be sent in one "TCP/UDP packet".
This means option 1 sends more data, as you're paying the TCP/UDP overhead on each game packet instead of on each [i]group[/i] of game packets.
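As a sketch of the difference (hypothetical message type; a real implementation would serialize fields properly instead of copying raw structs):

[code]#include <cstdint>
#include <vector>

struct GameMessage { uint8_t type; uint32_t unitId; };  // hypothetical

// Option 2: many game packets, one UDP datagram, one set of headers.
std::vector<uint8_t> bundle(const std::vector<GameMessage>& msgs) {
    std::vector<uint8_t> datagram;
    for (const GameMessage& m : msgs) {
        const uint8_t* bytes = reinterpret_cast<const uint8_t*>(&m);
        datagram.insert(datagram.end(), bytes, bytes + sizeof(GameMessage));
    }
    return datagram;  // hand this to a single sendto() call
}[/code]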

The reason I see doing things at an [i]above-frame-rate[/i] rate as wasteful is that your simulation is advanced in discrete steps of [i]one-over-frame-rate[/i].
If a frame takes 33 ms, then whether a packet arrives 1 ms, 5 ms, or 20 ms into the frame, it's still processed (i.e. it affects the simulation) at the same time: [i]next frame[/i].

Because of this frame-based simulation, you can pretty much treat [i]one-over-frame-rate[/i] as the [url="http://en.wikipedia.org/wiki/Planck_time"]Planck time[/url] for most systems.[quote name='Verdagon' timestamp='1298333914' post='4777312']
So my question is this: for a game that sends at most 15 packets a second, each at most 64 bytes[/quote]I assume your frame rate is higher than 15 Hz, so you're not going to be sending more than one packet per frame anyway... which kinda makes the discussion pointless: no matter when you decide to issue the send command, you'll be sending at most once per frame. :P
[quote name='Verdagon' timestamp='1298333914' post='4777312']
If two packets are sent close together (within "Sd" milliseconds of each other), the first packet will delay the second by some amount ("Rd" milliseconds). I would guess that how close together they are (Sd) affects the delay (Rd).
To prevent a delay of Rd, we are sending packets at an interval of F, the frame duration.[/quote]Can you explain a bit more what "Rd" represents? And how issuing send commands only once per frame changes the nature of "Rd"?

We are using UDP, not TCP.

Also, it's already apparent that you get no benefit from sending a packet sooner, since packets sent close together have a good chance of arriving on the other side at the same time anyway, because routers buffer them up, sort of like TCP does. That is one reason to just do things my way.

Thanks for the reply!
Ah, I see what you're saying, but on the server our game logic is not always processed frame by frame; almost everything happens in between. For example, when the server receives a unit-move command, it immediately does things like pathfinding and AI. I believe the only thing we will do frame by frame is combat, and even that's only a possibility. So received packets can be processed immediately.

So, Rd... let's say packet A is sent, and five ms later packet B is sent. In a perfect world they would be received 5 ms apart. But because the first one slows the second one down, there's an additional delay on the second packet of, say, 2 ms (I don't know if 2 is realistic, I just threw that out there). In this case, Rd is the 2 ms.

Does Rd depend on the proximity of the packets? Their frequency? Their size? If possible, I'd like a better idea of what factors into this delay, so I can tell whether it will really matter in our game.

[quote name='Verdagon' timestamp='1298366820' post='4777459']
Ah, I see what you're saying, but on the server our game logic is not always processed frame by frame; almost everything happens in between. For example, when the server receives a unit-move command, it immediately does things like pathfinding and AI. I believe the only thing we will do frame by frame is combat, and even that's only a possibility. So received packets can be processed immediately.
[/quote]

So you're not doing the deterministic, input-synchronous simulation thing? Do you send regular snapshots of each object on the server to the other players to stay in sync?

[quote name='hplus0603' timestamp='1298398569' post='4777615']
[quote name='Verdagon' timestamp='1298366820' post='4777459']
Ah, I see what you're saying, but on the server our game logic is not always processed frame by frame; almost everything happens in between. For example, when the server receives a unit-move command, it immediately does things like pathfinding and AI. I believe the only thing we will do frame by frame is combat, and even that's only a possibility. So received packets can be processed immediately.
[/quote]

So you're not doing the deterministic, input-synchronous simulation thing? Do you send regular snapshots of each object on the server to the other players to stay in sync?
[/quote]

Yeah, we're not doing a lockstep model.

Our model is more like Quake 3's delta-encoded client/server, minus the client-side prediction.




I was wondering, though: what should happen when a client queues up commands and sends them to the server, the commands aren't acknowledged right away, so at the next heartbeat the client sends them again (some changed, some new), and then the client suddenly receives the ack for the first set? Should the first set of commands be considered completely invalid, or should the ones that match the commands sent in the second heartbeat be considered acknowledged, with the client starting to execute them?

[quote name='Verdagon' timestamp='1298366820' post='4777459']

So, Rd... let's say packet A is sent, and five ms later packet B is sent. In a perfect world they would be received 5 ms apart. But because the first one slows the second one down, there's an additional delay on the second packet of, say, 2 ms (I don't know if 2 is realistic, I just threw that out there). In this case, Rd is the 2 ms.[/quote]

Connections are serial, so one packet does not affect another. The only place where two packets could exist at the same time is in some buffer in some router.

[quote]Does Rd depend on the proximity of the packets? Their frequency? Their size?[/quote]On the speed of light. Throughput and latency are bounded by the speed of light (in copper or fiber) and by switching delays inside transistors. On the internet, physical distance contributes an order of magnitude more to delay than anything else.

Packets do get delayed for many reasons; some get lost, some corrupted. But those are all statistical parameters: one packet does not affect another. This is not an analog signal where interference would reduce the available bandwidth.

The only time one packet does affect another is on an overtaxed connection, but that is not something one can control or productively address, short of having a dedicated one-to-one line. It's also something the client cannot compensate for at this level; usually interpolation or extrapolation is used.



The other effect that could occur is aliasing: fractional send intervals (say, sending at 66.66 Hz) can, through a combination of causes, produce jitter against a simulation or rendering rate of 30 or 60 Hz. It's the same effect that produces bands when resizing raster images. This is rarely an issue, since the variance in network delays is typically a significant fraction of the RTT.

[quote][color="#1C2837"][size="2"]The only time where two packets could exist in same place would be in some buffer in some router.[/quote][/size][/color]
[color="#1C2837"] [/color]
[color="#1C2837"][size="2"]This is where we've been assuming the Rd would come from; ill's been saying (and I've been trusting him on this minor assumption), since a router already has packets waiting to be sent, those must be sent first, which means the latter packets must wait. That wait would have contributed to the Rd. The final Rd would be the sum of all these "waiting" delays. ...right?[/size][/color]
[color="#1C2837"] [/color]
[color="#1C2837"][size="2"]Also, not only routers have buffers, it's also the client's network card, and that acts the same way, right?[/size][/color]
[size="3"][color="#1C2837"][size=2]
[/size][/color][/size]
[size="3"][color="#1C2837"][size=2]
[/size][/color][/size]
[color="#1C2837"][size="2"][quote][/size][/color][color=#1C2837][size=2]The only time where one packet does affect the other is in case of overtaxed connection, but this is not something one can control or productively address, excluding having a dedicated one-to-one line. It's also something client cannot compensate at this level, usually interpolation or extrapolation is used.[/quote][/size][/color]
[color=#1C2837][size=2]
[/size][/color]
[size="3"][color="#1C2837"][size=2]I think this is where ill's entire point is: to avoid an overtaxed connection, we can group our packets (at least, this will avoid only the part of overtaxing that we're contributing to, we can't affect other programs and the rest of the world). Grouping our packets then introduces its own delay of F / 2. Am I understanding what you're saying?[/size][/color][/size]
[size="3"][color="#1C2837"][size=2]
[/size][/color][/size]
[size="3"][color="#1C2837"][size=2]Edit: added second quote[/size][/color][/size] Edited by Verdagon

[quote name='ill' timestamp='1298401055' post='4777635']
I was wondering, though: what should happen when a client queues up commands and sends them to the server, the commands aren't acknowledged right away, so at the next heartbeat the client sends them again (some changed, some new), and then the client suddenly receives the ack for the first set? Should the first set of commands be considered completely invalid, or should the ones that match the commands sent in the second heartbeat be considered acknowledged, with the client starting to execute them?
[/quote]

In general, commands are given on a specific time step. When you forward them to the server, also forward the time step at which they were given. That way the server can easily ignore later duplicates.

If you only send commands like "send units (1,2,3,4) to position X" to the server, it doesn't matter much whether you send the command right away or wait for the end of the frame; assuming a high command/simulation rate (30 Hz?), the player is highly unlikely to generate more than one command in a single frame anyway. You will, however, also have to send packets back to the server that acknowledge previously received messages at some rate, so it might be better to just put all commands into a queue and, at the end of each frame, send all commands that are not yet acked. When acks arrive, remove the matching commands from the queue. Very simple, and simple is great when it comes to complex systems like distributed simulation!
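A minimal sketch of that duplicate check on the server (hypothetical key scheme; assumes each command carries the player id and the time step on which it was issued):

[code]#include <cstdint>
#include <unordered_set>

// Commands already executed, keyed by (player, time step of issue).
static std::unordered_set<uint64_t> executed;

// Returns true the first time a (player, step) command arrives;
// duplicates carried by resent packets are simply ignored.
bool shouldExecute(uint32_t playerId, uint32_t timeStep) {
    uint64_t key = (uint64_t(playerId) << 32) | timeStep;
    return executed.insert(key).second;
}[/code]

If a player can issue several commands on one time step, key on a per-command id instead.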

We don't have a command/simulation rate; we handle all data as it comes in. The nice thing about this is that if the server is told at 10:55:800 that a unit wants to start moving, the unit starts moving right then, instead of waiting until 10:55:850 when our simulation step would have happened. The end result is that the unit is a little further along, not just starting.

Here's a question: if I have a ping of 30 ms, does that mean I can only send a packet every 30 ms? That seems counterintuitive to me. I'd imagine a client could send a packet at 0 ms and one at 5 ms, and the server would receive them at 15 ms and 20 ms.

ill seems to be telling me that a ping of 30 ms is the same thing as only being able to send a packet every 30 ms. If that were true in the example above, the client who sent his packet at 5 ms would actually not get it across the connection until 30 ms, and the server would receive it at 45 ms.

So my question is: does ping determine packets per second? I'd imagine not, but I'd like to hear what you guys have to say.

Verdagon-

To answer your question about ping: the 30 ms is the round-trip time to send a message from one machine to another and back. You can interpret this as meaning it takes about 15 ms for a message from your machine to be received at the destination; there is 15 ms of lag between the machines. It doesn't have much to do directly with throughput: if you send 100 packets at time 0, they should all arrive around time 15 ms (relative to the computer that sent them).

As for the rest of the discussion, here's my 2 cents (as somebody who may do this for his day job): depending on how much data you want to send per tick (or frame), you can benefit greatly from bundling the data together within a single UDP packet. The catch is to keep each bundled packet below the MTU minus the IP and UDP header sizes (1472 bytes for a 1500-byte MTU). Otherwise you risk the packet being fragmented at the IP layer, and that introduces its own issues (waiting for out-of-order or missing fragments). If you are only sending a packet or two every tick, just send the messages and be done with it. If you're sending hundreds, avoid the overhead and bundle.
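As a sketch of that check (assuming a 1500-byte Ethernet MTU, so 1472 bytes of UDP payload after 20 bytes of IP header and 8 bytes of UDP header; the send call is a stub):

[code]#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::size_t kMaxPayload = 1472;  // 1500 MTU - 20 IP - 8 UDP

static std::vector<uint8_t> outBuffer;

void sendDatagram(const std::vector<uint8_t>& payload);  // stub

// Append one message to the outgoing bundle, flushing first if it
// would push the datagram over the fragmentation threshold.
void queueMessage(const uint8_t* msg, std::size_t len) {
    if (outBuffer.size() + len > kMaxPayload) {
        sendDatagram(outBuffer);
        outBuffer.clear();
    }
    outBuffer.insert(outBuffer.end(), msg, msg + len);
}[/code]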

How are you synchronizing time between the different machines? It's pretty common with your type of game to just use the time a packet arrives as the moment the event happens, and to let the temporal differences between machines exist. Other mechanisms for time synchronization range from "hmmm..." to "you've got to be kidding me."

cheers,

Bob

Hi Bob. On the server side we may send as many as 100 packets per second; on the client we probably send at most 15 per second. So I'm hoping we only have to bundle on the server.

Someone might say, "but if you're sending 100 per second from the server, wouldn't you have to send 100 acks per second back?" But no, I think we could bundle those on the client, while sending the initial inputs immediately; acks aren't as time-sensitive as initial inputs. Would this approach be reasonable?

[quote name='Verdagon' timestamp='1298434817' post='4777833']
We don't have a command/simulation rate; we handle all data as it comes in. The nice thing about this is that if the server is told at 10:55:800 that a unit wants to start moving, the unit starts moving right then, instead of waiting until 10:55:850 when our simulation step would have happened. The end result is that the unit is a little further along, not just starting.

Here's a question: if I have a ping of 30 ms, does that mean I can only send a packet every 30 ms? That seems counterintuitive to me. I'd imagine a client could send a packet at 0 ms and one at 5 ms, and the server would receive them at 15 ms and 20 ms.

ill seems to be telling me that a ping of 30 ms is the same thing as only being able to send a packet every 30 ms. If that were true in the example above, the client who sent his packet at 5 ms would actually not get it across the connection until 30 ms, and the server would receive it at 45 ms.

So my question is: does ping determine packets per second? I'd imagine not, but I'd like to hear what you guys have to say.
[/quote]

We would only be sending 100 packets per second from the server if the delay between packets is 10 ms. We're not gonna send data that often if the connection can't handle it.

Also, it's not reasonable to send the acks separately from the input. The ack data is small enough that it doesn't add much to the packet, so why not just send it along anyway? My professor in my networking class also told us to do it this way, since it simply doesn't make sense not to piggyback this small, useful data on the important data. Besides, the input data will contain sequence numbers and such, which is already half of the ack data anyway. You can't omit the sequence number, or the server won't know that a packet it just received is no longer relevant because it got delayed by 20 seconds by something completely random somewhere on the internet.
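For reference, the piggybacked acks boil down to a few extra fields in every packet header, roughly like the scheme in the article I keep citing (the field sizes here are just illustrative):

[code]#include <cstdint>

// Every packet carries its own sequence number plus acks for the other
// side, so no separate ack packets are ever needed.
struct PacketHeader {
    uint16_t sequence;  // this packet's sequence number
    uint16_t ack;       // newest sequence number seen from the other side
    uint32_t ackBits;   // bitfield acking the 32 packets before `ack`
};[/code]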

We have a pretty solid way to delta-encode the relevant info between snapshots, so the client will be able to synchronize with the server even if packets are sent, say, every 1 second. The only problem is that the units will be corrected far more often, and the game will look horribly jittery at 1-second updates, so I want to be sending packets as fast as possible.

I'm using ping as the metric for the delay between packets because sending packets faster doesn't make sense. If you draw this out in a diagram it becomes obvious: a packet sent from side A to side B isn't acknowledged on side A until the round-trip time has elapsed. I am going to keep sending the same data from side A to side B until side B acknowledges it, to ensure side B receives that data as fast as possible. There's no point in sending the same data over and over once it has made it to the other side, though; I just need to wait for the ack.

If I don't resend the data, a packet sent from side A to side B might be dropped, and if I'm not resending at a constant, fast rate I won't know about it until, say, a second later. Side B then has an even bigger delay before receiving the message from side A.

The chances of a packet getting dropped depend entirely on random conditions on the internet that we have absolutely no control over. I'm gonna make an educated guess that packets have about a 10% chance of dropping under normal conditions on a connection from San Francisco all the way to Berlin, Germany. There are maaaany routers along the way; one of them is gonna drop it some time. That means that 10% of the time, the server won't receive the client's unit orders the first time they're sent. And that means 10% of the player's orders would be delayed considerably if, instead of resending the same data at the rate of the ping until acknowledged, I resent it much less often, like Verdagon keeps telling me I should. The server wouldn't receive the data as fast as possible: the first command might sometimes come through, but sometimes it won't, and then the command is delayed even more.

With my implementation, I won't be waiting 1 second to resend to Berlin. I will keep resending the same packet every so many milliseconds, based on our ping, until it's acknowledged by the other side. The chances of Berlin receiving my packet sooner are considerably higher.

My implementation handles a lot of problems that the internet creates that aren't obvious at first. It might seem that sending packets less often is more beneficial, until you realize all of these things and it clicks in your head: "WOW! Yeah, now I understand why all these other people do it this way." At first I too wanted to do something similar to Verdagon's idea, until I thought about it in more depth.

[quote name='ill' timestamp='1298454360' post='4777890']
We would only be sending 100 packets per second from the server if the delay between packets is 10 ms. We're not gonna send data that often if the connection can't handle it.[/quote]

Of course not, but who said the connection can't handle it? I think the router's throughput in packets per second is much, much higher than 1000/ping (in this case 100). You keep saying that if the ping is X, then the throughput is X.

Would someone kindly answer my ping = throughput question? From what I understand, a ping of 30 ms does not mean a throughput of 33 packets per second. It would if you waited for an ack for every packet before sending the next one, but that's a limitation you impose on yourself in your method; it's one I don't impose on myself in mine.

[quote]
Also, it's not reasonable to send the acks separately from the input. The ack data is small enough that it doesn't add much to the packet, so why not just send it along anyway? My professor in my networking class also told us to do it this way, since it simply doesn't make sense not to piggyback this small, useful data on the important data. Besides, the input data will contain sequence numbers and such, which is already half of the ack data anyway. You can't omit the sequence number, or the server won't know that a packet it just received is no longer relevant because it got delayed by 20 seconds by something completely random somewhere on the internet.[/quote]

Quick question: in what situation would there be acks waiting to be sent? In my method (as well as yours) acks are sent immediately. The only thing that's delayed in my method is the server sending the ack for the input packet back, which waits for the next scheduled round of acks.

[quote]We have a pretty solid way to delta-encode the relevant info between snapshots, so the client will be able to synchronize with the server even if packets are sent, say, every 1 second. The only problem is that the units will be corrected far more often, and the game will look horribly jittery at 1-second updates, so I want to be sending packets as fast as possible.[/quote]

For the back-and-forth acks and packets, we would be operating at the same period as the ping, just like in your method. The only difference is that I send the initial input packet early. This will not slow down the rest of the packets, because sending something in the middle of the back-and-forth period just eats up throughput, which we have plenty of.

[quote]I'm using ping as the metric for the delay between packets because sending packets faster doesn't make sense.[/quote]
It only doesn't make sense for your method. For mine, we send it early and leave it out of the next round of scheduled packets/acks. Besides, sending packets before the ping period is up would work, because it only takes up throughput, which we have plenty of.

[quote]If you draw this out in a diagram it becomes obvious: a packet sent from side A to side B isn't acknowledged on side A until the round-trip time has elapsed.[/quote]
Right, same in my method.

[quote]I am going to keep sending the same data from side A to side B until side B acknowledges it, to ensure side B receives that data as fast as possible. There's no point in sending the same data over and over once it has made it to the other side, though; I just need to wait for the ack.[/quote]
Right, or wait until you think the ack should have arrived (I think that's what you meant too?)

[quote]If I don't resend the data, a packet sent from side A to side B might be dropped, and if I'm not resending at a constant, fast rate I won't know about it until, say, a second later. Side B then has an even bigger delay before receiving the message from side A.[/quote]
That's not true. Just set yourself a timer: okay, I just sent this packet, I need to check in (ping) milliseconds, because that's when the ack would have arrived.


[quote]The chances of a packet getting dropped depend entirely on random conditions on the internet that we have absolutely no control over. I'm gonna make an educated guess that packets have about a 10% chance of dropping under normal conditions on a connection from San Francisco all the way to Berlin, Germany. There are maaaany routers along the way; one of them is gonna drop it some time. That means that 10% of the time, the server won't receive the client's unit orders the first time they're sent. And that means 10% of the player's orders would be delayed considerably if, instead of resending the same data at the rate of the ping until acknowledged, I resent it much less often, like Verdagon keeps telling me I should. The server wouldn't receive the data as fast as possible: the first command might sometimes come through, but sometimes it won't, and then the command is delayed even more.[/quote]
I'm not telling you to resend less often; I'm telling you not to wait when some information is more time-sensitive, such as the initial input. And even if it is delayed, my resend will arrive at the same time as your resend would have, if you paid attention to the document I shared with you (https://docs.google.com/document/pub?id=1Nv2fWdT8w8QMwhsyTDD-lCaZGgGJzBC64UslnNEdM44).

[quote]With my implementation, I won't be waiting 1 second to resend to Berlin. I will keep resending the same packet every so many milliseconds, based on our ping, until it's acknowledged by the other side. The chances of Berlin receiving my packet sooner are considerably higher.[/quote]
When you say every couple of milliseconds, does that number of milliseconds = your ping? From what I understand, that's how your method works. And that's how mine works too.


TL;DR: two main points in my post that need to be addressed:
1. I think that ping != throughput (packets per second). Can someone tell me whether I have the right idea?
2. The main difference between your method and mine, ill, is that I can expedite a send when something important comes from the user. Based on my assumption in (1), this would not slow down the rest of the scheduled sends.

(If the rest of you are wondering what my method is and what ill's method is, see the document here: https://docs.google.com/document/pub?id=1Nv2fWdT8w8QMwhsyTDD-lCaZGgGJzBC64UslnNEdM44)

But you're not accounting for the fact that packets get dropped and arrive late. I would include acks with every packet I send, because a previous ack might have been dropped, or delayed so long that it won't arrive at the other side until it no longer matters.

My method handles the fact that packets get dropped, delayed, or corrupted (and then dropped). Your method only takes travel time into account, and it assumes travel time is constant with no random variation. You are completely ignoring the fact that if packet A is sent 5 ms before packet B on a path where the delay should be 30 ms, packet A won't necessarily arrive 5 ms before packet B; it may arrive at the same time as packet B, and might even make both of them take a bit longer to travel.

I'm handling many important edge cases you are simply ignoring. The fact that packets get dropped, corrupted, or delayed is not negligible. The reason you never notice these things is that applications are programmed correctly and don't do the things you want to do. A lot of the applications you use, like web browsers, simply use HTTP, TCP, FTP, SSH, or whatever, and TCP, for example, is implemented similarly to my way; the other reliable internet protocols are too. We are creating our own protocol for our game on top of UDP, and I've built these considerations into it. This protocol is similar to what other videogames do, and it works great, because data constantly travels back and forth to synchronize gamestates, so the connection knows at all times what the RTT is.

TCP, on the other hand, never initially knows what the send rate should be. If you give it a lot of data to send, it divides it up and sends the small packets at a very slow rate at first, because it has to account for the possibility that I'm on a 14.4k connection. Then it gradually speeds up the transfer as it sees I'm on a good connection. When the transfer is done and I later send more data, it starts slow all over again; it keeps no memory of previous sends, and it has no way of knowing that internet conditions didn't worsen in the 20 seconds since I last sent something. It's a robust algorithm that works on any system and handles the edge cases. It prevents flooding a connection with more packets than it can handle by implementing flow control and never deviating from it, the way you want to. It doesn't matter whether I'm on a 1 kbps connection or a 16 MB/s connection: it just works. It's also nice and simple, without crazy complexities.

The article I read on implementing this said RTT is a good metric for the packet send rate. RTT is ping. I've thought about it enough that it makes sense to me as a metric. Things have clicked for me, and I know exactly why I do everything the way I do; they haven't clicked for you yet. If you implement it your way, you will start hitting the problems I'm describing above, and as you handle each edge case one by one you'll end up with an implementation very similar to mine anyway. When I first sat down to brainstorm our networking, I had ideas very similar to yours, pretty much exactly what you have in mind now. Then I realized the internet introduces all of these problems and kept modifying my solution until I arrived at this one, which is very robust and will work well. I've already done the research and asked the questions you're asking now. I'm part of your team; you don't need to duplicate the effort if someone on your team has already done it. There's no point in even having a team otherwise. If I'm telling you that packets will just buffer up when they can't be sent that fast, because I know that's how the internet works, you can't just say to me, "I simply don't believe that."

Also, you simply CANNOT say that the input delay from queuing up commands and sending them with the next heartbeat will ruin the play experience. An input delay of 30 ms is just not noticeable. Write yourself a little test program that delays your input by different amounts and take note of when you finally start noticing a difference. And don't just sit there watching for the delay: consider that you'll be playing a fast-paced game with other things on your mind (like building 20 soldier dudes and managing your mining) besides noticing the delay. People have done this research before and published their results, so there's no point redoing it, but if you really don't believe it, try it yourself before you use your opinion to implement a system incorrectly and with more complexity.

[quote name='ill' timestamp='1298486372' post='4778061']
But you're not accounting for the fact that packets get dropped and arrive late. I would include acks with every packet I send, because a previous ack might have been dropped, or delayed so long that it won't arrive at the other side until it no longer matters.[/quote]

My method handles these two cases the same way yours handles them.

[quote]My method handles the fact that packets get dropped, delayed, or corrupted (and then dropped). Your method only takes travel time into account, and it assumes travel time is constant with no random variation. You are completely ignoring the fact that if packet A is sent 5 ms before packet B on a path where the delay should be 30 ms, packet A won't necessarily arrive 5 ms before packet B; it may arrive at the same time as packet B, and might even make both of them take a bit longer to travel.[/quote]

This is where I think you're having trouble. Routers have a high throughput, much higher than 1000/ping packets per second, I'm pretty sure. I'm still waiting on an answer to that from the board members here, but note what Scourage said: "If you are only sending a packet or two every tick, just send the messages and be done with it. If you're sending hundreds, avoid the overhead and bundle." The client is only sending original input packets plus the regularly scheduled ack packets, which is in the "packet or two every tick" range, not the "hundreds" range. This tells me the overhead isn't as dire as you make it seem at our low send frequency.

[quote]I'm handling many important edge cases you are simply ignoring. The fact that packets get dropped, corrupted, or delayed is not negligible.[/quote]
My method handles these cases just like yours; see the document.

[quote]TCP, on the other hand, never initially knows what the send rate should be. If you give it a lot of data to send, it divides it up and sends the small packets at a very slow rate at first, because it has to account for the possibility that I'm on a 14.4k connection. Then it gradually speeds up the transfer as it sees I'm on a good connection. When the transfer is done and I later send more data, it starts slow all over again; it keeps no memory of previous sends, and it has no way of knowing that internet conditions didn't worsen in the 20 seconds since I last sent something. It's a robust algorithm that works on any system and handles the edge cases. It prevents flooding a connection with more packets than it can handle by implementing flow control and never deviating from it, the way you want to. It doesn't matter whether I'm on a 1 kbps connection or a 16 MB/s connection: it just works. It's also nice and simple, without crazy complexities.[/quote]
I thought you said TCP isn't suitable for games. I'm guessing the point you're trying to make is that TCP is reliable. So is my method: according to the document, things are resent. I know it's a complex document and you probably missed that part, so I'd be happy to walk you through it.

[quote]The article I read on implementing this said RTT is a good metric for the packet send rate. RTT is ping. I've thought about it enough that it makes sense to me as a metric.[/quote]
Show me this article! It may contain the key to your method, the part I'm not understanding; it would answer the throughput = ping question and tell me why we're limiting our throughput to the ping.

[quote]Things have clicked for me, and I know exactly why I do everything the way I do; they haven't clicked for you yet. If you implement it your way, you will start hitting the problems I'm describing above, and as you handle each edge case one by one you'll end up with an implementation very similar to mine anyway. When I first sat down to brainstorm our networking, I had ideas very similar to yours, pretty much exactly what you have in mind now. Then I realized the internet introduces all of these problems and kept modifying my solution until I arrived at this one, which is very robust and will work well.[/quote]
My method handles all the edge cases yours handles; it is the same as your method except for this slight optimization for original input packets.


[quote]Also, you simply CANNOT say that the input delay from queuing up commands and sending them with the next heartbeat will ruin the play experience. An input delay of 30 ms is just not noticeable. Write yourself a little test program that delays your input by different amounts and take note of when you finally start noticing a difference. And don't just sit there watching for the delay: consider that you'll be playing a fast-paced game with other things on your mind (like building 20 soldier dudes and managing your mining) besides noticing the delay. People have done this research before and published their results, so there's no point redoing it, but if you really don't believe it, try it yourself before you use your opinion to implement a system incorrectly and with more complexity.
[/quote]

Just because it's negligible by itself doesn't excuse making inefficient decisions. I take performance very seriously, because these little things add up in the end. If you chop off 30 ms of responsiveness here, 100 ms there, and lower the framerate by 5 fps here and 10 fps there because of inefficient coding and "oh, they won't notice" all over the place, it adds up; and when you introduce really complex things like bloom, which actually *need* the power we're wasting elsewhere, you'll have a slow program. Then you'll start saying, "well, we can't have these other features now because the program just isn't fast enough." That's the issue I take with your liberal attitude toward speed.

I do see your point that it's not worth the time to implement something that only saves a tiny bit of performance, but I believe my method would not take much time to implement; it's just sending packets early in one case.

Also, just because a lot of other people have done it a certain way doesn't mean we should do it that way. That's groupthink, and an "argumentum ad populum" fallacy ([url="http://en.wikipedia.org/wiki/Argumentum_ad_populum"]http://en.wikipedia.org/wiki/Argumentum_ad_populum[/url])

And if they've done their research, show me, and maybe it will answer my question of why we're limiting our throughput to the ping. If you can show me a good reason, I will concede.

[quote name='Verdagon' timestamp='1298493337' post='4778115']
This is where I think you're having trouble. Routers have a high throughput, much higher than 1000/ping packets per second, I'm pretty sure. I'm still waiting on an answer to that from the board members here, but note what Scourage said: "If you are only sending a packet or two every tick, just send the messages and be done with it. If you're sending hundreds, avoid the overhead and bundle." The client is only sending original input packets plus the regularly scheduled ack packets, which is in the "packet or two every tick" range, not the "hundreds" range. This tells me the overhead isn't as dire as you make it seem at our low send frequency.[/quote]

Regarding routers and network inefficiencies: doing a simple ping to a server ~6000 km away, the RTT is ~140 ms. Given the speed of light in copper (about 2c/3), the theoretical minimum RTT is 2 × 6000 / 200,000 s ≈ 60 ms. So the entire overhead of all the routers, switches, the server, the client network stack, and everything else puts us only about 2.3 times above the theoretical limit. Quite amazingly low, and not something worth worrying about.


Each UDP packet carries 28 bytes of IP + UDP headers, so every datagram sent has at least that much overhead. This will be the main source of inefficiency.


There is no reason why the protocol should do anything but batched transport, even if utilization is low. The loop works simply like this:[code]while (...) {
    process_input();           // drain the socket and handle incoming data
    update();                  // advance the simulation one step
    if (can_send()) {          // rate limit / congestion check
        collect_all_output();  // batch everything queued since the last send
        send(output);          // one datagram for the whole batch
    }
}[/code]This way packets are sent ASAP but will batch when needed. To simplify the simulation, the above loop can run at a fixed rate, perhaps 30 Hz; as a consequence, data is sent at most at that rate.

The above can be expanded to handle cases where the output buffer is overflowing (>MTU, >SO_SNDBUF, >some threshold) or where other parameters indicate congestion (increased ping, lost packets, ...).

[quote]I thought you said TCP isn't suitable for games.[/quote]TCP is often unsuitable due to in-order delivery: if fragments ABCD are sent and part of B is lost or corrupted, the application cannot access C and D even though they arrived.

Since TCP is a stream protocol there are no boundaries between fragments, so the protocol doesn't know what makes up 'B' and has to delay everything behind it.

[quote]I take performance very seriously, because these little things add up in the end. If you chop off 30 ms of responsiveness here, 100 ms there, and lower the framerate by 5 fps here and 10 fps there because of inefficient coding and "oh, they won't notice" all over the place, it adds up; and when you introduce really complex things like bloom, which actually *need* the power we're wasting elsewhere, you'll have a slow program. Then you'll start saying, "well, we can't have these other features now because the program just isn't fast enough." That's the issue I take with your liberal attitude toward speed.[/quote]

Meh; I simply have no clue where the performance bottlenecks lie in networking. Write the application, run tests on a real network, profile, gather logs, then analyze them. There are too many variables.

That doesn't mean there isn't a huge body of knowledge on sources of networking issues, but there are far too many to list, and for simple traffic, such as tens of kilobytes per second one-to-one, the implementation won't matter much.

It has also been commonly observed that seemingly counter-intuitive solutions improve real-world performance. TCP contains many hard-learned lessons.

[quote]Also, just because a lot of other people have done it a certain way doesn't mean we should do it that way. That's groupthink, and an "argumentum ad populum" fallacy ([url="http://en.wikipedia.org/wiki/Argumentum_ad_populum"]http://en.wikipedia....ntum_ad_populum[/url])[/quote]
I agree with you.

ill is right; v-guy is wrong. v-guy, you're worrying way too much about these delays, which, you'll find, are inconsequential. Send all your commands at frame time (assuming frame time is something under 200 ms, about the threshold at which a human might notice a time difference), and you'll be glad you did.

You're also not considering that a router doesn't magically send a packet to the other side instantly. A wire can't carry, say, 200 bytes of data in parallel; the router sends a packet down the wire in small chunks of bits. No matter how amazing a router is, it's always limited by the wire. You can google this and read up on it if you're curious. I learned this in my networking class rather than from articles on the internet, so I can't give you references; you'll have to search for this yourself if you're that disbelieving. This is your answer to why even the most amazing routers can't just send packets out instantly. And consider that there are many routers along the way from my computer to the destination, and they all have to do this buffering and forwarding. If we were to play on a LAN, the delays wouldn't add up as much, since there'd be only one router between us and a smaller ping; as a result I'd be sending packets faster and you'd have less apparent input delay. Not that the delay is noticeable to humans anyway, so it doesn't matter.

This serialization also contributes to the delay, and is another reason packets should be as small as possible.
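To put a rough number on it (my own back-of-the-envelope figure, not a measurement): the wire serializes a packet bit by bit, so on a 1 Mbps uplink a 1500-byte packet takes 1500 × 8 / 1,000,000 s = 12 ms just to get onto the wire, before any propagation delay, and every store-and-forward hop pays a smaller version of that cost again.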

Also, my input delay in no way contributes to processing time and can't possibly slow down things like bloom. Our framerate isn't affected by this in any way, so that wasn't a valid argument against my implementation.

The article about ping as a good metric is the same article I keep showing you: [url="http://www.gafferongames.com/networking-for-game-programmers/reliability-and-flow-control"]http://www.gafferong...nd-flow-control[/url]

And my opinion isn't based on groupthink, with everyone just telling me it should be done this way. I actually have a deep understanding of why it should be done this way, and that's why I agree with them. I am in fact providing you evidence of why it should be done this way, but you aren't understanding it, so you keep telling me my evidence isn't good enough.

[quote name='ill' timestamp='1298496695' post='4778143']

This serialization also contributes to the delay, and is another reason packets should be as small as possible.[/quote]This is definitely incorrect. Over a WAN, packets should be sized roughly at the MTU.

While often not possible or desirable, packets of such size would introduce minimal overhead.

The smaller the packet, the bigger the overhead cost and the lower the efficiency.

Efficiency can be measured as payload/(payload + header). With UDP, the minimum overhead is 28 bytes of IP + UDP headers. Sending a 64-byte payload individually, efficiency is at most 64/92 ≈ 70%; packing ten 64-byte payloads into the same packet raises it to 640/668 ≈ 95%.

There is additional overhead at lower layers but there is nothing one can do about it.

A good heuristic will then find a balance between how often to send these batches, minimizing delay while maximizing efficiency. The Nagle algorithm is one such attempt, but its default ~300 ms delay is often too large for interactive games. Although, as an interesting detail, World of Warcraft didn't disable the Nagle algorithm until a year or so ago. So apparently, 300 ms of latency is not a deal breaker.

