Latency affecting frame rate

7 comments, last by hplus0603 12 years, 9 months ago
Hello

When I test my 3d game over a 170ms latency connection, frame rate goes down to unplayable levels.

Why would latency even affect frame rate? It's supposed to affect only the response time for the first packet. Somehow, latency affects every packet of the stream separately.

Maybe it's because I'm using TCP. I disabled Nagle's algorithm and set the Winsock send buffer to 0, but still, each packet gets delayed.

Thanks
How are you doing sends/receives in your client? Can you show some code?

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Client recv()s: using the Windows event system (an async socket via WSAAsyncSelect)

Client send()s: the client samples the keyboard 30 times a second and send()s commands
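
Roughly, the send side works like this (a simplified sketch; SampleInputAndPack is a placeholder name, not the actual code):

// Called 30 times a second from the game loop's timer.
void NetworkTick()
{
    char cmd[16];
    int len = SampleInputAndPack(cmd, sizeof(cmd)); // current key state, ~15 bytes
    send(Socket, cmd, len, 0); // socket is non-blocking (WSAAsyncSelect mode)
}
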
Latency doesn't just affect one packet and then disappear; I'm not sure where you got that idea.

Every packet and its corresponding acknowledgement from the remote end is going to take time; if that time increases (latency), the corresponding round trip will be longer. If you force your socket to wait for acknowledgements before sending its next round of packets (i.e. you're using TCP), then of course your entire conversation is going to be slower.

As to why this affects your framerate, that's probably just a mistake in your design, e.g. using blocking sockets. Since you really didn't offer any information on what you're doing, I can only speculate.

Please note that the solution is not "stop using TCP." The solution is "stop coupling your framerate to your socket throughput."
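
To illustrate what I mean by decoupling: keep an application-side queue and flush it with non-blocking sends, so a slow connection never stalls a frame. A rough sketch (QueueMessage and TryFlush are made-up names, not any particular library's API):

#include <winsock2.h>
#include <vector>

std::vector<char> outQueue; // application-side send queue

// The game calls this whenever it has data to send; it never blocks.
void QueueMessage(const char* data, int len)
{
    outQueue.insert(outQueue.end(), data, data + len);
}

// Called once per frame: push as much queued data as the stack will take.
void TryFlush(SOCKET s)
{
    while (!outQueue.empty())
    {
        int sent = send(s, outQueue.data(), (int)outQueue.size(), 0);
        if (sent == SOCKET_ERROR)
        {
            if (WSAGetLastError() == WSAEWOULDBLOCK)
                return; // kernel buffer is full; retry next frame, keep rendering
            return;     // a real client would handle the disconnect here
        }
        outQueue.erase(outQueue.begin(), outQueue.begin() + sent);
    }
}

With that arrangement the frame rate depends only on simulation and rendering; the network queue drains at whatever rate the connection allows.
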

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

OK, I'm not talking about framerate, I'm talking about network update ticks. It should be possible to get 30 ticks/second over TCP even on a high-latency link, right? Provided you disable Nagle's algorithm and the send buffer, that is. But I'm getting 6 ticks per second. Why is the server waiting for an ACK before sending another packet (I'm guessing that this is the problem)?
I did some traffic capturing with Wireshark, and it seems that my ~15-byte data packets are getting aggregated into ~80-byte TCP segments sent ~5 times/second, even though I disabled Nagle's algorithm and set the send buffer to 0.
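
Actually, doing the math, those numbers line up with the RTT suspiciously well: 1000 ms / 170 ms ≈ 5.9, which is almost exactly the ~5 packets per second in the capture, and five or six 15-byte commands per round trip comes to 75-90 bytes, about the size of the segments I'm seeing. It's as if only one packet is ever in flight per round trip.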

I'm really stumped by this. Here's how I disabled Nagle's algorithm and the send buffer:

#include <winsock2.h>

// (WSAStartup has already been called earlier.)

// Create the TCP socket and register for async notifications on my window
// (WM_SOCKET is a custom message ID).
SOCKET Socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
WSAAsyncSelect(Socket, hwnd, WM_SOCKET, (FD_CLOSE | FD_READ));

// Disable Nagle's algorithm so small sends aren't coalesced.
int flag = 1;
setsockopt(Socket, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));

// Set the Winsock send buffer size to zero.
int sendbuff = 0;
setsockopt(Socket, SOL_SOCKET, SO_SNDBUF, (char *) &sendbuff, sizeof(int));

...

connect(Socket, (LPSOCKADDR) &SockAddr, sizeof(SockAddr));

Is anything wrong with this code? Why is TCP still aggregating my data packets instead of sending them off immediately?
WOW!!!! I commented out the code that sets the send buffer to zero and that fixed it, well, kinda. It seems that zeroing it was a bad idea. And Nagle's algorithm on or off doesn't seem to have much impact on performance, weird as that is. So much for theory vs. practice. Weird stuff. THANKS
This article should explain things nicely.


Also, just a reminder, but you already had a perfectly good thread for this discussion; in the future please either start a new thread or continue the old one, rather than splitting the discussion across multiple threads. That just makes it inconvenient to remember what's going on :)

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]


Quote: "Maybe it's because I'm using TCP. I disabled Nagle's algorithm and set the Winsock send buffer to 0, but still, each packet gets delayed."


Didn't you already ask this question? Let's keep discussing it in that thread.
Btw: setting the send buffer size to 0 is never the right thing to do. With no send buffer, the stack can't accept new data until what's already in flight has been acknowledged, so you get roughly one send per round trip: 1000 ms / 170 ms ≈ 6 per second, which is exactly the tick rate you measured.
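
Leave it at the OS default, or make it bigger if you ever send in bulk, and keep TCP_NODELAY if you want small messages to go out promptly. Something like this (the 64 KB figure is arbitrary, just an example):

// Keep Nagle's algorithm off so small command packets go out immediately.
int flag = 1;
setsockopt(Socket, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));

// Give the stack a real send buffer so send() completes right away and the
// connection can keep multiple packets in flight across the round trip.
int sendbuff = 64 * 1024; // example size; the default is usually fine too
setsockopt(Socket, SOL_SOCKET, SO_SNDBUF, (char *) &sendbuff, sizeof(int));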
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
