
Latency affecting frame rate

Recommended Posts

uri8700    100
Hello

When I test my 3d game over a 170ms latency connection, frame rate goes down to unplayable levels.

Why would latency even affect frame rate? It's supposed to affect only the response time for the first packet. Somehow, latency affects every packet of the stream separately.

Maybe it's because I'm using TCP. I disabled Nagle's and set the winsock send buffer to 0, but still, each packet gets delayed.

Thanks

uri8700    100
Client recv()s: Using windows event system (async socket)

Client send()s: Client samples keyboard 30 times a second and send()s commands

ApochPiQ    23000
Latency doesn't just affect one packet and then disappear; I'm not sure where you got this idea.

Every packet and its corresponding acknowledgement from the remote end is going to take time; if that time increases (latency) the corresponding round trip will be longer. If you force your socket to wait for acknowledgements before sending its next rounds of packets (i.e. you're using TCP) then of course your entire conversation is going to be slower.

As to why this affects your [i]framerate,[/i] that's probably just a mistake in your design, e.g. using blocking sockets. Since you really didn't offer any information on what you're doing, I can only speculate.

Please note that the solution is not "stop using TCP." The solution is "stop coupling your framerate to your socket throughput."

uri8700    100
OK, I'm not talking about framerate, I'm talking about network update ticks. It should be possible to get 30 ticks/second using TCP even on a high-latency link, right? If you disable Nagle's and the send buffer, that is. But I'm getting 6 ticks per second. Why is the server waiting for an ack before sending another packet (I'm guessing that this is the problem)?

uri8700    100
I did some traffic capturing with Wireshark, and it seems that my ~15 byte data packets are getting aggregated to ~80 byte TCP packets and sent ~5 times/second even though I disabled Nagle's and set the send buffer to 0.

I'm really stumped by this. I disabled Nagle's and the send buffer by doing:

// Create a TCP socket and register for asynchronous
// window-message notifications on read/close events.
Socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
WSAAsyncSelect(Socket, hwnd, WM_SOCKET, (FD_CLOSE | FD_READ));

// Disable Nagle's algorithm so small writes are sent immediately
// instead of being coalesced.
int flag = 1;
setsockopt(Socket, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(flag));

// Set the socket send buffer to zero.
int sendbuff = 0;
setsockopt(Socket, SOL_SOCKET, SO_SNDBUF, (char *) &sendbuff, sizeof(sendbuff));

...

connect(Socket, (LPSOCKADDR) &SockAddr, sizeof(SockAddr));

Is anything wrong with this code? Why is TCP still aggregating my data packets instead of sending them off immediately?

uri8700    100
WOW!!!! I commented out the code that sets the send buffer to zero and that fixed it, well, kind of. It seems that was a bad idea. And Nagle's on or off doesn't seem to have much impact on performance, weird as it seems. So much for theory vs. practice. Weird stuff. THANKS

ApochPiQ    23000
[url="http://support.microsoft.com/kb/214397"]This article[/url] should explain things nicely.


Also, just a reminder, but you already had a perfectly good thread for this discussion; in the future please elect to either post new threads [i]or[/i] continue old ones, rather than trying to split discussion over multiple threads. That just makes it inconvenient to remember what's going on [img]http://public.gamedev.net/public/style_emoticons/default/smile.gif[/img]

hplus0603    11347
[quote name='_TL_' timestamp='1311296232' post='4838719']
Maybe it's because I'm using TCP. I disabled Nagle's and set the winsock send buffer to 0, but still, each packet gets delayed.
[/quote]

Didn't you already ask this question? Let's keep discussing it in that thread.
Btw: setting the send buffer size to 0 is never the right thing to do.

Guest
This topic is now closed to further replies.