
UDP protocol to minimize latency?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

12 replies to this topic

#1 lerno   Members   -  Reputation: 209


Posted 27 July 2013 - 02:16 PM

This is sort of a follow-up to this question: http://www.gamedev.net/topic/645872-responsive-mobile-multiplayer-udp-or-tcp/

 

Experimental results

 

After a lot of testing I determined that the bad 3G latency I initially got for UDP on my phone was due to the phone's power saving.

 

In particular, I was seeing roundtrips of 2000 ms for UDP over 3G when sending 1 message every 2 seconds. When I instead sent a new message within 50 ms of receiving the return message, that number would dive to a steady 80 ms, with occasional hiccups of 400 ms.

 

TCP got around 120-200 ms with the same setup, but when it encountered packet loss, the roundtrip would hit 2000-4000 ms!

 

This confirms that there is indeed serious stuttering with TCP that would be extremely hard to cover with animations.

 

Looking for a good reliable UDP protocol

 

I've been looking at various existing libraries. Most of them are way too high level - I prefer to write all the serialization myself, thank you.

 

But even for products like ENet, the strategy seems to be to push a TCP-like reliability layer on top of UDP, and I don't think I want that.

 

What about something like this?

 

I remember reading that one of those early LucasArts space combat games (X-Wing vs. TIE Fighter?) used an extremely simple scheme - basically sending packet n together with packet n + 1, so that only two consecutive packet losses would actually lose data.

 

Our particular game doesn't have much action. In fact, as I described in the other post, it's about 80 packets sent per player in total.

 

In order to keep the phone awake, though, one would have to send pings, and send them often.

 

Assuming we send pings every 100 ms, we could imagine simply piggybacking un-acked messages until they're acked. It could look something like this:

 

1. Client sends action request with message n

2. Client sends ping with message n + 1, and adds message n

3. Client sends ping with message n + 2, and adds ping n + 1, message n

4. Client receives pong from server, ack-ing up to n + 1

5. Client sends ping with message n + 3, and adds ping n + 2

 

etc.

 

We could then prune old pings from the resends, so we only resend actions - and vice versa for the server, obviously.
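The piggybacking scheme in the numbered steps above could be sketched roughly as follows. This is an illustrative sketch, not any real library's API; the class name, wire format, and cumulative-ack convention are all assumptions.

```python
import struct

class RedundantSender:
    """Sketch of the piggyback scheme: every outgoing datagram carries
    all messages the peer has not yet acknowledged, so a lost datagram
    is repaired by the very next one rather than by a timeout."""

    def __init__(self):
        self.next_seq = 0
        self.unacked = {}  # seq -> payload bytes

    def queue(self, payload: bytes) -> int:
        """Register a new message (action or ping) for sending."""
        seq = self.next_seq
        self.next_seq += 1
        self.unacked[seq] = payload
        return seq

    def build_datagram(self) -> bytes:
        """Datagram = message count, then (seq, length, payload) for
        every still-unacked message, oldest first."""
        parts = [struct.pack("!H", len(self.unacked))]
        for seq in sorted(self.unacked):
            payload = self.unacked[seq]
            parts.append(struct.pack("!IH", seq, len(payload)) + payload)
        return b"".join(parts)

    def handle_ack(self, acked_up_to: int) -> None:
        """Peer acks cumulatively: prune everything at or below it."""
        self.unacked = {s: p for s, p in self.unacked.items()
                        if s > acked_up_to}
```

Following the steps above: after queueing message n and two pings, a datagram carries all three; once the server's pong acks up to n + 1, only the newest message is still resent.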

 

This looks like it could recover from packet loss much faster than any resend mechanism that relies on requesting missing packets, like TCP does.

 

The biggest worry I have is that I'd move from TCP, with its very conservative bandwidth requirements, to something that constantly bombards the server with packets.
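That bandwidth worry can be bounded with a quick back-of-envelope calculation. All the numbers below (message size, send rate, RTT) are assumptions for illustration, not measurements from this thread.

```python
MSG_BYTES = 40        # assumed message payload: "a few tens of bytes"
HEADER_BYTES = 28     # IPv4 (20) + UDP (8) header overhead
SEND_RATE_HZ = 10     # one ping every 100 ms
RTT_S = 0.4           # pessimistic 3G round trip (the "hiccup" case)

# A message stays unacked for roughly one RTT, so it gets repeated in
# about RTT * SEND_RATE_HZ consecutive datagrams before being pruned.
repeats = round(RTT_S * SEND_RATE_HZ)          # copies of each message
datagram = HEADER_BYTES + repeats * MSG_BYTES  # bytes per datagram
bytes_per_sec = datagram * SEND_RATE_HZ

print(repeats, datagram, bytes_per_sec)  # 4 188 1880
```

Under these assumptions even the pessimistic case stays under 2 KB/s per client, so "bombards" is relative - it is chatty in packet count, not in bytes.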




#2 KnolanCross   Members   -  Reputation: 1291


Posted 27 July 2013 - 03:08 PM

Hmm, interesting, but can you give us more details on how the tests were run?

- How did you deal with packet loss in the UDP case?

- How did you test the latency?

- If I got it right, you tested over your phone's 3G connection, but did you monitor how the network conditions were varying? You could just have hit an unlucky spike with TCP.

 

I am saying this because I found the results very odd. There is no sorcery in a TCP implementation versus UDP (at least none I am aware of); over UDP you can just use a better retransmission protocol (if TCP loses the first packet of a 100-packet window, it will hold up all 100 packets), and you can use the data you receive right away (instead of waiting for retransmission - if you can interpolate the lost data, such as movement, you don't even have to ask for a retransmission).

 

Thanks for taking the time to share the results - hope to hear more about the methods.


Currently working on a scene editor for ORX (http://orx-project.org), using kivy (http://kivy.org).


#3 3TATUK2   Members   -  Reputation: 730


Posted 27 July 2013 - 03:10 PM

enet works nicely for me... the reliability is an optional flag, so you're not required to use it



#4 lerno   Members   -  Reputation: 209


Posted 27 July 2013 - 03:18 PM

- This was just a simple ping test, so a UDP packet loss (in either direction) was simply logged as such.

 

- I set up echo servers for both UDP and TCP. Running a client on the same computer but targeting the wifi router's external IP showed consistent < 5 ms, so anything above that I ascribe to either the phone itself or the 3G connection.

 

- I just tried it again before writing the post. Same behaviour and same times (although admittedly, I only got to a latency of about 1800 ms with TCP this time)

 

(I hope it's clear that when I'm talking about my TCP results, I'm talking about actual TCP, and not some reliable UDP)



#5 lerno   Members   -  Reputation: 209


Posted 27 July 2013 - 03:20 PM

In regards to ENet, the question is what I would actually gain by using it, especially with the servers running Java - which doesn't seem to have a full-fledged ENet port.

 

It would seem like I'd be better off implementing my own library by copying some suitable proven protocol.



#6 Cornstalks   Crossbones+   -  Reputation: 6989


Posted 27 July 2013 - 06:47 PM

I remember reading that one of those early LucasArts space combat games (X-Wing vs. TIE Fighter?) used an extremely simple scheme - basically sending packet n together with packet n + 1, so that only two consecutive packet losses would actually lose data.

One challenge with this method is that when you lose packets, it's typically in clusters. That's not to say you'll never lose just one, but if you're going to miss packets, they're more likely to be lost in clusters. Of course, this also depends on how frequently you're sending them.
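Bursty loss like this is commonly modeled with a two-state Gilbert-Elliott channel: a "good" state that delivers everything and a "bad" state that drops packets. A quick simulation (the transition probabilities are illustrative assumptions, not measured 3G values) shows why repeating each packet just once may not be enough.

```python
import random

def gilbert_elliott(n, p_enter=0.01, p_exit=0.25, seed=1):
    """Simulate n packet transmissions through a two-state channel.
    p_enter: chance of slipping from 'good' into the lossy 'bad' state;
    p_exit:  chance of leaving 'bad' again, giving a mean burst length
    of 1 / p_exit lost packets. Returns a list of booleans (True = lost)."""
    rng = random.Random(seed)
    lost, bad = [], False
    for _ in range(n):
        if not bad:
            bad = rng.random() < p_enter
        else:
            bad = rng.random() >= p_exit
        lost.append(bad)
    return lost

losses = gilbert_elliott(10_000)
# With p_exit = 0.25 the mean burst is ~4 consecutive losses, so a
# scheme that only repeats packet n in packet n + 1 still loses data
# whenever a burst spans more than two datagrams.
```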


[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

#7 lerno   Members   -  Reputation: 209


Posted 28 July 2013 - 06:18 AM

 

I remember reading that one of those early LucasArts space combat games (X-Wing vs. TIE Fighter?) used an extremely simple scheme - basically sending packet n together with packet n + 1, so that only two consecutive packet losses would actually lose data.

One challenge with this method is that when you lose packets, it's typically in clusters. That's not to say you'll never lose just one, but if you're going to miss packets, they're more likely to be lost in clusters. Of course, this also depends on how frequently you're sending them.

 

I'm likely to end up sending data 5-10 times a second (mainly ping packets) if I use this method. How long can I expect the loss bursts to last?



#8 Xanather   Members   -  Reputation: 708


Posted 30 July 2013 - 08:14 AM

Just want to put this out there: I am developing a client-server 2D TCP networked game, and I tried connecting to my game server (hosted at my home) from my laptop through my phone's WiFi hotspot (I was fairly far out in the country at the time as well). That build had fluid animation and compressed data - it ended up running pretty well, with latency within 200 ms, and this is a game that was not developed for playing over a phone connection.

 

It would be pointless to use UDP if you're just going to reimplement TCP. If some packets can be dropped, and/or you want custom error handling different from what TCP offers, then UDP makes more sense; otherwise, in my opinion, TCP saves time and is well optimized.



#9 Cornstalks   Crossbones+   -  Reputation: 6989


Posted 30 July 2013 - 08:28 AM

I'm likely to end up sending data 5-10 times a second (mainly ping packets) if I use this method. How long can I expect the loss bursts to last?

To be honest, I'm not sure what some realistic numbers are. A lot of it depends on the network. hplus might know better.


[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

#10 hplus0603   Moderators   -  Reputation: 5303


Posted 30 July 2013 - 09:44 AM

I agree: it depends on the network! It's probably better to design robust re-connection mechanics. If you can't get a message through for a second, that may start to severely impact gameplay, and sometimes when an IP address changes its route through the Internet, it may take a minute to come back.


enum Bool { True, False, FileNotFound };

#11 lerno   Members   -  Reputation: 209


Posted 30 July 2013 - 10:18 AM

Just want to put this out there: I am developing a client-server 2D TCP networked game, and I tried connecting to my game server (hosted at my home) from my laptop through my phone's WiFi hotspot (I was fairly far out in the country at the time as well). That build had fluid animation and compressed data - it ended up running pretty well, with latency within 200 ms, and this is a game that was not developed for playing over a phone connection.

 

It would be pointless to use UDP if you're just going to reimplement TCP. If some packets can be dropped, and/or you want custom error handling different from what TCP offers, then UDP makes more sense; otherwise, in my opinion, TCP saves time and is well optimized.

 

Well, for my gameplay I think the lag TCP shows when moving is annoying enough to detract from the game. The opportunities in my case for predictively animating responses are minimal, so I'm willing to try anything that can reduce the latency.

 

I agree that it's meaningless to implement reliable UDP for a game like this. However, given that my packets are typically on the order of a few tens of bytes, I have more options with UDP. More specifically, I can keep resending all the non-acked data until I receive an ack for it. That way I don't need any ack timeouts (which is what TCP and many "reliable UDP" solutions use).



#12 snacktime   Members   -  Reputation: 292


Posted 30 July 2013 - 10:34 PM

I prefer not having ordering at the protocol layer at all, and letting individual systems send requests again if they need extra reliability. Reliable, ordered messaging over unreliable networks just ain't going to happen, and trying to force a square peg into a round hole doesn't work. I've seen a lot of brittle, unmanageable code because devs try to create architectures that rely on ordered messaging.

 

Be wary of larger messages crossing packet boundaries, which is an issue with UDP.

 

As for serialization, I'm firmly convinced that bit-level protocols should be a thing of the past. There is very little to gain from operating at that level, if anything. I've been using protocol buffers with good success. They are sparse (no schema data in the message), performant, and cross-platform.



#13 hplus0603   Moderators   -  Reputation: 5303


Posted 30 July 2013 - 10:47 PM

If you don't have some way to support reliable-in-order delivery where it's really needed, then each dev will try to build it alone in each application system, which is a much worse place to put it than in the network layer proper.


enum Bool { True, False, FileNotFound };



