stodge

ENet - long delay before server realises client has disconnected?


Is anyone using ENet? I was just wondering if other people experience such long delays between a client disconnecting and the server receiving the disconnect message. My experiment shows it takes at least 5-10s for the server to realise that the client disconnected. Can anyone confirm this? Thanks
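
(For reference, the event the server is waiting on here is ENET_EVENT_TYPE_DISCONNECT coming out of enet_host_service(). A minimal server loop looks roughly like the sketch below; it assumes a 1.3-style enet_host_create() signature, which older releases differ from slightly, and the port is an arbitrary example.)

#include <enet/enet.h>
#include <stdio.h>

int main(void)
{
    if (enet_initialize() != 0)
        return 1;

    ENetAddress address;
    address.host = ENET_HOST_ANY;
    address.port = 1234;                 /* arbitrary example port */

    /* Up to 32 peers, 2 channels, no bandwidth limits (1.3-style call). */
    ENetHost *server = enet_host_create(&address, 32, 2, 0, 0);
    if (server == NULL)
        return 1;

    ENetEvent event;
    for (;;) {
        /* Wait up to 100 ms for network events, then do other per-tick work. */
        while (enet_host_service(server, &event, 100) > 0) {
            switch (event.type) {
            case ENET_EVENT_TYPE_CONNECT:
                printf("peer connected\n");
                break;
            case ENET_EVENT_TYPE_RECEIVE:
                enet_packet_destroy(event.packet);
                break;
            case ENET_EVENT_TYPE_DISCONNECT:
                /* This is the event that only shows up 5-10s after the
                   client dies without saying goodbye. */
                printf("peer disconnected\n");
                break;
            default:
                break;
            }
        }
    }
}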

I don't know what ENet is, but a 5-10s delay sounds reasonable (even fast!).

Unless you're talking about an explicit disconnect (one requested by the user or server) rather than something like a network interruption?

ENet is a high-level reliable UDP networking library. The disconnect was caused by pressing CTRL+C on the client (a Linux process), so I guess it simulates a client application crash. It seems like a long time to detect that a client disconnected.

Thanks

You can add an atexit() handler and a signal handler that send a disconnect message, if you want a very quick disconnect in those cases.
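
Very roughly, something like the sketch below. The g_client / g_server_peer globals are placeholders for whatever your client already has, and note that calling exit() (and therefore ENet functions) from a signal handler isn't strictly async-signal-safe, so a more careful client would just set a flag and finish the shutdown from its main loop.

#include <enet/enet.h>
#include <signal.h>
#include <stdlib.h>

/* Placeholders for the client's host and its connection to the server. */
static ENetHost *g_client;
static ENetPeer *g_server_peer;

static void send_goodbye(void)
{
    if (g_client != NULL && g_server_peer != NULL) {
        /* Queue a disconnect and push it onto the wire immediately;
           we don't wait around for the acknowledgement here. */
        enet_peer_disconnect(g_server_peer, 0);
        enet_host_flush(g_client);
    }
}

static void on_sigint(int sig)
{
    (void)sig;
    exit(0);   /* runs send_goodbye() via atexit(); not strictly signal-safe */
}

/* During client start-up: */
/*   atexit(send_goodbye);      */
/*   signal(SIGINT, on_sigint); */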

You can also send your own ping messages three times a second, and if you haven't gotten one for a second, kick the user on the server side.
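
Server side, that check can be as simple as the sketch below; ClientState and last_ping_time are made-up bookkeeping that you'd update whenever an application-level ping packet arrives, hung off ENet's per-peer data pointer.

#include <enet/enet.h>
#include <stddef.h>

#define PING_TIMEOUT_MS 1000

/* Hypothetical per-peer bookkeeping stored in peer->data. */
typedef struct {
    enet_uint32 last_ping_time;   /* enet_time_get() when the last ping arrived */
} ClientState;

/* Call once per server tick, after servicing the host. */
static void kick_silent_peers(ENetHost *server)
{
    enet_uint32 now = enet_time_get();
    for (size_t i = 0; i < server->peerCount; ++i) {
        ENetPeer *peer = &server->peers[i];
        if (peer->state != ENET_PEER_STATE_CONNECTED)
            continue;
        ClientState *cs = (ClientState *)peer->data;
        if (cs != NULL && now - cs->last_ping_time > PING_TIMEOUT_MS)
            enet_peer_disconnect_now(peer, 0);   /* or enet_peer_reset(peer) */
    }
}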

I don't know if ENet supports tuning the time-out for disconnects, but setting it too low will make the server kick clients that just suffer an internet routing hiccup (which happens occasionally), so 5 seconds seems about right.
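
(For what it's worth, later ENet releases did grow a per-peer knob for this; the one-liner below assumes that newer enet_peer_timeout() call exists in your version.)

/* Assuming ENet 1.3.4 or newer: allow fewer retries and give up on an
   unresponsive peer after roughly 2-4 seconds instead of the defaults. */
enet_peer_timeout(peer, 8, 2000, 4000);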

Quick question...

I liked hplus's idea of sending a message in atexit()... however, wouldn't this fail at least some of the time? That is, since the program sends the packet and then immediately closes, we don't have time to receive an ack, so the peer may never get the message?

Maybe have it wait one ping time (in ms) before exiting? Or do you think that would be very noticeable and annoying? Even then it still wouldn't be 100% reliable.

Graveyard: yes, that would be a possible problem.

The reason TCP can USUALLY get quick disconnects is that the kernel stays around after the process exits, and can send the RST for you. If you want to emulate this in your UDP-based protocol, as you say, you have to hang around until you get an ack, which means delaying in both atexit() and your signal (or structured exception) handler.
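
In ENet terms, that "hang around for the ack" shutdown looks roughly like the sketch below, where client is the client-side ENetHost and peer its connection to the server:

ENetEvent event;
int acknowledged = 0;

enet_peer_disconnect(peer, 0);

/* Give the server up to about a second to acknowledge the disconnect. */
while (!acknowledged && enet_host_service(client, &event, 1000) > 0) {
    switch (event.type) {
    case ENET_EVENT_TYPE_RECEIVE:
        enet_packet_destroy(event.packet);   /* drain stray packets */
        break;
    case ENET_EVENT_TYPE_DISCONNECT:
        acknowledged = 1;                    /* server saw us leave cleanly */
        break;
    default:
        break;
    }
}

if (!acknowledged)
    enet_peer_reset(peer);   /* give up and drop the connection locally */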

I wouldn't worry about it. 5 seconds is quick.
