ENet - long delay before server realises client has disconnected?

Is anyone using ENet? I was just wondering whether other people see such a long delay between a client disconnecting and the server receiving the disconnect notification. In my experiments it takes at least 5-10 seconds for the server to realise that the client has disconnected. Can anyone confirm this? Thanks
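
For reference, the server only learns about the disconnect when enet_host_service() hands back an ENET_EVENT_TYPE_DISCONNECT event. A minimal sketch of the kind of service loop involved (the names here are illustrative):

#include <enet/enet.h>
#include <stdio.h>

/* The server only learns of a disconnect when enet_host_service() returns
   an ENET_EVENT_TYPE_DISCONNECT event, either because the client sent an
   explicit disconnect or because ENet's internal timeout expired. */
static void serve(ENetHost *server)
{
    ENetEvent event;

    while (enet_host_service(server, &event, 1000) >= 0) {
        switch (event.type) {
        case ENET_EVENT_TYPE_CONNECT:
            printf("client connected\n");
            break;
        case ENET_EVENT_TYPE_RECEIVE:
            enet_packet_destroy(event.packet);
            break;
        case ENET_EVENT_TYPE_DISCONNECT:
            printf("client disconnected\n"); /* this is what arrives late */
            break;
        default:
            break;
        }
    }
}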

I don't know what ENet is, but a 5-10s delay sounds reasonable (even fast!).

Unless you're talking about an explicit disconnect (one requested by the user or the server) rather than something like a network interruption?

ENet is a high-level reliable UDP networking library. The disconnect was caused by pressing CTRL+C on the client (a Linux process), so I guess it simulates a client application crash: the client never gets a chance to send a disconnect message. It still seems like a long time to detect that a client disconnected.

Thanks

You can add an atexit() handler, and a signal handler, that send a disconnect message, if you want a very quick disconnect in those cases.
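
A minimal sketch of that, assuming the client keeps its host and peer in globals (the g_client/g_server names are made up; enet_peer_disconnect_now() sends the notification immediately, without waiting for an acknowledgement):

#include <enet/enet.h>
#include <signal.h>
#include <stdlib.h>

/* Hypothetical globals, set up when the client connects. */
static ENetHost *g_client;
static ENetPeer *g_server;

/* atexit() hook: push a disconnect notification onto the wire before the
   process goes away. enet_peer_disconnect_now() sends it immediately and
   unreliably, with no wait for an acknowledgement. */
static void send_goodbye(void)
{
    if (g_client != NULL && g_server != NULL)
        enet_peer_disconnect_now(g_server, 0);
}

/* SIGINT handler: route CTRL+C through exit() so the atexit() hook runs.
   Strictly, exit() is not async-signal-safe; a careful client would set a
   flag here and exit cleanly from its main loop instead. */
static void on_sigint(int sig)
{
    (void)sig;
    exit(0);
}

int main(void)
{
    /* ... enet_initialize(), create g_client, connect g_server ... */
    atexit(send_goodbye);
    signal(SIGINT, on_sigint);
    /* ... normal client loop ... */
    return 0;
}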

You can also send your own ping messages three times a second, and if the server hasn't received one for a full second, kick the user on the server side.
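
Something like this on the server side, assuming the RECEIVE handler stashes enet_time_get() into each peer's user-data pointer whenever a ping arrives (that use of peer->data, and the one-second threshold, are just choices for this sketch):

#include <enet/enet.h>
#include <stdint.h>

#define PING_TIMEOUT_MS 1000

/* Heartbeat check, run once per server tick: kick any connected peer whose
   last application-level ping is more than a second old. */
static void kick_stale_peers(ENetHost *server)
{
    enet_uint32 now = enet_time_get();
    size_t i;

    for (i = 0; i < server->peerCount; ++i) {
        ENetPeer *peer = &server->peers[i];
        enet_uint32 last_ping;

        if (peer->state != ENET_PEER_STATE_CONNECTED)
            continue;
        last_ping = (enet_uint32)(uintptr_t)peer->data;
        if (now - last_ping > PING_TIMEOUT_MS)
            enet_peer_disconnect(peer, 0); /* silent for a second: kick */
    }
}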

I don't know if ENet supports tuning the time-out for disconnects, but setting it too low will make the server kick clients that merely suffer an internet routing hiccup (which happens occasionally), so 5 seconds seems about right.
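
For what it's worth, newer ENet releases do expose this knob; a sketch, assuming a version that has enet_peer_timeout() (values are in milliseconds, and the library defaults are roughly where the 5-10 second behaviour above comes from):

/* Per-peer timeout tuning in newer ENet releases. */
enet_peer_timeout(peer,
                  ENET_PEER_TIMEOUT_LIMIT,    /* default 32 (RTT multiplier) */
                  ENET_PEER_TIMEOUT_MINIMUM,  /* default 5000 ms */
                  ENET_PEER_TIMEOUT_MAXIMUM); /* default 30000 ms */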

Quick question...

I liked hplus's idea of sending a message from atexit(). However, wouldn't this fail at least some of the time? That is, since the program sends the packet and then immediately closes, it has no time to receive an ack, so the peer may never get the message.

Maybe have it wait one round-trip time (the ping, in ms) before exiting? Or do you think that would be very noticeable and annoying? Even then it still wouldn't be 100% reliable.

Graveyard: yes, that would be a possible problem.

The reason TCP can usually get quick disconnects is that the kernel stays around after the process exits and can send the RST for you. If you want to emulate this in your UDP-based protocol like you say, you have to hang around until you get an ack, which means delaying in both the atexit() handler and your signal (or structured exception) handler.
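
Something like this, sketched with a three-second budget (the budget is an arbitrary choice):

#include <enet/enet.h>

/* Emulating TCP's lingering close: ask for a disconnect, then keep
   servicing the host so the disconnect handshake can complete. */
static void disconnect_gracefully(ENetHost *client, ENetPeer *peer)
{
    ENetEvent event;

    enet_peer_disconnect(peer, 0);

    while (enet_host_service(client, &event, 3000) > 0) {
        switch (event.type) {
        case ENET_EVENT_TYPE_RECEIVE:
            enet_packet_destroy(event.packet); /* drain stray packets */
            break;
        case ENET_EVENT_TYPE_DISCONNECT:
            return; /* the server acknowledged; clean exit */
        default:
            break;
        }
    }

    enet_peer_reset(peer); /* no acknowledgement; drop the peer hard */
}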

I wouldn't worry about it. 5 seconds is quick.
