Blocking, Non-Blocking, or Async sockets for my server ?


Well, I'm asking myself what kind of socket I need for the server side of my online RPG. The GameDev tutorial titled "Winsock 2 for Games" says that blocking sockets should be used on the server side "since they are the most logical, simple, and practical when all you want to do is wait for a connection and then communicate with the client". BUT in that tutorial (it's a rock-paper-scissors example) the server WAITS (listens) for all players to connect BEFORE processing the game! In my own game, players should be able to connect to and disconnect from the server without affecting the game state or the server's processing.

So my questions are: would blocking sockets be good for my kind of server or not? If yes, do I have to make a socket listen in the same loop as the other sockets? If not, what kind of socket should I use: blocking, non-blocking, or asynchronous?

------
GameDev'er 4 ever.

I think asynchronous sockets are the most efficient.

I have chosen non-blocking sockets for my first multiplayer game, though, to keep it simple. I think THEY are the most logical. I use only one socket for all traffic; the IP address shows where each packet came from. Of course this means polling the socket, but hey... that's only one socket per frame to poll. I think that's acceptable.

When you're using a single socket (the typical UDP situation), you don't have any problems with scaling, so polling every frame should be perfectly fine. Non-blocking is probably best, considering that its implementation is really simple.

The only potential problem (when you have _lots_ of clients) is that the OS buffers could run out. I'm not sure about the implementation, but the maximum backlog of packets might be set on a per-socket (rather than a system-wide) basis. This is of course OS-dependent, but it could cause unnecessary packet drops.

In that case (and mind you, I'm not sure whether that is really true), you might consider writing a networking thread which simply recvfrom()s on the socket and copies all packets into a FIFO queue which is then read by the main thread. That way, you determine the size of the backlog yourself.

However, that's probably unnecessary unless you're running at really slow framerates, or you've got hundreds of clients.

cu,
Prefect

quote:
Original post by granat
I think Asynchronous sockets are most efficient.

Yes, I think they are.

BUT is that right for a server application? Should I really program a Win32 SERVER just for asynchronous sockets?

(NB: I'm using TCP)


------
GameDev'er 4 ever.

Edited by - Khelz on February 4, 2002 10:31:11 AM

Does this mean that blocking sockets aren't scalable?
I'm working on a server/client right now and I'm using blocking sockets. I have a FIFO buffer for each client, with a thread that does nothing but read data into that buffer. Should I try to switch to non-blocking sockets? Would I experience problems with > 1000 clients? > 5000?

Jason Mickela
ICQ : 873518
E-Mail: jmickela@sbcglobal.net


Please excuse my spelling


-"I'm Cloister the Stupid" Lister (Red Dwarf)

quote:
Original post by griffenjam
Would I experience problems with > 1000 clients? > 5000?

If 5000 clients means 5000 threads, then yes, I believe you will experience problems... but I haven't tried it, and I'm certainly no expert... Actually... don't listen to me...



"There are always casualties in war, gentlemen, otherwise it wouldn't be war. It would be a rather large argument with a lot of pushing and shoving." - Rimmer (Red Dwarf)

PS! Quote as remembered.




Edited by - granat on February 5, 2002 6:08:21 AM

Heh, you should _never_ use one thread per client, especially not for games or other applications where the different client<->server connections heavily share data (an IRC server would count as such an app as well; a web server wouldn't).

The main problem with scalability is, AFAICT, the number of sockets you use. The select()-style programming model doesn't scale when you've got lots of file descriptors (sockets) to select on, simply because a huge array needs to be traversed on every call. This problem can be solved with more advanced I/O methods.

On the other hand, games typically use UDP for their communication, which means that the server needs only one single socket... I can hardly imagine a situation where you'd need a more advanced sockets API in order to scale better with a _single_ socket.

cu,
Prefect

Prefect:
quote:
On the other hand, games typically use UDP for their communication, which means that the server needs only one single socket... I can hardly imagine a situation where you'd need a more advanced sockets API in order to scale better with a _single_ socket.

Even though you may only use one socket, you are still servicing multiple clients. What happens if you have a multi-processor system and use only one thread to service that socket? Your application isn't working at top efficiency. A better way would be to post multiple WSARecv calls on the socket and have different threads service those calls via I/O completion ports (you'd think I love those things or something, wouldn't you?). That way you maximize processor usage and can service multiple incoming client requests at once.

Disclaimer: I haven't actually tried that with UDP (only TCP, and hence multiple sockets). I know you can post multiple overlapped WSARecv calls on a single socket; it works for TCP, so it should work the same for UDP. If anything, UDP should be easier because of its discrete messages (non-streaming data).

Dire Wolf
www.digitalfiends.com

I read a report recently about server performance using the various types of socket management: one blocking socket per thread, non-blocking with select(), message notification, callback events, and I/O completion ports. I cannot post the report because it's an internal MS document, but if I can dig up the paper again I will post some of the pertinent info...

While we're on the subject, I have a question of my own.

I recently wrote a simple "game" server which creates two threads: one to do the networking and the other to control the gameplay. The two threads communicate via message queues.

The network thread does this:
  check the socket for client messages
  process the messages, pushing anything relevant to gameplay onto the gameplay message queue
  check the outgoing queue for gamestate updates from the gameplay thread and broadcast them to the clients

While the gameplay thread does this:
  check its queue for messages from the network thread
  update the gamestate according to those messages; move bots around
  if it's time to do so, push a gamestate update onto the outgoing queue for the network thread to broadcast

I did this because I wanted to separate the blocking network sockets from the gameplay (for aesthetic reasons, and to keep the gameplay running independently of the network code).

The problem is that the network thread blocks in recvfrom() waiting for client updates, so gamestate updates can't be broadcast to the clients until recvfrom() returns.

I thought of a few possible ways around this. The first is to make the socket non-blocking, in which case I may as well just put the whole thing in a single thread.

The next is to put the broadcast code in the gameplay thread, which means I need two sockets and network code in the gameplay thread, which I was trying to avoid.

And lastly, I thought I could make the gameplay a child process of the network process and have them communicate via a pipe. That way the network process can select() on both the network socket and the pipe.

Does anyone have any comments or other ideas? I would very much appreciate them.

Number one: what platform are you programming on?

Number two: if you have two threads, why not THREE? Wouldn't that be easier and better than integrating the networking into your gamestate thread?

quote:
Original post by sQuid
The problem is that the network thread blocks in recvfrom() waiting for client updates, so gamestate updates can't be broadcast to the clients until recvfrom() returns.

OK, here's my 2 cents. Depending on which platform you're on, you could do these things. On *nix, you can use a POSIX semaphore or mutex to lock your outbound queue. The calls that create these things give you a file descriptor which you can pass to select(). Then select() will wake up when there is data available on your outbound queue (mutex released), because the read state on the descriptor will be set. I've implemented that and I know that it works.

On Win*, things look different, because the use of select() isn't encouraged and you don't have the descriptors. What I've thought about would be to use the WSAWaitForMultipleEvents() function to control your networking thread, and have your locking mechanism create/set an event that is caught by the WSA* function. That way you get woken up as well and the outbound queue will be processed.

Does this make sense? I haven't tried it yet (but will shortly), but it seems a workable approach.


quote:
On *nix, you can use a POSIX semaphore or mutex to lock your outbound queue. The calls that create these things give you a file descriptor which you can pass to select(). Then select() will wake up when there is data available on your outbound queue (mutex released), because the read state on the descriptor will be set. I've implemented that and I know that it works.

Thanks, that's more like 20c. I am on *nix (which I should have mentioned first up), and the queues already use mutex locks, so that took me 2 minutes.

It works on Windows, too. I tried the following (for those interested ;-) ):
1. Create WSAEvents for the socket and for the locking object.
2. Use WSAEventSelect() to associate the socket with its event.
3. Use SetEvent() to manually set the other event when releasing the lock.
4. Use WSAWaitForMultipleEvents() to wake the main loop when either event occurs.

Cheers!

Hey, I was mixing things up, sorry. On IRIX there is a function called usopenpollsema() which gives you direct access to the file descriptor. If you are using POSIX stuff, I would use something like a pipe. Here it goes:
1. Create a pipe() and use one of its file descriptors in your game thread.
2. Take the other file descriptor and pass it to select() via FD_SET().
3. When your game thread finishes updating the outbound queue, write a single byte to the pipe; the other file descriptor then becomes readable (and select() returns).

You might still want some synchronisation to control access to the queue itself.

Hope that helps.


Edited by - behemoth on February 14, 2002 6:54:57 AM

Cool, that's all working great now.

But I figure if I'm writing to a pipe, I may as well forget the queue altogether and just have the game thread write updates to the pipe and the net thread read them directly from it.

A computer scientist friend of mine said there was another way to do it; if it's interesting I'll post it on this thread.

Thanks again.

Yes, I would be interested in different methods. I think if you are on *nix you could use a signal mechanism as well: have the game thread send a signal to the network thread when the outbound queue gets filled, and have a signal handler deal with it. I think there might be an issue with signal backlog, but I'm not sure. Haven't tried that one ;-)
