Blocking, Non-Blocking, or Async sockets for my server?
Well, I'm asking myself what kind of socket I would need for the server side of my online RPG.
In the GameDev tutorial titled "Winsock 2 for games", it says that blocking sockets should be used for the server side "since they are the most logical, simple, and practical when all you want to do is wait for a connection and then communicate with the client".
BUT in that tutorial (it's a rock-paper-scissors example) the server IS WAITING (listening) for all players to connect BEFORE processing the game!
In my own game, players should be able to connect to and disconnect from the server without affecting the game state or the server's processing.
So my question is: would blocking sockets be good for my kind of server or not?
If so, do I have to handle the listening socket in the same loop as the other sockets?
If not, what kind of socket should I use? Blocking, non-blocking, or asynchronous?
------
GameDev'er 4 ever.
I think asynchronous sockets are the most efficient.
I have chosen to use non-blocking sockets for my first multiplayer game to keep it simple.
I think THEY are the most logical. I use only one socket for all traffic. The sender's IP shows where the packet came from.
Of course this means polling the socket, but hey... that's only one socket per frame to poll... I think that is acceptable...
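That single-socket, poll-every-frame scheme can be sketched roughly like this (a minimal sketch, not code from the thread; the function names are my own):

```python
import socket

def make_game_socket(port):
    """Create the single non-blocking UDP socket the server polls each frame."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.setblocking(False)          # recvfrom() now returns immediately
    return sock

def poll_socket(sock, handler):
    """Drain every datagram that arrived since the last frame."""
    while True:
        try:
            data, addr = sock.recvfrom(4096)
        except BlockingIOError:      # nothing left to read this frame
            break
        handler(data, addr)          # addr tells us which client sent it
```

The game loop would just call poll_socket() once per frame and dispatch each packet by sender address.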
When you're using a single socket (the typical UDP situation), you don't have any problems with scaling etc., so polling every frame should be perfectly fine; non-blocking is probably best, considering that its implementation is really simple.
The only potential problem (when you have _lots_ of clients) is that the OS buffers could run out or something. I'm not sure about the implementation, but the maximum backlog of packets might be set on a per-socket (rather than a system-wide) basis. This is of course OS-dependent, but it could cause unnecessary packet drops.
In that case (and mind you, I'm not sure whether that is really true), you might consider writing a networking thread which simply recvfrom()s on the socket and copies all packets into a FIFO queue which is then read by the main thread. That way, you can determine the size of the backlog yourself.
However, that's probably unnecessary unless you're running at really slow framerates, or you've got hundreds of clients.
cu,
Prefect
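The receive-thread-plus-FIFO idea described above might look roughly like this (a sketch under my own naming, not code from the thread; the thread blocks in recvfrom() while the main thread drains the queue each frame):

```python
import queue
import socket
import threading

def start_receiver(sock, fifo):
    """Spawn a thread that blocks on recvfrom() and queues every packet."""
    def run():
        while True:
            data, addr = sock.recvfrom(4096)   # blocking is fine in this thread
            fifo.put((data, addr))             # hand off to the main thread
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t

def drain(fifo):
    """Main-thread side: collect whatever arrived since the last frame."""
    packets = []
    while True:
        try:
            packets.append(fifo.get_nowait())
        except queue.Empty:
            return packets
```

Passing queue.Queue(maxsize=N) is one way to "determine the size of the backlog yourself", as suggested above.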
quote:Original post by granat
I think asynchronous sockets are the most efficient.
Yes I think they are.
BUT is that right for a server application? Should I really program a Win32 SERVER just for asynchronous sockets?
(NB: I'm using TCP)
------
GameDev'er 4 ever.
Edited by - Khelz on February 4, 2002 10:31:11 AM
You don't have to write a server to use asynchronous I/O. It just happens that servers are commonly implemented using asynchronous I/O because it is more efficient and scalable.
Dire Wolf
www.digitalfiends.com
Does this mean that blocking sockets aren't scalable?
I'm working on a server/client right now and I'm using blocking sockets. I have a FIFO buffer for each client, with a thread that does nothing but read data into the buffer. Should I try to switch to non-blocking sockets? Would I experience problems with > 1000 clients? > 5000?
Jason Mickela
ICQ : 873518
E-Mail: jmickela@sbcglobal.net
Please excuse my spelling
-"I'm Cloister the Stupid" Lister (Red Dwarf)
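The design described here — one blocking reader thread per client, each feeding its own FIFO — might be sketched like this (my own naming, not code from the thread; note that later replies warn this approach does not scale to thousands of clients):

```python
import queue
import socket
import threading

def serve_client(conn, fifo):
    """Reader thread: block on recv() and push data into this client's FIFO."""
    while True:
        data = conn.recv(4096)
        if not data:          # empty read means the client disconnected
            break
        fifo.put(data)
    conn.close()

def accept_loop(listener, client_fifos):
    """Accept connections and spawn one OS thread per client."""
    while True:
        conn, addr = listener.accept()
        fifo = queue.Queue()
        client_fifos[addr] = fifo
        threading.Thread(target=serve_client, args=(conn, fifo),
                         daemon=True).start()
```

Each client costs a full OS thread plus its stack, which is the scalability concern raised below in the thread.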
quote:Original post by griffenjam
Would I experience problems with > 1000 clients? > 5000?
-"I'm Cloister the Stupid" Lister (Red Dwarf)
If 5000 clients means 5000 threads, then yes, I believe you will experience problems... But I haven't tried it and I'm certainly no expert... Actually... don't listen to me...
"There are always casualties in war, gentlemen, otherwise it wouldn't be war. It would be a rather large argument with a lot of pushing and shoving." - Rimmer (Red Dwarf)
PS! Quote as remembered.
Edited by - granat on February 5, 2002 6:08:21 AM
Heh, you should _never_ use one thread per client, especially not for games or other applications where the different client<->server connections heavily share data (an IRC server would count as such an app as well; a web server wouldn't).
The main problem with scalability is, AFAICT, the number of sockets you use. The select()-type programming model doesn't scale when you've got lots of file descriptors (sockets) to select on, simply because a huge array needs to be traversed. This problem can be solved with more advanced I/O methods.
On the other hand, games would typically use UDP for their communication, which means that the server needs only one single socket... I can hardly imagine a situation where you'd need a more advanced sockets API in order to scale better with a _single_ socket.
cu,
Prefect
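The select()-style model mentioned above looks like this (a minimal sketch with my own naming); the scaling cost comes from the kernel and the application scanning the whole descriptor list on every call:

```python
import select
import socket

def wait_readable(socks, timeout=0.0):
    """Return the sockets that have data waiting.

    select() examines every socket in the list on each call -- O(n) per
    frame -- which is why it degrades with very large descriptor sets.
    """
    readable, _, _ = select.select(socks, [], [], timeout)
    return readable
```

With a single UDP socket, as suggested above, the list has one entry and this cost is negligible.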
Prefect:
quote:
On the other hand, games would typically use UDP for their communication, which means that the server needs only one single socket... I can hardly imagine a situation where you'd need a more advanced sockets API in order to scale better with a _single_ socket.
Even though you may only use one socket, you are still servicing multiple clients. What happens if you have a multi-processor system and only use one thread for servicing that socket? Your application isn't working at top efficiency. A better way to service those requests would be to post multiple WSARecv calls on the socket and have different threads service those calls via I/O completion ports (you'd think I love those things or something - wouldn't you? ) That way you maximize processor usage and can service multiple incoming client requests/data.
Disclaimer: I haven't actually tried that before with UDP (only TCP, and hence multiple sockets.) I know you can post multiple, overlapped WSARecv calls on a single socket. It works for TCP sockets, so it should work the same for UDP. If anything, UDP should be easier because of its discrete messages (non-streaming data.)
Dire Wolf
www.digitalfiends.com
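Real I/O completion ports are a Windows-specific API (CreateIoCompletionPort / GetQueuedCompletionStatus with overlapped WSARecv calls, as described above). The portable sketch below only mimics the shape of the idea — several worker threads pulling completed receives from a shared queue so multiple processors stay busy — and is not actual overlapped I/O; all names are my own:

```python
import queue
import threading

def worker(completion_queue, results, lock):
    """Stand-in for a completion-port worker: handle whatever receive completed."""
    while True:
        data, addr = completion_queue.get()
        with lock:
            results.append((data, addr))   # real game logic would go here
        completion_queue.task_done()

def start_workers(completion_queue, results, n=4):
    """Start n workers draining the same queue, like n threads parked on an IOCP."""
    lock = threading.Lock()
    for _ in range(n):
        threading.Thread(target=worker, args=(completion_queue, results, lock),
                         daemon=True).start()
```

In the Windows version, the queue and the thread parking are done by the kernel, which is where the efficiency argument above comes from.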
I read a report recently about server performance using the various types of socket management: blocking socket per thread, non-blocking using select(), message notification, callback events, and I/O completion ports. I cannot post the report because it's an internal MS document, but if I can dig up the paper again I will post some of the pertinent info...
This topic is closed to new replies.