On Server Implementations

Started by Redleaf
4 comments, last by Redleaf 22 years, 11 months ago
Because of my infancy in network programming (I've been learning/using Winsock 1.1 for about a month now), I've come here asking for some guidance, as I'm not familiar with how well particular implementation methods can meet my expectations. I am currently beginning implementation of a TCP server using Windows sockets, which I would like to keep compatible with Berkeley sockets for future porting to Linux. To this end, I'm currently using non-blocking sockets.

The design right now runs in a single thread, using select(). I've read that a select()-based server is more scalable than one using a single thread to handle each user connected (edit: meaning multiple threads to handle all users) via a blocking socket. If I'm wrong on this, please explain. (My main source thus far has been the Winsock Programmers' FAQ.) I would like this server to scale to support at least a few hundred clients at some point in the future. Yes, this is for an online game of sorts.

My current notion is that while the server is processing the game universe, it will come upon things which it must inform certain/all users about. I had envisioned keeping a separate buffer or queue for each user, which would hold information that could not be sent immediately when the server decides a user needs to be notified. I see a possibility of sending messages out of order with this EXACT method, however... I'd imagine there are better "grouping" strategies than this, of course. Please feel free to enlighten me.

Having read a bit on asynchronous sockets, I know that if a call to send or recv would block on a particular asynchronous socket, the WSAEWOULDBLOCK error would be returned. Would something like this be the case for non-blocking sockets as well? I wouldn't think that Berkeley sockets would use an error named according to the Winsock API, but perhaps there's a comparable error that would be indicated?
Assuming that there would be some indication that a call would block, I thought of it working something like this:

(In the game logic portion of server code)
// Try sending user some necessary message
// If send would block, place message in queue

(In the select() portion of code)
// Try sending users the messages on the queue, in order

If there would not be some indication of a call that would block, I figure that either:
1) Trying to send a message outside the select()'s jurisdiction would be unsafe, or
2) Trying to send a message on a non-blocking socket will always indicate success as long as the message will eventually be sent.
If it's neither, I need some info badly.

Now, obviously (or not so obviously?), the server wouldn't normally come to points in the logic where it determines that it needs to read from the user... instead, the client implementation would have (comparable?) code which makes its own attempts to send to the server within its game logic. Therefore, I only foresee recv being called within the logic where select() is called, filling up a read buffer on that particular user for the server's use.

What I'm looking for then, other than answers to my indirect questions (I just sort of stated what I thought might work), is general guidance, comments, corrections, anything at all regarding anything I've stated here. If I'm way out in left field without a glove, please tell me so. If I should scrap everything I've thought about the way it all works and should do something else entirely, let me know. Thanks in advance to all who offer their assistance.

Edited by - Redleaf on May 20, 2001 6:06:05 PM
The problem with the single-threaded approach is that calls to blocking operations such as recv or send stall the thread, which also runs the world simulation. If you're using select() polling in conjunction with blocking sockets, which is simpler to design and, from what you've said, is what you intend to do, this situation will occur.

Stalls in world logic can result in desynchronization of the world between the client and server, or make it harder or more erratic (jumpy) to synchronize. Some people use a fixed-timestep model for their world simulation, but the problem with that is that you've now thrown in a nondeterministic stalling mechanism which can lead to hiccups on the server that will propagate down to the client. Others use a variable-timestep method; that would work better for your system. However, chaining together a series of sends with large amounts of data can stall your thread for long enough that it will throw off your world model: collision checks get skipped, timed events are missed, etc. The best approach would be to use a second thread for sending. That way you avoid the stalls and still keep the minimum number of threads you want.

You could use non-blocking sockets, but this will make your program much more complex. You'll need to handle buffering your packet data, attempts at resending, handling partial sends, lower throughput, etc. Since you want to maintain compatibility with the Berkeley sockets implementation, you won't be using asynchronous sockets.

Well, good luck.

-ddn
To clear up the confusion, I'm stating that I had intended to use non-blocking sockets, as opposed to multiple threads, each managing a blocking socket for a particular user.

"Don't be afraid to dream, for out of such fragile things come miracles."
Still hoping for some more detailed answers...

"Don't be afraid to dream, for out of such fragile things come miracles."
Lots of questions, some which I can answer.

Firstly, a single-threaded application will be more scalable if you're going to have a lot (say > 100) of clients connected. All the source for text-based MUDs I've looked at took a one-thread-with-select() approach.

The buffer/queue idea for TCP is exactly how I've implemented my networking using non-blocking sockets. I use two circular buffers per client, one for incoming and one for outgoing data. During the game logic loop, each client's buffers get filled with a bunch of little messages about various state changes, chat messages, etc.

After the game logic is complete, I run select() on all the client sockets. I call write() once for each socket ready for writing, and update the circular buffer's internal pointers based on the number of bytes returned by the call. I use the same approach for reading (read as much as possible with one call).

This limits the number of calls to read() and write(), which is good for performance. Berkeley sockets set errno to EWOULDBLOCK if a call to read() or write() would block on a non-blocking socket (it's the same value as WSAEWOULDBLOCK, I think; just check the Winsock header).

BTW, you should send/recv data on your sockets as soon as you can after calling select().

Sounds to me like you're on the right track. I have some nice (I think) C++ code that wraps Winsock and provides a nice object-oriented non-blocking socket interface. The code's yours if you want it.

Edited by - genovov on May 22, 2001 10:12:14 AM
Thank you very much for the info. It's always reassuring to see something else which reinforces my original concepts.

As for the code, sure. I'd love to take a look at the way you implemented the wrapper in an object-oriented fashion.

If you would, send it to glr9940@rit.edu (same as in my profile).

"Don't be afraid to dream, for out of such fragile things come miracles."

