Posted 01 September 1999 - 03:06 PM
Well, I came upon the same questions some time ago. I don't have an absolute answer, but...
I used new threads, but I also wrote a function that times out recv(), and I used non-blocking mode (Winsock2, ioctl()). Personally, I prefer this solution: it lets your program do more things at the same time, and you can still use threads for simple pipeline processing.
I have also looked at some game code with SocketSpy, and QuakeWorld uses non-blocking sockets. They may be used just for timeouts, but somehow I think not. I don't think QW uses many threads, so is my solution really the best? Still, it is the one I would choose in most cases.
Posted 01 September 1999 - 04:30 PM
Posted 02 September 1999 - 06:45 AM
If it's for a server, the "message queue" or "round robin" approach is good. It provides fast response and doesn't require any complex threading or resource handling.
Posted 10 September 1999 - 07:11 AM
For massive multiplayer you'd need a different approach altogether, similar to the way HTTP servers deal with massive numbers of clients - that is, one thread per request, drawn from a fixed thread pool. When the pool size is exceeded, clients are put on hold...
Posted 12 September 1999 - 08:03 PM
Asynchronous approach - I'm not a big fan of it. The code is pretty Windows-specific, and most socket samples you'll find on the net are blocking. Keeping track of all the async calls you have in flight can be a headache. And it's not that much of an improvement performance-wise: at some level there is already threading/async processing going on inside the TCP/IP stack. In general, not worth it.
Message queue/round robin approach. This is good as long as each request can be served in a predictable amount of time. Ideally as fast as possible, but you want predictability over speed. Say you have two algorithms for handling requests: number one takes 200 ms on average but once in a while takes 2 seconds; number two takes less than 500 ms every time. Even though number one is faster on average, number two is preferred because it gives you a guaranteed response time of 500 ms. A request that takes a long time to serve kills performance for everyone behind it in the queue.
Thread-per-client approach. Best when client requests are independent of each other, such as in a web server. Can get complicated when requests require significant processing of shared state - the locking code can be complex, prone to deadlocks, and a source of scalability chokepoints. The overhead of allocating threads can be dealt with by pre-allocating a pool of threads. Normally "unused" threads are kept suspended until they are grabbed from the pool and used as client threads. When a client disconnects, its servicing thread is returned to the pool.
Process-per-client - all of the above methods suffer from a flaw: if the request servicing code crashes, the whole server goes down and everyone has a bad day. Increased reliability can be had, at the cost of some overhead, by running each request service routine in a separate process - if one crashes, it doesn't affect the other clients. I know what you're thinking: I'm Mr. Bad-Ass Network Coder and my code never crashes. Maybe so, but can you say the same for the underlying OS? Or for malicious attacks that exploit security holes in your protocol and bring down the server? Do you really want 1000 angry users - or even just a few - when some script kiddie figures out an exploit?
I'm sure the last two methods could be combined to give the efficiency of threading with the robustness of separate processes.
Posted 12 September 1999 - 10:16 PM