Thread Design

Started by
27 comments, last by hplus0603 18 years, 2 months ago
Here's a problem with non-blocking sockets. Suppose the server has a list of clients. It iterates through the list, retrieving any data from the sockets and building commands from the data where possible. The server then uses those commands to execute all the turns in the next timeslice. The problem is that as the list of clients grows, the newest clients in the list have their turns executed closer to the next timeslice. This means that the longer a player is logged in, the more laggy his or her experience becomes. With threads, however, the time between sending a command and executing it is fair, though nondeterministic.
The trick here is to manage your server update and message priorities properly. That said, iteration is the least of your worries. You won't get a sufficient number of TCP sockets for it to be a major problem.

I use a structure similar to this:
[queue - packets out] (disassembled messages)
[rbTree - packets in] (for assembling messages)
[vector - clients [queue - messages In] [queue - messages Out]] (clients and what goes where)
[thread - listener] (make new clients)
[thread - network IO] (send / receive and prioritize packets, deal with keep-alive, cull orphan packets)
[thread - server] (game logic - respond to messages with messages)

The three threads are fairly simple -

The listener simply dumps clients to the network IO thread once they're connected. This thread isn't required for UDP.

The network IO thread deals ONLY with message IO; it receives and sends packets for a certain amount of time per cycle, if the queues are not empty. Complete messages received are passed to the appropriate client's queue. Messages are dispatched by priority. Once dispatched to the network thread, they are the responsibility of that thread. Packets are 'skimmed', i.e. one packet is sent from each queue in turn until the queues are empty. Incoming messages are again prioritized, and if a 'replacement' message is received, the original is destroyed. Further packets for that message will be orphaned and eventually culled.

The server thread iterates over clients looking for high-priority messages first; these are dealt with before lower-priority messages. All high-priority messages are processed without a timeout, while lower-priority messages are processed under a timeout.

With the message priority system, lag can be limited to some extent - chat might lag, but combat will not lag so much, for example.
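The priority scheme above could be sketched with a priority queue on the server side. This is a minimal, hypothetical example (the class names, priority values, and message strings are invented, not from the original post): lower numbers are served first, so combat messages jump ahead of queued chat.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityInboxDemo {
    static class Msg {
        final int priority;
        final String body;
        Msg(int priority, String body) { this.priority = priority; this.body = body; }
    }

    // Lower number = higher priority, so combat is dequeued before chat.
    static final PriorityBlockingQueue<Msg> inbox =
        new PriorityBlockingQueue<>(16, Comparator.comparingInt((Msg m) -> m.priority));

    static String demo() {
        inbox.add(new Msg(5, "chat: hello"));
        inbox.add(new Msg(1, "combat: swing sword"));
        inbox.add(new Msg(5, "chat: brb"));
        return inbox.poll().body; // highest-priority message comes out first
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "combat: swing sword"
    }
}
```

PriorityBlockingQueue is thread-safe, so the network IO thread can add while the server thread polls without extra locking.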

I think for now I'll stick with multithreading for two reasons:

1. It's easy to implement.
2. All the client threads do is fill a command queue from the clients' input. The only time these threads lock out the gameplay thread is when they add commands to the queue, which would take a very minimal amount of time.
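A minimal sketch of that design, assuming one reader thread per client feeding a shared queue (thread and command names here are invented for illustration). ConcurrentLinkedQueue is lock-free, so "locking out" the gameplay thread is even cheaper than a mutex-guarded queue:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class CommandQueueDemo {
    // Client reader threads produce into this; the gameplay thread drains it.
    static final Queue<String> commands = new ConcurrentLinkedQueue<>();

    static int runTick() throws InterruptedException {
        // Two hypothetical client threads, each queueing one parsed command.
        Thread c1 = new Thread(() -> commands.add("move north"));
        Thread c2 = new Thread(() -> commands.add("attack"));
        c1.start(); c2.start();
        c1.join(); c2.join();

        // One gameplay tick: drain whatever has arrived so far.
        int executed = 0;
        while (commands.poll() != null) {
            executed++; // a real server would dispatch the command here
        }
        return executed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTick()); // prints 2
    }
}
```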

Of course, I think it would be absolutely crazy to have client threads actually perform gameplay. Absolute madness.
I still have a question: should I use one socket that alternates, receiving after each send, or two sockets, one for receiving and one for sending?

Regards,
I'd use a single socket for most purposes. Multi-processor platforms can leverage some gains by having several sockets running in separate threads, but the network driver is often the bottleneck in these cases.

Edit: That said, gains are usually found in cases with multiple receiving sockets where the sources differ. For example, a game server might use a separate set of sockets (and/or protocols) for talking to other servers rather than to clients, which might share a single UDP socket.

Be aware that TCP is a 'connected' protocol, and there is an overhead in making / breaking connections (over and above the socket creation overhead), so in the situations where you have many clients communicating with a limited socket pool, you should take care in selecting which sockets to cycle out.
Quote:Original post by SteveTaylor
I'm using Java at the moment. I've found it quite easy to put each client in its own thread and use a BufferedReader and PrintStream for input and output respectively (just using strings at the moment). I haven't written the command queuing code yet, but I figure I'll just use a ConcurrentLinkedQueue and all will be good.


This may be easy but it's not scalable. For an application that has few clients it's no big deal. But as the client numbers start rising you run into difficulties. Unfortunately, using the java.net API there is no other option as the sockets cannot be set to nonblocking mode. Many thread-per-client Java applications have been released into the wild. Sun added NIO (new IO) in 1.4 to address this issue. NIO allows sockets to be set to nonblocking mode and is more efficient than the java.net API when handling large numbers of clients. It's also scalable, and allows you to condense your network code into a single thread if you want.

[Edited by - Aldacron on January 22, 2006 6:37:13 PM]
Yeah I looked at the java.nio stuff briefly. I'll probably do a conversion later. I'm keeping the networking code out of the gameplay code. The incoming commands just fill a queue. The gameplay code consumes items in the queue. Hopefully that level of separation will facilitate a conversion to non-blocking I/O if it becomes necessary.

I thought java.net.Socket could be used in a non-blocking fashion anyway: suppose you get an InputStream from a Socket and call it in. A call to in.available() will return the number of bytes that can be read without blocking. So what's the deal with all the java.nio stuff then?
Quote:Original post by SteveTaylorI thought java.net.Socket could be used in a non-blocking fashion anyway: Suppose you get an InputStream from a Socket and call it in. A call to in.available() will return the number of bytes that can be read without blocking.


Yes, and then by calling the three-argument version of read() you can pull out exactly that number of bytes, yes? This is highly inefficient. The problem is that the socket itself cannot be configured to be nonblocking through the java.net API. This means that in order to simulate nonblocking IO you must repeatedly poll each socket (through the associated input stream) to see if any data is available to read immediately. For a handful of clients this won't hurt too badly, but as the number of clients increases you'll start to feel the pain. And that's not all: the read operation can still block. Just because in.available() says you can read X bytes without blocking does not mean you won't block on read. There are no guarantees.
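The available()-then-read idiom being discussed looks roughly like this. This is a hypothetical sketch using piped streams as a stand-in for a socket's InputStream (so it runs without a network); the class and message are invented:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PollReadDemo {
    // Piped streams stand in for a Socket's InputStream/OutputStream here.
    static String pollOnce() throws IOException {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);
        out.write("cmd".getBytes("UTF-8"));

        // The java.net-era idiom: poll available(), then use the
        // three-argument read to pull exactly that many bytes.
        int n = in.available();
        byte[] buf = new byte[n];
        int read = 0;
        while (read < n) {
            read += in.read(buf, read, n - read);
        }
        return new String(buf, "UTF-8");
    }

    public static void main(String[] args) throws IOException {
        System.out.println(pollOnce()); // prints "cmd"
    }
}
```

With a real Socket, this loop would have to run per client, per frame — which is exactly the polling overhead being criticized, and available() still offers no hard guarantee against a blocking read.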

Quote:So what's the deal with all the java.nio stuff then?


It allows the programmer to make use of nonblocking sockets via a select mechanism. Rather than the programmer polling sockets for data, the operating system does it in a much more efficient manner. Once per frame (or whenever you're ready to process data) you ask the selector for any sockets that have input pending, then read the ones that do. This reduces the amount of time you spend looping over sockets. At a conceptual level, the networking API is doing the same thing you suggested originally - polling sockets for pending data. The reason it can do so more efficiently than you is that it has access to the innards of the system, and you don't.

Furthermore, Java is free to use the most efficient method available on the current operating system. On Windows, it might be through the select API, or it might be using AsyncSockets, or maybe IO Completion Ports (the most scalable networking API on Windows systems). The point is, you don't have to worry about it. By using java.net and polling InputStream.available(), you take away Java's opportunity to use the best available method while at the same time throwing efficiency out the window.
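The selector loop described above can be sketched in a few lines. This is a minimal, self-contained example (it binds an ephemeral loopback port and uses a throwaway blocking client purely to generate traffic; all names are invented, not from any poster's actual code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class SelectorDemo {
    static String runDemo() throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // A throwaway blocking client, just to generate traffic locally.
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
        client.write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));
        client.close();

        StringBuilder received = new StringBuilder();
        boolean done = false;
        while (!done) {
            selector.select(); // the OS reports which channels are ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    int n = ch.read(buf);
                    if (n > 0) {
                        buf.flip();
                        received.append(StandardCharsets.UTF_8.decode(buf));
                    } else if (n < 0) { // peer closed the connection
                        key.cancel();
                        ch.close();
                        done = true;
                    }
                }
            }
            selector.selectedKeys().clear();
        }
        server.close();
        selector.close();
        return received.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(runDemo()); // prints "hello"
    }
}
```

Note that a single thread handles both accepting and reading — this is the "condense your network code into a single thread" property mentioned earlier.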

NIO also introduces ByteBuffers. ByteBuffers can be allocated in JVM space, in which case they are essentially wrappers for Java byte arrays. More importantly, ByteBuffers can be allocated outside of the JVM heap in native memory. SocketChannels allow you to send and receive data through ByteBuffers. Natively allocated BBs are more efficient than those allocated in the JVM heap.
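The two allocation modes look like this (a tiny sketch; the class name is invented):

```java
import java.nio.ByteBuffer;

public class BufferKinds {
    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(16);         // wraps a byte[] on the JVM heap
        ByteBuffer direct = ByteBuffer.allocateDirect(16); // native memory, outside the JVM heap

        System.out.println(heap.hasArray());   // prints "true" - backing array is accessible
        System.out.println(direct.isDirect()); // prints "true" - candidate for zero-copy IO
    }
}
```

Direct buffers avoid a copy between the JVM heap and native memory on socket reads and writes, which is why they tend to be faster for IO-heavy servers.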

The benefits of NIO are big. It allows you to write efficient, highly scalable servers in Java. This was not possible before without dropping down to native code and JNI yourself. Sun is currently building a game server that can run multiple virtual worlds with thousands and thousands of players across server clusters. It's imaginatively named the Sun Game Server. It's a massive project. There's a "significant announcement" (to quote one of the SGS developers) coming at GDC this year. And I hear they have a couple of game developer shops on board to do some stuff already and are talking with more. That's getting off topic though. All I wanted to say, really, is that SGS would not have been possible through the java.net and java.io packages alone.
Thanks for the insight. Maybe it's now time for me to have another look at nio.
I don't normally reply to myself...

I found this tutorial very enlightening.
