Multiple Conversations on a Single Socket or Multiple Sockets?

Hey folks, I have a client/server system where the client needs to download information on separate world objects from the server in a near-simultaneous way. Say, for example, I'm sending individual requests for the stats on two players in a game world. I've been trying to work out how this could be done over a single TCP connection to the server. The client would have to send one request, and then the other right after. The server would handle each request on a separate thread (using I/O completion ports) to complete them both as quickly as possible, and send the responses to the client. The problem is receiving simultaneous responses on the same socket: the data from the two responses could overlap in the client's receive buffer. I would have to synchronize the sending threads on the server in some fashion to ensure that one sends its data completely before the other starts sending.

As the above is growing complicated, I'm wondering if it would be better to use multiple server connections from a single client to achieve the same thing, so each thread would send over a different socket, as I imagine a web browser does (having a separate connection open for each downloading image, for instance).

So my questions are:

1.) Is having multiple connections to the server from each client a reasonable alternative to processing requests from a single connection in a serial fashion, or to using complex asynchronous code to send the data in the correct order from multiple threads?
2.) If multiple connections are used, should they be opened and closed as needed, or left open throughout gameplay in a socket pool?
3.) Do any games use multiple sockets?

Thanks!
I think this is a non-issue, really. You want to send two pieces of data at 'the same time' and have them arrive at 'the same time'. Yet you can't control the variable speed of the Internet or packet fragmentation, and the client will presumably have other things competing for resources (e.g. the OS, other processes, rendering graphics, playing sound), so there's a large chance that however you send the data and however frequently you read from the network, it will arrive spread over a period of time (e.g. 5-15 ms). The Internet is not a hard real-time data source, and no amount of fiddling with the end points will change that.

With that in mind, the simplest solution is to just send them sequentially and process them sequentially. For this, one socket is perfectly sufficient.
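To make that concrete, here's a rough sketch of length-prefixed framing over one TCP stream (function names and layout are just made up for illustration; byte-order handling and error checking are omitted). Each response goes out as [4-byte length][payload], and the client peels complete messages out of its receive buffer in order, so two responses can never "overlap":

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Sender side: append one length-prefixed message to the outgoing buffer.
void appendMessage(std::vector<char>& out, const std::string& payload)
{
    uint32_t len = static_cast<uint32_t>(payload.size());
    out.insert(out.end(), reinterpret_cast<char*>(&len),
               reinterpret_cast<char*>(&len) + sizeof(len));
    out.insert(out.end(), payload.begin(), payload.end());
}

// Receiver side: called whenever more bytes arrive; extracts every complete
// message and leaves any partial one in the buffer for next time.
void extractMessages(std::vector<char>& recvBuffer,
                     std::vector<std::string>& messages)
{
    std::size_t offset = 0;
    while (recvBuffer.size() - offset >= sizeof(uint32_t)) {
        uint32_t len;
        std::memcpy(&len, recvBuffer.data() + offset, sizeof(len));
        if (recvBuffer.size() - offset - sizeof(len) < len)
            break;  // partial message; wait for more data
        messages.emplace_back(recvBuffer.data() + offset + sizeof(len), len);
        offset += sizeof(len) + len;
    }
    recvBuffer.erase(recvBuffer.begin(), recvBuffer.begin() + offset);
}

The same loop works whether one response arrives on its own or several arrive queued back to back.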

The big question is: why do you think the two requests need to be "near-simultaneous"? If they really need to be handled together, combine them into one request. Having two server threads specifically for this is unlikely to gain you anything unless you can guarantee that you always have two cores free for this specific purpose and only have one client who could possibly require it.

Web browsers may have a different connection for each image, but that's more an artefact of the way the HTTP and HTML protocols interact. An HTML page is served up by one HTTP connection, and within the data it tells the browser about more resources that also need to be requested via HTTP, any of which might be on completely different servers. You, on the other hand, presumably have a custom protocol where you always know exactly where the resources are for a given request, and can happily push all resources down the same connection to the destination.

So, to answer your numbered questions:

1) I would say it's an unreasonable alternative, given the scenario you mention. A single connection with sequential reads would suffice.
2) Only you can answer that, but it would depend on how often you'd need these extra data channels. However, the overhead of managing extra connections contributes to my answer to (1).
3) Some games apparently use one UDP connection and one TCP connection, since each has benefits (an example might be UDP for movement, TCP for chat and trade). I've never heard of any using multiple TCP connections unless they're connected to completely different servers (e.g. two zone servers, or a zone server and a chat server, etc.).
Does the client have one or two network connections? If it only has one, then two separate sockets won't help, because, physically, the packets will be ordered one after the other on the wire. In general, when using UDP, you only need one socket, total. When using TCP, you need one socket per host you want to talk to, but generally no more.
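To illustrate the UDP case, here is a minimal sketch (assuming a POSIX-style socket API; setup and error handling omitted) of how a single socket can serve every peer, since recvfrom() hands back the sender's address with each datagram:

#include <netinet/in.h>
#include <sys/socket.h>

// One socket, many peers: each datagram arrives with the sender's address,
// so per-peer state is looked up by address rather than by socket.
void pumpDatagrams(int udpSocket)
{
    char buffer[1500];
    sockaddr_in from{};
    socklen_t fromLen = sizeof(from);
    ssize_t n = recvfrom(udpSocket, buffer, sizeof(buffer), 0,
                         reinterpret_cast<sockaddr*>(&from), &fromLen);
    if (n > 0) {
        // Look up (or create) the peer record keyed by 'from' and hand it
        // the datagram; no additional sockets are needed.
    }
}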

The way to make sure that things happen "simultaneously" is to divide your simulation into time slots (steps), where you step some number of times per second (say, 30). Then, when you send a message, you send it marked for some step. If two things happen at the same time, mark them both for the same step.
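A rough sketch of what that tagging might look like (the message layout here is hypothetical):

#include <cstdint>
#include <vector>

struct Message {
    uint32_t step;              // simulation step this message belongs to
    uint16_t type;              // e.g. a hypothetical PLAYER_STATS id
    std::vector<char> payload;
};

// Buffer incoming messages and apply them only when the simulation reaches
// the step they are tagged with, so two replies tagged with the same step
// take effect together no matter when their bytes actually arrived.
void applyMessagesForStep(uint32_t currentStep, std::vector<Message>& pending)
{
    for (auto it = pending.begin(); it != pending.end(); ) {
        if (it->step <= currentStep) {
            // handleMessage(*it);  // hypothetical dispatch into the game
            it = pending.erase(it);
        } else {
            ++it;
        }
    }
}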
enum Bool { True, False, FileNotFound };
Quote: Original post by JonW
The server would handle each request on a separate thread (using I/O completion ports) to complete them both as quickly as possible, and send the responses to the client.

Don't issue multiple receive requests on the same socket. There is no benefit to it, since TCP is a stream.

The problem is identical to trying to read from an fstream from multiple threads: it's conceptually impossible.

By far the simplest and most robust solution is to only ever have a single outstanding IOCP request per socket. This is fairly trivial to implement. If multiple threads need to send data, then keep a buffer of outstanding data to send, or provide a per-socket blackboard which the send operation will query for the most recent data.
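Something along these lines, where beginOverlappedSend() is a hypothetical stand-in for posting the real overlapped WSASend, and the locking is simplified to a plain mutex:

#include <deque>
#include <mutex>
#include <vector>

struct Connection {
    std::mutex lock;
    std::deque<std::vector<char>> pending;  // data waiting to be sent
    bool sendInFlight = false;              // is an overlapped send outstanding?
};

// Stub standing in for posting the real overlapped send on the connection's
// socket; only the queueing logic matters for this sketch.
void beginOverlappedSend(Connection&, const std::vector<char>&) {}

// Any thread may call this to send data.
void queueSend(Connection& c, std::vector<char> data)
{
    std::lock_guard<std::mutex> guard(c.lock);
    c.pending.push_back(std::move(data));
    if (!c.sendInFlight) {                  // socket idle: start sending now
        c.sendInFlight = true;
        beginOverlappedSend(c, c.pending.front());
    }
}

// Called from the IOCP worker thread when the send completion is dequeued.
void onSendComplete(Connection& c)
{
    std::lock_guard<std::mutex> guard(c.lock);
    c.pending.pop_front();                  // the buffer that just finished
    if (!c.pending.empty())
        beginOverlappedSend(c, c.pending.front());
    else
        c.sendInFlight = false;             // nothing left; go idle
}

This keeps the stream ordering correct by construction: whatever the threads enqueue goes out in queue order, one completed send at a time.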

Quote: as I imagine a web browser would probably do (having a separate connection open for each downloading image for instance).

This is a different problem. In the context of real-time game networking, there should never be a case where either party needs to send so much data over the real-time channel that this becomes a problem.

Web servers may run into problems with a single connection if an image is 50 MB in size and keeps small 500-byte GIFs from transferring.

For anything time-critical, one would almost certainly enforce priorities and maximum message sizes (probably under the minimum MTU) or, as a last resort, manually multiplex content: send low-priority bulk data in small fragments alongside the high-priority, latency-sensitive real-time data.
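A rough sketch of that kind of manual multiplexing (sizes and names are illustrative only):

#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

constexpr std::size_t kBulkFragmentSize = 1200;  // kept under a typical MTU

struct OutgoingTraffic {
    std::deque<std::vector<char>> highPriority;  // movement, chat, etc.
    std::vector<char> bulkData;                  // large download in progress
    std::size_t bulkOffset = 0;
};

// Build the next batch of bytes to hand to the socket: urgent messages first,
// then at most one small fragment of the bulk transfer, so a big download
// never starves the time-critical traffic.
std::vector<char> nextSendBatch(OutgoingTraffic& out)
{
    std::vector<char> batch;
    while (!out.highPriority.empty()) {
        auto& msg = out.highPriority.front();
        batch.insert(batch.end(), msg.begin(), msg.end());
        out.highPriority.pop_front();
    }
    if (out.bulkOffset < out.bulkData.size()) {
        std::size_t n = std::min(kBulkFragmentSize,
                                 out.bulkData.size() - out.bulkOffset);
        batch.insert(batch.end(),
                     out.bulkData.begin() + out.bulkOffset,
                     out.bulkData.begin() + out.bulkOffset + n);
        out.bulkOffset += n;
    }
    return batch;
}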

But all of this is unrelated to IOCP or threads. Regardless of which networking API is used, one needs tight control over what is sent and how.
Thank you for the wise responses. My motive for considering streaming multiple pieces of data at once was to emulate true simultaneous loading, similar to how Windows performs multitasking on a single processor. I was also worried about starving client-to-server messages that need to go out while a large piece of information is being downloaded from the server (if I only allow a single IOCP request per socket). However, I agree with the consensus here; I probably will never have enough data loading for a single item to justify simultaneous conversations.

