Non-Dedicated Server: how the client connects to his own server

Started by
7 comments, last by oliii 15 years, 9 months ago
I'm in the process of merging my multiplayer game's client and server together. It's going great so far, and I have one .exe working as a client or a server (one or the other at a time). The next step is to have both the server and client running in one process.

That got me thinking about how to handle a local client connecting to a locally-hosted server. Up until now, all client-server communication has been over TCP/UDP sockets. But it doesn't seem to make much sense to have a local client connect to its own server via a networking protocol. It will cause unnecessary lag (even if less than 1 ms), but most importantly, the client and server will need to keep separate track of each player. In other words, the players in memory cannot be shared. The client will need its own copy of each player on the server, driven by networked updates from the server, utilizing client-side prediction and all the complications of a networked game model. The server will have its own.

It seems a lot simpler if the client could just directly use the server's copy of the players in memory. There would be no need to use the complicated networking protocol to keep two copies of the same data within one process. This brings about one major complication: it will need some sort of local communication protocol between the server and the client (the server is a different thread). It's actually less of a problem when it comes to movement code, but rather the remaining networking events.

For example, when a player joins a server, he gets to choose a team. Normally, that would create a team-change-request packet to be sent to the server. The server would in turn process that packet and send a broadcast to all clients about the team-change-event. That is exactly the kind of logic I will have to duplicate when a local client changes team on a local server: sending a broadcast packet to the rest of the players, etc.
Unless I find a way to reuse that logic, so that it has to be written in one place, but can be used by either networked connection, or local loopback connection (using shared memory, not sockets). ----------- So I'm just wondering what are some common approaches used in multiplayer games, when a user hosts a non-dedicated server and locally connects to it as a client. Thanks.
A systematic solution to this is something like CORBA, which uses a half-sync/half-async approach for network communication. Such systems, however, need to enforce other constraints, which makes them generally impractical for unreliable communication. But if you were starting from scratch, this would be your best bet - it does require you to rewrite everything with this in mind.

One downside of this is that every sharable entity acts as a remote object and is resolved at run-time. This means that each access to any sharable entity carries a fairly high cost, so you need to find a way to distribute the application in a coarse-grained manner.

This approach is unsuitable for sharing state: character->health suddenly changes from an inlinable function or variable access into either a virtual function call with a branch and a dereference, or even a remote call.

For most real-time uses, AMI (asynchronous method invocation) is almost mandatory, which complicates code. It requires such interactions to be implemented as chains of caller/callback idioms. This in turn complicates synchronization of state.

Another problem with this approach is that while it offers network transparency, it assumes a reliable network. While things can fail in a predictable manner, latency and waiting for responses become a problem.

Threading is also a problem. Unless local remote calls are dispatched in a different thread (an extra heap-based allocation plus a data copy for each function call), you get an exponentially increasing number of interactions between local and remote calls, which makes testing difficult or impossible.


These types of applications are used, but they generally tend to have reasonably high overhead when developed in a network-transparent manner, so the hardware will need to reflect that.


----

IMHO, use sockets.

If you really insist, look at the MySQL source code, which uses local IPC as available on each platform. Then, rather than rewriting logic, just change the communication from sockets to whatever local mechanism is available.

Needless to say, some network stack implementations will use different means of passing data between locally connected sockets, resulting in transfer rates reasonably close to those of memcpy.
CORBA is pretty much out for games, because it's not optimized for a real-time situation where loss may be acceptable and latency is more important than throughput.

Quote:it doesn't seem to make much sense to have a local client connect to its own server via a networking protocol


Why not? It's code that you already have. It allows you to keep the server and client logic separate. It allows you to run server-only if and when that configuration makes sense. I would keep it that way if I were you.
enum Bool { True, False, FileNotFound };
hplus0603, you pretty much took the words right out of my mouth. What I mean is that I was going to post a reply that would've looked very similar to yours.

Quote:Original post by Antheus
A systematic solution to this are approaches like CORBA

Wow, I almost forgot about how complicated things get in the real world. I've taken a quick look at what CORBA is all about on Wikipedia, and it seems pretty much overkill for my simple real-time multiplayer FPS-like game.

Like hplus0603 said, it's just not suitable for real-time games where perceived latency is everything. I am already counting every bit by hand when sending packets, trying to deliver the best possible online feel with as few lag/latency problems as possible.

There is no way I will let local player <-> local server communication disrupt the online gameplay anyway. I was just considering something faster than sockets (and the associated complicated networking protocol) for local players.

Quote:IMHO, use sockets.

Quote:Original post by hplus0603
Why not? It's code that you already have. It allows you to keep the server and client logic separate. It allows you to run server-only if and when that configuration makes sense. I would keep it that way if I were you.

You know, after some more thinking on the subject after I made the post, I almost came to the same conclusion.

The latency is really under 1 ms (closer to 0.1 ms), and client-side prediction smooths everything out anyway. The person who hosts the game will already get enough of an advantage over those connecting to him anyway.

The only thing that's still slightly bothering me... I'm trying to think whether I can come up with some ingenious way to reroute the local player/local server communication in a neat way that eliminates the need to reuse the already established networking protocol. Some sort of smart design pattern. I've already got a really good one going for client movement, so it's just a matter of coming up with something analogous for the rest of the 'events' that occur, such as a client connecting/dying/changing teams/etc.

I guess what I'm trying to say (and not doing a great job) is this.

Using the same TCP/UDP networking protocol and connecting to 127.0.0.1 is a safe bet fallback approach. There's pretty much nothing wrong with using it, and it simplifies my job a lot (I don't have to code anything else in addition to the networking protocol).

But. I want to take a moment and think about any possible alternatives, ones that have the requirements of:
-not too complicated to implement (i.e. not more complex than using the already established UDP/TCP protocol)
-offers some benefit over using the TCP/UDP protocol approach... for example, not going through UDP/TCP at all is a small benefit, as long as it doesn't complicate the code

A smart design that does what I want, and does it well and simply. From the way I'm describing it, it seems like I have no idea what it might be... and it's very much possible there really isn't anything better. But I'd still like to take some time to think about it and maybe come up with something interesting.

Feel free to throw out some suggestions if you have any.

Maybe I can't think straight right now because it's a Friday... I'll post here if I can come up with something good later on. [smile]
The pattern you want, if you want to take the sockets out of the loop, is probably "interface" or "facade."

Build an interface for sockets. This will have abstractions for things like "address" and "read" and "write." Make your messaging layer use that interface. Initially, the first implementation of this interface uses sockets.

Build an interface for your messaging layer. This will have abstractions for things like "endpoints" and "buffer." Make the game use that interface. When creating the messaging interface, configure it with the socket interface.

Finally, make sure your game is configured with the messaging interface.

Now, you have the option of putting something other than a socket under the socket interface -- for example, something that talks to the socket interface, but drops every 20th packet, to test for packet loss resilience.

You also have the option of building a socket interface that doesn't talk to the network at all, but just moves data around the application.

Or you can put something in at the messaging layer, where it will forward local data to a local endpoint, without talking to the socket interface at all.

The important part is coming up with clean abstractions that explicitly define what the role of each interface is, and not expose any implementation details outside of the interface, so it truly is interchangeable. Then only talk to that system using the interface (other than during initial system creation), so the code doesn't need to know what the implementation underneath actually is, and the implementation can be replaced.
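To make the layering above concrete, here's a minimal sketch of the bottom "socket interface" with an in-process implementation behind it. All the names (`ITransport`, `LoopbackTransport`, `makePair`) are illustrative assumptions, not anything from the post; the point is only that the messaging layer talks to the interface and never learns whether bytes crossed a real network.

```cpp
#include <cstdint>
#include <deque>
#include <memory>
#include <utility>
#include <vector>

// Hypothetical minimal transport interface (the "socket interface" above).
struct ITransport {
    virtual ~ITransport() = default;
    virtual void send(const std::vector<uint8_t>& datagram) = 0;
    // Returns true and fills 'out' if a datagram was available.
    virtual bool receive(std::vector<uint8_t>& out) = 0;
};

// In-process implementation: "sending" just pushes into the peer's inbox.
// A real-socket implementation would sit behind the same interface.
class LoopbackTransport : public ITransport {
public:
    // Creates a connected pair of endpoints (client side, server side).
    static std::pair<std::shared_ptr<LoopbackTransport>,
                     std::shared_ptr<LoopbackTransport>> makePair() {
        auto a = std::make_shared<LoopbackTransport>();
        auto b = std::make_shared<LoopbackTransport>();
        a->peer_ = b;
        b->peer_ = a;
        return {a, b};
    }
    void send(const std::vector<uint8_t>& datagram) override {
        if (auto p = peer_.lock()) p->inbox_.push_back(datagram);
    }
    bool receive(std::vector<uint8_t>& out) override {
        if (inbox_.empty()) return false;
        out = std::move(inbox_.front());
        inbox_.pop_front();
        return true;
    }
private:
    std::weak_ptr<LoopbackTransport> peer_;
    std::deque<std::vector<uint8_t>> inbox_;
};
```

The game code only ever holds an `ITransport*`, so swapping in a packet-dropping test transport, or this loopback one, is a configuration change rather than a code change.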
enum Bool { True, False, FileNotFound };
Quote:Original post by hplus0603
CORBA is pretty much out for games, because it's not optimized for a real-time situation where loss may be acceptable and latency is more important than throughput.
Of course it is, hence 'like'.

Quote:The pattern you want, if you want to take the sockets out of the loop, is probably "interface" or "facade."
This is why I suggested one of these facades. Using location transparency in combination with AMI allows for a fairly straightforward design if the application can benefit from an event-driven design.

Such an approach allows for the least, perhaps even no, resource or code duplication. It even allows the client and server to run within the same process.

The only parts needed from this (compared to the full CORBA specification) are location transparency and marshalling. State replication and all messaging are built on top of those.

But all this applies to the original question of how not to duplicate resources when running locally. Whether such an approach makes sense, or whether location transparency is viable, depends on the actual type of application.
I typically take the approach that I think some of the later replies are suggesting.

Most of my network protocols are built on a messaging system (e.g., network traffic can be broken down into messages that communicate specific information). When the client wants to send a message to the server, or vice versa, it makes a call into my network layer that says "deliver message [...] to this endpoint." The network layer encapsulates and sends the message, and reconstructs the message on the other end (details regarding ordering & reliability are omitted here).

When a client and server are in the same process, I simply set a flag that tells the net code that a given endpoint is a local endpoint instead of across the network. When the net code receives a call asking it to send a message to a local endpoint, it literally just drops the message into the local endpoint's "incoming messages" queue for the other sub-process to pick up the next time it asks for net messages.

This does cause some overhead, since the client and server still have to serialize the message and then deserialize it and figure out what to do with it. However, in practice I have found that this overhead has been absolutely tiny, and it keeps the client and server just as separate as they were before (which is a very good thing).
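A rough sketch of that "local endpoint" routing might look like the following. Everything here is an assumption for illustration (the endpoint ids, the `Message` layout, the trivial serializer); the key behavior is that a message to a local endpoint is still serialized and deserialized, but lands in an in-process queue instead of a socket.

```cpp
#include <cstdint>
#include <map>
#include <queue>
#include <vector>

// Illustrative message type; a real game would have richer framing.
struct Message {
    uint16_t type;
    std::vector<uint8_t> payload;
};

class NetLayer {
public:
    // Mark an endpoint as in-process: messages to it skip the socket path.
    void registerLocalEndpoint(int endpointId) {
        localQueues_[endpointId];  // create an empty inbox for it
    }
    void sendTo(int endpointId, const Message& msg) {
        auto it = localQueues_.find(endpointId);
        if (it != localQueues_.end()) {
            // Still serialize, so the client and server stay exactly as
            // decoupled as they are over the real network.
            it->second.push(serialize(msg));
        } else {
            // ...real socket send would go here...
        }
    }
    // Called by the local sub-process when it asks for net messages.
    bool pollLocal(int endpointId, Message& out) {
        auto& q = localQueues_[endpointId];
        if (q.empty()) return false;
        out = deserialize(q.front());
        q.pop();
        return true;
    }
private:
    static std::vector<uint8_t> serialize(const Message& m) {
        std::vector<uint8_t> b{uint8_t(m.type & 0xFF), uint8_t(m.type >> 8)};
        b.insert(b.end(), m.payload.begin(), m.payload.end());
        return b;
    }
    static Message deserialize(const std::vector<uint8_t>& b) {
        Message m;
        m.type = uint16_t(b[0]) | uint16_t(uint16_t(b[1]) << 8);
        m.payload.assign(b.begin() + 2, b.end());
        return m;
    }
    std::map<int, std::queue<std::vector<uint8_t>>> localQueues_;
};
```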

The problem you are describing with duplicated state that could be shared is also not really as hard as I think it appears at first. In a relatively sane data model, it's easy to divide out mutable resources (e.g., locally-modifiable state... where players are, how much health they have, what the score is) and immutable resources (e.g., the level's layout, hit-boxes, and so on). I have found, in practice, that the mutable resources are very small and so can be duplicated without trouble. The immutable resources, which might be larger, can be shared too, since nobody is changing them.

The one optimization I'll suggest that might be worthwhile (obviously depending on your game) is that typically the client performs full simulation and so does the server, so that clients can't cheat. This can be one of the more processor-intensive parts of the game in many cases, so if your profiling results suggest that duplicating all of these computations is causing performance problems, you might try eliminating the duplicated computation. Two approaches that have worked for me (which one is more appropriate will vary) are (1) implementing some flag you can set that causes the server to completely trust the local client, or (2) implementing some flag that causes the local client to perform no prediction at all, and replace that prediction with very frequent updates from the local server (which is fine since it's not like the updates are costing you bandwidth).

Anyhow, sorry for the long reply. Hope some of this helps.

Kevin
Solid Stage, LLC
Just wanted to add:

One really neat thing that I've been able to do with the approach described above is dual-threading. If you make sure that the cross-thread message delivery system is threadsafe (the part where the netcode directly drops messages into queues), then since the only shared state is immutable, you can run the client and server in two simultaneous threads.

This is a great performance booster, since many games are single-threaded, and many computers (and more and more in the future) have two or four cores!

Kevin
Solid Stage, LLC
Use communication by messages. Then you can bypass the network protocol and re-route the message locally.

Or just consider how much of a problem duplicating data (but not resources) really is, and how much of a problem a small latency for the local player can be (why would it be a problem? wouldn't it also be a problem for remote clients then?). I don't think it's necessarily a waste of resources if it makes the design easier (especially if you do things like replays).

Everything is better with Metal.

