
hplus0603

Member Since 03 Jun 2003

#5297234 OpenSSL DTLS server structure suggestions?

Posted by hplus0603 on 19 June 2016 - 12:14 PM

I have seen an example open a new thread per client.


Just because the example does it, doesn't mean that you have to do it.
You can still use a single input socket (I presume the example does that too?), and as long as you associate each OpenSSL context with the correct remote source (typically using a hash table keyed on address and port), it can all live in the same thread.
Unfortunately, I haven't used OpenSSL DTLS in anger, and don't remember the exact names of the data structures/functions, so I can't contribute more than that.


#5297233 Peer-to-peer Massive Multiplayer Game

Posted by hplus0603 on 19 June 2016 - 12:11 PM

that's not a problem since everyone's simulation is based on the host


So if I run a host, I can cheat by poking at the memory of the process.

Here's the thing: the train of thought you are going down is one that many, many network programmers have gone down over the last 30 years.
It may feel new and promising to you, because there is no large-scale success to look at.
Unfortunately, the reason for that is not that it's a green field ripe for the taking; the reason is that the practicalities all slope heavily towards a hosted-games star-topology approach.

Also, your "one day" followed by "one week" estimate shows that you have very little actual experience with distributed systems development.
The good news is that the best way to get such experience is to combine talking to those who have it with trying your own things, and analyzing how they actually work once you have them up and running, so you're on the right path!


#5297089 Peer-to-peer Massive Multiplayer Game

Posted by hplus0603 on 17 June 2016 - 10:50 PM

You can prevent fake data by having each user sign their commands with their own digital signature and a time-stamp that can't be reused.


First: That's terribly expensive. Even on modern computers, public/private key cryptography has a real cost. This is why you generally only use the public/private pair to exchange a symmetric session key.

Second: The problem is not someone sending data on behalf of someone else. The problem is someone poking at memory in their own client, giving themselves more power / gold / hitpoints / speed / whatever.


#5296982 How can I optimize Linux server for lowest latency game server?

Posted by hplus0603 on 17 June 2016 - 11:24 AM

How is the ping measured?
Do you measure it in the game, or using the command-line "ping" tool?
What does a "traceroute" to the server from the player say?

Another thing to watch out for: Google Compute Engine is not optimized for low-latency real-time applications. You may suffer from virtualization-induced scheduling jitter in the game process. That kind of jitter would be visible to all players, not just one player, so I don't know if it applies in this case.

Finally: 140 milliseconds isn't that uncommon over the greater internet. If your game is unplayable at 140 milliseconds ping, then your game is not designed to be playable in general over the internet.


#5296754 Peer-to-peer Massive Multiplayer Game

Posted by hplus0603 on 15 June 2016 - 07:16 PM

Also this whole peer-to-peer architecture would at most take me one day to write up, so it's really not that big of a deal.



Oh. Oooh. AAAAAAAAAH!


So, anyway, another option is to allow players to host servers, and use a central listing/discovery server.
You can document the protocol and let users type in the name of the discovery server, so if you ever take down your servers, fans can keep it going.
Servers that are hosting will post to your listing server once every 4 minutes.
Your listing server will remove servers from the list if they haven't posted in the last 5 minutes.
Clients can get a list of servers that match some criteria (game mode, host username, etc.)
That's really all you need. A bonus would be to also let the listing server support NAT punch-through, but that's not needed for a simple solution (tell users to port forward when hosting a server.)
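The heartbeat/expiry bookkeeping described above is only a few lines of logic. A minimal sketch (function names hypothetical), with the clock passed in so the behavior is easy to reason about:

```javascript
const EXPIRY_MS = 5 * 60 * 1000; // drop servers silent for 5 minutes
// (hosts are expected to post every 4 minutes, inside the window)

// serverId -> { info, lastSeen }
const listing = new Map();

// Called whenever a hosting server posts its heartbeat.
function heartbeat(serverId, info, now) {
  listing.set(serverId, { info, lastSeen: now });
}

// Drop any server that has not posted within the expiry window.
function prune(now) {
  for (const [id, entry] of listing) {
    if (now - entry.lastSeen > EXPIRY_MS) listing.delete(id);
  }
}

// Return servers matching some criteria (game mode, host name, etc.)
function list(now, predicate) {
  prune(now);
  return [...listing.values()]
    .filter((entry) => predicate(entry.info))
    .map((entry) => entry.info);
}
```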


#5296744 Peer-to-peer Massive Multiplayer Game

Posted by hplus0603 on 15 June 2016 - 06:07 PM

While a regular large is "only" $4.50, the Jamaican Blue Mountain is $10. For a cup of coffee. ($120 per pound)




#5296145 Peer-to-peer Massive Multiplayer Game

Posted by hplus0603 on 11 June 2016 - 05:44 PM

Before you do a big re-write, it's usually quite useful to measure the performance of whatever solution you're trying to add.

Make sure that network usage, latency, scalability, cost, resource utilization, latency tolerance, security, and all the other parameters you care about are actually up to snuff for what your game needs.

If you don't yet know what your game needs, then it's probably better to get further with the game, than to pick some particular solution too early.




#5296097 Instance based game with multiple nodejs instances

Posted by hplus0603 on 11 June 2016 - 11:13 AM

I didn't realize you were actually adding a second physical game process

 

So, technically, you can actually have the same physical process both play the role of "user-instance-runner" and the role of "game-instance-runner," and that process would be registered twice in the registry (once for each role.) Personally, I find it easier to keep them apart, because that makes each process do less, and thus be easier to debug.

 

Regarding your questions -- the "connection between players" for gameplay purposes should happen in the game process. Because Muffins and Turkey are both connected to Game Instance 47, they will both see what happens in game instance 47. For example, if Muffins says "I am frosted", then that could be an event emitted by that game instance, and each connected player receives the event "Muffins says 'I am frosted'".

 

The user/user connection only needs to happen for some messaging channel that is not game-specific, such as a cross-game "whisper" system or whatnot.

 

In practice, you will have each of the user-instance-processes listen to two ports: one port for incoming connections for users, and one port for incoming connections for other user-instance-process servers.

Each game-instance-process will listen on one port, for incoming user-instance-connections to the game-instance-process.

You only really need one TCP connection between process A and process B, and you can funnel the actual intended target of a message ("message for user X" or "input to game Y") as part of the packet being sent along the connection.
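One way to sketch that funneling (the framing format is entirely up to you; this one uses a 4-byte length prefix and a JSON envelope whose `target` field names the intended recipient):

```javascript
// Encode one message: 4-byte big-endian length, then a JSON envelope
// naming the target ("user:X" or "game:Y") plus the payload.
function pack(target, payload) {
  const body = Buffer.from(JSON.stringify({ target, payload }));
  const frame = Buffer.alloc(4 + body.length);
  frame.writeUInt32BE(body.length, 0);
  body.copy(frame, 4);
  return frame;
}

// Decode as many complete frames as the buffer holds; returns the
// messages plus any leftover partial bytes (TCP is a byte stream, so
// a read may end mid-frame).
function unpack(buffer) {
  const messages = [];
  let offset = 0;
  while (buffer.length - offset >= 4) {
    const len = buffer.readUInt32BE(offset);
    if (buffer.length - offset - 4 < len) break;
    messages.push(JSON.parse(buffer.slice(offset + 4, offset + 4 + len).toString()));
    offset += 4 + len;
  }
  return { messages, rest: buffer.slice(offset) };
}
```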

 

The kinds of registry information you will need include:

 

user-server-process-id -> host-and-port

game-server-process-id -> host-and-port

game-instance-id -> game-server-process-id
user-id -> game-instance-id

user-id -> user-server-process-id (if you support whisper)

 

So, if user A wants to join "the game of user B" (as opposed to "game 47"), the look-up would be:

user-id(B) -> game-instance-id(47)

game-instance-id(47) -> game-server-process-id(1234)

game-server-process-id(1234) -> host-and-port(10.1.2.3:4567)
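With the registry tables held as plain maps, that lookup chain is just (IDs and addresses here are the hypothetical ones from the example above):

```javascript
// The registry tables from the post, as in-memory maps for illustration.
const userToGameInstance = new Map([['B', 47]]);
const gameInstanceToProcess = new Map([[47, 1234]]);
const processToHostAndPort = new Map([[1234, '10.1.2.3:4567']]);

// "User A wants to join the game of user B."
function lookupGameAddress(userId) {
  const instanceId = userToGameInstance.get(userId);
  const processId = gameInstanceToProcess.get(instanceId);
  return processToHostAndPort.get(processId);
}
```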

 

Typically, you will have a registry of connections already made in each server process, so if someone wants to connect to a server that you're already talking to, you just forward their messages over that connection.

In Node.js, the way to implement "raw" TCP sockets is to use net.createServer() and then server.listen(), and for each incoming connection you get a socket, which you can bind data events to.

To connect to a "raw" TCP socket, you use net.Socket() and socket.connect(), and bind data events the same way.




#5296022 Instance based game with multiple nodejs instances

Posted by hplus0603 on 10 June 2016 - 03:56 PM

So, I'm assuming that you already have a load balancer, which spreads incoming user connections to some number of nodes (hosts) that run processes which "deal with connected users" (call these "user processes")

 

Also I assume that those same processes can also talk to each other, presumably on some other port than the main incoming-user-connections port. (And presumably firewalled off!)

 

Also I assume that you manage many user connections in a single process, because that saves on overhead per-process.

 

So far, there exists:

- incoming connections from users

- going to some number of user-serving processes

- a mapping between "user" and "user-serving process"

 

Now, a user wants to create some game instance. I propose that the simplest way to do that is to create a second kind of process, a "game instance process."

I assume that each "game instance process" can manage more than one game instance at the same time -- again, because that's typically how you build Node services.

You would have some function that "selects one game-instance node/process and creates a new game instance on it." It would also register that instance in some database.

You then return that game instance ID to the creating player, and the creating player's user-process would make a connection between that player, and the game instance on the game-instance-process server.

 

Now, when a second user wants to join the same game instance, the user-server-process that manages that user would find the game-instance-process for the game-instance-id, and connect that second user to that process.

In-game chat would go through the game-instances.

If you want to support 'disconnect/reconnect' then you would have another database of "user id" to "game instance currently in," and when a user connects, the user-connection process would look this up, and if it's not empty, immediately (re-)connect the user to the game instance.

 

If you now want to add arbitrary user-to-user chat, then you need a separate database of "user-id" to "user-server-process."

When user B wants to send a message to user C, user B's user-server-instance will look up user C, and send a message to that user-server-instance.

 

The main problem with keeping this data in Redis is that, if a process crashes, Redis doesn't clean up after you.

And if you set an expiry time on this data, then you have to keep refreshing the data while the user is connected. Let's say you expire data after 5 minutes -- this means you have to refresh the data every 4 minutes or so, which adds a not insignificant additional write load on your Redis instance.

This is why I prefer something like Zookeeper, which can create "ephemeral" keys, which go away if the connection to Zookeeper that created the key goes away.

But, either way can work.

 

Now, it turns out that, in most systems like these, each "user-server-process" will have to talk to each "game-instance-process" on average, and if you do cross-system chat, each user-server-process will also talk to each other user-server-process, as well as everything talking to the central database. This will scale as N-squared in number-of-processes. Luckily, because you can typically do thousands of users per process, and N=100 processes still keeps N-squared at a reasonable size, you should be able to do 100,000 online players without too much trouble, and if you make sure to optimize the implementation of the various bits, you can probably do 10,000 users per process and N=1000 processes, to support games that are the largest in the world :-)
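The back-of-the-envelope math above, spelled out (the per-process user count and process count are the assumptions from the paragraph, not measurements):

```javascript
// Worst case for a full mesh: every process talks to every other one.
function interProcessConnections(n) {
  return (n * (n - 1)) / 2;
}

const usersPerProcess = 1000;
const processes = 100;

console.log(usersPerProcess * processes);        // 100000 players online
console.log(interProcessConnections(processes)); // 4950 connections
```

Even at N = 1000 processes that is only about half a million internal connections across the whole cluster, which is why the N-squared growth stays tolerable in practice.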

 

Node has the draw-back that you can only run a single thread per process. This means that you'll need to run multiple processes (and thus multiple server-instances) per physical host, to best use available cores. This is generally accomplished by mapping each server-instance to a separate port. Thus, the look-up table to find a particular server-instance, needs to return both a host (internal IP) and a port number. Similarly, the load balancer for incoming user connections will be configured with multiple back-end processes to load balance to, re-writing the publicly exposed port to whatever the internal port number is for each of the instances ("reverse" or "destination" NAT if your LB is a router; just an internal TCP stream if your LB is something like HAProxy.)

 

One of the best features of Erlang/OTP is that almost all of the features I talk about above (except for the load balancer) are built-into the software already!

You make sure to configure the different Erlang nodes appropriately with their roles, and find each target server using the built-in Erlang process discovery/registry functions, and you'll do great!

With Node, you have to build a bunch of this yourself (as you already discovered.)

 

[attached diagram: typical-game-server-cluster.jpg]




#5296000 Instance based game with multiple nodejs instances

Posted by hplus0603 on 10 June 2016 - 11:45 AM

There are many solutions to the "how do I talk to the right thing" problem.

Easiest is to declare that users connect to some random node. Then there's a registry of instances. Instances run on some random node. When a user talks to an instance, you create a connection from the user's process to the game-instance process by looking up the game instance in the registry.

This is a pretty well researched area. You mention node-IPC (which I'm not a fan of) and ZeroMQ (which I'm a bit more a fan of) but there are tons of other connection methods, including bare TCP sockets, Thrift, etc.

Technologies outside the Node ecosystem may solve this even more elegantly, such as Erlang which has the entire concept built-in.

The nice part here is that, while you can start out by saying "game instances and user connections are served by the same kinds of processes," you actually have two separate kinds of service that could be split apart if you need to, assuming that you actually stick to the clean separation, rather than hard-coding things like "the game instance is always local to a process of at least one player."

 

The next question is "what do you use as the registry of the mapping from game-instance to process-address?"

Common distributed solutions include etcd and zookeeper or even just plain DNS SRV records (using a private network.) You can also use a central solution like a simple database.




#5295642 Unity Android run in background

Posted by hplus0603 on 08 June 2016 - 10:54 AM

Android apps can create a "persistent notification" in the notification bar.

I would suggest that you use the notification system to display a "so and so is casting a spell" notification while that's happening if the app is in the background.

Separately, Android allows you to start Services, which can run in the background and communicate with servers, without having the same aggressive suspension as UI-facing activities.

This can be used for things that need to communicate while the app is in the background. However, don't abuse Services to do things like continually run a game update loop -- that will drain the battery very fast!




#5295510 Peer-to-peer Massive Multiplayer Game

Posted by hplus0603 on 07 June 2016 - 10:06 AM

Some games support host migration, which mitigates the "game poofing" problem.




#5295373 Multiplayer Movement Problem

Posted by hplus0603 on 06 June 2016 - 04:04 PM

I just am not sure how I would update their positions on the server without some type of loop.

 

If it's a physics based simulation, you actually want to update the simulation using a loop, and you want the physics time step size to be the same on all clients and server.

 

Another option is to keep a priority queue of commands sorted by time. Each time someone needs the state of the world, resolve all the events that have a time less than the time that you're querying for. This will update the world "on demand" as needed.
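A minimal sketch of that on-demand scheme: commands sit in a queue ordered by timestamp, and querying the world state first applies everything that has come due.

```javascript
// Commands sorted by ascending timestamp. A real implementation would
// use a binary heap; a sorted array keeps this sketch short.
const pending = [];

function schedule(time, apply) {
  pending.push({ time, apply });
  pending.sort((a, b) => a.time - b.time);
}

// Resolve every command with time <= queryTime, then return the state.
// The world only advances when someone actually asks about it.
function stateAt(world, queryTime) {
  while (pending.length > 0 && pending[0].time <= queryTime) {
    pending.shift().apply(world);
  }
  return world;
}
```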




#5295324 Wi-Fi Latency (Local Network)

Posted by hplus0603 on 06 June 2016 - 12:18 PM

I agree. Use UDP broadcast on the local subnet. Send three or so packets per button press, at time T, T+15ms, and T+60 ms, with a "this is packet N" counter.

Alternatively, have the PC echo back to the network each time it receives a packet, and have the client keep sending every 15 milliseconds until it sees the echo. That would be even simpler!

 

UDP broadcast on the local subnet is generally easiest by turning on SO_BROADCAST and sending to the IP address 0xffffffff (255.255.255.255)

However, if your wifi access point routes (as opposed to switches) between wired and wireless, you may need to actually use the subnet's broadcast address -- if your network is 192.168.1.0/24, the broadcast address is 192.168.1.255, for example. Finding the broadcast address for a host's network is actually annoying and platform-specific, so start with 255.255.255.255 and hopefully it works fine (it usually does, once SO_BROADCAST is turned on.)
 




#5295321 Peer-to-peer Massive Multiplayer Game

Posted by hplus0603 on 06 June 2016 - 12:12 PM

I played around with the idea of a tree-structured network topology a while back, and came to the conclusion that the additional latency from the additional hops is not worth it, compared to just letting everybody connect in a star to the host. Network bandwidth is not generally a main limiting factor.





