A better way to communicate with relay server?

I've seen, before, an architecture that puts game simulation on one class of hosts (call them "simulation servers") and game "services" on another class of hosts (call them "application servers"). That can work. (This is a variant of what's called a "multi-tier architecture" -- if you include the game clients for presentation, you end up with four tiers here: presentation, simulation, services, and persistence.) You should still probably think of them as application servers, rather than as "relay servers." A "relay server" is either a network topology router (which is not what you're building here) or a forwarding proxy (which is also not what you're building here.)

Also, think a bit about the API to your application server. How will the simulation server express its requests? There are a number of RPC methods you could use, all the way from heavy-weight SOAP requests to simple XDR RPC, and a number of options in between. You might want to choose something on top of HTTP, so that you can re-use the application servers for other systems you'll need to build if you're successful (customer service, marketing, etc.)

Also note that whatever mechanism you choose should have an asynchronous API (or you should make it asynchronous internally in the game server) to avoid blocking the simulation loop.
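
For illustration, a minimal sketch of making it asynchronous internally: a worker thread performs the blocking round-trip to the application server while the simulation thread queues requests and picks up completions once per tick. AsyncRequester and blockingCall are hypothetical names, not from any particular library.

// A worker thread does the blocking request/reply round-trip; the
// simulation thread calls send() and poll() and never blocks.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>

class AsyncRequester {
public:
    using Callback = std::function<void(const std::string& reply)>;

    AsyncRequester() : worker_(&AsyncRequester::run, this) {}

    ~AsyncRequester() {
        { std::lock_guard<std::mutex> lock(mutex_); stop_ = true; }
        wakeup_.notify_one();
        worker_.join();
    }

    // Called from the simulation thread; returns immediately.
    void send(std::string request, Callback callback) {
        { std::lock_guard<std::mutex> lock(mutex_);
          pending_.push({std::move(request), std::move(callback)}); }
        wakeup_.notify_one();
    }

    // Called once per simulation tick; runs callbacks for finished requests.
    void poll() {
        std::queue<std::function<void()>> ready;
        { std::lock_guard<std::mutex> lock(mutex_); std::swap(ready, completed_); }
        while (!ready.empty()) { ready.front()(); ready.pop(); }
    }

private:
    struct Item { std::string request; Callback callback; };

    void run() {
        for (;;) {
            Item item;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                wakeup_.wait(lock, [&] { return stop_ || !pending_.empty(); });
                if (stop_) return;
                item = std::move(pending_.front());
                pending_.pop();
            }
            // The blocking round-trip happens here, off the simulation
            // thread. blockingCall is a placeholder for whatever RPC
            // mechanism you pick.
            std::string reply = blockingCall(item.request);
            std::lock_guard<std::mutex> lock(mutex_);
            completed_.push([cb = std::move(item.callback), reply] { cb(reply); });
        }
    }

    std::string blockingCall(const std::string& request) { return "ok:" + request; }

    std::mutex mutex_;
    std::condition_variable wakeup_;
    std::queue<Item> pending_;
    std::queue<std::function<void()>> completed_;
    bool stop_ = false;
    std::thread worker_;  // declared last so it starts after the other members
};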

Which specific library you use (proxify? thrift? protocol buffers? J2EE?) depends on which language/system you use to implement the stack. If you use a different language/stack on the simulation server than on the application server, you gain flexibility, but the integration work becomes harder, and making sure both sides agree when two different code bases implement the same protocol is an additional challenge.
enum Bool { True, False, FileNotFound };

Other than the Windows API and the Oracle C++ library, I don't have any dependencies. I'm using asynchronous IOCP to handle client connections. As far as the current communication between the game server and the relay (or rather application) server goes, it's a simple TCP connection using very simple custom protocol buffers. An example would be:


/*
0x00, 0x01 : Header
0x02, 0x00 : Character name length
0x41, 0x00 : Null-terminated character name ("A")
*/
unsigned char dbRequest[] = { 0x00, 0x01, 0x02, 0x00, 0x41, 0x00 };

/*
0x01 : Query successful
0x05, 0x00, 0x00, 0x00 : Character stat
*/
unsigned char dbReplySuccess[] = { 0x01, 0x05, 0x00, 0x00, 0x00 };

/*
0xFF : Query failed
*/
unsigned char dbReplyFailure[] = { 0xFF };

To me it looks alright for a small number of requests, but I imagine it would not be enough if requests were to burst to something like 100 at once.

The connection between client and game server also uses a similar approach, with better-structured protocol buffers. The game server is almost never proactive and only responds when there's a client action.

"Protocol Buffers" is a specific data format defined by Google.
If you want a generic names for "binary blobs of data that I know how to parse" then perhaps "PDU" is a better name (for "Protocol Data Unit.")

Also:

0x02, 0x00 : Character name length
0x41, 0x00 : Null-terminated character name ("A")


Belt and suspenders is actually not good engineering practice, because you'll end up using more space than necessary, and, more importantly, if one part of the code uses only one option (say, the null terminator) then the other (the length prefix) may be broken without you noticing.
Thus, pick one encoding or the other, and religiously enforce it.
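
For example, a minimal sketch of a length-prefix-only encoding (one possible choice; the 16-bit little-endian prefix matches the example packet earlier in the thread, and writeString/readString are illustrative helpers):

// Commit to one encoding: 16-bit little-endian length prefix, no null
// terminator, enforced on both the write and the read side.
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// Append a 16-bit little-endian length, then the raw bytes.
void writeString(std::vector<uint8_t>& out, const std::string& s) {
    if (s.size() > 0xFFFF) throw std::length_error("string too long for u16 prefix");
    out.push_back(static_cast<uint8_t>(s.size() & 0xFF));
    out.push_back(static_cast<uint8_t>((s.size() >> 8) & 0xFF));
    out.insert(out.end(), s.begin(), s.end());
}

// Read the string back, advancing pos; throws on truncated input.
std::string readString(const std::vector<uint8_t>& in, std::size_t& pos) {
    if (pos + 2 > in.size()) throw std::runtime_error("truncated length prefix");
    std::size_t len = in[pos] | (static_cast<std::size_t>(in[pos + 1]) << 8);
    pos += 2;
    if (pos + len > in.size()) throw std::runtime_error("truncated string body");
    std::string s(in.begin() + pos, in.begin() + pos + len);
    pos += len;
    return s;
}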

To me it looks alright for a small number of requests, but I imagine it would not be enough if requests were to burst to something like 100 at once.


In general, you want to use some kind of language to describe packet contents, and generate the code that reads and writes those packets. That way, you debug the generator once, and you don't have binary encoding bugs in individual packets after that.
This is the approach taken in most successful tools/libraries, including Google Protocol Buffers, Apache Thrift, and XDR RPC.
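
As a concrete sketch of that approach, here's what the earlier character request/reply might look like with Google Protocol Buffers. The .proto schema, message names, and generated header below are invented for illustration; only the Protocol Buffers API calls themselves (SerializeToString, ParseFromString, generated accessors) are real.

// character.proto (hypothetical):
//   syntax = "proto3";
//   message CharacterRequest { string name = 1; }
//   message CharacterReply   { bool ok = 1; int32 stat = 2; }
//
// protoc --cpp_out=. character.proto   generates character.pb.h / .cc
#include <cstdint>
#include <string>
#include "character.pb.h"  // hypothetical generated header

std::string encodeRequest(const std::string& characterName) {
    CharacterRequest request;
    request.set_name(characterName);
    std::string wire;
    request.SerializeToString(&wire);  // generated code, debugged once
    return wire;
}

bool decodeReply(const std::string& wire, int32_t& statOut) {
    CharacterReply reply;
    if (!reply.ParseFromString(wire)) return false;  // malformed bytes
    statOut = reply.stat();
    return reply.ok();
}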

Using a custom binary protocol, instead of formulating HTTP requests and using HTTP as your transport, means that your application server is harder to talk to from the rest of the world. Sometimes, that doesn't matter. And sometimes, you'll kick yourself for getting it wrong :-)
enum Bool { True, False, FileNotFound };

The game server creates a statement like "opcode, user_id, item_id" and sends it to the relay server. The relay server then fills an SQL procedure with those values, runs the query, and sends a result back to the game server depending on the query result.
I'm not sure this is a very good idea altogether. What do you expect to gain from this? You already stated "asynchronous IOCP", so the one thing about SQL that is painful (queries take a long time) doesn't really affect you.

Now, instead of building the SQL statement yourself (which should be an almost trivial string concatenation: stored procedure name followed by two or three parameters) and interpreting the status code whenever it comes in asynchronously, you are building a custom packet and interpreting a custom reply with a status code. That is pretty much the same amount of work (give or take a few dozen cycles), except you have now added millisecond-scale latency to your query processing and basically doubled the number of network packets on the wire.
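
For comparison, a sketch of the direct route using OCCI, the Oracle C++ library already in the dependency list. The update_item procedure and its parameters are placeholders, and connection setup/teardown is omitted.

// Call the stored procedure directly from the game server; interpret the
// result here instead of routing a custom packet through a relay.
#include <occi.h>

using oracle::occi::Connection;
using oracle::occi::SQLException;
using oracle::occi::Statement;

// Returns true on success; conn is an already-established connection.
bool runItemUpdate(Connection* conn, int opcode, int userId, int itemId) {
    Statement* stmt = conn->createStatement(
        "BEGIN update_item(:1, :2, :3); END;");  // bind by position
    stmt->setInt(1, opcode);
    stmt->setInt(2, userId);
    stmt->setInt(3, itemId);
    bool ok = true;
    try {
        stmt->executeUpdate();
    } catch (const SQLException&) {
        ok = false;  // interpret the failure here; no extra network hop
    }
    conn->terminateStatement(stmt);
    return ok;
}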

Traffic is neither free nor unlimited. Sending out a greater number of packets per second generally means that more packets will be dropped, and that you will have to pay more money. Even if internal traffic at your datacenter is "free", it isn't really free (you'll pay for it one way or the other), and its bandwidth sure isn't unlimited.

"Protocol Buffers" is a specific data format defined by Google.
If you want a generic names for "binary blobs of data that I know how to parse" then perhaps "PDU" is a better name (for "Protocol Data Unit.")

Also:


0x02, 0x00 : Character name length
0x41, 0x00 : null terminated character name

Belt and suspenders is actually not good engineering practice, because you'll end up using more space than necessary, and more importantly, if one part of the code just uses one option (say, null terminated) then the other may be broken without you noticing (length prefix.)
Thus, pick one, or the other, encoding, and religiously enforce it.

To me it looks alright for small number of requests but I imagine it would not be enough if the requests were to burst to a number like 100 momentarily.


In general, you want to use some kind of language to describe packet contents, and generate the code to read and write those packets. That way, you debug the generator once, and then don't have packet binary data encoding bugs after that.
This is the approach taken in most successful tools/libraries, including Google Protocol Buffers, Apache Thrift, and XDR RPC.

Using a custom binary protocol, instead of formulating HTTP requests and using HTTP as your transport, means that your application server is harder to talk to from the rest of the world. Sometimes, that doesn't matter. And sometimes, you'll kick yourself for getting it wrong :-)

I have taken a small look into protocol buffers and it looks pretty nice. I'll try to switch to that instead of using my custom "PDU", so thanks for the suggestion. Although my issue still exists, which is "how" to transfer the data in between.


Thanks for your reply. The game server is a resource-heavy application, so I was hoping that if, in the future, a situation arises where separating the game server and the database server onto different machines would decrease machine load considerably, I could do that without any problem.

"how" to transfer data between game servers and application servers?
Use sockets. That's the only way. There are libraries that help you -- they use sockets underneath.
TCP is fine in the data center, especially for requests that will be asynchronous anyway.

The protocol itself should probably be defined as a set of request/reply pairs.
Request might be "get me a login token, user name is <string> and user password supplied is <string>"
Response to that might be "either success, with login token <token>, or failure, with message <message>"

Define your protocol as a set of these pairs. Encode the fundamental data of those pairs of request and replies using some encoding (such as protocol buffers.) Send request on TCP connection; when full reply is available, you have the answer. Success!
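
A minimal sketch of that "send request, wait for full reply" step, assuming a 4-byte little-endian length frame around each encoded message (one common choice, not the only one). Blocking Winsock calls are shown for clarity; a real server would drive the same framing through its existing IOCP machinery.

// Each message on the wire: a 4-byte little-endian length, then the
// encoded bytes (protocol-buffer output or any other encoding).
#include <winsock2.h>   // link with ws2_32.lib
#include <cstdint>
#include <string>

static bool sendAll(SOCKET s, const char* data, int len) {
    while (len > 0) {
        int n = send(s, data, len, 0);
        if (n <= 0) return false;   // connection closed or error
        data += n; len -= n;
    }
    return true;
}

static bool recvAll(SOCKET s, char* data, int len) {
    while (len > 0) {
        int n = recv(s, data, len, 0);
        if (n <= 0) return false;
        data += n; len -= n;
    }
    return true;
}

bool sendFramed(SOCKET s, const std::string& body) {
    uint32_t len = static_cast<uint32_t>(body.size());
    char header[4] = { char(len), char(len >> 8), char(len >> 16), char(len >> 24) };
    return sendAll(s, header, 4) &&
           sendAll(s, body.data(), static_cast<int>(body.size()));
}

bool recvFramed(SOCKET s, std::string& body) {
    unsigned char header[4];
    if (!recvAll(s, reinterpret_cast<char*>(header), 4)) return false;
    uint32_t len = uint32_t(header[0]) | (uint32_t(header[1]) << 8) |
                   (uint32_t(header[2]) << 16) | (uint32_t(header[3]) << 24);
    body.resize(len);
    return len == 0 || recvAll(s, &body[0], static_cast<int>(len));
}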
Later, you could then re-define the mapping between "data structure" and "network bytes" to support things like making a request through HTTP POST and receiving a reply in JSON format.
enum Bool { True, False, FileNotFound };

That is close to what I'm using currently, though rather poorly implemented compared to the way you've described it. Would a single TCP connection be enough (for, let's say, an average of 20 requests per second), or is a pool of multiple connections necessary?


No need for more than a single connection. One TCP connection will shove as much data over the cable as 20,000 connections will (actually, 20,000 connections would do less, because maintaining them is a lot more trouble: a lot more small datagrams, a lot more interrupts...).

Thanks for all the help, guys. I'll go with a single TCP connection and Google Protocol Buffers as suggested.

