ruby-lang

[web] High-level architecture for a strategy game


Hi, I'm designing a slow-paced naval strategy/RPG and would like your help finding the flaws in the architecture. To give you some concrete numbers, I'm planning for 2000 active players, with peaks of 500 simultaneous users. Most will have a single fleet (or more specifically, a single ship), so I'm guesstimating around 4000 fleets spread over a 400x400 grid. The game is entirely Ajax, with most of the GUI work in javascript.

There are three kinds of requests coming from the browser: orders, unit information requests, and map information requests. Orders are arrays with a command followed by one or more arguments. In pseudo-JSON, they look like this:

{"move", fleet_id, pos_x, pos_y}
{"attack", fleet_id, defender_fleet_id}
{"rob", fleet_id, defender_fleet_id}

In ports, the available commands are different:

{"repair", ship_id}
{"buy", ship_id, commodity_type, quantity}
{"sell", ship_id, commodity_type, quantity}
{"hire", ship_id, quantity}

There are more, but I'm sure you get the idea. When it receives an order, the web service gives it a timestamp and posts it to a work queue, then replies with that timestamp attached. The "work queue" has a good chance of simply being a table in the DB.

Five seconds after sending an order, and every thirty seconds after that, the browser sends an information request:

{"unit_info_req", x1, y1, x2, y2, timestamp}

In that request, x1, y1 and x2, y2 are the viewport's top-left and bottom-right corner coordinates. A regular viewport is 20x20, but I think the added flexibility won't hurt. The timestamp attribute is used to make sure orders sent since the last request were processed, so the user can be alerted if, for any reason, the server is falling behind.

The response will be relatively complex:

{ // fleets owned by the user
  { "fleet_own", x, y, fleet_name, fleet_id, flagship_class,
    {ship_name, ship_id, ship_class, hit_points, number_of_rookies, number_of_veterans, ...}, ...
  }, ...
  // fleets owned by others
  { "fleet_other", x, y, owner_name, fleet_name, fleet_id, flagship_class}, ...
  {"port_other", x, y, owner_name, port_name, port_id}, ...
  // some other information not all that relevant
  ...
  {textual_message_1, textual_message_2, ...}
}

The map information request and response are what you'd expect: coordinates and a long list of IDs representing the different kinds of terrain. Since the map doesn't change, it's requested only when the user loads the page or scrolls.

Ok, that's the frontend. Now the backend. I'll have a daemon to validate and process orders from the queue in chronological order. Some caching here may help. More specifically, I was thinking about dividing the grid into 20x20 "pages" and keeping the 40 last accessed around. I think that merging two or three of them in memory when needed will usually be faster than hitting the DB, but I'd love to hear from anyone who has done something similar.
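To make the page cache idea concrete, here's roughly what I have in mind, sketched in javascript since that's where my head is already (the backend language isn't settled, and loadPageFromDb is a placeholder for the real DB read):

// Sketch of the page cache: the grid is split into 20x20 "pages" and the
// 40 most recently used ones are kept in memory (Map preserves insertion
// order, so re-inserting a page marks it as most recently used).
const PAGE_SIZE = 20;
const MAX_PAGES = 40;
const pages = new Map(); // "px,py" -> page data

function getPage(px, py) {
  const key = px + ',' + py;
  if (pages.has(key)) {
    const page = pages.get(key);
    pages.delete(key); // refresh recency
    pages.set(key, page);
    return page;
  }
  const page = loadPageFromDb(px, py); // placeholder for the real DB read
  pages.set(key, page);
  if (pages.size > MAX_PAGES) {
    pages.delete(pages.keys().next().value); // evict the least recently used page
  }
  return page;
}

// A viewport query just merges the handful of pages it overlaps:
function pagesForViewport(x1, y1, x2, y2) {
  const result = [];
  for (let px = Math.floor(x1 / PAGE_SIZE); px <= Math.floor(x2 / PAGE_SIZE); px++) {
    for (let py = Math.floor(y1 / PAGE_SIZE); py <= Math.floor(y2 / PAGE_SIZE); py++) {
      result.push(getPage(px, py));
    }
  }
  return result;
}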

Quote:
Original post by ruby-lang
Five seconds after sending an order, and every thirty seconds after that, the browser sends an information request:
{"unit_info_req", x1, y1, x2, y2, timestamp}
In that request, x1, y1 and x2, y2 are the viewport's top-left and bottom-right corner coordinates. A regular viewport is 20x20, but I think the added flexibility won't hurt.


Not until a load of malicious users put 4 billion into each field to force your database to do a far larger query than you need. Why should people be allowed to hack your protocol to get more information anyway?

Quote:
Ok, that's the frontend. Now the backend. I'll have a daemon to validate and process orders from the queue in chronological order. Some caching here may help. More specifically, I was thinking about dividing the grid into 20x20 "pages" and keeping the 40 last accessed around. I think that merging two or three of them in-memory if needed will usually be faster than hitting the DB, but would love to hear from anyone that did something similar.


The amount of data you're talking about is tiny. A transparent cache should be more than adequate, without getting into complex schemes including subdivisions and so on, providing you're doing sane things with the DB and not using an object-relational mapper in an inefficient way.

First of all, thanks for the clearly well-thought-out comments, Kylotan.

Quote:
Original post by Kylotan
Quote:
Original post by ruby-lang
Five seconds after sending an order, and every thirty seconds after that, the browser sends an information request:
{"unit_info_req", x1, y1, x2, y2, timestamp}
In that request, x1, y1 and x2, y2 are the viewport's top-left and bottom-right corner coordinates. A regular viewport is 20x20, but I think the added flexibility won't hurt.


Not until a load of malicious users put 4 billion into each field to force your database to do a far larger query than you need. Why should people be allowed to hack your protocol to get more information anyway?


Initially I was thinking that if someone wanted to read the whole map via the API there would be nothing I could do to stop them, so performance-wise it might actually be better to let them read it in 40x40 increments. But I hadn't considered that someone would try to bring a small game server to its knees just out of spite until you pointed out the obvious.

There are a few use cases for a variable map snapshot. The first one is client-side caching; if I send a larger area than the viewport, the user will have a smoother scroll and potentially fewer server hits. The second one is if I decide to create a premium user category, a larger map view would be one of the perks. So I intend to keep it, but I'll make sure I validate the size of the request instead of sending the whole map.
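Something like this on the server side is what I mean by validating the request (just a sketch; MAX_VIEW would be 40 for regular accounts and could be raised for premium ones):

// Sketch: sanity-check and clamp a unit_info_req viewport before it reaches the DB.
const GRID_SIZE = 400;  // the map is 400x400
const MAX_VIEW = 40;    // regular accounts; premium accounts could get a larger cap

function clampViewport(x1, y1, x2, y2) {
  const coords = [x1, y1, x2, y2];
  if (!coords.every(c => Number.isInteger(c) && c >= 0 && c < GRID_SIZE)) {
    return null; // reject malformed or out-of-range requests outright
  }
  if (x2 < x1) { [x1, x2] = [x2, x1]; } // normalize corners
  if (y2 < y1) { [y1, y2] = [y2, y1]; }
  // cap the area a single request can cover
  x2 = Math.min(x2, x1 + MAX_VIEW - 1);
  y2 = Math.min(y2, y1 + MAX_VIEW - 1);
  return [x1, y1, x2, y2];
}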

Quote:
Quote:
Ok, that's the frontend. Now the backend. I'll have a daemon to validate and process orders from the queue in chronological order. Some caching here may help. More specifically, I was thinking about dividing the grid into 20x20 "pages" and keeping the 40 last accessed around. I think that merging two or three of them in-memory if needed will usually be faster than hitting the DB, but would love to hear from anyone that did something similar.


The amount of data you're talking about is tiny. A transparent cache should be more than adequate, without getting into complex schemes including subdivisions and so on, providing you're doing sane things with the DB and not using an object-relational mapper in an inefficient way.


Yeah, come to think of it, even on the 256MB VPS where I want to run it, this amount of data shouldn't be a problem. Do you have any hints on things I should keep an eye on when writing the data layer?



Here are a few general suggestions.

As long as you don't expect too much from the database, it can handle your needs.

1. The more queries you have, the higher the potential load on the database. Keep that in mind when you're trying to scale the game up to 500 simultaneous users.

2. The fewer the queries the better, since the database then has more time to deal with the expensive ones without slowing the game down (see the sketch below).

3. Caching helps a lot.
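For example, the difference between per-fleet lookups and one query per viewport (just a sketch; db.query and fleetIdsInView are placeholders for whatever your data layer provides):

// N queries, one per fleet in the viewport: N round trips to the database.
let fleets = fleetIdsInView.map(id =>
  db.query("SELECT * FROM fleets WHERE id = ?", [id]));

// One query for the whole viewport: a single round trip.
fleets = db.query(
  "SELECT * FROM fleets WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
  [x1, x2, y1, y2]);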




Some more thoughts:

Quote:
Original post by ruby-lang
The response will be relatively complex:
{ // fleets owned by the user
{ "fleet_own", x, y, fleet_name, fleet_id, flagship_class,
{ship_name, ship_id, ship_class, hit_points, number_of_rookies, number_of_veterans, ...}, ...
}, ...
// fleets owned by others
{ "fleet_other", x, y, owner_name, fleet_name, fleet_id, flagship_class},
...
{"port_other", x, y, owner_name, port_name, port_id},
...
// some other information not all that relevant
...
{textual_message_1, textual_message_2,...}
}


Firstly, is there a need to send all this information every time? Admittedly it is only every 30 seconds, so maybe it's not a big deal. Make sure your back end remembers how often it sends this, so people aren't hacking their page to get updates every 100ms or something.
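For instance, something along these lines per session (a sketch; how you key it and what you return on refusal are up to you):

// Sketch: remember when each session last polled and refuse anything that
// comes in much faster than the 30-second interval the client is supposed
// to use. The quick 5-second poll right after an order would need its own allowance.
const MIN_POLL_INTERVAL_MS = 25 * 1000;
const lastPoll = new Map(); // sessionId -> timestamp of last accepted poll

function acceptPoll(sessionId, now) {
  const last = lastPoll.get(sessionId) || 0;
  if (now - last < MIN_POLL_INTERVAL_MS) {
    return false; // too soon: serve a cached response or an error instead
  }
  lastPoll.set(sessionId, now);
  return true;
}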

Obviously all the data in these responses is likely to be the stuff you request most often (500 users polling every 30 seconds is roughly 17 such requests a second) and so a rudimentary caching scheme here is fine (eg. only hit the database for the list of ports if you haven't done so in the last 10 seconds, otherwise return the list you got back last time). Don't worry about players getting data that is slightly out of date; the very nature of a real-time web-based game is such that this is impossible to avoid anyway, since another player can always act in the time between the database retrieving some data and that data arriving in your browser. You'll have to guard against inconsistent requests on the server for precisely this reason, so you may as well take advantage of that to cache objects a little longer.
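For the port list that could be as simple as this (a sketch; fetchPortsFromDb stands in for your real data-layer call):

// Sketch: serve the cached port list if it's under 10 seconds old,
// otherwise hit the database and remember the result.
const PORT_CACHE_TTL_MS = 10 * 1000;
let cachedPorts = null;
let cachedAt = 0;

function getPorts(now) {
  if (cachedPorts && now - cachedAt < PORT_CACHE_TTL_MS) {
    return cachedPorts; // slightly stale data is fine for a game like this
  }
  cachedPorts = fetchPortsFromDb(); // placeholder for the real DB query
  cachedAt = now;
  return cachedPorts;
}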

Quote:
Ok, that's the frontend. Now the backend. I'll have a daemon to validate and process orders from the queue in chronological order.


You may find there's little point writing orders to the database just to have a daemon pull them out again. It may be more efficient just to pass them across via a named pipe or other IPC mechanism, and if your language has the relevant abstractions, it'll save you from having to reconstruct the object from the DB, etc.
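For example, if the web process and the daemon live on the same box, a Unix domain socket would do. A Node-style sketch, purely to show the shape (your stack may differ, and the socket path and message framing are placeholders):

// Sketch: hand orders straight to the daemon over a Unix domain socket
// instead of writing them to the database first. The daemon would listen
// on the same path with net.createServer.
const net = require('net');
const daemon = net.createConnection({ path: '/tmp/orders.sock' });

function sendOrder(order) {
  // newline-delimited JSON keeps the message framing trivial
  daemon.write(JSON.stringify(order) + '\n');
}

// e.g. sendOrder({ cmd: "move", fleet_id: 42, x: 10, y: 17 });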

Quote:
Original post by Kylotan
Some more thoughts:

Firstly, is there a need to send all this information every time? Admittedly it is only every 30 seconds, so maybe it's not a big deal. Make sure your back end remembers how often it sends this, so people aren't hacking their page to get updates every 100ms or something.


Hey, another door for DoS closed. I'll review my requirements. Port information probably doesn't need to be refreshed every 30 seconds.

Also, considering that the database has a good chance of being the bottleneck, it really doesn't make much sense to use it as the work queue. I'll look into IPC as you suggested, or maybe run an in-memory service like beanstalkd. It depends on how stable my daemon turns out to be.

Quote:
Five seconds after sending an order, and every thirty seconds after that, the browser sends an information request:
{"unit_info_req", x1, y1, x2, y2, timestamp}
In that request, x1, y1 and x2, y2 are the viewport's top-left and bottom-right corner coordinates. A regular viewport is 20x20, but I think the added flexibility won't hurt. The timestamp attribute is used to make sure orders sent since the last request were processed, so the user can be alerted if, for any reason, the server is falling behind. The response will be relatively complex:


Quote:
Now the backend. I'll have a daemon to validate and process orders from the queue in chronological order. Some caching here may help.


Why are you running 'queue checks' both ways? You're constantly querying the orders that have been queued, with your 30-second client-side checks, and you're also running a server daemon to process the orders? That doesn't make sense.

Instead, just keep the daemon running, and let it write the changes each order makes to the universe (or whatever result you want to return) into a database table (comma-separated values, maybe, if you have complex results), and then let the clients read the changes relevant to them on their own.

It's not a good idea to use the 30-second checking system (polling is the technical term, I think) for each order, because it's possible for a malicious user to write a script that bombards your server with thousands of invalid "polling" commands per second. That's a DoS, as each polling command makes the server check whether processing is being held up and so on, which essentially means each polling command is an expensive one. Do it thousands of times per second and your server dies.

Instead, have a unified polling system for your entire application that polls the server's "changes" table (the one the daemon writes to) and extracts the updates relevant to each client's field of view. So instead of polling for each order, have just a single polling pipeline which downloads all the results and applies them to your renderer/game engine.
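Roughly like this (a sketch; readChangesSince stands in for whatever query reads the changes table):

// Sketch: the daemon appends one row per change; the single poll endpoint
// returns everything newer than the client's last timestamp that falls
// inside its field of view.
function pollChanges(viewport, lastSeenTimestamp) {
  const changes = readChangesSince(lastSeenTimestamp); // rows from the "changes" table
  return changes.filter(c =>
    c.x >= viewport.x1 && c.x <= viewport.x2 &&
    c.y >= viewport.y1 && c.y <= viewport.y2);
}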

Quote:
Original post by Shashank Shekhar
Why are you running 'queue checks' both ways? You're constantly querying the orders that have been queued, with your 30-second client-side checks, and you're also running a server daemon to process the orders? That doesn't make sense.


It makes sense to me. The client sends orders, which the daemon will process. The client also polls for updates, because it needs periodic updates and polling is how you would get them over HTTP.

Quote:
It's not a good idea to use the 30-second checking system (polling is the technical term, I think) for each order


I think the original poster was just trying to say that after an order, you poll soon after (5 seconds) so that you get quick feedback on what you've done. You still need to poll every 30 seconds to get updates that could come from other clients.

Quote:
I think the original poster was just trying to say that after an order, you poll soon after (5 seconds) so that you get quick feedback on what you've done. You still need to poll every 30 seconds to get updates that could come from other clients.


Yeah, that's what I suggested in the end, right? What's important here are the last few words of my sentence: "for each order". I still think it will be hard to track errors and keep every player looking at the same synchronized screen (it's a battle game; you need to see the same screen state to make correct decisions), and it's just cleaner to have a single channel talking (polling) to the server for updates. This way, smarter and stronger security could be added to the client-server interaction without incurring the overhead of authenticating the client 'n' times, where 'n' is the number of orders sent per 30 seconds. That security will be needed, since this is still open to malicious spamming, but being a single point of access, it can be better guarded.
