bentor

MySQL as game server storage


Hi

First off, not sure if this is the right place for this post.

I'm just coding a 2D hack & slash MOG for fun and practice, to see where it leads :)

Anyway, I'm wondering which is the better way to handle database writes on the server side:

1. Keep the necessary data in memory and write changed data to the database every now and then,

or

2. Write data directly to the database whenever something changes.

The current server stores all data in text-based files, and I'm planning to change that to use MySQL.

I see good and bad in both approaches, but I'd really like to hear some opinions.


Bentor

Commit changes to the database every now and then, and when something interesting (like a trade) occurs. Constantly storing and retrieving records from the database (effectively using it as a main memory) would be incredibly inefficient, and put a lot of unnecessary strain on the database server.

For sensitive operations such as the trade I mentioned above, consider using transactions to guarantee that either the entire trade completes, or none of it does.
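The escrow-like guarantee can be sketched in a few lines. The following is an illustrative sketch only, using Python's built-in sqlite3 module so it runs standalone (a MySQL client library exposes the same commit/rollback semantics); the table and column names (`players`, `items`, `gold`, `owner`) are made up for the example:

```python
import sqlite3

def trade_item(conn, item_id, seller_id, buyer_id, price):
    """Atomically move gold and an item between two players.
    Either every update commits, or none of them do."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on exception
            cur = conn.execute(
                "UPDATE players SET gold = gold - ? WHERE id = ? AND gold >= ?",
                (price, buyer_id, price))
            if cur.rowcount != 1:
                raise ValueError("buyer lacks funds")
            conn.execute(
                "UPDATE players SET gold = gold + ? WHERE id = ?",
                (price, seller_id))
            cur = conn.execute(
                "UPDATE items SET owner = ? WHERE id = ? AND owner = ?",
                (buyer_id, item_id, seller_id))
            if cur.rowcount != 1:
                raise ValueError("seller no longer owns item")
        return True
    except ValueError:
        return False  # transaction was rolled back; nothing changed
```

The `with conn:` block commits only if no exception escapes; any failure rolls both sides back, so gold and item ownership can never get out of sync.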

Hi

Yes, good point about using transactions for trading. Character creation and other critical operations should go through transactions too.

Option 1 was originally how I intended to do everything, but mixing both approaches as needed sounds like the best solution.

Thanks for pointing me in the right direction. I'm a bit rusty, since it's been a long time since I last wrote any code :)


Bentor

Share this post


Link to post
Share on other sites
Generally you'll want to have some sort of caching mechanism to keep oft-read data closer to the CPU. In a non-distributed scenario you may not need a way to invalidate the cache from outside the server application. But for any distributed system, or any system where interaction with "hot" data may come from multiple locations (say, a player making a trade through a web-based interface while also playing the game through the client software), you'll need a way to invalidate the cached data from the outside. In other words, the system needs to generate messages indicating that a database record has been dirtied, and the server needs to be able to respond to them, though the implementation could take many forms.

One way to do this is for each cached item to have a "dirty" flag, which essentially means, "next time I read this, get an update from the database first." Any time something occurs that causes a database write, you check whether that record is already cached and mark it as "dirty", so that the presentation the player receives is always in step with the database.
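As a rough sketch of that dirty-flag idea (the names `RecordCache` and `CachedRecord` are invented for illustration; a real server would also bound the cache size and deal with concurrency):

```python
class CachedRecord:
    """One cache entry. When an outside write invalidates it, the
    next read refetches from the database before returning."""
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.dirty = False

class RecordCache:
    def __init__(self, fetch):
        self._fetch = fetch      # callable: key -> fresh value from the DB
        self._entries = {}

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None or entry.dirty:
            # Miss or stale: refresh from the database first
            entry = CachedRecord(key, self._fetch(key))
            self._entries[key] = entry
        return entry.value

    def invalidate(self, key):
        """Called when an invalidation message arrives from outside."""
        if key in self._entries:
            self._entries[key].dirty = True
```

Reads that hit a clean entry never touch the database; an invalidation message just flips the flag, deferring the actual refetch until the record is needed again.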

Transactions are an important concept, as already mentioned; think of a transaction like a real-world escrow service.

Finally, relational database systems may not be the ideal or whole solution. Other highly scalable data mechanisms exist, such as non-relational databases or large-scale distributed dictionaries (key-value pairs, essentially std::map on steroids). Google's highly scalable (and already scaled [grin]) infrastructure includes all of these, and they all serve duty in most of the familiar Google applications you know today.

Also keep in mind that perfectly normalized data tables may not always be the most scalable solution from a performance perspective, and they can also make it difficult to glean statistical insights. It would be premature optimization to abandon the usual normalization principles very early, but do be aware that any database structure you come up with may need to change in order to scale up performance-wise (and we're talking out past thousands of concurrent users, probably), so plan accordingly. What they teach in your college database course doesn't always meet the demands of the real world, and not everyone can just throw more and bigger iron at the problem.

Quote:
Original post by Windryder
Commit changes to the database every now and then, and when something interesting (like a trade) occurs. Constantly storing and retrieving records from the database (effectively using it as a main memory) would be incredibly inefficient, and put a lot of unnecessary strain on the database server.

You can't commit persistent state changes to the database every now and then because a) if a change would fail you need to know immediately, and b) if the server crashes you lose data that wasn't yet submitted. The way it typically needs to work is when the server attempts to make a change to persistent state, it sends it to the database first and if the change comes back successful, the state is modified on the server.

Quote:
Original post by Ravyne
Generally you'll want to have some sort of caching mechanism to keep oft-read data closer to the CPU. In a non-distributed scenario you may not need a way to invalidate the cache from outside the server application. But for any distributed system, or any system where interaction with "hot" data may come from multiple locations (say, a player making a trade through a web-based interface while also playing the game through the client software), you'll need a way to invalidate the cached data from the outside. In other words, the system needs to generate messages indicating that a database record has been dirtied, and the server needs to be able to respond to them, though the implementation could take many forms.

In the non-distributed case this works automatically by having all changes on the server validated by the database, like I mentioned above. In the distributed case I would maintain dirty flags on the database, and simply have any server changes fail with an E_DIRTY error code if their caches are stale (at which time they can fetch the up-to-date data and re-attempt the change).
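One common way to implement that E_DIRTY scheme is optimistic concurrency with a version number per record: a write must name the version it was based on, and a mismatch means the writer's cache was stale. A toy sketch (the class and error names here are invented for illustration):

```python
class StaleCacheError(Exception):
    """Analogue of the E_DIRTY error code described above."""

class VersionedStore:
    """Toy database: each record carries a version counter, and a write
    is rejected unless it names the current version it was based on."""
    def __init__(self):
        self._rows = {}  # key -> (value, version)

    def read(self, key):
        return self._rows.get(key, (None, 0))

    def write(self, key, value, based_on_version):
        _, current = self._rows.get(key, (None, 0))
        if based_on_version != current:
            # Writer's cached copy is stale: it must refetch and retry
            raise StaleCacheError(key)
        self._rows[key] = (value, current + 1)
        return current + 1
```

A server catching `StaleCacheError` would fetch the up-to-date record, re-validate the change against it, and re-attempt the write, exactly the refetch-and-retry loop described above.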

Quote:
Original post by Zipster
You can't commit persistent state changes to the database every now and then because a) if a change would fail you need to know immediately, and b) if the server crashes you lose data that wasn't yet submitted. The way it typically needs to work is when the server attempts to make a change to persistent state, it sends it to the database first and if the change comes back successful, the state is modified on the server.


Note that real-time games are very different from enterprise databases. Databases are terrible at enforcing game rules. How would a database even know that position (10,20,30) is inside a wall, whereas (10,21,30) is not? Or that moving from (10,21,30) to (12,21,30) is only allowed when the cellar door is in the state "open"?
Those kinds of rules are what game simulation servers are for, and game simulation servers run in RAM.

Almost every MMO I know of uses the "occasional checkpoint" mechanism rather than funneling each property change through a database. When the property is "current hitpoints" or "X coordinate in the world," sending each change through the database is just dumb. If you crash, restoring to an older point in time just isn't a big deal.
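A minimal sketch of such an occasional-checkpoint mechanism might look like the following (the class name and the `persist` callback are assumptions for the example; a real server would also flush on shutdown and on zone changes):

```python
import time

class CheckpointWriter:
    """Accumulate low-stakes property changes (position, HP) in memory
    and flush them to durable storage only every `interval` seconds."""
    def __init__(self, persist, interval=30.0, clock=time.monotonic):
        self._persist = persist      # callable taking a dict of pending changes
        self._interval = interval
        self._clock = clock          # injectable for testing
        self._pending = {}
        self._last_flush = clock()

    def set(self, key, value):
        # Overwrite in place: only the latest value needs to be persisted
        self._pending[key] = value

    def tick(self):
        """Call once per server frame; flushes when the interval elapses."""
        if self._pending and self._clock() - self._last_flush >= self._interval:
            self._persist(dict(self._pending))
            self._pending.clear()
            self._last_flush = self._clock()
```

Because intermediate values are coalesced in memory, a player moving sixty times a second still costs one database write per interval, not sixty per second.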

Meanwhile, for things that really matter, like trade, you do want to store the data to some durable media (a work queue, a database, what have you) before you tell the user it's completed. For multi-user trade, you even want to do this with transactional semantics, to avoid a loophole for cheating.

Quote:
Original post by hplus0603
Note that real-time games are very different from enterprise databases. Databases are terrible at enforcing game rules. How would a database even know that position (10,20,30) is inside a wall, whereas (10,21,30) is not? Or that moving from (10,21,30) to (12,21,30) is only allowed when the cellar door is in the state "open"?
Those kinds of rules are what game simulation servers are for, and game simulation servers run in RAM.

I never said the database is enforcing game rules. It's enforcing rules on persisted data. It doesn't care about anything that doesn't need to be in the database.

Quote:
Almost every MMO I know of uses the "occasional checkpoint" mechanism rather than funneling each property change through a database. When the property is "current hitpoints" or "X coordinate in the world," sending each change through the database is just dumb. If you crash, restoring to an older point in time just isn't a big deal.

I would argue that it's "dumb" to persist those properties in the first place. It's not like the user cares if they're returned to their exact X location when they log in again, or what their exact health was. As a matter of fact, the game rules will probably dictate that they spawn at a hub with full HP. However, the things you do need to persist (level, experience, etc.) need to be funneled through the database first, because if those state changes fail on the database for whatever reason, the server can't go on with incorrect data. If something like that happens, the server might even have to take more drastic action, like kicking the offending user.

Quote:
Meanwhile, for things that really matter, like trade, you do want to store the data to some durable media (a work queue, a database, what have you) before you tell the user it's completed. For multi-user trade, you even want to do this with transactional semantics, to avoid a loophole for cheating.

My point is that if the data doesn't "really matter", it isn't in the database anyway. In which case transactional semantics are the rule and not an exception.

Quote:
Original post by Zipster
It's not like the user cares if they're returned to their exact X location when they log in again, or what their exact health was.
You might possibly be mistaken there. This particular thing (combined with the totally unqualified support) was how Turbine lost me as a LOTRO customer three years ago, and for that matter as a customer for any game they might ever make in the future.
Admittedly, with their millions of subscribers on a dozen or so titles they probably couldn't care less about losing me as a customer, but it always depends. If you have a total of 500 customers and you lose 200, then you do care.

In the above example, server issues were not uncommon (once every other day or so), and client crashes were regular (regular, as in 3-5 crashes in an evening).

Now, every single time, you'd find yourself rolled back to an earlier position, which was already annoying because you had to run a long way again. But worse, you would lose all your buffs and keep your debuffs. If you had just killed something, you didn't have the loot, but your HP was still reduced and your items still degraded anyway.
I assume this was probably intended as some super smart anti-exploit measure, but honestly it was the most f...ing ridiculous thing I've ever experienced.

Unluckily, inconsistencies like these, which the user perceives as unfair, can happen all too easily if you don't get your transactions right, even when you don't cause them on purpose. And, unluckily again, I don't know of any good solution either.
Putting all events into one transaction (e.g. fork() and write a memory dump to disk) once every few minutes would probably still be best. You'd lose a few minutes in the worst case, but at least everything stays consistent, so the perception isn't as negative.

Quote:
Original post by Zipster
Quote:
Meanwhile, for things that really matter, like trade, you do want to store the data to some durable media (a work queue, a database, what have you) before you tell the user it's completed. For multi-user trade, you even want to do this with transactional semantics, to avoid a loophole for cheating.

My point is that if the data doesn't "really matter", it isn't in the database anyway. In which case transactional semantics are the rule and not an exception.


Quite a few MMOs (every single one I've played, actually) store less relevant data (such as player positions and active buffs/debuffs) in the database as well; it just doesn't need to be accurate or up to date. Pushing it into the database when a player disconnects or enters a new zone is good enough. An MMO server shouldn't crash that often, but you very often want these things to persist across sessions, so they need to enter the DB at some point.

Quote:
Original post by Zipster
My point is that if the data doesn't "really matter", it isn't in the database anyway. In which case transactional semantics are the rule and not an exception.


And I think that many things do matter, but not enough to require a blocking wait for a transaction commit. Before WoW, many MMOs had the "camp" function where you'd log back in wherever you last logged out. Similarly, if you had a buff of some sort with a limited duration, that duration persisted while you were logged out, and started counting down again when you logged in.

Storing miscellaneous character information in a database at regular intervals, and when you log out, makes perfect sense, even if you don't use transactions to update those properties in the meantime. As long as the snapshot is internally consistent, and the storage of the snapshot is atomic, you're good.
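One common way to make a snapshot write atomic is the write-to-a-temp-file-then-rename trick. This sketch (JSON is chosen arbitrarily as the snapshot format here) relies on `os.replace` being an atomic rename on both POSIX and Windows:

```python
import json
import os
import tempfile

def save_snapshot(path, character):
    """Write a character snapshot atomically: dump to a temp file in the
    same directory, then rename it over the old snapshot. A reader sees
    either the old snapshot or the new one, never a half-written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(character, f)
            f.flush()
            os.fsync(f.fileno())   # make sure the bytes reach the disk
        os.replace(tmp, path)      # atomic rename over the old snapshot
    except BaseException:
        os.unlink(tmp)             # don't leave a partial temp file behind
        raise
```

Crashing mid-write leaves the previous snapshot intact, which is exactly the "roll back to the last consistent checkpoint" behavior discussed above.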
