Ways to manage changing designer data alongside changing player data



#1 Kylotan   Moderators   -  Reputation: 3163

Posted 13 February 2012 - 06:29 AM

I have an online game where players get to shape the world in some way - eg. Ultima Online's housing, where you get to build your houses directly onto certain parts of the world map. These are changes that should persist over time as part of the persistent world.

At the same time, the design team are adding new content and amending old content to improve and extend the game for new players. They will do this on a development server first during testing and then later have to merge their work in with the players' "work" on the live server.

Assuming we fix the game design issues - eg. players can only build in designated areas, so they never clash geographically with designer edits - what are good ways to handle the data or arrange the data structures so as to avoid conflicts when new designer data is merged in with new player data?
  • You can use 2 separate files/directories/databases, but then your read operations are significantly complicated.
  • You can use 2 different namespaces within one store, but creating those namespaces may be non-trivial and dependencies aren't clear. (eg. In an RDBMS you may allocate all the primary keys below a certain number, eg 1 million, to be designer data, and everything above that point to be player data. But that info is invisible to the RDBMS and foreign key links will cross the 'divide', meaning all tooling and scripts need to explicitly work around it.) There's a sketch of this below the list.
  • You can always work on the same shared database in real-time, but performance may be poor and the risk of damage to player data may be enhanced. It also doesn't extend to games that run on more than 1 server with different world data.
  • ...any other ideas?
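A minimal sketch of the ID-range idea from the second bullet, assuming a single SQLite store; the table, columns, and helper are illustrative, not an actual schema from any of these games:

```python
import sqlite3

# Designer rows take IDs below 1,000,000; player rows take IDs at or above it.
# The 'source' column is redundant with the range but convenient for tooling.
DESIGNER_MAX_ID = 1_000_000

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE world_object (
        id     INTEGER PRIMARY KEY,   -- no AUTOINCREMENT: IDs are allocated by range
        source TEXT NOT NULL,         -- 'designer' or 'player'
        name   TEXT NOT NULL
    )
""")

def next_id(conn, source):
    """Allocate the next free ID from the range that belongs to `source`."""
    if source == "designer":
        lo, hi = 1, DESIGNER_MAX_ID - 1
    else:
        lo, hi = DESIGNER_MAX_ID, 2**62
    (next_free,) = conn.execute(
        "SELECT COALESCE(MAX(id), ?) + 1 FROM world_object WHERE id BETWEEN ? AND ?",
        (lo - 1, lo, hi),
    ).fetchone()
    return next_free

def insert_object(conn, source, name):
    oid = next_id(conn, source)
    conn.execute("INSERT INTO world_object (id, source, name) VALUES (?, ?, ?)",
                 (oid, source, name))
    return oid

print(insert_object(conn, "designer", "castle gate"))   # 1
print(insert_object(conn, "player",   "player house"))  # 1000000
```

As the bullet notes, the ranges are invisible to the RDBMS itself, so foreign keys and tooling still have to respect the divide by convention.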
It occurs to me that although this is primarily an issue for online games, the concepts may apply to modding too, where the community creates mods at the same time that the developers patch their game. Are any strategies used here to reduce the chance of mods breaking when new patches come out?


#2 meeshoo   Members   -  Reputation: 508

Posted 15 February 2012 - 11:21 AM

In my opinion, the most elegant solution would be to apply some kind of layering to this world. That is, on the base layer you have designer data, and on another layer on top of that you have user data. You can go with different namespaces in the form of two different tables within the database, of the same structure but kept separate (each one with its own set of primary keys).

Each time you display the world to the user, you pick up data from the first layer, then apply over it data from the second layer, which should actually contain a lot less data than the first layer. Both of these could also reference a third table containing the actual location, a coordinate on the map. So when you query a certain map area, you would get data for each coordinate from both tables and override the data from the designer table with the data from the user table. This way you can ensure no conflict will occur and also get maximum performance, because if you really want, you could split the data into separate databases on separate servers, so it is also a scalable solution. Of course reading is more complicated, but I think it has more advantages than disadvantages.
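A rough sketch of this layering, assuming two identically structured SQLite tables keyed by map coordinate (all names here are illustrative): reads take the player row where one exists and fall back to the designer row.

```python
import sqlite3

# Base layer: designer data. Override layer: player data, much smaller.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE designer_layer (x INTEGER, y INTEGER, object TEXT,
                                 PRIMARY KEY (x, y));
    CREATE TABLE player_layer   (x INTEGER, y INTEGER, object TEXT,
                                 PRIMARY KEY (x, y));

    INSERT INTO designer_layer VALUES (0, 0, 'grass'), (1, 0, 'road'), (2, 0, 'grass');
    INSERT INTO player_layer   VALUES (2, 0, 'player house');
""")

def query_area(conn, x0, y0, x1, y1):
    """Merged view of a rectangle: player data overrides designer data."""
    return conn.execute("""
        SELECT d.x, d.y, COALESCE(p.object, d.object) AS object
        FROM designer_layer AS d
        LEFT JOIN player_layer AS p ON p.x = d.x AND p.y = d.y
        WHERE d.x BETWEEN ? AND ? AND d.y BETWEEN ? AND ?
        ORDER BY d.x, d.y
    """, (x0, x1, y0, y1)).fetchall()

print(query_area(conn, 0, 0, 2, 0))
# [(0, 0, 'grass'), (1, 0, 'road'), (2, 0, 'player house')]
```

The two tables could just as well live in separate databases, with the merge done in application code instead of a JOIN.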

#3 evillive2   Members   -  Reputation: 685

Posted 15 February 2012 - 11:45 AM

You can always work on the same shared database in real-time, but performance may be poor and the risk of damage to player data may be enhanced. It also doesn't extend to games that run on more than 1 server with different world data.

Table partitioning (or sub-partitioning) on a column that specifies player or core game components could be a great way to separate the two, but without knowing the complexity of the database schema this may not be practical.
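As a rough illustration (the table and column names are assumptions, and this uses modern PostgreSQL-style declarative list partitioning, which is newer than this thread), the DDL below is simply printed rather than executed against a live server:

```python
# One logical table, physically split on a column that says whether a row is
# core (designer) content or player content. PostgreSQL routes each row to the
# matching partition automatically.
PARTITIONED_SCHEMA = [
    """
    CREATE TABLE world_object (
        id     bigint NOT NULL,
        source text   NOT NULL,      -- 'designer' or 'player'
        name   text   NOT NULL,
        PRIMARY KEY (id, source)     -- the partition key must be part of the PK
    ) PARTITION BY LIST (source)
    """,
    "CREATE TABLE world_object_designer PARTITION OF world_object FOR VALUES IN ('designer')",
    "CREATE TABLE world_object_player   PARTITION OF world_object FOR VALUES IN ('player')",
]

for statement in PARTITIONED_SCHEMA:
    print(statement.strip(), end=";\n\n")
```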
Evillive2

#4 Kylotan   Moderators   -  Reputation: 3163

Posted 16 February 2012 - 01:09 PM

Thanks for the responses, guys.

Meeshoo, that's an interesting approach, but it doesn't really ensure conflicts won't occur; it just ensures one thing consistently overrides another (in your example, player data overrides designer data). Generally I'd like a solution that ensures the two data sets are always separate by design - obviously this is not easy to achieve with spatial data without some prior way of segregating the space so that each group can only work in a certain area.

evillive2, some sort of table partitioning is what I'm looking at, but it really can't be on a single shared database for various reasons (cost, performance, stability). So it's more about sharding the data on some key which is guaranteed to be unique across the shards - the obvious solution is a composite key using the shard ID, where the shard ID is a constant on each DB, which means all the DBs can be combined without data loss. It feels clumsy to me, but maybe I'm overthinking it.
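A minimal sketch of that composite key, assuming SQLite and illustrative names: each database writes rows under its own constant shard ID, so rows from the development shard and any number of live shards can be merged later without key collisions.

```python
import sqlite3

THIS_SHARD_ID = 7   # hypothetical: e.g. 0 = design/dev server, 1..n = live servers

def open_shard_db(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS world_object (
            shard_id INTEGER NOT NULL,
            local_id INTEGER NOT NULL,
            name     TEXT    NOT NULL,
            PRIMARY KEY (shard_id, local_id)   -- unique across all shards once merged
        )
    """)
    return conn

def insert_object(conn, name):
    # Allocate the next local_id within this shard only.
    (local_id,) = conn.execute(
        "SELECT COALESCE(MAX(local_id), 0) + 1 FROM world_object WHERE shard_id = ?",
        (THIS_SHARD_ID,),
    ).fetchone()
    conn.execute("INSERT INTO world_object VALUES (?, ?, ?)",
                 (THIS_SHARD_ID, local_id, name))
    return (THIS_SHARD_ID, local_id)

conn = open_shard_db()
print(insert_object(conn, "player house"))   # (7, 1)
```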

#5 Telastyn   Crossbones+   -  Reputation: 3692

Posted 16 February 2012 - 02:25 PM

If you're generating data change scripts this shouldn't be a big deal. The changed content comes from prod, so the IDs match (though if the IDs change per shard, you might need to look up by something other than the ID...). New content gets new IDs, and any references use the ID grabbed when the insert is done at the start of the script.
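A minimal sketch of that kind of change script, with made-up quest tables: the parent row is inserted first, the ID the target database hands out is captured, and every dependent insert references the captured ID rather than one hard-coded at authoring time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE quest      (id INTEGER PRIMARY KEY, title TEXT NOT NULL);
    CREATE TABLE quest_step (id INTEGER PRIMARY KEY,
                             quest_id INTEGER NOT NULL REFERENCES quest(id),
                             description TEXT NOT NULL);
""")

def apply_change_script(conn):
    cur = conn.cursor()
    # Insert the new content and grab whatever ID this particular database assigns.
    cur.execute("INSERT INTO quest (title) VALUES (?)", ("The Lost Amulet",))
    quest_id = cur.lastrowid
    # Dependent rows use the captured ID, so the same script works on any shard
    # regardless of which IDs are already taken there.
    cur.executemany(
        "INSERT INTO quest_step (quest_id, description) VALUES (?, ?)",
        [(quest_id, "Talk to the jeweller"), (quest_id, "Search the docks")],
    )
    conn.commit()

apply_change_script(conn)
```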

#6 Kylotan   Moderators   -  Reputation: 3163

Posted 17 February 2012 - 08:30 AM

Are you talking about writing scripts that perform an initial insert on the remote DB, grab the ID, and then perform further inserts based on that ID? That sounds similar to what we tried on a previous MMO, and basically it was too difficult to generate accurate and usable scripts for the data.

#7 Telastyn   Crossbones+   -  Reputation: 3692

Posted 17 February 2012 - 08:41 AM

Are you talking about writing scripts that perform an initial insert on the remote DB, grab the ID, and then perform further inserts based on that ID?


Well, having a utility generate the scripts based on a DB diff was what I was thinking of, but manually writing them up shouldn't be particularly difficult (though it's likely beyond the people doing the changes in the first place).

#8 Ashaman73   Crossbones+   -  Reputation: 5833

Posted 21 February 2012 - 07:22 AM

You have two tasks: updating existing data and data migration. We have a similar problem here with a few thousand tables.

Updating and extending data is handled by update scripts per task (some diff tools to handle/support tasks like altering a table, plus manually written update scripts to handle special requirements).

Sometimes, when data updating gets really complex in SQL, we code data migration/updating with the available application logic (batch processing).

Data migration is handled by generated scripts (inserts etc.). The migrated data gets its own unique ID range to avoid any conflicts with existing data. Sometimes it is necessary to connect migrated data with existing data; in this case you need to automate the migration script generation process to handle data unknown at development time.
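A rough sketch of generating such a migration script (the table, columns, and reserved range are illustrative): rows exported from the development database are remapped into a dedicated ID range so they cannot collide with live data, references between the migrated rows are rewritten with the same mapping, and references to already-existing live rows are left untouched.

```python
MIGRATION_RANGE_START = 5_000_000   # hypothetical range reserved for this batch

def generate_migration_sql(dev_rows):
    """dev_rows: list of dicts like {'id': 1, 'parent_id': None, 'name': 'new town'}."""
    id_map = {row["id"]: MIGRATION_RANGE_START + i for i, row in enumerate(dev_rows)}
    statements = []
    for row in dev_rows:
        new_id = id_map[row["id"]]
        parent = row["parent_id"]
        # Remap references to other migrated rows; leave references to
        # pre-existing live rows as they are.
        new_parent = "NULL" if parent is None else id_map.get(parent, parent)
        # Naive quoting is acceptable here only because this is a generated
        # script over trusted designer data.
        statements.append(
            f"INSERT INTO world_object (id, parent_id, name) "
            f"VALUES ({new_id}, {new_parent}, '{row['name']}');"
        )
    return statements

dev_rows = [
    {"id": 1, "parent_id": None, "name": "new town"},
    {"id": 2, "parent_id": 1,    "name": "town well"},
]
print("\n".join(generate_migration_sql(dev_rows)))
```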

And eventually you can try to compress the ID ranges if you fear that you will run out of them.



