Online Updater (technology behind it)

Hello everyone, I'd like to know a bit more about online updaters, like the ones used in Guild Wars or World of Warcraft. What's the technology behind them - are there any commercially available updaters? What about the server backend? I tried googling, but "updater", "online", etc. just bring up too many links, and adding specific terms like "updater guildwars" turns up pretty useless stuff (besides the fact that they have one, which I already knew :)). How would you go about making one? A standard web server? Some custom server app running and distributing tasks? Or even something like peer-to-peer software? Any pointers in the right direction, anyone? Thanks!
visit my website at www.kalmiya.com
It depends on the type of game, the way assets are implemented and the way your production pipeline is organized.
Let's assume it's for a roleplaying game (not an MMO), with around 10 GB of content, packaged in several 1-2 GB zip files.

Furthermore, there are some additional directories with EXEs, DLLs and scripts.

The zip files don't need to be bit-identical after an update; if the files inside match the "online current version", that would already be acceptable.

Alternatively, we could live with not updating released zip files, only adding new ones (the software would override old files with the versions in the later archives, ignoring the ones in earlier archives).

visit my website at www.kalmiya.com
For the file updating, you can just replace the files within the original archives (zlib and its derivatives can alter zip archives) and match every archive against checksums published online for every released/playable version. Of course, this means you will have to download the few bytes of checksum data every time you start the game, since a locally stored copy could otherwise be altered.
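A minimal sketch of that startup check in Python (the manifest URL and its one-hash-per-line format are my own assumptions, not how any shipping game actually does it):

import hashlib
import urllib.request

MANIFEST_URL = "http://example.com/patch/manifest.txt"  # hypothetical

def sha256_of(path):
    # hash in chunks so multi-gigabyte archives don't fill RAM
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def files_needing_update():
    # manifest lines are assumed to look like: "data1.zip <sha256>"
    stale = []
    with urllib.request.urlopen(MANIFEST_URL) as resp:
        manifest = resp.read().decode("utf-8")
    for line in manifest.splitlines():
        name, expected = line.split()
        try:
            if sha256_of(name) != expected:
                stale.append(name)
        except FileNotFoundError:
            stale.append(name)  # missing files also need downloading
    return stale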
Do a checksum of each local file.

Compare it to a checksum of each target file.

If there is a difference, note that the file in question needs updating.

In some cases, the "file" might be a complex unit -- i.e., it might be an archive, or even a directory. You can then recurse, and do checksums for the parts of the file in question.

There are a number of ways you can make this faster or more robust (point 1 is sketched in code after this list):
1> Keep a list of the checksums of the files on the HDD, and their modification dates & sizes. If the modification date hasn't changed, on a "fast" update, treat the checksum you have precalculated as the correct checksum.

2> Also pass the date, size and a version header of the file in question.

...
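Here is a sketch of the "fast update" from point 1: keep a local cache of (size, mtime, checksum) per file, and only re-hash files whose size or modification date changed. The cache file name and JSON format are made up for illustration:

import hashlib
import json
import os

CACHE_PATH = "checksums.json"  # hypothetical cache location

def load_cache():
    try:
        with open(CACHE_PATH) as f:
            return json.load(f)
    except (FileNotFoundError, ValueError):
        return {}

def checksum(path, cache):
    st = os.stat(path)
    entry = cache.get(path)
    if entry and entry["size"] == st.st_size and entry["mtime"] == st.st_mtime:
        return entry["sha256"]  # date/size unchanged: trust the cached value
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    cache[path] = {"size": st.st_size, "mtime": st.st_mtime,
                   "sha256": h.hexdigest()}
    return cache[path]["sha256"]

def save_cache(cache):
    with open(CACHE_PATH, "w") as f:
        json.dump(cache, f)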

Once you have figured out which files have changed, there are bunches of ways to distribute them. One option would be to implement a torrent client, with the central server providing a large number of "free seeds". Another is simple FTP-style access.

Moving files over the internet is not a hard problem.
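For the FTP-style option, the client side really can be this small (Python sketch; the content server URL is hypothetical):

import os
import urllib.request

BASE_URL = "http://example.com/patch/"  # hypothetical content server

def download(name):
    # write to a temp name first, so a crash mid-download never
    # replaces a good file with a half-written one
    with urllib.request.urlopen(BASE_URL + name) as resp, \
         open(name + ".new", "wb") as out:
        while True:
            chunk = resp.read(1 << 16)
            if not chunk:
                break
            out.write(chunk)
    os.replace(name + ".new", name)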

...

I'm not aware of anyone having written an auto-patch server of that sort. Might be an interesting hobby project. :) Probably someone has.

I can think of a few reasonably robust things that behave in ways similar to a game patch server, for which source code can be downloaded:

RPM for Red Hat does something somewhat similar, as does most package-management software for Linux distributions.

Cygwin's installer does something somewhat similar. More similar, really.
Three Rings Design have open-sourced theirs. It's in Java, just so you know. You'll find it on their page; click the link for Game Gardens, I think it is.
Let he who will move the world first move himself--Socrates
Quote:Original post by NotAYakk

Moving files over the internet is not a hard problem.
No, and it's not the problem a patch server tries to solve.

Quote:I'm not aware of anyone having written an auto-patch server of that sort. Might be an interesting hobby project. :) Probably someone has.


The problem comes from determining *what* to send.

Consider just the production cycle (development, testing, versioning, QA, I18N, hotfixes, structure changes in data, even changes in the process, 10 teams working on the same asset set, 5 more teams working on future and past versions).

One way is to maintain a VMS (version-management system) with the production assets. Clients requesting a patch get data directly from there. Merging and replacement are performed client-side. This ties transparently into production, is fairly trivial to implement, and allows for solid versioning. A full patch is guaranteed to retrieve only what's needed, and exactly that. Hotfixes are trivial. The client needs to send a single value (the date up to which to check out) to retrieve all content. This approach can have high server demands.
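With Subversion as the VMS, for example, the "single value" update could be little more than a date-based checkout (a sketch; the working-copy path is an assumption):

import subprocess

def update_to(date, working_copy="assets"):
    # svn accepts a date as a revision, e.g. -r {2009-01-15}, and
    # resolves it to the last revision on or before that date
    subprocess.check_call(
        ["svn", "update", "-r", "{%s}" % date, working_copy])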

The other approach is the file system: have all assets as files. These are tagged with every release, packed, and delivered to clients. Hotfixes are a minor inconvenience, since they require the full production cycle of a 100 MB archive for just a few KB of patches. A full patch requires downloading obsoleted resources. The client needs to perform multiple steps to download a patch.

The third approach is on-the-fly caching. Here, raw resources are cached as they are received - something like Second Life. Integrity of resources is secondary, since clients know how to work with an incomplete state. Updates are constant, and there is no strict per-client versioning.


One of the bigger problems is segmentation of data: which parts are server-side, which client-side? Put too much on the client, and you get a nightmare.

How important is client-side data? Consider the recent AoC complaints over DPS. What happens if clients change assets - do these affect the game in any way? If they can, how will you share this data: trust the client fully, partially, or not send the data at all?

How is the data marked? Who maintains that? How much work is required to move a piece of data from server-side to client-side? What's the full turnaround time for a single feature? One week, two, four?

How will testing and prototype servers be maintained compared to live? How will major additions be made, perhaps those that need to break the format?
Unless you have especially complicated requirements, is there any reason not to just use rsync and be done with it?
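For reference, "just use rsync" could be as little as one call per update - delta transfer and resumption come for free (the server module name here is an assumption):

import subprocess

def sync_game(dest="game"):
    subprocess.check_call([
        "rsync",
        "-a",          # preserve directory structure and permissions
        "-z",          # compress on the wire
        "--partial",   # keep half-transferred files so resume works
        "--delete",    # drop files removed in the new version
        "rsync://patch.example.com/game/", dest])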
Quote:The problem comes from determining *what* to send.

Consider just the production cycle (development, testing, versioning, QA, I18N, hotfixes, structure changes in data, even changes in the process, 10 teams working on the same asset set, 5 more teams working on future and past versions).

How is this different than the standard build system problem of "figuring out which files are needed for the install"?
The standard build problem is solved once and applied to everybody, and is typically not significantly limited by bandwidth. You send all the files and overwrite everything in place.

Patching typically has to work with many people who may be in different configurations, and needs to be able to resume halfway, all while attempting to keep bandwidth costs down by only sending part of the data.
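The "resume halfway" part is usually just an HTTP Range request: ask the server only for the bytes you don't have yet. A sketch, assuming the server supports ranges (most static file servers do) and a hypothetical URL:

import os
import urllib.request

def resume_download(url, dest):
    have = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url, headers={"Range": "bytes=%d-" % have})
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content means the server honored the range;
        # a plain 200 would resend the whole file, so don't append
        if resp.status != 206 and have:
            raise RuntimeError("server ignored Range; restart from scratch")
        with open(dest, "ab") as out:
            while True:
                chunk = resp.read(1 << 16)
                if not chunk:
                    break
                out.write(chunk)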

