Continuous Asset Data Streaming - issues/problems

16 comments, last by hplus0603 10 years, 3 months ago

I've been exploring possible advancements for MMORPGs (with a much more complex/detailed game representation on the client). One feature is game asset data being frequently/continuously streamed to the client -- not just new objects, but terrain updates, since almost the entire world is changeable and very little static map data remains. This is not just deformable scenery but also a continuing addition of new asset flavors to the game as time goes on, magnitudes more than MMORPGs have these days.

The client would have some huge file dictionaries/encyclopedias, so that once an instance of an object flavor is transferred it doesn't require a repeat. The scheme also makes heavy use of hierarchical templating to define objects, which can cut down a lot of the data required to be sent.
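To make the dictionary idea a little more concrete, here is a minimal sketch (C++) of what such a client-side flavor dictionary might look like; all of the names (FlavorRecord, resolveField, and so on) are invented for illustration and don't come from any existing engine:

#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

// Sketch of a client-side "encyclopedia" of asset flavors. Each flavor is
// stored once; object instances in the world only reference the id. A flavor
// may derive from a parent template, so only overridden fields ever need to
// be transferred over the wire. (Assumes template chains never form cycles.)
using AssetFlavorId = std::uint64_t;

struct FlavorRecord {
    std::optional<AssetFlavorId> parentTemplate; // hierarchical templating
    std::uint32_t version = 0;                   // bumped by improvement/fix updates
    std::unordered_map<std::string, std::vector<std::uint8_t>> overriddenFields; // mesh, texture, stats, ...
};

class FlavorDictionary {
public:
    bool contains(AssetFlavorId id) const { return records_.count(id) != 0; }

    // Called when the server streams a new or updated flavor definition.
    void insertOrUpdate(AssetFlavorId id, FlavorRecord record) {
        records_[id] = std::move(record);
    }

    // Resolve a field by walking up the template chain until something overrides it.
    // Returns nullptr if the flavor hasn't been streamed yet -- the caller falls
    // back to placeholder data in that case.
    const std::vector<std::uint8_t>* resolveField(AssetFlavorId id, const std::string& field) const {
        const FlavorRecord* rec = find(id);
        while (rec) {
            auto it = rec->overriddenFields.find(field);
            if (it != rec->overriddenFields.end()) return &it->second;
            rec = rec->parentTemplate ? find(*rec->parentTemplate) : nullptr;
        }
        return nullptr;
    }

private:
    const FlavorRecord* find(AssetFlavorId id) const {
        auto it = records_.find(id);
        return it == records_.end() ? nullptr : &it->second;
    }
    std::unordered_map<AssetFlavorId, FlavorRecord> records_;
};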

Issues I can think of:

- The volume of data to transfer, which can come in rushes as the player's POV moves into a much-changed location or one with previously unseen object flavors. Also the effect upon ongoing gameplay when the game flow has to share bandwidth with a background asset data stream (how much is too much?).

- Need for placeholder data to handle the delay while the client 'catches up'.

- Preloading prioritization: attempting to preload needed data before it becomes visible, based on object type and distance (LOD considerations). A rough sketch of such a priority scheme follows this list.

- In general, a much larger number of objects in the client's simulation (keeping the game processing busy).

- Server loads (a lot more data traffic to all clients).

- Dictionary/encyclopedia lookups and update decisions -- the client has to determine from the server whether there has been an update for an asset flavor (besides NEW asset flavors, there are also improvement/fix updates that can happen).

- Possibility of offline/background generic dictionary updates (if the immediate-need data flow is idle, send through changed/new data to get likely future-needed data onto the client well ahead of time).

- Platform considerations. Limited disk archiving might require discarding some previously transferred data (keeping what's most relevant to the player's situation and, unfortunately, having to reload some data as it is needed).

- Detail-level control scheme: a way to limit/tune this whole system for targeted hardware with significantly less capability (MMORPGs tend to have a wider range of target customers -- heck, tablets continue to advance).
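As a rough illustration of the preloading-prioritization item above, a simple distance-and-type scoring queue might look like the sketch below (C++). The weights, the score formula, and every name here are invented, assuming distance, object type, and LOD level are the only scoring inputs:

#include <cstdint>
#include <queue>
#include <vector>

// Hypothetical sketch: score pending asset downloads so that nearer and more
// "important" object types are fetched first, well before they become visible.
struct PendingDownload {
    std::uint64_t flavorId;
    float distanceToPlayer; // metres from the current POV
    int   typeWeight;       // e.g. terrain = 3, buildings = 2, decorations = 1
    int   lodLevel;         // coarser LODs first, refinements later

    // Lower score = download sooner.
    float score() const {
        return distanceToPlayer / static_cast<float>(typeWeight) + 10.0f * lodLevel;
    }
};

struct ByScore {
    bool operator()(const PendingDownload& a, const PendingDownload& b) const {
        return a.score() > b.score(); // makes top() the lowest-score entry
    }
};

using PreloadQueue = std::priority_queue<PendingDownload, std::vector<PendingDownload>, ByScore>;

// Usage: q.push({id, distance, weight, lod}); fetch q.top(), then q.pop().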

Things helping possible advancements:

- Higher average Internet throughputs

- SSDs to speed up the disk end of archiving/caching client data

- The usual CPU/memory improvements for general increases in performance (additional cores, etc.)

- Heavy use of a templating scheme with a lot of reuse, which can help shrink the data required to be sent for any particular object instance

All of this is something for the future -- where things MIGHT go (the current state of MMORPGs, if not fossilized, has actually degenerated).

---

Comments... Ideas...?


Not sure what you're asking really, is this for your own engine? You seem to imply (based on the start of your post "Ive been exploring possible advancements for MMORPGs") that these are not things MMORPGs do. But just about any MMORPG out there performs (and has solved) all the things you have listed, and the items listed are not really suitable for small scale discussion.

n!


This is talking about an order of magnitude more asset-update traffic, and sending asset-defining data (meshes/textures/animations/sound effects/etc.) WHILE the game is running (NOT some in-between splash-screen 'level' data loading, nor UIDs of textures/objects that are already in a static dictionary on the client as the player meets other players/houses and similar changeable data).

I'm talking about constant sets of new data on the server that could never previously exist on the client, and that is too much data to simply upload statically in data patches the way MMORPG games do now. (The terrain is also now all subdivided objects, so the content there is also a magnitude more dynamic data -- the static baked level is basically gone.)
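To make that distinction concrete, a hypothetical wire format for this kind of in-game streaming might look something like the sketch below (C++). None of these message types come from a real engine; it is only meant to show that a spawn by known flavor id is tiny, while a new definition carries the full payload exactly once:

#include <cstdint>
#include <vector>

// Purely illustrative wire format. An object spawn either references a flavor
// the client already has in its dictionary (cheap), or is preceded by a
// definition message carrying the mesh/texture/animation payload once.
enum class MessageType : std::uint8_t {
    SpawnByKnownFlavor,   // flavor id + transform only; no asset payload
    DefineNewFlavor,      // full (or template-delta) asset definition, sent once
    UpdateExistingFlavor, // improvement/fix to a flavor already on the client
    TerrainBlockDelta     // changed cells of a terrain block
};

struct MessageHeader {
    MessageType   type;
    std::uint64_t flavorOrBlockId;
    std::uint32_t payloadBytes; // zero for SpawnByKnownFlavor
};

struct Message {
    MessageHeader             header;
    std::vector<std::uint8_t> payload; // serialized definition or delta, if any
};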

I did some minor work on my own 'engine' years ago, but that was more about testing chunked, on-the-fly procedural generation of terrain between server, client, and AI nodes.


WHILE the game is running (NOT some in-between splash-screen 'level' data loading,

Yes, I understand that is what you meant. The streaming technology you mention is fairly common among games these days.

What isn't so common is streaming data over the network, but that is more a data and bandwidth issue than a software implementation problem. Large amounts of data over the network mean your bandwidth requirement goes up by an order of magnitude and thus reduces the size of your target audience, which is the main reason it isn't done, rather than any technical limitation.

n!

You could take a look at the Second Life client (which is open source C++) for some inspiration. The world consists entirely of user-created content, so there are in general no preinstalled world assets, and data including textures is streamed to the client on the fly, then cached for later access. Whether it's a particularly good or performant implementation is another matter.


I will have to look at that. I haven't seen Second Life since they got rid of the gambling (quite a while ago).


You can solve this fairly easily by using a CDN. Upload the files to a CDN like Akamai or Rackspace, then simply send the URL to your clients. Done.

Sirisian, I think the problem the OP is trying to solve is that the users may modify the environment, and thus static CDN caching won't necessarily work very well.

And, even with a CDN, if the user needs to download more data to see an area than the time it takes to travel there times the user's bandwidth, it won't work, so careful attention to level of detail and environment encoding methods is important.

And finally, when you say "upload," I think you're thinking about a hosting service like S3, which is not a CDN. A CDN works like a web cache, through DNS redirect: asset requests for a particular domain that you own are redirected to their servers "close" to the user. If those servers don't have a copy, they will let the request through to your main servers to get the data, and likely cache a copy locally for some amount of time to serve further requests for the same data. There is no "uploading" involved, and it doesn't do anything to help users whose bandwidth does not allow downloading enough data "just in time."

enum Bool { True, False, FileNotFound };

I was imagining something similar to Second Life. Stuff is changeable, but it doesn't change often. (Unless all the terrain changes every frame?) I also think you're greatly underestimating modern content delivery networks. They can be updated rather frequently and don't require fallback due to their distributed nature. As fast as you can upload the change and transfer the URLs to the clients is all it would take. I use Rackspace (which uses Akamai's servers) for something similar at my job to quickly send data to hundreds of clients. I was talking to one of the Guild Wars 2 developers, who uses Akamai for their 500 MB patching every week. It allows them to hand the files off, and Akamai distributes them to their clients.

Essentially this gets rid of a few key issues of ensuring low-latency data transfer of content to clients, especially for content that hasn't changed. It greatly cuts down on the processing required by the servers and switches the server's priority to simply determining what to tell the client to load. With a CDN you can even upload LOD models. You would still have the option to send quick delta-compressed changes to a client for terrain changes local to them, but for large blocks of terrain a CDN would be ideal.
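A delta for a locally changed terrain block could be as simple as a list of changed cells. The sketch below (C++) assumes a fixed-size height/material grid per block and isn't tied to any particular engine:

#include <cstdint>
#include <vector>

// Sketch: a small delta for one terrain block, sent directly from the game
// server for nearby edits, while whole re-baked blocks go through the CDN.
struct TerrainCellChange {
    std::uint16_t cellIndex;     // index into the block's height/material grid
    float         newHeight;
    std::uint8_t  newMaterialId;
};

struct TerrainBlockDelta {
    std::int32_t  blockX;
    std::int32_t  blockY;
    std::uint32_t baseVersion;   // block version the delta applies on top of
    std::vector<TerrainCellChange> changes;
};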

I think one of the hardest issues, if one were to use a CDN-type approach, would be with:

Effect upon ongoing gameplay when the game flow has to share bandwidth with a background asset data stream (how much is too much?)

You'd be dealing with a TCP stream that would need to be throttled so it doesn't overwhelm the game server stream. The client would ideally perform this step after being told what to download.

The idea of streaming data comes up a lot and could probably use algorithms similar to what's used in live streaming from a CDN, except that instead of variable-bit-rate codecs it just throttles the stream or switches content in the download queue to a lower LOD if it can't keep up.
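A very rough sketch of that client-side throttle (C++): a token bucket that limits how many bytes per second the background asset downloads may consume, so they don't starve the real-time game traffic. The rate numbers and names are made up, and the actual fetching/LOD-switching logic is left to the caller:

#include <algorithm>
#include <chrono>

class DownloadThrottle {
    using Clock = std::chrono::steady_clock;

public:
    DownloadThrottle(double bytesPerSecond, double burstBytes)
        : rate_(bytesPerSecond), capacity_(burstBytes), tokens_(burstBytes),
          last_(Clock::now()) {}

    // Returns true if 'bytes' may be fetched now; otherwise the caller waits,
    // or requests a lower-LOD version of the asset instead.
    bool tryConsume(double bytes) {
        refill();
        if (tokens_ < bytes) return false;
        tokens_ -= bytes;
        return true;
    }

private:
    void refill() {
        const auto now = Clock::now();
        const double elapsed = std::chrono::duration<double>(now - last_).count();
        last_ = now;
        tokens_ = std::min(capacity_, tokens_ + elapsed * rate_);
    }

    double rate_;      // sustained budget, bytes per second
    double capacity_;  // burst allowance, bytes
    double tokens_;
    Clock::time_point last_;
};

// Usage (hypothetical helpers): DownloadThrottle throttle(200 * 1024, 1024 * 1024);
// if (throttle.tryConsume(assetSize)) startFetch(asset); else queueLowerLod(asset);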

Essentially this gets rid of a few key issues of ensuring low-latency data transfer of content to clients, especially for content that hasn't changed.


I think we disagree on what "low latency" means. I think two seconds is pretty high latency for game-affecting change propagation.

Also, you don't "upload" stuff to Akamai. The easiest way to propagate a new file through Akamai (or any other CDN) is to generate a new URL for it. When the CDN gets a cache miss for this new URL, they will slurp the data from your servers, and subsequent requests will likely be served through the CDN. However, you cannot easily change the data that lives at a certain URL; you have to wait for HTTP caching headers to expire, or make an explicit call to the CDN to flush the resource, and that call may have significant latency (5-50 minutes, depending on a lot of factors, including which CDN).

So, if the latency of occasional HTTP downloads is just fine, then make each block of data available through HTTP. Make the URL of that block something like http://yourserver.com/blocks/<sha256-of-block-data>. Then you know that a particular version of a block will be uniquely cached (the chance of accidental SHA-256 collision is effectively zero). Then you have to come up with a way to let the clients know that "the block for location (15, 30) is called ad987a987987bbaa987234" -- you can't use (15, 30) as the name/address/URL, because that would require invalidating caches when the content changes.
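A sketch of that content-addressed naming scheme (C++): the SHA-256 digest is assumed to be computed elsewhere with whatever hashing library you already use, and yourserver.com is the placeholder host from above.

#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>

// The manifest maps a block's grid coordinates to the SHA-256 hex digest of
// its current contents; the URL is derived from the digest, never from the
// coordinates, so cached copies never need invalidating -- a changed block
// simply gets a brand-new URL.
struct BlockKey {
    std::int32_t x;
    std::int32_t y;
    bool operator==(const BlockKey& o) const { return x == o.x && y == o.y; }
};

struct BlockKeyHash {
    std::size_t operator()(const BlockKey& k) const {
        const std::int64_t packed =
            (static_cast<std::int64_t>(k.x) << 32) ^ static_cast<std::uint32_t>(k.y);
        return std::hash<std::int64_t>()(packed);
    }
};

// Pushed (or patched) to the client by the game server: "the block for
// location (15, 30) is currently <some digest>".
using BlockManifest = std::unordered_map<BlockKey, std::string, BlockKeyHash>;

std::string blockUrl(const std::string& sha256Hex) {
    return "http://yourserver.com/blocks/" + sha256Hex;
}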

FWIW, we typically serve 3-4 times as much bandwidth out of Akamai as we do out of our own data center. I don't think we publish our hardware specifics, but we have multiple uplinks, each in the 1-10 Gbps range. For example, if we push 2 Gbps out of our data center, we'd be pushing 6-8 Gbps out of Akamai (which it fills on an as-needed, cache-missed basis) -- by itself that works out to 6/(6+2) = 75% up to 8/(8+2) = 80% of traffic served by the CDN. Now, given that a lot of our data is service data, not statically cacheable data, the actual cache hit ratio is higher than the 75-80% you'd expect from just comparing those two numbers.
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
