
Work work work

hplus0603


I'm going to do something different today. I won't talk about the experimental game hacking that I usually do. Instead, I'll talk about work.

Work is so busy these days, I don't have time to work on the game. But I gave myself 9 years and 364 days, so I still have a sufficient buffer to finish on time :-)

At work, I've been working for quite some time on the general problem of how to hook together different kinds of virtual worlds. Right now, Second Life wants all of cyberspace to be Second Life islands; Multiverse wants all of cyberspace to use the Multiverse billing and messaging back-end; Qwaq wants all of cyberspace to be an unsecured, peer-to-peer system based on a VRML runtime (eh, good luck with that :-). That approach is not going to work.

Each big user of virtual worlds will need a different virtual world based on their needs, and most of them will want to control infrastructure and security themselves. However, they might still want to hook these worlds together, perhaps ad hoc. For example, a large chemical company might have a virtual world for collaboration, remote meetings and distance education, and a county might have a virtual world for emergency management and practice. For one particular scenario, they might want to hook these virtual worlds together, to simulate an emergency at a chemical factory located in that county.

Today, doing such a simulation can sort-of be done in the military world, using various flavors of DIS, or HLA. However, it requires each participant to spend about two months with expensive connectivity consultants to ensure interoperability, before the actual exercise -- in fact, with the move from DIS to HLA, the amount of interoperability work has increased, because HLA does not specify the wire protocol, so there exist multiple, incompatible runtimes. DIS is also not so much a persistent virtual world protocol, as just a runtime telemetry protocol; while it works for what it does, it's pretty much run out of steam. Hence, the opportunity to create HLA, and the tremendous sadness of its failure to capture interoperability.

Okay, so there are many problems to solve. Here's one: right now, each game studio has its own tool pipeline (unless you count Unreal 3 as a "standard"), and builds levels in its own way. Meanwhile, in the geo/sim industry, there are a few standards, although they are implemented differently in different tools, and none of them is really aimed at the kind of on-the-ground, in-your-face quality that you get from avatar-based games. Flight simulation seems to be mostly a solved problem, but that's not where the current action is.

The standards that exist are mostly for interchange, rather than runtime. COLLADA, OpenFlight (now owned by Presagis, a division of CAE), X3D and SEDRIS are all fine ways of capturing data for exchange over a possibly lossy medium, but the captured art isn't typically directly useful for real-time needs, or requires significant conversion on load, which affects frame rates and makes streaming new content "live" a challenge.

For the last year, I've been working on a binary runtime terrain/level file standard. For some time now, I've gotten significant help from my company (Forterra Systems), and we're getting ready to present what amounts to a Grand Unified Whole-Earth Level File Format at the end of this year. Okay, so it won't solve everything, but it's defined with the following requirements in mind:

- compact, binary representation
- support for modern data, such as shaders, geometry subsets, etc
- not based on an old-school hierarchical scene graph
- extensive meta data support
- whole-earth support
- streaming/paging support
- multiple-resolution support
- transactional file format (commit or roll-back on edits)
- support for both artist-generated data and source-based geo-specific data
- support for collision acceleration
- support for encryption, authentication, compression, distribution, attribution and leveraging of synergies
- 64-bit clean file format
- documented, open, unencumbered file format
- documented, open source, C/C++ reference library to read/write
- extensible

The big catch-all is the last point. For example, if your game needs portals, you can easily add that information to the file, even though the file format isn't defined to support portals. Further, other readers of the same file will see the data, minus the portals. You can still do on-demand paging of the portal-based data, while that data stays out of the way of readers who don't understand it.

All internal data storage is defined in a way that allows for transparent extensions. For example, if there is some kind of data that is defined as "some name, plus some number of 3D points," then this would be stored as a header containing the name, the point count, and a pointer (within the data record) to the point data. The header would be followed by the point data, with possibly some other data in between. This allows future versions of the file format to add more data to the header (and even more arrays within the data) without breaking compatibility for a version 1.0 reader, which will read the header, and then read the array based on the offset found in the header.
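To make the header-plus-offset idea concrete, here is a minimal sketch in C++. All field and type names here are invented for illustration (the real format's layout will differ); the point is only that a 1.0-style reader locates the point array through an offset stored in the header, so later versions can append fields after the header, or insert extra data between header and array, without breaking it.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical record: "some name, plus some number of 3D points".
#pragma pack(push, 1)
struct PointSetHeader {
    char     name[16];     // fixed-size record name
    uint32_t pointCount;   // number of 3D points in the array
    uint64_t pointOffset;  // offset of the point data, relative to record start
    // A later format version could append more fields here; a 1.0 reader
    // still finds the point array via pointOffset and ignores the rest.
};
#pragma pack(pop)

struct Point3 { float x, y, z; };

// A version-1.0-style reader: read the header, then follow the offset.
// It never assumes the point data starts right after the header.
std::vector<Point3> readPoints(const uint8_t* record) {
    PointSetHeader hdr;
    std::memcpy(&hdr, record, sizeof(hdr));
    std::vector<Point3> pts(hdr.pointCount);
    std::memcpy(pts.data(), record + hdr.pointOffset,
                hdr.pointCount * sizeof(Point3));
    return pts;
}
```

Because the reader trusts `pointOffset` rather than `sizeof(PointSetHeader)`, a writer that sticks eight bytes of new, version-2.0 data between the header and the point array produces a file the old reader still parses correctly.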

I was going to present the file format (without the open source implementation, which we're still working the bugs out of) at the Terrain Summit in Huntsville, but as that got postponed, perhaps I could do it at the Serious Games Summit in D.C. Except that got cancelled too, so I guess y'all will have to wait for that big interservice/industry training, simulation and education conference around Thanksgiving for the scoop. Consider yourselves forewarned! :-)

I might post a PDF sooner, though. Stay tuned.
3 Comments



Quote:
I'd be interested in knowing why you believe an unsecured, peer-to-peer system wouldn't work.


As long as you have nothing to secure, and low density of participation, it'll probably work alright. However, once you have anything of value in the system, be it "game gold" or "confidential sales documents," peer-to-peer becomes a liability. With current cryptography, you can prove who made a leak if there's a leak in a peer-to-peer system, but you cannot prevent the leak. With centralized authority, you can serve only the part of data you want, only at the time you want.

As for simulation density, peer-to-peer has the n-squared connection problem. It also has obvious cheating problems, random "split world" problems (where two peers are in the same physical space but won't see each other), and update latency problems. I know there are some research groups looking into peer-to-peer technologies, the furthest along of which is likely the VAST group, but they still don't have solutions to these problems.
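The n-squared point is just the full-mesh link count: every pair of n peers needs its own connection, so a peer-to-peer world needs n(n-1)/2 links, while a client/server topology needs only n. A back-of-the-envelope sketch:

```cpp
#include <cassert>
#include <cstdint>

// Full peer-to-peer mesh: one link per unordered pair of peers.
uint64_t p2pLinks(uint64_t n)    { return n * (n - 1) / 2; }

// Client/server: one link per client, to the server.
uint64_t serverLinks(uint64_t n) { return n; }
```

At 100 participants that's 4,950 links versus 100; at 1,000 it's 499,500 versus 1,000, which is why simulation density hurts peer-to-peer designs so quickly.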
