Component Object Models as they relate to Multiplayer Networking

hplus0603    11347
Some of you might have seen my journal, which I started a few months ago, chronicling a small toy game project I'm poking at in my scant spare time. I just made another entry which describes how my composable component object model interacts very nicely with a network update model to make for a straightforward and reasonably robust networked object model. I just thought I'd share.
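As a rough illustration of the update model described in the entry (dirty properties queued with replacement, prioritized by distance, pending updates dropped when objects die), here is a minimal sketch; all names and types are my own illustration, not the journal's actual code:

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// One pending network update for a single property of a single object.
struct Update {
    int objectId;
    int propertyId;
    std::string payload;   // serialized property value
    float priority;        // e.g. derived from distance to the viewer
};

class SendQueue {
public:
    // Queue an update; if the same property is already pending,
    // replace it so only the newest value ever goes on the wire.
    void push(const Update &u) {
        pending_[std::make_pair(u.objectId, u.propertyId)] = u;
    }

    // Drop all pending updates for an object that is being removed --
    // no need to send network updates for things that die.
    void removeObject(int objectId) {
        for (auto it = pending_.begin(); it != pending_.end(); ) {
            if (it->first.first == objectId) it = pending_.erase(it);
            else ++it;
        }
    }

    // Drain everything in descending priority order (a real implementation
    // would also respect a per-tick byte budget).
    std::vector<Update> drain() {
        std::vector<Update> out;
        for (const auto &kv : pending_) out.push_back(kv.second);
        pending_.clear();
        std::sort(out.begin(), out.end(),
                  [](const Update &a, const Update &b) { return a.priority > b.priority; });
        return out;
    }

    std::size_t size() const { return pending_.size(); }

private:
    std::map<std::pair<int, int>, Update> pending_;
};
```

The map keyed by (object, property) is what gives the "replacement" behavior for free: a second `push` of the same property overwrites the queued payload instead of appending.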

Antheus    2409
Quote:
When a property updates after being started, a dirty flag is set, and when time comes to update object state, dirty properties are queued for sending. I prioritize property updates such that closer objects get higher priority. I also use a send queue with replacement capability, so if property X updates while it's already queued, I replace that piece of data in the queue with the newer data. This means I can also remove pending updates for objects as those objects are being removed -- no need sending network updates for things that die.


I went with slightly different logic.

All properties are in a std::map<id, void*> (type safety is taken care of, so void* is never a problem).

To obtain the value of a property, I went with the GPG3 (or 4) article approach: if a property isn't found in this object's map, its parent is queried, and so on. The difference is that I only need one type of object and don't differentiate between template and instance.

When a value is set, if an instance doesn't exist in this object's map, it's added, thereby becoming local rather than inherited, and subsequent gets will return the local value.

Deltas are stored in the same way (a map). This takes care of multiple updates of the same value. It also removes the need for a dirty flag, which I found a bit excessive (100 properties per object adds quite a bit of overhead).

I don't support prioritization at the object level, only at the AoI level.

Another thing I added was object views. These make it possible to define views into an object to determine which properties get sent across the network, and to whom. A view is simply a set of property IDs, and a key may reside in several views. This makes it quite simple to keep AI and bookkeeping on the server while sending only UI-related stuff to the clients, or to send the quest log only to the player, but not to others around them.

Deltas are sent in a separate thread, due to the obvious overhead of serialization. This means that adding a delta to the map needs to lock the delta structure (a very fast operation), but when writing out the deltas, the old map is locked, swapped with an empty one, and unlocked (the object can now update again), and the old delta map is serialized in a separate thread.
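The lock-swap-serialize pattern described above might be sketched like this (illustrative names; the post doesn't show the actual code):

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <string>
#include <utility>

// Recording a delta holds the lock only long enough to touch the map;
// the serializer swaps in an empty map so the object can keep updating
// while the old deltas are serialized on another thread.
class DeltaBuffer {
public:
    void record(int propertyId, std::string value) {
        std::lock_guard<std::mutex> g(m_);
        deltas_[propertyId] = std::move(value);   // very fast critical section
    }

    // Called from the serialization thread: lock, swap with an empty map,
    // unlock -- the expensive serialization then runs on the swapped-out
    // copy with no lock held.
    std::map<int, std::string> takeForSerialization() {
        std::map<int, std::string> out;
        {
            std::lock_guard<std::mutex> g(m_);
            out.swap(deltas_);
        }
        return out;   // serialize 'out' outside the lock
    }

private:
    std::mutex m_;
    std::map<int, std::string> deltas_;
};
```

Because the same map also coalesces repeated writes to one property, a burst of updates between two serialization passes still produces a single delta entry.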

All properties are allocated from a static memory pool, keeping fragmentation in check.

I use templates for defining the IDs and enforcing the types. I've also recently, based on an idea here and GPG6 (also just recently obtained), added support for full component objects, turning this into a fully fledged component/container type of mechanism.

I've done some testing, and while there is overhead, I consider it a trade-off between flexibility (all containers are run-time configurable), memory efficiency (each entity can be expressed in only 24 bytes, and still define hundreds of properties and interfaces) and speed.

hplus0603    11347
That sounds nice. You have more overhead at runtime than me (the map look-up), but buy some additional flexibility.

My property read function (templated) is something like:

property::read() {
    if (dirty_) source->read(&value_);
    return value_;
}


When I said "template" I didn't actually mean a template object; I meant the data that gets used for instantiating the object. Because templates are shared between client and server, values that haven't changed since instantiation don't need to be updated. Meanwhile, values that have changed since instantiation need to be sent as part of the initial object state, when a new player connects and wants to see the object.
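A sketch of that template/instance split (the types are illustrative, assuming properties keyed by integer IDs): the initial state sent to a newly connecting player is just the diff against the shared template.

```cpp
#include <cassert>
#include <map>
#include <string>

// The client already has the shared template, so the initial state for a
// newly visible object only needs the properties whose values differ from
// the template.
using PropertyMap = std::map<int, std::string>;

PropertyMap initialStateFor(const PropertyMap &templ, const PropertyMap &inst) {
    PropertyMap diff;
    for (const auto &kv : inst) {
        auto it = templ.find(kv.first);
        if (it == templ.end() || it->second != kv.second)
            diff.insert(kv);   // changed since instantiation -> must be sent
    }
    return diff;
}
```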

Antheus    2409
Quote:
That sounds nice. You have more overhead at runtime than me (the map look-up), but buy some additional flexibility.


I tested performance of individual operations before settling for the design.

Performance of the lookup doesn't notably degrade with an increased number of properties (1..100).

The system can perform 30,000 gets per millisecond, or around 10,000 property accesses in a real setting. After calculating the worst case, a player will never be able to affect more than 1000 properties per second.

And that is the worst case; most player actions don't even access the container. Location, scene-graph containment, orientation, id and a few other run-time details are regular data types.

I've posted various bits of code related to the system on these boards, but the core is something like this:
// Property ID

struct key_base : boost::noncopyable
{
    key_base( int index ) : m_id(index) { }

    int id() const { return m_id; }

private:
    const int m_id;

    // additional property attributes
    // persistent, transient, constant, ...
    bool m_persistent;
    ....
};

template < typename T >
struct key : public key_base
{
    typedef memalloc::BlockAllocator<T, 1024, 4> Allocator;

    key( int index )
        : key_base(index)
    {}

private:
};




The actual container is ugly template code that ensures compile-time safety:

template <typename T>
inline T get( const attr::key< value<T> > &k ) const {
    return global_reference( k, true )->get();
}

template <typename T>
inline void set( const attr::key< value<T> > &k, T value ) {
    local_or_new_reference( k )->set( value );
    deltaHandler->delta_value( k, value );
    eventHandler->value_changed( k, value );
}

template < typename T >
T *global_reference( const attr::key<T> &k, bool should_throw ) const {
    // Is the value stored locally?
    T *ref;
    ValueMap::const_iterator i = m_values.find( k.id() );
    if (i == m_values.end()) {
        if ( is_base() ) {
            // No, and we have no parent
            if (should_throw) {
                throw std::runtime_error( "Attribute doesn't exist" );
            } else {
                ref = NULL;
            }
        } else {
            // No, let's check the parent
            ref = m_parent->global_reference( k, should_throw );
        }
    } else {
        // Yes, let's return it
        ref = static_cast< T * >( i->second );
    }
    return ref;
}





There's plenty of detail missing around various allocation aspects, but that's the basic idea. If a value is found here, the local copy is returned; otherwise the search continues through the parent templates.

When setting a value, it's stored in the local map, so from then on that particular property will be retrieved locally.


The annoying part is the unnatural get and set mechanics, so I went with some additional magic to provide a more natural representation of these structures.

template <typename T>
inline value_if< attributes, T > operator[]( const attr::key< value<T> > &k ) {
    return value_if< attributes, T >( this, k );
}

// where

template < class Attributes, typename T >
struct value_if : public attribute_if< Attributes, value<T> > {
    typedef attr::value<T> Type;
    typedef const attr::key<Type> & key_reference;
    typedef Attributes * AttributesPtr;

    value_if( AttributesPtr a_ptr, key_reference key )
        : attribute_if( a_ptr, key )
    {}

    inline T get( void ) const {
        return get_attributes()->get( key() );
    }

    inline void set( T new_value ) {
        get_attributes()->set( key(), new_value );
    }
};




Lots of template mumbo jumbo, but it all boils down to this:


typedef key<int> IntVariable;
typedef key<string> StringVariable;

// property name, default value, persist, not read-only
static const StringVariable EntityName( "entity_name", "", true, false );
static const IntVariable CreatureHealth( "creature_health", 100, true, false );

...

Creature c( defaultCreature );

std::string name = c[EntityName].get();
c[CreatureHealth].set( 17 );




Despite compile-time key bindings, keys can be configured at run-time, individual properties changed, and so on, while retaining type safety and all the other features.

My next goal is to auto-generate Lua support for these objects. If everything works and I don't hit some weird obstacle, the whole system will look like this:

- XML object definitions (templates and full world state)
- C++ network layer and object management
- Lua (Java, Python, ???) logic

The idea of automatic run-time integration didn't strike me until recently, but since the property system already provides all the meta information I need, it currently seems quite possible. And all this without recompilation of the C++ code (with the exception of new data types).

As an added benefit, the object model is usable both server- and client-side, since all contents are defined at run-time. As such, shared network data is automatically synchronized.

The container is not thread-safe, so proper access must be ensured. Through the temporary object returned by operator[], it's possible to cache references to actual values on the client side, thereby eliminating the map-lookup penalty for commonly used properties (such as those used by the renderer). This is, however, an "unsafe" optimization, possible for the rare situations where the lookups really are too frequent.

lightbringer    1070
Interesting stuff. Ever since I ditched my scene graph and moved to a flat list of composition-based entities, I've been meaning to make my components behave in a more generic manner like what you guys are implementing. But a lack of a good tool chain, a lack of a good understanding of the system (the binding of properties especially is somewhat mystifying), and a desire for expressive simplicity (let's face it, I have trouble finishing this one game, never mind moving past it with the same code base) mean that I keep going back to hard-coding all properties as explicit members in components and all possible components as explicit members in container classes.

For a component-based system, there are many architectural questions that need answering, such as the granularity of components, the interactions between them, and the separation of component data and code. For someone new to the idea, the task is a bit daunting, although there are some good discussions about it here and over at the SWEng-GameDev mailing list. I'm still very much at the beginning though, and your blog entry has provided me with some food for thought.

hplus0603    11347
@Antheus:

How do you deal with the fact that different users have different views of the world? You need to somehow keep a copy of what state is dirty from the point of view of each individual client, and schedule/prioritize the updates differently for each client, right?

Quote:
Location, scene-graph containment, orientation, id and a few other run-time details are regular data types.


Ah! That makes it less of a performance problem to put everything else in hash tables :-) (Note that micro-benchmarks are notoriously hard to do for things like property access, because the cost is all in cache misses, which are hard to emulate when benchmarking.)

Btw: In my system, everything is a property (including position), and properties are dynamically bindable. Thus, if a "mob tracker" object has a "tracked object position" property, it can actually just bind the "position" property of the mob to its own property, and anyone reading that property (including the object itself) will get the position of that mob. Lifetime is also managed, so that stale pointers don't kill you. Because position, orientation, etc. are all properties, I had to make a different trade-off on flexibility. Getting a handle to a property is a hash-table look-up; once you have that handle, a property read or write is a single virtual function call.
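A minimal sketch of such a bindable property, assuming shared/weak pointers for the lifetime management (the post doesn't specify the actual mechanism):

```cpp
#include <cassert>
#include <memory>
#include <utility>

// A property that can be dynamically bound to another object's property.
// Reading a bound property forwards to the bound source; if the source
// object has died, the last local value is returned instead of a
// dangling read ("stale pointers don't kill you").
template <typename T>
class Property {
public:
    explicit Property(T initial) : value_(std::move(initial)) {}

    void set(T v) { value_ = std::move(v); }

    // Bind this property to another (e.g. a mob tracker binding its
    // "tracked object position" to the mob's "position").
    void bindTo(const std::shared_ptr<Property<T>> &source) { source_ = source; }

    T read() const {
        if (auto src = source_.lock())   // source still alive?
            return src->read();
        return value_;                   // fall back to the local value
    }

private:
    T value_;
    std::weak_ptr<Property<T>> source_;
};
```

The weak pointer is one way to get the described lifetime management: readers never see a dangling pointer, and an unbound (or orphaned) property degrades to a plain local value.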

I'm not saying you should change (in fact, you probably shouldn't :-); I'm just comparing and contrasting.

@lightbringer:

You don't have to give up a scene graph just because you're using composition. The key is to view the scene graph as just another API that provides rendering services. Thus, I have one component called "rendering component" which has properties for position, orientation, scale and mesh name. When that gets aggregated into the object, it will bind position and orientation to the object physical world position/orientation properties, and it will register the given mesh in the scene graph so it gets rendered. (I use Ogre3D for scene graph, btw).

Antheus    2409
Quote:
How do you deal with the fact that different users have different views of the world? You need to somehow keep a copy of what state is dirty from the point of view of each individual client, and schedule/prioritize the updates differently for each client, right?


That's what I use object views for.

Each type of object (abstract term) has intrinsic properties describing the views. A view is simply a vector of keys, stored as a property.

Which keys belong in which view is defined at the definition point.

// No hard inheritance, all defined through runtime
BaseObject object( root );
object[ObjectType]->set( "OBJECT" );
object[Visuals]->add( Size );

// creature inherits from base object
BaseCreature creature( object );
creature[ObjectType]->set( "CREATURE" );
creature[Visuals]->add( HairColor );
creature[Visuals]->add( BodyColor );
creature[Visuals]->add( Health );
creature[AI]->add( Speed );
creature[AI]->add( Aggressive );





ObjectType is intrinsic to all objects. It's the type_id equivalent.
Visuals and AI are views. Object has only Size, but Creature has Size, HairColor, BodyColor and Health.

Client updates are managed through subscriptions by a proxy server. This can be local or remote, and the updates are triggered on every update.

When something changes, the delta handler goes through the views and builds per-view updates. The rules as to who subscribes to which view are defined based on the relation between objects. A player's controller would receive their own quest log, for example, but other players who received the same object wouldn't.

An AI manager might only subscribe to the AI view, since it doesn't care about appearance.

And since each object knows exactly what belongs in each view, and the proxy server knows who is eligible for what, the updating process becomes straightforward.

The proxy then maintains subscriptions and forwards them to the applicable clients. By design, nothing bad happens if someone oversubscribes.
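As a sketch, the per-view projection step could look like this (types are illustrative; per the earlier description, a view is just a set of property IDs and a key may reside in several views):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Pending deltas for one object, keyed by property ID.
using Deltas = std::map<int, std::string>;
// A view is simply a set of property IDs.
using View = std::set<int>;

// The delta handler projects an object's pending deltas through a view
// to build the update that a given subscriber receives.
Deltas projectThroughView(const Deltas &pending, const View &view) {
    Deltas out;
    for (const auto &kv : pending)
        if (view.count(kv.first))   // a key may reside in several views
            out.insert(kv);
    return out;
}
```

An AI subscriber and a visuals subscriber then each get only their own projection of the same delta map, which is what keeps server-only bookkeeping off the wire.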

Quote:
Getting a handle to a property is a hash-table look-up; once you have that handle, a property read or write is a single virtual function call.

The reason I can't cache values (in the general case) is to take advantage of inheritance and keep the base-object footprint low. Due to rather verbose object descriptions (many properties use std:: containers, which have base overhead even when empty), the overhead of empty or unused members, even if just pointers, would be too high.

The reason for value_if (Value Interface) is exactly caching.

Each container can support interfaces as well, which use Inversion of Control and follow the same inheritance rules.

// base game object
class Entity : public Container {
    Vector3 location;
    ...
};

class AIManager {
    virtual void update( Entity &e );
};

// Not done like this, just an example
class AIManagerIf {
    AIManagerIf( Entity *e, AIManager *m ) : m_entity(e), m_manager(m) {}
    void update() {
        m_manager->update( *m_entity );
    }
};

CreatureObject c( root );
c.set( KeyAIManager, new SimpleAIManager() );

// c[KeyAIManager] returns AIManagerIf
c[KeyAIManager]->update();

CreatureObject smartCreature( c );
smartCreature.set( KeyAIManager, new SmartAIManager() );




This allows the logic to benefit from the inheritance rules as well. If I now create instances of creatures, they all inherit the default AI behaviour, yet they may override it as needed.

This is why the accessor interfaces are used to access individual values, and where caching is possible.

Due to implicit rules (a property cannot be removed from an existing object short of re-creating it, and the pointer to storage may not change), the interface can cache the value.


PS. The system isn't perfect, and there is overhead. The design was based around the following priorities:
1) Low memory overhead (fragmentation is an issue)
2) Type-safe, C++-oriented syntax (that's why not the GPG Turbine design)
3) Performance (something needs to come last)

2) implied the template hacks and all that, as well as the special return objects. The amount of code that needs to be written to use them doesn't change, but behind the scenes things look complex.

1) is the reason for the low-overhead design. To allow complete flexibility, objects can have hundreds of properties, and (for example) a script writer has full freedom to add new properties to existing objects, possibly scripts, or more.

This is the reason I mentioned the benchmark. While there is obvious overhead, it's not as bad as it sounds, and definitely not a deal breaker. After getting more comfortable with that, I just went with a full-blown component design.

Quote:
Getting a handle to a property is a hash-table look-up; once you have that handle, a property read or write is a single virtual function call.


In my case, the C++ equivalent would be aliases.

struct X {
    int *a;
    std::string *b;
};

X x;
int *i = x.a;
for (int n = 0; n < 100; n++) (*i)++; // if a is local to x, no overhead
std::string *s = x.b;




The usual pitfalls of such an approach apply. For complex calculations, one has the luxury of caching the pointer, but needs to be aware that in a multi-threaded environment X may get de-allocated. This isn't currently an issue.

This caching is possible whenever a or b are stored locally in x and are not inherited. For the client, this will almost always be the case for commonly used properties.

The overhead in that case is a simple function call or, since no polymorphism is used, possibly none at all.

At the same time, changing the value of these cached variables will still trigger the change events on the container, although the individual properties do not manage dirty state or change listeners themselves.

[Edited by - Antheus on July 12, 2007 7:21:12 AM]

lightbringer    1070
Quote:
Original post by hplus0603
You don't have to give up a scene graph just because you're using composition. The key is to view the scene graph as just another API that provides rendering services. Thus, I have one component called "rendering component" which has properties for position, orientation, scale and mesh name. When that gets aggregated into the object, it will bind position and orientation to the object physical world position/orientation properties, and it will register the given mesh in the scene graph so it gets rendered. (I use Ogre3D for scene graph, btw).


Those were actually two separate design decisions that just happened to coincide. I wanted to get rid of both the deep inheritance hierarchy and the hierarchical nesting of nodes at the same time. I'll see how it will work out from now on - it's not a problem to merge my TransformModel and RenderModel components later on and make them hierarchical again. But having had a working scene graph plumbing in place and having used it for a bit, I want to explore the flat approach hands-on. Plus it makes more sense to me to keep things flat when using components - for instance, I would normally attach a particle system as a child node, but having to do that now would require two separate entities - not very pretty.

hplus0603    11347
Antheus: Thanks for the details; you clearly have thought your solution through for your requirements. You should consider submitting to GPG, as an alternative to the Turbine design :-)

I'm assuming there either is a proxy subscription per connected viewing client, or the proxy subscription in turn has fan-out for the connected clients. I guess an alternative would be to stuff all updates down the network pipe as soon as they happen, and use reliable delivery, but that generally leads to too much traffic; you need to prioritize traffic based on relevance (such as client proximity to the event/object).

Another question: If your alias is a pointer, does that mean that, if I get an alias to a property on an object, that object must split from its parent template in order to return the pointer to local data? Else one of two things could happen: 1) I could accidentally change the parent value instead of the object instance, or 2) someone could split the object from the parent, and the alias is no longer valid for the object?


lightbringer: We probably mean different things by "scene graph." For me, a scene graph doesn't need to use recursive containment; a scene graph is something which organizes and optimizes what you render and how you render it. A scene graph can be flat, or octree, or BVH, or whatever on the inside. A scene graph provides insulation and abstraction of rendering, basically. The opposite is something where each object renders itself.

Antheus    2409
Quote:
Another question: If your alias is a pointer, does that mean that, if I get an alias to a property on an object, that object must split from its parent template in order to return the pointer to local data? Else one of two things could happen: 1) I could accidentally change the parent value instead of the object instance, or 2) someone could split the object from the parent, and the alias is no longer valid for the object?


The alias or reference is an object, returned by value, which has the same interface as the property (get/set for a simple value; add/remove/size for a list; and so on for a set).

This takes care of a consistent API. While it does require code duplication, each of these is templated, needs to be written only once, and is then specialized for the type.

The alias or reference itself looks like this:

template < class T >
class value_if
{
public:
    value_if( container *c, const key<T> &k );

    T get() {
        if (pointer == NULL) {
            // cache value
            pointer = m_container->global_pointer( k );
        }
        return *pointer;
    }
    void set( T value ) {
        pointer = NULL; // clear cache
        m_container->set( k, value );
    }
private:
    container *m_container;
    const key<T> k;

    // optional if caching is used
    T *pointer;
};

And in the container:

template < class T >
value_if<T> operator[]( const key<T> &k )
{
    return value_if<T>( this, k );
}





So it simply serves as a wrapper around container function calls. If desired (and possible), it will cache the pointer to the value, and flush the pointer when it's no longer valid. This cannot be made thread-safe, so there's also no need for reference counting.

The xxx_if (interface) is a temporary, which will in most cases be discarded. It's merely a temporary object which holds a pointer to the container. This way, only the container needs to maintain a list of listeners and notify them when the appropriate set(k) is called.

I'm currently also considering re-designing the internal interface to move to pure message-based commands - rather than using set(key, value), I'd pass send( k, SetValueCommand( value ) ) to the container. I'm especially considering this to possibly ensure thread safety, but I need to test how this would impact the logic, mostly causality. If I find a decent solution, then the entire object model will become completely concurrent, and even distributable at the component level.

It should also be noted that despite returning the wrapper object by value, using an extra class and extra methods, there is no overhead in the final code. It even improved by a few percent over the set( key, value ) approach, for whatever reason.

Quote:
I'm assuming there either is a proxy subscription per connected viewing client, or the proxy subscription in turn has fan-out for the connected clients. I guess an alternative would be to stuff all updates down the network pipe as soon as they happen, and use reliable delivery, but that generally leads to too much traffic; you need to prioritize traffic based on relevance (such as client proxmimity to the event/object).


I use "NetEvents" for these objects, using broadcast. The fan-out server monitors these events, spies on which object is interested in what, and maintains a subscription list. The fan-out server is a standalone piece of code; it can run locally or remotely, and is just a simple message router. Subscriptions will generally be determined based on which proxy got sent which objects (during zoning or by other means, where object data is sent to a single client only). I haven't yet dealt with fine-grained AoI management.

[Edited by - Antheus on July 13, 2007 7:45:03 AM]

Tesshu    713
I thought I would chime in here since I am also using a component-style system for my MUD. It's based off of the Scott Bilas presentation. I was wondering how far into the projects you two are? My project is at the point where I am into gameplay, and this really starts to put theories to the test.

1) I am using a variant-style system, and it sounds like you guys aren't. My components implement a set/get property that passes a variant on the stack. Note that my components don't actually have to store the property as a variant. I did this so that the C++ code could run faster. I have stuff like this:

void Sim_ComFrame::setValue( const STL_String& name, const Prop_Value& value )
{
    if ( Sim_IsName( "pos" ) == true )
    {
        setMatrix( value.getMatrix4x3f() );
    }
}

Prop_Value Sim_ComFrame::getValue( const STL_String& name )
{
    if ( Sim_IsName( "pos" ) == true )
    {
        return Prop_Value( getMatrix() );
    }

    return Prop_Value();
}


I am thinking of changing this because it's becoming a hassle to manage the code. The only real use I have left for the variant (Prop_Value) is to allow my data layout to use XML without knowing anything about the object. It also allows an object-inspection editor to be written pretty easily. It all works, but it nags of YAGNI. I keep getting the feeling that I am reinventing Lua tables, which is even funnier since my scripting language is Lua and I have written bindings to setValue/getValue :)
So I am mulling over writing all the data layout and editor code by hand for each component. My engine is starting to stink of middleware instead of my specific game. Have you guys already dealt with this sort of stuff?

2) I did the networking part of my components using a scope flag per component, so a component is flagged as client, server, or both. The problem with this is that I am finding I need to conditionally compile some components, and this makes me feel dirty. Maybe I should split components based on their scope?

Antheus    2409
I've changed a few things to adapt everything to actor based model.

Each entity that is a part of game state has the following base elements:
1) VTable - a per-instance configurable multiple dispatch function table, that maps incoming messages to functions
2) Subscription Table - per-instance subscription table updated by world managers that sends internal state updates
3) Property Table - strongly typed property storage
4) Member variables

All except 4) will be inherited from the base class if undefined. This leads to an extremely low memory footprint for live objects.

After introducing the actor model, in combination with 1), 4) became a viable option for frequently accessed fields.

I have no distinction between server and client objects. All logic outside of the process scheduler operates on actor references, and can only pass messages to other actors - it can never obtain a reference to the actual object. As such, there's no difference between making an in-process call, a shared-memory inter-process call, or a remote call to other cluster nodes or the client.

The inheritance part comes in very handy in certain situations, especially the vtable. Due to the large number of messages that get passed around, a base object can have well over a hundred handlers. Keeping all of those around on a per-instance basis would be very redundant; through inheritance I only take a log n penalty, where n is a single-digit number. And, in addition, I can spawn instances with custom functionality.

This approach has downsides. Everything is loosely coupled: nothing stops me from sending commands to objects that can't handle them, for example, so this needs to be logged at run-time. The other downside is that functionality is scattered over hundreds of handlers, each oblivious to everything else. This isn't a downside per se, but it does cause an explosion of files implementing handlers.

hplus0603    11347
Quote:
The problem with this is that I am finding that I need to conditionally compile some components and this makes me feel dirty.


I have some cases where a component will do:

  if (IsAuthoritative()) {
      ... server side code ...
  }


I prefer this to separate compilation, because it allows me to debug client and server within the same executable. The server will typically be authoritative for the object that the component lives in, whereas the client is not.

One example of such a case is the "spawn" component, which only actually spawns new objects on the server side (and the new objects are discovered using the regular object network scoping).

Tesshu    713
Antheus:

I am going to assume that since you're using this style of system, your game is fairly large. That leads me to wonder how you are creating object definitions. I use an offline XML file that loads component and object definitions using variants. Since my properties are generic, it's pretty easy to do. Then objects get created using these definitions. So how do you do it? Do you hand-write something for each component type, since your properties are strongly typed?

Also, I guess what you call views are how you distinguish ownership. I was going to ask if the view flag was per property or at the component level, but now I am starting to wonder if you have components at all. Is your system basically an object with a bag of properties? I ask because my system is an object with a bag of components, and each component is a bag of properties.

hplus0603:

Since I am a huge fan of doing things to make the code easier to debug, I can see why you might do that. The problem I have is that I am going to assume my client will be hacked at some point (hopefully it will be worth the time). So I am making the client as dumb as I can get away with.

Antheus & hplus0603:

So have you guys got something playable? I ask because I am wondering if you're past the graphics-demo phase and into the game-logic part. That's where I have noticed things start to fall apart. I am not being negative, I just want to know how tested these ideas are.




Antheus    2409
Quote:

Also I guess what you call views, are how you distinguish ownership. I was going to ask if the view flag was per property or at the component level, but now I am starting to wonder if you have components. Is you system basiclly an object with a bag of properties? I ask because my system is an object with a bag of components and each component is a bag of properties.


Yes, objects are bags of properties, and subscribers listen to a subset of those. I don't have ownership as such; objects are atomic entities.
Views determine which updates get sent to which clients - I no longer need intra-cluster shadowing, so that part is now irrelevant.

Quote:
So have you guys got something playable? I ask because I am wondering if you're past the graphics demo phase and into the game logic part. That's were I have notice things start to fall apart. I am not being negative, I just want to know how tested these ideas are.


I've recently deployed the system across several machines, spawning several million objects with basic logic running. That part works as expected and predicted with regard to server and network loads, but I'm also using a fairly reliable and proven model that's currently functionally identical to Stackless Python.

The problems I need to solve now will be based on observations made from there, and various samples regarding data load, data locality and other aspects.

I don't deal with the client or data, so that part is not my concern. Each object inherits all properties from a parent. Different object types define additional properties on top of that. Each property identifier is a unique (namespace, name) tuple. Instances may access only the properties defined by their respective object type, and it's not possible for descendants to remove properties; they can only revert to defaults.

The ideas I'm building on are long tested in all large-scale MMOs, and all of it is documented across various publications. This isn't as much about size itself as about providing scalability from the ground up.

In recent testing I observed that, due to the transparent distributability of objects, processing is best performed by spawning a small number of processes (2 or 3) per physical processor and physically distributing objects among them. Since the cost of communication through shared memory is effectively zero (unlike the network), the overall response rate improves due to shorter incoming queue sizes (which results in better cache coherency due to higher locality), despite the additional context switching.

Here's where the scalability pays off. With no change to code, and without any knowledge of the distribution, objects are scattered across the available resources.

An added benefit of a purely message-driven approach is the ability to profile exact behavior on a per-message basis. This allows for extremely fine-grained statistics on where and how bottlenecks occur.

hplus0603    11347
Quote:
The problem I have is that I am going to assume that my client will be hacked at some point


But what does that matter? Yes, someone with a disassembler can figure out the specific rules of the game; likely, good players do that anyway just by playing. If someone toggles the comparison and runs the authoritative code on their own machine, they will see things happen that don't match the server, which means they're giving themselves a bad experience. Seeing as their clients aren't actually servers, that won't affect anyone else.

Tesshu    713
I would agree that the rules being exposed isn't much of an issue, since pen-and-paper games have worked forever with both the GM and the players knowing the rules. I am more concerned about the data the rules are working on. The less they know, the better off you're going to be.

