Hierarchical Programming

Hello all; this is more of a "musing" than a question, but I was hoping somebody else might give me some feedback on my thoughts.

Throughout my experience in game development, there is a particular philosophy that continues to prove true to me, which is the following: the best systems are always hierarchical. That is to say, flow of execution starts at the top and moves downward, not upward, and not horizontally. As a simple example, take a UI Panel which manages a set of Buttons as children. The Buttons control their internal state based on input, but the Panel handles focus issues, and ultimately runs the updates. Systems like these are easy to understand, easy to extend, and often reusable. Plus, since the code and data are encapsulated, these models make high-level control and multi-threading much easier.
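To make that concrete, here is a minimal sketch of the kind of top-down flow I mean ( the class names and hit-test stub are purely illustrative, not real code from a project ):
[code]
#include <memory>
#include <vector>

struct InputState { float x, y; bool pressed; };

class Button
{
public:
	// A Button only mutates its own state; it never reaches upward.
	void Update(const InputState& input, bool hasFocus)
	{
		m_hovered = hasFocus && Contains(input.x, input.y);
		m_down = m_hovered && input.pressed;
	}
private:
	bool Contains(float, float) const { return true; } // stub hit test
	bool m_hovered = false;
	bool m_down = false;
};

class Panel
{
public:
	// Execution flows strictly downward: the Panel resolves focus,
	// then drives its children's updates.
	void Update(const InputState& input)
	{
		for (size_t i = 0; i < m_buttons.size(); ++i)
			m_buttons[i]->Update(input, i == m_focusIndex);
	}
private:
	std::vector<std::unique_ptr<Button>> m_buttons;
	size_t m_focusIndex = 0;
};
[/code]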

But unfortunately, outside of elements like a UI ( which is naturally a hierarchical abstraction ), real-life game programming rarely fits into this box. The problem is all those darn interactions! On my projects, there is usually a simulation composed of objects which are "semi-autonomous", but also influenced by all sorts of other systems such as player input, physical collisions, scripted sequences, and specialized game rules. The result is that everybody is mucking with everybody else's state, constantly. To facilitate this, I typically turn to singletons. As an example, the "TouchManager" will find objects from the "ObjectManager" based on screen location, then will manipulate the objects ( drag & drop ), which may have the side-effect of impacting other objects on release. The result of all this is that I'll call an Update() somewhere, which ends up calling functions across multiple systems without any upper-level coordination. Usually this all works fine, as long as everybody is careful, but occasionally it will produce some nasty problems, like removing an object from an update-list during update, or breaking reentrancy restrictions. It feels like spaghetti code, and I keep thinking there's got to be a better way.

The only other "way" I can think of, though, is deferred execution. In some cases this works pretty well; I will frequently use a CollisionManager that will detect collisions between objects, but won't actually make any immediate changes. Instead, it produces records which are later processed by the objects themselves to make changes to state. This is great, but it is effective primarily because it only entails one extra pass, in which all handling can be performed.
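Roughly, the shape of that CollisionManager pattern looks like this ( hypothetical names, with the detection itself elided ):
[code]
#include <vector>

struct CollisionRecord { int objectA; int objectB; float impulse; };

class CollisionManager
{
public:
	// Pass 1: detect only -- no object state is touched here.
	const std::vector<CollisionRecord>& Detect(/* shapes + positions in */)
	{
		m_records.clear();
		// ... broad/narrow phase appends to m_records ...
		return m_records;
	}
private:
	std::vector<CollisionRecord> m_records;
};

// Pass 2: each object later consumes the records that concern it,
// e.g. void GameObject::HandleCollisions(const std::vector<CollisionRecord>&);
[/code]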

But there are more complex "rules-based" interactions which would be very tedious to defer. For example, object A casts a "spell" on object B, which causes objects C1, C2, and C3 to be created. And on that same frame, the whole lot of them need to simultaneously enter a choreographed animation ( this is a real example! ). In order to accomplish this without object A directly calling functions on B and C1-3, there would need to be three rounds of message-passing, with a ton of parameters being passed around. Theoretically possible, but painful. Yet, perhaps I just haven't found the right framework for this sort of system? I'd be open to working this way, if it can be proven to be flexible and involve minimal overhead.

Has anybody else found themselves in the same gameplay programmer rut, and emerged with a better plan? A way to get features accomplished without singletons everywhere? I'd love to hear your feedback if so. Thanks much for reading!
A common solution to this is to divide your state into two halves: a "current" half and an "updating" half. When you process a frame, you use the current frame data to generate the updating frame data. You only read from current, and only write to updating. Once you have finished computing a frame's changes, you turn "updating" into "current" and start a new frame.
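A minimal sketch of the pattern, assuming a simple copyable WorldState ( the names and the update rule are illustrative ):
[code]
#include <utility>
#include <vector>

struct WorldState { std::vector<float> positions; /* etc. */ };

class Simulation
{
public:
	void Step()
	{
		// Read only from m_current; write only to m_updating.
		m_updating = m_current; // start from last frame's data
		for (size_t i = 0; i < m_current.positions.size(); ++i)
			m_updating.positions[i] = m_current.positions[i] + 1.0f; // stand-in update rule

		// Frame finished: "updating" becomes "current".
		std::swap(m_current, m_updating);
	}
private:
	WorldState m_current;
	WorldState m_updating;
};
[/code]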

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

I think your analysis is absolutely correct. And I think the only "answer" is to invent techniques for making the "collection of systems" approach as safe as possible. It seems to me the move away from hierarchical control will only accelerate as it becomes more and more important to find processing parallelism amongst all those systems.

Of course you'll still employ hierarchies of control. It's just "update processing" which won't follow that path. I tend to think of that hierarchical control in terms of controlling objects making function calls on controlled objects. And those function calls produce "input data", to be processed by the controlled objects' managing systems. That, I think, is deferred execution of a slightly different style.

One example of a technique to make systems processing safe is to give systems authority over the lifetime of the objects they process. So, for instance, if you decide to destroy a car, and the car utilizes a physics component, rather than destroying that component directly, you tell the physics system to destroy it. This sort of approach guarantees that the physics system will never try to process physics components which have already been destroyed, etc.
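A rough sketch of what I mean, with hypothetical names:
[code]
#include <vector>

struct PhysicsComponent { bool pendingDestroy; /* body data */ };

class PhysicsSystem
{
public:
	// Gameplay code requests destruction; it never deletes directly.
	void RequestDestroy(PhysicsComponent* c) { c->pendingDestroy = true; }

	void Update()
	{
		// Process every live component first...
		for (size_t i = 0; i < m_components.size(); ++i)
			if (!m_components[i]->pendingDestroy)
				Integrate(*m_components[i]);

		// ...then reap, so the update loop never sees a dead component.
		for (size_t i = m_components.size(); i-- > 0; )
			if (m_components[i]->pendingDestroy)
			{
				delete m_components[i];
				m_components.erase(m_components.begin() + i);
			}
	}
private:
	void Integrate(PhysicsComponent&) { /* ... */ }
	std::vector<PhysicsComponent*> m_components;
};
[/code]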
[quote]The result of all this is that I'll call an Update() somewhere, which ends up calling functions across multiple systems without any upper-level coordination.[/quote]
This sounds like typical "bad" game code.

[quote]The only other "way" I can think of, though, is deferred execution. In some cases this works pretty well; I will frequently use a CollisionManager that will detect collisions between objects, but won't actually make any immediate changes. Instead, it produces records which are later processed by the objects themselves to make changes to state. This is great, but it is effective primarily because it only entails one extra pass, in which all handling can be performed.[/quote]
This is a much better style -- find some operation (e.g. collision handling) and cut it out into its own isolated unit -- the decreased coupling produces better, faster, more predictable, smaller code.
Since the PS3 came out, console developers have almost been forced to work this way, as the SPUs can't handle code that randomly calls functions or accesses random memory -- each SPU only has 256KiB of RAM (which is often reduced to 128KiB to allow streaming of jobs), and all of the code/data that you want to operate on at one time has to fit in that tiny area. This really forces you to break up processes into a flow of buffers and data transforms.

[quote]For example, object A casts a "spell" on object B, which causes objects C1, C2, and C3 to be created. And on that same frame, the whole lot of them need to simultaneously enter a choreographed animation ( this is a real example! ). In order to accomplish this without object A directly calling functions on B and C1-3, there would need to be three rounds of message-passing, with a ton of parameters being passed around. Theoretically possible, but painful. Yet, perhaps I just haven't found the right framework for this sort of system? I'd be open to working this way, if it can be proven to be flexible and involve minimal overhead.[/quote]
The above collision system example should be what you strive for in all systems -- taking a large input buffer (e.g. shapes+positions) and producing a large output buffer (e.g. collision results). I'm sure you could decompose the above into a "new effect" message buffer, an "apply spell damage" message buffer, etc., etc...
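As a rough sketch of that decomposition ( all of the message types here are made up for illustration ):
[code]
#include <vector>

struct SpellCastMsg   { int caster; int target; int spellId; };
struct SpawnEffectMsg { int sourceSpell; /* position, etc. */ };

// One system as a pure data transform: consume one message buffer,
// produce the next. No system reaches into another system's objects.
std::vector<SpawnEffectMsg> ProcessSpellCasts(const std::vector<SpellCastMsg>& in)
{
	std::vector<SpawnEffectMsg> out;
	for (size_t i = 0; i < in.size(); ++i)
	{
		// Say this spell spawns three effects (the C1..C3 of the example):
		out.push_back({ in[i].spellId });
		out.push_back({ in[i].spellId });
		out.push_back({ in[i].spellId });
	}
	return out;
}
[/code]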

There are more general purpose approaches to deferred execution though, for example check out the Actor Model, which is basically OOP, but every method call is written to a queue before being executed at the appropriate time. I've implemented this model in a game engine before, and it can be done very efficiently.
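A bare-bones sketch of the queued-call idea ( far simpler than an efficient implementation, which would avoid the per-message allocations ):
[code]
#include <functional>
#include <queue>

class Actor
{
public:
	// "Method calls" are captured as closures and queued rather than
	// executed immediately, so nothing mutates an actor mid-frame.
	void Send(std::function<void()> msg) { m_inbox.push(msg); }

	// Drained at a well-defined point in the frame.
	void ProcessInbox()
	{
		while (!m_inbox.empty())
		{
			m_inbox.front()();
			m_inbox.pop();
		}
	}
private:
	std::queue<std::function<void()>> m_inbox;
};

// Usage, assuming an Enemy type derived from Actor:
//   enemy.Send( [&]{ enemy.TakeSpellDamage(caster, 42); } );
[/code]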

In my current engine, I'm half-way through implementing my general-purpose message buffering system, which works as below:
A Lua script can write code that looks and behaves like typical code:
[code]
b:TakeSpellDamage(a, 42)
anim = new.AnimationTimeline()
c1 = new.Effect(anim)
c2 = new.Effect(anim)
anim.Start(0, 42)
[/code]


But the actual C++ code that's being called under the hood looks like:
[code]
TaskQueue& q = ...;
// Copies a function pointer, the arguments, and 'this' into a queue,
// which can be run later, or concurrently on another thread.
Push_B_TakeSpellDamage(q, *b, a, 42);
Future anim = Push_AnimationTimeline_New(q);
Future c1 = Push_Effect_New(q, anim);
Future c2 = Push_Effect_New(q, anim);
Push_AnimationTimeline_Start(q, anim, 0, 42);

// later...
q.Execute(); // actually execute the above calls
[/code]
The crazy code to bind C++ classes to this system looks something along the lines of:
[code]
class AnimationTimeline // example class
{
public:
	AnimationTimeline();
	void Start(float beginTime, float endTime);
};
// bind the class to the scripting system
eiMethod2( void, AnimationTimeline, Start, float,beginTime, float,endTime );
// etc...
[/code]


The macros generate the Push_/Call_ wrapper functions:
[code]
#define COMMA , // assumed helper -- lets a comma pass through as a macro argument

#define eiMethod_( R, T, Name, C, Args, List, Store, Get ) \
	struct T##_##Name##_Args { Store }; \
	void Call_##T##_##Name(void* user, void* blob) \
	{ \
		T##_##Name##_Args n = {}; \
		T##_##Name##_Args& a = blob ? *((T##_##Name##_Args*)blob) : n; \
		((T*)user)->Name(Get); \
	} \
	void Push_##T##_##Name(TaskQueue& q, T& o C Args) \
	{ \
		T##_##Name##_Args a = { List }; \
		q.Push( &Call_##T##_##Name, &o, &a, sizeof(T##_##Name##_Args) ); \
	} //

#define eiMethod0( R, T, Name ) \
	eiMethod_( R, T, Name, , , , , ) //

#define eiMethod1( R, T, Name, A0,N0 ) \
	eiMethod_( R, T, Name, COMMA, A0 N0, N0, A0 N0;, a.N0 ) //

#define eiMethod2( R, T, Name, A0,N0, A1,N1 ) \
	eiMethod_( R, T, Name, COMMA, \
		/*Args */ A0 N0 COMMA A1 N1, \
		/*List */ N0 COMMA N1, \
		/*Store*/ A0 N0; A1 N1;, \
		/*Get */ a.N0 COMMA a.N1 ) //
[/code]
[quote]This sounds like typical "bad" game code.[/quote]

Ouch! But point taken. That is more or less why I am writing this post.

[quote]The above collision system example should be what you strive for in all systems -- taking a large input buffer (e.g. shapes+positions) and producing a large output buffer (e.g. collision results). I'm sure you could decompose the above into a "new effect" message buffer, an "apply spell damage" message buffer, etc., etc...[/quote]

That's actually a great way of putting it: thinking of systems in terms of designated input, processing, and output is ideal. I do feel, however, that there are some elements of gameplay which naturally fit into that design ( like collision ), and other game logic that is considerably harder to implement that way. As a gameplay programmer, I have to balance what the "best" approach is with what I actually have time to do. The nature of our games is totally unpredictable, and we never start from an established genre in which the necessary systems can be determined and specced out before production. But I do like the sound of your generic messaging system, and I can see how iterative processing should be able to solve even chain-reaction game logic.


A couple questions for you though, since you have experience working this way:

1) Messages, viewed as commands for other systems, make a lot of sense. But what about "Queries", where you need to test certain state from other systems in order to determine how to behave? I am usually less concerned about being invasive with Queries, as they won't ( or shouldn't ) affect the state of the external system. But if I expose the system enough to start checking state, then I have exposed it enough to start changing state too. And it still wouldn't fly on a PS3 SPU, as you mentioned earlier.

Potentially I could send Queries just as I would send a Message, and then defer the handling of that Query. But particularly in AI, this seems like it could make certain decisions really laborious. If I have an object A that needs to check its surroundings for B objects, and then cast spell X if the Bs are in state N, having to defer that logic over two queries ( and store additional state to track which stage of query handling I am in ) is a lot more code than a few testing function calls within the scope of a single update. How do you arrange code that needs to test state?

2) One particular facet of my work often involves the "freezing" of certain systems while other systems are in particular states ( for example, AI pausing while touch input is active ). I am still considering how best to perform this "freezing". At one point, I had the active systems change state on the systems which were becoming "frozen", and then restore them afterwards. However, this approach can be quite difficult to debug when something goes wrong; there is a magic "freeze" flag that gets set, with no indication of who set it or why. So for things which are incorrectly frozen or unfrozen, tracking down the cause involves a lot of replaying the sequence. Recently I have started making the systems test the other systems themselves during update, and bail if they determine that conditions are such that they should behave as "frozen". This is much easier to debug, although it currently requires that I go traipsing around from singleton to singleton testing state. How would you handle this sort of requirement?
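For reference, the "test conditions and bail" style I've ended up with looks roughly like this ( with a hypothetical FrameContext struct standing in for the singleton lookups, which is where I'd like to get to ):
[code]
// All cross-system conditions gathered in one visible place,
// instead of each system poking at singletons.
struct FrameContext
{
	bool touchInputActive;
	// ... other "freeze" conditions ...
};

class AISystem
{
public:
	void Update(const FrameContext& ctx)
	{
		// AI behaves as "frozen" while touch input is active.
		if (ctx.touchInputActive)
			return;
		// ... normal AI update ...
	}
};
[/code]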


Thanks all for the replies!

This topic is closed to new replies.
