Input handling design

5 comments, last by ApochPiQ 12 years, 10 months ago
Hello,

I have just set up my render loop and it looks like this (simplified):
while(mIsAppRunning) {
    while(currentTime() > nextUpdateTime) {
        WindowUtilities::messagePump();
        captureInput();
        update();
    }
    renderOneFrame();
}

I am rendering as often as possible and call update() every N milliseconds (constant tick rate).

messagePump() just calls PeekMessage() in a while loop and captureInput() captures and stores the current keyboard and mouse state in a global state.
But I have problems implementing the input system. Currently I can only think of 2 systems:
1) Call a function that does all the updates (in my code above update()). So update() uses the global keyboard/mouse state and manipulates all dynamic/moving scene graph objects
2) All scene graph objects that require input have a method like Object::input(); in update() the whole scene graph is traversed, input() is called for each object, and the objects handle input themselves.

I do not like 1) very much, because it would lead to ONE big function that handles all the input/update logic.
I do not like 2) either, because it would mean the user has to derive from an abstract class in order to implement virtual void input(). Let's say my engine has a class Box for a simple box mesh. The user would then have to derive (MyBox : public Box) just to be able to implement the input/update code...

What do you think of my ideas? How do you implement input/update?

Oh, another question: do you separate processing input from the real update, or do you do both in one function? For example, the separation could look like this:
void processInput() {
    if(key == W)
        object->action = MOVE_FORWARD;
}

void update() {
    if(object->action == MOVE_FORWARD)
        object->position += ...;
}

Or do you just have one update for input processing AND the update?
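For what it's worth, that separation can be written down as a small compilable toy (all names here are mine, not from any particular engine): processInput() only translates a raw key into an intent, and update() turns the intent into a state change using the fixed timestep.

```cpp
// Hypothetical sketch of the two-phase input/update split.
enum class Action { None, MoveForward };

struct Object {
    Action action = Action::None;
    float position = 0.0f;
};

// Phase 1: map raw input to an intent; no game state changes here.
void processInput(Object& obj, char key) {
    if (key == 'w')
        obj.action = Action::MoveForward;
}

// Phase 2: apply the intent, scaled by the fixed timestep.
void update(Object& obj, float dt) {
    if (obj.action == Action::MoveForward)
        obj.position += 1.0f * dt;   // move at 1 unit per second
    obj.action = Action::None;       // consume the intent
}
```

One benefit of the split: intents can be queued, replayed, or remapped without the update code ever seeing a keyboard.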
I implemented it in a fairly contrived way. At the low level, it's probably much as you're doing it: pull the OS-specific messages out and pack them into a global, engine-specific, OS-independent structure.

At the other end of the spectrum I have some ad-hoc objects needing input.
Now, I'm not sure this is correct and I'm not advising to use this method, but just FYI.
Those objects don't ask for input. Instead, they ask for a "ValueProvider" object of some kind (they bind by string, for simplicity).
They create those value providers by passing in the data they need. The system intercepts all the calls and looks at the parameters. What it reads is stuff like
"binary result, bound to default-mouse, button0"
"dynamic float result, bound to analog-input-device, axis 1"
"binary result, bound to default-keyboard, glyph 'w' "
"binary result, bound to default-keyboard, position 'w' " (note they are not the same thing)

Those objects are at worst in the order of tens, and generally fewer.
What this separation does is to allow your "big function" to only know about a single object type while giving the clients access to input without requiring them to implement a certain interface. The big function ticks on those objects and updates them accordingly.
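A minimal sketch of that idea (my structure and naming, not Krohm's actual code): objects ask the input system for a provider bound by a string, and only the input system knows about actual devices, so clients need no special interface.

```cpp
#include <functional>
#include <map>
#include <string>

// A provider is just a polled value; the client never sees the device.
struct ValueProvider {
    std::function<bool()> read;
};

class InputSystem {
public:
    // Written by the low-level message pump / capture step.
    void setButton(const std::string& binding, bool down) {
        mState[binding] = down;
    }

    // Clients bind by string, e.g. "default-keyboard/glyph-w".
    ValueProvider makeProvider(const std::string& binding) {
        return ValueProvider{ [this, binding] {
            auto it = mState.find(binding);
            return it != mState.end() && it->second;
        } };
    }

private:
    std::map<std::string, bool> mState;
};
```

The "big function" only ever ticks the InputSystem; the tens of provider-holding objects just call read() whenever they care.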

Previously "Krohm"

I'm following a data-driven approach because I'm developing an engine with a mainly point-&-click level editor. It is similar but not identical to Krohm's approach (if I understood his post correctly).

On the game object level there is no knowledge of user input, just as there is no knowledge of other controllers like animation or geometric constraints. Instead there are typed variables (modeled as objects) with a specific meaning to the game object (or better: to some of its components, but that plays no role in this context). Such variable objects may be controlled by the game object itself or by specific controllers. Think of e.g. the placement (a variable) of a game object in the world. With no controller, the object's placement is static. With a Parenting controller (the kind of geometric relationship used for forward kinematics), the game object becomes related to a super-ordinated placement. Or with an animation as controller, the placement becomes driven by a time-dependent, pre-canned manipulation.

Control by input is a similar thing. When attaching a controller for input (what Krohm has named a "ValueProvider"), you declare that the variable is expected to be controlled by input and how input is expected to manipulate it. The latter aspect means e.g. switching between 2 extreme values (be it on/off or -1/+1 or whatever), or whether the value is continuous or discrete within a range, ... Additional aspects can be read from the controller too, e.g. a verbal description of what the input is good for. All aspects together allow the Input sub-system to determine which logical input can be bound to which physical input device, and how. This drives a generic input configuration.

I haven't been a fan of scene graphs for a while now. Instead, I think that linking objects to the sub-systems they are relevant for is the way to go, and so I do with input. Game objects that need input of any kind declare this by providing input-based controllers. Such controllers are registered with the Input sub-system when they are enabled. The Input sub-system receives physical input from the physical devices, abstracts it, and hands it over to the controller bound to that device by configuration. The controller translates the input according to the type of variable it controls, and feeds the variable with the new value. (IMHO looking at the USB HID specification gives some useful insights into abstracting input devices.)
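A toy version of the variable-plus-controller idea described above (all identifiers are mine, invented for illustration): the game object only owns a Variable<float>, and whether that variable is static, animated, or input-driven depends solely on the controller attached to it.

```cpp
#include <memory>
#include <utility>

// A controller produces the next value of the variable it drives.
template <typename T>
struct Controller {
    virtual ~Controller() = default;
    virtual T produce(T current) = 0;
};

// The game object only sees this; it never knows about input devices.
template <typename T>
class Variable {
public:
    explicit Variable(T v) : mValue(v) {}
    void attach(std::unique_ptr<Controller<T>> c) { mController = std::move(c); }
    void tick() { if (mController) mValue = mController->produce(mValue); }
    T get() const { return mValue; }
private:
    T mValue;
    std::unique_ptr<Controller<T>> mController;
};

// An input-driven controller: advances the value while a key is held.
// The bool is written by the input sub-system, read here.
struct ForwardInput : Controller<float> {
    bool* keyDown;
    explicit ForwardInput(bool* k) : keyDown(k) {}
    float produce(float current) override {
        return *keyDown ? current + 1.0f : current;
    }
};
```

Swapping ForwardInput for an animation-curve controller would change the variable's behavior without touching the game object, which is the point of the separation.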



I do not like 1) very much, because this would lead to ONE big function that handles all the input/update logics.

With the concept above there is a centralized location where input handling is processed (which is useful because of the invocation from the main loop), but the actual evaluation is distributed (and configurable). The update() is a whole other ballgame: it can often not be monolithic because of inter-object dependencies. E.g. the animation sub-system may need to be processed before the physics sub-system, and the collision sub-system may introduce a post-processing step to the update().


I do not like 2) either, because it would mean the user has to derive from an abstract class in order to implement virtual void input(). Let's say my engine has a class Box for a simple box mesh. The user would then have to derive (MyBox : public Box) just to be able to implement the input/update code...

Err, this is IMHO already a questionable basic approach: a class Box to hold a box-like mesh? And why should a mesh process input at all? Would that mean a static mesh (the mountain beneath you) receives input? IMHO a clear no-go. So I second your opinion.
Here's my main loop: separate 'Step' (game updates), 'Frame' (rendering) and 'Doinput' (input) calls.
It seems to work all right... These functions are actually function pointers, so they can be swapped out. I have three different Doinput functions so far.
The simplest is suitable for keyboard input and only maps keys (and key-up events) to functions.
The second is similar but also includes a bit of code to update a player by reading mouse movement and keys and updating Transform and Mobile objects (the latter contains tri-state toggles in the range -1 to 1).
The third is a special loop to read text input from the user.

int MainLoop( Program& aprogr ){
    int lreturn = 0;

    Context* lpcontext = aprogr.mCstack.top();

    while( lpcontext->mMode != OFF ){
        while( aprogr.mTimer.mAccum > aprogr.mTimer.mTick ){
            lreturn = lpcontext->vStep();
            lreturn = UpdateTimerStep( aprogr.mTimer );
        }
        lreturn = lpcontext->vFrame();
        lreturn = UpdateTimerFrame( aprogr.mTimer );
        lreturn = lpcontext->mfDoinput( lpcontext->mInput );

        assert( lreturn == 0 );
        lpcontext = aprogr.mCstack.top();
    }
    return lreturn; // was missing: MainLoop is declared int
}



Edit: was working on this recently and it's been updated to be more readable.
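The swappable-handler idea can be sketched like this (identifiers are illustrative, not the poster's actual code): the context stores a pointer to the current input routine, so gameplay input, text entry, etc. can be switched without branching inside the main loop.

```cpp
#include <string>

// Minimal per-frame input state, written by the message pump.
struct Input {
    char lastKey = 0;
    std::string textBuffer;
};

// Same signature as the mfDoinput call in the main loop above.
using DoinputFn = int (*)(Input&);

int GameplayInput(Input&) {
    // map keys to game actions here
    return 0;
}

int TextInput(Input& in) {
    // accumulate typed characters instead of mapping to actions
    in.textBuffer.push_back(in.lastKey);
    return 0;
}

struct Context {
    DoinputFn mfDoinput = GameplayInput; // swapped at runtime
    Input mInput;
};
```

Swapping the pointer (e.g. when a textbox gains focus) changes how the same loop interprets input, with no mode flags in the loop itself.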
Thanks for your answers!

@haegarr and Krohm: I have read your postings several times but, to be honest, I didn't get it :/ Could you please provide a little (pseudo)code snippet to show what you mean?
Sorry, I cannot reasonably do that hassle-free.
Think of it in terms of interfaces.


class ValueProviderInterface {
 public:
    virtual bool IsTriggered() = 0;
    virtual bool Trigger() = 0;
};

...

// in the manager
if (Released(btn_x)) someObject->Trigger();

It is actually more complicated than that. It works by creating two associations: the first maps a button to a memory slot; the second maps the memory slot used to store button state to the various "ValueProvider" interfaces associated with it. The code is spread around various components and wouldn't make much sense unless you're familiar with the underlying architecture. You could do with a single mapping; I used two as an implication of a previous design decision.
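A toy model of those two associations (the structure is my guess at the idea, not Krohm's code): buttons resolve to slots, and each slot fans out to the providers listening on it.

```cpp
#include <map>
#include <vector>

// Stand-in for the ValueProviderInterface above.
struct Provider {
    bool triggered = false;
};

class InputRouter {
public:
    // Association 1: button -> memory slot.
    void bindButton(int button, int slot) { mButtonToSlot[button] = slot; }
    // Association 2: memory slot -> interested providers.
    void bindProvider(int slot, Provider* p) { mSlotToProviders[slot].push_back(p); }

    // Called by the manager, i.e. the "if (Released(btn_x))" site.
    void onRelease(int button) {
        auto it = mButtonToSlot.find(button);
        if (it == mButtonToSlot.end()) return;
        for (Provider* p : mSlotToProviders[it->second])
            p->triggered = true;
    }

private:
    std::map<int, int> mButtonToSlot;
    std::map<int, std::vector<Provider*>> mSlotToProviders;
};
```

Collapsing the two maps into one (button straight to providers) would also work, as the post notes; the indirection through slots only matters if button state is stored centrally.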

When will the source tags work?

Previously "Krohm"

Since this is something of a FAQ around here, have an article :)

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

This topic is closed to new replies.
