
#### Archived

This topic is now archived and is closed to further replies.

# GDC = AI_Fest; while(GDC == AI_Fest) {CheerWildly();}

## Recommended Posts

Timkin    864
quote:
Original post by Geta
Noted, and it will be brought up in at least 2 of my roundtables: the general AI topics and the FPS specific.

Actually, I'd be interested to hear what the RT participants thought the domain of applicability was for deliberative and reactive agents. Do they think these domains differ? Is one paradigm more suited to FPS, while the other more suited to RPG, for instance?

Timkin

##### Share on other sites
quote:
Original post by Timkin
Actually, I'd be interested to hear what the RT participants thought the domain of applicability was for deliberative and reactive agents. Do they think these domains differ? Is one paradigm more suited to FPS, while the other more suited to RPG, for instance?

Careful that you don't overestimate the RT attendees. Why do you think we want you guys there so badly?

Dave Mark - President and Lead Designer
Intrinsic Algorithm -
"Reducing the world to mathematical equations!"

##### Share on other sites
alexjc    457
Timkin, ok. I see your perspective now. An SPA architecture is modular in the same way as a rule-based system is; you have components that may be designed and implemented separately (e.g. working memory, rule base, and the interpreter). While it's arguable whether that's monolithic or not, take away one component and the whole system collapses and produces nothing useful. I wouldn't class most sense-plan-act architectures as distributed (in terms of functionality) because the components are so tightly coupled. If you had two parallel planners, then you'd have traces of distributedness.

I too would be interested in hearing the roundtable participants' opinions. But I doubt I'd be surprised by the reply... The number of games I know that use a generic planner (not for paths) I can count on one hand. I know I've missed some, but that's still not very many.

So, four more statements, Timkin:

1. A reactive behavior based on sensory input is equivalent to a plan of length 1
2. A planner generates non-deterministic actions based on sensory input only
3. A planner is a deterministic mapping from current state S and sensory input I to the resulting action A
4. A reactive technique can achieve the same mapping (S+I->A) as a planner using memory instead of computation

I realise that last point is a bit more controversial, but it's discussed at length in the Sutton and Barto book.
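To make statement 4 concrete, here is a minimal Python sketch (a made-up one-dimensional corridor domain, not from any game): a breadth-first planner computes the first action of a shortest plan on demand, while a reactive table stores the very same state-to-action mapping up front.

```python
# Toy corridor: cells 0..6, goal is cell 3. All names invented for illustration.
from collections import deque

GOAL = 3
ACTIONS = {"left": -1, "right": +1}

def plan_first_action(state):
    """Deliberative: breadth-first search for a shortest plan, return its first step."""
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        s, steps = frontier.popleft()
        if s == GOAL:
            return steps[0] if steps else None   # already at the goal
        for name, delta in ACTIONS.items():
            nxt = s + delta
            if 0 <= nxt <= 6 and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None

# Reactive: the identical (S+I)->A mapping, paid for in memory instead of search time.
POLICY = {s: plan_first_action(s) for s in range(7)}

def react(state):
    return POLICY[state]   # O(1) lookup, no search at run time
```

Both agree on every state; the only difference is when the work is done.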

Alex

[edited by - alexjc on March 5, 2004 7:39:31 AM]

##### Share on other sites
Geta    136
quote:
Original post by alexjc
I too would be interested in hearing the roundtable participants' opinions. But I doubt I'd be surprised by the reply... The number of games I know that use a generic planner (not for paths) I can count on one hand. I know I've missed some, but that's still not very many.

I would agree. Just to restate what I have said many times before: computer game AI (especially at the commercial level) is about the illusion of intelligence, not about the underlying processes that form a solid foundation for the methods involved. When every nanosecond and cycle is counted, getting the most for the least is often more important than getting the foundations right.

Anyway, you can count on me to be sure the attendees at my RTs get a chance to discuss these issues.

Eric

##### Share on other sites
Guest Anonymous Poster
This year's GDC will be great for AI.
GDC 2004 will welcome, for the first time, two AI middleware products:
- RenderWareAI (Kynogon)
- AI-Implant (Bio Graphics Tech)
That is a sign.
William Tambellini

##### Share on other sites
BrianL    530
Use of a planner is more about keeping the system simple and easy to modify than about what the player sees.

It is very possible (and reasonable!) to build an FSM or HFSM that accomplishes all of the behaviors specified for a typical game AI. The more behaviors start depending on each other, though, the more unmanageable this system becomes.

Let's say we had a few atomic states that implemented behaviors. These states don't know about other states at all. They only implement a behavior internally, and set a flag when they are done:

State_Attack // Handles using a weapon
State_Run // Handles moving from one location to another
State_Draw // Handles equipping a weapon
State_Holster // Handles unequipping a weapon
State_Greet // Handles saying hello

Using these basic states, we have another set of states. These states are higher level; they control the flow of the low-level behavior states. Instead of worrying about the implementation of a behavior, they worry about when it is executed, and what dependencies there are. Let's call them goals to keep them separate:

Goal_Attack
    if NoWeaponInHand and HasWeaponHolstered
        EnterState(State_Draw)
    else if WeaponEmpty and HasAmmo
    else
        EnterState(State_Attack)

Goal_Greet
    if WeaponInHand
        EnterState(State_Holster)
    else
        EnterState(State_Greet)

Now, the problems start coming up as more variations of the goals are introduced; we end up with large amounts of very similar flow-control code in different goals. Instead of 5 lines of it, it tends to be 100+ per goal. It also becomes challenging to use derivation, as exceptions in the flow control become very difficult to work with.

Using this sort of FSM-based system for behaviors and flow control also happens to be _very_ fast to execute; the only disadvantage is that modifications later on become more difficult.

If a planner is used, the goals simply set a desired end state to accomplish, and the planner generates that flow control. This can be quite a bit slower, but it makes major modifications simple.
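As a minimal sketch of that idea (the precondition/effect table below is invented for illustration, not any shipping engine's API), a goal can be just a desired end condition, with a breadth-first search generating the Draw -> Attack flow that the hand-written goals hard-code:

```python
# Hypothetical action table: each entry is (preconditions, effects).
from collections import deque

ACTIONS = {
    "Draw":    ({"weapon_holstered": True},  {"weapon_in_hand": True, "weapon_holstered": False}),
    "Attack":  ({"weapon_in_hand": True},    {"target_dead": True}),
    "Holster": ({"weapon_in_hand": True},    {"weapon_in_hand": False, "weapon_holstered": True}),
    "Greet":   ({"weapon_in_hand": False},   {"greeted": True}),
}

def plan(state, goal):
    """Breadth-first search over world states; returns a shortest action list."""
    satisfied = lambda s: all(s.get(k) == v for k, v in goal.items())
    frontier = deque([(state, [])])
    seen = {frozenset(state.items())}
    while frontier:
        s, steps = frontier.popleft()
        if satisfied(s):
            return steps
        for name, (pre, eff) in ACTIONS.items():
            if all(s.get(k) == v for k, v in pre.items()):
                nxt = {**s, **eff}
                key = frozenset(nxt.items())
                if key not in seen:
                    seen.add(key)
                    frontier.append((nxt, steps + [name]))
    return None
```

Adding a new action only extends the table; the hand-written Goal_* flow control would need edits in every goal that could use it.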

Overall, planners are wonderful tools, but that doesn't make them ideal. There are a massive number of games out there which probably don't need (or want) that level of complexity. I am very glad I am working with one, but the AI systems I work on are shared between two projects, get fairly massive, and are reused for multiple games; not exactly the norm.

##### Share on other sites
Timkin    864
quote:
Original post by alexjc
If you had two parallel planners, then you'd have traces of distributedness.

Okay, I can see that our definitions of 'distributed' vary slightly... I'm not going to worry too much about that, since I think we both know what the other means now...

Alex, I understand that you know most of the stuff I'm about to say... this is more for the benefit of anyone reading along who doesn't know the terminology...

quote:

A reactive behavior based on sensory input is equivalent to a plan of length 1

Functionally, yes.
quote:

A planner generates non-deterministic actions based on sensory input only

No, a planner also requires a model of the domain in order to select the sequence of actions with highest value, according to some measure. I'm not sure what you mean by a 'non-deterministic action'? Do you mean that a planner returns a set of actions from which one can be actuated after a random choice? Or do you mean something else?

quote:

A planner is a deterministic mapping from current state S and sensory input I to the resulting action A

That's typically called a policy, or conditional plan, since for every state in the domain (which can include sensory states), there exists a function, f(s), such that a <- f(s), where a is an element of the set of possible actions, A.
quote:

A reactive technique can achieve the same mapping (S+I->A) as a planner using memory instead of computation

Yes, I would certainly agree with that [1]. Planners (other than policy generators) are typically designed to produce plans at run time, based on the latest information available. This is in part due to the fact that planners are often utilised in domains where the state space and associated information is too large to store in memory.

Reactive agents are typically expressions of conditional plans, even though there may not be an explicit list of state-action pairs. There may be a function of the form f() (from above) that, given the state, produces the action - known as the agent function.

Unless the reactive agent's function is globally optimal, its plans (the sequence of actions that achieve the goal given the starting state) are typically suboptimal, and no guarantees can be drawn as to the likelihood of a globally optimal sequence given a locally optimal choice.

quote:

I realise that last point is a bit more controversial, but it''s discussed at length in the Sutton and Barto book.

An excellent book!

[1]: One must consider that, most often, reactive agents implement sub-optimal policies, particularly if they implement an agent function. For example, consider an ANN controller (a very commonly implemented agent function). If the ANN is trained on only a subset of the state space, then it will almost certainly be a sub-optimal classifier for the entire state space. Attempts to learn an optimal agent function from limited state space knowledge are essentially fruitless. It has been shown that the optimal function is theoretically possible but computationally intractable (I believe Marcus Hutter has published several papers on the issue of optimal agents for limited and infinite state space horizons).

Mmm, I think I went off on a bit of a tangent there!

Cheers,

Timkin

##### Share on other sites
alexjc    457
quote:

No, a planner also requires a model of the domain in order to select the sequence of actions with highest value, according to some measure.

I'm considering the planner as a black box, as if it were a reactive technique but with a state, goal and world model built in sneakily.

This was part of my first points: that both reactive and deliberative techniques produce sequences of actions, regardless of how they are implemented. The rest of my argument assumes this, so I hope you're comfortable with this.

quote:

I'm not sure what you mean by a 'non-deterministic action'? Do you mean that a planner returns a set of actions from which one can be actuated after a random choice?

My definition of non-deterministic isn't too exotic! I mean that you can't expect the same deterministic action if you feed your planner with the same input values at two different points in time, specifically because it has an internal state.

You seem very keen to point out suboptimality of reactive techniques. There's no theoretical reason for it; usually, they are suboptimal by design. And this is the major reason why they are so well suited to games. The designer doesn't necessarily have an optimal plan in mind anyway, so all he has to do is adjust an approximate behavior until it is acceptable. That's guaranteed to provide the best ratio of results to processing power, and it explains why they are used so often in games.

A second point is that sense-plan-act techniques are active processes, which is why they do well at goal directed tasks. However, this also means that they are not particularly efficient as data driven architectures (passive instead), where stimuli from the environment are events that must be taken into account. You can do this very efficiently with reactive architectures, using (so called) "asynchronous" message handlers.

This gets back to my original point. If you were going to design a complex AI in a game, using a sense-plan-act model up front is shooting yourself in the foot. You'd set up a reactive architecture at first, which can deal with the variety of stimuli and events from the environment efficiently. Then, if a reactive AI technique takes too much memory or effort to create, you substitute that component with a planner instead (e.g. A*).

Alex

AiGameDev.com

##### Share on other sites
Timkin    864
quote:
Original post by alexjc
The rest of my argument assumes this, so I hope you're comfortable with this.

I'm happy to think of planning in this way...

quote:

My definition of non-deterministic isn't too exotic! I mean that you can't expect the same deterministic action if you feed your planner with the same input values at two different points in time, specifically because it has an internal state.

Okay, but presumably, if the internal states were the same and the stimulus were the same, the outputs should be the same for a deterministic plan generator and not necessarily the same (i.e., only the same in a statistical sense) for a non-deterministic plan generator. Is this what you meant?

quote:

You seem very keen to point out suboptimality of reactive techniques. There's no theoretical reason for it

Actually, there is. I'll have to dig out some references for you, but it's an accepted (in the planning community) and mathematically provable fact that local optimality in no way guarantees global optimality. What this means is that no local selection of action based on local stimulus can guarantee the satisfaction of a global goal (i.e., something that presumably requires a sequence of actions to achieve). It should be intuitive to you that this is the case. Local information cannot tell you about future states UNLESS you have a globally optimal model of the domain from which you derive an optimal state-action mapping (in which case, that derivation is a deliberative action of generating a policy).

Off the top of my head, Agre & Chapman (circa 1980 I think) might be a place to start. I think the original proofs might have come from dynamic programming (value and policy iteration), but my memory is a little fuzzy in that respect. I'll follow this up during the week and let you know.
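For a concrete toy of that point (a made-up three-state chain, not taken from any of the references above), a greedy local choice grabs an immediate reward, while value iteration, using a global model of the domain, prefers the delayed larger one:

```python
# Made-up three-state chain. transitions[state][action] = (next_state, reward).
transitions = {
    0: {"grab": ("end", 1.0), "walk": (1, 0.0)},
    1: {"walk": (2, 0.0)},
    2: {"walk": ("end", 10.0)},
}

def value_iteration(gamma=0.9, iters=50):
    """Deliberative: back values up through the whole model, then act greedily on them."""
    V = {s: 0.0 for s in transitions}
    V["end"] = 0.0
    for _ in range(iters):
        for s in transitions:
            V[s] = max(r + gamma * V[nxt] for nxt, r in transitions[s].values())
    return {s: max(transitions[s],
                   key=lambda a: transitions[s][a][1] + gamma * V[transitions[s][a][0]])
            for s in transitions}

# Purely local choice: pick whichever action has the best immediate reward.
greedy = {s: max(transitions[s], key=lambda a: transitions[s][a][1]) for s in transitions}
optimal = value_iteration()
```

The greedy agent takes the +1 at state 0 and never sees the +10; deriving `optimal` required the global model, which is exactly the deliberative step of generating a policy.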

quote:

However, this also means that they are not particularly efficient as data driven architectures (passive instead), where stimuli from the environment are events that must be taken into account.

That is only true insofar as your 'canned plans' (or the plan steps available to the planner) are able to handle such events. The first step of sense-plan-act involves perception of events in the environment and the effects these have on the agent's state. I think you're still thinking of such systems in terms of older implementations that would sit and deliberate about whether they can still enact the current action of their plan given their currently perceived state. Goal-directed action does not mean that an agent blindly follows its plan until the plan breaks (at which time it would replan). Indeed, within the planning community today, goal-directed action means continuously re-evaluating the value of a plan given a current state and a goal state, and making alterations accordingly. A good example of this is schedule debugging, wherein the scheduler must adjust its schedules dynamically based on events within its domain. These events may have been caused indirectly by the agent, or may have originated from causes within the environment.

quote:

You can do this very efficiently with reactive architectures, using (so called) "asynchronous" message handlers.

I'm not disagreeing with (or debating) what you can and cannot do with reactive architectures. As I have said previously, they have their place... as do deliberative systems. Obviously context and application are the important factors in choosing one technique over the other.

quote:

Then, if a reactive AI technique takes too much memory or effort to create, you substitute that component with a planner instead (e.g. A*).

...and yet we see so many commercial games where this is not the case... A* is used for pathfinding far more often than reactive pathfinders!

The problem that is faced when trying to implement a reactive system is that...

quote:

You'd set up a reactive architecture at first, which can deal with the variety of stimuli and events from the environment efficiently.

...is by no means a trivial task. Indeed, it is often impossible to determine how the individual components of such a system will react together in all but a small subspace of the full state space. Furthermore, it is VERY difficult to partition the state space into a set of orthogonal dimensions such that each reactive component deals with a single state (or subset of states) and thus does not affect any other state when making action recommendations. Not being able to do this means that actions have nonlinear effects on state transitions that are not easily identified without a LOT of training instances.

I would be happy for you to prove me wrong, since it would mean that reactive agents could deal with complex real world domains. Unfortunately, the evidence of the current state of the art in reactive systems is that they only work well in contrived, simplified environments (I'm not trying to say that deliberative systems are necessarily any different, although there are very good examples of deliberative agents working successfully in the real world). What this means is that, for the time being at least, our games need to be contrived and simplified if we're to use reactive agents in them with any sort of guarantees as to the behavioural characteristics of these agents.

Cheers,

Timkin

##### Share on other sites
alexjc    457
To clarify, there are two major points of contention:
1) How to design an AI architecture (sense-plan-act vs. distributed reactive components)
2) Technique selection (deliberative vs. reflexive approaches)

quote:
Original post by Timkin
Okay, but presumably, if the internal states were the same and the stimulus were the same, the outputs should be the same for a deterministic plan generator and not necessarily the same (i.e., only the same in a statistical sense) for a non-deterministic plan generator. Is this what you meant?

If your planner is non-deterministic in terms of inputs+state, then you've forgotten to model something! There's a hidden variable that's making it non-deterministic. (Note: stochastic is a different issue, but you can consider the RNG as part of the "state" too.)

OK, so just remember: a planner is a deterministic mapping from state+inputs to output.

quote:

but it's an accepted (in the planning community) and mathematically provable fact that local optimality in no way guarantees global optimality

Aha, but who said local? With reactive techniques, you can do A) input to output mapping -- which is an approximation, granted. Or B) mapping input+state to output. In this second case (reactive planning, I believe it's called), there are no theoretical reasons for suboptimality. In fact, SOAR is based on such ideas.

Again, this is important. Understand that a reactive technique mapping state+inputs to outputs will just use more memory to get to the same result as a planner, which will use computation instead. (Do read Sutton & Barto.)

quote:

That is only true insofar as your 'canned plans' (or the plan steps available to the planner) are able to handle such events.
...

Hmmm... it's not about being able to handle the problem in theory. I think by now we've agreed it's fundamentally the same solution regardless of the technique.

While I acknowledge that experts in probabilistic planning can do a good job of making their stuff efficient in theory, I'm talking about implementation: how you get your data into the system, and basically get the highest throughput.

Again, for this purpose using a reactive architecture is more efficient since they are passive, and react to stimuli when the game engine decides it's convenient. The sense-plan-act approach actively acquires data, so it's actively gathering stuff instead (you call it the "perception" phase). Once you've decided to use the S-P-A, you're stuck with it. But if you write your system to be able to deal with incoming data passively using handlers (e.g. OnPlayerAppear, OnWeaponFire), it'll be more efficient and then you can always include a nested S-P-A architecture if you so desire...
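A minimal sketch of that handler style (OnPlayerAppear/OnWeaponFire are the post's hypothetical names; the Engine class here is invented for illustration, not a real engine API):

```python
# Invented classes for illustration; no real engine API implied.
class ReactiveAgent:
    def __init__(self):
        self.alert = False
        self.log = []

    def on_player_appear(self, player):      # passive: runs only when notified
        self.alert = True
        self.log.append("spotted " + player)

    def on_weapon_fire(self, shooter):
        self.log.append("heard " + shooter + " fire")

class Engine:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event, fn):
        self.handlers.setdefault(event, []).append(fn)

    def dispatch(self, event, *args):        # called when the engine finds it convenient
        for fn in self.handlers.get(event, []):
            fn(*args)

agent = ReactiveAgent()
engine = Engine()
engine.subscribe("player_appear", agent.on_player_appear)
engine.subscribe("weapon_fire", agent.on_weapon_fire)
engine.dispatch("player_appear", "player_1")
```

Between dispatches the agent does no work at all; the engine chooses when events are delivered.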

It's for the same reasons that GUI toolkits are based on the signal/slot pattern: every widget gets notified in a timely fashion.

quote:

...and yet we see so many commercial games where this is not the case... A* is used for pathfinding far more often than reactive pathfinders!

First, I'm not sure about your assertion that steering behaviors are less popular than A*. Secondly, you'll find that game developers often use path-lookup tables, which are essentially reactive approximations of the search. And thirdly, it's just a planner component; it says nothing about using sense-plan-act as a design paradigm for the architecture.

Actually, I'm glad you brought this up because it's a perfect example of how reactive approaches are better in games. If you can apply steering behaviors, you get almost constant overhead instead of your A* search. Second, if you use lookup tables (reactive planning), you also get more efficiency by avoiding a search. Only after these two options fail do you consider a planner.
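A minimal sketch of the path-lookup-table idea (graph and node names invented): precompute, for every (node, goal) pair, the next node of a shortest path, so run-time navigation is a chain of O(1) lookups rather than a search.

```python
# Invented graph; node names are arbitrary.
from collections import deque

graph = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"], "D": ["B", "E"], "E": ["D"]}

def build_next_hop(graph):
    """Precompute the next hop of a shortest path for every (node, goal) pair."""
    table = {}
    for goal in graph:
        dist = {goal: 0}                    # BFS outward from the goal
        q = deque([goal])
        while q:
            n = q.popleft()
            for m in graph[n]:
                if m not in dist:
                    dist[m] = dist[n] + 1
                    q.append(m)
        for n in graph:
            if n != goal:
                # next hop = the neighbour closest to the goal
                table[(n, goal)] = min(graph[n], key=lambda m: dist[m])
    return table

NEXT = build_next_hop(graph)                # the "plan" is now just memory

def walk(start, goal):
    path, n = [start], start
    while n != goal:
        n = NEXT[(n, goal)]                 # O(1) per step, no search
        path.append(n)
    return path
```

The table trades memory (one entry per node-goal pair) for the search time A* would spend per query.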

quote:

...is by no means a trivial task. Indeed, it is often impossible to determine how the individual components of such a system will react together in all but a small subspace of the full state space.
[...]
I would be happy for you to prove me wrong, since it would mean that reactive agents could deal with complex real world domains.

You're considering this from an AI point of view. It's not; it's software design; splitting problems into manageable sizes is what developers have been doing for decades. We're not trying to create an AI agent that works well on huge unidentified problems. We're writing AI bots for relatively small, well-identified tasks within a game simulation. And that is a task that's much easier, and perfectly suited to being split up into modular components. Most game AI systems do this: navigation, decision making...

AiGameDev.com

##### Share on other sites
Timkin    864
I'll tackle the rest of your post tomorrow (it's after midnight now)... but I just wanted to point out two things before I hit the sack...

quote:

Aha, but who said local? With reactive techniques, you can do A) input to output mapping -- which is an approximation, granted. Or B) mapping input+state to output.

At least within the planning community, reactive planning is assumed to utilise only local knowledge. If you utilise global knowledge, then you're doing deliberative planning. Perhaps it's another terminology issue in this discussion...

quote:

The sense-plan-act approach actively acquires data, so it's actively gathering stuff instead (you call it the "perception" phase). Once you've decided to use the S-P-A, you're stuck with it. But if you write your system to be able to deal with incoming data passively using handlers (e.g. OnPlayerAppear, OnWeaponFire), it'll be more efficient and then you can always include a nested S-P-A architecture if you so desire...

You're thinking only of S-P-A in terms of finite interval iterative implementations. Certainly, early work in filtering and dynamic belief networks (used for modelling and inference in dynamic processes and dynamic decision problems) utilised finite time step updates of the system, however this is not the case today. S-P-A implementations don't require continuous iterative updates and can easily be written to accept data infrequently based on environmental events (stimulus).

Tomorrow I'll take another read of your post and get back to you...

(btw, is anyone else reading along with this discussion? Does anyone else have any input?)

Cheers,

Timkin

[edited by - Timkin on March 16, 2004 8:24:02 AM]

##### Share on other sites
Geta    136
I am reading it while compiling. But have no time to comment. Got to get the AI in the game done.

Eric

##### Share on other sites
Seems more like an extended definition of terms than a discussion of how to do something.

Dave Mark - President and Lead Designer
Intrinsic Algorithm -
"Reducing the world to mathematical equations!"

##### Share on other sites
BrianL    530
quote:

Actually, I'm glad you brought this up because it's a perfect example of how reactive approaches are better in games. If you can apply steering behaviors, you get almost constant overhead instead of your A* search. Second, if you use lookup tables (reactive planning), you also get more efficiency by avoiding a search. Only after these two options fail do you consider a planner.

1) Steering does not guarantee that the agent will either get to a destination or fail. In fact, there is no such thing as a destination, as even the idea of a destination (in my opinion) suggests an S-P-A system.

2) How are lookup tables any different than A*? In both, the agent is making the decision 'I want to get to location X'. The planner simply generates it, while the lookup table is precomputed. This is just an implementation detail.

How would you classify a system that operated like this, updated either every frame or every tenth of a second?

OnUpdateBehavior() {
    UpdateSensors()
    ValidBehaviorListBasedOnInternalState = GenBehaviors()
    sortByUtility(ValidBehaviorListBasedOnInternalState)
    for behavior in ValidBehaviorListBasedOnInternalState:
        if Utility(behavior) < Utility(CurrentBehavior):
            # Failed to find a better behavior, continue current
            break
        if CanExecute(behavior):
            # Found a superior behavior, execute it.
            CurrentBehavior = behavior
    CurrentBehavior.Update()
}

In my mind, this would be a deliberative system, because it is caching information in the UpdateSensors() call which may be used this frame, or any later frame.

A reactive version would be mapping a stimulus more directly to a behavior:

OnUpdateBehavior() {
    stimulus = GetBestStimulus()
    CurrentBehavior = BehaviorMap[stimulus]
}

Yes, I am simplifying and am biased, but I am curious how your definition would be different.

- Updated to fix tags

[edited by - BrianL on March 16, 2004 11:18:17 AM]

##### Share on other sites
alexjc    457
quote:
Original post by BrianL
In fact, there is no such thing as a destination,

The seek steering behavior uses destinations, and combined with wall-hugging behaviors it reaches the destination in a great majority of layouts.

quote:

as even the idea of a destination (in my opinion) suggests an S-P-A system

Not at all. Just because you use some internal knowledge doesn't make it an S-P-A architecture. Also note that using an internal state means the approach is no longer purely reactive as a whole (though individual components may be).

quote:

2) How are lookup tables any different than A*?

It's more than an implementation detail; it's fundamental theory. In one case, you have a decision available immediately (reactive); in the other, you have to search for a decision (deliberative).

quote:

How would you classify a system that operated like this, updated either every frame or every tenth of a second?

The fact that you're "pulling" the data from the environment with UpdateSensors shows traces of the "S" in S-P-A, but it really depends on what your behaviors are actually doing. It could be some sort of subsumption architecture...

(Edit: trying to get the sig right. Where's the list of legal tags!!)

AiGameDev.com

[edited by - alexjc on March 16, 2004 11:37:44 AM]

##### Share on other sites
BrianL    530
I think we are arguing semantics; specifically, what the definitions of reactive and deliberative are. One of your comments, though, jumped out at me:

quote:

In one case, you have a decision available immediately (reactive); in the other, you have to search for a decision (deliberative).

Is that the difference, for you, between an agent based on deliberative behaviors and one based on reactive behaviors? Any time a search is involved, the agent is deliberative? If there is no search, it is reactive?

In the end, it seems like the fuzzy area between strictly deliberative and strictly reactive agents will be the best solution for most problems. It seems like SPA itself could be implemented as a reactive system though, so I am unsure if the elements in this argument are even lined up against each other.

For instance:

Sense: Find the nearest wall
Plan: Apply the ruleset for combining destination and wall
Act: Move

Would this be an S-P-A that is reactive? What change would be required to make it reactive?

(...and I am glad I am not the only person continuously messing up tags! )

##### Share on other sites
alexjc    457
quote:
Original post by BrianL
Is that the difference, for you, between an agent based on deliberative behaviors and one based on reactive behaviors?

Any time a search is involved, the agent is deliberative? If there is no search, it is reactive?

Reactive behaviours are generally local. But an agent based on reactive techniques uses immediate lookup instead of search. Finally, a reactive architecture is event driven rather than query based.

quote:

Would this be an S-P-A that is reactive? What change would be required to make it reactive?

That's a reactive technique, because it seems you're not planning. I wouldn't call that an S-P-A for the same reason.

But doing the sense-plan-act in that order means your architecture is not event driven (not a reactive architecture).

What's the tag to get smaller fonts for my footer?

AiGameDev.com

##### Share on other sites
BrianL    530
quote:

But doing the sense-plan-act in that order means your architecture is not event driven (not a reactive architecture).

Something has to generate those events; something external to the AI system can generate them, or the AI can monitor the world and generate events internally.

External events (agent took damage) are nice and easy to treat as events which potentially trigger a state change in a reactive system.

Other events seem more problematic; how would you handle 'AI just received line of sight to an enemy because the enemy turned the corner'? Something like this would have to be monitored by 'something'. It may be a system internal to the AI, or the AI could request an event from a shared visibility system when this occurs. Either way, something is in a sense loop at this point, checking for this visibility continuously.

In the context of event-driven vs query-driven agents, any deliberative system could be converted to a reactive system by moving all of the sensing to exterior systems, provided those systems offered arbitrarily complex queries which an AI could request, and which were translated into events sent to the AI, which then reacted to them.

Is that correct? If so, how is this an advantage over allowing each AI to do this on its own?

##### Share on other sites
alexjc    457
quote:
Original post by BrianL
Other events seem more problematic; how would you handle 'AI just received line of sight to an enemy because the enemy turned the corner'? Something like this would have to be monitored by 'something'.

I'm glad you brought that up. I think it's a very important issue.

Having the AI query for that information is like an annoying user; it just requires a small amount of information for itself, and is oblivious to the rest of the system. It's inefficient because the queries are dispatched at the convenience of the AI, and require immediate attention.

On the other hand, an AI that deals with data when it is provided is like a patient user that understands that there's an entire game to be run on the same processor. This is more efficient because the game engine can decide when it's most convenient to compute the information and pass it to the AI.

Who would generate the events? It could be the physics engine, or possibly the game logic update. But it doesn't matter; decide that along with the engine programmers. If they decide how to do it, it'll be more efficient as a batch process -- instead of one-off queries. And all it takes is for you to design your AI as a reactive architecture...
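A minimal sketch of that batch idea (all names hypothetical): one shared visibility system checks every registered (observer, target) pair in a single pass per tick and pushes edge-triggered "sight gained" events, instead of each AI polling on its own.

```python
# All names invented for illustration.
class VisibilitySystem:
    def __init__(self, can_see):
        self.can_see = can_see              # injected LOS test (e.g. a raycast)
        self.watch = []                     # (observer, target, callback) requests
        self.known = {}                     # last visibility result per pair

    def request(self, observer, target, callback):
        self.watch.append((observer, target, callback))
        self.known[(observer, target)] = False

    def tick(self):                         # one batch pass, run when convenient
        for obs, tgt, cb in self.watch:
            visible = self.can_see(obs, tgt)
            if visible and not self.known[(obs, tgt)]:
                cb(tgt)                     # edge-triggered: sight just gained
            self.known[(obs, tgt)] = visible

los_pairs = set()                           # stand-in for real geometry queries
system = VisibilitySystem(lambda obs, tgt: (obs, tgt) in los_pairs)
sightings = []
system.request("guard", "player", sightings.append)
system.tick()                               # player not visible yet, no event
los_pairs.add(("guard", "player"))          # the enemy turns the corner
system.tick()                               # event fires exactly once
```

The engine decides when `tick()` runs, and each AI only does work in its callback.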

quote:

In the context of event-driven vs query-driven agents, any deliberative system could be converted to a reactive system by moving all of the sensing to exterior systems, provided those systems offered arbitrarily complex queries which an AI could request, and which were translated into events sent to the AI, which then reacted to them.

Exactly. You can nest an SPA within the reactive architecture by storing the data you need, and still keep the advantages I listed above.

I'm glad we're on the same page! What shocked me about Brian's original roundtable summary was that he separates the "phases" sense-plan-act, which implies queries rather than an event-driven approach. (There are no distinct phases in a reactive architecture.)

Any ideas how to print this signature with a smaller font?

AiGameDev.com
• The Book - Synthetic Creatures with Learning and Reactive Behaviors

• Tutorial Series - Exercises in Game AI Programming

• (Edited by mod to adjust font size...Alex, edit this post to see how its done...)

[edited by - Timkin on March 17, 2004 6:24:08 AM]

##### Share on other sites
Timkin    864
Great to see other people wading into the conversation!

Okay, Alex, I finally think I understand the issue now... I've been talking about reactive planning vs deliberative planning and how these differ in terms of SPA, while you've been talking about reactive agents (in terms of information gathering) and how these differ from SPA. There are some subtle and some glaring differences between reactive agents (by your definition) and reactive planners, the most obvious being your notion of a reactive agent: an agent that deals with a stimulus when it is presented to it, rather than actively acquiring it as part of an iterative algorithm.

This is different to a reactive agent that acquires a stimulus from the environment and reacts to that stimulus with no consideration of the global optimality of the action choice (though perhaps with local consideration). This is how the planning community would think of a reactive agent/planner. Thus, both an SPA agent and your reactive agent could be reactive planners (where the SPA agent uses a plan of length one and only local information about the domain, gleaned from the stimulus).

So, now I might be able to talk about your notion of a reactive agent in terms of SPA and not confuse the situation by thinking that we're talking about reactive planners!

What's your biggest bone of contention with SPA? That it's iterated and involves polling the environment? I can easily envisage an agent that utilises SPA yet fits your notion of reactive (i.e., it continues with its current plan/action until a stimulus is provided by the environment, at which time it evaluates the effects of the stimulus on its plan to determine whether changes are required... the plan is certainly just part of the internal state of the agent, since it represents a desire to achieve some goal, to steal from the BDI literature).
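A minimal sketch of what I mean (hypothetical names, not code from any real system): the agent executes its current plan until the environment hands it a stimulus, and replans only when the stimulus invalidates the plan.

```cpp
#include <deque>
#include <string>
#include <utility>

// Hybrid SPA/reactive agent: event-driven sensing, deliberative plan kept as
// internal state, replanning triggered only by plan-invalidating stimuli.
class SpaAgent {
public:
    void setPlan(std::deque<std::string> steps) { plan_ = std::move(steps); }

    // Event-driven entry point: called by the environment, never by polling.
    void onStimulus(const std::string& stimulus) {
        if (invalidatesPlan(stimulus)) {
            plan_.clear();
            plan_.push_back("replan");   // stand-in for a real planner call
        }
    }

    // Execute the next step of whatever plan is current.
    std::string act() {
        if (plan_.empty()) return "idle";
        std::string step = plan_.front();
        plan_.pop_front();
        return step;
    }

private:
    bool invalidatesPlan(const std::string& s) { return s == "enemy-seen"; }
    std::deque<std::string> plan_;
};
```

A harmless stimulus leaves the plan untouched; only a plan-invalidating one forces deliberation.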

I don't think SPA is the antithesis of reactive (in your sense) in this problem scenario; I think that iterative polling is. However, whether that polling is done by the environment or by the agent, it's still happening... the responsibility has just shifted. Why is shifting responsibility for monitoring the occurrence of events onto the environment a better solution in terms of software engineering? You still have to do the work...

Unless, of course, you can tell me how we get stimuli from events without a polling mechanism?

Getting back to the SPA 'reactive' agent... why would we want our reactive agents to NOT use global information and deliberate about the global effects of a stimulus just acquired (other than the obvious computational load)?

Cheers,

Timkin

##### Share on other sites
alexjc    457
quote:
Original post by Timkin
What's your biggest bone of contention with SPA? That it's iterated and involves polling the environment?

My problem is with the sequence:

Sense(); Plan(); Act();

It's OK for systems that have only one agent, but for games, separating these phases is just an incredibly rigid approach... Polling doesn't help either. More below.

quote:

I don't think SPA is the antithesis of reactive (in your sense) in this problem scenario; I think that iterative polling is.

I've always seen SPA linked with polling, as it originally was, and the papers I've read that fix the problem use reactive approaches and call their systems hybrid... so pure SPA is evil!

quote:

Why is shifting responsibility for monitoring the occurrence of events to the environment a better solution in terms of software engineering? You still have to do the work...
Unless, of course, you can tell me how we get stimuli from events without a polling mechanism?

I contend that it's much easier to design a system using the event driven approach. Yes, many things can be expressed as events very easily with few changes in the code. In the other cases, it's just a matter of reorganizing code, but it gives you more flexibility.

Ok, here are some reasons:

• Fine-grain control. Event handlers are coded for very specific purposes. As a consequence, functionality is divided naturally over many functions. This allows you to control execution of the AI better. If an NPC is far outside the player's range, you just don't send it messages. If it's nearby, you send it only sound events. As it gets closer, you can enable more and more of the AI functionality incrementally by deciding what messages should be sent. With SPA polling, you end up with a coarse 'if (active) Sense();' or really ugly control flow within the sensing procedure.

• Decoupling of code. With messages, your AI doesn't depend on the engine as rigidly. During development, you can enable and disable messages, and the AI would work. You also get the advantages of hot-pluggable AI if you code it right... just more flexibility basically, but that applies to software in general.
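To illustrate the level-of-detail point in the first bullet (the thresholds and event kinds here are made up for the example):

```cpp
#include <string>
#include <vector>

// Illustrative level-of-detail gate: which event kinds an NPC receives
// depends on its distance to the player, so far-away agents cost nothing.
std::vector<std::string> eventsFor(double distance) {
    std::vector<std::string> kinds;
    if (distance > 100.0) return kinds;       // dormant: no messages at all
    kinds.push_back("sound");                 // within range: coarse audio only
    if (distance < 30.0) kinds.push_back("vision");
    if (distance < 10.0) kinds.push_back("smell");
    return kinds;
}
```

The dispatcher consults this once per NPC; the NPC's own code never changes as detail is dialled up or down.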

quote:

Getting back to the SPA 'reactive' agent... why would we want our reactive agents to NOT use global information and deliberate about the global effects of a stimulus just acquired (other than the obvious computational load)?

If you're going to ignore computational load, I'd have each planner consider actions globally, but within the internal world model of each NPC. So the planning would reflect beliefs (BDI), personality, etc. Yeah, there's nothing wrong with that.

In fact, I've been researching this over the last few days, applying SOAR's chunking to get reactive planning within BDI... cool stuff.

AiGameDev.com: Artificial Intelligence from Theory to Fun!

Edit: I will get this sig. right! Font size doesn't work within lists unless you duplicate the tag...

[edited by - alexjc on March 17, 2004 7:22:41 AM]

##### Share on other sites
BrianL    530
The Sense stage polling can operate at just as fine a level of granularity, and at times may be more efficient.

The sense stage can definitely be broken up into steps. For instance, there may be several independent sensors, each of which senses a single 'type' of event (enemy, good position to attack from, heartbeat, damage, etc.) and translates it into information on the AI's blackboard/working memory (see the work by the MIT Character Group:

http://characters.media.mit.edu/publications.html

specifically, 'A Layered Brain Architecture for Synthetic Creatures').

With a system like this, the sensors may be attached and detached as needed. Each sensor may have its own polling frequency, or may even be event-based. However it is implemented, I contend that a well-designed polling system will be just as easy to work with as a well-designed event-based system.
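As a sketch of what I mean (names and numbers invented, not from the C4 code): each sensor polls on its own period and writes whatever it noticed onto a shared blackboard.

```cpp
#include <map>
#include <string>
#include <utility>

// Shared working memory that decision-making code reads from.
struct Blackboard { std::map<std::string, int> facts; };

// A modular sensor: detachable, with its own polling schedule.
class Sensor {
public:
    Sensor(std::string key, int period) : key_(std::move(key)), period_(period) {}

    void tick(int frame, int worldValue, Blackboard& bb) {
        if (frame % period_ == 0)        // poll only on this sensor's schedule
            bb.facts[key_] = worldValue;
    }

private:
    std::string key_;
    int period_;
};
```

The planner never touches the world directly; it reads the blackboard, so swapping a polled sensor for an event-fed one changes nothing downstream.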

##### Share on other sites
alexjc    457
quote:
Original post by BrianL
I contend that a polling system, when well designed, will be just as easy to work with as a well-designed event-based system.

Polling and queries are efficient for chunks of data that are readily available in the world representation, or computed with trivial operations. Otherwise, it'll be more efficient to batch up the queries and dispatch the results as messages.

You could design a modular polling-based sensory system, but it wouldn't be as easy to manage. For example, you'd have to have each agent keep track of the state of the game so it knows when to hear and when to see (doing both all the time is inefficient). You get a level-of-detail AI paradigm for free with messages. As another example, how would you notify many AI soldiers about the sound of a player knocking on a door using sensing via polling?

If you've worked with messages before, you quickly realize their power... It's what makes Smalltalk such a flexible programming language. Sure, it takes a bit of skill to get a nice event-driven system in a procedural language like C++, but if you know what you're doing it pays off in elegance and efficiency. (Anyone can do polling.)

Also note that the C4 reference you gave relies heavily on message passing via "DataRecords."

I'm not going to preach about this any further. Polling isn't very efficient in software generally (see windowing toolkits like Qt), and AI is the same to me.

Alex

AiGameDev.com: Artificial Intelligence from Theory to Fun!

[edited by - alexjc on March 17, 2004 12:31:51 PM]

##### Share on other sites
Timkin    864
quote:
Original post by alexjc
Say an NPC is far outside of the range of the player, you just don't send it messages. If it's nearby, you just send it sound events. As it gets closer, you can enable more and more of the AI functionality incrementally by deciding what messages should be sent.

This is the one problem I have with messaging systems... you STILL need an oracle watching over the environment and deciding when events occur. This oracle is going to need to run at a fine resolution to ensure that short transient events are not missed (like bullets ricocheting off walls and producing sound events for nearby agents). Now, instead of having agents poll their local environment, we have the environment essentially polling agent states and checking them against environmental states. I don't see that the computational workload is improved.

From the perspective of system design, yes, I do understand the benefits of messaging systems... they do permit clean event-driven architectures that are 'relatively' easy to debug... however, they don't diminish the computational load... so they're just shifting design difficulties into object-event difficulties.

Personally, I think we need a whole new paradigm... something along the lines of an agent-centered message system, placing the onus on agents to notice events in the environment, rather than having the environment tell them what they notice. Such a system might be built as an interface between the environment and the agent's inference system, so it looks like sense(), but works by filtering scenes of the environment (here, I mean scenes to be things that can be sensed... sounds, images, smells, etc.) and passing messages to the inference engine based on the filter results. The resolution and accuracy of the filter determine whether certain things get noticed and passed into the inference engine. This places the onus of perception back on the agent, where it should belong (from an AI perspective at least).

Such a paradigm again separates sensory action from the decision-making architecture of the agent, but doesn't remove it completely, allowing different interfaces to be designed for different classes of agent.
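As a sketch of the interface I'm imagining (purely hypothetical names): the environment hands every scene to the agent's filter, and only what survives the filter reaches the inference engine as a message.

```cpp
#include <functional>
#include <string>
#include <utility>

// A "scene" is anything that can be sensed: a sound, an image, a smell...
struct Scene { std::string kind; double intensity; };

// Agent-centred perception filter: the agent owns the resolution and accuracy,
// so perception is the agent's responsibility, not the environment's.
class PerceptionFilter {
public:
    using Sink = std::function<void(const Scene&)>;

    PerceptionFilter(double threshold, Sink sink)
        : threshold_(threshold), sink_(std::move(sink)) {}

    // Looks like sense() from the outside, but filters on the agent's terms.
    void sense(const Scene& s) {
        if (s.intensity >= threshold_) sink_(s);   // noticed: pass it on
        // below threshold: the agent simply never perceives it
    }

private:
    double threshold_;
    Sink sink_;
};
```

Different agent classes would simply plug in different filters in front of the same inference engine.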

Anyway, that's just my looney idea for the day!

Cheers,

Timkin

##### Share on other sites
"Events" is too broad a term anyway. A bullet hitting a player is an "event". Detecting the hit is the function of the bullet, which should notify the player. It would be foolish for the player or any other NPC to poll all of the bullets on the screen to see if any of them happen to be hitting him.
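To oversimplify that in code (names invented): the bullet's own update detects the hit and notifies whatever it hit; nothing polls.

```cpp
// Anything that can take damage exposes a notification hook.
struct Damageable {
    int health = 100;
    void onHit(int dmg) { health -= dmg; }
};

// The bullet owns the collision: it notifies the target, rather than every
// NPC polling every bullet each frame.
struct Bullet {
    int damage = 25;
    // Called from the bullet's own movement update when it intersects a target.
    void hit(Damageable& target) { target.onHit(damage); }
};
```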

For that matter, "agent" and "object" are being loosely used here. The player, the NPC and the bullet are all objects. Which object should be polling the environment? Should only intelligent things "poll" the environment because only sentient things should notice stuff? We are back to the PC/NPC polling the bullets if that were the case.

Again, we are at the point of a definition of terms that may or may not give us concrete 1:1 results, e.g. TermA always uses MethodA. It is this sort of trying to "solve the world" through definitions and theory that hems us in a little. The bottom line is, each situation is going to have its challenges based on the needs of the design and the needs of the AI. Solve each of those challenges individually. If you happen to reuse tools such as the ones you guys are talking about, fine... do that. However, there are going to be situations where (to oversimplify my example) having the bullet poll the world is better, and others where polling for bullets would be favorable. None of that can be boiled down to an answer of "my way is better than yours".

That's largely why I didn't want to even bother with this discussion... you aren't solving anything. You aren't designing anything. You are playing with words and theories. Real heady stuff, but about as useful as trying to narrow down all of America's political issues to a two-party system. There are just too many situational gray areas.

When it comes to a real in-game problem, this thread could only be used as a banner stuck in the ground and forgotten. Rah-rah-yay-yay-whatever... Now let's get to work figuring out how to do THIS particular AI problem in THIS particular genre with THIS particular design pattern and architecture, with the needs of THIS particular game design.

Dave Mark - President and Lead Designer
Intrinsic Algorithm -
"Reducing the world to mathematical equations!"