
GDC = AI_Fest; while(GDC == AI_Fest) {CheerWildly();}



#21 alexjc   Members   -  Reputation: 450


Posted 03 March 2004 - 10:03 PM

Timkin, you're the moderator! Isn't it possible to fork the thread?


I'm still very confused about what you consider an S-P-A architecture. If it does those actions in order, then for all intents and purposes, it behaves as one monolithic component, no?


Anyway, I think it's best if we argue bottom-up. I'll write down what we agree upon and build from there.


#22 alexjc   Members   -  Reputation: 450


Posted 03 March 2004 - 10:45 PM

Timkin, things we both agree on (I think):


  1. Both deliberative and reactive techniques have their own domains that they are applicable to/suitable for.

  2. Neither a purely reactive nor a purely deliberative system can be considered the best approach for most non-trivial real-world problems.

  3. Hybrid systems have proven themselves the most useful over the past decade.


Next, points up for debate.

#23 Geta   Members   -  Reputation: 136


Posted 04 March 2004 - 10:46 AM

quote:
Original post by Timkin
For those going to the GDC roundtable... and this is particularly directed at Steve and/or Eric, I'd be interested in hearing the results of discussing this topic:

"Which is better for computer game agents: deliberative architectures or heirarchical, distributed architectures"?

The points to consider should be something like:

Ease of design, implementation, debugging and verification;
Availability and understanding of known methods/algorithms;
Can we simply transplant techniques from robotics into game agents;
What are people's experiences with either/both paradigms;
What has been successful in the past.




Noted, and it will be brought up in at least two of my roundtables: the general AI topics one and the FPS-specific one.

Eric


#24 Timkin   Members   -  Reputation: 864


Posted 04 March 2004 - 02:14 PM

quote:
Original post by alexjc
Timkin, you're the moderator! Isn't it possible to fork the thread?



No, I can close, open, move and delete threads or delete/edit posts, but there is no tool in the current forums to fork a thread. I'll check whether it's a feature in the new forums... if not, I'll pass on the suggestion!

quote:
Original post by alexjc
I'm still very confused about what you consider an S-P-A architecture. If it does those actions in order, then for all intents and purposes, it behaves as one monolithic component, no?



Okay, let me be really clear then, so hopefully there will be no confusion as to what I mean...

Sense-plan-act as a linear algorithm has been around for a long time (certainly since the days of SHAKEY). There are current researchers who still follow this linear paradigm, although typically speaking, planning and acting are often interleaved (and this was the standard approach to autonomous deliberative agents for many years). Nowadays, sense-plan-act is more a label given to autonomous deliberative agents than a paradigm, since most implementations have continuous sensing and either interleaved planning and acting, or run all three actions concurrently.

I think it's important to realise that the modern view of sense-plan-act has to reflect our changed notions of what these things mean. 'Sense' is really now a two-stage process: 'obtain observation' and 'revise beliefs'. 'Plan' just means to select a sequence (of length >= 1) of actions given some criterion. This can be online deliberative, or it can be a lookup from a policy that was precomputed before acting (policies definitely being one grey area where reactive and deliberative paradigms are blurred together somewhat). 'Act' is pretty obvious! If we're talking about s-p-a in terms of agents like SHAKEY the robot, sure, their implementations were monolithic. However, the paradigm now means something different to what it did 30 years ago and reflects the way in which we now go about the various aspects of the paradigm. So, perhaps we should change 'sense-plan-act' to 'sense/plan/act', meaning they don't necessarily have to happen one after the other.

It's on this basis - and looking at current research - that I say that 'sense-plan-act' (by which I mean the possibility of 'sense/plan/act') is not monolithic. I'm talking about the current state of the art, rather than comparing an implementation of a paradigm 30 years ago to current paradigms like Embodied Agents utilising hierarchical, distributed architectures of reactive components.
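
To make the 'sense/plan/act' reading concrete, here's a rough Python sketch (the toy one-dimensional domain and every name in it are my own invention, purely for illustration). Sensing is split into 'obtain observation' and 'revise beliefs', and only the first planned action is executed before the agent senses again:

def get_observation(world):
    # Hypothetical sensor: return a reading of the world state.
    return world['agent_pos']

def revise_beliefs(beliefs, observation):
    # Second stage of 'sense': fold the observation into the belief state.
    beliefs['pos'] = observation
    return beliefs

def plan(beliefs, goal):
    # 'Plan': select a sequence (of length >= 1) of actions under some
    # criterion; here, a trivial greedy step toward the goal.
    return [1 if goal > beliefs['pos'] else -1]

def act(world, action):
    # 'Act': apply the chosen action to the world.
    world['agent_pos'] += action

world, beliefs, goal = {'agent_pos': 0}, {}, 5
while world['agent_pos'] != goal:
    beliefs = revise_beliefs(beliefs, get_observation(world))  # sense
    actions = plan(beliefs, goal)                              # plan
    act(world, actions[0])  # act: only the first action is executed
                            # before sensing again (interleaved, not linear)

Nothing forces the three phases into a strict cycle; the same pieces could just as easily run concurrently.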

I hope that clears up my position. Feel free to disagree with me!
Timkin

#25 Timkin   Members   -  Reputation: 864


Posted 04 March 2004 - 02:18 PM

quote:
Original post by alexjc
Both deliberative and reactive techniques have their own domains that they are applicable to/suitable for.



I think that's a bit of a narrow view... I do believe that these domains overlap to some blurry extent, which is why hybrid systems tend to outperform single-paradigm systems over the broader domain union.

As for the other two points, yes I would agree with those.

Timkin

#26 Timkin   Members   -  Reputation: 864


Posted 04 March 2004 - 02:28 PM

quote:
Original post by Geta
Noted, and it will be brought up in at least two of my roundtables: the general AI topics one and the FPS-specific one.



Actually, I'd be interested to hear what the RT participants thought the domain of applicability was for deliberative and reactive agents. Do they think these domains differ? Is one paradigm more suited to FPS, while the other is more suited to RPG, for instance?

Timkin

#27 IADaveMark   Moderators   -  Reputation: 2396


Posted 04 March 2004 - 03:54 PM

quote:
Original post by Timkin
Actually, I'd be interested to hear what the RT participants thought the domain of applicability was for deliberative and reactive agents. Do they think these domains differ? Is one paradigm more suited to FPS, while the other is more suited to RPG, for instance?

Careful that you don't overestimate the RT attendees. Why do you think we want you guys there so badly?



Dave Mark - President and Lead Designer
Intrinsic Algorithm -
"Reducing the world to mathematical equations!"

#28 alexjc   Members   -  Reputation: 450


Posted 04 March 2004 - 11:58 PM

Timkin, OK. I see your perspective now. A SPA architecture is modular in the same way as a rule-based system is; you have components that may be designed and implemented separately (e.g. working memory, rule base, and the interpreter). While it's arguable whether that's monolithic or not, take away one component and the whole system collapses and produces nothing useful. I wouldn't class most sense-plan-act architectures as distributed (in terms of functionality) because the components are so tightly coupled. If you had two parallel planners, then you'd have traces of distributedness.


I too would be interested in hearing the roundtable participants' opinions. But I doubt I'd be surprised by the reply... The games I know of that use a generic planner (not for paths) I can count on one hand. I know I've missed some, but that's still not very many.


So, four more statements, Timkin:

  1. A reactive behavior based on sensory input is equivalent to a plan of length 1
  2. A planner generates non-deterministic actions based on sensory input only
  3. A planner is a deterministic mapping from current state S and sensory input I to the resulting action A
  4. A reactive technique can achieve the same mapping (S+I->A) as a planner using memory instead of computation


I realise that last point is a bit more controversial, but it's discussed at length in the Sutton and Barto book.
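
To illustrate point 4 with a toy sketch in Python (the domain and all names are invented for this post), both of the following realise the same S+I->A mapping; the planner pays in computation, the table pays in memory:

from itertools import product

STATES  = range(4)             # toy state space S
INPUTS  = ('enemy', 'clear')   # toy sensory inputs I
ACTIONS = ('attack', 'advance', 'hold')

def deliberate(state, sensed):
    # "Planner": computes the action on demand (computation, no storage).
    if sensed == 'enemy':
        return 'attack' if state > 1 else 'hold'
    return 'advance'

# "Reactive" equivalent: precompute the entire S+I->A mapping once and
# look it up at run time (memory instead of computation).
POLICY = {(s, i): deliberate(s, i) for s, i in product(STATES, INPUTS)}

assert all(POLICY[s, i] == deliberate(s, i)
           for s, i in product(STATES, INPUTS))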

Alex

[edited by - alexjc on March 5, 2004 7:39:31 AM]

#29 Geta   Members   -  Reputation: 136


Posted 05 March 2004 - 04:25 AM

quote:
Original post by alexjc
I too would be interested in hearing the roundtable participants' opinions. But I doubt I'd be surprised by the reply... The games I know of that use a generic planner (not for paths) I can count on one hand. I know I've missed some, but that's still not very many.



I would agree. Just to restate what I have said many times before: computer game AI (especially at the commercial level) is about the illusion of intelligence, not about the underlying processes that form a solid foundation for the methods involved. When every nanosecond and cycle is counted, getting the most for the least is often more important than getting the foundations right.

Anyway, you can count on me to be sure the attendees at my RTs get a chance to discuss these issues.

Eric


#30 Anonymous Poster   Guests   -  Reputation:


Posted 05 March 2004 - 05:00 AM

This year's GDC will be great for AI.
GDC 2004 will welcome, for the first time, two AI middleware products:
- RenderWare AI (Kynogon)
- AI-Implant (BioGraphic Technologies)
That is a sign.
William Tambellini


#31 BrianL   Members   -  Reputation: 530


Posted 05 March 2004 - 05:51 AM

Use of a planner is more about keeping the system simple and easy to modify than about what the player sees.

It is very possible (and reasonable!) to build an FSM or HFSM that accomplishes all of the behaviors spec'd out for a typical game AI. But the more behaviors start depending on each other, the more unmanageable this system becomes.

Let's say we had a few atomic states that implemented behaviors. These states don't know about other states at all. They only implement a behavior internally, and set a flag when they are done:

State_Attack // Handles using a weapon
State_Reload // Handles reloading an empty weapon
State_Run // Handles moving from one location to another
State_Draw // Handles equipping a weapon
State_Holster // Handles unequipping a weapon
State_Greet // Handles saying hello

Using these basic states, we have another set of states. These states are higher level; they control the flow of the low-level behavior states. Instead of worrying about the implementation of a behavior, they worry about when it is executed, and what dependencies there are. Let's call them goals to keep them separate:

Goal_Attack
    if NoWeaponInHand and HasWeaponHolstered
        EnterState(State_Draw)
    else if WeaponEmpty and HasAmmo
        EnterState(State_Reload)
    else
        EnterState(State_Attack)

Goal_Greet
    if WeaponInHand
        EnterState(State_Holster)
    else
        EnterState(State_Greet)

Now, the problems start coming up as more variations of the goals are introduced; we end up with large amounts of very similar code in different goals to handle control flow. Instead of 5 lines of it, it tends to be 100+ per goal. It also becomes challenging to use derivation, as exceptions in the flow control become very difficult to work with.

Using this sort of FSM-based system for behaviors and flow control also happens to be _very_ fast to execute; the only disadvantage is that modifications later on become more difficult.

If a planner is used, the goals simply set a desired end state to accomplish, and the planner generates that flow control. This can be quite a bit slower, but it makes major modifications simple.
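
As a rough sketch of what I mean (Python, with invented preconditions and effects for the states above; this is illustrative, not our actual system), a goal can just name the desired end state and let a search produce the draw/reload/attack ordering:

from collections import deque

# Each operator: (name, preconditions, add-effects, delete-effects).
OPERATORS = [
    ('State_Draw',   {'WeaponHolstered'},              {'WeaponInHand'},   {'WeaponHolstered'}),
    ('State_Reload', {'WeaponInHand', 'HasAmmo'},      {'WeaponLoaded'},   set()),
    ('State_Attack', {'WeaponInHand', 'WeaponLoaded'}, {'TargetAttacked'}, set()),
]

def plan(state, goal):
    # Breadth-first search from the current state to any state satisfying
    # the goal; returns the sequence of state names to enter.
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        s, steps = frontier.popleft()
        if goal <= s:
            return steps
        for name, pre, add, delete in OPERATORS:
            if pre <= s:
                s2 = frozenset((s - delete) | add)
                if s2 not in seen:
                    seen.add(s2)
                    frontier.append((s2, steps + [name]))
    return None

# The goal only states the desired end state; the flow control
# (draw, then reload, then attack) falls out of the search.
print(plan({'WeaponHolstered', 'HasAmmo'}, {'TargetAttacked'}))
# -> ['State_Draw', 'State_Reload', 'State_Attack']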

Overall, planners are wonderful tools, but that doesn't make them ideal. There are a massive number of games out there which probably don't need (or want) that level of complexity. I am very glad I am working with one, but the AI systems I work on are shared between two projects, get fairly massive, and are reused for multiple games; not exactly the norm.

#32 Timkin   Members   -  Reputation: 864


Posted 08 March 2004 - 12:26 PM

quote:
Original post by alexjc
If you had two parallel planners, then you'd have traces of distributedness.


Okay, I can see that our definitions of 'distributed' vary slightly... I'm not going to worry too much about that, since I think we both know what the other means now...

Alex, I understand that you know most of the stuff I'm about to say... this is more for the benefit of anyone reading along who doesn't know the terminology...

quote:

A reactive behavior based on sensory input is equivalent to a plan of length 1


Functionally, yes.
quote:

A planner generates non-deterministic actions based on sensory input only


No, a planner also requires a model of the domain in order to select the sequence of actions with highest value, according to some measure. I'm not sure what you mean by a 'non-deterministic action'? Do you mean that a planner returns a set of actions from which one can be actuated after a random choice? Or do you mean something else?

quote:

A planner is a deterministic mapping from current state S and sensory input I to the resulting action A


That's typically called a policy, or conditional plan, since for every state in the domain (which can include sensory states) there exists a function f such that a <- f(s), where a is an element of the set of possible actions, A.
quote:

A reactive technique can achieve the same mapping (S+I->A) as a planner using memory instead of computation


Yes, I would certainly agree with that [1]. Planners (other than policy generators) are typically designed to produce plans at run time, based on the latest information available. This is in part due to the fact that planners are often utilised in domains where the state space and associated information is too large to store in memory.

Reactive agents are typically expressions of conditional plans, even though there may not be an explicit list of state-action pairs. There may be a function of the form f() (from above) that, given the state, produces the action - known as the agent function.

Unless the reactive agent's function is globally optimal, its plans (the sequences of actions that achieve the goal given the starting state) are typically suboptimal, and no guarantees can be drawn as to the likelihood of a globally optimal sequence given a locally optimal choice.

quote:

I realise that last point is a bit more controversial, but it's discussed at length in the Sutton and Barto book.


An excellent book!

[1]: One must consider that most often, reactive agents implement sub-optimal policies, particularly if they implement an agent function. For example, consider an ANN controller (a very commonly implemented agent function). If the ANN is trained on only a subset of the state space, then it will almost certainly be a sub-optimal classifier for the entire state space. Attempts to learn an optimal agent function from limited state space knowledge are essentially fruitless. It has been shown that the optimal function is theoretically possible, but computationally infeasible (I believe Marcus Hutter has published several papers on the issue of optimal agents for limited and infinite state space horizons).


Mmm, I think I went off on a bit of a tangent there!

Cheers,

Timkin

#33 alexjc   Members   -  Reputation: 450


Posted 13 March 2004 - 12:14 AM

quote:

No, a planner also requires a model of the domain in order to select the sequence of actions with highest value, according to some measure.



I'm considering the planner as a black box, as if it were a reactive technique but with a state, goal and world model built in sneakily.

This was part of my first points: both reactive and deliberative techniques produce sequences of actions, regardless of how they are implemented. The rest of my argument assumes this, so I hope you're comfortable with it.


quote:

I'm not sure what you mean by a 'non-deterministic action'? Do you mean that a planner returns a set of actions from which one can be actuated after a random choice?



My definition of non-deterministic isn't too exotic! I mean that you can't expect the same action if you feed your planner the same input values at two different points in time, specifically because it has an internal state.



You seem very keen to point out the suboptimality of reactive techniques. There's no theoretical reason for it; usually, they are suboptimal by design. And this is the major reason why they are so well suited to games. The designer doesn't necessarily have an optimal plan in mind anyway, so all he has to do is adjust an approximate behavior until it is acceptable. That's guaranteed to provide the best ratio of results to processing power, and it explains why they are used so often in games.

A second point is that sense-plan-act techniques are active processes, which is why they do well at goal-directed tasks. However, this also means that they are not particularly efficient as data-driven architectures (which are passive instead), where stimuli from the environment are events that must be taken into account. You can do this very efficiently with reactive architectures, using (so-called) "asynchronous" message handlers.


This gets back to my original point. If you were going to design a complex AI in a game, using a sense-plan-act model up front is shooting yourself in the foot. You'd set up a reactive architecture at first, which can deal with the variety of stimuli and events from the environment efficiently. Then, if a reactive AI technique takes too much memory or effort to create, you substitute that component with a planner instead (e.g. A*).

Alex



AiGameDev.com

#34 Timkin   Members   -  Reputation: 864


Posted 15 March 2004 - 02:29 AM

quote:
Original post by alexjc
The rest of my argument assumes this, so I hope you're comfortable with it.



I'm happy to think of planning in this way...

quote:

My definition of non-deterministic isn't too exotic! I mean that you can't expect the same action if you feed your planner the same input values at two different points in time, specifically because it has an internal state.



Okay, but presumably, if the internal states were the same and the stimulus were the same, the outputs should be the same for a deterministic plan generator and not necessarily the same (i.e., only the same in a statistical sense) for a non-deterministic plan generator. Is this what you meant?


quote:

You seem very keen to point out the suboptimality of reactive techniques. There's no theoretical reason for it



Actually, there is. I'll have to dig out some references for you, but it's an accepted (in the planning community) and mathematically provable fact that local optimality in no way guarantees global optimality. What this means is that no local selection of action based on local stimulus can guarantee the satisfaction of a global goal (i.e., something that presumably requires a sequence of actions to achieve). It should be intuitive to you that this is the case. Local information cannot tell you about future states UNLESS you have a globally optimal model of the domain from which you derive an optimal state-action mapping (in which case, that derivation is a deliberative action of generating a policy).

Off the top of my head, Agre & Chapman (circa 1987, I think) might be a place to start. I think the original proofs might have come from dynamic programming (value and policy iteration), but my memory is a little fuzzy in that respect. I'll follow this up during the week and let you know.

quote:

However, this also means that they are not particularly efficient as data-driven architectures (which are passive instead), where stimuli from the environment are events that must be taken into account.



That is only true insofar as your 'canned plans' (or the plan steps available to the planner) are able to handle such events. The first step of sense-plan-act involves perception of events in the environment and the effects these have on the agent's state. I think you're still thinking of such systems in terms of older implementations that would sit and deliberate about whether they can still enact the current action of their plan given their currently perceived state. Goal-directed action does not mean that an agent blindly follows its plan until the plan breaks (at which time it would replan). Indeed, within the planning community today, goal-directed action means continuously re-evaluating the value of a plan given a current state and a goal state and making alterations accordingly. A good example of this is schedule debugging, wherein the scheduler must adjust its schedules dynamically based on events within its domain. These events may have been caused indirectly by the agent, or may have originated from causes within the environment.
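
As a rough sketch of that continuous re-evaluation (Python; the (preconditions, effects) plan-step representation is invented for illustration), the agent can check whether its remaining plan still holds whenever an event revises its state, and repair only when it doesn't:

def simulate(state, step):
    # Hypothetical plan-step format: (preconditions, effects), both sets.
    pre, effects = step
    if not pre <= state:
        return None          # a precondition no longer holds
    return state | effects

def plan_still_valid(plan, state):
    # Re-evaluate the remaining plan against the current state by
    # simulating each step in turn.
    for step in plan:
        state = simulate(state, step)
        if state is None:
            return False
    return True

# Toy check: an external event may invalidate a step the plan relied on.
open_door = ({'AtDoor'}, {'DoorOpen'})
walk_through = ({'DoorOpen'}, {'InRoom'})
print(plan_still_valid([open_door, walk_through], {'AtDoor'}))  # True
print(plan_still_valid([walk_through], {'AtDoor'}))             # False -> repair/replan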

quote:

You can do this very efficiently with reactive architectures, using (so-called) "asynchronous" message handlers.



I'm not disagreeing with (or debating) what you can and cannot do with reactive architectures. As I have said previously, they have their place... as do deliberative systems. Obviously context and application are the important factors in choosing one technique over the other.

quote:

Then, if a reactive AI technique takes too much memory or effort to create, you substitute that component with a planner instead (e.g. A*).



...and yet we see so many commercial games where this is not the case... A* is used for pathfinding far more often than reactive pathfinders are!

The problem that is faced when trying to implement a reactive system is that...

quote:

You'd set up a reactive architecture at first, which can deal with the variety of stimuli and events from the environment efficiently.



...is by no means a trivial task. Indeed, it is often impossible to determine how the individual components of such a system will react together in all but a small subspace of the full state space. At the same time, it is VERY difficult to partition the state space into a set of orthogonal dimensions such that each reactive component deals with a single state (or subset of states) and thus does not affect any other state when making action recommendations. Not being able to do this means that actions have nonlinear effects on state transitions that are not easily identified without a LOT of training instances.

I would be happy for you to prove me wrong, since it would mean that reactive agents could deal with complex real-world domains. Unfortunately, the evidence of the current state of the art in reactive systems is that they only work well in contrived, simplified environments (I'm not trying to say that deliberative systems are necessarily any different, although there are very good examples of deliberative agents working successfully in the real world). What this means is that, for the time being at least, our games need to be contrived and simplified if we're to use reactive agents in them with any sort of guarantees as to the behavioural characteristics of these agents.

Cheers,

Timkin

#35 alexjc   Members   -  Reputation: 450


Posted 15 March 2004 - 04:33 AM

To clarify, there are two major points of contention.
1) How to design an AI architecture (sense-plan-act vs. distributed reactive components)
2) Technique selection (deliberative vs. reflexive approaches)


quote:
Original post by Timkin
Okay, but presumably, if the internal states were the same and the stimulus were the same, the outputs should be the same for a deterministic plan generator and not necessarily the same (i.e., only the same in a statistical sense) for a non-deterministic plan generator. Is this what you meant?



If your planner is non-deterministic in terms of inputs+state, then you've forgotten to model something! There's a hidden variable that's making it non-deterministic. (Note: stochastic is a different issue, but you can consider the RNG as part of the "state" too.)

OK, so just remember: a planner is a deterministic mapping from state+inputs to output.


quote:

but it's an accepted (in the planning community) and mathematically provable fact that local optimality in no way guarantees global optimality



Aha, but who said local? With reactive techniques, you can do A) input-to-output mapping -- which is an approximation, granted -- or B) input+state-to-output mapping. In this second case (reactive planning, I believe it's called), there are no theoretical reasons for suboptimality. In fact, SOAR is based on such ideas.

Again, this is important. Understand that a reactive technique mapping state+inputs to outputs will just use more memory to get to the same result as a planner, which will use computation instead. (Do read Sutton & Barto.)


quote:

That is only true insofar as your ''canned plans'' (or the plan steps available to the planner) are able to handle such events.
...



Hmmm... it's not about being able to handle the problem in theory. I think by now we've agreed it's fundamentally the same solution regardless of the technique.

While I acknowledge that experts in probabilistic planning can do a good job of making their stuff efficient in theory, I'm talking about implementation: how you get your data into the system and, basically, get the highest throughput.

Again, for this purpose a reactive architecture is more efficient since it is passive, and reacts to stimuli when the game engine decides it's convenient. The sense-plan-act approach actively acquires data, so it's actively gathering stuff instead (you call it the "perception" phase). Once you've decided to use S-P-A, you're stuck with it. But if you write your system to be able to deal with incoming data passively using handlers (e.g. OnPlayerAppear, OnWeaponFire), it'll be more efficient, and then you can always include a nested S-P-A architecture if you so desire...

It's for the same reasons that GUI toolkits are based on the signal/slot pattern: every widget gets notified in a timely fashion.
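
Something like this rough Python sketch (the dispatcher is invented; the handler names echo the ones above). The bot never polls the environment; the engine pushes stimuli in when it's convenient:

class ReactiveBot:
    # Stimuli are pushed in when the engine finds it convenient;
    # the bot never polls ("pulls") the environment itself.
    def on_player_appear(self, player):
        self.target = player

    def on_weapon_fire(self, position):
        self.last_heard = position

class EventBus:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def publish(self, event, *args):
        # The engine raises events; every subscribed handler is notified.
        for handler in self.handlers.get(event, []):
            handler(*args)

bot, bus = ReactiveBot(), EventBus()
bus.subscribe('PlayerAppear', bot.on_player_appear)
bus.subscribe('WeaponFire', bot.on_weapon_fire)
bus.publish('PlayerAppear', 'player_1')   # -> bot.target == 'player_1'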


quote:

...and yet we see so many commercial games where this is not the case... A* is used for pathfinding far more often than reactive pathfinders are!



First, I'm not sure about your assertion that steering behaviors are less popular than A*. Second, you'll find that game developers often use path-lookup tables, which are essentially reactive approximations of the search. And third, it's just a planner component; it says nothing about using sense-plan-act as a design paradigm for the architecture.

Actually, I'm glad you brought this up, because it's a perfect example of how reactive approaches are better in games. First, if you can apply steering behaviors, you get almost constant overhead instead of your A* search. Second, if you use lookup tables (reactive planning), you also get more efficiency by avoiding a search. Only after these two options fail do you consider a planner.
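
As a toy sketch of the lookup-table idea (Python; the waypoint graph and names are invented), you pay for a Floyd-Warshall-style precomputation once, offline, and every run-time query becomes a chain of constant-time lookups instead of a search:

# Toy waypoint graph: node -> neighbours.
GRAPH = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}

def build_next_hop(graph):
    # Floyd-Warshall-style precomputation of the first step on a
    # shortest path between every pair of nodes.
    nodes = list(graph)
    dist = {(u, v): 0 if u == v else (1 if v in graph[u] else float('inf'))
            for u in nodes for v in nodes}
    nxt = {(u, v): v for u in nodes for v in graph[u]}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i, k] + dist[k, j] < dist[i, j]:
                    dist[i, j] = dist[i, k] + dist[k, j]
                    nxt[i, j] = nxt[i, k]
    return nxt

NEXT = build_next_hop(GRAPH)   # memory paid once, offline
print(NEXT['A', 'D'])          # 'B' -- a constant-time "reactive" answer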


quote:

...is by no means a trivial task. Indeed, it is often impossible to determine how the individual components of such a system will react together in all but a small subspace of the full state space.
[...]
I would be happy for you to prove me wrong, since it would mean that reactive agents could deal with complex real world domains.


You're considering this from an AI point of view. It's not AI, it's software design; splitting problems into manageable sizes is what developers have been doing for decades. We're not trying to create an AI agent that works well on huge unidentified problems. We're writing AI bots for relatively small, well-identified tasks within a game simulation. And that is a task that's much easier, and one perfectly suited to being split up into modular components. Most game AI systems do this: navigation, decision making...




AiGameDev.com

#36 Timkin   Members   -  Reputation: 864


Posted 16 March 2004 - 01:20 AM

I'll tackle the rest of your post tomorrow (it's after midnight now)... but I just wanted to point out two things before I hit the sack...

quote:

Aha, but who said local? With reactive techniques, you can do A) input-to-output mapping -- which is an approximation, granted -- or B) input+state-to-output mapping.


At least within the planning community, reactive planning is assumed to utilise only local knowledge. If you utilise global knowledge, then you're doing deliberative planning. Perhaps it's another terminology issue in this discussion...

quote:

The sense-plan-act approach actively acquires data, so it's actively gathering stuff instead (you call it the "perception" phase). Once you've decided to use S-P-A, you're stuck with it. But if you write your system to be able to deal with incoming data passively using handlers (e.g. OnPlayerAppear, OnWeaponFire), it'll be more efficient, and then you can always include a nested S-P-A architecture if you so desire...



You're thinking of S-P-A only in terms of finite-interval iterative implementations. Certainly, early work in filtering and dynamic belief networks (used for modelling and inference in dynamic processes and dynamic decision problems) utilised finite time-step updates of the system; however, this is not the case today. S-P-A implementations don't require continuous iterative updates and can easily be written to accept data infrequently, based on environmental events (stimuli).

Tomorrow I'll take another read of your post and get back to you...

(btw, is anyone else reading along with this discussion? Does anyone else have any input?)

Cheers,

Timkin

[edited by - Timkin on March 16, 2004 8:24:02 AM]

#37 Geta   Members   -  Reputation: 136


Posted 16 March 2004 - 02:01 AM

I am reading it while compiling. But have no time to comment. Got to get the AI in the game done.

Eric


#38 IADaveMark   Moderators   -  Reputation: 2396


Posted 16 March 2004 - 02:19 AM

Seems more like an extended definition of terms than a discussion of how to do something.

Dave Mark - President and Lead Designer
Intrinsic Algorithm -
"Reducing the world to mathematical equations!"

#39 BrianL   Members   -  Reputation: 530


Posted 16 March 2004 - 04:17 AM

quote:

Actually, I'm glad you brought this up, because it's a perfect example of how reactive approaches are better in games. First, if you can apply steering behaviors, you get almost constant overhead instead of your A* search. Second, if you use lookup tables (reactive planning), you also get more efficiency by avoiding a search. Only after these two options fail do you consider a planner.



1) Steering does not guarantee that the agent will either get to a destination or will fail. In fact, there is no such thing as a destination, as even the idea of a destination (in my opinion) suggests an S-P-A system.

2) How are lookup tables any different from A*? In both, the agent is making the decision 'I want to get to location X'. The planner simply generates the answer at run time, while the lookup table is precomputed. This is just an implementation detail.

How would you classify a system that operated like this, updated either every frame or every tenth of a second?


OnUpdateBehavior()
{
    UpdateSensors()
    ValidBehaviorListBasedOnInternalState = GenBehaviors()
    sortByUtility(ValidBehaviorListBasedOnInternalState)
    for behavior in ValidBehaviorListBasedOnInternalState:
        if Utility(behavior) < Utility(CurrentBehavior):
            # Failed to find a better behavior; continue the current one
            break

        if (CanExecute(behavior)):
            # Found a superior behavior; execute it
            CurrentBehavior = behavior

    CurrentBehavior.Update()
}


In my mind, this would be a deliberative system, because it is caching information in the UpdateSensors() call which may be used this frame or any later frame.

A reactive version would map a stimulus more directly to a behavior:


OnUpdateBehavior()
{
    stimulus = GetBestStimulus()
    CurrentBehavior = BehaviorMap[stimulus]
}


Yes, I am simplifying and am biased, but I am curious how your definition would be different.

- Updated to fix tags

[edited by - BrianL on March 16, 2004 11:18:17 AM]

#40 alexjc   Members   -  Reputation: 450


Posted 16 March 2004 - 04:35 AM

quote:
Original post by BrianL
In fact, there is no such thing as a destination,



The seek steering behaviour uses destinations, and combined with wall-hugging behaviors it reaches the destination in a great majority of layouts.


quote:

as even the idea of a destination (in my opinion) suggests an S-P-A system



Not at all. Just because you use some internal knowledge doesn't make it an S-P-A architecture. Also note that using an internal state means the approach is no longer purely reactive as a whole (though individual components may be).


quote:

2) How are lookup tables any different than A*?



It's more than an implementation detail; it's fundamental theory. In one case, you have a decision available immediately (reactive); in the other case, you have to search for a decision (deliberative).


quote:

How would you classify a system that operated like this, updated either every frame or every tenth of a second?



The fact that you're "pulling" the data from the environment with UpdateSensors shows traces of the "S" in S-P-A, but it really depends on what your behaviors are actually doing. It could be some sort of subsumption architecture...



(Edit: trying to get the sig right. Where's the list of legal tags!!)


AiGameDev.com

[edited by - alexjc on March 16, 2004 11:37:44 AM]



