FSM with fuzzy controller?

I had this idea for how an FSM might be traversed using a fuzzy system rather than discrete logic. The idea is that each state consists of two functions: an update function and a scoring function. After each update, the data of the agent being controlled is fed into every state's scoring function, and the state with the highest score becomes the next state in the sequence. This would be the top-level behaviour in a subsumption-like architecture.
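
In rough C++ (just a sketch with made-up names, not code from an actual project), the idea looks something like this:

[code]
// Sketch of the idea: each state has an update function and a scoring
// function. After the current state's update, every state's scoring
// function looks at the agent's data and the highest scorer becomes
// the current state (which may be the same one, i.e. no transition).

#include <cstddef>
#include <vector>

struct AgentData {            // placeholder blackboard for the agent
    float health;
    float ammo;
    float distanceToTarget;
};

struct State {
    virtual ~State() {}
    virtual void  update(AgentData& agent) = 0;       // do the state's work
    virtual float score(const AgentData& agent) = 0;  // how appropriate is this state now?
};

class ScoredStateMachine {
public:
    explicit ScoredStateMachine(const std::vector<State*>& states)
        : m_states(states), m_current(states.empty() ? 0 : states[0]) {}

    void tick(AgentData& agent) {
        if (!m_current) return;
        m_current->update(agent);

        // Every state scores the agent's data; the best one takes over.
        State* best = m_current;
        float bestScore = m_current->score(agent);
        for (std::size_t i = 0; i < m_states.size(); ++i) {
            float s = m_states[i]->score(agent);
            if (s > bestScore) { bestScore = s; best = m_states[i]; }
        }
        m_current = best;
    }

private:
    std::vector<State*> m_states;
    State*              m_current;
};
[/code]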

Has this been done before? If so, what is it called? I looked into fuzzy state machines, but they seem to be something slightly different - what I'm talking about is using a fuzzy controller to traverse a finite state machine.

FSMs control a sequence with modal decisions for (one or more) transitions to subsequent states (including resuming with no transition).

Fuzzy logic could be used to calculate/decide whether the requirements for a transition are met (or which transition is best), but the transition itself is still discrete.
The decision is based on the locality of the current state (testing the set of transitions for that context).

What you describe might just be competing transitions within the parent state, and yes, fuzzy logic might be useful for more flexible decision calculation.
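
For example (a sketch, with membership functions and thresholds invented purely for illustration), the truth of a transition's conditions can be graded while the transition itself stays a discrete switch:

[code]
#include <algorithm>

// Ramp from 0 to 1 as x goes from a to b (a simple fuzzy membership function).
static float rampUp(float x, float a, float b) {
    if (x <= a) return 0.0f;
    if (x >= b) return 1.0f;
    return (x - a) / (b - a);
}

struct Context { float health; float enemyDistance; };

// Degree to which "health is low AND enemy is near" holds; fuzzy AND as min().
static float fleeDesire(const Context& c) {
    float healthLow = 1.0f - rampUp(c.health, 10.0f, 50.0f);
    float enemyNear = 1.0f - rampUp(c.enemyDistance, 5.0f, 30.0f);
    return std::min(healthLow, enemyNear);
}

// The transition itself remains discrete: compare the desires and either
// switch state or stay, e.g.
//   if (fleeDesire(ctx) > currentStateDesire(ctx)) switchTo(FLEE);
[/code]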

One of the console games I worked on in the past had its combat AI decide between attacks, movement, etc. using pretty much exactly what you're describing. It worked horribly. There were too many tuning variables in the state machine that the fuzzy selection code used to make decisions; it was nearly impossible to get it to actually be effective (we didn't have any kind of automated training for the tuning variables - they were all tweaked by the design team, who were not AI-savvy).

It also used a HUGE chunk of our CPU budget (this was back in PS2 and PSP days), so we had to limit the state-switch frequency down to where it was ALSO very unresponsive. Most of the CPU time was spent doing massive amounts of line-of-sight checks. Some was spent doing pathfinding. Quite a bit was spent evaluating the ridiculous amounts of tuning variables to see if a state should be continued/entered.

For the boss AI I worked on, I avoided using the fuzzy controller, tossed out as many tuning variables as I could, hardcoded a lot of behavior directly in C++, and got a much more satisfying boss fight out of it.

We didn't use a special name for the fuzzy system. It was just called the Enemy State Machine.

This kind of stuff is done all the time. Recently, the term seems to be a "utility-based system" -- which I believe I am unwittingly responsible for via my book. One obvious example is the implementation in the Sims games, where a variety of "scores" are combined to create a single decision. I use that example because people can actually see it happening through the design of the game and UI. (e.g. my hunger bar is going up, so finding something to eat is more of a priority.) Strategy games are another common example; the Civ games display this sort of process to an extent in the UI. You can see the plusses and minuses that the other leader has for you, and it explains why he will or won't ally with you.

Of course, there are plenty of other games in a wide variety of genres that do this. The key thing that I want to point out in your request, however, is that you need to conceptually separate "state" from "reasoner". A state is analogous to a behavior. A reasoner is a thought process that selects a behavior. In an FSM, the reasoners are in each state. In other methods (a behavior tree, for example), the reasoner is its own entity that selects a state based on its decisions.
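
As a bare-bones sketch of that separation (hypothetical interfaces, not from any real engine):

[code]
#include <cstddef>
#include <vector>

struct WorldView { float hunger; float threat; };   // whatever the agent perceives (made up)

// A state/behavior only knows how to run itself...
struct Behavior {
    virtual ~Behavior() {}
    virtual void run(WorldView& world) = 0;
};

// ...while the reasoner is a separate entity that picks which behavior runs.
struct Reasoner {
    virtual ~Reasoner() {}
    virtual std::size_t choose(const WorldView& world,
                               const std::vector<Behavior*>& options) = 0;
};
[/code]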

Start with that line of thinking.

A bit of context might be good here.

What I want to do is implement a few different AI agents, for different types of enemy, in a vehicle combat game. I wanted to try something different to an FSM, because in the past I've had a lot of hassle programming and debugging FSMs. I want each agent to use a subsumption architecture: tasks which have a series of logical steps will have a traditional FSM, and the top layer would be a fuzzy controller which selects, based on what is happening, what the priorities are, from a list which is different for each entity.

Scanner bots need to spread out over an area, doing a more concentrated scan (i.e. more bots in that area) if a particularly suspicious event occurs; events are things like tire tracks appearing, "enemy" bullets appearing, and things blowing up. They also need to return to base occasionally to recharge.

Enemy tanks work in patrols; each patrol has a "boss" vehicle, and that vehicle can work from a traditional FSM because its main job is to follow a predetermined patrol path and occasionally order an en-masse attack if the player gets spotted.

Other vehicles in the patrol need to be a bit more subtle. They need to keep close to a formation relative to the boss vehicle, dodge projectiles, and attack the player.

Finally, if I get time, I want to add helicopters. These things work in bombing runs, responding to any alarmed scanners, and providing support to any patrol which is under attack.

I wanted the overall effect of all this to be that the world feels "alive" and the player must learn how each enemy behaves in order to defeat the level. FSMs feel clunky, and do stupid things like getting stuck in particular states, or having to go through several redundant states before they reach their desired state.

[quote]
Sounds like a utility-based architecture would indeed suit you well, as Dave already mentioned.

Yep. It's all about defining the behavior through mathematics. I bet if you look around... look down really, you can find a book on the subject.
[/quote]

Yeah... thanks, but I'm not going to do that. I don't need to define all the behavior through mathematics. I encountered formalism in university and I found it incredibly time-consuming and unproductive to boil everything down to equations, especially for non-deterministic systems or systems with emergent behavior.

What I want is an architecture, not a notation, which should have been clear from my original post.

Utility is a meta-architecture. You use utility as a control factor to decide how to progress through a more traditional set of discrete states: instead of explicitly modeling transitions in a directed graph (a la FSMs), you model transitions as implicit changes in the currently highest-scoring state. (Note that highest score isn't the same as highest utility factor; you may choose less immediately useful options as a means to introduce "flavor" to your AI, or as part of larger-scale planning operations - e.g. what is most opportune now might be best deferred by choosing something slightly less opportune that leads to a chain of better outcomes.)
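
As a sketch of that parenthetical (the names and the weighting rule here are just one illustrative choice), the selection step doesn't have to be a strict take-the-highest-score:

[code]
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Scored { int option; float score; };

// Pick an option with probability proportional to its score, rather than
// always taking the single highest scorer; this adds "flavor" while still
// favoring high-utility choices. (Caller is assumed to have seeded rand.)
static int pickWeighted(const std::vector<Scored>& candidates) {
    float total = 0.0f;
    for (std::size_t i = 0; i < candidates.size(); ++i)
        total += candidates[i].score;

    float r = total * (static_cast<float>(std::rand()) / RAND_MAX);
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        r -= candidates[i].score;
        if (r <= 0.0f) return candidates[i].option;
    }
    return candidates.empty() ? -1 : candidates.back().option;
}
[/code]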

Nobody's suggesting that you have to write your system down as a bunch of greek letters and weird symbols...

[quote]
Yeah... thanks, but I'm not going to do that. I don't need to define all the behavior through mathematics. I encountered formalism in university and I found it incredibly time-consuming and unproductive to boil everything down to equations, especially for non-deterministic systems or systems with emergent behavior.

What I want is an architecture, not a notation, which should have been clear from my original post.
[/quote]

Uh... wow. You start off asking about a "fuzzy controller" and yet you don't want to use math? So if your FSM transition logic said, "if health < 10..." you would eschew it because it contains math? How the hell else are you going to define behavior logic other than bland if/then statements on boolean criteria?

If I offered you three prizes (A worth $10, B worth $7, and C worth $4) and asked you to pick the most valuable prize, then all other things being equal you would pick A. That is math. You have just defined your own behavior through math... namely the utility that each prize represented in monetary value. In a nutshell, that's all a utility system is. In this case, it would be executed simply by picking the greatest value... hardly rocket science.

Methinks you should read the book before dismissing the single most important concept in decision theory. There is surprisingly little complicated math.


Reading back through your posts, I saw this:

[quote]
... and the top layer would be a fuzzy controller which selects, based on what is happening, what the priorities are, from a list which is different for each entity.
[/quote]

That, sir, is a utility system. Read my book.

In that case, I'm not entirely sure what you mean by "defining the behavior through mathematics", because to me this sounds like I have to use some kind of math notation, such as ITL, to define the behavior of the entire system. I've done this in the past for highly deterministic systems, and it worked OK there for the most part, but only if I made a lot of assumptions about the input data.

For my degree I learned a language called anatempura, which is an executable form of ITL. It was fun, but not productive - what you said set my mind back to when I was told that one could define the entire system using mathematical notation, and test the notation using tempura. Needless to say, I wasn't very impressed by the results. For the most part, it seemed to be just academic masturbation with little practical application. Your signature also caused me to think you were saying the same thing - "reducing the world to mathematical equations".

OK, so here's what I plan to do: my FSM will be controlled by a controller which has one function for each task, the output being a single numerical score. I'll average the scores over a number of frames to avoid sudden spikes, and I'll switch to the task with the highest score on the list. To simplify things, I won't try to execute more than one task at a time, or else the tasks would fight over the state of the entity.
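
In rough code (a made-up structure, and using an exponential moving average as one way of doing the smoothing):

[code]
#include <cstddef>
#include <vector>

struct EntityData { float health; float threat; float fuel; };  // placeholder data

struct Task {
    virtual ~Task() {}
    virtual void  execute(EntityData& e) = 0;
    virtual float rawScore(const EntityData& e) = 0;
};

class TaskController {
public:
    TaskController(const std::vector<Task*>& tasks, float smoothing)
        : m_tasks(tasks), m_smoothed(tasks.size(), 0.0f),
          m_smoothing(smoothing), m_current(0) {}

    void tick(EntityData& e) {
        // Smooth each task's score over time so one noisy frame doesn't
        // cause a sudden task switch.
        std::size_t best = m_current;
        for (std::size_t i = 0; i < m_tasks.size(); ++i) {
            float raw = m_tasks[i]->rawScore(e);
            m_smoothed[i] += m_smoothing * (raw - m_smoothed[i]);
            if (m_smoothed[i] > m_smoothed[best]) best = i;
        }
        m_current = best;

        // Only one task runs per frame, so tasks never fight over the entity.
        if (!m_tasks.empty()) m_tasks[m_current]->execute(e);
    }

private:
    std::vector<Task*>  m_tasks;
    std::vector<float>  m_smoothed;
    float               m_smoothing;   // e.g. 0.1f = slow to react, 0.9f = twitchy
    std::size_t         m_current;
};
[/code]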

Does this make sense, or do I need to buy a book to find out?

Sounds like a pretty vanilla utility architecture.

Where the mathematics comes into play is in generating the scores (typically normalized to [0,1] or [0,100] or whatnot) for various actions. You do need to do some equation modeling to come up with good metrics for this, but it's a far cry from "express everything in greek letters" style formalism. Really it just boils down to questions like "how do I generate a score value to decide when to run away from an overwhelming enemy force?" and stuff along those lines. Having some good mathematics experience is very helpful here; for instance, you might want to use different types of interpolation (linear, piecewise linear, cubic, logistic, etc.) to get the score to ramp up in urgency when certain factors come into play, and so on. One good example is "run away from 10 guys with machine guns or 2 guys with rocket launchers" - you need to do some math to blend these numbers onto a normalized output.
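
For instance (every number here is invented), a couple of different curve shapes and a blend of those two threats onto a normalized output might look like:

[code]
#include <algorithm>
#include <cmath>

// Clamp helper for normalizing onto [0,1].
static float clamp01(float x) { return std::max(0.0f, std::min(1.0f, x)); }

// Linear response: fine for gentle ramps.
static float linearResponse(float x, float maxX) { return clamp01(x / maxX); }

// Logistic response: stays low, then ramps up sharply in urgency around the midpoint.
static float logisticResponse(float x, float midpoint, float steepness) {
    return 1.0f / (1.0f + std::exp(-steepness * (x - midpoint)));
}

// "Should I run away?" -- blend two threat sources onto one normalized score.
// The weights (0.1 per machine gunner, 0.45 per rocket guy) are invented for
// illustration; tuning them is exactly where the modeling work goes.
static float fleeScore(int machineGunners, int rocketGuys) {
    float mgThreat = linearResponse(machineGunners * 0.1f, 1.0f);
    float rlThreat = logisticResponse(rocketGuys * 0.45f, 0.5f, 10.0f);
    return clamp01(std::max(mgThreat, rlThreat));
}
[/code]

With those made-up numbers, ten machine gunners and two rocket guys both push the flee score to roughly 1.0, but they get there along very different curves.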

That's really all we're saying here.


For the record - I haven't personally read Dave's book, but if you have any uncertainties about tricks or techniques for generating/tuning the equations involved for modeling your scores, it's a fantastic starting point.

"They need to keep close to a formation relative to the boss vehicle, dodge projectiles, and attack the player."

These are things that happen after decisions are made -- flocking behavior and pathfinding, influence mapping to affect pathfinding
(network nodes adjusted for how risky areas are when within range/boresights of enemies, or close to supporting units).


Some of these methods make micro-decisions (pathfinding decides the next step of the best path), but the high-level decisions are about which targets and
which overall action/tactic (attack/retreat/dig in/scout/delay, etc.) the units or groups of units are to attempt.


Long ago I looked at planners (hierarchical solutions for deciding between options - strategies/tactics) and at how to unify the metrics for judging situational factors
(how to calculate the priorities of solutions->actions, and how to estimate them, especially with a lot of unknowns): cost and risk versus reward, and how the
different factors changed these. Much of it was just observation within the game mechanics and adjusting the decision calculations - there are AI factoring techniques
to do this while playtesting scenarios (LOTS of scenarios).

Many decision factors go through curve functions that escalate importance before being summed into the single value that is compared between competing solutions.
(The result may also be a vector of a few summary values that are further adjusted by overriding functions: a damaged unit might be 'cautious' and thus exaggerate
the negative aspect of risk, an overall 'do or die' order might minimize it, and likewise the payoff of attaining a target might be minimized in importance if there
is little value in what is gained.)
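
Something along these lines (weights, exponents, and names all invented for illustration):

[code]
// Invented example: several factors pass through escalation curves, get
// summed into one comparable value, then an overriding modifier (here a
// 'cautious' damaged unit) exaggerates the risk term.

#include <cmath>

struct SituationFactors { float risk; float cost; float reward; };

static float escalate(float x, float exponent) {
    // Simple power curve: small values stay small, large values escalate.
    return std::pow(x, exponent);
}

static float solutionValue(const SituationFactors& f, bool unitIsCautious) {
    float riskWeight = unitIsCautious ? 2.0f : 1.0f;   // exaggerate risk when damaged
    float risk   = escalate(f.risk,   2.0f) * riskWeight;
    float cost   = escalate(f.cost,   1.5f);
    float reward = escalate(f.reward, 1.0f);
    return reward - risk - cost;   // compared between competing solutions
}
[/code]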

It gets even better when you start projecting possible actions of the enemy in the near future, and not necessarily taking the optimal
action for the current situation but often the position with the highest utility for the future (i.e. setting up an ambush in a pass that you see the
enemy is moving towards - expending resources on a strong defensive position instead of a risky frontal charge).


[quote]
Where the mathematics comes into play is in generating the scores (typically normalized to [0,1] or [0,100] or whatnot) for various actions. You do need to do some equation modeling to come up with good metrics for this...
[/quote]

The equations frequently have to be 'modal', with if-then logic to select which equations apply depending on situational factors AND to eliminate the use of factors that are irrelevant to specific situations.
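
For example (situations and formulas invented purely to illustrate the point):

[code]
// Invented illustration: pick which scoring equation applies based on the
// situation, and leave out factors that are irrelevant to that situation.

struct Situation {
    bool  inCombat;
    bool  lowOnFuel;
    float enemyStrength;
    float distanceToBase;
};

static float retreatScore(const Situation& s) {
    if (!s.inCombat) {
        // Out of combat: only fuel matters; enemy strength is irrelevant here.
        return s.lowOnFuel ? 0.8f : 0.0f;
    }
    if (s.lowOnFuel) {
        // In combat and low on fuel: distance home dominates.
        return 0.5f + 0.5f * (s.distanceToBase / (s.distanceToBase + 100.0f));
    }
    // In combat with fuel: enemy strength is the deciding factor.
    return s.enemyStrength / (s.enemyStrength + 1.0f);
}
[/code]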

[quote]
Sounds like a pretty vanilla utility architecture.
[/quote]
Yep... he's on the right track.

[quote]
...but it's a far cry from "express everything in greek letters" style formalism.
[/quote]
Which I'm well-documented as finding distasteful anyway.

[quote]
One good example is "run away from 10 guys with machine guns or 2 guys with rocket launchers" - you need to do some math to blend these numbers onto a normalized output.
[/quote]
Almost exactly the example from chapter 14... which brings me to...

[quote]
For the record - I haven't personally read Dave's book
[/quote]
After all the quality time we've spent together? Of course, you've seen my GDC lectures on the subject... *shrug*


I used such a system on a project at work, but it wasn't really typical AI. We used a state machine where each node had one or more fuzzy logic controllers built in to simulate the interaction of complex machinery. So, for example, Widget A would have a state machine, and each node of Widget A's state machine had fuzzy logic graphs for tank level, oil pressure, coolant level, input pressure, output pressure, etc.


[quote]
We used a state machine where each node had one or more fuzzy logic controllers built in to simulate the interaction of complex machinery.
[/quote]

Similar. The difference is that you were using actual concrete values. Utility is often a measure of something intangible, i.e. what it means to us. Often, converting a concrete value to a utility value is a good idea. For example, the number of bullets in our gun can be mapped onto the "desire to reload". And it's not necessarily linear.
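
For instance (a made-up mapping, not from the book or any particular game):

[code]
// Made-up illustration: a concrete value (bullets left in the magazine)
// mapped onto an intangible utility value ("desire to reload"). The mapping
// is deliberately non-linear -- the desire stays low until the magazine gets
// close to empty, then climbs quickly.

static float reloadDesire(int bulletsLeft, int magazineSize) {
    if (magazineSize <= 0) return 0.0f;
    float fullness  = static_cast<float>(bulletsLeft) / magazineSize;
    float emptiness = 1.0f - fullness;          // 0 = full, 1 = empty
    return emptiness * emptiness * emptiness;   // cubic curve, not linear
}
[/code]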
