Rethinking orders

Started by Dauntless. 5 comments, last by Dauntless 21 years, 1 month ago.
While I was trying to figure out the real-time loop of my game and how Orders fit into it, I started to think that perhaps I was approaching my Order system all wrong.

In a nutshell, there are two kinds of AI systems in my game. The first is a finite state machine in which the Commanders execute Orders based on certain rules and conditions. For example, an Order can tell a Commander to attack an enemy target if it closes within 1000m, otherwise to move to a certain position, and the Commander does this to the best of his ability. There are a few monkey wrenches thrown into the mix here, but I'll get to those a little further on.

The second form of AI is the autonomous AI built into all the Commanders. It is autonomous and learning because the Commanders will at times have to react to situations that were unaccounted for. Some of these can be rules-based, but many of them are more "fuzzy" in nature. For example, let's say a Commander was given an Order to attack some enemy units. During the course of the real-time phase, however, another hitherto hidden unit is discovered before it launches an ambush attack. Now the Commander is faced with a dilemma: does he continue on with his original Order, or does he respond to the new threat? Again, it is possible to hard-code rules for situations like these, but eventually a scenario will come up that the designer didn't think about. In that case it makes more sense for the AI Commander to try to figure things out based on several factors.

And now we get to the meat of my problem: what Orders exactly are. At first, I envisioned them to essentially be a set of instructions for the Commander to carry out, with some conditional logic thrown in. The conditional logic covers basic things like IF event A happens, THEN do action B, or WHILE event D occurs, DO action C. These very plain-English statements will be built into the GUI menu, which will contain actions (commands such as attack, move, rally, form, resupply, etc.), modifiers to actions (for example, suppressive fire vs. assault fire, or stealth move vs. all-out move), events (actions of the target or outcomes of actions), and finally, conditional logic expressions (IF-THEN, IF-ELSE, DO-WHILE, etc.). The player "builds" his Order on the GUI menu, which the Commander then receives.

I'm now thinking that perhaps this isn't the best way to do it. Instead, when I really thought about what "Orders" are, I boiled it down to its essence:

Goals: What you want the unit to accomplish. It could be to destroy a target, to follow it, to find all enemy positions, to move to a position, to get resupplied, etc.

Freewill: How much free rein does the Commander have in accomplishing the goal? Must he do EXACTLY as the player ordered, or does the player just give him a rough guideline and let the AI Commander work it out himself?

Priority: How important is this goal? Must it be done at all costs, or if an alternative is available, will that path be taken instead? What is the threshold at which the goal is deemed too costly to pursue?

And that's pretty much it. The player can assign a primary (or long-term) and secondary (or short-term) goal, how much freewill the Commander has, and the priority of the Order. All the details will then be worked out by the AI of the Commander. This way, you essentially eliminate the FSM AI system, and instead rely on how well the Commander understands what he's supposed to do.
The disadvantage I see here is that it may put too much decision-making in the hands of the AI, and if the AI is stupid, it will frustrate the player. The advantage is that the player still has a chance to correct actions during the real-time phase. Also, the added flexibility means the player can concentrate more on strategy and less on tactics. I also wanted the Commanders to almost be like NPCs that the player should be concerned about. By having self-learning systems, it would be possible for the player to "reward" smart-behaving Commanders. I'm not too familiar with NNs or GAs, but I know that each of them relies on fitness tests or weighted structures that determine the effectiveness of a course of action. Therefore the player could help reinforce good behaviors and punish bad ones (along with the AI learning on its own). In time, if you keep your Commanders alive, they will become more and more effective, not to mention their commands, which will become more experienced and disciplined. Yet another way to get rid of the "cannon fodder" mentality that I hate in strategy gaming.
"The world has achieved brilliance without wisdom, power without conscience. Ours is a world of nuclear giants and ethical infants. We know more about war than we know about peace, more about killing than we know about living. We have grasped the mystery of the atom and rejected the Sermon on the Mount." - General Omar Bradley
I am at work so I'll keep this short.

Heuristics.

Basically, weight each action with a value and use situations as modifiers. Add them up, and the action with the biggest value gets chosen.

For example, a commander has a choice to fight or flee.

Action - Stay and fight: 10
Situation - Outnumbered: -3
Situation - Ambush: -2

Action - Flee: 5
Situation - No backup: +2

Stay and fight = 10 - 3 - 2 = 5
Flee = 5 + 2 = 7
Our commander should flee.

Individual commanders can have different personas. If a commander is trigger-happy, then his "stay and fight" base action could be 15 instead of 10, and he would still be fighting in our example.

This would in turn affect gameplay, as you wouldn't send Commander Trigger Happy on a recon.
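A minimal sketch of that scoring scheme in C++ (all names here are hypothetical, not from the game): each action carries a persona-dependent base value, situation modifiers are summed in, and the highest total wins.

#include <iostream>
#include <string>
#include <vector>

struct ActionOption {
    std::string name;
    int baseValue;               // persona-dependent base, e.g. 15 for a trigger-happy commander
    std::vector<int> modifiers;  // situation modifiers, e.g. -3 outnumbered, -2 ambushed
    int total() const {
        int sum = baseValue;
        for (int m : modifiers) sum += m;
        return sum;
    }
};

// Picks the option with the highest total score.
const ActionOption& chooseAction(const std::vector<ActionOption>& options) {
    const ActionOption* best = &options.front();
    for (const ActionOption& o : options)
        if (o.total() > best->total()) best = &o;
    return *best;
}

int main() {
    std::vector<ActionOption> options = {
        { "stay and fight", 10, { -3, -2 } },  // outnumbered, ambushed
        { "flee",            5, { +2 } }       // no backup
    };
    std::cout << "Chosen: " << chooseAction(options).name << "\n";  // prints "flee" (7 beats 5)
    return 0;
}

Swapping in a base of 15 for a trigger-happy commander flips the result, which is exactly the persona effect described above.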
Just another random thought.
As always in these situations, we look first at real life and then make abstractions.

In real life, an order is issued to a commander in direct control, occasionally with a few constraints, and he then decides exactly how to accomplish the objective. In other words, Freewill is virtually always at a maximum (depending, of course, on the level of constraint). The commander then evaluates knowns, such as the objective, identified obstacles, geographical data, etc., and creates a plan or course of action.

This plan is then executed with continuous reevaluation (the plan will consist of a number of stage markers, so it would logically be reviewed at each stage attained without incident). If an unexpected quantity is encountered before a stage marker is reached, a decision must be made. For example, if the commander comes across enemy units, does he take evasive action or engage? It depends on his objectives - if stealth is an objective, then evasive action is the logical course. If the company is discovered while taking evasive action, then the new decision is whether to engage or retreat - evaluated in the light of higher objectives.

Obviously, the system "devolves" (i.e., is abstracted) into a response evaluator taking bits of "knowledge" or "information" as input and generating a course of action as output. This level of inference is only necessary for the AI commander, while the individual troops can respond on a much more primitive level (avoid incoming fire, gain strategic advantage, eliminate enemy, follow orders, etc.).
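One way to picture that abstraction as code, with every name below being a placeholder rather than anything from the game: the commander-level evaluator is just a function from a knowledge snapshot to a course of action, and the thresholds are arbitrary stand-ins.

// Hypothetical knowledge snapshot fed to the evaluator.
struct Knowledge {
    int objectivePriority;   // how important the current objective is
    int perceivedThreat;     // summed threat of known enemy contacts
    bool stealthRequired;    // constraint attached to the order
};

enum class CourseOfAction { Proceed, Evade, Engage, Retreat };

// Commander-level inference: knowledge in, course of action out.
CourseOfAction evaluateResponse(const Knowledge& k) {
    if (k.stealthRequired && k.perceivedThreat > 0) return CourseOfAction::Evade;
    if (k.perceivedThreat > k.objectivePriority)    return CourseOfAction::Retreat;
    if (k.perceivedThreat > 0)                      return CourseOfAction::Engage;
    return CourseOfAction::Proceed;
}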
StaticVoid-
Good idea on the Commander's set of "statistics" by which he can gauge the appropriate course of action. Since I'm not too familiar with GAs or NNs, though, isn't it possible to have the Commander himself alter these statistics?

For example, let's say the Commander's "stay and fight" statistic is set to 15. However, after several battles in which his units always get mauled, some kind of "fitness" metric is used to analyze his performance. The Commander slowly figures out, "Gee, maybe it'd be better if I fall back this time," and his "stay and fight" score drops to 13.
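A rough sketch of that kind of self-adjustment without any NN or GA machinery, just a post-battle nudge to the base score (the names, thresholds, and clamp range are all assumptions):

#include <algorithm>

struct CommanderPersona {
    int stayAndFight = 15;   // base score for the "stay and fight" action
};

// lossRatio: fraction of the commander's own force lost in the engagement.
void updateAfterBattle(CommanderPersona& p, double lossRatio) {
    if (lossRatio > 0.5)       p.stayAndFight -= 1;   // got mauled: be less eager next time
    else if (lossRatio < 0.1)  p.stayAndFight += 1;   // came through cleanly: reinforce the behaviour
    p.stayAndFight = std::clamp(p.stayAndFight, 5, 20); // keep the score in a sane range
}

The player's reward/punish input could simply be another term added or subtracted here alongside the automatic adjustment.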
Oluseyi-
I'm guessing I'm going to have to come up with some sort of "World_Knowledge_Domain" which basically deals with the "evaluation" aspect of carrying out Orders.

For example, the Commander must have some way of evaluating the threat level of another group; he must understand the pros and cons of certain kinds of terrain, the pros and cons of the weapons at his disposal, what formation is best to deal with certain groups, and so on. The other aspect you brought up that I was thinking about is the problem of reevaluating steps.

In my game, real-time phases are broken up by Order phases. Basically, for a minute the game progresses in real time, and all Commanders and Units update their positions, their status, and their last actions in various Manager classes. During this minute, situations can arise that were not accounted for, in which case the Order may need to be reassessed. The trick is in figuring out how this reassessment is triggered.

Going back to my earlier ambush example, it would imply that the Commander continuously polls the environment, updating his own internal knowledge of the world. When a new, as-yet-unknown target appears, a corresponding change in his "knowledge" occurs. Now the Commander has to evaluate whether the priority of his Order overrides the threat, whether his own survival instincts kick in, or whether an "Opportunity" presents itself (what if the target turns out to be a lightly guarded enemy command headquarters?).

Now, this seems like an awful lot of computing cycles, because each Commander is continuously polling the environment to check whether there are any changes that may require a reassessment of his Orders. But I've already decided that the RealTime_Manager class will in turn go through a container class of all the Commanders in the game (I'm not sure what kind of container, but I'm guessing some kind of map would be best so that it can hold certain key values, though I'm not sure how fast maps are). Once the RealTime_Manager has access to a Commander, it calls that Commander's AI. The AI class will in turn do six things in this sequence: 1) update World_Data, 2) update Status_Info (things like morale, UnitIntegrity, supplies), 3) examine the Order object, 4) do a Threat assessment, 5) do an Opportunity assessment, 6) carry out the action. All steps but 3 will cycle through the loop during the real-time phase.
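A skeleton of that six-step sequence, assuming hypothetical method names on the Commander and using the RealTime_Manager idea described above; step 3 would be invoked separately during the Order phase:

#include <vector>

class Commander {
public:
    void updateWorldData()     { /* 1) refresh internal picture of the world */ }
    void updateStatusInfo()    { /* 2) morale, UnitIntegrity, supplies */ }
    void examineOrder()        { /* 3) read the Order object (Order phase only) */ }
    void assessThreats()       { /* 4) threat assessment */ }
    void assessOpportunities() { /* 5) opportunity assessment */ }
    void carryOutAction()      { /* 6) act on the decision */ }

    // One real-time update for this commander; step 3 is skipped here.
    void realTimeTick() {
        updateWorldData();
        updateStatusInfo();
        assessThreats();
        assessOpportunities();
        carryOutAction();
    }
};

class RealTimeManager {
    std::vector<Commander*> commanders;  // whatever container ends up being used
public:
    void tick() {
        for (Commander* c : commanders) c->realTimeTick();
    }
};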

My concern with this is that it's not really simultaneous action for all the Commanders. Instead, one Commander goes through all the steps, then the next Commander, and so on. I suppose I could do steps 1-5 for each Commander, store the action away, and then and only then have all Commanders execute their actions. That way, the actions of the first Commander won't trigger the Threat Assessment stage of the fifth Commander to act.
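A standalone sketch of that decide-then-execute split (hypothetical names): every commander runs steps 1-5 and stores a pending action, and only after all of them have decided does anyone act, so earlier actions in the same tick cannot influence later commanders' assessments.

#include <vector>

enum class Action { None, Move, Attack, Retreat };

class Commander {
    Action pending = Action::None;
public:
    void decide()  { pending = Action::Move;  /* steps 1-5 would choose this for real */ }
    void execute() { /* step 6: apply 'pending' to the game world */ pending = Action::None; }
};

void realTimeTick(std::vector<Commander>& commanders) {
    for (Commander& c : commanders) c.decide();   // phase 1: everyone decides
    for (Commander& c : commanders) c.execute();  // phase 2: everyone acts
}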
quote:Original post by Dauntless
For example, the Commander must have some way of evaluating the threat level of another group; he must understand the pros and cons of certain kinds of terrain, the pros and cons of the weapons at his disposal, what formation is best to deal with certain groups, and so on.

In real life, yes. In a simulation, however, we abstract this. Each unit/group/object should define its own advantage and disadvantage - potentially in several categories - and its affiliation (including neutral), such that evaluating the threat/potential gain becomes simple arithmetic. The base values and rules for appreciation and deprecation of [dis]advantage should be developed by the designers and placed in data that is dynamically bound to the game world (meaning that new units can be added or values can be tweaked without needing to rebuild the game engine).
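A sketch of what that dynamically bound data might look like (the file format and field names are assumptions): unit values live in a plain data file loaded at startup, so designers can tweak or add entries without rebuilding the engine, and threat evaluation reduces to arithmetic over the loaded numbers.

#include <fstream>
#include <map>
#include <string>

struct UnitProfile {
    int attack, defence, stealth;   // per-category [dis]advantage values
    int affiliation;                // faction id; 0 = neutral
};

// Reads "name attack defence stealth affiliation" lines from a text file.
std::map<std::string, UnitProfile> loadUnitProfiles(const std::string& path) {
    std::map<std::string, UnitProfile> table;
    std::ifstream in(path);
    std::string name;
    UnitProfile p;
    while (in >> name >> p.attack >> p.defence >> p.stealth >> p.affiliation)
        table[name] = p;
    return table;
}

// Threat/potential-gain evaluation becomes simple arithmetic.
int threatOf(const UnitProfile& them, const UnitProfile& us) {
    return (them.attack - us.defence) + (them.stealth - us.stealth);
}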

quote:In my game, real-time phases are broken up by Order phases. Basically, for a minute the game progresses in real time, and all Commanders and Units update their positions, their status, and their last actions in various Manager classes. During this minute, situations can arise that were not accounted for, in which case the Order may need to be reassessed. The trick is in figuring out how this reassessment is triggered.

Asynchronous execution. It's like the collision detection question where people wonder whether it makes sense to have all 1000 of their objects polling the environment for collisions every frame. The solution in each case is to forecast when the next "collision" will occur (even if the Commander is unaware, the game world knows that given his current trajectory he will encounter hostile forces at juncture X). Every time a trajectory changes, the altered group's impending collisions are recomputed (constrained to a reasonable radius, so you're not checking whether a car in Brooklyn, NY will hit a pedestrian in Trenton, NJ) and the new value is entered into the world data table. At the point where a collision is registered, the Commander is then messaged to reevaluate circumstances.
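A sketch of that forecasting idea (all structures and names are assumptions): the world keeps a queue of predicted encounters ordered by time, entries are recomputed when a group's trajectory changes, and a commander is only messaged when a predicted encounter actually comes due.

#include <functional>
#include <queue>
#include <vector>

struct PredictedEncounter {
    double time;         // game time at which the encounter is forecast
    int    commanderId;  // who needs to reevaluate when it happens
    bool operator>(const PredictedEncounter& o) const { return time > o.time; }
};

class EncounterForecaster {
    std::priority_queue<PredictedEncounter, std::vector<PredictedEncounter>,
                        std::greater<PredictedEncounter>> pending;  // earliest first
public:
    // Call whenever a group's trajectory changes (constrained to a reasonable radius).
    void forecast(int commanderId, double predictedTime) {
        pending.push({ predictedTime, commanderId });
    }
    // Call each tick; notifies commanders whose predicted encounters are now due.
    template <typename NotifyFn>
    void update(double now, NotifyFn notify) {
        while (!pending.empty() && pending.top().time <= now) {
            notify(pending.top().commanderId);
            pending.pop();
        }
    }
};

A real version would also need to remove or refresh stale entries when a forecast is recomputed.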

I keep coming back to this thing of maintaining "internal" knowledge "externally"...

quote:[Text intentionally omitted; context established.]

My concern with this is that it's not really simultaneous action for all the Commanders. Instead, one Commander goes through all the steps, then the next Commander, and so on. I suppose I could do steps 1-5 for each Commander, store the action away, and then and only then have all Commanders execute their actions. That way, the actions of the first Commander won't trigger the Threat Assessment stage of the fifth Commander to act.

Look up round-robin processing (and also multitasking). You let each task execute for a short amount of time, save its state, and move on to the next task, giving the illusion of simultaneous execution. You can implement it manually, or you can resort to multithreading to accomplish it (note that eventually the overhead of context switches makes multithreading less than ideal). I've also read about it being done using sockets (a wait/accept/block-type arrangement).
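A minimal single-threaded round-robin sketch (the Task interface is hypothetical): each task does a small slice of work and returns, saving its own state, so cycling over the tasks gives the appearance of simultaneous execution without threads.

#include <vector>

// Cooperative task: step() does a short slice of work and returns.
struct Task {
    virtual ~Task() {}
    virtual void step() = 0;   // must leave its own state saved before returning
};

// Every task advances a little on each pass.
void roundRobin(std::vector<Task*>& tasks, int passes) {
    for (int i = 0; i < passes; ++i)
        for (Task* t : tasks)
            t->step();
}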

[Edit: Formatting.]

[edited by - Oluseyi on March 18, 2003 8:41:16 AM]
The more I think about it, the more I think it would make sense to have the World_Data and Status_Info classes be external managers which the Commander has access to. I think it'd be easier for the RealTime_Manager loop to query a separate World_Data_Manager and Status_Info_Manager to retrieve the necessary information than to go through each and every Commander object. Now, if I do make World_Data a friend of the Commander class, should it be a two-way relationship, or would that mess up the very data encapsulation properties that make it an advantage? Personally, I don't really see the World_Data class ever needing to access the Commander directly, but the Commander does very often need to access World_Data.
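One way to get that one-way relationship without friend declarations at all (sketch only, names assumed): the Commander holds a const reference to the world-data manager and queries it through its public interface, so the manager never needs to know the Commander exists and encapsulation is preserved.

class WorldDataManager {
public:
    int knownEnemiesNear(int commanderId) const { return 0; /* lookup stubbed out */ }
};

class Commander {
    const WorldDataManager& world;   // read-only, one-way dependency
    int id;
public:
    Commander(const WorldDataManager& w, int commanderId) : world(w), id(commanderId) {}
    bool threatened() const { return world.knownEnemiesNear(id) > 0; }
};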

As for the multitasking bit, I guess that's what I was crudely envisioning when I thought about doing steps 1-5 and saving the data for each Commander, going to the next Commander, and so on until all Commanders have gone through steps 1-5. Then step 6 gets executed for each Commander.

Oh man, my head hurts already thinking about this stuff, and I haven't even really thought about how the AI works or how I'm going to link all of this to the graphics component... sigh. I definitely think creativity is my stronger suit as opposed to programming (not that programming ISN'T creative, but it's also a lot more formal and technical than my brain wants it to be).
