Learning system design.

I am going to have an AI system that gradually learns what techniques and combat tendencies you use, but I am working on balancing the design. Your men will also learn, and since they will most likely have a longer lifetime than the AI enemies, it will work well for them. My question is this: would it be better to have enemies that

(A) learn as a group -- after a few get killed by a certain combo or something they have a hard time blocking, the others learn from the mistakes of the ones that were killed and adapt;
(B) are fewer in number but have higher hit points, so they stay alive longer and are fast learners individually; or
(C) something else?

Thanks.
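As a rough illustration of option (A), here is a minimal sketch (in Python, with invented names like SharedKnowledge and record_death) of how enemies of one type might pool what their fallen members learned, so survivors slowly get better at blocking a combo that keeps killing them:

```python
# A minimal, hypothetical sketch of option (A): enemies of one type share a
# knowledge base, so a lesson learned by a dead group member is available to
# the survivors. All names here are invented for illustration.
import random
from collections import defaultdict

class SharedKnowledge:
    """Per-enemy-type memory of which player moves have killed group members."""
    def __init__(self):
        self.deaths_by_move = defaultdict(int)

    def record_death(self, player_move):
        self.deaths_by_move[player_move] += 1

    def block_bonus(self, player_move, max_bonus=0.6, deaths_to_master=5):
        # The more often a move has killed group members, the better the
        # survivors get at blocking it (capped so it never becomes unbeatable).
        seen = self.deaths_by_move[player_move]
        return min(max_bonus, max_bonus * seen / deaths_to_master)

class Enemy:
    def __init__(self, knowledge):
        self.knowledge = knowledge  # shared by every enemy of this type

    def try_block(self, player_move, base_chance=0.1):
        chance = base_chance + self.knowledge.block_bonus(player_move)
        return random.random() < chance

# Usage: all goblins share one SharedKnowledge instance.
pack_knowledge = SharedKnowledge()
goblins = [Enemy(pack_knowledge) for _ in range(6)]
pack_knowledge.record_death("low-sweep -> overhead")  # one goblin died to this combo
```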
I say do it whichever way is the most fun to play. You're not designing your whole game just to show off your "learning" AI, you're adding learning to make the game more fun (presumably), so it seems counter-intuitive to me to change the mechanics of your game just to make the learning "fit".
Well, learning enemies are more fun than normal enemies. The problem is, if all the enemies of a certain type suddenly learn from a move you used against another of their type, it would seem unrealistic.
Replicators are known to be incredibly adaptive and intelligent adversaries. :-P


It will depend a lot on how your game mechanics work and how complex your game is (as in the number of factors that determine the results of actions).


If you have scenery/terrain factors involved, or item buffs/adjustments, those things would have to be considered (factored in) by the learning AI, or it will often be making incorrect assumptions about the situation. Unless you tell the AI ahead of time which factors are significant, it has to figure out which ones have affected the outcomes it watches -- and THAT is a fairly difficult AI task in itself.
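For example, one way to avoid those incorrect assumptions is to tag each observed outcome with the situational factors you already know are significant, and only compare like with like. A hedged sketch, with illustrative field names:

```python
# Hedged sketch: record the contextual factors the designer already knows
# matter (terrain, buffs) alongside every observed outcome, and bucket samples
# by that context so unlike situations are never averaged together.
# Field names are illustrative, not from the original post.
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    terrain: str          # e.g. "forest", "open", "high ground"
    attacker_buffed: bool
    defender_buffed: bool

@dataclass(frozen=True)
class Observation:
    context: Context
    player_action: str
    damage_dealt: float

def bucket_by_context(observations):
    # A buffed hit from high ground never gets averaged with an unbuffed
    # hit in the open, so conclusions are drawn per situation.
    buckets = {}
    for ob in observations:
        buckets.setdefault((ob.context, ob.player_action), []).append(ob.damage_dealt)
    return buckets
```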

Similarly, if your results for, say, damage_points are calculated on a bell curve or with an open-ended exponential calculation (or things like criticals that do 2x or 3x the normal damage), you have to either have the AI recognize exceptional results (understand that things work that way) or take longer samples to average out possible outcomes before creating the decision analysis functions.
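A minimal sketch of the "take longer samples" approach: keep a running estimate per observed action and refuse to draw a conclusion until enough hits have been seen to wash out criticals and bell-curve spread (the threshold here is an arbitrary placeholder):

```python
# Sketch of averaging out noisy damage rolls: a running estimate per player
# action that stays silent until the sample is big enough. min_samples is a
# made-up threshold, not a recommended value.
class DamageEstimate:
    def __init__(self, min_samples=20):
        self.min_samples = min_samples
        self.count = 0
        self.total = 0.0

    def add(self, damage):
        self.count += 1
        self.total += damage

    def mean(self):
        # Return None until enough samples exist; otherwise a single 3x
        # critical early on would dominate the estimate.
        if self.count < self.min_samples:
            return None
        return self.total / self.count
```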

The more factors involved in generating game action results, the more variables need to be in the equation the AI uses to evaluate what it sees, so that it can draw a conclusion about what its counteractions should be. From there, there is the further task of deciding how effective those counteractions were (which is subject to the same evaluation difficulties mentioned above).

Another difficulty comes when multiple actions lead up to the result. Games often have combinations where one action sets up/enables a following action. Others have effects that happen over time (i.e. leeching HP), where the entire result is a summation of the effect over time.
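One hedged way to handle that is to credit the whole recent action window (and any over-time ticks it spawned) rather than just the final hit, so a setup-plus-follow-up combination is learned as a pair. Class and method names below are made up for illustration:

```python
# Illustrative sketch: attribute damage to the whole recent action window, not
# just the final hit, so setups, follow-ups, and over-time effects (like HP
# leeching ticks) all credit the sequence that caused them.
from collections import defaultdict, deque

class OutcomeTracker:
    def __init__(self, window=3):
        self.recent_actions = deque(maxlen=window)   # setup + follow-up actions
        self.damage_by_sequence = defaultdict(float)

    def note_action(self, action):
        self.recent_actions.append(action)

    def note_damage(self, amount):
        # Credit the entire recent sequence, so "stun -> heavy attack" is
        # learned as a pair rather than the heavy attack alone.
        sequence = tuple(self.recent_actions)
        self.damage_by_sequence[sequence] += amount
```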


If the game mechanics are simple -- where cause and effect are likewise simple -- then programming the 'learning' is easier. But such a system doesn't offer the player many options, and thus there is little variation of actions for an AI to react to.

When the system is complex (and you DO have a competent learning AI system), then there is the problem of getting enough samples to generate a valid conclusion.
Will you run the player through the same flavor of situation many, many times to get enough training info? And the more complex the situation, the more CPU resources are needed to do the analysis. Real time might not even be possible.
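One common way to keep that affordable (a sketch, not the only answer): collect observations cheaply as they happen, but only rerun the expensive decision analysis once a batch of new samples has accumulated, so the learning cost stays off the per-frame path:

```python
# Sketch of keeping analysis cost off the hot path: observations are collected
# cheaply as they happen, and the expensive decision analysis only reruns once
# a batch has accumulated. batch_size and the model contents are placeholders.
class BatchedLearner:
    def __init__(self, batch_size=50):
        self.batch_size = batch_size
        self.pending = []
        self.model = {}   # whatever the decision-analysis step produces

    def observe(self, sample):
        self.pending.append(sample)
        if len(self.pending) >= self.batch_size:
            self.rebuild()

    def rebuild(self):
        # Stand-in for the real (expensive) analysis; runs once per batch
        # instead of once per observation or per frame.
        self.model = {"samples_analyzed": len(self.pending)}
        self.pending.clear()
```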



--------------------------------------------
Ratings are Opinion, not Fact

