Final Year AI research (Why is AI hard?)

5 comments, last by willh 12 years, 1 month ago
hi

I'm planning Artificial Intelligence research for my final year, which mainly asks why it has been so difficult to get AI that behaves *rationally* yet unpredictably and dynamically, as opposed to the patterned behaviour that players learn to exploit after playing for a certain period of time. And, related to the first question, why it has been difficult for AI learning systems to behave humanly, as opposed to the weirdness that is likewise discovered after a period of play.

Why has it been so hard to get really good AI?
Plus, what good academic literature is there in this area?
You need to define what types of games you are talking about (chess programs have been extremely successful by pretty much any measure). Then you have to wonder what "really good AI" means. In most games it means "AI that produces behavior that adds to the player's enjoyment of the game". By that measure, there are many examples of successful AI.

Also, unpredictable AI is a nightmare for testers, not an objective in itself.
Rationality does not produce human-like behavior. In fact, pure rationality is, theoretically, purely deterministic as well, because it presumes a "best solution" to any given collection of criteria.

As for "learning behaviors over time" from players, Go average the actions of people in situations and you won't develop "intelligence" that can handle other situations that are similar. Therefore, using some sort of aggregate system to "learn" behaviors isn't going to provide anything other than a vanilla pattern-matching device.

And if you are asking a forum for "good academic literature" recommendations, you're doing your research wrong.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

I'll add a very simple concept that is very often missed: AI doesn't need to be deterministic or even intelligent, it just has to avoid constantly "looking dumb". I don't code complicated AI; I code AI which "looks" complicated. It sometimes does some "OMG, you bastard" smart things, but in general it is mostly just emergent behavior with minor smarts applied where failures occurred (i.e. just a modification of influence mapping). Excepting big strategy games, I've never needed anything much more complicated than "don't be dumb" AI.

Reactive/emergent behavior is much more likely to produce more humanistic responses if you give the AI limited memory via influence maps and a very simple concept of desire. (I.e. expansionist desire means build up forces and attack a town to take it over, improvement desire means ignore expansion and just build bigger better stuff, etc etc.)

It is all hard-coded behavior, and only the influence maps and the desires make the AI do things intelligently or not. Simply *not* repeating failed actions is generally the key. (Very broad generalizations, but training is viable via code and/or data tables.)
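
As a rough illustration of that influence-map-plus-desire idea (the grid size, decay rate, and the two desires here are all made up for the example): the "memory" is just old influence values that fade each update, and the desire only re-weights what the map already says.

```cpp
#include <array>
#include <cstdio>

constexpr int W = 8, H = 8;

// One scalar influence layer: positive = our strength, negative = enemy's.
// "Memory" comes from decaying old values instead of recomputing from scratch.
struct InfluenceMap {
    std::array<std::array<float, W>, H> v{};

    void decay(float rate = 0.9f) {            // fade old information
        for (auto& row : v) for (float& c : row) c *= rate;
    }
    void stamp(int x, int y, float amount) {   // record a unit or an event
        if (x >= 0 && x < W && y >= 0 && y < H) v[y][x] += amount;
    }
};

enum class Desire { Expand, Improve };

// Very small "brain": the desire just re-weights what the map says.
// An expansionist AI is drawn to enemy-held cells; an improver ignores them.
float scoreCell(const InfluenceMap& m, int x, int y, Desire d) {
    float enemy = m.v[y][x] < 0.0f ? -m.v[y][x] : 0.0f;
    return d == Desire::Expand ? enemy : -enemy;
}

int main() {
    InfluenceMap map;
    map.stamp(5, 3, -1.0f);   // an enemy town we noticed last turn
    map.decay();              // memory fades a little each update
    std::printf("expand score at (5,3): %.2f\n",
                scoreCell(map, 5, 3, Desire::Expand));
}
```

Stamping a strong negative value on a cell where an attack failed is one simple way to get the "don't repeat failed actions" behavior: the score for that plan drops until the memory decays away.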

Why has it been so hard to get really good AI?


From experience, a main deterrent is that very few clock cycles are budgeted for AI. On the few professional games I've worked on, I've rarely had more than an average of 2 ms per update frame to dedicate to the AI of all NPC behavior, with a max operation time of 4 ms per frame and severe memory constraints (this is from my projects specifically; I'm certain other games have different budgets).

At first this might not seem so bad (just distribute the AI operations over multiple frames), but a skilled player has infuriatingly fast reaction times, so as a quick fix you tend to dumb down your AI so it can react more quickly.
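
For what it's worth, a common way to spend that kind of budget (this sketch and its numbers are illustrative, not from any particular engine) is a round-robin scheduler that updates as many agents as fit in the slice and resumes where it left off on the next frame; agents that miss a frame simply keep executing their previous decision.

```cpp
#include <chrono>
#include <cstddef>
#include <vector>

struct Agent {
    void think() { /* expensive planning: pathfinding, target selection... */ }
};

// Round-robin AI scheduler: spend at most `budget` per frame on thinking,
// then pick up where we left off on the next frame.
class AiScheduler {
public:
    explicit AiScheduler(std::vector<Agent>& agents) : agents_(agents) {}

    void update(std::chrono::microseconds budget) {
        using clock = std::chrono::steady_clock;
        const auto start = clock::now();
        std::size_t updated = 0;
        while (updated < agents_.size() && clock::now() - start < budget) {
            agents_[next_].think();
            next_ = (next_ + 1) % agents_.size();
            ++updated;
        }
    }

private:
    std::vector<Agent>& agents_;
    std::size_t next_ = 0;
};

int main() {
    std::vector<Agent> npcs(200);
    AiScheduler scheduler(npcs);
    // Matching the budget described above: ~2 ms of AI per frame, total.
    scheduler.update(std::chrono::microseconds(2000));
}
```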

Then there are the ever-present marketing decisions that throw a wrench into well-designed systems, like "we need the AI to do this undesigned feature and we need it for the demo next week" or "you know that cover and self-preservation thing? We want it taken out; have the enemies charge forward so you can gun them down like Rambo."

I have yet to meet an AI designer who does not understand design and implementation of complex AI, but I have yet to be on a project that allows it.
There are some very good points above, but I feel a few other points need to be made:
- Humans are adept at noticing patterns in behaviour, and exploiting them.
- A human may have a totally different goal than what your AI anticipates. A great example is all the bizarre griefing style behaviour in many MMOs which don't actually benefit the player in an in-game sense.
- The early developments in computer science focussed on optimality, which is by its nature predictable.
- Often games allow much more complicated meta-games than the designer anticipated. Therefore you're attempting to write an AI that solves problems not known at design time.

There are some very good points above, but I feel a few other points need to be made:
- Humans are adept at noticing patterns in behaviour, and exploiting them.
- A human may have a totally different goal than what your AI anticipates. A great example is all the bizarre griefing style behaviour in many MMOs which don't actually benefit the player in an in-game sense.
- The early developments in computer science focussed on optimality, which is by its nature predictable.


Since this is for school, and while I agree with what you've posted, I would like to add some contrarian-leaning observations. :)

People are great at _some_ patterns, but they suck at conditional probabilities (see the Monty Hall problem), and are easily suckered into a Dutch Book scenario (this is the basis of modern consumer banking). We, collectively, are also very susceptible to confirmation bias. This applies to games, science, politics, etc. We foolish souls tend to look for confirmation of a theory rather than seek out contradictory evidence.
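
The Monty Hall point is easy to verify empirically; here's a quick standalone simulation (the standard puzzle, nothing project-specific) showing that switching wins about two thirds of the time while intuition insists on 50/50:

```cpp
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> door(0, 2);

    const int trials = 100000;
    int stayWins = 0, switchWins = 0;
    for (int i = 0; i < trials; ++i) {
        int prize = door(rng);
        int pick  = door(rng);
        // The host opens a non-prize, non-picked door; switching means
        // taking the one remaining closed door.
        if (pick == prize) ++stayWins;   // staying wins only if the first pick was right
        else               ++switchWins; // otherwise the switch door holds the prize
    }
    std::printf("stay: %.3f, switch: %.3f\n",
                stayWins / double(trials), switchWins / double(trials));
}
```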

The 'griefer' falls into what's called Byzantine game theory. Their reward is maximum loss for the opponent, without any regard to personal cost. The challenge, theory-wise, is detecting a Byzantine player early. IIRC you can find an equilibrium for the defender(s) in these scenarios.

Optimality doesn't imply predictability. You can find an 'optimal' solution given all possibilities, but it will likely provide less reward than a solution that assumes only some possibilities. Many solutions work by limiting what 'all' is, or how 'all' looks numerically. Kernel methods are a great example of how 'all' can be redefined in a way that makes it more conducive to certain tasks. Outside of that, the trick is a balance of risk/reward. If you are willing to make assumptions (take risk), then you can optimize against the set of assumptions.
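
One way to see that risk/reward trade-off concretely (the payoff numbers below are invented for illustration): a worst-case policy guards against every possibility, while a policy optimized against an assumed opponent distribution scores better when the assumption holds and worse when it doesn't.

```cpp
#include <cstdio>

int main() {
    // Our payoff for (our action, opponent action); rows are our actions.
    // Action 0 is "safe"; action 1 is a gamble that assumes the opponent plays 0.
    const double payoff[2][2] = {{2.0, 2.0},   // safe: same result either way
                                 {5.0, -3.0}}; // gamble: great vs 0, bad vs 1

    // Worst-case view: pick the row whose minimum payoff is largest.
    double worst0 = payoff[0][0] < payoff[0][1] ? payoff[0][0] : payoff[0][1];
    double worst1 = payoff[1][0] < payoff[1][1] ? payoff[1][0] : payoff[1][1];
    int minimaxChoice = worst0 >= worst1 ? 0 : 1;

    // Assumption-based view: believe the opponent plays 0 with probability 0.8.
    const double p = 0.8;
    double ev0 = p * payoff[0][0] + (1 - p) * payoff[0][1]; // = 2.0
    double ev1 = p * payoff[1][0] + (1 - p) * payoff[1][1]; // = 3.4
    int assumedChoice = ev0 >= ev1 ? 0 : 1;

    std::printf("minimax picks %d (guaranteed %.1f); "
                "assuming p=%.1f picks %d (expected %.1f)\n",
                minimaxChoice, worst0 >= worst1 ? worst0 : worst1,
                p, assumedChoice, ev0 >= ev1 ? ev0 : ev1);
}
```

Here the minimax policy is fixed (and thus predictable), while the assumption-taking policy earns more in expectation but can be punished by an opponent who violates the assumption.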
