Why not learning AI?

Started by InnocuousFox (Dave Mark)
22 comments, last by EJH 13 years, 5 months ago
My friend and colleague, Kevin Dill, posted this column over on Game/AI today. It is a well-thought-out essay on why learning doesn't help us much in game AI. Figured it might be of interest here.

Why Not Learn?

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Quote: Original post by sconzey
Nevertheless, we did some good research and got some really interesting results from analysing the data and applying some naive techniques (like k-means clustering) to discover high-level strategic decisions. We wrote a 60,000-word paper on why "this is not easy," and still, when we presented our paper, one of the lecturers said, "So wait, you never actually made an AI?"

Awesome! That guy deserves an A!!


[Interesting... thought that there would be more comments on this article either on here or on the site itself.]


Quote: Original post by InnocuousFox
[Interesting... thought that there would be more comments on this article either on here or on the site itself.]


Although undoubtedly true, the article is not all that original. The general message echoes through the AI forums of this very website on a regular basis, leading my brain to file it under old news and prompting me to move on without comment.
Quote: Original post by Moomin
Quote: Original post by InnocuousFox
[Interesting... thought that there would be more comments on this article either on here or on the site itself.]

Although undoubtedly true, the article is not all that original. The general message echoes through the AI forums of this very website on a regular basis, leading my brain to file it under old news and prompting me to move on without comment.

True enough. It was just presented in more detail and in a more fluid manner than the piecemeal treatments we see here.


I agree that ML is not currently a good fit for games, but for completely different reasons than Kevin's.

The only one of his reasons I agree with is the part about lack of creative control. That loss is definitely a hindrance, but for some games what you might gain in return is cool unscripted behaviour.

But reasons 4 and 5? Something that no one has done before because it is so hard is exactly the kind of thing that gets disrupted, worked on, and then joked about: how did people ever find it difficult?

Then there are issues 3 and 2, which share the same root issue: what he calls rigid black boxes. Machine learning is not limited to those approaches; there are frameworks like PAC learning to aid understanding, and techniques like entropy- or margin-based models with clear mathematical underpinnings. As for the problem he describes, changing game parameters should only be an issue if the algorithm in use is sensitive to overfitting and generalizes poorly, or if the game parameters vary very wildly during production. Techniques such as online learning and reinforcement learning are resilient to unstable environments like that. In particular, reinforcement learning is not a supervised technique and is probably the most applicable ML technique for games. Also, I feel he misuses the term fitness function, applying it more generally than I have encountered.

And issue 1 is a matter of using what works best. A controller for a car could use fuzzy logic, which is not machine learning but is arguably more robust than hand-tuning a set of equations plus if statements. But approaching machine learning as if it were anything like what the name suggests is the wrong mental model. It is just statistics, differential equations, and linear algebra, and fairly basic (as far as such things go) at that. Saying maths is not applicable to AI sounds a bit silly. And this brings me to the core issue on which I feel we agree: why machine learning is not enough for games.

Games are actually still trying to create AI, but machine learning, especially the supervised kind, is really just statistics, and that is not enough for the purposes of games. Games need agents that look smart but also make mistakes, are defeatable, and are fair. Machine learning excels where there is an enormous amount of data about some unknown condition, but games are almost the opposite by definition: they are environments built from the ground up by people who become intimate with many of their details. The need for statistics is small, and a hand-built HFSM is much more appropriate to those conditions, with their need for creative but defeatable behaviour, than ANNs, SVMs, or what have you.

But if I were to suggest one thing from ML to games people, it would be reinforcement learning. The fact that FSMs are so prevalent is suggestive of its applicability: many game environments are well modelled by Markov decision processes, which is exactly the setting reinforcement learning operates in. Reinforcement learning is far less ad hoc, with a much sounder mathematical underpinning, an ability to be reasoned about, and room to incorporate both planning and learning. It can be combined with game theory and tuned for mixed strategies, which keep behaviour from being predictable in a given situation.
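To make that concrete, here is a minimal tabular Q-learning sketch in Python. The states, actions, and reward signal are invented placeholders, not something lifted from a real game:

import random
from collections import defaultdict

# Minimal tabular Q-learning over a hypothetical game MDP.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration
ACTIONS = ["attack", "defend", "retreat"]
Q = defaultdict(float)  # (state, action) -> estimated long-term value

def choose_action(state):
    # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Standard Q-learning backup toward reward plus discounted best next value.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])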

[Edited by - Daerax on November 5, 2010 11:07:24 AM]
@Moomin: Absolutely agreed. This discussion has been had to death. My intent here wasn't so much to present anything new, as to gather my thoughts on the subject in one place.

@Daerax: If you're a researcher, whose job is to try hard things that haven't been done before and figure them out, or if you're a hobbyist who wants to do something for the first time in order to get the props from the community, then sure, go for it. If you're somebody who actually wants to build games, on the other hand, then the fact that lots of people have attempted this and nobody has yet pulled it off is probably a strong argument for choosing a different path, at least until somebody from one of those first two groups cracks it.

You're not the first to suggest RL. It's a better choice than NNs or GAs, to be sure, and there are small domains within games where it makes sense. We used very limited and simplistic reinforcement learning in Kohan 2 to prevent the AI from continuously attacking with the same-sized force and losing, for example. But as your primary AI, intended to learn the high-level behavior for your character? I'd be amazed if you can pull that off with an out-of-the-box learning algorithm. *Particularly* if you want any personality out of your characters (like, say, the AI that made the Halo franchise such a success).
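For illustration only, and emphatically not the actual Kohan 2 code: learning that limited can be as small as nudging the force size on win/loss feedback. A hypothetical sketch:

# Hypothetical sketch only; the class, numbers, and policy are invented.
class ForceSizer:
    def __init__(self, initial_size=10, step=5, minimum=5):
        self.size = initial_size
        self.step = step
        self.minimum = minimum

    def record_battle(self, won):
        if won:
            # Winning: drift back toward a leaner force.
            self.size = max(self.minimum, self.size - self.step // 2)
        else:
            # Lost at this size: commit more next time instead of repeating.
            self.size += self.step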

I do agree with the difficulties you mention - that's some of what I was trying to get at when I said that it's hard to generate the data, or to figure out what the parameters for the learning should even be.
Quote: Original post by Kevin Dill
I do agree with the difficulties you mention - that's some of what I was trying to get at when I said that it's hard to generate the data, or to figure out what the parameters for the learning should even be.

Which gets to the heart of the problem with AI algorithms that expect a fixed set of inputs, when the real problem to be solved is figuring out what constitutes a worthwhile input. It's a most-significant-dimensions type of problem, where sorting out the details is a subset of categorizing, and determining the value of the axes on those categories constitutes the actual merit of the problem solving.

There's a group of enemies approaching: from the south, 10 minutes into the game, about a dozen mixed units of soldiers with a few heavies, tightly spaced... And it will probably turn out that only a small subset of the possible variables actually matters. Determining the most significant subset that contributes to victory is, IMO, a feature of AI that needs some love.

[Edited by - AngleWyrm on November 6, 2010 2:42:48 AM]
--"I'm not at home right now, but" = lights on, but no ones home
I had no idea that this was such a deeply polarizing issue within the game AI community.

I agree with much of what Kevin says, especially the part about keeping things simple. If I were developing a game and didn't think a big investment in AI would increase sales revenue, I'm pretty sure I wouldn't make the investment in AI. :)

One thing I think Kevin forgot to mention was CPU time. Usually a lot of things need to happen at once in order to make a great game look and feel even greater, and some ML stuff can eat precious cycles that may be better used elsewhere. Nobody would want a super-complicated polynomial regression running behind Call of Duty slowing the game down, even if it did make the dogs way more difficult to kill... (assuming an online ML model).

DISCLAIMER: The remainder of this post may not be in agreement with Kevin's conclusions from the article. It is meant to reflect a difference of opinion only. If you are easily offended by differences of opinion, please discontinue reading this post.

While I was reading the article, I kept thinking about an ML approach requiring a monstrous set of inputs, a multitude of different outputs, and many thousands of data points for analysis. Framed that way, I don't think anyone could argue with any of his conclusions.

That said, ANNs and other complex learning algorithms do not make up the majority of machine learning approaches.

Simple effective ML
Let's take a simple example from the kids' game Battleship: Bobby vs. Computer. The computer is able to employ one of several different strategies. To get progressively harder (without cheating and frustrating Bobby), which strategy is best? We can test each strategy against Bobby's old games and pick the best performer. That is a machine learning algorithm. It goes by different names, with different variations and different levels of complexity. Google 'forecast skill'. Google 'regret minimization'. These are ML approaches: sometimes simple, always documented, well studied, and growing.
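A toy sketch of that selection loop, assuming each strategy is a function that replays one of Bobby's recorded games and returns a score (both the strategies and the game format are stand-ins):

# Score each candidate strategy against the player's recorded games
# and keep the best performer. Everything here is a stand-in interface.
def pick_strategy(strategies, recorded_games):
    def average_score(play):
        return sum(play(game) for game in recorded_games) / len(recorded_games)
    return max(strategies, key=lambda name: average_score(strategies[name]))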

Lack of flexibility, creative control, and the monolith
This is more likely due to poor design than to a failure of a particular ML approach. Using the driving example: why use one ML monolith to solve everything at once, as the article suggests? It would be more effective (and more realistic) to build one model for gas, one for brake, one for steering, and one to alter the inputs used by the gas/brake and steering models. I'm generalizing, but this is often how these sorts of things are applied in non-game environments. I saw a helicopter controller built this way (an RC helicopter, not a video game helicopter). A biologist trying to determine how cell growth responds to a stimulus will try everything he or she can to reduce the experiment down to its smallest number of factors. For those of us with the luxury of dealing with the virtual, this is often much easier than it is for the biologist or physicist dealing with the messy real world.
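A sketch of that decomposition; the model interface and feature names are assumptions for illustration, not any particular library's API:

# Three small, separately trained controllers instead of one monolith.
class DrivingAI:
    def __init__(self, throttle_model, brake_model, steering_model):
        self.throttle_model = throttle_model
        self.brake_model = brake_model
        self.steering_model = steering_model

    def act(self, sensors):
        # Each sub-model sees only the inputs relevant to its sub-problem.
        return {
            "throttle": self.throttle_model.predict(sensors["speed_features"]),
            "brake": self.brake_model.predict(sensors["obstacle_features"]),
            "steering": self.steering_model.predict(sensors["track_features"]),
        }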

Volume of data, and define fun
I agree that data volume is a real issue IF you need lots and lots of samples. This has been a problem facing every discipline. Thanks to the growth of the Internet, many problems are just now getting the attention they deserve. Fifteen years ago it would have been very hard (labour intensive) to collect a good database of car images. These days there is an abundance of data (or at least the capacity to collect it), and I think this is one area where the game AI of the future might start to sow its seeds.

For example, Kevin mentions defining 'fun'. I agree that it's a pretty tough thing to define. But what if we wanted to determine a player's frustration level to automatically adjust game difficulty? (I may not know what fun is, but I do know what it isn't.) Maybe EA could start analyzing (anonymously, of course) play from online matches, annotating those matches, and then building a model of the 'angry' player. I don't really know, because I've never given it much thought, but I do know that similar problems outside of games have been solved (to varying degrees of success).
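A hypothetical sketch of what that could look like with scikit-learn; the telemetry features, labels, and difficulty rule are all invented for illustration:

from sklearn.linear_model import LogisticRegression

# Each row: [deaths_per_minute, quits_mid_match, avg_session_minutes, retries]
X = [[0.2, 0, 45, 1], [2.5, 1, 8, 9], [0.5, 0, 30, 2], [3.1, 2, 5, 12]]
y = [0, 1, 0, 1]  # 0 = content, 1 = frustrated (human-annotated matches)

model = LogisticRegression().fit(X, y)

def adjust_difficulty(session_features, current_difficulty):
    # Ease off one notch when the model believes the player is frustrated.
    if model.predict([session_features])[0] == 1:
        return max(1, current_difficulty - 1)
    return current_difficulty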

Too hard?
As far as not being any easier: well, if your point is to save effort and cut costs, then I recommend not starting anything at all. (I'm being sarcastic.) If your goal is to make more interesting games, then applying ML to new problems might be worth investigating. Going one step further, we should all agree that building 3D games is a lot more labour intensive than 2D tile-based side scrollers... The production value of games these days is simply amazing! The QA alone is staggering compared to what happened in the '80s. :) Definitely not any easier, though. So I can't say I agree with Kevin's conclusion.

Never done before
Lots of not-done-befores have happened in front of our very eyes. For me, that's not a very convincing argument against something. :) I like to be optimistic, though.


Quote: Original post by AngleWyrm
Determining the most significant subset that contributes to victory is IMO a feature of AI that needs some love.


You're right about that. You might want to check out adaptive boosting (AdaBoost), though. It is one method of dealing with that exact problem and has worked very well in machine vision. The hard part is determining what you're going to measure (i.e., the feature whose response you'll measure).
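A sketch of that with scikit-learn, using your battle scenario as the feature set; load_recorded_battles() and the feature names are hypothetical:

from sklearn.ensemble import AdaBoostClassifier

FEATURES = ["direction", "minutes_in", "unit_count", "heavy_ratio", "spacing"]
X, y = load_recorded_battles()  # hypothetical loader: rows of FEATURES -> won/lost

model = AdaBoostClassifier(n_estimators=100).fit(X, y)

# The ensemble's feature importances hint at which dimensions matter most.
for name, importance in zip(FEATURES, model.feature_importances_):
    print(f"{name}: {importance:.3f}")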

[Edited by - willh on November 7, 2010 11:27:01 PM]
Quote: Original post by willh
Lots of not-done-befores have happened in front of our very eyes. For me, that's not a very convincing argument against something. :) I like to be optimistic, though.


I started a thread a while back on a method that generates artificial morality at a very atomic level, even modelling what morality is in a mechanical sense. People were up in arms about it, which I hadn't expected from a programming community. Surprising, eh? Maybe not.
--"I'm not at home right now, but" = lights on, but no ones home

This topic is closed to new replies.
