Schwartz86

Machine Learning with Multiplayer games


Hello GDNet Community,

I am a software engineer and I have become quite interested in learning more about artificial intelligence and machine learning. My interest was initially in the domain of robotics; however, I am currently interested in how machine learning might be applied to multiplayer/online games. Is anyone aware of any research that uses machine learning techniques to create 'smarter' enemies? For example, take a 3D FPS like Halo. It seems that games could benefit by 'watching' players when they compete online. If it is possible to identify patterns over several iterations that lead to a winning strategy (e.g. 60% of the time the winner of the match is invisible and has a rocket launcher for 70% of the match duration), perhaps the 'game' could take note of this and attempt to derive new strategies when a player competes against the computer rather than a human.
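To make the idea concrete, here is a rough sketch of the kind of pattern-tallying I have in mind; the match-log fields and thresholds are made up for illustration, not taken from any real game:

```python
# Hypothetical match-log mining: count how often simple features
# co-occur with winning. All field names here are invented.
from collections import Counter

def mine_winner_patterns(matches):
    """Return, for each coarse feature, the fraction of matches whose winner had it."""
    feature_counts = Counter()
    for match in matches:
        stats = match["player_stats"][match["winner"]]
        duration = match["duration"]
        if stats["invisible_time"] / duration > 0.5:
            feature_counts["mostly_invisible"] += 1
        if stats["rocket_launcher_time"] / duration > 0.7:
            feature_counts["held_rockets_most_of_match"] += 1
    return {feat: count / len(matches) for feat, count in feature_counts.items()}

# If "held_rockets_most_of_match" comes back around 0.6, the offline bot
# could weight "go secure the rocket launcher" more heavily in its planning.
```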

This could offer several advantages:
- Gameplay could change over time. The more you play, the smarter the opponent gets.
- Computer AI could begin to mimic human players, and thus even when you aren't playing 'real' opponents, the gameplay would feel the same.
- If this strategy could be successfully implemented, it could allow for a completely new gaming scenario. I am imagining players conducting 'bot' wars where each player would pit their 'trained' bot against another's. Theoretically, the player with the most experience would have the 'better trained' bot.

There would clearly be some disadvantages as well...
- Takes control away from game designers. Who knows what the AI would do!
- Players could get frustrated by a 'dumb' opponent while the game is still learning, and it might deploy ridiculous strategies based on patterns incorrectly recognized from previous gameplay

I realize this may only be a pipe dream and I know there are a large number of issues with what I proposed above (also with machine learning in general). However, I am only looking for more information about what machine learning techniques have been used in games (if any) and trying to figure out to what depth this topic has been explored.

Thanks!
Chris


Is anyone aware of any research that uses machine learning techniques to create 'smarter' enemies?

There are literally dozens of academic and independent AI researchers working on this. There are likely hundreds of papers that have been written on it over the past 10 years. So, yes. We are aware of research.

The bottom line is that it is not all it's cracked up to be. You can do far better with hand-crafted rules with tunable parameters.
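For a sense of what I mean by that, here is a minimal sketch; the rule structure, thresholds, and names are all invented for illustration:

```python
# Minimal sketch of hand-crafted rules with tunable parameters.
# Everything here (rules, thresholds, names) is invented.
from dataclasses import dataclass

@dataclass
class BotTuning:
    retreat_health: float = 0.3   # flee below 30% health
    engage_range: float = 40.0    # open fire inside 40 metres
    camp_bias: float = 0.2        # 0 = always roam, 1 = always hold position

def choose_action(health, distance_to_enemy, roll, tuning):
    """Priority-ordered rules; the designer keeps full control via BotTuning."""
    if health < tuning.retreat_health:
        return "retreat_to_cover"
    if distance_to_enemy < tuning.engage_range:
        return "attack"
    if roll < tuning.camp_bias:
        return "hold_position"
    return "seek_power_weapon"

# Tuning difficulty is just editing numbers, e.g. a cautious bot retreats sooner:
print(choose_action(0.25, 60.0, 0.5, BotTuning(retreat_health=0.5)))  # retreat_to_cover
```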



You can do far better with hand-crafted rules with tunable parameters.


Exactly... and most game designers, as you stated, want to keep control over the NPCs, so for most "professional" games it's not an option.

As for your point about "dumb opponents", keep in mind that most studios would let QA play the game for a while and ship whatever data came out of that learning phase with the game, so the bots would probably start off as quite good opponents.

It's just too risky to rely on learning for a production that costs millions...

Strategic-level AI in a game like Halo is trivial: get the biggest gun, sit in the hottest kill spots, troll the spawn points.

Every high-level human player will act like this, and every decent bot will follow roughly the same pattern. Depending on the game rules there might be some variations, but that's not the hard part at all.


Coming up with an intelligent way to play the game is pretty easy. What gets messy is when you need to actually act on the high-level strategy and come up with good behaviour for your agents. Suppose you play a 4v4 Halo game of Capture the Flag. Pick a basic strategy (a common one I see online is the 4-man rush, another is divide-and-conquer) and go for it.

Now... deal with the situation where the enemy gets the rocket launcher before you do.

Deal with the situation where one of your allies lags out and you have to pick up the slack.

Deal with the fact that some players will react realistically to cover fire (i.e. hide) and others will react in true Rambo fashion (i.e. pop out guns blazing).

Deal with the fact that most players, especially under pressure, are far from rational, and may do highly irrational things (especially risky things) to gain a win.

Deal with the emotional pressure of being close to losing and bringing the game back from the edge of defeat.

Deal with the psychology of demeaning and dominating your enemy to make your victory all the more certain.



These are all things that good human players can cope with more or less instinctively; how do you codify them for a bot? Until we have "intelligence" on that level, no bot will ever pass the Halo Turing Test.


Good bot AI is not about coming up with strategies or even tactics. Well-known battlefield tactics are killer in an FPS, as any good team player can tell you; get 4 guys who know how to roam as a pack and assault fixed positions correctly, and they'll tear up Halo all day long. Code that into a bot and you end up with a virtually unbeatable opponent.

Bots don't have to compensate for perception delays.

Bots don't have to compensate for shaky trigger fingers.

Bots don't have to deal with feeling angry or depressed when they're losing.

Bots don't have to sacrifice vital time to make crucial decisions; they can do it almost instantaneously.


A good bot can slaughter human players all day. And they aren't that hard to write - I watched a friend of mine write a virtually unkillable bot AI for one of his school projects a couple of years ago, and he's far from an AI expert. He just put in place some basic rules (aim for the head, pace shots to mitigate recoil, etc.) based on his own experience in shooters. Those damn bots could mop the floor with us time and again, even though their pathfinding was crap and they tended to bunch up a lot (rocket fodder syndrome).
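To give a rough feel for the sort of rules he used, something like the sketch below would already be dangerous; the types and numbers are mine, invented for illustration, not his actual code:

```python
# Rough, invented sketch of "aim for the head, pace shots to mitigate recoil".
from dataclasses import dataclass

HEAD_HEIGHT = 1.7      # metres above the target's origin
SHOT_INTERVAL = 0.25   # seconds between shots, so recoil has settled

@dataclass
class Target:
    x: float
    y: float
    z: float

@dataclass
class Bot:
    aim_point: tuple = (0.0, 0.0, 0.0)
    last_shot_time: float = -1.0
    shots_fired: int = 0

    def update(self, target, now, has_line_of_sight):
        # Rule 1: always aim at the head rather than centre mass.
        self.aim_point = (target.x, target.y, target.z + HEAD_HEIGHT)
        # Rule 2: only fire when the previous shot's recoil has settled.
        if has_line_of_sight and now - self.last_shot_time >= SHOT_INTERVAL:
            self.shots_fired += 1
            self.last_shot_time = now

bot = Bot()
for tick in range(10):
    bot.update(Target(5.0, 2.0, 0.0), now=tick * 0.1, has_line_of_sight=True)
print(bot.shots_fired)  # 4 paced shots over 0.9 s instead of spraying 10
```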


I'll steal a phrase from another AI colleague here (B. Schwab, respect if you're in the audience) - good AI isn't about winning. It's about losing with style.

Make a bot that can beat a human? No sweat.

Make a bot that can challenge a human, without causing excess frustration, and still provide a good, balanced experience to a very wide variety of players and skill levels?



Sweat.


Sweat very hard.


Strategic-level AI in a game like Halo is trivial: get the biggest gun, sit in the hottest kill spots, troll the spawn points.


Yeah, although I used that example in my original post, I was thinking more of strategy games, where the AI problem becomes more difficult and it is harder to create 'unique' player experiences. In most strategy games, after I figure out how to beat the automated opponent once, I can do it every time and the game quickly gets less exciting. It seems a lot of games, rather than create 'smarter' opponents as the difficulty is increased, simply allow the automated opponent to cheat or give it higher statistical advantages. For me, this is irritating, and once I realize that the game is just 'cheating' to make it more challenging, I quickly lose interest.

I realize there are a lot of problems with machine learning and that in most cases, especially in games, where the state of the world is entirely known in advance, it's easier/better to just program the agent. I was just interested in seeing if there had been any successful attempts at using this technique.

The only reason you can beat opponents on repeat playthroughs is that they let you. They specifically make stuff non-dynamic as a design decision. It is easy to make a widely varied, adaptive AI with the techniques already mentioned. In fact, your "learned AI" would actually tend to stagnate towards a vanilla solution rather than a creative one.

To elaborate on the above, machine learning systems are good at finding local optima in a solution space, but not necessarily optima that correspond to interesting solutions.

In other words, a learning system will quickly figure out how to mop the floor with you, and then just do that. Over and over. A machine will always have better reactions, better micro-management, better timing; even with the exact same resources and units in a strategy game (say, Starcraft II) the AI can always win, because it does not have human limitations and fallibilities.

So the AI will learn how to kill you, and then never do anything else. If it's designed to win - if that is the fitness criterion for how it is taught - then it will win, period. It won't play an interesting, creative game; it'll find that local optimum and just hammer on it until you give up in disgust.
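A toy illustration of that dynamic; the win-rate landscape and all numbers below are completely made up:

```python
# Toy hill climber whose only fitness signal is winning. It drifts to
# whichever strategy wins most and then sits there forever. The landscape
# below is fabricated purely to illustrate the point.
import random

def win_rate(aggression):
    # Pretend an all-in rush (aggression near 0.9) simply wins the most.
    return 1.0 - (aggression - 0.9) ** 2

def learn_strategy(steps=200, step_size=0.05):
    strategy = 0.2  # start with a cautious play style
    for _ in range(steps):
        candidate = min(1.0, max(0.0, strategy + random.uniform(-step_size, step_size)))
        if win_rate(candidate) >= win_rate(strategy):
            strategy = candidate  # keep anything that wins at least as often
    return strategy

random.seed(0)
print(round(learn_strategy(), 2))  # settles near 0.9; nothing in the fitness
                                   # ever rewards variety, surprise, or "style"
```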


To reiterate what I posted earlier, the challenge of game AI is not about winning - it's about losing in a way that feels like the AI was trying to win but was bested by the player. That's a much thornier problem overall. And since "interesting gameplay" is not quantifiable, we can't (yet) teach a machine to do it using standard machine learning algorithms, because we have no way to tell the machine how well it is doing at being interesting.

Therefore, human-designed and tuned behaviours remain the best tool we have available for that particular job.


- If this strategy could be successfully implemented, it could allow for a completely new gaming scenario. I am imagining players conducting 'bot' wars where each player would pit their 'trained' bot against another's. Theoretically, the player with the most experience would have the 'better trained' bot.


There's a game like that called NERO: http://nerogame.org/

There should be links to the papers about it, with references, on that site.


I'll steal a phrase from another AI colleague here (B. Schwab, respect if you're in the audience) - good AI isn't about winning. It's about losing with style.

Make a bot that can beat a human? No sweat.


I'd add multiple caveats to that - "depending on game genre" and "easy to make a bot that can beat humans who are unskilled to average at the game".

Look at how much it took for a bot to beat the best Chess players in the world. Are you saying that was "easy"? Now consider Starcraft 2 or any other RTS; they are much more complex than Chess.

Blizzard cannot make a non-cheating AI that can give any Diamond league+ player a challenge. The same goes for every other company that makes RTS or turn-based strategy games (GalCiv, for example). So clearly, coding a bot (that doesn't cheat) for a deep strategy game that can beat expert-level humans is far from easy.

The idea of "cheating AI" vs. "non-cheating AI" is silly. Until we are modeling computer vision to look at screens and robotic hands to move the mouse and keyboard, we are making a "cheating AI".
