Advanced AI in Games?

6 comments, last by frob 5 years, 11 months ago

At the company I currently work for, we have been working on a variety of AI projects related to big data, natural speech, and autonomous driving. While these are interesting uses of AI, I wonder about their application in real-time systems like games. Games can't tolerate large delays while sending data to the cloud or running complex calculations, and they are also limited in the storage space that can be allocated to data. I am curious about the community's view: where could complex AI fit in gaming?


Besides the problems you mention, the usual criticism of complex AI in this forum includes these other points:

  • It is hard to test. You don't really understand what complex AI is doing and you can't make strong guarantees that it will not misbehave under any circumstances. Some simpler solutions (my favorite being maximization of a hand-coded utility function) don't have this problem.
  • Complex AI usually has an objective function to optimize. In most games you are trying to create a fun experience for the player, which is a very hard thing to codify as an objective function.
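The first point above, maximizing a hand-coded utility function, can be sketched in a few lines. Everything here (the action names, the utility formulas, the state fields) is hypothetical illustration, not from any shipped game; the point is that every score is transparent and tunable.

```python
# Hand-coded utility maximization: score every candidate action with a
# transparent function, then pick the best-scoring one.

def attack_utility(state):
    # More attractive when the enemy is weak and we are healthy.
    return (1.0 - state["enemy_health"]) * state["my_health"]

def flee_utility(state):
    # More attractive the closer we are to death.
    return 1.0 - state["my_health"]

def heal_utility(state):
    # Only useful if we are hurt and actually have a potion.
    return (1.0 - state["my_health"]) * 0.8 if state["has_potion"] else 0.0

ACTIONS = {"attack": attack_utility, "flee": flee_utility, "heal": heal_utility}

def choose_action(state):
    # Deterministic and fully inspectable: every score can be logged,
    # graphed, and tuned by a designer, unlike a trained network's weights.
    return max(ACTIONS, key=lambda name: ACTIONS[name](state))
```

Because each utility is an ordinary function, you can make the strong behavioral guarantees the poster mentions: it is easy to prove, say, that a character with a potion at low health will never choose to attack.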

 

As mentioned, games focus on fun and entertainment rather than learning and memorizing decision surfaces.

Computer-controlled elements need to be tuned and adjusted in ways that are difficult with more complex AI systems. A backprop network is great at memorizing complex multi-dimensional feature sets, and machine learning or genetic algorithms with a few thousand iterations can learn how to solve problems in a game, but neither is really fun.  Fun needs to start out easy, then grow to present larger challenges over time that are always winnable.

A fun game may have a challenge that allows hundreds of replays. A more advanced machine learning game may take too long to ramp up, or, if it trains quickly, it can overwhelm players and become unwinnable.

 

AI systems in games tend to be state machines and collections of utility functions. 

Games also tend to use a small set of probability functions, like Gaussian and Poisson distributions, sigmoids, linear and step functions, and weighted value arrays. Those values can be carefully tuned and adjusted until they are fun. They don't use advanced topics like relaxation networks, RBF networks, backprop training, or other supervised and unsupervised learning because they can't be tuned and generally aren't fun.
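A few of the response curves mentioned above can be written as tiny building blocks. The parameter defaults below are hypothetical; in practice designers tweak them until the resulting behavior feels fun.

```python
import math

# Tunable response curves of the kind games actually ship with.

def gaussian(x, mean=0.5, sigma=0.15):
    # Bell curve peaking at `mean`: good for "sweet spot" behaviors.
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def sigmoid(x, midpoint=0.5, steepness=10.0):
    # Smooth on/off transition around `midpoint`.
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

def linear(x, slope=1.0, intercept=0.0):
    # Straight-line response, clamped to [0, 1].
    return max(0.0, min(1.0, slope * x + intercept))

def step(x, threshold=0.5):
    # Hard trigger once an input crosses a threshold.
    return 1.0 if x >= threshold else 0.0
```

Each curve has one or two knobs with an obvious meaning, which is exactly why they can be tuned for fun where a trained network's weights cannot.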

 

One clearly visible example is The Sims series. Computer-controlled characters have a collection of variables indicating their current state. Nearby items are queried against a large pool of data about how they improve that state, and a utility function provides a weight. Watching TV may have a total weight of +47, eating food a weight of +53, and going to bed a total weight of +73, so the character chooses the highest-ranking option. The weights may be adjusted based on probability curves: the hunger utility function may grow exponentially, while an action like working out may follow a short bell curve that wins only when nothing else looks interesting, yet causes big changes to other motives. Designers spend countless hours adjusting and fine-tuning the weights and curves so the characters appear to lead full and interesting lives while doing all the tasks needed to stay alive, maintain careers, and maintain relationships with other characters.
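The advertise-and-score loop described above can be sketched briefly. The object names, gain values, and curve shapes are illustrative guesses, not actual tuning data from The Sims.

```python
# Sims-style object selection: each nearby object advertises how much it
# improves each need, and an urgency curve weights that by how badly the
# need wants satisfying right now.

def hunger_urgency(hunger):
    # Exponential-style growth: a near-starving Sim cares about little else.
    return hunger ** 3

def fun_urgency(fun_deficit):
    # Simple linear response for boredom.
    return fun_deficit

def score(obj, needs):
    return (obj["hunger_gain"] * hunger_urgency(needs["hunger"])
            + obj["fun_gain"] * fun_urgency(needs["fun_deficit"]))

objects = [
    {"name": "tv",     "hunger_gain": 0.0,  "fun_gain": 50.0},
    {"name": "fridge", "hunger_gain": 60.0, "fun_gain": 0.0},
]

def pick(needs):
    # The character simply takes the highest-scoring advertisement.
    return max(objects, key=lambda o: score(o, needs))["name"]
```

Every number in this table is something a designer can see and nudge, which is what makes the countless hours of tuning possible at all.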

Other clearly visible systems are games where you can set actions for AI-driven party members. There are options about what to query {myself, specific person, nearest party member, any party member, nearest enemy, strongest enemy, any nearby enemy, ...}, options about the triggering factor {health < X, is a boss, does not currently have a buff enabled, flies, is immune to something, ...}, and an action to take {heal, melee attack, cast regen, use a spell, use a potion, ...}. With a relatively small number of tuple parameters any character can have a short list of 5-10 items that are easy to implement and easy to adjust yet provide a wide range of emergent behaviors.  The system can be applied not just to player-exposed party members but to every character in the game. It can create lone monsters, it can create tutorial-friendly weaklings, it can create mobs that cluster and work together in a steady stream, it can create challenging groups that coordinate with melee and range and magic and defensive actions, it can create epic bosses.

 

I suppose one area where modern "deep learning" could be applied in game development is adaptive difficulty. The trick lies in hitting the Goldilocks zone, where the player is challenged but not overwhelmed, and not given so much leeway as to lose interest.

Arguably though, one would not even need to leverage an ANN for this. A whole other slew of (examinable) non-black box Machine Learning algorithms could fit this bill.

There are not many places where it can fit.
Training the AI is an offline, non-shipping process, so immediately we can toss aside any ideas based on letting the AI grow as part of the game.

Now that we are talking only about games that ship with a developed, ready-to-go AI, it may at first seem logical to assume that a neural network can handle most AI tasks better than anything you could write by hand. That is true in principle (for a small set of tasks a direct hard-coded solution is best, but for every other task a neural-network solution can be imagined that handles it at least as well as manual code), but it ignores the important step of actually arriving at said perfect solution.

The problem is that just because a perfect neural network can almost always be imagined to handle a task better than manual code (and this is what tempts people to keep thinking about how to apply them to games), that doesn't mean you can actually create said perfect network and its trained weights.

Simply making a large working neural network is a chore in itself. Once you have invested that time, you have to train it to the perfect end result, and there is no guarantee that will happen. How and what a neural network learns at each iteration is unknown to us; we can only make guesses at how to steer it toward our desired behaviors. By the time we discover it is learning the wrong behavior, it is too late.

If you try to guide it to the desired behavior from there, there is no guarantee how much success you will have.  You can start the learning process over, but you still have no guarantees, and you will have lost too much time.

 

As you can see, the main issue is the lack of control. The purpose of machine learning is for computers to teach themselves things that would be much too complex for us to teach them manually (for example, how to identify images of objects). We can't just look at their tables of weights and make adjustments, track progress, or judge correctness, and machine learning won't become part of games (or mainstream development) until we can. So if you want to pioneer anything, start working on tools that help develop neural networks or otherwise facilitate machine learning.

 

I personally see machine learning, AI, and neural networks fitting into our future pipelines the same way languages do now. We became more productive with certain languages, so we created parser generators to build parsers for our languages, which let us make better languages, and so on. Machine learning is blooming, and it will soon be a large part of developing any game. We should be settling on standards early and making tools and libraries to generate, train, and introspect large networks and deep learning. Starting a new neural-network project should be at least as easy as creating a new language: just describe the details to a "neural network generator", as we do with parser generators, and the exported code would sit on top of a foundation that makes it easier to build introspection routines and perhaps creative ways to guide learning toward more consistently controllable results.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

This is all conjecture, since we probably won't find out until someone tries and fails or succeeds, but I'm not quite as pessimistic about machine learning being used in game AI, even without better verification and customization tools. I could see a small subset of problems where something like neural networks could be applied fairly well in combination with traditional hand-designed AI techniques.

In particular, I'm thinking about problems where...

  • ... currently there don't exist any really solid/robust solutions with traditional methods, so there's no clear preference for 'let's just do it the way we know it works'. 
  • ... AI can be trained without the need of human input (so the target function can be optimized without a player having to play, which costs a lot of time/money), or where human-sourced training data is already readily available.
  • ... error cases are either not a problem (maybe even a feature) or can be identified easily so you can fall back on some kind of default behavior that is known to work.
  • ... Player perception

As an example that I may investigate more closely in the future: AI tactics in RTS games (real-time maneuvering and usage of troops, etc.) is an area where even the flagships of the industry, like Total War and Starcraft 2, still suffer from major problems where the AI simply malfunctions often, or where it is incredibly simplistic in its approach to battle and easily abused, just because of the complexity of the situations an AI can find itself in. Player discussions about AI in these games are often rife with complaints for good reason, and player consensus is that playing against other players is the only way to get actually interesting free-form battles.

Where to position troops and which actions to execute with them in order to achieve a desired win-rate, I imagine, would be something that can be optimized well with neural networks, with different networks trained according to different designer-picked restrictions in order to get different "AI-personalities", such as maximum actions per minute for the AI, or maximizing usage of certain unit types or action types.

The networks could be trained against pre-existing AI without human input, or successors to existing franchises could make use of player replay data. Instead of maximizing win rate, networks could be optimized toward a target win-rate distribution with respect to the training data in order to arrive at different difficulty levels. An AI whose output is optimized to roughly follow the win-rate distribution that a real player of skill level [x] would have against opponents of various skill levels could possibly also be made to perform roughly like a player of skill level [x].
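The objective hinted at above can be sketched as a loss that penalizes deviation from a target win-rate profile, rather than rewarding raw wins. The skill brackets and target percentages below are made-up illustration of the idea, not values from any real training setup.

```python
# Target profile for a hypothetical "mid-skill" AI personality: beat
# novices often, lose to experts. A trainer would minimize this loss
# instead of maximizing total win rate.

target_winrate = {"novice": 0.85, "intermediate": 0.5, "expert": 0.2}

def difficulty_loss(observed_winrate):
    # Squared error between the AI's observed win rate per skill bracket
    # and the designer-chosen targets.
    return sum((observed_winrate[k] - target_winrate[k]) ** 2
               for k in target_winrate)
```

Notice that an AI that simply crushes everyone scores badly under this loss, which is the whole point: the designer, not the optimizer, decides what "appropriately difficult" means.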

And neatly, this would be a subset problem for those games in general, and other problems (such as overall game strategy, what units to produce, what buildings to build, etc.) could still be solved with classical methods.

The largest concern I would have for something like this are training times, since simulating full battles is bound to be a slow process.

There are places where they are used, and have been used for years. Systems involving feature recognition, such as handwriting recognition, speech or command recognition, and gesture recognition, are implemented with machine learning in many games.

 

They are also being used in games where people DON'T want to see them. Games like Slither.io or Generals have much simpler decision surfaces and have had some amazing machine learning done against them, in part because it is so easy to train them up.  On the flip side, nobody likes playing against bots in those games.

 

Really the key is their strengths. Machine learning systems like ANNs and genetic algorithms are amazing at recognizing complex decision surfaces. If you've got a bunch of training data and you've got time to train it, it can be amazing.

As far as games being used to discover those decision surfaces, they can be a great fit. But few games are doing that kind of work, and the games that are complex enough that players want smart AIs are also the games where players complain about cheaters using exactly the techniques this discussion claims people want. The curse of dimensionality kicks in for larger games as well. Games like Slither and Generals have relatively few options at any step and relatively few inputs available. Games like Fortnite, with a large playing area, a wide variety of offensive, defensive, resource-collection, and exploration options, and an enormous number of variables that change through the course of the game... well, that decision surface is far more complex and difficult to learn.

On top of that, balancing that "fun factor" is incredibly difficult. Even an AI that wins a certain percentage of the time won't work, because people want to see behavior that mirrors what they themselves would do. Bots on Slither.io that still lose after a statistically appropriate time are generally easy to detect as bots. Making them appear to behave consistently with humans is probably too much to ask of any unsupervised learning system.


 

