How can neural networks be used in video games?

21 comments, last by Minsc&Boo 7 years, 10 months ago

Hi.

I have read a lot of papers about playing games or controlling characters with neural networks. As you know, a neural network is just a model, a hypothesis; most of the work is figuring out how to train it.

First of all, I don't think the basic neural network training methods work very well for games. Do you think neural networks can genuinely be useful in games, or are they just a research topic?

The next question is: how can I come up with a good design for a neural network before worrying about the error and training the weights?

Thank you for helping.

We need to narrow down the type of game here.

For board games, neural networks are useful for producing a probability distribution over the available moves, and for estimating a score that indicates something like the expected reward at the end of the game. These can be used as ingredients in either alpha-beta search or in MCTS.
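
To make the board-game case concrete, here is a minimal sketch (my own, not from this thread) of a value network used as the leaf evaluator in alpha-beta search; value_net and the Board interface are hypothetical names.

def alphabeta(board, depth, alpha, beta, value_net):
    # Sketch only: value_net(board) is a hypothetical trained network returning
    # a score from the point of view of the side to move.
    if depth == 0 or board.is_over():
        return value_net(board)            # the NN replaces a hand-written evaluation
    for move in board.legal_moves():       # a policy net could be used to order these moves
        child = board.play(move)
        score = -alphabeta(child, depth - 1, -beta, -alpha, value_net)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                          # beta cutoff
    return alpha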

For other games, you can still come up with something like an estimate of the sum of future rewards (usually with some exponential discount for rewards further away in the future) for each possible action taken. This can be trained using reinforcement learning, like DeepMind did for Atari 2600 video games.
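
As a rough illustration of that "sum of future rewards with exponential discount" idea (a sketch under assumed names, not code from the DeepMind paper), the quantity being estimated and a DQN-style one-step training target look like this:

GAMMA = 0.99  # discount factor: rewards further in the future count for less

def discounted_return(rewards, gamma=GAMMA):
    # Sum of future rewards, each discounted by how far away it is.
    total = 0.0
    for t, r in enumerate(rewards):
        total += (gamma ** t) * r
    return total

def td_target(reward, next_state, done, q_net, gamma=GAMMA):
    # One-step target the network is trained toward (Q-learning style).
    # q_net(next_state) is assumed to return one value estimate per action.
    if done:
        return reward
    return reward + gamma * max(q_net(next_state))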


The next question is: how can I come up with a good design for a neural network before worrying about the error and training the weights?


I have no idea how to think about a NN design without knowing what the game is or what role the NN plays in it. Give us some context here, and perhaps we can come up with interesting ideas.


Thanks for your answer, Alvaro. I need a general answer, and I need information about both discrete games like board games and games like shooters. In a world that is not discrete, where the next step is not exact, I don't see how reinforcement learning can be used.

I would start by reading the DeepMind paper on applying DQN to Atari 2600 games.

You can also check out some of the recent papers on using CNNs for computer go:
* http://arxiv.org/abs/1412.3409
* http://arxiv.org/abs/1412.6564
* http://arxiv.org/abs/1511.06410
* https://vk.com/doc-44016343_437229031?dl=56ce06e325d42fbc72

Thanks a lot, Alvaro. I'm going to start reading them. Please tell me about anything else useful regarding ANNs in games.

Here is an article about how Codemasters used a neural net for Colin McRae Rally 2:

http://www.ai-junkie.com/misc/hannan/hannan.html

This sounds a lot like a homework question -- which we don't do here. Just sayin'.

To provide a general answer, however...

#facepalm

Yes... that is my answer.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

The problem with neural nets is that the inputs (the game situation) have to be fed to the network as a bunch of numbers.

That means there is usually a great deal of interpretive pre-processing required to generate this data first.

Another problem is that the process of 'training' neural nets is usually only understood from the outside - the logic is NOT directly accessible to the programmer. A lot of 'test' game situation data needs to be built up and maintained, with each example paired with a CORRECT action (probably chosen by a human) to force the neural net into producing the desired output. Again, a lot of indirect work.

Neural nets also generally don't handle complex situations very well; too many factors interfere with the internal learning patterns/processes, usually requiring multiple simpler neural nets to be built to handle different strategies/tactics/solutions.

Usually with games (and their limited AI processing budgets), once you have done the interpretive preprocessing, it just takes simple hand-written logic to use that data -- and that logic CAN be directly tweaked to get the desired results.

It may be that, in practice, neural nets are just a 'tool' the main logic can use for certain kinds of analysis (and not for many others).

Ratings are Opinion, not Fact

The problem with neural nets is that the inputs (the game situation) have to be fed to the network as a bunch of numbers.
That means there is usually a great deal of interpretive pre-processing required to generate this data first.


You can feed images to a CNN these days.
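
For illustration (this thread predates most of today's libraries), here is a minimal sketch of a CNN that consumes a raw 84x84 grayscale game frame and outputs one score per action, written in PyTorch; the layer sizes are arbitrary assumptions, not anything from the papers above.

import torch.nn as nn

class FrameNet(nn.Module):
    # Sketch only: layer sizes are arbitrary; input is (batch, 1, 84, 84).
    def __init__(self, num_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, num_actions),   # one output per possible action
        )

    def forward(self, frame):
        return self.head(self.features(frame))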


Another problem is that the process of 'training' neural nets is usually only understood from the outside - the logic is NOT directly accessible to the programmer. A lot of 'test' game situation data needs to be built up and maintained, with each example paired with a CORRECT action (probably chosen by a human) to force the neural net into producing the desired output. Again, a lot of indirect work.


You can make the network return an estimate of future rewards for each possible action: Read the DQN paper I linked to earlier. There are mechanisms to look into what the neural network is doing, although I think it's best to use NNs in situations where you don't particularly care how it's doing it.
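
As a minimal sketch of what "an estimate of future rewards for each possible action" means in play (names like q_net are assumptions, not from the thread), the agent simply picks the action with the best predicted value, with a little exploration mixed in:

import random

def choose_action(q_net, state, actions, epsilon=0.05):
    # q_net(state) is assumed to return one future-reward estimate per action.
    if random.random() < epsilon:
        return random.choice(actions)               # occasionally explore
    q_values = q_net(state)
    return max(actions, key=lambda a: q_values[a])  # otherwise exploit the best estimate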


Neural nets also generally don't handle complex situations very well; too many factors interfere with the internal learning patterns/processes, usually requiring multiple simpler neural nets to be built to handle different strategies/tactics/solutions.


That's not my experience.


Usually with games (and their limited AI processing budgets), once you have done the interpretive preprocessing, it just takes simple hand-written logic to use that data -- and that logic CAN be directly tweaked to get the desired results.


That is the traditional approach, yes: You define a bunch of "features" that capture important aspects of the situation, and then write simple logic to combine them. When you do things the NN way, you let the network learn the features and how they interact.
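
To make the contrast concrete, here is a toy sketch of that traditional style (the feature names and thresholds are made up for illustration); the NN approach would instead learn both the features and how to combine them from data:

def extract_features(situation):
    # Hand-picked features the designer believes capture the situation.
    return {
        "health_ratio":   situation.health / situation.max_health,
        "enemy_distance": situation.distance_to_nearest_enemy,
        "ammo_low":       situation.ammo < 5,
    }

def should_retreat(situation):
    f = extract_features(situation)
    # Hand-written combination rule -- every threshold can be tweaked directly.
    return f["health_ratio"] < 0.3 or (f["ammo_low"] and f["enemy_distance"] < 10.0)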


It may be that, in practice, neural nets are just a 'tool' the main logic can use for certain kinds of analysis (and not for many others).


I think you should give NNs an honest try. In the last few years there has been a lot of progress and most of your objections don't apply.


If you can define a reward scheme by which the quality of an agent's behavior is evaluated, you can probably use reinforcement learning (with no hand-labeled examples) to train a NN to do the job. I don't know if this is practical yet, but with the right tools, this could be a very neat way of writing game AI.
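
As a concrete (entirely hypothetical) example of such a reward scheme, say for a shooter bot -- the numbers are arbitrary, and the point is that you score the behavior rather than label the correct action:

def reward(events):
    # Hypothetical per-step reward for a shooter bot; weights are arbitrary.
    r = 0.0
    r += 1.0 * events.enemies_hit
    r -= 2.0 * events.damage_taken_ratio      # fraction of health lost this step
    r += 5.0 if events.objective_captured else 0.0
    r -= 0.01                                 # small time penalty to discourage idling
    return r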

By "complex" I am talking NOT about some NPC-bot navigating around a static map, but one that has to react to friendly/enemy/neutral dynamic objects (possibly including one or more players) -- spatial relations between objects of different classifications. Now have the typical number of action options and whatever metabolic goals the NN is supposed to 'think. Suddenly a plethora of contradicting and irregular situation factors to be 'comprehended' (again, interpreting a situation which isn't just some terrain grid) need to be process to generate a 'good enough' current solution for what that 'smart' object is going to try to do. The training set expands exponentially with complexity, and a divide and conquer method cant work -- except as tool analysis interpretation which STILL has to be integrated in a complex fashion. Multiple metrics of 'good/bad' and situational adjustments for priorities (fun - modal factors to add in - big->huge NN, or breaking up into specialized NNs (which now STILL have to be intergated to decide which applies/overrides) etc....

Again 'tool' because any analysis leading to Temporally effective actions takes programming methods like finite state machines to carry out sequential solutions once some decision is made (and then possibly reevaluated and redirected - even WHEN to reevaluate and cancel current activity is a complex logic problem). Not just do action X or Y or Z and rinse, it is start strategy/tactic A or B or C and carry through/adjust...

We already have plenty of relatively mindless 'ant' objects done in games without needing NN, moving the AI up a few notches and suddenly the problem space expands hugely and the (richer) situational complexity likewise (training set hell). Thats the environment where NN fall down REAL fast -- very difficult especially any self-learning mechanism, and an assisted learn NN (being told whats good and bad in many very specific endcases) suddenly its the human limitation to get through the bulk of the work required.

Carefully targetted analysis is where I might consider using NN - limited domain and many small ones if that many different analysis are required. The primary logic for anything tactically game complex is still most efficiently created being hand crafted, where you wind up doing most of the work either way and trying to force a NN to do what you already have worked out the discrete logic for is pointless.

EDIT - a simple thing to contemplate what Im talking about is --- try to program Chess via a NN based solution.

Ratings are Opinion, not Fact

This topic is closed to new replies.
