Neural Networks and Reinforcement Learning

Started by Cesar · 6 comments, last by Cesar 20 years ago
Hi. I'm new to gamedev, so please forgive me if I'm writing in the wrong forum or anything. That said, on to the question: although I have a considerable background in neural networks, I have never built one with online learning and I have never used an ANN in a game, so I was wondering if you could give me some pointers on how to accomplish this.

I'm working on a 2D strategy game where you don't tell the units how to do stuff, just what you want from them, and I wanted to make the units of one player learn how to fight the units of the other player, maybe even have them evolve so that the most aggressive player gets a bonus from that. Is using ANNs a good idea at all? What kind of ANN? Should I consider GAs to train the net?

Thanks, Cesar
Why not take a look at AI Junkie's site?

He has examples of Genetic Algorithms and Neural Nets combining to make a bot more capable of its task.
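Roughly, the trick is to treat the net's flat weight array as the genome and let the GA do the training instead of backprop. Here is a bare-bones sketch of that idea in Java (the layer sizes and names are just placeholders of mine, not anything from his site):

```java
import java.util.Random;

// Bare-bones fixed-topology feedforward net whose flat weight array doubles
// as a GA genome. Layer sizes are arbitrary placeholders.
public class GenomeNet {
    static final int INPUTS = 4, HIDDEN = 6, OUTPUTS = 2;
    static final int GENOME_LENGTH = (INPUTS + 1) * HIDDEN + (HIDDEN + 1) * OUTPUTS;

    final double[] genome; // biases and weights, laid out layer by layer

    GenomeNet(double[] genome) { this.genome = genome; }

    static double[] randomGenome(Random rng) {
        double[] g = new double[GENOME_LENGTH];
        for (int i = 0; i < g.length; i++) g[i] = rng.nextGaussian();
        return g;
    }

    // A GA "trains" the net by mutating/recombining genomes and keeping the
    // ones whose bots earn the best fitness.
    static double[] mutate(double[] parent, Random rng, double rate) {
        double[] child = parent.clone();
        for (int i = 0; i < child.length; i++) {
            if (rng.nextDouble() < rate) child[i] += 0.5 * rng.nextGaussian();
        }
        return child;
    }

    // Straightforward forward pass: one tanh hidden layer, tanh outputs.
    double[] activate(double[] in) {
        double[] hidden = new double[HIDDEN];
        int w = 0;
        for (int h = 0; h < HIDDEN; h++) {
            double sum = genome[w++];                       // bias
            for (int i = 0; i < INPUTS; i++) sum += genome[w++] * in[i];
            hidden[h] = Math.tanh(sum);
        }
        double[] out = new double[OUTPUTS];
        for (int o = 0; o < OUTPUTS; o++) {
            double sum = genome[w++];                       // bias
            for (int h = 0; h < HIDDEN; h++) sum += genome[w++] * hidden[h];
            out[o] = Math.tanh(sum);
        }
        return out;
    }
}
```

Fitness comes from how well the bot performs its task; selection, crossover and mutation then work directly on those weight arrays.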

As with all algorithms, you do have to define the task. Your description is rather too vague to allow detailed advice.

Stevie

Don't follow me, I'm lost.
Ken Stanley, the inventor of NEAT, is currently working on such a game. I'm very interested to see where it's going:

http://www.cs.utexas.edu/users/kstanley/nero.html


My Website: ai-junkie.com | My Book: AI Techniques for Game Programming
Yey. I ordered your book!

Anyway, the first thing I must say is that this game is not only a game. It's also an M.S. course project and an opportunity to use some techniques I never had a chance to implement.

It can't be meaningless, but if there's even a minor improvement, I will choose the more complicated method.

That said, about the game: like I said, it's a 2D strategy game in which all you can do is tell your agents what you want. I hope it can be fun. There will be several different agents, but the ones I'm more concerned about are the ones that can actually fight (they will represent an army, not just one person). These guys will have to make several decisions based on what they can see and on their "personality". So an agent may choose to go straight to its destination or to fight the enemy nearby; the player won't know until it happens, because all he can do is assign objectives.

First I thought about making a combat system and developing a good algorithm to represent the agent movement and all. But then I thought it would be really nice if the combat was complex enough to leave room for a learning agent that could evolve (not necessarily using a GA) to fight better from fight to fight.

Right now I'm thinking a general (that is, the AI behind the army) could split its forces, send archers first, flank the opponent, things like that. Of course all the options will have a different result and will affect combat in a different way, but I don't have an exact model yet.

NEAT sounds really interesting, and I would really love to implement it in Java (I don't want to use the code from the page because I think it's a lot of fun, and a lot of learning, to make it work myself). Would I be capable of making it evolve during play but start with a not-so-dumb AI? Time is not really an issue, because the game is turn based and, as it's supposed to be played over the internet (like Archmage and other games like that, only way more complex), a turn takes half an hour. Or could a fixed-topology NN maybe do the same job (like I said, if there's an improvement, I will choose the most interesting)?
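Just to make the evolution side concrete, this is roughly the between-turns loop I have in mind (everything here is a rough sketch of my own: the names, population size and mutation numbers are made up, and a real NEAT genome would carry topology as well as weights):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Rough sketch of "evolving during play": fitness is collected from the real
// fights, and the population is only stepped forward once per game turn.
public class BetweenTurnsEvolution {
    static class Candidate {
        double[] genome;   // weights (and, for NEAT, structure) of one general
        double fitness;    // e.g. damage dealt minus damage taken last turn
        Candidate(double[] genome) { this.genome = genome; }
    }

    private final List<Candidate> population = new ArrayList<>();
    private final Random rng = new Random();

    // Seed with copies of a hand-tuned "not-so-dumb" genome plus noise,
    // so the very first turns are already playable.
    public BetweenTurnsEvolution(double[] handTunedGenome, int populationSize) {
        for (int i = 0; i < populationSize; i++) {
            double[] g = handTunedGenome.clone();
            for (int j = 0; j < g.length; j++) g[j] += 0.1 * rng.nextGaussian();
            population.add(new Candidate(g));
        }
    }

    public List<Candidate> candidates() { return population; }

    // Called once per game turn, after fitness has been filled in from the
    // fights that actually happened.
    public void stepGeneration() {
        population.sort(Comparator.comparingDouble((Candidate c) -> c.fitness).reversed());
        int survivors = population.size() / 2;
        for (int i = survivors; i < population.size(); i++) {
            // Replace the losing half with mutated copies of the winners.
            double[] child = population.get(i - survivors).genome.clone();
            for (int j = 0; j < child.length; j++) child[j] += 0.05 * rng.nextGaussian();
            population.set(i, new Candidate(child));
        }
    }
}
```

Since a turn takes half an hour, there is plenty of time to step the population between turns.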

Looking around in the forums, I found some stuff about the NeuralBot that could work as a base model. It looks interesting too, but as far as I can tell, its topology does not change...
Yey. I ordered your book!

Thanks, I hope you enjoy it.

NEAT sounds really interesting, and I would really love to implement it in Java (I don't want to use the code from the page because I think it's a lot of fun, and a lot of learning, to make it work myself).

You will learn a lot by implementing it yourself. There is a large chapter in my book devoted to NEAT so you shouldn't have any problems. You might also like to join the NEAT discussion group at:

http://groups.yahoo.com/group/neat/


Would I be capable of making it evolve during play but start with a not-so-dumb AI?

If the network is used for some type of action selection then that would be possible. What you definitely do not want to do is attempt to make one huge monolithic ANN like NeuralBot. That will almost certainly be doomed to failure.
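To be concrete about what I mean by action selection (the order names and inputs below are placeholders I just made up, not something from the book or from NERO): the net only ranks a handful of high-level orders, and ordinary game code carries the chosen order out.

```java
import java.util.function.Function;

// A minimal sketch of "the network is used for action selection": whatever
// net you evolve (NEAT or fixed topology) just maps an observation vector
// to one score per high-level order, and the general takes the best one.
public class OrderSelector {
    enum Order { ADVANCE, FLANK, HOLD, RETREAT }

    private final Function<double[], double[]> net; // observations -> one score per Order

    OrderSelector(Function<double[], double[]> net) { this.net = net; }

    Order choose(double[] observations) {
        double[] scores = net.apply(observations);
        int best = 0;
        for (int i = 1; i < Order.values().length; i++) {
            if (scores[i] > scores[best]) best = i;
        }
        return Order.values()[best];
    }
}
```

The net only has to get the ranking right; all the low-level behaviour stays in normal game code.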



My Website: ai-junkie.com | My Book: AI Techniques for Game Programming
If the network is used for some type of action selection then that would be possible. What you definitely do not want to do is attempt to make one huge monolithic ANN like NeuralBot. That will almost certainly be doomed to failure.


That's the kind of information I'm looking for. But would it be OK to use NEAT with several inputs and outputs? Not even close to the numbers in the NeuralBot, but something like 20 inputs and outputs? Or could I make it simpler with good design?
The more inputs and outputs your network has, the harder it will be to train. It's better to try to decompose the problem into several parts (preferably orthogonally, so that no part relies on the state of another) and train a network for each sub-task.
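As a rough illustration of that decomposition (the sub-tasks and their inputs here are invented for the example, not a recipe): each decision gets its own small net that only sees the few inputs relevant to it, so each one can be trained or evolved on its own.

```java
import java.util.function.Function;

// Sketch of the decomposition idea only. Instead of one net with ~20 inputs
// and outputs, each small net sees just the handful of inputs relevant to
// its own decision.
public class GeneralBrain {
    private final Function<double[], double[]> engagementNet; // e.g. [enemyStrength, ownStrength, distance] -> [fight, avoid]
    private final Function<double[], double[]> formationNet;  // e.g. [terrainOpenness, archerRatio] -> [line, flank, split]
    private final Function<double[], double[]> targetingNet;  // e.g. per-enemy features -> priority score

    public GeneralBrain(Function<double[], double[]> engagementNet,
                        Function<double[], double[]> formationNet,
                        Function<double[], double[]> targetingNet) {
        this.engagementNet = engagementNet;
        this.formationNet = formationNet;
        this.targetingNet = targetingNet;
    }

    // Each decision only consults its own small net.
    public boolean shouldEngage(double[] engagementInputs) {
        double[] out = engagementNet.apply(engagementInputs);
        return out[0] > out[1]; // fight vs. avoid
    }

    public int chooseFormation(double[] formationInputs) {
        double[] out = formationNet.apply(formationInputs);
        int best = 0;
        for (int i = 1; i < out.length; i++) if (out[i] > out[best]) best = i;
        return best; // index into a fixed list of formations
    }

    public double targetPriority(double[] enemyFeatures) {
        return targetingNet.apply(enemyFeatures)[0];
    }
}
```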

I would strongly recommend you try your hand at some of the exercises in the book before you attempt this problem. You will learn a lot in the process.

I'd also recommend you check out the bot tutorial at the ai-depot.

Have fun!


My Website: ai-junkie.com | My Book: AI Techniques for Game Programming
Your book has arrived. I already took a look at the NEAT chapter; it's really easy to understand.

I will try the NEAT approach, but first, as you suggested, I will practice a little, so it may take a while until I post some results. More likely I will post a cry for help before I finish anything. :D
Thanks for the help!
