
trepan

State of Risk


In building an NN for the board game Risk, how would you represent the state of the board? How would you determine what moves to make? Your comments would be appreciated. Thanks, T

you could probably represent the board as an influence map (google it). if it's an NN, just train it against other NNs under some kind of genetic algorithm environment, perhaps.

you could develop a simple heuristic, i'm sure, with the influence map: if a move increases your influence on the board, it's a better move. however, determining what move to make and using a neural network are orthogonal concepts. the point of using an NN is to have it learn what moves to make based on training, not for you to tell it how to make a move. the point of neural networks is to learn stuff in an adaptive way.
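for illustration, a rough sketch of what a one-hop influence map over a Risk-style adjacency graph might look like in Python; the adjacency dict, army counts and decay factor are all toy assumptions, not anything standard:

# toy influence-map sketch for a Risk-like board: each territory radiates
# +armies (mine) or -armies (enemy) to itself and, attenuated, to neighbours.
# `adjacency` and `armies` below are made-up examples.

def influence_map(adjacency, armies, me, decay=0.5):
    influence = {t: 0.0 for t in adjacency}
    for terr, (owner, count) in armies.items():
        base = count if owner == me else -count
        influence[terr] += base
        for neighbour in adjacency[terr]:
            influence[neighbour] += decay * base
    return influence

adjacency = {"Alaska": ["Kamchatka"],
             "Kamchatka": ["Alaska", "Siam"],
             "Siam": ["Kamchatka"]}
armies = {"Alaska": ("me", 5), "Kamchatka": ("foe", 3), "Siam": ("foe", 1)}
print(influence_map(adjacency, armies, "me"))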

-me

[edited by - Palidine on March 26, 2004 6:58:43 PM]

Thanks for the reply. I've just started reading about this stuff in the last month or so.

Could two of the NN output nodes be which territory to attack from and which territory to attack? Could it learn the map so that it wouldn't attack from Siam to Alaska, for instance?

I've also thought about multiple NNs for the different stages of a turn...placing armies, attacking, the free move, etc. But I don't know how to tie them together.

With your suggestion in mind, maybe an influence map could be fed into the NN. I've read a little about TD-Gammon, and it looks like the output of that NN is just the probability of winning from the current state. Maybe that's what needs to be done with Risk.

Thanks,
T

I don't think it's the best approach, but I'd like to learn ANNs and I like Risk. Plus, this report got me interested...

http://www.cs.cornell.edu/boom/2001sp/choi/473repo.html

I'm thinking about representing the board, which has 42 territories, with 6 inputs for each territory. Each input would be a player's number of armies on that territory.

I'd have additional inputs representing the continents and the points gained by owning them.
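A rough sketch of that encoding, just to make it concrete; the territory list, continent bonuses and board structure are placeholders rather than the full 42-territory map:

import numpy as np

# Sketch of the proposed encoding: 6 army-count inputs per territory (one per
# player) plus one input per continent bonus. The territory list, bonuses and
# `board` layout are placeholders, not the real 42-territory map.

NUM_PLAYERS = 6
TERRITORIES = ["Alaska", "Siam", "Brazil"]           # the real board has 42
CONTINENT_BONUS = {"Asia": 7, "South America": 2}    # armies gained per turn

def encode_board(board, continent_owner, me):
    # board[t] = (owner_index, army_count); continent_owner[c] = owner or None
    inputs = []
    for t in TERRITORIES:
        per_player = [0] * NUM_PLAYERS
        owner, count = board[t]
        per_player[owner] = count
        inputs.extend(per_player)                    # 6 inputs per territory
    for c, bonus in CONTINENT_BONUS.items():
        inputs.append(bonus if continent_owner[c] == me else 0)
    return np.array(inputs, dtype=np.float32)

board = {"Alaska": (0, 5), "Siam": (2, 3), "Brazil": (1, 1)}
continent_owner = {"Asia": None, "South America": 1}
print(encode_board(board, continent_owner, me=0))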

I also like Palidine's idea of an influence map, but I'd somehow integrate its results into the NN. I might use this to place initial armies or for the free move at the end of the turn.

How would I train it to output which territory to attack from and which to attack?

Thanks,
T

as timkin pointed out, it's probably not the best solution for this particular problem, but since you're just doing it to learn NNs, that doesn't really matter

hrm, you might try something like 2-3 inputs per territory. 1 input is theirs/mine, the other is the number of armies (a stronger signal for more armies). outputs would need to indicate which territory you are attacking from and with how many units, and you'd need to indicate the territory you are attacking. so again, perhaps 2-3 nodes per territory.

the short of it is that you don't give it any ground rules. you stimulate the inputs and have it give a response. you then run some kind of back-propagation mechanism to train it on whether or not that was a good answer. for instance, if it tried to attack from one country to another country that doesn't border it, you would send a strong WRONG signal back through to correct the node weights. similarly you'd have to train it not to attack itself. then you'd have to get into more strategic kinds of thinking, like trying to attack from somewhere you are strong to somewhere the enemy is weak.
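a rough sketch of what that WRONG signal could look like if the outputs are read as scores for (from, to) pairs: zero out every illegal pair in the training target so the error pushes those outputs down. the adjacency set, ownership and "network output" are all toy stand-ins:

import numpy as np

# toy sketch: punish attacks between non-bordering territories (and attacks
# on yourself) by zeroing illegal (from, to) entries in the training target.
# TERRITORIES, ADJACENT, owners and `scores` are made up for illustration.

TERRITORIES = ["Alaska", "Kamchatka", "Siam"]
ADJACENT = {("Alaska", "Kamchatka"), ("Kamchatka", "Alaska"),
            ("Kamchatka", "Siam"), ("Siam", "Kamchatka")}

def legality_mask(owners, me):
    n = len(TERRITORIES)
    mask = np.zeros((n, n))
    for i, src in enumerate(TERRITORIES):
        for j, dst in enumerate(TERRITORIES):
            legal = owners[src] == me and owners[dst] != me and (src, dst) in ADJACENT
            mask[i, j] = 1.0 if legal else 0.0
    return mask

owners = {"Alaska": 0, "Kamchatka": 1, "Siam": 1}
scores = np.random.rand(3, 3)        # pretend this came from the network
target = scores * legality_mask(owners, me=0)
error = target - scores              # the "WRONG" signal to backpropagate
print(np.round(error, 2))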

all in all it's going to be a nightmare of a task for an NN to learn. perhaps just trying to teach it a single rule, like "don't attack from one country to another that doesn't border it", would be a better experiment. ignore everything about numbers of armies. ignore everything about teams and whether or not you own the territories. in fact, if you want to continue down this line you could arrange to train several different neural networks on different rules and have them feed into each other somehow. i dunno, it's definitely a complex task and something you would have to experiment with a great deal.

definitely start simple. if you start with the goal of "learn to play risk" you will get frustrated b/c it may take you an extremely long time to get there (months, maybe even years). with a simple rule to learn as a starter you might expect to get results sooner.

-me

[edited by - Palidine on March 26, 2004 11:14:24 PM]

quote:
Original post by trepan
I'm thinking about representing the board, which has 42 territories, with 6 inputs for each territory. Each input would be a player's number of armies on that territory.


That's 252 inputs right there, which is quite a few, and you want to add more. I doubt that this will prove workable.

If you insist on using a neural network, my guess is that your best bet will be to use it as an evaluation function which gives the likelihood of winning, and to base it on useful summaries of the current state (number of territories held, dummy flags for holding each of the continents, etc.). Such an evaluation function could be used within a search which computes (using probability) the likely outcomes of possible moves.
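A rough sketch of that arrangement: a small feed-forward net maps summary features to a win probability, and a one-ply search picks the move whose expected successor state scores highest. The weights, the feature count and the toy simulate/featurize helpers are assumptions for illustration only:

import numpy as np

# Sketch: a small net as an evaluation function over summary features,
# used inside a one-ply lookahead. Weights, the feature count and the
# stand-in `simulate`/`featurize` helpers are illustrative only.

rng = np.random.default_rng(0)
NUM_FEATURES = 14                          # e.g. territories held, armies,
W1 = rng.normal(size=(NUM_FEATURES, 8))    # dummy flags per continent, ...
W2 = rng.normal(size=(8, 1))

def win_probability(features):
    hidden = np.tanh(features @ W1)        # single hidden layer
    logit = float((hidden @ W2)[0])
    return 1.0 / (1.0 + np.exp(-logit))    # squash to a probability

def best_move(state, legal_moves, simulate, featurize):
    # evaluate the (expected) successor of each legal move, keep the best
    return max(legal_moves,
               key=lambda m: win_probability(featurize(simulate(state, m))))

# toy usage with placeholder game dynamics
simulate = lambda state, move: state + move
featurize = lambda state: np.full(NUM_FEATURES, state, dtype=np.float64)
print(best_move(0.0, [0.1, 0.5, -0.2], simulate, featurize))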

-Predictor
http://will.dwinnell.com



You see, as far as I know, a neural net doesn't always produce the right answer. As it tries to generalize, it may sometimes do the wrong thing in order to do the right thing most of the time.

What I'm trying to say is that even if you only want to learn about ANNs, you should model something that may actually work. It would be really weird if, from time to time, an army could move to a country it doesn't border. This is a rule that could easily be implemented without neural networks, and the purpose of a neural net is to do stuff that is hard to model.

I think you could make sure the world stays consistent and use an ANN to choose the best move from a list of valid ones. A good criterion, as already pointed out, is the increase in influence, but that is a very local strategy; you would also need a global one. What do you think?
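A sketch of that division of labour: ordinary code enumerates only the legal attacks, and a scoring function (which would be the net) merely ranks them. The adjacency data and the stand-in scorer are toy assumptions:

# Sketch: keep the rules in plain code and let the net only rank moves that
# are already known to be legal. Adjacency, ownership and the stand-in
# scoring function are toy examples.

ADJACENT = {"Alaska": ["Kamchatka"],
            "Kamchatka": ["Alaska", "Siam"],
            "Siam": ["Kamchatka"]}

def legal_attacks(owners, armies, me):
    # every (from, to) pair where I own `from` with more than one army
    # and `to` is an adjacent enemy territory
    moves = []
    for src, neighbours in ADJACENT.items():
        if owners[src] == me and armies[src] > 1:
            moves.extend((src, dst) for dst in neighbours if owners[dst] != me)
    return moves

def choose_attack(owners, armies, me, score):
    # `score` is where the trained net would plug in; any callable works here
    moves = legal_attacks(owners, armies, me)
    return max(moves, key=score) if moves else None

owners = {"Alaska": 0, "Kamchatka": 1, "Siam": 1}
armies = {"Alaska": 5, "Kamchatka": 3, "Siam": 1}
print(choose_attack(owners, armies, me=0,
                    score=lambda m: armies[m[0]] - armies[m[1]]))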

quote:
Original post by Palidine
hrm, you might try something like 2-3 inputs per territory. 1 input is theirs/mine, the other is the number of armies (a stronger signal for more armies). outputs would need to indicate which territory you are attacking from and with how many units, and you'd need to indicate the territory you are attacking.



I doubt this would be feasible. Imagine how much we should expect all of the outputs to change simply by flipping a few, or even one, territory's input for whether we or the opponent own it. This implies a need for an astronomical amount of training data to cover all the possible combinations of inputs.

I recommend generating some set of derived features which summarize the state of the game, and using that in a model with a much less ambitious output than the exact move to be made.

-Predictor
http://will.dwinnell.com

Sounds like simpler is better at first. How about...

One input for each player (6 inputs) showing the number of territories owned.

One input for each player stating the number of armies owned (6 inputs).

One input for each player stating the number of continents owned (6 inputs).

I don't think this representation will give any type of grand strategy, but as Predictor suggested, it would give a percentage chance of winning from the current state.

I could then loop through all possible moves and check the NN to see the winning percentage for each resulting state, and choose the highest one. Although it seems like I could derive a simple function to determine this.
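A rough sketch of that compact 18-input summary (per-player territory, army and continent counts); the board and continent structures are placeholders:

import numpy as np

# Sketch of the 6 + 6 + 6 summary inputs described above: per-player counts
# of territories, armies and continents. `board` and `continents` are toys.

NUM_PLAYERS = 6

def summarize(board, continents):
    # board[t] = (owner, armies); continents[c] = owning player or None
    territories = [0] * NUM_PLAYERS
    armies = [0] * NUM_PLAYERS
    conts = [0] * NUM_PLAYERS
    for owner, count in board.values():
        territories[owner] += 1
        armies[owner] += count
    for owner in continents.values():
        if owner is not None:
            conts[owner] += 1
    return np.array(territories + armies + conts, dtype=np.float32)  # 18 inputs

board = {"Alaska": (0, 5), "Siam": (2, 3), "Brazil": (1, 1)}
continents = {"Asia": None, "South America": 1}
print(summarize(board, continents))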

What if I go a different route...sorry about the rambling, but you guys are giving me good ideas and saving me a lot of programming time.

What if each territory had its own mini-NN that produced a "need" signal? The inputs could be quite few, but highly strategic in nature: the number of armies it contains, the closest enemy army, the number of territories needed to complete my continent, its influence map weight, the number of friendly and enemy armies within two moves, etc.

The inputs for each territory could number fewer than 10 or so, but they would be fed into a larger NN that takes this need signal into account.
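A very rough sketch of that two-stage layout: one small shared net turns each territory's local features into a "need" score, and a second net reads all 42 scores at once. The sizes, random weights and feature meanings are placeholders only:

import numpy as np

# Sketch of the two-stage idea: a shared mini-NN maps each territory's local
# features to a "need" score; a larger net then consumes all the need scores.
# Feature count, territory count and random weights are placeholders.

rng = np.random.default_rng(1)
LOCAL_FEATURES = 6        # e.g. my armies here, nearest enemy stack,
NUM_TERRITORIES = 42      # territories missing from my continent, ...

W_mini = rng.normal(size=(LOCAL_FEATURES, 1))    # shared across territories
W_top = rng.normal(size=(NUM_TERRITORIES, 1))    # net over the need signals

def need_signal(local_features):
    return float(np.tanh(local_features @ W_mini)[0])

def evaluate(per_territory_features):
    needs = np.array([need_signal(f) for f in per_territory_features])
    logit = float((needs @ W_top)[0])
    return 1.0 / (1.0 + np.exp(-logit))          # e.g. chance of winning

features = rng.normal(size=(NUM_TERRITORIES, LOCAL_FEATURES))
print(evaluate(features))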

