
Board game AI for a game with hidden elements


#1 AWhiteOwl   Members   


Posted 31 May 2014 - 12:23 PM

I'm currently working on a board game / turn-based tactics game.


The gist of it is that each player has about 4 pieces (tokens) that can move around the board, and each token can carry up to 4 "inventory" items. Each turn the player moves each of his tokens once or uses an inventory item. A token's inventory is hidden from the other player, and in some cases from the owner as well (if an item is stolen, the player it was stolen from doesn't find out until they try to use it). The objective can differ from game to game: sometimes it's to eliminate all enemy tokens, sometimes it's to get a token to a certain square on the board.


Currently I have a very basic AI; it works more like a behavior tree (if an enemy token is x squares away, do this, and so on). I started reading about minimax and alpha-beta pruning, since they seem like a more robust way to build a more advanced AI. I'd like to implement some kind of AI that evaluates all its moves and their possible outcomes and then makes the best choice from there.


The main issue is that everything I've read assumes both players have full knowledge of the board. I need a way for the AI to evaluate all possible moves without "cheating" by knowing what is in each token's inventory.


Could anyone give me some pointers on how to do this? Are there any notable games that implement this sort of thing and have published any information about how they coded it?

#2 Álvaro   Members   


Posted 31 May 2014 - 08:37 PM

Alpha-beta search is great for two-player zero-sum games with complete information and no randomness, as long as they have a moderate branching factor and you have an idea of how to write a decent evaluation function. So it's good for things like chess, checkers, and connect-4. It's mediocre for things like parcheesi or go, and hopeless for something like what you describe.

The good news is that there is another class of algorithms that should work just fine for your situation: Monte Carlo Tree Search.

Here's the basic initial plan:

1. You need a quick probabilistic model of how players pick moves in this game. This is a piece of code that takes a game situation from the point of view of a player as input and returns a distribution of probabilities on the moves available as output. This doesn't need to be very sophisticated, but it needs to run fast.

2. Generate random configurations that are consistent with your knowledge of the game so far. Compute a weight for this scenario, which is the product of the probabilities of all the actions taken by all players, according to the probabilistic model above.

3. You need a quick playout policy. This is similar to the probabilistic model described earlier, and you could start by using the same code, but I suspect later on you may want to tune them separately.

4. Starting from the configurations in (2), play out the rest of the game using the playout policy from (3) for all players. Collect statistics about what rewards you end up getting for each move you played.

5. Pick the move with the best statistics.
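To make the plan above concrete, here is a minimal Python sketch of steps 2–5: sample hidden-information scenarios, play each candidate move out under a quick playout policy, accumulate reward statistics, and pick the best move. Everything here is illustrative, not from the original posts: the `ToyGame` class, the uniform `policy`, and the method names (`sample_hidden_state`, `apply`, `legal_moves`, `reward`) are assumptions standing in for your real game code, and scenario weighting (step 2) is left out for brevity.

```python
import random
from collections import defaultdict

def policy(state, moves):
    """Step 1 (toy version): uniform probabilities over legal moves.
    Replace with game heuristics; must return a move -> probability dict."""
    p = 1.0 / len(moves)
    return {m: p for m in moves}

def sample_move(dist):
    """Draw one move according to the policy's probability distribution."""
    moves, probs = zip(*dist.items())
    return random.choices(moves, weights=probs)[0]

def playout(game, scenario, first_move):
    """Step 4: play the game to the end from a sampled scenario."""
    state = game.apply(scenario, first_move)
    while not game.is_terminal(state):
        moves = game.legal_moves(state)
        state = game.apply(state, sample_move(policy(state, moves)))
    return game.reward(state)

def evaluate_root(game, my_moves, n_scenarios=200):
    """Steps 2, 4, 5: sample hidden states consistent with what we have
    observed, collect reward statistics per move, return the best move."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for _ in range(n_scenarios):
        scenario = game.sample_hidden_state()
        for move in my_moves:
            totals[move] += playout(game, scenario, move)
            counts[move] += 1
    return max(my_moves, key=lambda m: totals[m] / counts[m])

class ToyGame:
    """One-ply stand-in for a real hidden-information game: the opponent
    secretly holds a shield or nothing (50/50). 'attack' scores 1 only if
    they are unshielded; 'wait' always scores 0.4."""
    def sample_hidden_state(self):
        return {"item": random.choice(["shield", "nothing"]),
                "done": False, "reward": 0.0}
    def apply(self, state, move):
        s = dict(state)
        s["done"] = True
        if move == "attack":
            s["reward"] = 1.0 if s["item"] == "nothing" else 0.0
        else:
            s["reward"] = 0.4
        return s
    def is_terminal(self, state): return state["done"]
    def legal_moves(self, state): return ["attack", "wait"]
    def reward(self, state): return state["reward"]
```

With enough sampled scenarios, `evaluate_root(ToyGame(), ["attack", "wait"])` settles on "attack" (expected reward 0.5 vs. 0.4), which is the whole point: the AI never looks inside the hidden state of any single game, it just averages over plausible ones.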

One refinement: as you collect statistics, try promising moves more often than unpromising ones. You can read about "multi-armed bandits" to learn some methods for doing that (I first learned about this technique from a go program called MoGo, which used a rule called UCB1; you should try to find that paper).
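The UCB1 rule mentioned above picks the move maximizing mean reward plus an exploration bonus, roughly score(m) = mean(m) + c * sqrt(ln N / n_m), where N is the total number of playouts and n_m the playouts through move m. A minimal sketch follows; the `stats` layout (move mapped to a (total_reward, plays) pair) and the constant `c` are my own illustrative choices, not from the MoGo paper.

```python
import math

def ucb1_select(stats, total_plays, c=math.sqrt(2)):
    """stats: {move: (total_reward, plays)}.
    Returns the move maximizing mean + c*sqrt(ln(N)/n).
    Moves that have never been tried are selected first."""
    best, best_score = None, float("-inf")
    for move, (total, plays) in stats.items():
        if plays == 0:
            return move  # always explore untried moves immediately
        score = total / plays + c * math.sqrt(math.log(total_plays) / plays)
        if score > best_score:
            best, best_score = move, score
    return best
```

In the loop from the earlier steps, you would call this before each playout to decide which root move to sample next, instead of giving every move an equal share of playouts.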

What I described so far is not quite MCTS, but it's close. What's missing is that in MCTS you also accumulate statistics at nodes other than the root of the search tree. That way the search is guided in the first few moves by the statistics accumulated so far. I don't think this is very relevant in your case, because you'll be playing many different scenarios (the ones generated in (2)), so the exact same situation is not going to happen very often in nodes other than the root, and "full MCTS" is not likely to help things out much.

This is a lot to swallow. Try to search the web for some of the keywords in what I wrote and see if you can make sense of it all. Feel free to ask here for further explanations on any aspect of this.

#3 AWhiteOwl   Members   


Posted 01 June 2014 - 03:30 PM

Thanks for your response. That's a lot of information, and it should help me a lot as I do further googling. Thanks again!
