
# Turn-based, complete-information game AI


14 replies to this topic

### #1 Makers_F - Members - Reputation: 827


Posted 26 January 2012 - 04:49 PM

I'm developing a game for Android, and now I've come to the AI part of it. I've never written AI before (my previous applications didn't need it), so I've tried to read whatever I could find that seemed useful.

Since I'm developing for Android, I'm forced to use Java (yes, I could use the NDK, but it's far too much effort for the results needed). I'm memory bound, and the less CPU I use the better (I don't want my app to drain the battery).

I'm first developing a 1 vs 1 situation, but I would like to extend it to X vs Y in the future (not strictly related to this topic, but please keep it in mind when responding: I prefer expandable, maintainable code over strict efficiency).

I have D = A + B + C entities (the real player controls A entities, the enemy controls B, and there are C unallocated entities). A player wins when all the opposing entities are killed, so even with only 1 remaining entity, if the opposing player doesn't control any, the player wins.

Since an entity can perform 2 kinds of actions, one on the enemy entities and one on the unallocated entities, each choice/node has a branching factor of B + C.
This leads to rapid growth of the tree of possible choices.

In addition, cyclic states can appear in my game (cyclic state = you are in state A, you and the enemy perform some actions, and then you are in state A again). I would like to avoid this, even at the cost of letting the AI make a not-the-best-suited move.

I would also like the AI not to always perform the best guessed action, but to choose randomly, with a given distribution, among e.g. the 5 best possible moves (obviously I need to find the 5 best moves; I can code the choice on my own): predictability kills fun.
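Choosing randomly among the best few moves with a given distribution can be done with a simple cumulative-sum weighted draw. This is an illustrative sketch, not from the thread; `weights` stands in for whatever scores the search assigns to the top 5 moves:

```java
import java.util.Random;

public class MovePicker {
    private final Random rng = new Random();

    // Draws an index with probability proportional to its weight,
    // e.g. the scores of the 5 best moves found by the search.
    public int pickWeighted(double[] weights) {
        double total = 0;
        for (double w : weights) total += w;
        double r = rng.nextDouble() * total;
        for (int i = 0; i < weights.length; i++) {
            if (r < weights[i]) return i;
            r -= weights[i];
        }
        return weights.length - 1; // guard against floating-point rounding
    }
}
```

Feeding in uniform weights gives uniformly random play; skewing the weights toward the best move makes the AI stronger while staying unpredictable.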

And the AI will have a maximum amount of time to make its choice (which thus will probably not be the best one, but the best it can find in the given time).

I read about minimax and alpha-beta pruning, but I don't know whether it is the best algorithm for my case, whether it is memory or processor hungry, whether it can provide the best X moves and not only the best one, and whether it can be extended to work with more players.
In addition, I don't know whether it is heavier to reconstruct the tree each turn or to keep it in memory.

Useful resources, maybe with implementation details or code, would be really appreciated.

A possible tree
http://www.pasteall.org/pic/show.php?id=25213

### #2 Álvaro - Crossbones+ - Reputation: 13935


Posted 26 January 2012 - 06:15 PM

> I read about minimax and alpha-beta pruning, but I don't know whether it is the best algorithm for my case, whether it is memory or processor hungry, whether it can provide the best X moves and not only the best one, and whether it can be extended to work with more players. In addition, I don't know whether it is heavier to reconstruct the tree each turn or to keep it in memory.

Minimax is not memory hungry, although a lot of memory can be used in the form of a transposition table to speed up the search (how effective this is depends a lot on the game). As for CPU usage, the more you give it, the better it will play; but that will be the case for pretty much any algorithm you use.

One important thing to know about minimax is that you don't need to keep the tree in memory at all: You simply compute the moves from the current node and explore them one at a time, generating moves as you need them and forgetting them automatically when you return from your search function. If at all possible, you should have a single board on which you can perform moves and undo them as you search. The tree that you explore is only a mathematical construct, not an actual data structure.
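The single-board, make/undo search described above can be sketched as a plain depth-limited negamax formulation of minimax. The `Board` and `Move` interfaces here are invented names just to show the shape; the board is mutated in place, so the "tree" is only the recursion:

```java
import java.util.List;

interface Move {}

interface Board {
    List<Move> generateMoves();
    void makeMove(Move m);
    void undoMove(Move m);
    boolean isGameOver();
    int evaluate(); // heuristic score from the side to move's point of view
}

public class Search {
    // Depth-limited negamax: a single board is mutated in place and
    // restored by undoMove, so no tree is ever stored in memory.
    static int negamax(Board board, int depth) {
        if (depth == 0 || board.isGameOver()) {
            return board.evaluate();
        }
        int best = Integer.MIN_VALUE + 1; // +1 so negation cannot overflow
        for (Move m : board.generateMoves()) {
            board.makeMove(m);
            int score = -negamax(board, depth - 1);
            board.undoMove(m);
            if (score > best) best = score;
        }
        return best;
    }
}
```

Alpha-beta pruning drops into this same skeleton by passing and cutting on two bounds; the memory footprint stays the same.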

About returning multiple moves, what you would probably do is add some random term to your evaluation function, which will provide variety to your bot's game.

Other than minimax, you should probably also look at Monte Carlo Tree Search (MCTS), which has been tremendously successful in computer go compared to any other approach. MCTS can be applied to a much larger class of games, including games with dice, games with more than 2 players and games where players make simultaneous decisions. It is very different from minimax in nature, in particular because you don't need an evaluation function at all: you simply play games (known as "playouts"), picking moves at random from the current position, and accumulate statistics on who wins. To make this basic idea work, you need to add some form of search (UCT is a good algorithm to start with) and you may have to bias the random moves in the playouts (at the very least so that games end in some reasonable number of moves). Biasing the random moves in the playouts is the main way to incorporate knowledge into your bot, whereas in minimax the evaluation function is where you do that.
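A playout, the basic building block just described, is only a few lines. This sketch assumes hypothetical `SimBoard`/`SimMove` interfaces and caps the playout length so it always terminates, as suggested above:

```java
import java.util.List;
import java.util.Random;

interface SimMove {}

interface SimBoard {
    List<SimMove> generateMoves();
    void makeMove(SimMove m);
    boolean isGameOver();
    int winner(); // id of the winning player, valid once the game is over
}

public class Playout {
    private static final Random RNG = new Random();

    // One random playout: play uniformly random moves until the game
    // ends or the move cap is hit, then report who won (-1 = unfinished).
    static int randomPlayout(SimBoard board, int maxMoves) {
        for (int i = 0; i < maxMoves && !board.isGameOver(); i++) {
            List<SimMove> moves = board.generateMoves();
            if (moves.isEmpty()) break;
            board.makeMove(moves.get(RNG.nextInt(moves.size())));
        }
        return board.isGameOver() ? board.winner() : -1;
    }
}
```

Biasing the playouts means replacing the uniform `RNG.nextInt` pick with a weighted one that prefers plausible moves.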

There is probably less information online about MCTS than about minimax, since it is a much more recent method. However, it is so easy to get something basic working that I think I would start there. If you have trouble getting started, I can help you.

### #3 Makers_F - Members - Reputation: 827


Posted 26 January 2012 - 06:39 PM

Thanks for the reply, I'll start searching for papers and code!
I just need to clarify one point:
should I ship a "database" with the history of played matches (playouts) with my application, so that the AI is not totally dumb the first few times the game is played?

What a pity that multithreading on Android is of limited use (very few devices have multi-core CPUs); I know that Monte Carlo algorithms are usually well suited to multithreading.

Additional information: the board is not fixed, and neither is the number of elements; both will change for each "level". In addition, different entities may have different effects (for example, an entity may deal double the standard damage). Is MCTS still suited?

### #4 Álvaro - Crossbones+ - Reputation: 13935


Posted 26 January 2012 - 08:57 PM

As I said, MCTS can handle a very large class of games. The only feature of a game that would make it very tricky to use MCTS is hidden information (e.g., cards).

Can you post the rules? Or perhaps a slightly simplified version of them, but such that the essence of the game is still captured? And a sample level?

### #5 Makers_F - Members - Reputation: 827


Posted 27 January 2012 - 06:16 AM

Rules:

Player 1 starts with X entities.
Player 2 starts with Y entities.
Each entity controls one and only one location.
There are Z empty locations.

On player 1's turn, each of the X entities can choose one of:
- attack one of the Y entities (which increases the attacked entity's value by the attacking entity's value)
- if its value % 2 == 0, split itself into an empty location.

Then it is player 2's turn, who can act just like player 1.

When the value of an entity exceeds a given threshold, the entity dies and leaves its location empty.
When a player loses all of his entities, he is eliminated. The last player alive wins.
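A move generator for these rules might look like the following sketch; the class and method names are invented for illustration, and `value` follows the rules above (attacks raise the target's value, even-valued entities may split into an empty location):

```java
import java.util.ArrayList;
import java.util.List;

// One entity occupies one location and carries a value (per the rules above).
class Entity {
    int value;
    Entity(int value) { this.value = value; }
}

class EntityMove {
    final Entity actor;
    final Entity target;   // null for a split
    final boolean isSplit;
    EntityMove(Entity actor, Entity target, boolean isSplit) {
        this.actor = actor; this.target = target; this.isSplit = isSplit;
    }
}

public class RuleBoard {
    // Each friendly entity may attack any enemy entity, or, if its
    // value is even and an empty location exists, split into one.
    static List<EntityMove> generateMoves(List<Entity> mine,
                                          List<Entity> enemy,
                                          int emptyLocations) {
        List<EntityMove> moves = new ArrayList<>();
        for (Entity e : mine) {
            for (Entity t : enemy) {
                moves.add(new EntityMove(e, t, false));
            }
            if (e.value % 2 == 0 && emptyLocations > 0) {
                moves.add(new EntityMove(e, null, true));
            }
        }
        return moves;
    }
}
```

With B enemy entities and the split option, this makes the branching per entity at most B + 1 here, consistent with the per-entity choices the rules allow.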

In the current level I'm working on I have 3 players: yellow, blue and green. But since yellow and blue are allies, I think it is no different from having 2 players.

I can post an example of a single 1 vs 1 game, with 2 starting entities each and no starting empty locations.

Here the threshold is 4. There is a sign near the player whose turn it is. I didn't draw all the possible actions, only some, to show how it works (I also omitted "conquering" the locations left empty by enemy entities, but that is a valid move):
http://www.pasteall.org/pic/25248
(in the top-right branch, player 1 will lose)

As you can see, often a good move is not to let the other player split (keeping its value odd).

### #6 Álvaro - Crossbones+ - Reputation: 13935


Posted 27 January 2012 - 06:50 AM

I see. And what is the largest board configuration you have in mind?

### #7 Álvaro - Crossbones+ - Reputation: 13935


Posted 27 January 2012 - 07:01 AM

Regardless of which technique you use, you'll need a class that represents the board, with methods like generate_moves and make_move. Did you write that already?

Once you have that, you can easily generate random playouts from the initial configuration. Check if those playouts finish in some reasonable time (in some games -like go- you need to disallow certain types of really bad moves for the playouts to even finish). If that's the case (and I suspect it will be), or if you can identify the really bad moves that make the playouts last forever, I think MCTS should work just fine.

### #8 Makers_F - Members - Reputation: 827


Posted 27 January 2012 - 12:16 PM

I'll never go over 20 total locations, and usually it will be around 10, not many more.

I haven't written it yet, but I suspected it would be needed.
I already have a class that describes the starting board; I create it after parsing my level with the parser I wrote, and I use it to populate the scene, so this is not a problem.
I can write a memory-optimized one that allows undoing moves, if the algorithm needs that. But I didn't understand what you mean by generate_moves.

For infinite playouts, I would prefer the AI to make a random decision rather than fall into a cyclic situation, so maybe it is possible to check whether a state was already visited, and if so have the algorithm choose a random move, or a not-the-best move, just to escape the loop.

So is a "database/tree" of moves actually needed? Maybe it is possible to collect the matches played by the players and let them upload them; then I can merge them and improve the AI for the next releases.

Is it possible to set a difficulty variable (the lower it is, the more often the algorithm chooses random moves instead of good ones)?

By the way, I'll read these papers as soon as I find time:
http://www.mcts.ai/?q=mcts
http://teytaud.over-blog.com/article-35709049.html
http://www.ru.is/faculty/yngvi/pdf/WinandsBS08.pdf

### #9 Álvaro - Crossbones+ - Reputation: 13935


Posted 27 January 2012 - 01:37 PM

You'll only need the ability to undo moves if you use minimax. For MCTS, making moves forward is enough. By `generate_moves` I mean a function that creates a list of the available moves in a particular position.

You will not need a database of moves, at least not to get the basic idea working. The UCT algorithm does grow a tree in memory; it's actually fairly easy to implement, but you don't even have to do that to get something working.

As a first version of your bot, make something that explores 1000 times each possible move, finishing the game by playing random moves from there. Then pick the move that won the most times. Once you have that working, you can improve it by using an algorithm called UCB (Upper Confidence Bounds) to explore promising moves more often. Then you can use UCB not only for the current position, but also for positions that have appeared several times in the search, and that's what UCT is (Upper Confidence bounds applied to Trees).
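That first version (flat Monte Carlo) can be sketched as follows. `McBoard` and `McMove` are hypothetical interfaces, `copy()` stands in for whatever cloning the board provides, and the UCB1 refinement mentioned above appears only as a comment:

```java
import java.util.List;
import java.util.Random;

interface McMove {}

interface McBoard {
    List<McMove> generateMoves();
    void makeMove(McMove m);
    boolean isGameOver();
    boolean botWins();   // did the bot win? valid once the game is over
    McBoard copy();
}

public class FlatMonteCarlo {
    private static final Random RNG = new Random();

    // Flat Monte Carlo: try every root move `playouts` times with random
    // continuations and keep the move that won most often. UCB1 would
    // replace this uniform allocation with
    //   score = winRate + sqrt(2 * ln(totalPlayouts) / movePlayouts)
    // so promising moves get explored more often.
    static McMove bestMove(McBoard root, int playouts, int maxLen) {
        McMove best = null;
        int bestWins = -1;
        for (McMove m : root.generateMoves()) {
            int wins = 0;
            for (int i = 0; i < playouts; i++) {
                McBoard b = root.copy();
                b.makeMove(m);
                for (int d = 0; d < maxLen && !b.isGameOver(); d++) {
                    List<McMove> ms = b.generateMoves();
                    if (ms.isEmpty()) break;
                    b.makeMove(ms.get(RNG.nextInt(ms.size())));
                }
                if (b.isGameOver() && b.botWins()) wins++;
            }
            if (wins > bestWins) { bestWins = wins; best = m; }
        }
        return best;
    }
}
```

Lowering `playouts` is exactly the difficulty knob asked about above: fewer playouts means noisier statistics and weaker, more random play.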

MCTS is a bit random by nature, and you can make it weaker and more random by playing fewer playouts. That's the easy part.

### #10 Makers_F - Members - Reputation: 827


Posted 29 January 2012 - 09:30 AM

OK, here is what I understood:

1) The algorithm "starts". Nothing is in memory and it doesn't "know" anything.
2) The algorithm is initialized: the root node and the board (which currently represents the starting situation) are created.
3) The current node queries the board, which returns a list of all possible moves. For each move, the algorithm creates a child of the current node.
4) A move/child node is chosen at random, and the board is updated to reflect that move.
5) Repeat steps 3-4 until the board declares a winner.
6) Walk up the tree, from the winning step back to the root, and update all the needed values of the nodes (from what I understood, wins and visits).
7) Set the root node as the current node and update the board to represent that state.
8) Repeat 3-4-5-6-7 as many times as you wish (the more you repeat, the smarter the AI will be).

So the tree is created.
Now the player can make his move. The tree is traversed move by move (both the real player's moves and the AI's are taken into account, so that the current node always represents the current state), and the AI makes its move by comparing the values of all the child nodes of the current node.

So, actually, all the teaching smartness is in the board, which should not choose moves completely at random but with a bit of heuristic, and should not report all the possible moves but only the "smart" ones (I'm not sure about this, since if the player makes a "stupid" move, the tree will be missing an entire branch until a known situation is reached).

I understood this (and I already implemented a board that does everything needed), but I'm not really able to connect my understanding with the code I find on the net.

For example, this
http://www.mcts.ai/?q=code/simple_java
uses UCT (you can see the UCT value formula in select()). But I don't know where the board is saved, whether it needs to stay in memory even after the tree is constructed or only while building it, where I should plug my board into that code, etc.
Long story short, I'm not sure about the implementation details. Can you please provide some code (even pseudocode), or some example? I also found
http://senseis.xmp.net/?UCT%2FDiscussion
but they say the code is not complete/fully functional/takes into account something different from the real UCT.

### #11 Makers_F - Members - Reputation: 827


Posted 09 February 2012 - 03:55 PM

Well, I implemented my AI.

I just need to clarify some points. I needed to modify various parts of the code provided at mcts.ai, since it could lead to exceptions and other bad behaviours.

1) How are "final" nodes handled? By final I mean nodes that contain a win or a loss. The algorithm will try to expand them, but they can't be expanded, since the simulated player can't make any move (he lost and is no longer present on the board).

    cur.expand();
    TreeNode newNode = cur.select();
    double value = rollOut(newNode);


becomes
    cur.expand();
    double value;
    if (!cur.isLeaf()) { // not a final state: the player can still move and is not dead
        TreeNode newNode = cur.select();
        EvaluationContext.getCurrentBoard().apply_move(newNode.getMove());
        EvaluationContext.getCurrentBoard().nextPlayer();
        value = rollOut(newNode);
    } else {
        value = EvaluationContext.getCurrentBoard()
                    .playerWins(EvaluationContext.getCurrentBoard().getCurrentPlayerID()) ? 1 : 0;
    }


where
    public void expand() {
        children = new TreeNode[nActions];
        for (int i = 0; i < nActions; i++) {
            children[i] = new TreeNode();
        }
    }


becomes
    public void expand() {
        if (!isLeaf())
            return;
        List<Move> moves = EvaluationContext.getCurrentBoard().generate_moves();
        if (moves.isEmpty())
            return;
        children = new TreeNode[moves.size()];
        int i = 0;
        for (Move move : moves) {
            children[i++] = new TreeNode(move, EvaluationContext);
        }
    }

Still, each time the search reaches a final node it will continue to simulate it, even though that is not needed..

2) I have my "real" board (the one the player plays on and interacts with) and a board for each AI simulation thread, and I keep these in sync by updating the real board with the AI's moves and updating the AI boards with the player's moves. Each thread also has a board on which it plays simulations (since simulations change the board state). Deep-copying the board and generating a new one from the real one (the implementations are different; the AI board is by far more memory efficient) take about the same time. I'm facing sync problems (I blame myself for not committing when everything worked; I changed a few things and now I'm no longer able to keep things in sync, so lots of debug time is waiting for me). I'm already planning to solve this anyway, but is it wrong to generate the AI boards from the real board instead of keeping them in sync?

### #12 Álvaro - Crossbones+ - Reputation: 13935


Posted 10 February 2012 - 03:11 AM

I have some opinions about point 2. The cleanest way to organize the relationship between the GUI and the engine is to define a text-based protocol that they use to communicate: the GUI and the engine run as separate processes that talk through a pipe. This has many advantages, including the possibility of swapping one engine for another (if you make your protocol available to others), automated testing of the engine, and the option of adapting the scheme to let people play online.

That mechanism is very popular in chess, where most engines implement one of two protocols (UCI and whatever XBoard uses). You should read the description of UCI because it is well designed.

Even if you don't end up implementing this in two processes, I would still strongly encourage very clear separation of engine and GUI, with a well-defined interface between them.
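As a sketch of the idea, a line-based engine loop over stdin/stdout can be this small. The commands here (`position`, `go`, `bestmove`, `quit`) are simplified stand-ins for illustration, not the actual UCI command set:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class EngineLoop {
    // Reads one command per line from the GUI and answers on stdout.
    // The command names are invented; UCI defines its own richer set.
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            if (line.startsWith("position ")) {
                // parse and store the position sent by the GUI
            } else if (line.equals("go")) {
                // run the search here, then reply with the chosen move
                System.out.println("bestmove a1b2");
            } else if (line.equals("quit")) {
                break;
            }
        }
    }
}
```

Because the only coupling is plain text over a pipe, the same engine can be driven by a GUI, a test script, or a network relay without any code changes.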

### #13 Makers_F - Members - Reputation: 827


Posted 11 February 2012 - 11:18 AM

I already have different threads working on the engine's update loop and the AI simulations, and I communicate with moves (which contain a board-independent description of the attacker, the defender and the action to perform), so it is already similar to a communication protocol. But implementing a real, full protocol is far too much work, and I don't think it is well suited to this situation (a mobile phone game). By the way, I already have a class that acts as an interface between the engine and the simulation threads (I start one for each core of the device). So you are saying that it is better to keep the boards "indirectly" in sync (using the protocol to notify the moves of the AI and the player) instead of regenerating the board each time..

### #14 Makers_F - Members - Reputation: 827


Posted 28 February 2012 - 05:33 PM

Problem finally solved, everything is working just fine!
It was hard to synchronize all the threads and the boards!
Now a bit of code cleanup, and then multiplayer support awaits me!
Thanks for the help!

### #15 Álvaro - Crossbones+ - Reputation: 13935


Posted 28 February 2012 - 11:12 PM

So, did you end up using Monte Carlo?

