A couple of questions about Monte Carlo Tree Search (MCTS)


Hi everyone,

First of all, sorry for any mistakes; English is not my native language...

I'm trying to implement MCTS with UCT to build an AI capable of playing the game of Go. However, I have an important question about the algorithm. I know there are 4 phases: Selection, Expansion, Simulation and Backpropagation.

My question is about the Selection phase. I've been reading some papers, documentation and so on, and from what I understand, if a move has not been tried before, we must prioritize it before anything else. That makes sense to me, but at the same time, I don't understand how to grow the tree in depth with this rule. For example, let's suppose we have 3 possible moves. We initialize the tree with just the root node.

  • 1st playout: 0 moves. No children yet. So we randomly choose 1 of the three possible moves, add it as a child of the root node, and run a simulation from it. Then we update this new node, incrementing the visit counter and the win counter (if the playout was won).
  • 2nd and 3rd playout: We have 1 child in the root node, but there are two moves that have not been tried yet. So we must prioritize them, right? At the end of the 3rd playout, we'll have 3 children at the first level of the tree, with 1 simulation each.
  • 4th playout: Okay. Now, in the selection phase, we travel down the tree and use the UCT formula to select the first node (for example). We have three untried moves here, because no simulation has passed through this node yet (see the sketch below).
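
To check my mental model, here is a minimal sketch (in Python) of the selection rule as I understand it. The Node class, the constant C, and the function names are my own invention, not taken from any particular paper:

    import math

    C = math.sqrt(2)  # exploration constant in the UCB1 formula

    class Node:
        def __init__(self, move=None, parent=None):
            self.move = move          # the move that led to this node
            self.parent = parent
            self.children = []        # children expanded so far
            self.untried_moves = []   # legal moves not yet expanded from here
            self.visits = 0
            self.wins = 0

    def ucb1(child, parent_visits):
        # exploitation term + exploration term
        return (child.wins / child.visits
                + C * math.sqrt(math.log(parent_visits) / child.visits))

    def select(node):
        # Walk down the tree, but stop at the first node that still has
        # untried moves: those get priority over going deeper.
        while not node.untried_moves and node.children:
            node = max(node.children, key=lambda ch: ucb1(ch, node.visits))
        return node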

And here comes my question: every time we reach an unexplored node, will we have to try all 3 possible moves before going deeper? That means we will always have a branching factor of 3 for all nodes, right? However, in the following image (an example I've found on the Internet), the selected node is part of a level with only 2 of the 3 possible moves. How is that possible?

[Image: example MCTS tree diagram]

Am I doing this right, or am I missing something?

Thanks in advance!


And here comes my question: every time we reach an unexplored node, will we have to try all 3 possible moves before going deeper? That means we will always have a branching factor of 3 for all nodes, right? However, in the next image (an example I've found on the Internet), the selected node is part of a level with only 2 of the 3 possible moves. How is that possible?


What makes you think that each node in that diagram is supposed to have three available moves? As far as I can tell, your question can only be answered by referring to the original paper that contains that diagram, and I doubt that paper says the diagram represents a game that always presents three choices at every step.

Edit: Even if you're right about the diagram, I think this can still happen:

The first three nodes, if you have three choices at the beginning, will be those three choices before it goes deeper. I believe you're right about that. However, once all three have been tried once, you may have simulated 1 win and 2 losses. Now your search will start to favor the sub-tree with the win. If you continue to find mostly wins in that sub-tree, the other two choices will be ignored for a while as you go deeper into the winning choice.
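
To put some numbers on it (a made-up example using the standard UCB1 formula with c = \sqrt{2}): after those first three playouts the root has N = 3 visits, and the children's values are

    \mathrm{UCB1}(i) = \frac{w_i}{n_i} + c\sqrt{\frac{\ln N}{n_i}}

    \text{winning child: } \frac{1}{1} + \sqrt{2}\sqrt{\frac{\ln 3}{1}} \approx 2.48
    \text{losing children: } \frac{0}{1} + \sqrt{2}\sqrt{\frac{\ln 3}{1}} \approx 1.48

so the fourth playout descends into the winning child first.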
Yes, I understand that. But my question is whether the rule that prioritizes untried moves also applies in these subtrees.

One thing I think I'm also misunderstanding is the set of possible moves in these subtrees. We have 3 possible moves at the first level of the tree. In the following levels, can we have just two moves, or even three moves different from the three at the beginning? If so, my previous question is answered, but it raises a new question for me:

In the expansion phase, which moves are generated? I thought they were always the same three moves from the beginning, but if we can have different moves, how are they generated? Randomly, like in the simulation phase? Or must we track all possible moves in each node of the tree?

Thanks for your response!

I implemented UCT for Go a few years ago, but I am having a hard time understanding your language.

The way I think of it, the "expansion phase" is not a phase at all. Moves are picked from a rule like UCB1 while we have statistics to do so, and from some playout policy function when we don't. We then may want to add a new node to the UCT tree along the line we just explored (but if you only add a node after it has been searched a few times, the algorithm works just fine; it's a tradeoff of performance for memory consumption).
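
As a rough sketch of what I mean (Python, reusing the Node/ucb1 idea from your first post; the GameState methods copy, play and legal_moves stand in for whatever your engine provides, and simulate for your playout policy):

    import random

    def run_playout(root, root_state):
        # One full playout. Perspective handling (whose turn it is when
        # counting wins) is omitted for brevity.
        node, state = root, root_state.copy()

        # Pick moves with UCB1 while we have statistics to do so.
        while not node.untried_moves and node.children:
            node = max(node.children, key=lambda ch: ucb1(ch, node.visits))
            state.play(node.move)

        # Add one new node along the line we just explored. Note that its
        # moves come from the *current* game state, not from a fixed set
        # defined at the root.
        if node.untried_moves:
            move = random.choice(node.untried_moves)
            node.untried_moves.remove(move)
            state.play(move)
            child = Node(move=move, parent=node)
            child.untried_moves = list(state.legal_moves())
            node.children.append(child)
            node = child

        # Finish the game with the playout policy, then update statistics.
        result = simulate(state)
        while node is not None:
            node.visits += 1
            node.wins += result
            node = node.parent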

Although in the original description of the algorithm you do try each move once before trying any move for a second time, this detail is not important, and it's actually detrimental to the performance of the program. It's not hard to see why this is a bad idea. Imagine that you are using "heavy" playouts (e.g. the simulation phase picks moves from a non-uniform probability distribution biased towards better moves). Then a full playout (selection + simulation, in the language you are using) typically consists of a few high-quality moves (picked by the UCB1 rule based on good statistics), then a totally random move when you reach the leaf (because you need to pick an unexplored move), then reasonable moves again.
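
By "heavy" I mean something along these lines (a toy sketch; move_weight stands in for whatever heuristic you use, e.g. 3x3 patterns or capture bonuses):

    import random

    def heavy_playout_move(state):
        # Pick a move with probability proportional to a heuristic weight,
        # instead of uniformly at random as in "light" playouts.
        moves = list(state.legal_moves())
        weights = [move_weight(state, m) for m in moves]
        return random.choices(moves, weights=weights, k=1)[0]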

There are several techniques that have been proposed to fight this problem. Off the top of my head: "progressive widening" (CrazyStone?), using statistics from the grandparent node (MoGo?), or using heuristics from the heavy playouts to initialize counts when a new node is created. I believe RAVE may also help ameliorate the situation.
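
For instance, the heuristic-initialization idea can be as simple as seeding a new node with fictitious visits (a sketch; the prior_strength value and the move_weight heuristic are made up for illustration):

    def make_child(parent, move, state):
        child = Node(move=move, parent=parent)
        child.untried_moves = list(state.legal_moves())
        # Seed the statistics with "virtual" playouts so UCB1 has something
        # sensible to go on before any real playouts pass through this node.
        prior = move_weight(state, move)   # heuristic estimate in [0, 1]
        prior_strength = 10                # equivalent number of fake playouts
        child.visits = prior_strength
        child.wins = prior * prior_strength
        parent.children.append(child)
        return child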

Notice that if you are using "light playouts" (which was probably the case for the first implementation of UCT), this issue doesn't exist, because the last move in the UCT tree is then of comparable quality to the moves in the "simulation phase".

Well... I followed the approach explained in some papers and Wikipedia (for example).

http://en.wikipedia.org/wiki/Monte-Carlo_tree_search

Every playout has 4 steps, and UCT is applied at the Selection step to find a balance between exploration and exploitation. I think it's exactly what you're saying, just in different words :)

Thanks for the explanation anyway. You have clarified some things for me. I would like to try RAVE and heavy playouts, but this game is my Bachelor's final project and I don't have much time :( Anyway, when I'm done with it, I'll keep enhancing my AI in my free time.

Thanks again!

