# Game Tree Search with the MTD(f) Algorithm


## Recommended Posts

Hello, I'm having trouble implementing the MTD(f) algorithm correctly. I want to use iterative deepening so that I can take the best moves from previous iterations and search them first when the search goes one ply deeper. What I don't know is which values I can safely reuse from my transposition table. I have a hash table whose key is computed from the board position.

With iterative deepening, MTD(f) is called several times within one deepening iteration to home in on the minimax value. At the moment I swap my transposition table for a fresh one after each call of MTD(f), because when MTD(f) searches with a different bound I can't reuse the stored values (I get wrong results, presumably because of the different bounds?). I only reuse the best moves from the swapped table (the table from the previous MTD(f) pass). Is it OK to reuse the best moves of a previous MTD(f) pass? If so, when iterative deepening goes one ply deeper, I can only take the best moves from the last MTD(f) call of the previous deepening iteration.

So, my question: can I reuse any values from an MTD(f) search? Even if I also stored bounds in my transposition table, I don't think it would help, because in the next deepening iteration they would all come from a shallower depth and couldn't be reused. I hope someone can explain how this should be implemented. Thanks.

##### Share on other sites
For each node in my transposition table I store two values, reverseDepth and turnNumber. reverseDepth tells how far the node is from the evaluated leaves its score came from (leaves get a reverseDepth of 0, nodes just above the leaves a value of 1, and so on). turnNumber is used to age out old records and make way for newer, hopefully more relevant ones.

Each transposition table node also stores a type, which tells whether the associated score is a lower bound, an upper bound, or the exact score of the node.

To properly reuse the data in the transposition table, you need to check when reading an entry that it comes from a sufficiently deep search. For example, suppose you're doing a 5-ply search and you're traversing a node at depth 3, i.e. 2 plies away from the leaves. When you probe the transposition table, you must make sure the entry there has reverseDepth >= 2. If it is <= 1, the result comes from too shallow a search and is not reliable.

After that, making the AI ponder during the opponent's turn becomes a lot easier: it can "warm up" the cache by filling in records from deeper and deeper searches until the human player moves. When the AI's turn comes, the shallow searches will typically finish with only a few cache lookups.
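The scheme described above can be sketched as follows. This is a minimal illustration, not the poster's actual code: the names (TTEntry, reverse_depth, turn_number, the bound-type constants) mirror the post's terminology, and the replacement policy is one plausible reading of it.

```python
from dataclasses import dataclass

LOWER, UPPER, EXACT = 0, 1, 2  # bound type of a stored score

@dataclass
class TTEntry:
    key: int            # full position key, to detect index collisions
    score: int
    bound_type: int     # LOWER, UPPER, or EXACT
    reverse_depth: int  # distance from the evaluated leaves (leaf = 0)
    turn_number: int    # age, used to prefer replacing stale entries

class TranspositionTable:
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size

    def store(self, key, entry):
        i = key % self.size
        old = self.slots[i]
        # Replace if the slot is empty, holds the same position,
        # the new result is at least as deep, or the old entry is stale.
        if (old is None or old.key == key
                or entry.reverse_depth >= old.reverse_depth
                or entry.turn_number > old.turn_number):
            self.slots[i] = entry

    def probe(self, key, needed_depth):
        e = self.slots[key % self.size]
        # Only trust an entry for the same position whose result comes
        # from a search at least as deep as the one we are about to do.
        if e is not None and e.key == key and e.reverse_depth >= needed_depth:
            return e
        return None
```

The depth check in `probe` is exactly the reverseDepth >= 2 test from the 5-ply example: an entry from a shallower subtree search is simply treated as a miss.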

##### Share on other sites
The entire point of all iterative-deepening alpha-beta derivatives is that you want to store information in the hash table that will be most useful for generating a fast cutoff next time.

Everything else is secondary. The cost of re-searching the principal variation is trivial compared to the cost of poor move ordering.

This is true in full window AlphaBeta, narrow window AlphaBeta, and null window AlphaBeta.

Now, if you are storing information about the 'score' of a search in the table, you need to keep it in context: in the case of MTD(f), which uses null-window alpha-beta, you will almost NEVER have the true score, only an upper or lower bound on it.

##### Share on other sites
Quote:
 Original post by Rockoon1
 The entire point of all iterative deepening alpha-beta derivatives is that you want to store information in the hash which will be most useful for generating a fast cutoff next time. [...] in the case of MTD(f), which uses a null window alphabeta, you will almost NEVER have the true score, but instead only an upper or lower bound on it.

Yes, I know that MTD(f) returns only bounds because of the null-window search. But from which search do I take the best moves for the next iterative deepening iteration? MTD(f) calls alpha-beta several times within each deepening iteration, so which of those alpha-beta passes should supply the move ordering for the next iteration?

Quote:
 To properly reuse the data in the transposition table, you need to check when reading the entry that it is from a sufficiently deep search. [...]

Sorry, I didn't mention that I'm only searching a single position once, not using the algorithm over the course of a game. So I won't have earlier searches that already went deeper.

##### Share on other sites
Quote:
 Original post by flush
 Yes, I know that MTD(f) returns only bounds because of the null window search. But from which search do I take the best moves for the next iterative deepening search?

Hmm, I think there may be a flaw in your design if you have multiple results for a specific move in your hash table entry.

##### Share on other sites
Quote:
 Original post by flush
 Sorry, I didn't tell you that I'm searching only one position once and do not use the algorithm in a game. So I will not have other searches that have gone deeper before.

You do have information from searches of the same depth, because each iteration of your IDS can spawn multiple MTD(f) searches.

##### Share on other sites
I'm storing the board itself in my hash table, so I can be sure I have the same position. I replace an entry if it is for the same position (when the transposition table didn't return it because the node wasn't searched to the current search depth), or if the new entry comes from a deeper search.

##### Share on other sites
Another question I have is about my hashing. I don't want to start a new topic, so I'll ask here. I'm hashing by division: h(k) = k mod m, where k is the key and m is the transposition table size, a prime not close to a power of two.

Now, if I use a small table size m like 8306069 (23 bits), performance is much better than with a size like 134217689 (27 bits). My key uses 49 bits. Is there a limit I have to consider? And if I use a larger m, should the key be larger too?
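The thread never answers this, but the division hashing described can be sketched as below. One note, offered as an assumption about the poster's setup rather than something established in the thread: the slowdown with the larger m is usually a cache effect (a 27-bit table no longer fits in CPU cache), not a property of the hash function, and a 49-bit key works fine with either table size as long as the full key is stored for verification. The names h, store, and probe are made up for illustration.

```python
M = 8306069  # the 23-bit prime table size from the post

def h(k):
    """Hash by division: h(k) = k mod m."""
    return k % M

# Two distinct keys can share an index; storing the full key in the
# slot (as the poster does with the board) disambiguates them.
table = {}

def store(key, value):
    table[h(key)] = (key, value)

def probe(key):
    slot = table.get(h(key))
    if slot is not None and slot[0] == key:  # verify the full key
        return slot[1]
    return None
```

Storing the full 49-bit key (or the board, as above) in each slot is what makes a small index safe: the index only decides where to look, while the stored key decides whether the hit is real.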
