# Transposition Table question


### #1 gfrommer (Members, Reputation: 138)


Posted 30 July 2008 - 12:37 PM

Hello everyone, I have some questions about my transposition tables. One question is this: is a transposition entry more valuable if it was cached at a level closer to the root than to the leaf nodes? Sometimes I wonder if how I refer to the root/leaf nodes is different from how others do. I consider zero the root node, and the maximum search depth the ply of the leaf nodes. With that in mind, in a hypothetical 6-ply minimax search, it would make more sense that a move cached at level 1 would be of more value than a move cached at level 6, because the score of the move at level 1 is calculated based on all of the many leaf nodes in the tree below it. Is that the proper thinking? Check out this pseudocode, which reflects my thinking:

```
hashmap TransTable = new hashmap()

score CheckTranspositionTable(zobkey) {
    if (TransTable.get(zobkey) != null) {
        entry = TransTable.get(zobkey)
        return entry->score
    }
    return null
}

void cacheTransposition(zobkey, score, depth) {
    if (TransTable.get(zobkey) != null) {
        entry = TransTable.get(zobkey)
        // Don't overwrite an existing entry cached at a lower depth
        if (entry->depth < depth) {
            return
        }
    }
    entry = new cacheEntry()
    entry->score = score
    entry->depth = depth
    TransTable.put(zobkey, entry)
}
```

Thanks, everyone
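For reference, the pseudocode above translates into a runnable sketch along these lines (C++, with `std::unordered_map` standing in for the hashmap; all names are illustrative, and the depth convention is the post's own, where smaller depth means closer to the root):

```cpp
#include <cstdint>
#include <unordered_map>

struct CacheEntry {
    int score;
    int depth; // distance from the root, as in the post (0 = root)
};

std::unordered_map<uint64_t, CacheEntry> trans_table;

// Returns true and fills *score if the position is cached.
bool check_transposition(uint64_t zobkey, int* score) {
    auto it = trans_table.find(zobkey);
    if (it == trans_table.end()) return false;
    *score = it->second.score;
    return true;
}

// Keep the entry whose depth is closer to the root (smaller value),
// matching the replacement policy in the post's pseudocode.
void cache_transposition(uint64_t zobkey, int score, int depth) {
    auto it = trans_table.find(zobkey);
    if (it != trans_table.end() && it->second.depth < depth) return;
    trans_table[zobkey] = {score, depth};
}
```

Note that, as the replies below point out, a real entry needs more fields than score and depth.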

### #2 Álvaro (Crossbones+, Reputation: 19956)


Posted 30 July 2008 - 03:20 PM

People usually think of depth as the number of levels deeper into the subtree we still want to search, not as the distance from the root. That's why I would preferentially keep entries with large depths. From now on, let's use depth the way everyone else does.

You understood the basic idea of transposition tables, but your code has a couple of problems.

The first problem is easy to understand: you should only use the score stored in the hash table if the current search's depth is less than or equal to the depth stored in the hash-table entry. You may have found this node before and analyzed it to depth 3, but if you now need to analyze it to depth 5, your old result is not very meaningful.

The second problem is harder conceptually. Your hash entries are missing an important piece of information: bounds. An alpha-beta search has 3 possible outcomes:
1) The true score is alpha or below, and the score returned is an upper bound of the true score.
2) The true score is between alpha and beta, and it's returned exactly.
3) The true score is beta or above, and the score returned is a lower bound of the true score.

You should store which of these cases we are in, together with the score that alpha-beta returns, and the depth. That way, when we find this position in some other part of the search, if the depth in the hash table entry is too small, we won't use the score at all. If the depth is large enough, we have to look at the field that tells us which of the 3 cases above we were in. Case 2 is the easy one, where we can simply return the score. In case 1, we can improve beta if the score is below beta. In case 3, we can improve alpha if the score is above alpha. If after these checks you find out that alpha >= beta, then you can return almost anything (in my chess program I return alpha).

There is at least one more piece of information that you should store in your hash table entries: Which move was found to be the best. This is either the move that returned the best score, or the one that produced a beta cut. This information can be used when you find this position again, even if the new search is deeper than what you stored: You simply try that move first when you loop through all available moves. This will significantly improve your move ordering and therefore the efficiency of alpha-beta.
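Putting the last two points together, the storing side might look like the following sketch (the types, names, and move encoding are my own illustration, not Álvaro's actual program): at the end of a node's search, classify the result into one of the three cases by comparing the best score against the original alpha and against beta, and record it along with the depth and the best move.

```cpp
#include <cstdint>
#include <unordered_map>

enum ScoreType { exact_score, lower_bound, upper_bound };

struct HashEntry {
    int score;
    int depth;      // remaining search depth, as used in the replies above
    ScoreType type; // which of the three alpha-beta outcomes applied
    int best_move;  // encoded move; encoding is hypothetical
};

std::unordered_map<uint64_t, HashEntry> table;

// Called after searching a node. alpha_orig is the alpha we entered the
// node with; best is the best score found; beta is the current beta.
void store(uint64_t key, int depth, int alpha_orig, int beta,
           int best, int best_move) {
    HashEntry e;
    e.score = best;
    e.depth = depth;
    e.best_move = best_move;
    if (best <= alpha_orig)
        e.type = upper_bound;  // case 1: failed low
    else if (best >= beta)
        e.type = lower_bound;  // case 3: beta cut
    else
        e.type = exact_score;  // case 2: inside the window
    table[key] = e;
}
```

The stored `best_move` is what you try first on a later visit to improve move ordering, even when the stored depth is too small to reuse the score.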

I can provide some pseudo-code if anything is not clear enough.

### #3 gfrommer (Members, Reputation: 138)


Posted 03 August 2008 - 08:22 AM

What a great reply! It makes good sense. I have a question about what to do when interpreting those three special cases resulting from the AB pruning. You mentioned:

"In case 1, we can improve beta if the score is below beta. In case 3, we can improve alpha if the score is above alpha. If after these checks you find out that alpha >= beta, then you can return almost anything (in my chess program I return alpha)."

Does this mean rerunning another minimax search on the move stored in the transposition table, with the new search's upper or lower bound set to the saved score (depending on which case we are in)? What would the depth of those searches be? Would it be the maximum search depth minus the depth recorded in the transposition table?

Thanks! This helps very much!! Karma points+++

### #4 Álvaro (Crossbones+, Reputation: 19956)


Posted 03 August 2008 - 08:47 AM

Let me see if I can explain it with pseudo-code:

```cpp
int NegaMax(Position P, int depth, int alpha, int beta) {
  if (depth <= 0) {
    return QuiescenceSearch(P, alpha, beta); // For some games, simply return the static evaluation here
  }

  // Look this position up in the hash table
  HashEntry hash_entry;
  bool hash_hit = query_hash(P.hash_key(), &hash_entry);
  if (hash_hit && hash_entry.depth >= depth) {
    switch (hash_entry.score_type) {
    case lower_bound:
      if (alpha < hash_entry.score)
        alpha = hash_entry.score;
      break;
    case upper_bound:
      if (beta > hash_entry.score)
        beta = hash_entry.score;
      break;
    case exact_score:
      return hash_entry.score;
    }
    if (alpha >= beta)
      return hash_entry.score;
  }
  // ... continue search normally
}
```

Feel free to ask if anything is still not clear.

### #5 gfrommer (Members, Reputation: 138)


Posted 03 August 2008 - 09:17 AM

AHHH!! I gotcha... so you're saying that if the exact minimax value is not cached in the transposition table, but an alpha or beta bound is cached instead, I run the search as I normally would but adjust the alpha or beta limit according to what was in the cache. This is very cool! Thanks.

I think when I started working with trans-tables, I forgot that when using AB pruning you are no longer assured of true minimax values for INTERNAL nodes, just the ROOT node. Also, you've cleared up my root-depth=0 confusion :)
-----------------

I'm still working on my conceptual understanding of AB pruning. Perhaps you can help:

Alpha = the minimum score that MAX is assured of
Beta = the maximum score that MIN is assured of

When beta < alpha, the current position cannot be the result of best play, and can be pruned from consideration. Are these definitions correct? It's the beta-cut part that I'm having trouble conceptualizing. So if MIN's maximum attainable score is less than MAX's minimum attainable score, then that results from bad game play? That doesn't seem right. I feel like this is word soup... but I'm getting close to the deeper understanding, I'm sure of it!

Thanks again


### #6 Álvaro (Crossbones+, Reputation: 19956)


Posted 03 August 2008 - 01:36 PM

I can try to explain alpha-beta in my own words, and see if that helps you.

We are writing a function that, given a subtree of the game and two values alpha and beta, will return a number s which is:
* an exact score if alpha < s and s < beta,
* an upper bound if s <= alpha,
* a lower bound if s >= beta.

In order to do that, you look at the subtree corresponding to each child of the root, and you recursively call alpha-beta on each child, with carefully chosen values for alpha and beta of the new searches so you extract the information you want.

So my interpretation is:
* alpha: a value such that if the true score is alpha or below, we don't really care what it is exactly.
* beta: a value such that if the true score is beta or above, we don't really care what it is exactly.

The reason we don't care is that we know that, in previous moves, the players had options such that if the true score is outside (alpha, beta), this branch won't be chosen as the best and its score won't propagate to the root.

Several refinements to alpha-beta use null-window searches (where beta == alpha+1), which are a way of deciding whether the true score is better than alpha.
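To see both the window propagation and a null-window probe in code, here is a minimal self-contained sketch (the `Node` type and the sentinel bounds are my own illustration, not from the thread; a real engine searches positions, not an explicit tree):

```cpp
#include <algorithm>
#include <vector>

struct Node {
    int value;                  // leaf score, from the side to move's view
    std::vector<Node> children; // empty => leaf
};

// Fail-soft negamax: the return value may lie outside (alpha, beta),
// in which case it is only a bound, exactly as described above.
int negamax(const Node& n, int alpha, int beta) {
    if (n.children.empty()) return n.value;
    int best = -1000000;
    for (const Node& child : n.children) {
        int score = -negamax(child, -beta, -alpha); // negated, flipped window
        best = std::max(best, score);
        alpha = std::max(alpha, score);
        if (alpha >= beta) break; // beta cut: remaining siblings pruned
    }
    return best;
}

// Null-window probe: with the window (guess, guess+1), no score can fall
// strictly inside the window, so every node fails high or low. The search
// is cheap but only answers "is the true score better than guess?"
bool better_than(const Node& n, int guess) {
    return negamax(n, guess, guess + 1) > guess;
}
```

With a null window, the "exact score" case can never occur; everything is a bound, which is exactly why the probe is cheaper than a full-window search.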

A word of caution: there are many things you would expect to be true about alpha-beta searching which go out the window once you introduce transposition tables. For instance, you would expect that repeating a search gives the same result, or that if a search told you the true score of a subtree is above 50, a new search with the window (alpha=50, beta=infinity) would return something larger than 50. These things may not hold, because the transposition table will not be in the same state between calls. However, transposition tables are such a huge performance boost that they are worth all the insanity that comes with them. This insanity is usually called "search instability". Another problem that transposition tables bring with them is "graph history interaction".

### #7 gfrommer (Members, Reputation: 138)


Posted 06 August 2008 - 06:52 AM

Another great reply, Alvaro, thanks! I forgot that the whole point of alpha-beta is to cut branches that would never propagate to the root... Any branch that does not propagate its value up to its parent node is a less-than-best move, which we can prune if we can prove there is a better alternative.

One more question regarding transposition tables: does it make sense to preload a transposition table with moves before I run my iterative-deepening minimax AB search? My game runs in a web browser, so I've made it stateless, and each time a new move is submitted, the processing starts with an empty transposition table. What are the issues with loading/saving transposition cache results? Is it even worth my time to load a couple hundred megs of TT data before I start the search?

Thanks again for the great reply!


### #8 Álvaro (Crossbones+, Reputation: 19956)


Posted 06 August 2008 - 11:03 AM


I don't think it's worth the effort. I decided to start with a clean transposition table every move even in my regular chess program, because it made debugging easier (should I say "possible" instead?). You are leaving very little performance on the table. Another disadvantage of keeping the TT around is that time control becomes much harder: the first searches, up to the depth you reached last time minus 1 or 2, finish instantaneously, and then you get stuck in a high-depth search with no idea how long it will take to finish. If your TT starts empty, each depth costs about a constant factor times the time it took to finish the previous searches.
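That last point about time control can be sketched numerically: if each depth costs roughly a constant factor more than the one before, the next iteration alone costs about (factor - 1) times everything spent so far. A hedged illustration (the default factor of 3.0 is an assumed effective branching factor, not a measurement):

```cpp
// Decide whether to start the next iterative-deepening depth. With an
// empty TT, iteration times grow roughly geometrically, so the next
// depth alone will cost about elapsed_total * (factor - 1). Start it
// only if the total would still fit inside the time budget.
bool start_next_depth(double elapsed_total, double time_budget,
                      double factor = 3.0) {
    double predicted_next = elapsed_total * (factor - 1.0);
    return elapsed_total + predicted_next <= time_budget;
}
```

With a preloaded TT this estimate breaks down, since the early iterations return almost instantly and tell you nothing about the cost of the next one.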
