# Advice needed on implementing transposition tables

## Recommended Posts

Quote:
 Original post by RPGeezus in this thread
P.S. Come talk to us before you implement your hash tables-- I've screwed them up enough times myself that I can probably steer you clear of a few pitfalls.
What tips can anyone provide concerning transposition tables? I already implemented a Zobrist key generator for my board-representation class; it produces two 64-bit integers using the Zobrist key method. The first question that comes to mind is: how can I store and retrieve the data efficiently? Using a big array and searching through all of it seems so inefficient. Thanks for any advice.

##### Share on other sites
Transposition tables are just hash tables.

There is little searching involved: you use the zobrist key as an index into the table to find the entry for the state with that zobrist key.

For instance, if you want to find the transposition table entry for a state with zobrist key z in a transposition table of size table_size then do something like this pseudo code:
entry := table[z modulo table_size].

This is the basic setup; it has many variations.

##### Share on other sites
I thought of this way, but wondered if someone had a more efficient fetching technique. I am thinking of making an array of vectors like this:
vector<CTableEntery*>* Table = new vector<CTableEntery*>[MAX_UINT];

then pushing the entries into the vectors, and deleting outdated ones.

Is vector suitable for this purpose? Or maybe a list?
(This is the first time I'd use the STL. I don't want to start an argument over whether the STL is good or not, but I heard somewhere that use of vector might cause memory fragmentation. Is that a real problem, especially since I will push and delete entries often?)

[Edited by - zaidgs on August 18, 2004 3:13:29 PM]

##### Share on other sites
I simply overwrite old entries when collisions occur, so I don't need lists or vectors. Anyway, I do not think memory fragmentation is something you need to worry about for this type of game.

Here's a short discussion of replacement strategies for chess.

Note: I am making an Othello game myself, not Chess.

[ Edited by mod to fix LONG, nonclickable URL ]

[Edited by - Timkin on August 18, 2004 7:50:56 PM]

##### Share on other sites
Hi Zaidgs,

Storing the hash data works as follows:

Make a really, really big array. Your first 64-bit key will be an index into this array. You'll have to use a modulus or bit-mask to make the index fit properly.

i.e. index = keyA % sizeOfHashTable;

Each element in your array will hold the following information:
long long keyA;
long long keyB;
long score;
long depth;
char type;

type is the type of score the hash represents (upper limit, lower limit, or exact), depth is the depth of search the score represents, and keyA and keyB are obviously your keys.

Forget vectors. Use a statically allocated array. Yes, it will be huge. This is normal. I read in a paper somewhere that every time you double the size of your hash table, you can expect a 7% increase in speed; this is not something I've independently confirmed.

Hash table bugs are hard to track down. The following method will help ease you in slowly, letting you verify the basics work properly.

1. Create a set of test cases. Record the first eight moves of a game, along with the scores the computer assigns to each move on the first ply. Do this for several games.

2. Create some simplified test cases-- positions that have very few pieces and exhibit unusual conditions, i.e. castling, en passant, check-mate, and stale-mate. Record the moves and any important scoring details as in 1.

Now that you are armed with test data:

1. Implement the key generation. That is, for each move generated, a corresponding hash key pair is generated.

2. Just AFTER you call your evaluation function, store the result in the hash table. The depth will be 0 and the hash type will be exact.

i.e.

myScore = StaticEvaluation ( board);

int i = board.keyA % kHashTableSize; // compute the slot index once
myHashTable[i].keyA = board.keyA;
myHashTable[i].keyB = board.keyB;
myHashTable[i].depth = 0; // for now
myHashTable[i].type = kHashExact;
myHashTable[i].score = myScore;

3. Just BEFORE your evaluation, check if there is a hash entry. If so, extract it. Run your evaluation, and compare the eval's score to the hashed score. If they are different, throw up an error.

Run through your test suites. If you get your scores-are-different error then you have a bug and must fix it.

Make sure the exact same plays, for the exact same reasons, are made. If they are not, you have a bug somewhere, and you must find out why the numbers do not match.

Be sure to check castling, check, check-mate, stale-mate, and en-passant. This WILL come back to bite you if you're not certain it's working...

At this point you are simply verifying the hashing algorithm. Once you are certain it is working, you can have it return the score from the hash table instead of calling your evaluation function. Re-run your test suite. It should still play exactly the same way as without the hashing, since we're not yet skipping anything except the evaluation.
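Steps 2 and 3 above might be sketched like this (the entry layout and the stand-in evaluator are invented for illustration; any deterministic function of the position works for exercising the plumbing):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <vector>

struct DebugEntry {
    uint64_t keyA = 0, keyB = 0;
    int score = 0;
    bool used = false;
};

// Stand-in for the real static evaluator: here, just a function of the keys.
int StaticEvaluation(uint64_t keyA, uint64_t keyB) {
    return static_cast<int>((keyA * 31 + keyB) % 2001) - 1000;
}

// Step 3's check: probe BEFORE evaluating, compare, then store (step 2).
// A mismatch means the keys identify two different positions as "the same",
// i.e. the hashing or make/unmake code is broken.
int evaluateChecked(std::vector<DebugEntry>& table, uint64_t keyA, uint64_t keyB) {
    DebugEntry& e = table[keyA % table.size()];
    int score = StaticEvaluation(keyA, keyB);
    if (e.used && e.keyA == keyA && e.keyB == keyB && e.score != score)
        std::abort();                       // the scores-are-different error
    e.keyA = keyA; e.keyB = keyB; e.score = score; e.used = true;
    return score;
}
```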

Once you are certain this is working, you can implement the full gamut of hash optimizations. My memory is not good enough for me to explain everything right now... From this point on your game may or may not play the same way, which is normal, but this is best left for another topic.

Good luck,
Will

P.S. Hash tables are not perfect. Things like stale-mates and check-mates are bound to screw it up for you, since they are usually dependent on the current move number... The hash table is not aware of this, so you will need to handle these things carefully.

##### Share on other sites
Thanks RPGeezus for your tips on debugging hash tables. I'll make sure these things work fine as you described, and when I am done I will post any further questions if there are any... (actually I think there will be :D)

Hopefully I will start implementing it tonight.

##### Share on other sites
OK, I've got one question.

But first I want to say that my transposition tables passed the test RPGeezus suggested... and also the transposition table code is small and clean, so saving and retrieving data from the table seems bug-free.

Now my question is: do people hash quiescent nodes? If so, is there any special way to treat them?

##### Share on other sites
In the checkers engine that I am making, I will be using the following for its transposition tables.

It uses two hash tables: one contains the score for an entire tree, the other contains the score for a leaf.

When it is ready to recurse to explore a subtree, it does a check of treehash; if there was no entry, it adds one afterwards.

When it is checking the leaves, it checks leafhash before evaluating (I will most likely be using a quite slow pattern-matching evaluation function, so it would be well worth it). If the entry does not already exist, it adds it after evaluation.

Now here's a little thing that I think is nifty: for each subtree in the hash table, I store its score, its activity, its ancestors (going from it back to the root), and array indexes to where all of its leaves (which could also be subtrees) are stored in the hash table.

Before the search I go through the entire subtree hash table (I need to figure out some way to stop this; doing it for a 1k hash table is OK, but for a table going into the 2+ megs...) and mark each subtree inactive (hash[i].active = 0).

Then after the search I go over it again; if hash[i].active == 0 then it is removed. That way I get to reuse most of the tree between searches!
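The mark-and-sweep reuse described above might be sketched like this (names invented here; the real engine stores more per entry):

```cpp
#include <cassert>
#include <vector>

struct TreeEntry {
    unsigned long long key = 0;
    int  score = 0;
    bool occupied = false;
    bool active = false;
};

// Before the search: mark every entry inactive.
void markAllInactive(std::vector<TreeEntry>& table) {
    for (TreeEntry& e : table) e.active = false;
}

// During the search: every entry that is probed or stored gets re-marked.
void touch(TreeEntry& e) { e.active = true; }

// After the search: evict entries the search never touched, freeing their
// slots while keeping the still-relevant part of the tree. Returns the count.
int sweep(std::vector<TreeEntry>& table) {
    int removed = 0;
    for (TreeEntry& e : table)
        if (e.occupied && !e.active) { e.occupied = false; ++removed; }
    return removed;
}
```

On the "need to figure some way to stop this" worry: a common alternative is to stamp each entry with a generation counter that is incremented once per search, and treat stale generations as evictable lazily at probe/store time, which avoids the two full-table scans entirely.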

Then again what works for others may or may not work for you.

One is glad to be of service on the first day of the ninth memenium (9000 days since January 1, 1980).
DENC

##### Share on other sites
Zaidgs: I had that same question not long ago. In this forum:

Subject:
Hash table use in Chess Quiescence Search

http://www.gamedev.net/community/forums/topic.asp?topic_id=229888

Glad to hear your hashing is up to snuff. Did you run into any problems?

Good luck,
Will

##### Share on other sites
I noticed a strange thing. In benchmarking, that is, using EPD positions to evaluate the performance of my engine (by counting matches with suggested best moves), transposition tables gave me a noticeable improvement. Yet in matches between my engine with and without transposition tables, the one without transposition tables wins most of the time. :S
I am still trying to do benchmarking and see what kinds of situations cause my engine to lose in matches.
My transposition table contains 2 million entries; is that enough?
Most of my testing is at depth 7 and 8, with null-moves on.
BTW, what is the expected improvement from implementing transposition tables, given that I use it for transpositions as well as for retrieving the best move for a position, but no enhanced transposition cutoff yet? And is the latter really useful?

[Edited by - zaidgs on August 24, 2004 5:28:39 AM]

##### Share on other sites
OK, I think I found a bug that might have caused the strange behavior: when the value returned equals alpha, I used to store it as an exact value, not an upper bound.

something like:
if ( BestEval < Alpha ) then store(Depth, Node, hfUpperBound);
else store(Depth, Node, hfExactScore);

which should be:
if ( BestEval <= Alpha ) then store(Depth, Node, hfUpperBound);
else store(Depth, Node, hfExactScore);
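For reference, the usual convention for classifying a search result before storing it can be sketched as a small pure function (illustrative, not zaidgs's actual code; the enum names follow the hf* constants above):

```cpp
#include <cassert>

enum BoundType { hfUpperBound, hfLowerBound, hfExactScore };

// Usual convention: a score that failed low (<= alpha) only bounds the true
// value from above, a score that failed high (>= beta) only bounds it from
// below, and only a score strictly inside the window is exact.
BoundType classifyForStore(int bestEval, int alpha, int beta) {
    if (bestEval <= alpha) return hfUpperBound;
    if (bestEval >= beta)  return hfLowerBound;
    return hfExactScore;
}
```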

I am going to start benchmarking all over again.

[Edited by - zaidgs on August 24, 2004 7:48:21 AM]

##### Share on other sites
Yep, you've got to be very careful. Luckily for you that was an easy bug to fix. :)

You can't really test a full hash-table implementation on just one position, since problems typically occur once the table starts getting full. Try mate-in-8 or mate-in-5 problems to stress the program.

Make sure you test check-mate (for and against), stale-mate (for and against), en passant, and castling (on both sides).

Will

##### Share on other sites
Quote:
 Original post by zaidgs
ok, i think i found a bug that might have caused the strange behavior. its that when value returned equals alpha i used to store it as exact value, not an upperbound.
something like:
if ( BestEval < Alpha ) then store(Depth, Node, hfUpperBound);
else store(Depth, Node, hfExactScore);
which should be:
if ( BestEval <= Alpha ) then store(Depth, Node, hfUpperBound);
else store(Depth, Node, hfExactScore);
i am gonna start benchmarking all-over again.

Apparently this bug was a false alarm! The code was actually set up the right way, and I created a bug when I modified it! It happens! It took me some time to track down the bug I created (well, I corrected it the same day but was late to post this).

So the strange behavior I reported still holds: it performs better on test EPD positions than in actual games...

So can someone tell me the expected boost from a 2-million-entry transposition table? I think there still must be something wrong, because I didn't get as high a boost as I thought I should!

##### Share on other sites
Some engines clear the transposition tables after each move... Is that a good practice? What things should I consider to decide on that?

##### Share on other sites
Quote:
 Original post by zaidgs
some engines clear the transposition tables after each move .... is that a good practice ?! what things i should consider to decide on that ?! .....

You should be getting major speed boosts, at least 25% faster. I made graphs showing the speed improvements each of these features made to my program. I should really post them.

I don't clear the hash table after every move. I keep track of a move number and a depth-- if move n is a deeper search than the hash, I overwrite the hash. If the move number is at least 6 moves ahead of the hash, I overwrite the hash.
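That replacement rule might be sketched like this (field and constant names are made up here; the post only names "move number" and "depth"):

```cpp
#include <cassert>

struct AgedEntry {
    int depth = -1;       // search depth the stored score came from
    int moveNumber = 0;   // game move at which it was stored
};

constexpr int kMaxAge = 6;  // "at least 6 moves ahead of the hash"

// Replace when the new search is deeper than the stored one, or when the
// stored entry is too stale to be relevant to the current game.
bool shouldReplace(const AgedEntry& old, int newDepth, int currentMove) {
    if (newDepth > old.depth) return true;
    if (currentMove - old.moveNumber >= kMaxAge) return true;
    return false;
}
```

Depth-preferred replacement keeps expensive deep results alive across moves, while the age check stops the table from filling up with entries the game has already moved past.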

Will

##### Share on other sites
I know that hash tables should give a high boost, but I don't know what's going wrong; I'm not getting it. According to my benchmarking, the ELO increased by only about 50 points! My engine's rating on chessanytime.com was oscillating between 2050 - 2080;
now it is oscillating between 2100 - 2115.
Also, on a 360-position test (Chess IQ Test) with 10 seconds per position, without TT it solves 106, but with TT it solves 117. I should dig deeper to see why there is no obvious improvement.

##### Share on other sites
I recall seeing some computer chess tests that involved king-pawn endgames. The result was mate in 31.

If your transposition tables are working, you should be able to solve it in a few seconds. Even if you can't solve it in 10 seconds, you'll know the hash tables are working if it goes from depth 2 to 25 in about 2 seconds. :)

Sorry, but I no longer have the link.

Will

##### Share on other sites
I think you mean something like this. My engine solved it in 3 seconds at depth 25.

To ensure the integrity of my data saving and retrieval, I verified the data like this:
if ( CurrentBoardInHash() ) {
  if ( TTEntry.Type == ExactScore ) {
    assert( alphabeta(CurrentBoard, TTEntry.Depth, alpha=-infinity, beta=+infinity) == TTEntry.Score ); // hashing disabled
  }
  if ( TTEntry.Type == MinScore ) {
    assert( alphabeta(CurrentBoard, TTEntry.Depth, alpha=-infinity, beta=+infinity) >= TTEntry.Score ); // hashing disabled
  }
  if ( TTEntry.Type == MaxScore ) {
    assert( alphabeta(CurrentBoard, TTEntry.Depth, alpha=-infinity, beta=+infinity) <= TTEntry.Score ); // hashing disabled
  }
}

So I would assume once again that data storage and retrieval is bug-free.

I also thought of a different possible reason for the inefficiency of transposition tables in my engine: a collision rate higher than normal. But I think it's normal, according to a test I did.
The test was that I measured the collision rate when the TT was 30% full. In such a case it is easy to conclude that the collision rate should also be about 30%, and according to the statistics the collision rate was only 23%, indicating that my indexing is highly randomized.

Also, around 27% of visited positions had an entry found for them in the TT.
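The reasoning above can be sanity-checked with a quick simulation (illustrative, not from the engine): with uniformly random indexing, the chance that a fresh key maps to an occupied slot equals the load factor, so roughly 30% collisions is what a well-randomized Zobrist scheme predicts at 30% occupancy. A noticeably lower measured rate suggests the probed keys are correlated rather than that the indexing is "better than random".

```cpp
#include <cassert>
#include <random>
#include <vector>

// Fill a table to a given load factor with random keys, then measure how
// often a fresh random key lands on an occupied slot. With uniform indexing
// that hit rate should simply equal the load factor.
double collisionRateAtLoad(double load, size_t tableSize, int trials) {
    std::vector<bool> occupied(tableSize, false);
    std::mt19937_64 rng(12345);                    // fixed seed, reproducible
    size_t target = static_cast<size_t>(load * tableSize);
    for (size_t filled = 0; filled < target; ) {
        size_t idx = rng() % tableSize;
        if (!occupied[idx]) { occupied[idx] = true; ++filled; }
    }
    int hits = 0;
    for (int i = 0; i < trials; ++i)
        if (occupied[rng() % tableSize]) ++hits;
    return static_cast<double>(hits) / trials;
}
```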

PS:
Quote:
 Original post by RPGeezus
I made graphs showing the speed improvements each of these features made to my program. I should really post them.

true. true. true!

[Edited by - zaidgs on August 27, 2004 1:57:35 PM]

##### Share on other sites
RE: Lasker Position: That is the exact board setup I was thinking of.

3 seconds is very good. I think it's fair to say that, generally, your code is working properly.

How are you calculating the hash keys for a board? Just a shot in the dark, but maybe you're not being efficient, and the time it takes to generate the keys is robbing you of the speed boost.

Will

##### Share on other sites
I just want someone to verify that my enhanced transposition cutoff code is actually correct.

for each successor node that is found in the hash table {
  if ( hash.depth >= depth-1 && -hash.score >= beta && hash.type != hfFailHigh )
    return -hash.score;
}

##### Share on other sites
Can anyone confirm that the ETC code is correct?
