Hashing a Connect-4 board (for a transposition table)... need some help :)


Ok thanks, you are right. I will try to figure it out on my own for a while. I am sure that one way or another it will work, even if the implementation won't be the best.


EDIT: I rewrote this post. I've managed to get it working! Now I have Zobrist hashing and the speed gain is very nice. Thanks a lot, Alvaro, for the help :)

But I have another question related to transposition table management. Would you recommend using some sort of replacement scheme for the same position at different depths? Say, if the new position has a greater depth than the saved one, I should replace the saved one with the new one. Because right now I am not doing any replacement; I am just inserting positions blindly into the table.

I am handling collisions by using a LinkedList at every table index, so if two board positions fall at the same index, I just add them both to the list.
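
To make it concrete, here is a minimal sketch of the kind of Zobrist hashing and compression function I mean (Java; the class name, field names and seed are just placeholders, not my actual code):

import java.util.Random;

// A sketch of Zobrist hashing for a 7x6 Connect-4 board: one random 64-bit
// number per (cell, player), XORed together for the occupied cells.
final class ZobristSketch {
    static final int CELLS = 42;      // 7 columns x 6 rows
    static final int PLAYERS = 2;
    static final long[][] ZOBRIST = new long[CELLS][PLAYERS];

    static {
        Random rng = new Random(123456789L);           // fixed seed so the keys are reproducible
        for (int cell = 0; cell < CELLS; cell++)
            for (int p = 0; p < PLAYERS; p++)
                ZOBRIST[cell][p] = rng.nextLong();
    }

    // XOR the number in when a disc is dropped; XOR it again to undo the move.
    static long togglePiece(long hash, int cell, int player) {
        return hash ^ ZOBRIST[cell][player];
    }

    // The compression function: map the 64-bit key to a table index.
    // Masking the sign bit keeps the index non-negative in Java.
    static int indexOf(long hash, int tableSize) {
        return (int) ((hash & 0x7fffffffffffffffL) % tableSize);
    }
}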

And what size of transposition table do you recommend using? Right now I have a table of size 500009, but when I measure the number of insertions into the table, I get more than 3 million! (depth 15)

If your search is fast at all, you won't be able to keep all the results in memory. The way a transposition table typically works, it has a fixed size and at some point you'll have to overwrite entries and forget things. There are several replacement schemes you can use to decide what to throw out. I have always had good results with size-4 buckets. See this page for more info.

The size of the transposition table should be determined by the available RAM in your computer. Make it as big as you can without forcing the OS to swap memory pages. If you have 4 GB of memory, you should be able to use 2 GB for the transposition table without any trouble.
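
For example, assuming an entry of roughly 16 bytes (an 8-byte key plus score, depth, bound type and move), a 2 GB table holds about 2^31 / 16 = 134,217,728 entries, far more than the 3 million insertions you measured at depth 15.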

Thanks, this is exactly what I needed! But how do I know when the best time is to start replacing entries in the table? Regardless of the replacement scheme I use, how should I technically remove old entries from the table? Should I iterate over the whole array and start deleting items?

EDIT: I was getting out-of-memory errors! So now my implementation uses one array of size 500009, where each index contains a LinkedList (for chaining when a collision occurs).

So right now, when a linked list has more than 2 items in it, I just clear it. But I don't know whether my approach is good or not.

Each index should contain an entry and nothing else. You need to decide what locations an entry might be stored in (I use buckets of 4 locations) and then, when you want to store some information, you might find that all those locations are occupied by other stuff: That's when you overwrite something with the new entry.

Thanks for your patience, Alvaro... very much appreciated! But I don't understand: what do you mean by 4 locations? (By "location" do you mean an index in the array? Should I store the same entry at 4 different indexes?) Suppose I have an array where each index (is this what you mean by a bucket?) contains only a single Entry object (the Entry object holds all the details like the Zobrist key, depth, score, etc.), and the index the Entry is stored at is picked by the compression function (Zobrist key % size of array). My logic tells me that I can store only one Entry object per index (instead of a list of several Entries),

and when a collision occurs, I can decide whether to keep the already saved entry or to overwrite it for the sake of the new entry (the new entry might be the same position, or a different one that only fell at the same index because of the compression function).

Am I heading in totally the wrong direction here?

EDIT: Right now I am experimenting with an array of size 1000003, and every time a position collides at an index that already has a position stored in it, I simply replace whatever is stored, no matter what it is. I will try to improve it with replacement by depth, but is it at least acceptable?
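
For reference, a minimal sketch of this "always replace" scheme, assuming a 64-bit Zobrist key and one Entry per index (the Entry fields and names are simplified, not my actual code):

final class AlwaysReplaceTable {
    static final int TABLE_SIZE = 1_000_003;
    private final Entry[] table = new Entry[TABLE_SIZE];

    private static int indexOf(long key) {
        return (int) ((key & 0x7fffffffffffffffL) % TABLE_SIZE);  // keep the index non-negative
    }

    void store(long key, int depth, int score) {
        table[indexOf(key)] = new Entry(key, depth, score);       // overwrite whatever is already there
    }

    Entry probe(long key) {
        Entry e = table[indexOf(key)];
        return (e != null && e.key == key) ? e : null;            // full-key check avoids using a colliding position
    }

    static final class Entry {
        final long key;
        final int depth, score;
        Entry(long key, int depth, int score) { this.key = key; this.depth = depth; this.score = score; }
    }
}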

Naming convention:
* Entry: A structure containing hash key, depth, score, bound type, best move, age...
* Bucket: A group of a few consecutive entries (say, 4).

There are many possible replacement schemes (I gave you a link earlier to a Chess Programming Wiki page about them). The one I use (sketched in code below) consists of computing n = (hash_key % table_size) & ~3 and storing the entry in the "weakest" of entries n, n+1, n+2 and n+3. By "weakest" I mean:
(1) If the hash_key of the stored entry matches the hash_key of what we are storing, that is the weakest entry.
(2) Empty entries are weakest.
(3) Entries with a value of age that is not the current age (meaning they are from a previous search) are weaker than entries from the current search.
(4) For entries that are not distinguished by the previous criteria, lower depth is weaker.

When you want to retrieve data from the transposition table, you need to search all four locations before giving up. If you align things correctly, all four entries can fit in a single cache line, which means you get the extra memory accesses essentially for free.

If I were using this scheme, I would pick a table size that is a multiple of 4, of course.
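
Put together, a sketch of this scheme might look something like the following (the exact Entry fields and the age bookkeeping are just one possible way to do it, not a definitive implementation):

final class BucketedTable {
    private final Entry[] table;
    private final int size;          // kept a multiple of 4 so a bucket never runs off the end
    private int currentAge;

    BucketedTable(int requestedSize) {
        this.size = requestedSize & ~3;
        this.table = new Entry[size];
    }

    private int bucketStart(long key) {
        return (int) ((key & 0x7fffffffffffffffL) % size) & ~3;
    }

    void store(long key, int depth, int score) {
        int n = bucketStart(key);
        int weakest = n;
        for (int i = n; i < n + 4; i++) {
            Entry e = table[i];
            if (e != null && e.key == key) { weakest = i; break; }   // (1) same position: reuse its slot
            if (weaker(e, table[weakest])) weakest = i;
        }
        table[weakest] = new Entry(key, depth, score, currentAge);
    }

    // Returns true if a is a "weaker" (more disposable) entry than b.
    private boolean weaker(Entry a, Entry b) {
        if (a == null) return true;                                  // (2) empty entries are weakest
        if (b == null) return false;
        boolean aOld = a.age != currentAge, bOld = b.age != currentAge;
        if (aOld != bOld) return aOld;                               // (3) entries from a previous search are weaker
        return a.depth < b.depth;                                    // (4) otherwise, lower depth is weaker
    }

    Entry probe(long key) {
        int n = bucketStart(key);
        for (int i = n; i < n + 4; i++) {                            // search all four locations before giving up
            Entry e = table[i];
            if (e != null && e.key == key) return e;
        }
        return null;
    }

    void newSearch() { currentAge++; }                               // bump the age instead of clearing the table

    static final class Entry {
        final long key;
        final int depth, score, age;
        Entry(long key, int depth, int score, int age) {
            this.key = key; this.depth = depth; this.score = score; this.age = age;
        }
    }
}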

Hi. I just wanted to say that so far I have implemented the transposition table with Zobrist keys and also a killer-move heuristic. All that, combined with bitboard logic, and my program runs super fast: only 2-3 seconds (maybe 4 on the first move only) at depth 15! When comparing this with the time the calculation takes without the TT and killer moves, the difference is HUGE (it takes forever). So thank you all, and especially Alvaro, for all the help on all these subjects. Thanks to you I have learned a lot and have a nice working engine :)

Just one final question related to this subject: I am applying the transposition table and killer moves only in the "inner" negamax function, not in the "root" negamax where I go through the root moves (the "root" negamax calls negamax on every root move, and negamax then calls itself recursively). Should I implement the TT and killer-move operations in the root negamax as well?
I also clear the hash table and the killer-move table after every move. Is this necessary? My TT replacement scheme is "always replace" at the moment, not by depth... so maybe clearing the hash table every move is not necessary.

Clearing the TT makes your program forget a lot of what it found in previous searches, some of which is still useful. So one should generally not do it.

The main reason to clear the TT is that it is much easier to debug problems if they can be reproduced, and clearing the TT and other data structures (e.g. killer moves, history heuristic) makes your searches reproducible.

This topic is closed to new replies.
