
# Hashing Connect-4 board (for transposition table).. need some help :)

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

82 replies to this topic

### #21 Álvaro (Members)

Posted 01 July 2013 - 07:48 PM

Here's the code again:

```cpp
u64 encode(Position const &p) {
    u64 occupied = p.white | p.black;
    return ((occupied << 1) | BOTTOM_ROW) ^ p.black;
}
```

Hopefully it is clear what `occupied` means. `occupied << 1` is the set of squares directly above an occupied square. I compute the union of that and the bottom row (which is a constant), and now I have the set of all squares that are either occupied or sitting on top of an occupied one. XORing that with the set of black pieces sets the places with black pieces to 0, leaving the unique number that I described earlier in this thread.
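As a quick sanity check, here is a minimal Java sketch of that encoding. The bit layout (Tromp's 7x7 board with bit `7*col + row`) and the names `Encode49`, `BOTTOM_ROW` are assumptions for illustration, not from the original code:

```java
// Sketch of the unique-key encoding, assuming the 49-bit layout where
// bit (7*col + row) holds row `row` of column `col`. BOTTOM_ROW marks
// the bottom square (row 0) of each of the 7 columns.
public class Encode49 {
    // bits 0, 7, 14, 21, 28, 35, 42
    static final long BOTTOM_ROW = 0x40810204081L;

    static long encode(long white, long black) {
        long occupied = white | black;
        // squares directly above an occupied square, plus the bottom row,
        // form the "surface"; XORing black turns the black squares off
        return ((occupied << 1) | BOTTOM_ROW) ^ black;
    }
}
```

For example, the empty board encodes to `BOTTOM_ROW` itself, and a lone white disk versus a lone black disk on the same square give two distinct keys, which is the uniqueness the trick relies on.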

Is that clear now?

Remember to do something about what I mentioned at the end of post #17.

### #22 patishi (Members)

Posted 01 July 2013 - 08:04 PM

OK, thanks. Yeah, I chose a prime number for the table size (do you think 500009 is enough?), and as for the bit-mixing part, I suppose the function you posted already does that(?).

By the way, I do keep the lowest free square of each column in an array called height[] (size 7), but I am only using exactly 42 bits for the board, unlike John Tromp's 49-bit representation. When I make a move I update height[n]++, and when I pass the topmost square of a column, height[n] becomes equal to the bottom square of the next column (this is also how I check whether a column is full). In other words, I don't have the 7 extra dummy bits.

So I guess the method you showed above still works in my case..?

Edited by patishi, 01 July 2013 - 08:07 PM.

### #23 Álvaro (Members)

Posted 01 July 2013 - 08:24 PM

> OK, thanks. Yeah, I chose a prime number for the table size (do you think 500009 is enough?), and as for the bit-mixing part, I suppose the function you posted already does that(?).

No, it does not. But you might not need it because you are using a non-power-of-2 table size.

> By the way, I do keep the lowest free square of each column in an array called height[] (size 7), but I am only using exactly 42 bits for the board, unlike John Tromp's 49-bit representation. When I make a move I update height[n]++, and when I pass the topmost square of a column, height[n] becomes equal to the bottom square of the next column (this is also how I check whether a column is full). In other words, I don't have the 7 extra dummy bits.

> So I guess the method you showed above still works in my case..?

No, in that case the code I posted is useless to you. The whole point of the 49-bit representation is to allow that trick.

Alternatively, you can still use a traditional Zobrist key, which would work just fine. Simply declare a table `long zobrist_table[42][2]`, initialize it with random 64-bit numbers, and XOR together the numbers corresponding to the pieces present on the board. The resulting value can be updated incrementally when a disk is placed or removed, and it's quite cheap.

### #24 patishi (Members)

Posted 01 July 2013 - 08:40 PM

OK, thanks! Sorry for the confusion.. I should have known that not using the 49-bit method would break that trick. So I think I will go for the Zobrist solution; it seems simple enough. Although I can't help thinking that if the combination of the two bitboards (white and black together) is unique for each position (i.e. there can't be two different positions with the same white bitboard and the same black bitboard), there should be a straightforward way to produce some kind of key (something like adding the two bitboards together plus something else). But I'm afraid that if I go with something of my own I will get identical keys for different positions, so I will go for the safe and popular Zobrist method.

Edited by patishi, 01 July 2013 - 08:42 PM.

### #25 patishi (Members)

Posted 01 July 2013 - 10:48 PM

And another quick question if I may: I know that after making an actual move I should empty the table and start fresh (please correct me if I am wrong).

But what if there is no depth limit, i.e. I am doing a complete tree search? Do I still need to reinitialize the table after each move?

EDIT: The random-numbers part is easy enough for me to understand. From what I understood, one of the advantages of Zobrist hashing is that instead of hashing the entire board every time, you can update the hash value incrementally with XOR. But I can't figure out how exactly to do that in Connect 4, and I would like an example if possible. Or maybe this is not relevant in Connect 4, since unlike chess the pieces don't move around the board, they are only added to it..??

EDIT2: What do you say about these two functions? One initializes the table with random numbers and the other produces the hash code. I need some review here. (zobrist is a long[42][2] array defined at the beginning of this class.)

```java
private void initZobrist() {
    Random r = new Random();

    for (int i = 0; i < 42; i++) {
        for (int j = 0; j < 2; j++) {
            zobrist[i][j] = r.nextLong();
        }
    }
}

public long hash(long[] board) {
    long hashValue = 0;
    long y1 = board[0];  // white's bitboard
    long y2 = board[1];  // black's bitboard

    for (int i = 0; i < 42; i++) {
        if ((y1 & (1L << i)) != 0) {
            hashValue ^= zobrist[i][0];
        } else if ((y2 & (1L << i)) != 0) {
            hashValue ^= zobrist[i][1];
        }
    }
    return hashValue;
}
```

Edited by patishi, 02 July 2013 - 01:34 AM.

### #26 Álvaro (Members)

Posted 02 July 2013 - 04:23 AM

That code is very, very slow. Make hashValue part of the board representation, and whenever you place or remove a disk, `^=` it with the appropriate entry from the table.
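The incremental idea can be sketched like this in Java (the class and method names are illustrative; the fixed seed is only so the demo is reproducible):

```java
import java.util.Random;

// Sketch of incremental Zobrist updating: the hash key travels with the
// board state and is XORed on every place/remove, instead of being
// recomputed from scratch by looping over all 42 squares.
public class ZobristDemo {
    static final long[][] zobrist = new long[42][2];
    static {
        Random r = new Random(42);  // fixed seed, just for the demo
        for (int i = 0; i < 42; i++)
            for (int j = 0; j < 2; j++)
                zobrist[i][j] = r.nextLong();
    }

    long hashValue = 0;  // part of the board representation

    void place(int square, int side) {   // drop a disk
        hashValue ^= zobrist[square][side];
    }

    void remove(int square, int side) {  // take it back: the same XOR undoes it
        hashValue ^= zobrist[square][side];
    }
}
```

Because XOR is its own inverse, making and then unmaking a move restores the previous hash exactly, which is what makes this safe to use inside a recursive alpha-beta search.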

### #27 patishi (Members)

Posted 02 July 2013 - 05:36 AM

Sorry, but I am a slow thinker. When exactly should I `^=`? When I make an actual move, or when I make a move inside the alpha-beta function? And how (technically) can this be done?? After I make a move I get a totally new position.. how (without hashing it) can I tell what to XOR it with? And what did you mean by making the hash value part of the board representation?

Edited by patishi, 02 July 2013 - 05:38 AM.

### #28 Álvaro (Members)

Posted 02 July 2013 - 07:57 AM

The way I structure my code, there is a class Board that looks something like this:

```cpp
struct Board {
    u64 bitboard[2];
    int height[7];
    u64 hash_key;

    void make_move(int square, int side) {
        bitboard[side] ^= u64(1) << square;
        height[square / 7]++;
        hash_key ^= zobrist_table[square][side];
    }

    void undo_move(int square, int side) {
        bitboard[side] ^= u64(1) << square;
        height[square / 7]--;
        hash_key ^= zobrist_table[square][side];
    }
};
```

I don't create new positions in the search tree, but modify the existing one by calling make_move and undo_move. Does that make more sense?

### #29 patishi (Members)

Posted 02 July 2013 - 08:28 AM

OK.. first of all, thanks for the code, it is very helpful. And yes, of course I am also doing make/unmake move (not creating new board arrays or anything like that); I am working on a single board in my program. I see that you XOR that hash_key class member.. but what do you do with it? Aren't you saving it in the hash table or something?

In my program I have a dedicated class for all the transposition-table functions; it also contains the Zobrist setup and the insert and find methods. In the alpha-beta function, after I finish evaluating a move, or when I get an early cutoff, I normally store the current board position in the hash table. And at the start of the function I look up the current position to see if it is already in the table (and for that I need to hash it again!). But you know that stuff already. I just don't get how your hash_key relates to the transposition table.

Edited by patishi, 02 July 2013 - 09:11 AM.

### #30 Álvaro (Members)

Posted 02 July 2013 - 09:51 AM

### #31 patishi (Members)

Posted 02 July 2013 - 09:57 AM

OK, thanks, you are right. I will try to figure it out on my own for a while. I am sure that one way or another it will work, even if the implementation won't be the best.

### #32 patishi (Members)

Posted 03 July 2013 - 02:17 AM

EDIT: I rewrote this post; I've managed to get it working! Now I have Zobrist hashing and the speed gain is very nice. Thanks a lot, Alvaro, for the help.

But I have another question related to transposition-table management. Would you recommend using some sort of replacement scheme for the same position at different depths? Say, if the new position has a larger depth than the saved one, should I replace the saved one with it? Because right now I am not doing any replacement; I am just inserting positions blindly into the table.

I am handling collisions by using a LinkedList in every table index, so if two board positions fall at the same index, I just add them to the list.

And what size of transposition table do you recommend? Right now my table has size 500009, but when I measure the number of insertions into the table, I get more than 3 million! (depth 15)

Edited by patishi, 03 July 2013 - 06:19 AM.

### #33 Álvaro (Members)

Posted 03 July 2013 - 03:46 PM

If your search is fast at all, you won't be able to keep all the results in memory. The way a transposition table typically works, it has a fixed size, and at some point you'll have to overwrite entries and forget things. There are several replacement schemes you can use to decide what to throw out. I have always had good results with size-4 buckets. See this page for more info.

The size of the transposition table should be determined by the available RAM in your computer. Make it as big as you can without forcing the OS to swap memory pages. If you have 4 GB of memory, you should be able to use 2 GB for the transposition table without any trouble.
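To put some numbers on that sizing advice: with a hypothetical 16-byte entry (an 8-byte key plus 8 bytes of depth/score/flags packed together; the actual entry layout is up to you), the arithmetic looks like this:

```java
// Back-of-the-envelope transposition-table sizing.
// The 16-byte entry size is an assumption for illustration.
public class TableSizing {
    static long entries(long bytesAvailable, int entryBytes) {
        return bytesAvailable / entryBytes;
    }
}
```

A 2 GB budget at 16 bytes per entry gives about 134 million entries, i.e. roughly 33 million buckets of 4, which dwarfs the 500009-slot table discussed above.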

Edited by Álvaro, 03 July 2013 - 03:46 PM.

### #34 patishi (Members)

Posted 04 July 2013 - 09:38 AM

Thanks, this is exactly what I needed! But how do I know when is the best time to start replacing entries in the table? Regardless of the replacement scheme I use, how should I technically remove old entries from the table? Should I iterate over the whole array and start deleting items?

EDIT: I was getting out-of-memory errors! So now my implementation uses one array of size 500009, where each index contains a LinkedList (for chaining when a collision occurs).

Right now, when a linked list has more than 2 items in it, I just clear them from the list, but I don't know whether my approach is good or not.

Edited by patishi, 04 July 2013 - 01:15 PM.

### #35 Álvaro (Members)

Posted 04 July 2013 - 09:13 PM

Each index should contain an entry and nothing else. You need to decide what locations an entry might be stored in (I use buckets of 4 locations) and then, when you want to store some information, you might find that all those locations are occupied by other stuff: That's when you overwrite something with the new entry.

### #36 patishi (Members)

Posted 04 July 2013 - 09:44 PM

Thanks for your patience, Alvaro... very much appreciated! But I don't understand: what do you mean by 4 locations? (By "location" do you mean an index in the array? Should I store the same entry in 4 different indexes?) If I have an array where each index (is this what you mean by bucket?) contains only a single Entry object (an Entry object contains all the details like Zobrist key, depth, score, etc.), and the index an Entry is stored at is picked by the compression function (zobrist key % size of array), then my logic tells me I can store only one Entry object per index (instead of a list of several entries).

And when a collision occurs, I can decide whether to keep the already-saved entry or to overwrite it with the new one (the new entry might be the same position, or a different one that only fell at the same index because of the compression function).

Am I totally in the wrong direction here??

EDIT: Right now I am experimenting with an array of size 1000003, and every time a position collides with an index that already has a position in it, I simply replace it no matter what is stored. I will try to improve this with replacement by depth. But is it at least acceptable?

Edited by patishi, 05 July 2013 - 04:53 AM.

### #37 Álvaro (Members)

Posted 05 July 2013 - 11:18 AM

Naming convention:
* Entry: A structure containing hash key, depth, score, bound type, best move, age...
* Bucket: A group of a few consecutive entries (say, 4).

There are many possible replacement schemes (I gave you a link earlier to a Chess Programming Wiki page about them). The one I use consists of computing `n = (hash_key % table_size) & ~3` and storing the entry in the "weakest" of entries n, n+1, n+2 and n+3. By "weakest" I mean:
(1) If the hash_key of the stored entry matches the hash_key of what we are storing, that is the weakest entry.
(2) Empty entries are weakest.
(3) Entries with a value of age that is not the current age (meaning they are from a previous search) are weaker than entries from the current search.
(4) For entries that are not distinguished by the previous criteria, lower depth is weaker.

When you want to retrieve data from the transpositions table, you need to search in four locations before giving up. If you align things correctly, all four entries might fit in a cache line, which means you get the extra memory accesses essentially for free.

If I were using this scheme, I would pick a table size that is a multiple of 4, of course.
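The four-rule scheme above can be sketched in Java roughly as follows. The `Entry` fields are a simplification (score, bound type and best move are omitted), and the `strength` ranking is one possible way to encode the "weakest" ordering, not the original implementation:

```java
// Sketch of a bucket-of-4 transposition table with weakest-entry replacement.
public class TranspositionTable {
    static class Entry {
        long key;   // full hash key; 0 means the slot is empty
        int depth;
        int age;    // which search this entry came from
    }

    final Entry[] table;
    int currentAge = 1;

    TranspositionTable(int size) {
        table = new Entry[size & ~3];  // size rounded down to a multiple of 4
        for (int i = 0; i < table.length; i++) table[i] = new Entry();
    }

    // Lower score == weaker == better slot to overwrite.
    // Assumes search depth stays well below 998.
    private int strength(Entry e, long key) {
        if (e.key == key) return 0;                   // (1) same position
        if (e.key == 0)   return 1;                   // (2) empty slot
        if (e.age != currentAge) return 2 + e.depth;  // (3) stale, then (4) by depth
        return 1000 + e.depth;                        // current search, by depth
    }

    void store(long key, int depth) {
        int n = (int) Long.remainderUnsigned(key, table.length) & ~3;
        Entry weakest = table[n];
        for (int i = n + 1; i < n + 4; i++)
            if (strength(table[i], key) < strength(weakest, key))
                weakest = table[i];
        weakest.key = key;
        weakest.depth = depth;
        weakest.age = currentAge;
    }

    Entry probe(long key) {
        int n = (int) Long.remainderUnsigned(key, table.length) & ~3;
        for (int i = n; i < n + 4; i++)
            if (table[i].key == key) return table[i];
        return null;
    }
}
```

Note that storing the same key twice reuses the existing slot (rule 1) rather than consuming a second one, and a probe only ever touches the four slots of one bucket, which is what makes the cache-line argument above work.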

### #38 patishi (Members)

Posted 06 July 2013 - 10:00 PM

Hi. I just wanted to say that so far I have implemented the transposition table with Zobrist keys and also a killer-move heuristic, and combined with the bitboard logic my program runs super fast: only 2-3 seconds (maybe 4 on the first move) at depth 15! Compared with how long this calculation takes without the TT and killer moves, the difference is HUGE (it takes forever). So thank you all, and especially Alvaro, for all the help on all these subjects. Thanks to you I have learned a lot and have a nice working engine.

### #39 patishi (Members)

Posted 07 July 2013 - 01:54 AM

Just one final question related to this subject: I am applying the transposition table and killer moves only in the "inner" negamax function, not in the "root" negamax where I go through the root moves (the "root" negamax calls negamax on every root move, and negamax calls itself recursively). Should I apply the TT and killer-move operations in the root negamax as well?
I also clear the hash table and killer-move table after every move; is this necessary? My TT replacement scheme is "always replace" at the moment, not by depth... so maybe clearing the hash table every move is not necessary.

Edited by patishi, 07 July 2013 - 02:01 AM.

### #40 Álvaro (Members)

Posted 07 July 2013 - 05:12 AM

Clearing the TT makes your program forget a lot of what it found in previous searches, some of which is still useful. So one should generally not do it.

The main reason to clear the TT is that it is much easier to debug problems if they can be reproduced, and clearing the TT and other data structures (e.g. killer moves, history heuristic) makes your searches reproducible.
