
Hashing connect -4 board (for Transposition table)..need some help :)


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

80 replies to this topic

#41 patishi   Members   -  Reputation: 212


Posted 07 July 2013 - 06:50 AM

Thank you, I will try it without the clearing process. Actually, for the first 4 moves I use an evaluation function, and after that I brute-force to the end of the game tree, so I hope it is still OK to retain the positions searched in the "evaluation" part of the program.

And one more thing, please: I am trying to understand what exactly a "hash move" is. I know that when I store a position in my TT I should also store the best move (or something like that), but when should I use it? At the beginning of alpha-beta, when I probe the table for a position, if the position is found I just return the score, or update alpha/beta accordingly. So when do I get the chance to use the hash move? Can you please advise on that?

I tried reading stuff on the net, but no one actually gets to the technical part.


Edited by patishi, 07 July 2013 - 06:52 AM.



#42 Álvaro   Crossbones+   -  Reputation: 12886


Posted 07 July 2013 - 07:53 AM

You get to use the hash move whenever you find the position in the TT but you can't use the information there to just return a score. What you do with it is try it first, of course.
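Concretely, "try it first" just means the hash move goes to the front of the move list before the children are searched. A minimal Python sketch (the function and parameter names are hypothetical; a Connect-4 move here is just a column index):

```python
def order_moves(moves, hash_move=None):
    """Return the legal moves with the TT's hash move (if any) tried first.

    `moves` is the list of legal columns; `hash_move` is the move stored
    in the transposition-table entry for this position, or None.
    """
    if hash_move is not None and hash_move in moves:
        # Hash move first, then the rest in their original order.
        return [hash_move] + [m for m in moves if m != hash_move]
    return list(moves)
```

The alpha-beta loop then iterates over `order_moves(legal, entry.move)` instead of the raw move list.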



#43 patishi   Members   -  Reputation: 212


Posted 07 July 2013 - 08:25 AM

OK, so in every situation where I don't return the score (no exact hit, and not alpha >= beta after updating the bounds). And I guess the hash move can come not only from a cut-off but also from a complete branch search(?) Thanks again for all the answers; I will conclude this topic with that final question (unless someone else wants to post something, of course).


Edited by patishi, 07 July 2013 - 08:27 AM.


#44 Álvaro   Crossbones+   -  Reputation: 12886


Posted 07 July 2013 - 12:54 PM


and I guess the hash move can come not only from a cut-off but also from a complete branch search(?)

 

I always store a move, but I think there are people who only do it if the move caused a beta cutoff. This is the kind of thing you should determine by testing.



#45 patishi   Members   -  Reputation: 212


Posted 07 July 2013 - 04:05 PM

Do you do TT and killer-move (etc.) operations in the root alpha-beta function as well? Right now I am just iterating through all the root moves without any further operations; all of the TT and killer-move operations happen inside my recursive alpha-beta functions (not for the root moves), but I would like to try to get an early cut-off at the root as well. Right now my root move order is static (starting from the middle column and going out to the side columns).
 


Edited by patishi, 07 July 2013 - 04:10 PM.


#46 Álvaro   Crossbones+   -  Reputation: 12886


Posted 07 July 2013 - 07:16 PM

The order at the root is very easy to do well if you use iterative deepening: Whenever a move is determined to be the best so far, move it to the top of the list.
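That bookkeeping is a one-liner. A Python sketch (hypothetical names; `best_index` is wherever the current iteration's best move sits in the root list):

```python
def promote_best(root_moves, best_index):
    """After an iterative-deepening iteration, move the move that was
    found best to the front so it is searched first at the next depth."""
    root_moves.insert(0, root_moves.pop(best_index))
```

The rest of the list keeps its relative order, so over several iterations the root list settles into a good ordering on its own.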

 

If you are going to always search to the end of the game, you should probably start with the hash move if there is one, then sort the other moves using the history heuristic (John Tromp swears his particular flavor of the history heuristic made his program dramatically faster).
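A minimal sketch of one common history-heuristic formulation (Python; the depth-squared weighting on cutoffs is an assumption, not necessarily Tromp's exact flavor):

```python
from collections import defaultdict

# (player, column) -> accumulated history score, bumped on cutoffs.
history = defaultdict(int)

def record_cutoff(player, column, depth):
    """Credit a move that caused a cutoff; deeper subtrees count more."""
    history[(player, column)] += depth * depth

def order_by_history(player, columns):
    """Search the historically best columns first."""
    return sorted(columns, key=lambda c: history[(player, c)], reverse=True)
```

`sorted` is stable in Python, so moves with equal history keep their static (center-first) order.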



#47 patishi   Members   -  Reputation: 212


Posted 07 July 2013 - 07:53 PM

Yes, right now I start with the hash move and then try my two killers; after that I just continue through the rest of the columns that weren't played yet.
I also got some very good improvements from the killer moves (and the TT, of course). But like I said, I am not doing anything at the root negamax, and I think that is the main reason the search is not optimal. I am not using iterative deepening yet, but I will look into it. Thanks!


Edited by patishi, 07 July 2013 - 08:39 PM.


#48 patishi   Members   -  Reputation: 212


Posted 08 July 2013 - 06:25 AM

Just another question; I can't stop thinking that I am doing something wrong, so I need to check some things:
 

A. Whenever the alpha-beta function returns a score (beta cutoff or not), I save the current position in the hash table with the evaluation and the move that was tried. This is the so-called "hash move" that I will use later in case this position ever occurs again. But I read somewhere (a chess blog) that I don't always have a best move to store. That is a little confusing, because I DO always have a best move! The move under investigation will be the best move. So I don't get it; what am I missing here?

 

 

B. When I store a position in the TT after a beta cutoff, the hash move I store is the same as the killer move for that same depth (if I understood correctly, a killer move is a move that caused a beta cutoff). So many times I have a hash move and a killer move that are actually the same(!). Is this normal? I always try the hash move first, and after that I check whether the killer is the same move; if it is, I don't play it and just move on to the next one. So many times I don't benefit at all from the hash move, because I have the killer moves anyway. Up until now I haven't noticed a significant improvement from using the hash move :( which got me thinking that maybe I am doing something wrong.
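The ordering described in B, with killers deduplicated against the hash move, could be sketched as (Python, hypothetical names):

```python
def move_order(legal, hash_move, killers):
    """Hash move first, then killers that aren't the hash move,
    then the remaining legal columns; each move appears once."""
    ordered = []
    for m in [hash_move] + list(killers):
        if m in legal and m not in ordered:
            ordered.append(m)
    ordered += [m for m in legal if m not in ordered]
    return ordered
```

When the hash move and a killer coincide, the duplicate is skipped, which is exactly the situation the post describes.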


Edited by patishi, 08 July 2013 - 06:31 AM.


#49 Álvaro   Crossbones+   -  Reputation: 12886


Posted 08 July 2013 - 08:20 AM

A. Most chess programs use a so-called "null move" heuristic which can result in finishing a search without a move. This doesn't apply to connect 4 (it only works for games where Zugzwang is the exception).

 

B. Nothing wrong with what you describe. Most of what you read is geared towards minimax searches that don't get to the end of the game, where a lot of things are quite different. It is not surprising that heuristics that work well in a typical searcher don't work well in a program that does exhaustive search to the end of the tree. Even between programs of the same type, most heuristics don't work for everybody because they interact with other decisions made when building the program.



#50 patishi   Members   -  Reputation: 212


Posted 08 July 2013 - 06:05 PM

Thanks for the explanation. I totally forgot about schemes like null moves :)



#51 patishi   Members   -  Reputation: 212


Posted 14 July 2013 - 07:15 AM

An update: I switched from using a 3000017-sized array for the TT to a 1048583-sized array, but this time it is a two-dimensional array that holds an "inner" array of 4 entries at each index. That means there is room for more entries (4,000,000+). The replacement scheme is roughly the same: I check whether an identical position is found; if not, I check whether I still have room to store a new entry (i.e., if there are null entries), and if not, I just push the oldest one out of the array and insert the new one. So each time, the oldest entry gets dropped, and so on.
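A minimal sketch of that bucketed, oldest-out scheme (Python; `collections.deque` with `maxlen` drops the oldest entry automatically when a full bucket gets a new one; the sizes follow the post, everything else is hypothetical):

```python
from collections import deque

class BucketTT:
    """Fixed-size table of small buckets; a full bucket evicts its
    oldest entry when a new position is stored."""
    def __init__(self, size=1048583, bucket=4):
        self.size = size
        self.buckets = [deque(maxlen=bucket) for _ in range(size)]

    def store(self, key, value):
        b = self.buckets[key % self.size]
        for i, (k, _) in enumerate(b):
            if k == key:           # identical position already stored
                b[i] = (key, value)
                return
        b.append((key, value))     # full deque silently drops the oldest

    def probe(self, key):
        for k, v in self.buckets[key % self.size]:
            if k == key:
                return v
        return None
```

Note that `key` here is the full position hash, not just the index, so two different positions that collide on the same bucket are still told apart.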

 

Actually I don't know what the difference is between this kind of table and a single linear array with more positions (like I used before). If it turns out that the two tables perform roughly the same, then I suppose using a single array is better and faster (no need for search operations; I just immediately replace an entry).

And if I do a full game-tree search I don't use a depth parameter, so a replace-by-depth scheme is useless.



#52 Álvaro   Crossbones+   -  Reputation: 12886


Posted 14 July 2013 - 09:10 AM

Even in a full game tree search some nodes have enormous subtrees and others don't. You may want to prioritize keeping information about nodes that have large subtrees, because they can potentially save you much more work than a node towards the end of the tree could.

I think what John Tromp used was a table with 2-entry bins, prioritizing by depth.

#53 patishi   Members   -  Reputation: 212


Posted 14 July 2013 - 12:30 PM

OK, thanks! But how can I recognize a node with a large subtree? Should my priority be that a bigger distance from the root is better? But still, how can I know the size of the subtree of a particular node?

 

 

EDIT: Ah, I think I got it. Silly question; I need to keep the nodes with a bigger distance from the root. That's the sign of a large subtree. :)


Edited by patishi, 14 July 2013 - 12:41 PM.


#54 Álvaro   Crossbones+   -  Reputation: 12886


Posted 14 July 2013 - 12:44 PM

EDIT: Ah, I think I got it. Silly question; I need to keep the nodes with a bigger distance from the root. That's the sign of a large subtree. :)


I imagine you meant the opposite: The closer to the root, the larger the subtree. You could also have a global counter of positions visited, compute the difference between the value of this counter at the beginning and at the end of the search, and then you'll actually know how much effort was put into the search.

As I have said a couple of times before, you need to test several options to see what works best in your program.

#55 patishi   Members   -  Reputation: 212


Posted 14 July 2013 - 01:02 PM

Thanks for the tip, I am right on it! :)



#56 patishi   Members   -  Reputation: 212


Posted 16 July 2013 - 06:14 AM

I am trying to implement the "Deep + Always" or "two-tier" system, i.e., at every index in the table I have one entry that gets replaced by depth, meaning it only gets replaced by the new entry if the new depth is higher (I assume that means closer to the root node). The second entry always gets replaced, no matter what.

The question I have regarding this: when I want to insert a new entry into the table, I go to that specific index and first check the "by depth" entry, and if the new depth is higher than the one in the table, I immediately swap them; I don't even check whether it is the same position or just another one hashed to the same index.
But if not, i.e., the depth is not higher, then I need to put the new entry in the second position (the "always replace" one). But I didn't get whether I need to check that the new position is different from the one I already have in the first position (the "depth" entry), or whether I am allowed to have two identical entries side by side.

My logic tells me it is a bit of a waste to have two identical positions next to each other, when in any case, when probing the table for an entry, I always check the "depth" entry first anyway. I hope the question is clear :) In other words, do I need to check whether the two entries are identical or not?
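One common formulation of the deep + always store described above (Python, hypothetical names; whether to deduplicate the two slots is exactly the policy choice being asked about, so this sketch doesn't):

```python
def store_two_tier(table, key, depth, value):
    """table[i] is a pair [deep_slot, always_slot]; each slot is
    (key, depth, value) or None. New entries go to the deep slot if
    they are at least as deep, otherwise to the always slot."""
    idx = key % len(table)
    deep = table[idx][0]
    if deep is None or depth >= deep[1]:
        table[idx][0] = (key, depth, value)
    else:
        table[idx][1] = (key, depth, value)

def probe_two_tier(table, key):
    """Deep slot is checked first, as in the post."""
    for slot in table[key % len(table)]:
        if slot is not None and slot[0] == key:
            return slot
    return None
```

A usage example: a shallower re-store of the same position lands in the always slot, so both slots can briefly hold the same position; whether avoiding that is worth the extra comparison is something to measure.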
 


Edited by patishi, 16 July 2013 - 06:16 AM.


#57 Álvaro   Crossbones+   -  Reputation: 12886


Posted 16 July 2013 - 06:33 AM

The better transposition-table replacement policy is the one that performs better. Did you set up a way to test them? What are the results?

#58 patishi   Members   -  Reputation: 212


Posted 16 July 2013 - 07:32 AM

Right now, judging by the time it takes to get an answer from the engine, there is not a crucial difference between the TT with the single-entry array and always-replace scheme, and the scheme I am using now (deep + always). Like I said, so far the only big difference I noticed is using the killer moves instead of the history heuristic (killer moves are better), but there is a chance that my HH implementation is not the best.

 

As for the TT, so far I haven't been using the hash move, and right now I am working on implementing it to see if this speeds up the process.
I must say that now, after I implemented the 8-ply "book" moves (i.e., a database), my engine plays perfectly, and against weak engines the answer for the first engine move (the 9th ply) comes pretty fast, something like 5-6 seconds. But I want it to be faster!
 


Edited by patishi, 16 July 2013 - 08:52 AM.


#59 patishi   Members   -  Reputation: 212


Posted 16 July 2013 - 08:52 AM

Actually, the speed of the first engine move is not bad at all (it could use just a tiny bit of extra boost) in most positions. But when testing it against a certain perfect-play Connect-4 engine (based on the Velena engine), the sequence of moves is always the same: they play some kind of follow-up, and then the engine gets stuck for a minute or so :( But when testing it against the John Tromp machine, in all positions I have tested so far, it plays the first move after only a few seconds!


Edited by patishi, 16 July 2013 - 08:53 AM.


#60 Álvaro   Crossbones+   -  Reputation: 12886


Posted 16 July 2013 - 09:53 AM

Wait, how can the opponent make a difference for the time it takes to complete the first search? The opponent hasn't even moved at that point!





