
#patishi

Posted 05 July 2013 - 04:53 AM

Thanks for your patience, Alvaro... very much appreciated! But I don't understand: what do you mean by 4 locations? (By "location" do you mean an index in the array? Should I store the same entry in 4 different indexes?) I have an array in which each index (is that what you mean by "bucket"?) contains only a single Entry object (an Entry object contains all the details like the Zobrist key, depth, score, etc.). The index an Entry is stored at is picked by the compression function (Zobrist key % size of array). My logic tells me I can store only one Entry object per index (instead of a list of a few Entries),

and when a collision occurs, I can decide whether to keep the already-saved entry or to overwrite it for the sake of the new entry (the new entry might be the same position, or a different one that only fell into the same index because of the compression function).

Am I totally in the wrong direction here?

 

EDIT: Right now I am experimenting with an array of size 1000003, and every time a position collides with an index that already holds a position, I simply replace whatever is stored, no matter what. I will try to improve this with replacement by depth, but is it at least acceptable?
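For what it's worth, the single-entry-per-index scheme described above can be sketched as follows. This is a minimal sketch assuming a depth-preferred replacement rule (the improvement mentioned in the edit); all class and method names here are illustrative, not from the thread:

```java
// Sketch of a fixed-size transposition table: one Entry per index,
// depth-preferred replacement. Names (Entry, TranspositionTable) are
// illustrative placeholders.
final class Entry {
    final long zobristKey;
    final int depth;
    final int score;

    Entry(long zobristKey, int depth, int score) {
        this.zobristKey = zobristKey;
        this.depth = depth;
        this.score = score;
    }
}

final class TranspositionTable {
    private final Entry[] slots;

    TranspositionTable(int size) {
        this.slots = new Entry[size]; // e.g. 1000003, a prime
    }

    private int indexOf(long key) {
        // Math.floorMod avoids a negative index when the key is negative.
        return (int) Math.floorMod(key, (long) slots.length);
    }

    // Replace by depth: keep the old entry only if it belongs to a
    // different position AND was searched deeper than the new one.
    void store(Entry e) {
        int i = indexOf(e.zobristKey);
        Entry old = slots[i];
        if (old == null || old.zobristKey == e.zobristKey || e.depth >= old.depth) {
            slots[i] = e;
        }
    }

    // Probe: the full key must match; otherwise the slot holds a
    // different position that merely hashed to the same index.
    Entry probe(long key) {
        Entry e = slots[indexOf(key)];
        return (e != null && e.zobristKey == key) ? e : null;
    }
}
```

Note that the probe compares the full Zobrist key, not just the index: two different positions can share `key % size`, and using the wrong entry's score would corrupt the search.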

 



#8 patishi

Posted 04 July 2013 - 10:11 PM

Thanks for your patience, Alvaro... very much appreciated! But I don't understand: what do you mean by 4 locations? (By "location" do you mean an index in the array? Should I store the same entry in 4 different indexes?) I have an array in which each index (is that what you mean by "bucket"?) contains only a single Entry object (an Entry object contains all the details like the Zobrist key, depth, score, etc.). The index an Entry is stored at is picked by the compression function (Zobrist key % size of array). My logic tells me I can store only one Entry object per index (instead of a list of a few Entries),

and when a collision occurs, I can decide whether to keep the already-saved entry or to overwrite it for the sake of the new entry (the new entry might be the same position, or a different one that only fell into the same index because of the compression function).

Am I totally in the wrong direction here? I just thought that if I automatically overwrite what is stored at an index every time I get a collision, the system won't be able to use the stored positions (because I keep replacing them all the time), so I went for a list with a few entries instead of a single one. The problem is that if I don't remove anything from those lists, they keep growing (the lists are not limited in size) until I get an "out of memory" error. I also feel that the whole operation is very slow and not very useful.
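One plausible reading of the "4 locations" Alvaro mentions (an assumption on my part, not confirmed in the thread) is a bounded bucket: each hash index owns a small fixed number of slots that are probed in sequence, so memory stays fixed, unlike the unbounded lists that caused the out-of-memory error. A sketch, with all names illustrative:

```java
// Hypothetical sketch of a bounded bucket scheme: each hash index owns
// WAYS consecutive slots instead of an unbounded list, so total memory
// is fixed at construction time.
final class BucketTable {
    static final int WAYS = 4;  // slots probed per position
    private final long[] keys;  // 0 marks an empty slot (a real engine
                                // would reserve 0 as never a valid key)
    private final int[] depths;
    private final int[] scores;
    private final int buckets;

    BucketTable(int buckets) {
        this.buckets = buckets;
        this.keys = new long[buckets * WAYS];
        this.depths = new int[buckets * WAYS];
        this.scores = new int[buckets * WAYS];
    }

    private int base(long key) {
        return ((int) Math.floorMod(key, (long) buckets)) * WAYS;
    }

    // Reuse the slot holding the same position or an empty slot if one
    // exists; otherwise evict the shallowest entry in the bucket.
    void store(long key, int depth, int score) {
        int b = base(key), victim = b;
        for (int i = b; i < b + WAYS; i++) {
            if (keys[i] == key || keys[i] == 0) { victim = i; break; }
            if (depths[i] < depths[victim]) victim = i;
        }
        keys[victim] = key;
        depths[victim] = depth;
        scores[victim] = score;
    }

    // Returns the stored score, or null if the position is not present
    // (never stored, or already evicted).
    Integer probe(long key) {
        int b = base(key);
        for (int i = b; i < b + WAYS; i++) {
            if (keys[i] == key) return scores[i];
        }
        return null;
    }
}
```

The point of the bucket is exactly the trade-off described above: collisions no longer force an immediate overwrite (up to WAYS positions per index survive), yet memory cannot grow without bound the way the unlimited lists did.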

 



#6 patishi

Posted 04 July 2013 - 10:05 PM

Thanks for your patience, Alvaro... very much appreciated! But I don't understand: what do you mean by 4 locations? (By "location" do you mean an index in the array?) I have an array in which each index (is that what you mean by "bucket"?) contains only a single Entry object (an Entry object contains all the details like the Zobrist key, depth, score, etc.). The index an Entry is stored at is picked by the compression function (Zobrist key % size of array). My logic tells me I can store only one Entry object per index (instead of a list of a few Entries),

and when a collision occurs, I can decide whether to keep the already-saved entry or to overwrite it for the sake of the new entry (the new entry might be the same position, or a different one that only fell into the same index because of the compression function).

Am I totally in the wrong direction here? I just thought that if I automatically overwrite what is stored at an index every time I get a collision, the system won't be able to use the stored positions (because I keep replacing them all the time), so I went for a list with a few entries instead of a single one. The problem is that if I don't remove anything from those lists, they keep growing (the lists are not limited in size) until I get an "out of memory" error. I also feel that the whole operation is very slow and not very useful.

By the way, I am storing positions when an alpha-beta cutoff occurs and also at the end of the "for each" loop (once all children of a node have been investigated), after a full branch search. I hope this is the right thing to do.
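Those two store points are the usual ones, but they record different kinds of scores: a beta cutoff proves only a lower bound on the true score, a move loop that never raised alpha proves only an upper bound, and anything strictly inside the window is exact. A common convention (standard in alpha-beta engines, though not spelled out in this thread) is to tag each entry with a flag and check it on probe:

```java
// Sketch of the standard bound-flag convention for transposition-table
// entries in an alpha-beta search. Names are illustrative.
final class Bound {
    static final int EXACT = 0, LOWER = 1, UPPER = 2;

    // Classify a search result against the window that was in effect
    // when the node was entered (alphaOrig, beta).
    static int flagFor(int score, int alphaOrig, int beta) {
        if (score <= alphaOrig) return UPPER; // failed low: upper bound
        if (score >= beta)      return LOWER; // cutoff: lower bound
        return EXACT;                         // inside the window
    }

    // On a later probe, decide whether a stored entry can be trusted
    // at the current depth and window.
    static boolean usable(int flag, int score, int depth,
                          int storedDepth, int alpha, int beta) {
        if (storedDepth < depth) return false;    // searched too shallow
        if (flag == EXACT) return true;
        if (flag == LOWER) return score >= beta;  // still causes a cutoff
        return score <= alpha;                    // UPPER: still fails low
    }
}
```

Without the flag, a score stored at a cutoff (which only says "at least this much") could later be returned as if it were exact, which silently corrupts the search.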
 


