Hashing a Connect 4 board (for a transposition table)... need some help :)

81 comments, last by Tom Sloper 8 years, 3 months ago

An update: I switched the TT from a 3,000,017-entry array to a 1,048,583-entry one, but this time it is a two-dimensional array that holds an "inner" array of 4 entries at each index. That means there is room for more entries (4,000,000+). The replacement scheme is roughly the same: I check whether an identical position is already stored; if not, I check whether there is still room for a new entry (i.e. a null slot); and if not, I push the oldest entry out of the array and insert the new one. So each time, it is the oldest entry that gets dropped, and so on.
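A minimal sketch of the bucketed table described above: a fixed-size array of buckets, each holding up to 4 entries, with oldest-entry-out replacement. All names (`BucketTT`, `store`, `probe`) are illustrative, not from the poster's code, and a small prime is used here in place of the 1,048,583 mentioned in the post.

```python
from collections import namedtuple

TABLE_SIZE = 10007   # a small prime for illustration; the post uses 1,048,583
BUCKET_SIZE = 4      # "inner" array of 4 entries per index

TTEntry = namedtuple("TTEntry", ["key", "value"])

class BucketTT:
    def __init__(self):
        # Each slot holds up to BUCKET_SIZE entries, ordered oldest-first.
        self.table = [[] for _ in range(TABLE_SIZE)]

    def store(self, key, value):
        bucket = self.table[key % TABLE_SIZE]
        # If an identical position is already stored, overwrite it in place.
        for i, entry in enumerate(bucket):
            if entry.key == key:
                bucket[i] = TTEntry(key, value)
                return
        # Otherwise, if the bucket is full, push out the oldest entry.
        if len(bucket) == BUCKET_SIZE:
            bucket.pop(0)
        bucket.append(TTEntry(key, value))

    def probe(self, key):
        # Linear scan of the (at most 4) entries in the bucket.
        for entry in self.table[key % TABLE_SIZE]:
            if entry.key == key:
                return entry.value
        return None
```

This also illustrates the trade-off raised below: the bucketed table pays for its extra capacity with a short linear scan per store/probe, which a single-entry always-replace table avoids.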

Actually, I don't know what the difference is between using this kind of table and using a single linear array with more positions (like I did before). If it turns out that the two tables perform roughly the same, then I suppose a single array is better and faster (no need for search operations; I just immediately replace an entry).

And if I do a full game-tree search I don't use a depth parameter, so a "replace by depth" scheme is useless.

Even in a full game tree search some nodes have enormous subtrees and others don't. You may want to prioritize keeping information about nodes that have large subtrees, because they can potentially save you much more work than a node towards the end of the tree could.

I think what John Tromp used was a table with 2-entry bins, prioritizing by depth.

OK, thanks! But how can I recognize a node with a large subtree? Should my priority be that a bigger distance from the root is better? But still... how can I know the size of the subtree of a particular node?

EDIT: Ah, I think I got it. Silly question: I need to keep the nodes with a bigger distance from the root... that's the sign of a large subtree. :)

I imagine you meant the opposite: The closer to the root, the larger the subtree. You could also have a global counter of positions visited, compute the difference between the value of this counter at the beginning and at the end of the search, and then you'll actually know how much effort was put into the search.

As I have said a couple of times before, you need to test several options to see what works best in your program.

Thanks for the tip, I am right on it! :)

I am trying to implement "deep + always", i.e. the "two tier" system: at every index in the table I have one entry that gets replaced "by depth", meaning it is replaced by the new entry only if the new depth is higher (I assume that means closer to the root node). The second entry always gets replaced, no matter what.

My question about this: when I want to insert a new entry into the table, I go to that specific index and first check the "by depth" entry; if the new depth is higher than the one in the table, I immediately swap them. I don't even check whether this is the same position or just another one hashed to the same index.
But if not, i.e. the depth is not higher, then I need to put the new entry in the second slot (the "always replace" one). What I didn't get is whether I need to check that the new position is different from the one I already have in the first slot (the "depth" entry), or whether I am allowed to have two identical entries side by side.

My logic tells me it is a bit of a waste to have two identical positions next to each other, since when probing the table I always check the "depth" entry first anyway... I hope the question is clear. :) In other words: do I need to check whether the two entries are identical or not?
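One possible arrangement of the two-tier scheme described above, as a hedged sketch (names and the small table size are illustrative, and "higher depth means closer to the root" follows the poster's convention). Handling the identical-position case in the depth tier first means the two slots can never hold the same position, which sidesteps the duplicate question:

```python
TABLE_SIZE = 10007   # small prime for illustration; the post uses 1,048,583

# Each index holds [depth_entry, always_entry];
# an entry is a (key, depth, value) tuple or None.
table = [[None, None] for _ in range(TABLE_SIZE)]

def tt_store(key, depth, value):
    slot = table[key % TABLE_SIZE]
    deep = slot[0]
    # Tier 1 ("by depth"): take the slot if it is empty, if it already
    # holds this exact position, or if the new node is at least as deep
    # (closer to the root) as the stored one.
    if deep is None or deep[0] == key or depth >= deep[1]:
        slot[0] = (key, depth, value)
    else:
        # Tier 2 ("always replace"): because identical positions were
        # caught above, this can never duplicate the depth entry.
        slot[1] = (key, depth, value)

def tt_probe(key):
    # Check the depth entry first, then the always-replace entry.
    for entry in table[key % TABLE_SIZE]:
        if entry is not None and entry[0] == key:
            return entry
    return None
```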

The better transposition-table replacement policy is the one that performs better. Did you set up a way to test them? What are the results?

Right now, judging by the time it takes to get an answer from the engine, there is no crucial difference between the TT with the single-entry array and always-replace scheme and the scheme I am using now (deep + always). Like I said, so far the only big difference I have noticed is using killer moves instead of the history heuristic (killer moves are better), but there is a chance my HH implementation is not the "best".

As for the TT, so far I haven't been using the hash move, and right now I am working on implementing it, to see if this will speed up the search.
I must say that now, after I implemented the 8-ply "book" moves (i.e. an opening database), my engine plays perfectly, and against weak engines the answer for the first engine move (the 9th ply) comes pretty fast, something like 5-6 seconds... but I want it to be faster!
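The hash-move idea mentioned above is just a move-ordering trick: when the TT entry for the current position remembers which move was best last time, try that move first so alpha-beta cuts off sooner. A hypothetical sketch (`order_moves` and its arguments are illustrative names, not from the poster's engine):

```python
def order_moves(moves, hash_move):
    # Put the hash move (if any) at the front of the move list;
    # keep the remaining moves in their original order.
    if hash_move in moves:
        return [hash_move] + [m for m in moves if m != hash_move]
    return list(moves)
```

In a Connect 4 engine the moves would typically be column indices, and `hash_move` would come from the TT entry found when probing the current position.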

Actually, the speed of the first engine move is not bad at all in most positions (it could use just a tiny bit of extra boost). But when testing it against a certain perfect-play Connect 4 engine (based on the Velena engine), the sequence of moves is always the same: they play some kind of follow-up, and then my engine gets stuck for a minute or so... :( Yet when testing it against John Tromp's program, in all positions I have tested so far, it plays the first move after only a few seconds!

Wait, how can the opponent make a difference for the time it takes to complete the first search? The opponent hasn't even moved at that point!

This topic is closed to new replies.
