Iterative Deepening

Started by
3 comments, last by glJunkie 21 years, 5 months ago
Hi everyone, I'd like to say thanks to the folks on this forum who have helped me so far with my game-tree questions. My latest thing is trying to understand Iterative Deepening (ID). I'm not explicitly implementing anything, I'm just trying to understand it as a high-level concept.

This is my understanding so far: ID is used when a time constraint is involved; the search algorithm must return its best guess within a given time, like in a game of chess where there is a time limit on each player's move. We know that searching the nodes in order of their value (searching the best ones first) is important, because alpha-beta pruning works well when it can establish a "best case" scenario quickly, meaning it can cut off subsequent "not so good" nodes and thus lessen the amount of work it must do. So if we can order the nodes at a particular ply, that's a good thing.

So, ID searches to some ply, k, then forces an evaluation. If time is up, it returns the best state discovered. If there's still time left, it continues the search down some amount, s (a total search depth of k+s from the root), searching the states assigned the highest values first, as this is most efficient for alpha-beta as described above. Another evaluation of discovered states is forced, and if time is up, the best-valued state is returned. Otherwise, it searches again to another level (a total of k+s+s from the root), and so on, until time *is* up.

This is advantageous for games where a time constraint is involved, because a best guess is constantly formulated and refined, meaning a best guess for the time allowed is available when the time actually runs out. Of course, ID relies heavily on a trustworthy evaluator.

Does this sound right? I haven't heard it described like that, but that's my take on it. Thanks, Tim.
The way I understand it (which may be incorrect), the iterative deepening algorithm performs a 1-ply search, then a 2-ply search, and so on. Each search is thorough, so you only get a chance to stop the algorithm between these searches. If you want to stop the algorithm after a precise interval, you have to cancel the last search (unfinished when the timer runs out) and ignore its results.
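The scheme above can be sketched as a driver loop: run complete searches at depth 1, 2, 3, ..., and trust only the deepest search that finished inside the time budget. This is a minimal illustration, not engine code; `search` here is a hypothetical stand-in for a real fixed-depth alpha-beta search.

```python
import time

def search(state, depth):
    """Hypothetical stand-in for a fixed-depth alpha-beta search.
    Simulates deeper searches costing more time and returns a
    (score, move) pair tagged with the depth it reached."""
    time.sleep(0.01 * depth)
    return (depth, "move-at-depth-%d" % depth)

def iterative_deepening(state, time_limit):
    """Run complete searches at increasing depth until time runs out.
    Only a search that finishes within the budget is trusted; a search
    that overruns the deadline is discarded, as described above."""
    deadline = time.monotonic() + time_limit
    best, depth = None, 1
    while True:
        result = search(state, depth)      # a complete depth-`depth` search
        if time.monotonic() >= deadline:
            break                          # overran the timer: ignore it
        best = result                      # completed in time: keep it
        depth += 1
    return best

score, move = iterative_deepening(None, 0.1)
```

In a real engine the unfinished search would be aborted mid-tree (e.g. by checking the clock every few thousand nodes) rather than allowed to run to completion as the toy `search` does here.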

The results of the previous searches can be reused by storing the scores of all positions in a transposition table (a hash table). The next search can retrieve the scores from the table and use them for move sorting.
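As a rough sketch of that reuse, assuming a hypothetical `apply_move` and `hash_position` for whatever game you're searching: moves leading to positions the previous iteration scored highly get searched first, which is where alpha-beta picks up its extra cut-offs.

```python
# Scores left over from the previous, shallower iteration,
# keyed by position hash (illustrative, not a full TT entry).
transposition_table = {}

def order_moves(position, moves, apply_move, hash_position):
    """Sort moves so those leading to the best-scoring known positions
    come first; positions the table has never seen sort last."""
    def prior_score(move):
        child = apply_move(position, move)
        return transposition_table.get(hash_position(child), float("-inf"))
    return sorted(moves, key=prior_score, reverse=True)
```

For example, with integer "positions" where a move just adds to the position and the hash is the position itself, scores {11: 5, 12: 9, 13: 1} make `order_moves(10, [1, 2, 3], ...)` return `[2, 1, 3]`.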

One advantage of this scheme is that the AI always goes for the fastest win if you stop the program as soon as one of the searches finds a winning solution.
I think you get the basic idea. Here are some webpages to check out if you'd like to do a little more reading on ID.

http://www.seanet.com/~brucemo/topics/iterative.htm
http://www.maths.nott.ac.uk/personal/anw/G13GT1/compch.html#deep
http://www.ast.cam.ac.uk/~cmf/chess/theory.html#iterative

Russell
Thanks Russell. The best-variant idea is great. I have already implemented best-variant calculation for my go-moku program, but didn't realise it could work together with iterative deepening. This improvement wasn't so great for gomoku unfortunately (a 10-20% increase), but I imagine it will work a lot better with Othello, where the evaluation function is a lot stronger.
If you're going to use Iterative Deepening, I'm assuming you've got your hash tables running as well.

You can use the scores in the hash table to re-sort the moves, so you get more cut-offs and do less searching.

Try separately tracking the principal variation and use it for sorting as well. If you rely only on your hash table, you'll sometimes 'miss' data due to hash collisions.
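Combining the two sources might look like this sketch: the move from the separately tracked principal variation is searched first, and the remaining moves fall back on whatever hash-table scores survived (misses sort last). Names here are illustrative assumptions, not a real engine API.

```python
def order_moves_pv(moves, pv_move, tt_scores):
    """Put the principal-variation move first, then sort the rest by
    their transposition-table scores; moves with no stored score
    (e.g. lost to a hash collision) go to the back."""
    rest = [m for m in moves if m != pv_move]
    rest.sort(key=lambda m: tt_scores.get(m, float("-inf")), reverse=True)
    return ([pv_move] if pv_move in moves else []) + rest
```

So with moves `["a", "b", "c", "d"]`, PV move `"c"`, and stored scores `{"a": 2, "b": 7}`, the search order becomes `["c", "b", "a", "d"]`.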
------------------http://www.nentari.com
