Iterative Deepening


Is this valid iterative deepening code?

public Move GetBestMove(IBoard board, int depth)
{
    this.maxDepth = 100;
    this.maxTime = 9000;

    //Set initial window
    int alpha = -INFINITY, beta = INFINITY;

    //The move that will be returned
    Move bestMove = null;
    List<Move> moves = board.getMoves();

    //Get the time search has started
    long startTime = System.nanoTime();

    //Iterate through the depths
    for (int i = 0; i < maxDepth; i++)
    {
        bestMove = negaScoutRoot(board, alpha, beta);
        int val = bestMove.score;

        //Time check
        double curTime = (System.nanoTime() - startTime) / 1e6;
        if (curTime >= maxTime)
            return bestMove;

        //Score missed aspiration window
        if (val <= alpha || val >= beta)
        {
            alpha = -INFINITY;
            beta = INFINITY;

            //Go to next iteration
            continue;
        }

        //Set new aspiration window
        alpha = val - ASPIRATION_SIZE;
        if (alpha < -INFINITY)
            alpha = -INFINITY;

        beta = val + ASPIRATION_SIZE;
        if (beta > INFINITY)
            beta = INFINITY;
    }

    //Return the move
    return bestMove;
}

private Move negaScoutRoot(IBoard board, int alpha, int beta)
{
    //The move that will be returned
    Move bestMove = null;
    List<Move> moves = board.getMoves();

    for (Move move : moves)
    {
        //Make the move
        board.make(move, true);

        //Search
        int val = -negascout(board, 0, alpha, beta);

        //Undo the move
        board.undo(move);

        //Keep best move
        if (val > alpha)
        {
            alpha = val;

            bestMove = move;
            bestMove.score = alpha;
        }
    }

    //Return the move
    return bestMove;
}


I also added an aspiration window. The root always starts searching from the current board state, but with a different window.

Setting alpha to -infinity should be done inside the loop over depths. I would combine both functions into one. The division as you have it is a bit odd: Why do both functions have a list of moves?

I implement aspiration search differently, where I try the narrow window only for the first move. If the result is outside the window, I search that first move again with full window. The rest of the moves are explored with null windows to test if they are better than alpha, following the PVS algorithm (I believe NegaScout is essentially the same thing).
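A minimal sketch of that scheme, with the engine faked by a fixed score table so the control flow can run standalone (all names here are made up for illustration, and the "search" just returns each move's true score):

```java
// Sketch of PVS-style aspiration at the root (hypothetical names; the
// "engine" is faked with a fixed score table so the logic runs standalone).
public class PvsRootSketch {
    static final int INF = 1_000_000;
    // Pretend these are the exact scores of the root moves.
    static int[] moveScores = { 30, 10, 55, 20 };

    // Stand-in for a real recursive search: ignores the window and
    // returns the move's true score (fail-soft behavior).
    static int search(int move, int alpha, int beta) {
        return moveScores[move];
    }

    static int bestMove(int scoreGuess, int window) {
        // 1. First move with a narrow aspiration window.
        int alpha = search(0, scoreGuess - window, scoreGuess + window);
        if (alpha <= scoreGuess - window || alpha >= scoreGuess + window) {
            // 2. Fell outside the window: re-search with the full window.
            alpha = search(0, -INF, INF);
        }
        int best = 0;
        // 3. Remaining moves: null-window test first; full re-search
        //    only when a move might actually beat alpha.
        for (int m = 1; m < moveScores.length; m++) {
            int val = search(m, alpha, alpha + 1); // null window
            if (val > alpha) {
                val = search(m, alpha, INF);       // confirm with re-search
            }
            if (val > alpha) {
                alpha = val;
                best = m;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(bestMove(25, 25)); // move 2 has the top score
    }
}
```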

The moves variable in GetBestMove() is not required; I forgot to remove it. I will change the code as you suggest and have just one function that in turn calls the recursive search.

One advantage of having the list of moves persist from iteration to iteration is that you can keep it sorted. Whenever you find a new best move, you move it to the top of the list, sliding down the moves that were before it.
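That promotion step could be sketched like this, with integers standing in for moves (`promote` is a hypothetical helper, not from the code above):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of keeping the root move list ordered across ID iterations:
// when the move at bestPos turns out best, slide it to the front so the
// next (deeper) iteration searches it first.
public class MoveOrdering {
    static <T> void promote(List<T> moves, int bestPos) {
        // remove + add(0) slides every earlier move down one slot
        T best = moves.remove(bestPos);
        moves.add(0, best);
    }

    public static void main(String[] args) {
        List<Integer> moves = new ArrayList<>(Arrays.asList(10, 20, 30, 40));
        promote(moves, 2);
        System.out.println(moves); // [30, 10, 20, 40]
    }
}
```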

Ok, now I have the main deepening loop, and inside it the loop that iterates through the moves. Should the aspiration window check be inside the second (move) loop, or moved outside to the main loop?

There are several ways of doing aspiration search. You could search the first move before the loop starts, and then it would look something like this:
int score_guess = 0;

for (int depth=1; depth<100 && !out_of_time(); ++depth) {
    board.make_move(moves[0]);
    int alpha = -negamax(board, score_guess - 25, score_guess + 25, depth - 1);
    board.undo_move(moves[0]);

    if (alpha <= score_guess - 25 || alpha >= score_guess + 25) {
        board.make_move(moves[0]);
        alpha = -negamax(board, -Infinity, +Infinity, depth - 1);
        board.undo_move(moves[0]);
    }

    for (int i=1; i<n_moves; ++i) {
        //...
    }

    score_guess = alpha;
}

NOTE: I didn't test this code, so there might be mistakes.

This is the updated code:

public Move GetBestMove(IBoard board, int depth)
{
    this.maxTime = 9000;
    this.maxDepth = 100;

    int alpha = -INFINITY, beta = INFINITY;
    int scoreGuess = 0;

    Move bestMove = null;
    List<Move> moves = board.getMoves();
    long startTime = System.nanoTime();

    for (curDepth = 1; curDepth < maxDepth && !outOfTime(); curDepth++)
    {
        board.make(moves.get(0), true);
        alpha = -negascout(board, curDepth - 1, scoreGuess - ASPIRATION_SIZE, scoreGuess + ASPIRATION_SIZE, startTime);
        board.undo(moves.get(0));

        if (alpha <= scoreGuess - ASPIRATION_SIZE || alpha >= scoreGuess + ASPIRATION_SIZE)
        {
            board.make(moves.get(0), true);
            alpha = -negascout(board, curDepth - 1, -INFINITY, +INFINITY, startTime);
            board.undo(moves.get(0));
        }

        int bestPos = -1;

        for (int i = 1, n = moves.size(); i < n; i++)
        {
            board.make(moves.get(i), true);
            int val = -negascout(board, curDepth - 1, alpha, beta, startTime);
            board.undo(moves.get(i));

            //Keep best move
            if (val >= alpha)
            {
                alpha = val;
                bestMove = moves.get(i);
                bestPos = i;
            }
        }

        //Move the best move to the top of the list
        if (bestPos != -1)
        {
            moves.remove(bestPos);
        }

        //Set the current best score
        scoreGuess = alpha;
    }

    //Return the move
    return bestMove;
}


After the 17th move, when it's the AI's turn again, the search enters an infinite loop.

I can't figure out why, because the code looks good to me. However, the negascout function has the following checks:

This one at the top:

//Horizon has been reached
if (depth == curDepth)
{
    t = board.getScore();
    return t;
}


And this one when searching deeper (inside the move loop):

t = -negascout(board, depth + 1, -b, -a, startTime);
if ((t > a) && (t < beta) && (i > 0) && (depth < curDepth - 1))
    t = -negascout(board, depth + 1, -beta, -t, startTime);


Could it be that curDepth is wrong here?


	int val = -negascout(board, curDepth - 1, alpha, beta, startTime);

That's clearly wrong: You need to use (-beta, -alpha) as the window, like in every other invocation of negamax.

You also seem to be confused as to what 'depth' means. Usually it means how many more moves we are going to explore. So you pass depth - 1 in the recursive calls, and when it reaches 0 you return the static evaluation function (or you call quiescence search).
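That convention can be sketched with a toy negamax (everything here is contrived for illustration: the "game" just adds 1 or 2 to an integer state, and the evaluation is the raw state value, enough to exercise the recursion):

```java
// Sketch of the conventional depth convention: pass depth - 1 on the
// recursive call and stop when depth reaches 0 (the horizon).
public class DepthConvention {
    static int negamax(int state, int depth, int alpha, int beta) {
        if (depth == 0) {
            return state; // horizon: static evaluation (toy: the state itself)
        }
        int best = Integer.MIN_VALUE + 1;
        for (int move = 1; move <= 2; move++) {
            // depth - 1 with a negated, swapped window
            int val = -negamax(state + move, depth - 1, -beta, -alpha);
            if (val > best) best = val;
            if (best > alpha) alpha = best;
            if (alpha >= beta) break; // beta cutoff
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(negamax(0, 2, -1000, 1000)); // prints 3
    }
}
```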

As for the part where your program gets stuck in an infinite loop, you'll have to find your own bugs, sorry.

Ok, I changed the code you pointed out. As you can see from the snippet above, my version of negascout does depth + 1 in the recursive call, so it will never reach 0; that's why I changed the horizon check from 0 to curDepth (the current depth of the deepening loop).

The infinite loop resulted from some inconsistencies with the setting up of variables within the ID loop.

I've run a few tests and I noticed that with ID, the number of moves searched is around 4,000-6,000, compared to over 200,000 when using just negascout without ID. Is it possible that there is such a big cut?


Now the next step is to add the history heuristic. If I understand correctly, hh is just a table whose counter is incremented on every beta cutoff. Then in the next call to the recursive function, the generated moves are sorted by the counter value (or by weights depending on the move type) for that index. Am I right?

So in the case of the nine men's morris game I'm doing, I will have this hh:

    int[][][] hh = new int[2][24][24]; // [color][posFrom][posTo]

During the placement stage I will only have one position, so when incrementing the hh counter, should I check what kind of move it is?
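The table described above could be sketched like this (the depth-squared bonus is just one common weighting, an assumption on my part; a placement could be stored with posFrom == posTo):

```java
// Sketch of a history-heuristic table indexed by [color][from][to],
// bumped on every beta cutoff and read back when ordering moves.
public class HistoryTable {
    // 2 colors, 24 board points in nine men's morris
    static int[][][] hh = new int[2][24][24];

    static void onBetaCutoff(int color, int from, int to, int depth) {
        // depth*depth rewards cutoffs found closer to the root more
        hh[color][from][to] += depth * depth;
    }

    static int score(int color, int from, int to) {
        // used as the sort key when ordering the move list
        return hh[color][from][to];
    }

    public static void main(String[] args) {
        onBetaCutoff(0, 3, 7, 4);
        onBetaCutoff(0, 3, 7, 2);
        System.out.println(score(0, 3, 7)); // 16 + 4 = 20
    }
}
```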

At this point, I would try to make sure that what you've written so far is working correctly. Is it still the case that the program doesn't play better at higher depths?

I still have problems with the loop. The 4,000-6,000 moves searched were because the recursive function was only searching to depth 1. That has been fixed, but the AI plays worse than without ID. For example, in some cases it loops through all 99 depths but finds nothing.

I tried changing the aspiration delta but it didn't help.

Could it be that the best move is placed at the top of the list and the search keeps searching the same path?


I've finally managed to implement ID into the search and it's working.  I have a couple of questions:

1. Killer moves are local to the current search. Should the TT also be local to the current search or should it be for the whole game?

2. I also implemented transposition move in the search (before normal negascout search starts). However an assert is failing on makeMove because the current position doesn't belong to the current player on turn. Should I even bother with an assert there?

3. I would like to implement history heuristics. Are these local to the current search or should they be for the whole game?


I've finally managed to implement ID into the search and it's working.  I have a couple of questions:

1. Killer moves are local to the current search. Should the TT also be local to the current search or should it be for the whole game?

The TT should be for the whole game, or you'll forget what you learned in the previous search, which is not a good idea.

2. I also implemented transposition move in the search (before normal negascout search starts). However an assert is failing on makeMove because the current position doesn't belong to the current player on turn. Should I even bother with an assert there?

So you have an assert that fails for reasons you don't quite understand? It sounds like removing it is a horrible idea.

3. I would like to implement history heuristics. Are these local to the current search or should they be for the whole game?

I make them for the whole game, but I divide every entry by 4 at the beginning of each search, so old entries that are no longer relevant can decay. If you were to make them local to the current search, I think they would work just fine too.
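That aging step could be sketched as follows (assuming the 2x24x24 history layout discussed earlier in the thread):

```java
// Sketch of history aging: before each new search, divide every entry
// by 4 so information from old searches decays instead of dominating.
public class HistoryAging {
    static void ageHistory(int[][][] hh) {
        for (int[][] byFrom : hh)
            for (int[] byTo : byFrom)
                for (int i = 0; i < byTo.length; i++)
                    byTo[i] /= 4;
    }

    public static void main(String[] args) {
        int[][][] hh = new int[2][24][24];
        hh[0][1][2] = 100;
        ageHistory(hh); // call this at the start of each search
        System.out.println(hh[0][1][2]); // 25
    }
}
```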

Good answer for question 2! I will need to find out what the problem is and solve it.

For the history table, in the case of nine men's morris, where you have placement of pieces, moving, and also placement+capture and move+capture: is it wise to have a 4D array, or should I have two 3D arrays (one for each player)?

When sorting by history, I assume the moves are sorted based on the array of the player whose turn it is at that stage of the search. Right?

A small question about iterative deepening: I have a time limit of 2s and the search goes 7 plies deep, but sometimes it goes 8 or even 9, and because the time required to complete the search at that depth is more than 2s, it can take up to 6s to finish. Should I also quit searching inside negascout by checking outOfTime() in the move loop, or is it wise to let it finish the search at that depth?


For the history table, in the case of nine men's morris, where you have placement of pieces, moving, and also placement+capture and move+capture: is it wise to have a 4D array, or should I have two 3D arrays (one for each player)?

For nine men's morris I would use the history heuristic only for non-mill-forming moves, similarly to how the history heuristic is used only for non-capturing moves in chess. You can deal with placements as moves with a special 'from' value, or by setting 'from' and 'to' to the same value.

When sorting by history, I assume the moves are sorted based on the array of the player whose turn it is at that stage of the search. Right?

Yes.

A small question about iterative deepening: I have a time limit of 2s and the search goes 7 plies deep, but sometimes it goes 8 or even 9, and because the time required to complete the search at that depth is more than 2s, it can take up to 6s to finish. Should I also quit searching inside negascout by checking outOfTime() in the move loop, or is it wise to let it finish the search at that depth?

This is tricky. It depends on how you want to control time. I can tell you what I do, but there might be better solutions.

In my chess engine I use UCI to communicate with the GUI, and time control is part of the protocol. So I have to be flexible and be able to respect several different clock types: infinite search, fixed time per move, fixed depth, a certain time for the next certain number of moves, a certain time for the rest of the game, perhaps with a time increment every time you move (a.k.a. a Fischer clock). In the end I translate all of those into four numbers:

- a maximum depth;
- a desired time to spend on this move, which I use to decide whether I dare to search one ply deeper or not;
- a hard time limit in case I get stuck in a deep search (set to 5 times the desired time; this is useful when the transposition tables bring you immediately to some large depth and then you don't have time to finish the next one);
- a hard time limit to avoid running out of time on the clock.

I have two hard limits because I modify the desired time per move when I detect a difficult situation (basically, when the score drops), and I scale the first hard limit with it, but not the second.
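Those four numbers could be sketched like this (all names, and the half-budget rule for starting a deeper iteration, are my own assumptions, not from any particular engine):

```java
// Sketch of a time budget with a depth cap, a desired time per move,
// a soft hard limit that scales with the desired time, and an absolute
// hard limit that protects the game clock.
public class TimeBudget {
    int maxDepth;
    long desiredMillis;      // decides whether to start one ply deeper
    long softHardMillis;     // 5x desired; rescaled when desired changes
    long absoluteHardMillis; // never exceeded

    TimeBudget(int maxDepth, long desiredMillis, long absoluteHardMillis) {
        this.maxDepth = maxDepth;
        this.absoluteHardMillis = absoluteHardMillis;
        setDesired(desiredMillis);
    }

    // E.g. when the score drops, allot more time; the soft limit follows.
    final void setDesired(long desiredMillis) {
        this.desiredMillis = desiredMillis;
        this.softHardMillis = 5 * desiredMillis;
    }

    boolean mayStartDeeperIteration(long elapsedMillis) {
        // Assumed heuristic: a deeper iteration usually costs more than all
        // previous ones combined, so don't start one past ~half the budget.
        return elapsedMillis < desiredMillis / 2;
    }

    boolean mustStop(long elapsedMillis) {
        return elapsedMillis >= Math.min(softHardMillis, absoluteHardMillis);
    }

    public static void main(String[] args) {
        TimeBudget tb = new TimeBudget(100, 2000, 30000);
        System.out.println(tb.mayStartDeeperIteration(500)); // true
        System.out.println(tb.mustStop(10000));              // true
    }
}
```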
