cryo75

Member Since 04 Dec 2002
Offline Last Active Jul 19 2013 03:39 AM

Topics I've Started

Negascout behaves oddly

04 July 2013 - 04:28 AM

I have a negascout function inside an iterative-deepening framework with aspiration windows. The negascout function uses move ordering, transposition tables, killer moves and history heuristics. The game is Nine Men's Morris (yes, still working on it in my free time!).

 

The problem is that over 70% of the time the AI is not aggressive, i.e. even when it has a chance to capture pieces, it doesn't.

 

This is the function:

	private int negaScout(int curDepth, int maxDepth, int alpha, int beta) 
	{
		if (curDepth >= maxDepth || board.getWon() != 0)
			return board.getScore();

        //Check if the move is in the TT
        long key = board.getZobristKey();
        TTEntry entry = tt.probe(key, curDepth);
        if (entry != null)
        {
        	switch (entry.boundType)
        	{
            	case TTEntry.BOUND_EXACT:
                    return entry.score;

            	case TTEntry.BOUND_ALPHA:
                	alpha = Math.max(alpha, entry.score);
            		break;

            	case TTEntry.BOUND_BETA:
            		beta = Math.min(beta,  entry.score);
            		break;
        	}
        	
        	if (alpha >= beta)
        		return entry.score;
        }
		
		int val = 0;
 		int bestScore = -INFINITY;
		
		Move bestLocalMove = null;
		boolean foundPV = false;
		
		List<Move> moves = sortMoves(board, curDepth, false);
		for (int i = 0, n = moves.size(); i < n; i++) 
		{
			Move move = moves.get(i);
			
			//PV has been found
			if (alpha < bestScore) 
			{ 
				alpha = bestScore; 
				foundPV = true; 
			} 
			
			board.make(move, true);
			
			if (foundPV) 
			{
				//Zero window
				val = -negaScout(curDepth + 1, maxDepth, -alpha - 1, -alpha);
				if (val > alpha && val < beta) 
					val = -negaScout(curDepth + 1, maxDepth, -beta, -alpha);
			} 
			else	
				val = -negaScout(curDepth + 1, maxDepth, -beta, -alpha);
			
			board.undo(move, true);
			
			//Alpha has improved
			if (val > bestScore) 
			{
				bestScore = val;
				bestLocalMove = move;
				
				//Beta cut-off
				if (bestScore >= beta)
					break;
			}
		}

		//We have the current best move so far at the root
		if (curDepth == 0) bestMove = bestLocalMove;
		
		//Set TT entry flag
		byte flag = bestScore <= alpha ? TTEntry.BOUND_BETA : 
			        bestScore >= beta ? TTEntry.BOUND_ALPHA : TTEntry.BOUND_EXACT;
		
        //Store the move in the TT
        tt.set(key, curDepth, bestScore, flag, bestLocalMove);

        if (bestLocalMove != null)
        {
	    	//Add to killer moves
	    	killers.add(bestLocalMove, curDepth);
			
	        //Update history heuristics for non-captures
			if (bestLocalMove.cellRemove == -1)
				history[bestLocalMove.player][bestLocalMove.cellFrom + 1][bestLocalMove.cellTo + 1] += 1<<(maxDepth-curDepth);
        }
		
		return bestScore;
	}

This is the transposition table:

public class TranspositionTable 
{
	private TTEntry[] tt;
	
    public TranspositionTable(int size)
    {
    	tt = new TTEntry[size];
    }
    
    public TTEntry probe(long key, int depth)
    {
        TTEntry entry = tt[(int)(key % tt.length)];

        if (entry != null && entry.depth <= depth) 
        	return entry;

        return null;
    }     
    
    public void set(long key, int depth, int val, byte boundType, Move move)
    {
    	int pos = (int)(key % tt.length);
        tt[pos] = new TTEntry(key, depth, val, boundType, move);
    }
}

As you can see in the transposition table, the 'set' function always replaces the existing entry. When I probe, I return the entry only if its depth is less than or equal to the requested depth. By depth here I mean depth from the root node, so the smaller the depth, the nearer the entry is to the root.
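For comparison, here is a minimal sketch of the convention many engines use: store the *remaining* search depth (so an entry is reusable when it was searched at least as deep as needed now), verify the full key on probe so hash-index collisions don't return a wrong entry, and label a fail-low score as an upper bound and a fail-high score as a lower bound. Note also that `key % tt.length` is negative for negative Zobrist keys, which would throw on array access. All names here (`SimpleTT`, `probe`, `store`) are illustrative, not the author's code.

```java
// Sketch of a conventional transposition-table scheme.
// Stores *remaining* depth and the full key, so a probe only succeeds
// when the key matches and the stored search was at least as deep.
public class SimpleTT {
    public static final byte EXACT = 0;
    public static final byte UPPER = 1; // score <= alpha at store time (fail-low)
    public static final byte LOWER = 2; // score >= beta at store time (fail-high)

    public static class Entry {
        public final long key;
        public final int depth;   // remaining depth searched
        public final int score;
        public final byte bound;
        public Entry(long key, int depth, int score, byte bound) {
            this.key = key; this.depth = depth; this.score = score; this.bound = bound;
        }
    }

    private final Entry[] table;

    public SimpleTT(int size) { table = new Entry[size]; }

    // floorMod keeps the index non-negative even for negative Zobrist keys.
    private int index(long key) {
        return (int) Math.floorMod(key, (long) table.length);
    }

    /** Returns the entry only if the key matches and it was searched deep enough. */
    public Entry probe(long key, int remainingDepth) {
        Entry e = table[index(key)];
        if (e != null && e.key == key && e.depth >= remainingDepth)
            return e;
        return null;
    }

    public void store(long key, int remainingDepth, int score, byte bound) {
        table[index(key)] = new Entry(key, remainingDepth, score, bound);
    }
}
```

The full-key check matters: two different positions can hash to the same slot, and returning the wrong entry's score is exactly the kind of thing that makes a search quietly misbehave.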

 

Is it possible that I'm probing/setting wrong values, or could the problem be in the part of the code where the 'Alpha has improved' comment is?

 

Thanks,

Ivan


Speeding up evaluation function

17 June 2013 - 10:57 PM

Hi,

 

My abstract strategy game AI implements negascout in an iterative deepening framework, using zobrist keys and killer moves. I use bitboard representation for my board.

 

However, when running a version of the game on Android devices, I can only search to a depth of 4; from depth 5 onward the search takes too long (3-4 seconds per move).

 

I would like to reach at least depth 5, so I was wondering whether it would make sense to store the scores of previously evaluated positions in some kind of table and look them up quickly. If a score is not found, the normal eval function kicks in.

 

Does this make sense, and more importantly, will it bring performance benefits?
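Since the board already produces a Zobrist key, the table described above can be sketched as a fixed-size cache indexed by that key, storing the key alongside the score so collisions are detected. Whether it pays off depends on how expensive the evaluation is relative to an array lookup; only profiling on the target device can confirm. A sketch under those assumptions (`EvalCache` and its method names are mine, not from the posts):

```java
// Sketch of an evaluation cache keyed by Zobrist hash.
// On a hit we skip the full evaluation; on a miss we compute and store.
// Parallel primitive arrays avoid per-entry object allocation, which
// matters on Android where GC pauses hurt.
public class EvalCache {
    private final long[] keys;
    private final int[] scores;
    private final boolean[] used;

    public EvalCache(int size) {
        keys = new long[size];
        scores = new int[size];
        used = new boolean[size];
    }

    private int index(long key) {
        return (int) Math.floorMod(key, (long) keys.length);
    }

    /** Returns the cached score, or computes it via the supplied evaluator on a miss. */
    public int getOrCompute(long key, java.util.function.LongToIntFunction evaluate) {
        int i = index(key);
        if (used[i] && keys[i] == key)
            return scores[i];               // cache hit: skip the expensive eval
        int score = evaluate.applyAsInt(key);
        keys[i] = key;                      // miss: compute, then overwrite the slot
        scores[i] = score;
        used[i] = true;
        return score;
    }
}
```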

Thanks

 


How to weaken AI?

25 March 2013 - 07:54 AM

What is a good way to weaken an AI's strength when using Negascout?

 

1. Decrease search depth

2. Apply noise to the evaluation score

3. Reduce the number of generated moves
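All three options work; noise on the evaluation score tends to feel the most natural, because the AI still plays plausible moves but occasionally prefers a slightly worse one, whereas a shallow depth makes it blunder tactically. A sketch of option 2, assuming a per-game seeded `Random` (class and field names here are illustrative):

```java
import java.util.Random;

// Sketch: weaken the AI by adding bounded, symmetric random noise to
// the evaluation. The wider the noise relative to typical score
// differences, the weaker (and more erratic) the play.
public class NoisyEval {
    private final Random rng;
    private final int noiseAmplitude; // 0 = full strength

    public NoisyEval(long seed, int noiseAmplitude) {
        this.rng = new Random(seed);
        this.noiseAmplitude = noiseAmplitude;
    }

    /** Perturbs a true evaluation score by noise in [-amplitude, +amplitude]. */
    public int apply(int trueScore) {
        if (noiseAmplitude == 0)
            return trueScore;
        int noise = rng.nextInt(2 * noiseAmplitude + 1) - noiseAmplitude;
        return trueScore + noise;
    }
}
```

A nice property of this approach is that the amplitude gives a continuous difficulty dial, whereas search depth only changes in coarse steps.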


Improving move ordering

19 March 2013 - 08:54 AM

This is my basic move ordering function:

 

    private List<Move> sortMoves(List<Move> moves, int depth)
    {
    	//Create new sorted list
    	List<Move> sorted = new ArrayList<Move>();
    	
    	//Exit if original list is empty
    	if (moves.size() == 0)
    		return sorted;
    	
    	//Temporary lists to hold killer moves and remaining moves
    	List<Move> primary = new ArrayList<Move>();
    	List<Move> secondary = new ArrayList<Move>();
    	List<Move> rest = new ArrayList<Move>();
    	
		//Loop through the moves and add killer moves first
		for(int i = 0, n = moves.size(); i < n; i++)
		{
			if (killers.primary[depth] != null && moves.get(i).equals(killers.primary[depth]))
				primary.add(moves.get(i));			

			else if (killers.secondary[depth] != null && moves.get(i).equals(killers.secondary[depth]))
				secondary.add(moves.get(i));

			else
				rest.add(moves.get(i));
		}
    	
		//Add the moves in descending order of priority
    	sorted.addAll(primary);
    	sorted.addAll(secondary);
    	sorted.addAll(rest);
    	
    	//Return the sorted moves
    	return sorted;
    }

The moves parameter contains the list of generated moves (already sorted on generation by an assigned weight). The killers hold a primary and a secondary killer move for each depth.

 

The code works, and although I haven't profiled it, I believe it can be improved. Any suggestions?
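One common improvement is to avoid building three intermediate lists: give each move a single integer priority (its generation-time weight plus a large bonus for killer matches) and sort the original list once. A sketch under the assumption that `Move` has an `equals` comparing squares and a mutable priority field (the `Move` shape and all names below are mine, not the posted code):

```java
import java.util.Comparator;
import java.util.List;

// Sketch: order moves by one integer priority instead of building
// three temporary lists. Killer matches get large bonuses so they
// sort first; everything else keeps its generation-time weight.
public class MoveOrdering {
    public static class Move {
        public final int from, to;
        public int priority; // generation-time weight; bumped for killers
        public Move(int from, int to, int priority) {
            this.from = from; this.to = to; this.priority = priority;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Move)) return false;
            Move m = (Move) o;
            return m.from == from && m.to == to;
        }
        @Override public int hashCode() { return 31 * from + to; }
    }

    static final int PRIMARY_KILLER_BONUS = 1_000_000;
    static final int SECONDARY_KILLER_BONUS = 500_000;

    /** Sorts in place: primary killer first, then secondary, then by weight. */
    public static void sortMoves(List<Move> moves, Move primaryKiller, Move secondaryKiller) {
        for (Move m : moves) {
            if (m.equals(primaryKiller))        m.priority += PRIMARY_KILLER_BONUS;
            else if (m.equals(secondaryKiller)) m.priority += SECONDARY_KILLER_BONUS;
        }
        moves.sort(Comparator.comparingInt((Move m) -> m.priority).reversed());
    }
}
```

Sorting in place also avoids the garbage generated by four `ArrayList` allocations per node, which adds up fast in a tree search.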


Thanks,

C

 


Iterative Deepening

19 March 2013 - 04:13 AM

Is this valid iterative deepening code?

 

   public Move GetBestMove(IBoard board, int depth)
    {        
    	this.maxDepth = 100;
    	this.maxTime = 9000;
    	
        //Set initial window
    	int alpha = -INFINITY, beta = INFINITY;
        
        //The move that will be returned
        Move bestMove = null;      
        List<Move> moves = board.getMoves();
        
        //Get the time search has started
        long startTime = System.nanoTime();
        
        //Iterate through the depths
        for (int i = 0; i < maxDepth; i++)
        {
        	bestMove = negaScoutRoot(board, alpha, beta);
        	int val = bestMove.score;
        	
        	//Time check
        	double curTime = (System.nanoTime() - startTime) / 1e6;
        	if (curTime >= maxTime)
        		return bestMove;
        	
        	//Score missed aspiration window
        	if (val <= alpha || val >= beta)
        	{
        		alpha = -INFINITY;
        		beta = INFINITY;
        		
        		//Go to next iteration
        		continue;
        	}
        	
        	//Set new aspiration window
        	alpha = val - ASPIRATION_SIZE;
        	if (alpha < -INFINITY)
        		alpha = -INFINITY;
        	
        	beta = val + ASPIRATION_SIZE;
        	if (beta > INFINITY)
        		beta = INFINITY;
        }
		
		//Return the move
        return bestMove;
    }
    
    private Move negaScoutRoot(IBoard board, int alpha, int beta)
    {        
        //The move that will be returned
        Move bestMove = null;      
        List<Move> moves = board.getMoves();
        
        for (Move move : moves)
        {
         	//Make the move
        	board.make(move, true);

        	//Search
            int val = -negascout(board, 0, alpha, beta);
               
            //Undo the move
            board.undo(move);
            
            //Keep best move
            if (val > alpha)
            {
                alpha = val;
                
                bestMove = move;
                bestMove.score = alpha;
            }
        }
		
		//Return the move
        return bestMove;
    }

 

I also added an aspiration window. The root always searches from the current board state, but with a different window each iteration.
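For reference, a typical shape for this loop passes the current depth into the search each iteration and, on an aspiration miss, re-searches the *same* depth with a full window rather than moving on to the next one. A sketch with the search abstracted behind an interface (all names here are mine; the constants mirror the ones in the posted code):

```java
// Sketch of iterative deepening with aspiration windows.
// On a window miss, the SAME depth is re-searched with a full window,
// and the depth is passed into the search each iteration.
public class DeepeningDriver {
    public interface Search { int run(int depth, int alpha, int beta); }

    static final int INFINITY = 1_000_000;
    static final int ASPIRATION_SIZE = 50;

    public static int search(Search search, int maxDepth, long budgetMillis) {
        long start = System.nanoTime();
        int alpha = -INFINITY, beta = INFINITY;
        int best = 0;
        for (int depth = 1; depth <= maxDepth; depth++) {
            int val = search.run(depth, alpha, beta);
            if (val <= alpha || val >= beta) {
                // Missed the window: redo this depth with a full window.
                alpha = -INFINITY;
                beta = INFINITY;
                depth--;
                continue;
            }
            best = val;
            // Centre the next window on the last exact score.
            alpha = val - ASPIRATION_SIZE;
            beta = val + ASPIRATION_SIZE;
            if ((System.nanoTime() - start) / 1_000_000 >= budgetMillis)
                break;
        }
        return best;
    }
}
```

Without the re-search, a depth whose score falls outside the window produces an unreliable best move, and the loop simply keeps it and moves on.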

