cryobuzz75

Member
  • Content count

    12

Community Reputation

105 Neutral

About cryobuzz75

  • Rank
    Member
  1. AI and undoing moves

    That's similar to what I'm doing:

        private void placePiece(int cell) {
            this.board.getCells()[cell] = playerType;
            this.board.getActivePieces()[playerType]++;
        }

        private void undoPlacePiece(int cell) {
            this.board.getCells()[cell] = CellState.Empty;
            this.board.getActivePieces()[playerType]--;
        }

    But the activePieces count still grows out of proportion.
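For reference, the usual cause of an asymmetric counter in a make/undo pair like the one above is that `playerType` has already been switched to the other side by the time the undo runs, so the decrement hits the wrong player's counter. A minimal sketch (names and representation are illustrative, not the poster's actual API) that reads the owner back from the cell, so the decrement always mirrors the earlier increment:

```java
import java.util.Arrays;

public class PlacementDemo {
    static final int EMPTY = -1;
    int[] cells = new int[24];          // board cells: EMPTY or a player index
    int[] activePieces = new int[2];    // piece count per player

    PlacementDemo() { Arrays.fill(cells, EMPTY); }

    void placePiece(int cell, int player) {
        cells[cell] = player;
        activePieces[player]++;
    }

    void undoPlacePiece(int cell) {
        int owner = cells[cell];        // who actually occupies the cell
        activePieces[owner]--;          // symmetric to the increment above
        cells[cell] = EMPTY;
    }

    public static void main(String[] args) {
        PlacementDemo b = new PlacementDemo();
        b.placePiece(3, 0);
        b.placePiece(7, 1);
        b.undoPlacePiece(7);            // correct counter even if side to move changed
        b.undoPlacePiece(3);
        assert b.activePieces[0] == 0 && b.activePieces[1] == 0;
        System.out.println("counters balanced");
    }
}
```

Because the owner is recovered from the board itself, this stays correct no matter how deeply make/undo pairs are nested during the search.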
  2. AI and undoing moves

    It looks like the search is partially working, but that's because I'm still not done with the evaluation function and move generation.   However, I come back to my first issue, where I have a variable holding the number of active pieces on the board for each player. Before the game starts, the variable is 0 for each player. After the first turn, it jumps to approx. 3000 for each player and keeps growing after every turn.   In makeMove I'm doing +1 and in undoMove I'm doing -1, so why does it increase that much?
  3. AI and undoing moves

    OK, so I replaced bestScore with alpha. That should make the search correct now?
  4. AI and undoing moves

    I tried to follow your suggestions, and this is what I came up with:

        public Move GetBestMove(IBoard board, int depth) {
            //Changes in alpha/beta values propagate. Increases the chance of alpha-beta pruning
            int alpha = Integer.MIN_VALUE, beta = Integer.MAX_VALUE;
            int val, bestScore = 0;
            //The move that will be returned
            Move bestMove = null;
            List<Move> moves = board.getMoves();
            for (Move move : moves) {
                //Make the move
                board.makeMove(move, true);
                val = -negamax(board, depth - 1, -beta, -alpha);
                //Undo the move
                board.undoMove(move);
                //Keep best move
                if (val > bestScore) {
                    bestScore = val;
                    bestMove = move;
                }
            }
            //Return the move
            return bestMove;
        }

        private int negamax(IBoard board, int depth, int alpha, int beta) {
            if (depth == 0)
                return board.getScore();
            if (board.getWon() != 0)
                return board.getScore();

            int val;
            List<Move> moves = board.getMoves();
            for (Move move : moves) {
                //Make the move
                board.makeMove(move, true);
                val = -negamax(board, depth - 1, -beta, -alpha);
                //Undo the move
                board.undoMove(move);
                //Alpha-Beta pruning
                if (val > alpha)
                    alpha = val;
                if (alpha >= beta)
                    return alpha;
            }
            return alpha;
        }

    Is this correct?
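For comparison, here is a minimal self-contained negamax root along the lines suggested in the replies (the types are illustrative stand-ins for the poster's IBoard/Move, not the real API): bestScore starts at a sentinel below any reachable score rather than 0, and alpha is raised in step with it so pruning inside the children stays sound.

```java
import java.util.List;

public class NegamaxRoot {
    // Stand-in for the poster's IBoard: score() is from the side to move's view.
    interface Board {
        List<Integer> moves();
        void make(int m);
        void undo(int m);
        int score();
    }

    static int bestMove(Board b, int depth) {
        int alpha = Integer.MIN_VALUE + 1, beta = Integer.MAX_VALUE;
        int bestScore = Integer.MIN_VALUE + 1;  // below any real score, unlike 0
        int best = -1;
        for (int m : b.moves()) {
            b.make(m);
            int val = -negamax(b, depth - 1, -beta, -alpha);
            b.undo(m);
            if (val > bestScore) { bestScore = val; best = m; }
            if (val > alpha) alpha = val;       // keep alpha in step with bestScore
        }
        return best;
    }

    static int negamax(Board b, int depth, int alpha, int beta) {
        List<Integer> moves = b.moves();
        if (depth == 0 || moves.isEmpty()) return b.score();
        for (int m : moves) {
            b.make(m);
            int val = -negamax(b, depth - 1, -beta, -alpha);
            b.undo(m);
            if (val > alpha) alpha = val;
            if (alpha >= beta) break;           // beta cutoff
        }
        return alpha;
    }

    public static void main(String[] args) {
        // Toy one-ply game: after a move the game ends, and score() is the
        // payoff seen by the opponent, so the root prefers the smallest one.
        java.util.ArrayDeque<Integer> hist = new java.util.ArrayDeque<>();
        Board b = new Board() {
            public List<Integer> moves() { return hist.isEmpty() ? List.of(0, 1, 2) : List.of(); }
            public void make(int m) { hist.push(m); }
            public void undo(int m) { hist.pop(); }
            public int score() { int[] p = {5, -3, 2}; return hist.isEmpty() ? 0 : p[hist.peek()]; }
        };
        assert bestMove(b, 1) == 1;   // -(-3) = 3 beats -5 and -2
        System.out.println("best move: " + bestMove(b, 1));
    }
}
```

Using `Integer.MIN_VALUE + 1` (not `MIN_VALUE`) matters because the value gets negated at each level, and negating `Integer.MIN_VALUE` overflows.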
  5. AI and undoing moves

    I have a variable called activePieces which holds the number of active pieces a player has. When I make a move that places a piece, I add 1 to it; in undoMove I subtract 1. My negamax function is:

        private int negamax(IBoard board, int depth, int alpha, int beta, Move[] bestMove) {
            bestMove[0] = null;
            if (depth == 0)
                return board.getScore();
            if (board.getWon() != 0)
                return board.getScore();

            List<Move> bestMoves = null;
            List<Move> moves = board.getMoves();
            if (depth == maxDepth) {
                bestMoves = new ArrayList<Move>();
                bestMoves.clear();
                bestMoves.add(moves.get(0));
            }

            int minimax = -board.getMaxScoreValue();
            int val;
            for (Move move : moves) {
                //Make the move
                board.makeMove(move, true);
                val = -negamax(board, depth - 1, -beta, -alpha, dummyMove);
                //Set the move score
                move.setScore(val);
                //Undo the move
                board.undoMove(move);

                if (val > minimax)
                    minimax = val;

                if (depth == maxDepth) {
                    if (val > bestMoves.get(0).getScore()) {
                        bestMoves.clear();
                        bestMoves.add(move);
                    } else if (val == bestMoves.get(0).getScore())
                        bestMoves.add(move);
                } else {
                    //Alpha-Beta pruning
                    if (val > alpha)
                        alpha = val;
                    if (alpha >= beta)
                        return alpha;
                }
            }

            if (depth == maxDepth) {
                bestMove[0] = bestMoves.get(0);
                for (int i = 1; i < bestMoves.size(); i++)
                    if (bestMoves.get(i).getScore() > bestMove[0].getScore())
                        bestMove[0] = bestMoves.get(i);
            }
            return minimax;
        }

    As you can see above, because undoMove is called after the recursive calls, the activePieces variable can go up to 8000 and above.
  6. AI and undoing moves

    Hi all,

    I'm writing a board game and using negamax for the AI. I'm using an undo function instead of copying the board state, in order to speed up the search.

    The evaluation function needs to consider each player's number of pieces on the board, those still to be placed, and those removed from the game. I could gather this information on every evaluation, but that means looping through the whole board, which would slow down the search.

    If I instead keep variables that are incremented in makeMove and decremented in undoMove, it still doesn't work, because of the recursive nature of the AI search.

    So what is the best way to maintain these variables and keep them in sync with the negamax search and the actual moves being made?

    Thanks,
    C
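One way to track down a desync like the one described above is a brute-force cross-check while testing: after every undoMove, recount the pieces from the board and compare against the incremental counters; the first position where they disagree pinpoints the unbalanced update. A sketch, assuming a simple `int[]` cell representation (0 = empty, 1/2 = the players), which is not necessarily the poster's layout:

```java
public class CounterCheck {
    // Recount a player's pieces directly from the board (the slow, trusted path).
    static int recount(int[] cells, int player) {
        int n = 0;
        for (int c : cells)
            if (c == player) n++;
        return n;
    }

    // Compare the incrementally maintained counters against a full recount.
    // Call this after undoMove during testing; remove it once the search is stable.
    static void verify(int[] cells, int[] activePieces) {
        for (int p = 1; p <= 2; p++) {
            if (activePieces[p - 1] != recount(cells, p))
                throw new IllegalStateException("counter out of sync for player " + p);
        }
    }

    public static void main(String[] args) {
        int[] cells = {1, 0, 2, 1, 0, 0};
        verify(cells, new int[] {2, 1});   // in sync: no exception
        System.out.println("counters verified");
    }
}
```

Once make/undo are perfectly symmetric (every makeMove matched by exactly one undoMove on the same state), the incremental counters stay correct through arbitrarily deep recursion, and the O(board) recount is no longer needed during search.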
  7. Negamax Ai for TicTacToe

    Thanks for the insight. So basically I start the scoring by calling getWon() and return maxScore, -maxScore or 0 depending on whether the game is won, lost or drawn. If the game isn't over, the evaluation function checks for other combinations.

    I will implement this in the Connect 4 I'm doing, to make sure that negamax works fine before adding extensions to it.
  8. Negamax Ai for TicTacToe

    So I guess I need to do a Connect4 to test negamax with an evaluation function.
  9. Negamax Ai for TicTacToe

    I changed the operands in getScore() as suggested, and now the AI looks like it's playing well.   So my question is: why should getScore() check for 3 in a line?
  10. Negamax Ai for TicTacToe

    This is getWon():

        public int getWon() {
            //If any winning row has three values that are the same (and not EMPTY),
            //then we have a winner
            for (byte c = 0; c < WINNING_POS.length; c++) {
                if ((cells[WINNING_POS[c][0]] != CellState.Empty) &&
                    (cells[WINNING_POS[c][0]] == cells[WINNING_POS[c][1]]) &&
                    (cells[WINNING_POS[c][1]] == cells[WINNING_POS[c][2]])) {
                    return cells[WINNING_POS[c][0]].ordinal();
                }
            }

            //Since nobody has won, check for a draw (no empty squares left)
            byte empty = 0;
            for (byte c = 0; c < cells.length; c++) {
                if (cells[c] == CellState.Empty) {
                    empty++;
                    break;
                }
            }
            if (empty == 0)
                return 3;

            //Since nobody has won and it isn't a tie, the game isn't over
            return 0;
        }

    getScore() checks for 2 in a line. Even if I change maxscore to e.g. 100, the AI still plays badly.
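Since the draw loop above breaks on the first empty cell, `empty` only ever reaches 0 or 1, so it acts as a boolean. For reference, a self-contained version of the same win/draw logic over a plain `int[]` board (0 = empty, 1/2 = the players; returns the winner's number, 3 for a draw, 0 for an ongoing game). This is a sketch with an explicit winning-lines table, not the poster's WINNING_POS:

```java
public class TicTacToeResult {
    // The eight winning lines of a 3x3 board, by cell index.
    static final int[][] LINES = {
        {0, 1, 2}, {3, 4, 5}, {6, 7, 8},   // rows
        {0, 3, 6}, {1, 4, 7}, {2, 5, 8},   // columns
        {0, 4, 8}, {2, 4, 6}               // diagonals
    };

    static int result(int[] cells) {
        for (int[] line : LINES) {
            int a = cells[line[0]];
            if (a != 0 && a == cells[line[1]] && a == cells[line[2]])
                return a;                   // three in a line: a wins
        }
        for (int c : cells)
            if (c == 0) return 0;           // an empty cell: game goes on
        return 3;                           // full board, no winner: draw
    }

    public static void main(String[] args) {
        assert result(new int[] {1, 1, 1, 2, 2, 0, 0, 0, 0}) == 1;  // top row win
        assert result(new int[] {1, 2, 1, 2, 1, 2, 2, 1, 2}) == 3;  // drawn board
        assert result(new int[9]) == 0;                              // empty board
        System.out.println("ok");
    }
}
```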
  11. Negamax Ai for TicTacToe

    Here's my Negamax class:

        public class Negamax {
            IBoard board;
            int maxDepth;
            private Move[] dummyMove = new Move[1];

            public Move GetBestMove(IBoard board, int depth) {
                maxDepth = depth;
                int alpha = -999999, beta = 999999;
                Move[] newMove = new Move[1];
                alphaBeta(board, depth, alpha, beta, newMove);
                return newMove[0];
            }

            private int alphaBeta(IBoard board, int depth, int alpha, int beta, Move[] bestMove) {
                bestMove[0] = null;
                if (depth == 0)
                    return board.getScore();
                if (board.getWon() != 0)
                    return board.getScore();

                List<Move> bestMoves = null;
                List<Move> moves = board.getMoves();
                if (depth == maxDepth) {
                    bestMoves = new ArrayList<Move>();
                    bestMoves.clear();
                    bestMoves.add(moves.get(0));
                }

                int minimax = -board.getMaxScoreValue();
                int val;
                for (Move move : moves) {
                    this.board = board.copy();
                    this.board.makeMove(move, true);
                    val = -alphaBeta(this.board, depth - 1, -beta, -alpha, dummyMove);
                    move.setScore(val);

                    if (val > minimax)
                        minimax = val;

                    if (depth == maxDepth) {
                        if (val > bestMoves.get(0).getScore()) {
                            bestMoves.clear();
                            bestMoves.add(move);
                        } else if (val == bestMoves.get(0).getScore())
                            bestMoves.add(move);
                    } else {
                        if (val > alpha)
                            alpha = val;
                        if (alpha >= beta)
                            return alpha;
                    }
                }

                if (depth == maxDepth) {
                    int rnd = MathUtils.random(bestMoves.size() - 1);
                    bestMove[0] = bestMoves.get(rnd);
                }
                return minimax;
            }
        }
  12. Negamax Ai for TicTacToe

    Hi all,

    I'm doing a simple TicTacToe so that I can implement a negamax algorithm which I can later use for other abstract games. However, I'm encountering problems whereby the AI doesn't play the best move, and it loses constantly. My suspect is the static evaluation function. Here it is:

        public int getScore() {
            int score = 0;
            CellState state = CellState.Empty;

            if ((cells[0] == cells[1] || cells[1] == cells[2]) && (cells[1] != CellState.Empty))
                state = cells[1];
            if ((cells[6] == cells[7] || cells[7] == cells[8]) && (cells[7] != CellState.Empty))
                state = cells[7];
            if ((cells[0] == cells[3] || cells[3] == cells[6]) && (cells[3] != CellState.Empty))
                state = cells[3];
            if ((cells[2] == cells[5] || cells[5] == cells[8]) && (cells[5] != CellState.Empty))
                state = cells[5];
            if (((cells[3] == cells[4] || cells[4] == cells[5]) && (cells[4] != CellState.Empty)) ||
                ((cells[1] == cells[4] || cells[4] == cells[7]) && (cells[4] != CellState.Empty)) ||
                ((cells[0] == cells[4] || cells[4] == cells[8]) && (cells[4] != CellState.Empty)) ||
                ((cells[2] == cells[4] || cells[4] == cells[6]) && (cells[4] != CellState.Empty))) {
                state = cells[4];
            }

            if (state == currentPlayer)
                score = getMaxScoreValue();
            else if (state == currentPlayer.getOpponent())
                score = -getMaxScoreValue();
            return score;
        }

    cells is just an array of size 9 that represents the board positions. CellState is just an enum with values {Empty(0), Player(1), Opponent(2)}. getMaxScoreValue() just returns the highest score (65536).

    Is this static function complete, or am I missing other conditions?

    Thanks,
    C
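As the later posts in this thread note ("I changed the operands in getScore() as suggested"), the `||`s in those pair tests fire on lines that merely share one mark. A hedged sketch (not the poster's class, and using a plain `int[]` board with 0 = empty, 1/2 = the players) of an evaluation that instead counts "open twos": lines with two of a player's marks and an empty third cell, i.e. genuine one-move threats:

```java
public class TicTacToeEval {
    static final int[][] LINES = {
        {0, 1, 2}, {3, 4, 5}, {6, 7, 8},   // rows
        {0, 3, 6}, {1, 4, 7}, {2, 5, 8},   // columns
        {0, 4, 8}, {2, 4, 6}               // diagonals
    };

    // Number of lines where `player` has two marks and the third cell is empty.
    static int openTwos(int[] cells, int player) {
        int count = 0;
        for (int[] line : LINES) {
            int mine = 0, empty = 0;
            for (int i : line) {
                if (cells[i] == player) mine++;
                else if (cells[i] == 0) empty++;
            }
            if (mine == 2 && empty == 1) count++;   // one move from winning
        }
        return count;
    }

    // Positive when the side to move has more live threats than the opponent.
    static int score(int[] cells, int toMove) {
        int opponent = 3 - toMove;
        return openTwos(cells, toMove) - openTwos(cells, opponent);
    }

    public static void main(String[] args) {
        int[] board = {1, 1, 0, 2, 0, 0, 0, 0, 0};  // player 1 threatens the top row
        assert score(board, 1) == 1;
        System.out.println("score for player 1: " + score(board, 1));
    }
}
```

Note that the `mine == 2 && empty == 1` test is what replaces the original's `cells[a] == cells[b] || cells[b] == cells[c]` pattern: it requires both marks to belong to the same player and the line to still be completable.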