Mouhyi

  1. Right now I am using iterative deepening alpha-beta minimax with memory (MTD(f)), with Monte Carlo search as the evaluation function, and it is performing better than both my earlier minimax algorithm and a basic Monte Carlo search. Early in the game the algorithm behaves like a Monte Carlo search: the search depth stays in the range 1-4 (the branching factor is ~122, and evaluating leaf nodes with Monte Carlo playouts is slow), but since the node evaluations are stored in a transposition table, they guide minimax later, when the search can go deep toward the end of the game. I am trying to find a good combination of Monte Carlo and minimax, especially since the current implementation suffers from the horizon effect and has a very small search depth at the beginning. I am considering the following approaches to improve my player (a sketch of the current hybrid appears after post 2 below):
     - Improve the evaluation function (Alvaro's suggestions sound reasonable) and maybe add quiescence search to mitigate the horizon effect.
     - Reimplement the AI as a pure Monte Carlo search (its aheuristic nature might suit this game).
  2. I am implementing an AI player for this board game. I am using alpha-beta minimax with memory (MTD(f)) for the search, but I am having trouble finding a good heuristic for the evaluation function. Even if I turn to reinforcement learning instead, I still need a good set of features to represent board states, since the state space is very large. How should I approach this problem? Thanks. (One feature-based sketch is the second code block below.)
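
A minimal sketch of the hybrid described in post 1: an MTD(f) driver over negamax alpha-beta with a transposition table, using random playouts as the leaf evaluation. The game interface (`legal_moves`, `play`, `is_terminal`, `score`, `key`) is entirely hypothetical, since the posts show no code; rollout values are rounded to an integer scale so MTD(f)'s unit-width windows converge, and they are cached per position so the repeated zero-window searches see consistent values.

```python
import random

INF = float("inf")
SCALE = 100  # rollout averages scaled to integers for MTD(f)'s unit windows

# Assumed game interface (not from the original posts): legal_moves(),
# play(move) -> new state, is_terminal(), score() -> +1/0/-1 for the side
# to move at a terminal position, key() -> hashable id that includes the
# side to move.

ROLLOUT_CACHE = {}  # cache playout results so each position is evaluated once

def rollout_value(state, n_playouts=30):
    """Monte Carlo evaluation: mean result of random playouts, from the
    perspective of the side to move in `state`."""
    key = state.key()
    if key not in ROLLOUT_CACHE:
        total = 0
        for _ in range(n_playouts):
            s, sign = state, 1
            while not s.is_terminal():
                s = s.play(random.choice(s.legal_moves()))
                sign = -sign                 # perspective flips every ply
            total += sign * s.score()
        ROLLOUT_CACHE[key] = round(SCALE * total / n_playouts)
    return ROLLOUT_CACHE[key]

TT = {}  # transposition table: key -> (depth, lower_bound, upper_bound)

def ab_with_memory(state, alpha, beta, depth):
    """Negamax alpha-beta that caches value bounds (the 'memory' in MTD(f))."""
    key = state.key()
    entry = TT.get(key)
    if entry is not None and entry[0] >= depth:
        _, lo, hi = entry
        if lo >= beta:
            return lo
        if hi <= alpha:
            return hi
        alpha, beta = max(alpha, lo), min(beta, hi)
    if state.is_terminal():
        g = SCALE * state.score()
    elif depth == 0:
        g = rollout_value(state)             # Monte Carlo as the evaluation
    else:
        g, a = -INF, alpha
        for move in state.legal_moves():
            g = max(g, -ab_with_memory(state.play(move), -beta, -a, depth - 1))
            a = max(a, g)
            if g >= beta:
                break                        # beta cutoff
    # Store what this search proved: an upper bound, a lower bound, or both.
    lo, hi = -INF, INF
    if g <= alpha:
        hi = g
    elif g >= beta:
        lo = g
    else:
        lo = hi = g
    old = TT.get(key)
    if old is not None and old[0] == depth:
        lo, hi = max(lo, old[1]), min(hi, old[2])  # merge bounds at same depth
    if old is None or old[0] <= depth:
        TT[key] = (depth, lo, hi)
    return g

def mtdf(state, guess, depth):
    """MTD(f): home in on the minimax value with zero-window searches."""
    g, lo, hi = guess, -INF, INF
    while lo < hi:
        beta = g + 1 if g == lo else g
        g = ab_with_memory(state, beta - 1, beta, depth)
        if g < beta:
            hi = g
        else:
            lo = g
    return g

def iterative_deepening(state, max_depth):
    """Each iteration seeds the next with its result, as MTD(f) expects."""
    guess = 0
    for d in range(1, max_depth + 1):
        guess = mtdf(state, guess, d)
    return guess
```

The transposition table is what makes the cheap shallow iterations pay off later, which is the effect post 1 describes; a real implementation would also time-limit the deepening loop and keep the table across moves.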
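
For the evaluation-function question in post 2, one standard route is a linear evaluation over a handful of hand-crafted features whose weights are learned by self-play TD(0), which avoids hand-tuning and keeps the representation far smaller than the raw state space. This is a sketch of that idea, not the poster's method: the features (`material_balance`, `mobility`, `center_control`) are placeholders, since the actual game is never named.

```python
def extract_features(state, player):
    """Map a board state to a small numeric vector (hypothetical features;
    the real ones must come from knowledge of the game)."""
    return [
        1.0,                              # bias term
        state.material_balance(player),   # assumed helper, not from the post
        state.mobility(player),           # e.g. legal-move-count difference
        state.center_control(player),     # game-specific placeholder
    ]

def evaluate(weights, state, player):
    """Linear evaluation: dot product of weights and features."""
    return sum(w * f for w, f in zip(weights, extract_features(state, player)))

def td0_update(weights, state, next_state, reward, player,
               alpha=0.01, gamma=1.0):
    """One TD(0) step on a linear value function: move V(state) toward
    reward + gamma * V(next_state)."""
    v = evaluate(weights, state, player)
    v_next = 0.0 if next_state.is_terminal() else evaluate(weights, next_state, player)
    delta = reward + gamma * v_next - v   # TD error
    feats = extract_features(state, player)
    return [w + alpha * delta * f for w, f in zip(weights, feats)]
```

Weights can start at zero; during self-play, each non-terminal transition is trained with reward 0 and the terminal transition with the game outcome. The learned `evaluate` could then replace, or be blended with, the Monte Carlo leaf evaluation in the first sketch.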