


#5054315 Heuristic for minimax - Board Game 'Odd'

Posted by Julius_Caesar on 17 April 2013 - 03:26 PM

Right now I am using iterative-deepening alpha-beta minimax with memory (MTD(f)), with a Monte Carlo search as the evaluation function, and it is performing better than both my earlier minimax algorithm and a basic Monte Carlo search.


The algorithm performs like a Monte Carlo search in the beginning: the search depth is only in the range [1-4] (the branching factor is ~122, and evaluating leaf nodes with Monte Carlo search takes time). But since it stores each node's evaluation in a transposition table, those entries guide minimax later, when the search can go deep towards the end of the game. I am trying to find a good combination of Monte Carlo and minimax, especially since the current implementation suffers from the horizon effect and has a very small search depth at the beginning.
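For concreteness, here is a minimal sketch of the combination described above: negamax alpha-beta with a transposition table, driven by MTD(f), with Monte Carlo rollouts as the leaf evaluation. Since Odd's rules aren't reproduced in this thread, a toy subtraction game (take 1 or 2 stones; whoever takes the last stone wins) stands in for the real board representation; everything game-specific here is an assumption, not the actual implementation.

```python
import random

INF = float('inf')

# --- toy game standing in for Odd (assumption, not the real rules) ---
def legal_moves(n):
    return [m for m in (1, 2) if m <= n]

def random_playout(n):
    """Random game to the end; +1 if the player to move at n wins."""
    turn = 1
    while n > 0:
        n -= random.choice(legal_moves(n))
        turn = -turn
    return -turn  # the player to move at n == 0 has lost

def mc_eval(n, playouts=20):
    """Leaf evaluation: average rollout result (negamax perspective)."""
    return sum(random_playout(n) for _ in range(playouts)) / playouts

# --- alpha-beta with memory + MTD(f) driver ---
def alpha_beta_mem(n, depth, alpha, beta, table):
    """Negamax alpha-beta that stores bound/exact entries in `table`."""
    alpha_orig = alpha
    key = (n, depth)
    if key in table:
        flag, value = table[key]
        if flag == 'exact':
            return value
        if flag == 'lower':
            alpha = max(alpha, value)
        else:  # 'upper'
            beta = min(beta, value)
        if alpha >= beta:
            return value
    if depth == 0 or n == 0:
        value = mc_eval(n)
    else:
        value = -INF
        for move in legal_moves(n):
            value = max(value, -alpha_beta_mem(n - move, depth - 1,
                                               -beta, -alpha, table))
            alpha = max(alpha, value)
            if alpha >= beta:
                break
    if value <= alpha_orig:
        table[key] = ('upper', value)
    elif value >= beta:
        table[key] = ('lower', value)
    else:
        table[key] = ('exact', value)
    return value

def mtdf(n, first_guess, depth, table):
    """Converge on the minimax value with zero-window searches."""
    g, lower, upper = first_guess, -INF, INF
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alpha_beta_mem(n, depth, beta - 1, beta, table)
        if g < beta:
            upper = g
        else:
            lower = g
    return g
```

Reusing the same `table` across iterative-deepening iterations (and across moves) is what lets the early, shallow Monte Carlo evaluations guide the deeper searches later in the game.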


I am thinking about the following approaches to improve my player:

- Improve the evaluation function (Alvaro's suggestions sound reasonable) and maybe use quiescence search to mitigate the horizon effect.

- Implement my AI using a pure Monte Carlo search (its aheuristic nature might work well for this game).
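The second option above, in its simplest (flat) form, could look something like this: for each legal move, run random playouts from the resulting position and pick the move with the best average outcome. The toy subtraction game (take 1 or 2 stones; whoever takes the last stone wins) is again a placeholder, since Odd's rules are not given in the thread.

```python
import random

def legal_moves(n):
    return [m for m in (1, 2) if m <= n]

def random_playout(n):
    """Play random moves to the end; +1 if the player to move at n wins."""
    turn = 1
    while n > 0:
        n -= random.choice(legal_moves(n))
        turn = -turn
    return -turn  # the player to move at n == 0 has lost

def flat_mc_move(n, playouts=500):
    """Pick the move whose resulting position looks worst for the opponent."""
    def score(move):
        child = n - move
        # playout value is from the opponent's perspective, so negate it
        return sum(-random_playout(child) for _ in range(playouts)) / playouts
    return max(legal_moves(n), key=score)
```

A full MCTS (e.g. UCT) replaces the uniform playout allocation with a tree that spends more playouts on promising moves, which usually matters at a branching factor around 122.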

#5053926 Heuristic for minimax - Board Game 'Odd'

Posted by Julius_Caesar on 16 April 2013 - 12:24 PM

I am implementing an AI player for this board game.

I am using alpha-beta minimax with memory (MTD(f)) for search, but I am having trouble finding a good heuristic for the evaluation function. Even if I use reinforcement learning, I still need a good set of features to represent the board states, since the state space is very large. How should I approach this problem?
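One common way to frame this is a linear evaluation over a small, hand-crafted feature vector, whose weights can then be tuned by hand or with TD-style reinforcement learning instead of learning over raw states. The features below are illustrative placeholders over a flat board encoded as a list (0 = empty, 1 = our stones, 2 = opponent stones), not features derived from Odd's actual rules.

```python
def piece_diff(board):
    """Our stones minus opponent stones."""
    return board.count(1) - board.count(2)

def mobility(board):
    """Number of empty cells left (a crude tempo proxy)."""
    return board.count(0)

FEATURES = [piece_diff, mobility]

def evaluate(board, weights):
    """Linear evaluation: weighted sum of the feature values."""
    return sum(w * fn(board) for w, fn in zip(weights, FEATURES))
```

For example, `evaluate([1, 1, 0], weights=[1.0, 0.1])` gives `2 * 1.0 + 1 * 0.1 = 2.1`. The design question then shifts from "what is a good heuristic?" to "what small set of measurements captures what matters in Odd?", which is a much easier target for learning in a large state space.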