Static eval function for Magic the Gathering type game

Started by
2 comments, last by mtgrares 15 years, 10 months ago
I'm working on an AI opponent for an MtG-type game (Dreamblade), and things are going great. I originally wrote it in Python and ported it to C++ for speed.

The issue I'm running into is with the static evaluation function. Its results are good, but I have a fixed-depth search, which in some cases lets the AI get further into the game (think phases in Magic) and see that it has a better move available later on. For instance, the AI is given the choice to either make a play or continue on. By continuing on it gets further into the turn sooner and can reach a move that generates a better score, even though it could still get there by making the other play first and then continuing. As a result I see drastic differences in the AI's choices depending on the depth, based on whether the search crosses one of these thresholds (usually continuing on to the next phase).

Is there some standard way to deal with this? I'm considering making the depth the number of phases to search through, so the minimax always ends at the same part of the turn and leaf evaluations are comparable. I don't feel like I'm describing it very well, so I hope someone can decipher what I'm saying. :)

~telengard
Other than forcing the tree to have all the leaves at the same phase, there are two other things you can try:
* Write a better evaluation function that gives good values at every phase.
* Use some sort of quiescence search (allow moves that can potentially improve the score a lot for the player to move).
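A minimal sketch of the quiescence idea, in Python for brevity. Everything here (evaluate, noisy_moves, how moves are applied) is a hypothetical stand-in for your game's own API, not a drop-in implementation:

```python
def quiescence(state, alpha, beta, evaluate, noisy_moves):
    """Keep searching only 'noisy' moves (ones that can swing the score a
    lot for the player to move) until the position is quiet, then trust
    the static eval. Negamax convention: evaluate() is from the point of
    view of the side to move."""
    stand_pat = evaluate(state)
    if stand_pat >= beta:
        return beta          # position already too good; opponent avoids it
    alpha = max(alpha, stand_pat)
    for move in noisy_moves(state):
        # move is assumed to be a function producing the successor state
        score = -quiescence(move(state), -beta, -alpha, evaluate, noisy_moves)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

At the normal depth limit you call this instead of the raw evaluation function, so a leaf that sits in the middle of a big score swing gets resolved before being scored.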

You could also use a Monte Carlo method, which doesn't require an evaluation function at all. But that seems too far from the way you are thinking about the problem, so you should probably ignore this option.
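For completeness, a flat Monte Carlo sketch (again in Python, with every function name a hypothetical placeholder for your game's API): each candidate move is judged by the win rate of random playouts from the resulting position, so no static evaluation function is needed.

```python
def monte_carlo_choose(state, legal_moves, apply_move, random_playout,
                       n_playouts=100):
    """Pick the move whose random playouts win most often.
    random_playout(state) should play random moves to the end of the game
    and return 1 for a win (for the chooser) or 0 for a loss."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(state):
        nxt = apply_move(state, move)
        wins = sum(random_playout(nxt) for _ in range(n_playouts))
        rate = wins / n_playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```

The trade-off is cost: playouts are cheap per step but you need many of them per move, and a purely random playout can misjudge positions that require precise play.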

Quote: Original post by alvaro
Other than forcing the tree to have all the leaves at the same phase, there are two other things you can try: [...]


Hi thanks for the info,

I've fiddled with the evaluation function, but it seems that responding to the current phase/turn of the game makes scoring the board much more difficult; there are lots of little details to take into account. I have a pretty coarse-grained eval function that works well, but it needs to compare apples to apples. It's also very fast since it doesn't do all that much.

It seems the fact that there isn't one move per player's play means I need to handle depth differently. What I'm considering is to stop going deeper once an executed move crosses into a different phase. The depth given to the application would then be the number of phases to look ahead instead of how many plies to search. I can't foresee any issues with this, even taking into account phases being skipped or additional phases being added by pieces' abilities.
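That phase-counting idea can be sketched roughly like this (Python for brevity; evaluate, legal_moves, apply_move, and phase_of are all hypothetical stand-ins for the game's API, and real code would add alpha-beta pruning):

```python
def minimax_phase_limited(state, phases_left, maximizing, evaluate,
                          legal_moves, apply_move, phase_of):
    """Depth measured in phase transitions instead of plies, so every
    leaf sits at the same point in the turn and the static eval always
    compares like with like."""
    if phases_left == 0 or not legal_moves(state):
        return evaluate(state)
    best = float('-inf') if maximizing else float('inf')
    for move in legal_moves(state):
        nxt = apply_move(state, move)
        # only spend depth budget when the move crossed into a new phase
        cost = 1 if phase_of(nxt) != phase_of(state) else 0
        score = minimax_phase_limited(nxt, phases_left - cost,
                                      not maximizing, evaluate,
                                      legal_moves, apply_move, phase_of)
        best = max(best, score) if maximizing else min(best, score)
    return best
```

One caveat with this framing: if a phase can contain arbitrarily long move chains, the search can still blow up within a phase, so a secondary ply cap may be worth keeping as a safety net.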

The Monte Carlo thing seems interesting. The AI stuff is pretty self contained so I can play with different styles of AI.

~telengard

Keep up the good work; it sounds like it would be really fun. I wrote the Magic implementation MTG Forge and its AI. I actually coded around 50 cards first and then added the AI later, and the AI I use is really, really simple. I'd say program a very basic AI or search function, then play a few games against the computer so you get an idea of what to improve.

