# Designing AI for Quoridor


## Recommended Posts

Hello.

I'm learning programming and web development, and for my latest project I'm programming the board game Quoridor. For those who don't know it: Quoridor is played on a 9x9 board, and the first pawn to reach the opposite side of the board wins. On your turn you can either move your pawn or place a wall (each player has 10 walls), and each wall spans two squares.

So right now I have an A* search that finds my AI's fastest path, and I also use it to make sure no wall is placed in an illegal spot (an illegal wall would completely block a player from being able to reach the end). My problem is the wall-placing AI.
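For reference, that legality check is really just a reachability test: tentatively add the wall, then run a plain BFS (A* isn't even needed here) to see whether each pawn can still reach its goal row. A minimal sketch, with illustrative names and a simplified edge-blocking wall representation, not anyone's actual code:

```csharp
using System;
using System.Collections.Generic;

// Sketch only: a wall placement is legal iff every pawn can still reach its
// goal row afterwards. Walls are modelled as a set of blocked edges between
// orthogonally adjacent squares.
public static class WallLegality
{
    const int Size = 9;

    public static bool CanReachGoalRow(
        (int r, int c) pawn, int goalRow,
        HashSet<((int, int), (int, int))> blockedEdges)
    {
        var seen = new bool[Size, Size];
        var queue = new Queue<(int r, int c)>();
        queue.Enqueue(pawn);
        seen[pawn.r, pawn.c] = true;
        int[] dr = { 1, -1, 0, 0 }, dc = { 0, 0, 1, -1 };

        while (queue.Count > 0)
        {
            var (r, c) = queue.Dequeue();
            if (r == goalRow) return true;        // reached the goal row
            for (int i = 0; i < 4; i++)
            {
                int nr = r + dr[i], nc = c + dc[i];
                if (nr < 0 || nr >= Size || nc < 0 || nc >= Size) continue;
                if (seen[nr, nc]) continue;
                if (blockedEdges.Contains(((r, c), (nr, nc)))) continue; // wall in the way
                seen[nr, nc] = true;
                queue.Enqueue((nr, nc));
            }
        }
        return false;                             // goal row unreachable: wall is illegal
    }

    // Store both directions so the lookup works from either side of the wall.
    public static void Block(HashSet<((int, int), (int, int))> edges,
                             (int, int) a, (int, int) b)
    {
        edges.Add((a, b));
        edges.Add((b, a));
    }
}
```

A real wall blocks two adjacent edges at once, so placing one would call `Block` twice before running the check.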

My first attempt was a kind of state machine. After 3 turns (so that if both pawns only move, they are around the middle of the board) I check which pawn is closer to its goal. If the player is closer, I try putting a wall on every segment of their best route (or rather the first best route A* returns; I check multiple routes but only keep the first best one) and place the wall where the player's path length increases the most relative to how much it affects the AI's. If the AI is closer to the goal, it just moves.

The problem is that the AI rather stupidly and quickly runs out of walls, and doesn't place them in the best spots: since it only considers the best route, the walls don't line up in a way that really slows the player down; it's too direct. Also, one of the biggest strategies in Quoridor is placing walls defensively so your opponent can't cut off your own best path, and I have no idea how to even begin to code that.

So I'm now trying to think of a different way of approaching this problem. I'm interested in trying a genetic algorithm / neural network to predict moves and act accordingly, but for the life of me I can't figure out how to condense the game state into something I could even feed to one. I'm hoping someone can point me in a better direction.

##### Share on other sites
The first thing you should probably do is read what others have published about creating AI for this game. If you Google for "mastering Quoridor", you'll find two papers on the subject. I don't think they are particularly good, but they are a start.

Here's what my plan would be for this game:[list]
[*]Make your path-finding as fast as possible.
[*]Implement functions to make moves on the board and to undo them.
[*]Implement alpha-beta search using difference of path lengths plus a bit of randomness as the evaluation function.
[/list]
At this point you'll probably have a program that plays OK, at least if you implement the search reasonably well (no bugs, good ordering, hash tables, iterative deepening...). You can then work on improving the evaluation function. I would probably match my program against itself and make a database with the games. You can then use the database to make a model that predicts the result of the game given a position, and use that as evaluation function.

Having known the game for about 30 minutes, the only features of a position that I think would work in an evaluation function are the difference in path lengths and the number of walls left for each player. From a database of games you should be able to determine something like an equivalent number of moves for each number of walls in hand (a table of 10 numbers), so your evaluation function would simply be
[code]// Returns a score that is positive if things are looking good for White
int evaluation(Board const &b) {
static int const table[11] = {0, /* insert reasonable values here*/};
return 100* (b.path_length(Black) - b.path_length(White)) + table[b.walls_left(White)] - table[b.walls_left(Black)];
}[/code] Edited by alvaro

##### Share on other sites
[quote name='Wizuriel' timestamp='1346389717' post='4975041']
So I'm now trying to think of a different way of approaching this problem. I'm kind of interested with trying a genetic algorithm / neural network to try and predict moves and act accordingly, but for the life of me I can't think how to condense the data down to even start to do that.[/quote]
Nope. Just don't. That's not the solution.

##### Share on other sites
I've never played the game (it seems quite interesting though). But it seems to me your evaluations should break down to:[list]
[*]Is it more profitable for me to move or to block movement?
[/list]
To know the answer to this, you'll have to find a way to score the possible actions. If it were me, I'd ask myself:[list=1]
[*]How much is forward movement worth?
[*]How much is lateral movement worth?
[*]What's my best-case scenario for turns taken to reach the end?
[*]How much is blocking my opponent worth?
[*]How will blocking my opponent affect MY path?
[/list]
Then maybe add some weighting and 'personality' to the evaluations:[list=1]
[*]For an offensive player, blocking in the beginning may not be important. But it gets to be VERY important when the opponent starts getting closer to the end (hence the weighting).
[*]For a defensive player, maybe blocking is worth more in the beginning.
[*]Blocking a path is worth X, and the more turns it adds to the opponent's best path, the more valuable it is.
[*]etc.
[/list]
There's a plethora of ways that the AI can take these into account. I agree with IADaveMark though. Adding layers of obfuscation to your AI will not help anything. It may be a good practice in programming and A.I. theory. But for practicality and usability, you won't benefit from them.

[size=2][i][edit - I should have pre-read this before posting. My grammar was horrible!][/i][/size] Edited by ShadowValence

##### Share on other sites
From my experience with other two-player games (checkers, chess, Connect 4...), I don't think evaluating individual moves will make an AI opponent that is challenging enough. You typically do much better if you evaluate positions and use minimax to search.
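For anyone following along, the shape of that position-evaluating search looks like the sketch below: a generic negamax with alpha-beta over an explicit tree, with an illustrative `Node` type standing in for a real Quoridor position (not anyone's engine code). Scores are always from the side to move's point of view.

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-in for a real position; a leaf's Value is the static
// evaluation from the perspective of the player to move at that leaf.
public class Node
{
    public double Value;
    public List<Node> Children = new List<Node>();
}

public static class Search
{
    public static double NegaMax(Node node, int depth, double alpha, double beta)
    {
        if (depth == 0 || node.Children.Count == 0)
            return node.Value;
        double best = double.NegativeInfinity;
        foreach (var child in node.Children)
        {
            // Sign flip: a child's score is from the opponent's perspective.
            double score = -NegaMax(child, depth - 1, -beta, -alpha);
            if (score > best) best = score;
            if (best > alpha) alpha = best;
            if (alpha >= beta) break;   // beta cutoff: opponent won't allow this line
        }
        return best;
    }
}
```

The sign flip at each level is what converts the child's side-to-move score into the parent's; getting that wrong is the usual source of negamax bugs.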

##### Share on other sites
I'm not sure, though, how to evaluate positions in a minimax tree without the tree quickly growing out of control / taking forever.

For an easy scenario:

Player green should place the yellow wall. If he doesn't and just moves (following the yellow arrow, since that is his shortest path to the end), player blue will place a wall in front of him, forcing him to backtrack to the one path around that series of walls. In actual gameplay this gets more complex, since your best action might be placing several walls to make a safe path to the goal instead of moving or blocking your opponent.

So right now the way I'm (or I was) looking at it is:

If my path is shorter (using A* with Manhattan distance, weighted to avoid paths squeezed between two walls):
a) see if placing a wall defensively is a good idea (not sure how to do this without brute force, since the best position in this scenario usually won't be on either best path)
b) otherwise, move

If my path is longer:
- find my opponent's best path
- for each wall position along their best path, try placing a wall and recalculate both best paths
- pick the placement that gives the best result. Edited by Wizuriel

##### Share on other sites
That scenario would be solved fine with a depth-2 search. If more walls need to be placed, a deeper search might be needed to find that out.

The root of the tree is the diagram, before placing the yellow wall. A depth-2 search consists of all the possible moves by green (including wall placements), with each one of them followed by each possible move by blue (including wall placements). For most of the branches of this tree, blue has a move that is placing a wall blocking the arrow in the diagram, which would result in good evaluations for blue because it increases the distance to goal for green more than it increases it for blue. There will be five moves that don't allow that: blocking the tunnel on the right in four ways (the yellow wall is one of those), or placing a vertical wall along the opening on the left side (adjacent to the arrow on its left). In all those cases, blue cannot immediately block green's path.

EDIT: By the way, the "killer heuristic" would help you sort the moves in such a way that blue would try to block the opening on the left as its first move most of the time, and alpha-beta would allow you to skip looking at other blue moves.

If you can write a program that searches depth 10 or so, it will probably be very hard to beat, even with a simple evaluation function. Edited by alvaro

##### Share on other sites
Here's a thought... rather than forward searching (which can be a huge potential state space), do it like a planner and search backwards from the goal.

##### Share on other sites
[quote]Here's a thought... rather than forward searching (which can be a huge potential state space), do it like a planner and search backwards from the goal.[/quote]

Not really sure how that would reduce the state space, but I definitely need to find some way to do that. I followed alvaro's advice and did a minimax tree with alpha-beta pruning, and at depth 3+ it is pretty slow at finding the next move. Looking at the game, each position can have something like 130 children :S

##### Share on other sites
Are you pruning the search space using common sense assumptions? For example, taking advantage of left/right reflection symmetry, ignoring illegal wall combinations, ignoring wall combinations which will never improve the situation, etc? And of course there's applying heuristics so you search the more likely options first, e.g. putting a wall behind the opponent is unlikely to help your cause.

##### Share on other sites
[quote name='jefferytitan' timestamp='1346620333' post='4975829']
Are you pruning the search space using common sense assumptions? For example, taking advantage of left/right reflection symmetry, ignoring illegal wall combinations, ignoring wall combinations which will never improve the situation, etc? And of course there's applying heuristics so you search the more likely options first, e.g. putting a wall behind the opponent is unlikely to help your cause.
[/quote]

I'm trying to! I do make sure all wall combinations are legal, and my first children are always moves toward the goal (followed by the other movement options), since getting closer to the goal should always give a good score (I think).

As for applying common sense to prune more wall options, I'm not sure how to do that. I go through every valid move, because I couldn't think of a logical way to find good spots for defensive walls. I literally start at square zero, check what walls I can place there, make each legal placement a new child, and move on to the next square. Also, you might be surprised how much putting a wall behind your opponent can help.

What I'm going to try next is making the computer just move on the first 3 turns. If both pawns are in the middle, hopefully I can prune results faster once walls start being played instead of just movement. And yes: if no walls have been placed and the pawns just move, you have 128 legal wall placements and 4 movement options. Edited by Wizuriel

##### Share on other sites
Pruning the search space would best be done by biasing the search toward placements that are close to the current player's position and close to the goal. Summing 2 influence maps would give you a hierarchy of which places to test first.

##### Share on other sites
Just in case I'm misunderstanding the wall placement, the way I calculate the number is:

A line contains 9 possible wall positions.
There are 8 lines down (I assume the outer lines aren't included).
The same applies the other way.

This means 9 * 8 * 2 possible wall positions, which is 144. If you apply left/right symmetry, this cuts it down to 72 possible wall positions. This seems more manageable to me. Please correct me if I've misunderstood the rules. I also wonder whether checking if the wall positions are valid is slowing you down a lot, as it's in the inner loop. Try removing that check (apart from "is a wall already there") just to see if it is a large factor on speed.

##### Share on other sites
PS, I wonder whether some results could be cached and re-used somehow. If they moved, then the walls are the same as when you last calculated, and if they built a wall, then their position is the same as when you last calculated.

##### Share on other sites
The middle point of a wall can be placed in 8x8 different places, and then you pick whether the wall is vertical or horizontal, so 128 options. Left/right symmetry goes out the window after a few moves, so it's not terribly important. Checking if the wall position is valid is trivial and should be done.
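That enumeration can be sketched directly (illustrative names, no legality filtering yet):

```csharp
using System.Collections.Generic;

// Sketch of the move generation described above: a wall's midpoint sits on one
// of the 8x8 interior intersections, and each midpoint allows a horizontal or a
// vertical wall, giving 8 * 8 * 2 = 128 candidates before legality filtering.
public static class WallMoves
{
    public static List<(int row, int col, bool horizontal)> AllPlacements()
    {
        var walls = new List<(int, int, bool)>();
        for (int r = 0; r < 8; r++)
            for (int c = 0; c < 8; c++)
            {
                walls.Add((r, c, true));   // horizontal wall centred here
                walls.Add((r, c, false));  // vertical wall centred here
            }
        return walls;
    }
}
```

Each candidate would then be run through the overlap and path-blocking checks before becoming a child node.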

Move ordering heuristics that should work well include killer heuristic and history heuristic; find information about them in chess programming websites.
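A minimal sketch of the killer heuristic mentioned here, with moves encoded as plain ints for illustration (a real engine would use its own move type): per search depth (ply), remember the last couple of moves that caused a beta cutoff, and try them first at sibling nodes at the same ply.

```csharp
using System.Collections.Generic;

// Killer heuristic sketch: two killer slots per ply; 0 means "empty".
public class KillerTable
{
    readonly int[,] killers;

    public KillerTable(int maxPly) => killers = new int[maxPly, 2];

    // Called when a move (encoded as a nonzero int) causes a beta cutoff.
    public void Store(int ply, int move)
    {
        if (killers[ply, 0] == move) return;  // already the primary killer
        killers[ply, 1] = killers[ply, 0];    // demote old primary
        killers[ply, 0] = move;
    }

    // Move ordering: pull this ply's killers to the front of the move list.
    public void SortToFront(int ply, List<int> moves)
    {
        for (int slot = 1; slot >= 0; slot--)
        {
            int k = killers[ply, slot];
            int idx = moves.IndexOf(k);
            if (k != 0 && idx > 0)
            {
                moves.RemoveAt(idx);
                moves.Insert(0, k);
            }
        }
    }
}
```

The payoff is that alpha-beta cuts off much earlier when a refutation that worked at one node is tried first at its siblings.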

You can implement a hash to identify positions that can be reached in different ways (but that probably won't matter until your depth goes beyond 4 ply).
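The standard way to build that hash is Zobrist hashing: assign a random 64-bit key to every (player, square) and (orientation, wall-midpoint) feature, and XOR together the keys of the features present in the position. A sketch with illustrative sizes for a 9x9 Quoridor board:

```csharp
using System;

// Zobrist hashing sketch (illustrative names and sizes, not the poster's code).
public static class Zobrist
{
    static readonly Random Rng = new Random(12345);          // fixed seed: reproducible keys
    static readonly ulong[,] PawnKeys = NewKeys(2, 81);      // 2 players x 81 squares
    static readonly ulong[,] WallKeys = NewKeys(2, 64);      // 2 orientations x 64 midpoints
    public static readonly ulong SideToMoveKey = NextKey();

    static ulong[,] NewKeys(int a, int b)
    {
        var keys = new ulong[a, b];
        for (int i = 0; i < a; i++)
            for (int j = 0; j < b; j++)
                keys[i, j] = NextKey();
        return keys;
    }

    static ulong NextKey()
    {
        var bytes = new byte[8];
        Rng.NextBytes(bytes);
        return BitConverter.ToUInt64(bytes, 0);
    }

    // Incremental update: XOR out the old pawn square, XOR in the new one.
    public static ulong MovePawn(ulong hash, int player, int from, int to)
        => hash ^ PawnKeys[player, from] ^ PawnKeys[player, to];

    public static ulong PlaceWall(ulong hash, int orientation, int midpoint)
        => hash ^ WallKeys[orientation, midpoint];
}
```

Because XOR is its own inverse, the hash can be updated incrementally as moves are made and unmade, which is what makes transposition-table lookups cheap inside the search.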

You can apply reductions for unpromising moves (e.g., moving away from the goal, or placing a wall in such a way that it obstructs you more than it does the opponent).

It might take a little work, but you should be able to search significantly deeper than 3 moves.

##### Share on other sites
Here's a thought. If you pre-process the board each turn to determine the number of moves to the goal from each square, after a move is made, you only need to determine what boxes are affected by the new move. If you can get from your current location to one of the unaffected boxes, you only need to add the number of moves from that box. That is, you don't need to pathfind all the way across the board for each test -- only to an unaffected square. The trick is, how do you determine which squares have been affected?
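The pre-processing step described here can be done with one multi-source BFS seeded from the entire goal row, labelling every square with its distance to the goal; the `blocked` callback below is a hypothetical hook for whatever wall representation the engine uses:

```csharp
using System;
using System.Collections.Generic;

// Sketch of the per-turn distance map: one BFS from the whole goal row labels
// every square with its shortest distance to the goal, so positions can be
// scored without a full A* run each time.
public static class DistanceMap
{
    const int Size = 9;

    // blocked(r, c, nr, nc) returns true if a wall separates the two squares.
    public static int[,] FromGoalRow(int goalRow, Func<int, int, int, int, bool> blocked)
    {
        var dist = new int[Size, Size];
        for (int r = 0; r < Size; r++)
            for (int c = 0; c < Size; c++)
                dist[r, c] = int.MaxValue;

        var queue = new Queue<(int r, int c)>();
        for (int c = 0; c < Size; c++)      // multi-source BFS: seed the goal row
        {
            dist[goalRow, c] = 0;
            queue.Enqueue((goalRow, c));
        }
        int[] dr = { 1, -1, 0, 0 }, dc = { 0, 0, 1, -1 };
        while (queue.Count > 0)
        {
            var (r, c) = queue.Dequeue();
            for (int i = 0; i < 4; i++)
            {
                int nr = r + dr[i], nc = c + dc[i];
                if (nr < 0 || nr >= Size || nc < 0 || nc >= Size) continue;
                if (dist[nr, nc] != int.MaxValue) continue;  // already labelled
                if (blocked(r, c, nr, nc)) continue;         // wall in the way
                dist[nr, nc] = dist[r, c] + 1;
                queue.Enqueue((nr, nc));
            }
        }
        return dist;
    }
}
```

After a wall is placed, only squares whose shortest path ran through the blocked edges need relabelling, which is one answer to the "which squares have been affected?" question.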

##### Share on other sites
Okay, so I'm just not sure if I'm doing this right. I'm trying to use negamax with alpha-beta pruning to find my move. Here's what I have so far.

GetBestMove: this is the function I initially call to find my best move. I get all the possible moves based off the current player's position, then loop through them all and run each through the negamax function to evaluate how good a move it is.
[source lang="csharp"]
public static List<BoardView> GetBestMove(BoardView myview, int depth, int color)
{
var bestMoves = new List<BoardView>();

double highestScore = int.MinValue;
double tmpScore;
Boolean first = true;
//getting all valid children based off the current players position
List<BoardView> validmoves = myview.getChildren(color);
for (int x = 0; x < validmoves.Count; x++)
{
BoardView move = validmoves[x];
//running negamax with alpha-beta pruning on the upcoming potential moves to evaluate how good this first move was
if (first)
{
tmpScore = NegaMax(move, int.MinValue, int.MaxValue, depth, -color);
first = false;
}
else
tmpScore = NegaMax(move, highestScore, int.MaxValue, depth, -color);
if (tmpScore > highestScore)
{
bestMoves.Clear();
highestScore = tmpScore;
bestMoves.Add(move);
}
else if (tmpScore == highestScore)
{
move.board.myscore = highestScore;
bestMoves.Add(move);
}
}
return bestMoves;
}
[/source]

[source lang="csharp"]
private static double NegaMax(BoardView myview, double alpha, double beta, int depth, int color)
{
if (depth <= 0 || myview.currentplayer.checkWin())
return myview.getScore();
//getting all valid children for the side to move
List<BoardView> validmoves = myview.getChildren(color);
foreach (BoardView child in validmoves)
{
double temp = -NegaMax(child, -beta, -alpha, depth - 1, -color);
if (temp > alpha)
alpha = temp;
if (alpha >= beta)
break; // cut-off
}
return alpha;
}
[/source]

And my BoardView class. This pretty much just holds the board and the 2 players. The constructor takes either an old view or a board and 2 players, then makes deep copies of the objects.

swap: just swaps the current and opponent references

getChildren: returns a list of all possible plays based off the current player's position on the board. It first checks movement for the current player, then wall positions. For wall positions it first checks whether the current player is between the opponent and the goal, and which direction the opponent is going. This creates 4 scenarios:
i) The opponent is heading toward the start of the board and I'm between the opponent and the goal. The wall logic starts placing walls behind me, going from my position > 0; maxsquare > position.
ii) The opponent is heading toward the start of the board and I'm not between the opponent and the goal. Same as above, but I start placing the walls behind the opponent.
iii & iv) The same, but with the end of the board instead of the start, and the logic is position > maxsquare; 0 > position.

getScore: I use an A* search to find my shortest path to a goal square and my opponent's shortest path to his goal squares. The main weight is the distance to the goal, with some extra weight for bad wall positions. I don't think this equation will produce great defensive wall placements (see the picture posted earlier in this thread), but in theory, moving away from the goal or making bad wall placements should return bad scores, so those branches get cut off.

[source lang="csharp"] public class BoardView
{
public Board board { get; set; }
public Player currentplayer { get; set; }
public Player opponent { get; set; }
//public BoardView parent { get; set; }
//makes a clone of an old view
public BoardView(BoardView oldview)
{
//parent = oldview;
board = new Board(oldview.board);
currentplayer = new Player(oldview.currentplayer);
opponent = new Player(oldview.opponent);
}
//makes a new board view by combining 2 players and a board
public BoardView(Board myboard, Player p1, Player p2)
{
board = new Board(myboard);
currentplayer = new Player(p1);
opponent = new Player(p2);
}
//swap the players
public void swap()
{
Player temp = currentplayer;
currentplayer = opponent;
opponent = temp;
}

/*
* returns a list of all available moves from a given view
* color = 1 = current player
* color = -1 = opponent
*/
public List<BoardView> getChildren(int color)
{
List<BoardView> children = new List<BoardView>();
//getting all movement views from the current player
GameSpace myspace = board.grid[currentplayer.position];
List<GameSpace> movement = new List<GameSpace>(AStar.neighborsAndJumps(myspace, board, opponent.symbol));
if (currentplayer.symbol != Global.getType(color))
{
myspace = board.grid[opponent.position];
movement = new List<GameSpace>(AStar.neighborsAndJumps(myspace, board, currentplayer.symbol));
}

foreach (GameSpace mymove in movement)
{
BoardView newReality = new BoardView(this);
if (newReality.currentplayer.symbol != Global.getType(color))
newReality.swap();
newReality.board.MovePawn(newReality.currentplayer.position, mymove.SQNum, newReality.currentplayer.symbol);
newReality.currentplayer.position = mymove.SQNum;
children.Add(newReality);
}
//if the current player has walls left, find all valid moves
if (currentplayer.numberwalls > 0)
{
/*
* 0 = horz walls
* 1 = vertical walls
*/
for (int y = 0; y < 2; y++)
{
//go through and try a wall at every valid position
for (int x = 0; x < board.grid.Count; x++)
{
int pos = 0;
int temp = opponent.position;
//finding where I should start the wall placement. If my goal[0] is >= SIZE (length of board) then it means I'm heading towards the back squares
if (currentplayer.goal[0] >= Global.SIZE)
{
/*
* opponent is moving to the front squares. Check if current player or opponent is closer to the goal.
* If I'm closer start placing walls behind me, if opponent is closer place walls behind opponent
*/
if (currentplayer.position < opponent.position)
temp = currentplayer.position;
/*
* starting at the far right of the row the opponent or current player are on (depending who is closer to the goal). Keep moving left checking each square for wall placement.
* Hopefully this will quickly find a good spot to place a wall that slows the opponent and not me
*/
pos = (temp / Global.SIZE) * Global.SIZE + (Global.SIZE - 1) - x;
if (pos < 0)
pos += board.grid.Count;
}
else
{
//opponent is moving to the back squares
if (currentplayer.position > opponent.position)
temp = currentplayer.position;
/*
* starting at the far left of the row the opponent or current player are on (depending who is closer to the goal). Keep moving right checking each square for wall placement.
* Hopefully this will quickly find a good spot to place a wall that slows the opponent and not me
*/
pos = (temp / Global.SIZE) * Global.SIZE + x;
if (pos >= board.grid.Count)
pos -= board.grid.Count;
}
BoardView newReality = new BoardView(this);
if (newReality.currentplayer.symbol != Global.getType(color))
newReality.swap();
if (y == 0)
{
//if the wall was placed successfully, add the new view to the list of valid moves
if (newReality.board.SetHorzWall(pos, newReality.currentplayer, newReality.opponent))
children.Add(newReality);
}
else
{
//if the wall was placed successfully, add the new view to the list of valid moves
if (newReality.board.SetVertWall(pos, newReality.currentplayer, newReality.opponent))
children.Add(newReality);
}
}
}
}
return children;
}
public double getScore()
{
double score = 0;
//finding my shortest path to a goal
List<GameSpace> mypath = new List<GameSpace>(board.findPath(true, currentplayer, opponent));
//finding opponents shortest path to a goal
List<GameSpace> opppath = new List<GameSpace>(board.findPath(true, opponent, currentplayer));
score = 100 * (opppath.Count - mypath.Count) + currentplayer.numberwalls - opponent.numberwalls;
return score;
} [/source]


edit 2: Found a bug in how I was calling swap, and did more checking to make sure swapping occurs at the right time. Man, that getChildren function is atrocious :S Edited by Wizuriel

##### Share on other sites
Hmm, I think there's a lot of room for optimisation. One suggestion would be as follows:
- Have the player and opponent positions stored outside the board.
- Attach the previous shortest path to a board.
- This means that any time a player moves (rather than wall-building), your board (including pathing) doesn't change, so you don't need to instantiate a new board.
- Explore moves before walls to take maximum advantage of this.
- Exploring this way you only have to re-path if the opponent steps into your pre-calculated path, you step off your shortest path, or when a wall is built that blocks your path.

The majority of the branches are wall-building related, but pathfinding is the expensive part of the inner loop, so it's worth reducing. Also, I would imagine that most of the walls you create would not block your own path.

##### Share on other sites
Huzzah, just got the alpha-beta working (at least as far as I can debug/see)! I gave up on negamax and used a more traditional alpha-beta formulation.

I don't think my program is slow because of the pathing. When I generate children, each wall placement requires pathing to be called at least twice (once per player, until it finds a path to one of their goal squares), and my getChildren routine does that in a fraction of a second. I think the slow speed is just the number of states it needs to explore and evaluate (it probably doesn't help that, for debugging, I keep links to each state; good thing memory is cheap).

Unfortunately, even if my opponent just moves, that can still affect my best position. Because of wall placements, it's possible that player 1 can't place a wall to block player 2 without also increasing player 1's path; but after player 2 has moved a few squares, it may become possible to place a wall that increases only player 2's path, even though both players have been following their optimal paths.

Though now that I've written all that out: I think if player 1's path is shorter than player 2's, then as long as both players just move I shouldn't have to re-evaluate player 1's positions. Unfortunately I'd still need to do so for player 2 :S

edit: Updated code
[source lang="csharp"] public static List<BoardView> GetBestMove(BoardView myview, int depth, int color)
{
var bestMoves = new List<BoardView>();

double highestScore = int.MinValue;
double tmpScore;
//getting all valid children based off the current players position
List<BoardView> validmoves = myview.getChildren(color);
for (int x = 0; x < validmoves.Count; x++)
{
BoardView move = validmoves[x];
//running the alpha-beta search on this candidate move to evaluate how good it is
//tmpScore = NegaMax(move, int.MinValue, int.MaxValue, depth - 1, -color, color);
tmpScore = alphabeta(move, int.MinValue, int.MaxValue, depth - 1, -color, color);
if (tmpScore > highestScore)
{
bestMoves.Clear();
highestScore = tmpScore;
bestMoves.Add(move);
}
else if (tmpScore == highestScore)
{
move.board.myscore = highestScore;
bestMoves.Add(move);
}
}
return bestMoves;
}[/source]

[source lang="csharp"]private static double alphabeta(BoardView myview, double alpha, double beta, int depth, int color, int originalplayer)
{
if (depth <= 0 || myview.currentplayer.checkWin())
{
double score = myview.getScore(originalplayer);
return score;
}
List<BoardView> validmoves = myview.getChildren(color);
if (color != originalplayer)
{
//minimizing player
double minscore = beta;
foreach (BoardView child in validmoves)
{
minscore = Math.Min(minscore, alphabeta(child, alpha, beta, depth - 1, -color, originalplayer));
beta = minscore;
if (alpha >= beta)
return minscore;
}
return minscore;
}
else
{
//maximizing player
double maxscore = alpha;
foreach (BoardView child in validmoves)
{
maxscore = Math.Max(maxscore, alphabeta(child, alpha, beta, depth - 1, -color, originalplayer));
alpha = maxscore;
if (alpha >= beta)
return maxscore;
}
return maxscore;
}
}[/source]

[source lang="csharp"] public class BoardView
{
public Board board { get; set; }
public Player currentplayer { get; set; }
public Player opponent { get; set; }
public BoardView parent { get; set; }

//makes a clone of an old view
public BoardView(BoardView oldview)
{
parent = oldview;
board = new Board(oldview.board);
currentplayer = new Player(oldview.currentplayer);
opponent = new Player(oldview.opponent);
}

//makes a new board view by combining 2 players and a board
public BoardView(Board myboard, Player p1, Player p2)
{
board = new Board(myboard);
currentplayer = new Player(p1);
opponent = new Player(p2);
}

//swap the players
public void swap()
{
Player temp = currentplayer;
currentplayer = opponent;
opponent = temp;
}

/*
* returns a list of all available moves from a given view
* color = 1 = current player
* color = -1 = opponent
*/

public List<BoardView> getChildren(int color)
{
List<BoardView> children = new List<BoardView>();
Boolean swapped = false;
/*
* checking if the current player is what it should be according to the color call
* If color = 1 player = 1
* if color = -1 player = 2
* if I swap here it changes this level not the children, so will need to swap again later (since I just want the children to move opposite the parents, not the current level to swap
* who played)
*/

if (currentplayer.symbol != Global.getType(color))
{
swapped = true;
swap();
}

//getting all movement views from the current player
GameSpace myspace = board.grid[currentplayer.position];
List<GameSpace> movement = new List<GameSpace>(AStar.neighborsAndJumps(myspace, board, opponent.symbol));
foreach (GameSpace mymove in movement)
{
BoardView newReality = new BoardView(this);
newReality.board.MovePawn(newReality.currentplayer.position, mymove.SQNum, newReality.currentplayer.symbol);
newReality.currentplayer.position = mymove.SQNum;
children.Add(newReality);
}
//if the current player has walls left, find all valid moves
if (currentplayer.numberwalls > 0)
{
/*
* 0 = horz walls
* 1 = vertical walls
*/
for (int y = 0; y < 2; y++)
{
//go through and try a wall at every valid position
for (int x = 0; x < board.grid.Count; x++)
{
int pos = 0;
int temp = opponent.position;
//finding where I should start the wall placement. If my goal[0] is >= SIZE (length of board) then it means I'm heading towards the back squares
if (currentplayer.goal[0] >= Global.SIZE)
{
/*
* opponent is moving to the front squares. Check if current player or opponent is closer to the goal.
* If I'm closer start placing walls behind me, if opponent is closer place walls behind opponent
*/
if (currentplayer.position < opponent.position)
temp = currentplayer.position;
/*
* starting at the far right of the row the opponent or current player are on (depending who is closer to the goal). Keep moving left checking each square for wall placement.
* Hopefully this will quickly find a good spot to place a wall that slows the opponent and not me
*/
pos = (temp / Global.SIZE) * Global.SIZE + (Global.SIZE - 1) - x;

if (pos < 0)
pos += board.grid.Count;
}
else
{
//opponent is moving to the back squares
if (currentplayer.position > opponent.position)
temp = currentplayer.position;
/*
* starting at the far left of the row the opponent or current player are on (depending who is closer to the goal). Keep moving right checking each square for wall placement.
* Hopefully this will quickly find a good spot to place a wall that slows the opponent and not me
*/
pos = (temp / Global.SIZE) * Global.SIZE + x;
if (pos >= board.grid.Count)
pos -= board.grid.Count;
}

BoardView newReality = new BoardView(this);
if (y == 0)
{
//if the wall was placed successfully, add the new view to the list of valid moves
if (newReality.board.SetHorzWall(pos, newReality.currentplayer, newReality.opponent))
children.Add(newReality);
}
else
{
//if the wall was placed successfully, add the new view to the list of valid moves
if (newReality.board.SetVertWall(pos, newReality.currentplayer, newReality.opponent))
children.Add(newReality);
}
}
}
}
/*
* If I swapped above, then undo the swap at the current level
*/
if (swapped)
swap();
return children;
}

public double getScore(int color)
{
bool swapped = false;
if (currentplayer.symbol != Global.getType(color))
{
swapped = true;
swap();
}
double score = 0;
//finding my shortest path to a goal
List<GameSpace> mypath = new List<GameSpace>(board.findPath(true, currentplayer, opponent));
//finding opponents shortest path to a goal
List<GameSpace> opppath = new List<GameSpace>(board.findPath(true, opponent, currentplayer));
score = 100 * (opppath.Count - 1 - mypath.Count) + currentplayer.numberwalls - opponent.numberwalls;
if (swapped)
swap();
return score;

}[/source]

edit 2: I don't really understand the killer heuristic. Right now I always start with movement towards the goal (followed by the rest, since it's a short list), and for wall placement I start by placing walls between my opponent and their goal in a way that won't block me. Is there anything else I can try? Edited by Wizuriel

##### Share on other sites
I put together a C++ quoridor engine last night. It typically searches depth 4 in less than a second, and it already plays better than I do. I haven't done anything about move ordering (other than at the root, where I use iterative deepening), I haven't implemented transposition tables, and I am not reducing any moves yet. I haven't tried null-move pruning either, which I think might actually work for this game. So I expect I'll get a much better search with a little bit of effort.

For evaluation I am using something like
[code]return 20*(his_distance_to_goal - my_distance_to_goal) + 10 + small_random_perturbation;[/code]

I'll post the program in a few days, when I am satisfied with it. Edited by alvaro

##### Share on other sites
I was thinking that this shouldn't be that big of a problem given a decent A* implementation. Brute forcing it shouldn't be that big of a deal. *shrug*

##### Share on other sites
[quote]
I was thinking that this shouldn't be that big of a problem given a decent A* implementation. Brute forcing it shouldn't be that big of a deal. *shrug*
[/quote]

I am not sure what you have in mind. This is a two-player game, so some form of tree search is probably required to play it well. A* won't do much more than compute how far away you are from the goal right now, which I do many many times in a search.

Which reminds me... I didn't even use A* for pathfinding! I started with a dead simple find-all-distances-to-goal-and-pick-the-number-where-the-pawn-sits method, and I didn't bother substituting it with something faster. I could compute the distance-to-goal map for each player at the root (or at nodes with high enough depth left) and use those maps as the heuristic in A*. Since walls only ever get added, that is an admissible heuristic (i.e., an underestimate of the true distance). Maybe I'll try that tonight.
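That precomputation is just a multi-source BFS seeded from every goal square. A sketch, assuming a 9x9 board with squares numbered 0..80 and a hypothetical `blocked(a, b)` wall test between adjacent squares:

```cpp
#include <array>
#include <deque>
#include <vector>

// Distance-to-goal map for a 9x9 Quoridor board, computed with a single
// BFS seeded from all goal squares at distance 0. `blocked(a, b)` is a
// placeholder for the engine's own "is there a wall between adjacent
// squares a and b" test.
static const int N = 9;

std::array<int, N * N> distanceMap(const std::vector<int>& goals,
                                   bool (*blocked)(int, int)) {
    std::array<int, N * N> dist;
    dist.fill(-1);                        // -1 means "not reached yet"
    std::deque<int> queue;
    for (int g : goals) {                 // every goal square starts at 0
        dist[g] = 0;
        queue.push_back(g);
    }
    while (!queue.empty()) {
        int s = queue.front();
        queue.pop_front();
        int r = s / N, c = s % N;
        const int dr[] = {-1, 1, 0, 0};
        const int dc[] = {0, 0, -1, 1};
        for (int i = 0; i < 4; ++i) {
            int nr = r + dr[i], nc = c + dc[i];
            if (nr < 0 || nr >= N || nc < 0 || nc >= N) continue;
            int t = nr * N + nc;
            if (dist[t] == -1 && !blocked(s, t)) {
                dist[t] = dist[s] + 1;
                queue.push_back(t);
            }
        }
    }
    return dist;                          // dist[pawn] = distance to goal
}
```

With the walls that exist at the root, the map is exact; as walls get added deeper in the search it can only underestimate, which is exactly what A* needs from a heuristic.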

##### Share on other sites
Now I'm really curious what I did wrong for mine to take hours to do a depth 4 search.

I've tried shrinking the board down to 5x5 with fewer walls to test it and I'm positive my algorithms are all working correctly (if not quickly).

In a nutshell, what I do is send the current view to my function:
-Try to move the pawn in each direction (check if the way is blocked by a wall; if not, you can move). If the node you're moving to holds the opponent, I get his move directions and add them to yours (since the opponent's square won't see your piece as his, it reports your square as invalid, so it doesn't infinitely loop).
-Try to place a wall on each square.
-When placing a wall, I check that it doesn't hit another wall or go between a wall (my wall pieces have a start and end point, making it pretty easy to tell the difference between crossing one wall and passing between two walls, which would be legal).
-If the wall position doesn't hit a wall, do an A* search for both the player and the opponent. I do this until I find a path for each of them (so a minimum of once per pawn, a maximum of the board length); if both still have exits, the wall is good.

Then I run all those children through the alpha-beta function.
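The face-to-face trick described in the first step can be sketched like this, with squares as ints on a 9x9 grid and a hypothetical `open(a, b)` wall test. (This simplification treats every square around the opponent as a legal landing spot, ignoring the official straight-jump-before-diagonal rule, so it's a sketch of the recursion, not of the full rules.)

```cpp
#include <vector>

// Pawn moves with the jump rule, following the trick in the post: if the
// square I step onto holds the opponent, borrow the opponent's moves from
// there. The `jumping` flag stops the recursion from hopping back onto
// my own square, so it can't loop forever.
// `open(a, b)` is a placeholder for "no wall between adjacent a and b".
static const int W = 9;

void pawnMoves(int me, int opp, bool (*open)(int, int),
               std::vector<int>& out, bool jumping = false) {
    const int dr[] = {-1, 1, 0, 0};
    const int dc[] = {0, 0, -1, 1};
    int r = me / W, c = me % W;
    for (int i = 0; i < 4; ++i) {
        int nr = r + dr[i], nc = c + dc[i];
        if (nr < 0 || nr >= W || nc < 0 || nc >= W) continue;
        int t = nr * W + nc;
        if (!open(me, t)) continue;
        if (t == opp) {
            if (!jumping)                // jump once; never back onto me
                pawnMoves(opp, me, open, out, true);
        } else {
            out.push_back(t);
        }
    }
}
```

On an empty board with the pawns side by side, the opponent's square is skipped and his open neighbours are added instead, which matches the behaviour described above.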

##### Share on other sites
Make sure you are not doing any dynamic memory allocation at all. I don't know how feasible this is in languages like C# and Java, but in C or C++ it's fairly straightforward.

[quote name='Wizuriel' timestamp='1346855811' post='4976834']
-When placing a wall I check that it doesn't hit another wall or go between a wall (My wall pieces have a start and end point making it pretty easy to tell the difference between 1 wall and between 2 walls which would be legal)
[/quote]
Ooops! I guess I am implementing that rule wrong. I don't allow a wall that crosses something that blocks perpendicularly, whether it's a single wall or the place where two join. I don't think that should matter a whole lot, though. More work for tonight.

##### Share on other sites
[quote name='alvaro' timestamp='1346855020' post='4976827']
[quote]
I was thinking that this shouldn't be that big of a problem given a decent A* implementation. Brute forcing it shouldn't be that big of a deal. *shrug*
[/quote]

I am not sure what you have in mind. This is a two-player game, so some form of tree search is probably required to play it well. A* won't do much more than compute how far away you are from the goal right now, which I do many many times in a search.
[/quote]
Proper placement of walls is based on the combined ideas of:[list]
[*]Lengthening the other player's path
[*]Not doing an illegal move that blocks a complete path
[/list]
Obviously, the longer you can make his path without affecting your own, the better. This is where A* comes in. You want to know which of your potential wall placements will do the most damage without being illegal or making your own situation markedly worse. That is the fitness of your decision.
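That fitness can be expressed as a pure function of shortest-path lengths. A sketch, with `pathLen(wall, forMe)` standing in for the engine's own A* query with a wall tentatively placed (the int wall ids and the -1 "no wall" sentinel are placeholders):

```cpp
#include <functional>
#include <limits>
#include <vector>

// Fitness of a candidate wall: how much it lengthens the opponent's
// shortest path, minus how much it lengthens mine. `pathLen(wall, forMe)`
// is a placeholder for a shortest-path query with `wall` tentatively
// placed; wall == -1 means "board as it is now".
int bestWall(const std::vector<int>& legalWalls,
             const std::function<int(int wall, bool forMe)>& pathLen) {
    int myBase  = pathLen(-1, true);
    int oppBase = pathLen(-1, false);
    int best = -1;
    int bestScore = std::numeric_limits<int>::min();
    for (int w : legalWalls) {
        int score = (pathLen(w, false) - oppBase)   // damage to opponent
                  - (pathLen(w, true)  - myBase);   // cost to me
        if (score > bestScore) {
            bestScore = score;
            best = w;
        }
    }
    return best;   // id of the most damaging legal wall, -1 if none given
}
```

The same delta shows up inside a tree search as part of the evaluation; computing it standalone like this is what a greedy one-ply wall placer would do.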

Now you can certainly try to do it all the way to the end (i.e. a full tree search to leaf nodes), but the branching factor does get annoying. The good news is that there is a finite number of walls (20), so while the maximum depth may be deeper than that (you don't always have to place a wall), there are only 20 plies where you really have significant branching because of a new wall.

Even searching a few layers in (which is what most human players would do anyway) will provide enough insight to know if/when/where to place walls.

Again, having a screaming A* routine expedites the search through the possibility space.
