Hi there. I've been working on a Connect 4 game in C, and the algorithm I've written is really simple and sometimes easy to beat. Here's the pseudo-code of it:
See if the computer can win
If so, play the winning move
See if the player can win
If so, block it
For loop (stops when it has tested all possibilities, in this case 7 columns)
    see if the computer can create a line of 2 or 3 pieces in column I, and copy the number of "connected" pieces to a variable Y
    see if putting the piece in that column would let the player win
    if so, the loop restarts at the next column
    put the "play" in a binary tree
    restart the loop
The "play" that gets played will be the one with the most "connected" pieces
Now I know this algorithm isn't good and it has some flaws, so I've been trying to "upgrade" it to a better one and to implement "prediction", so that the computer can think 2 or 3 turns in advance. Can you help me? I've been searching this forum, and from what I see people here use minimax and alpha-beta algorithms. Can you give me a "help" or "tutorial" page for these algorithms, or can you help me upgrade my algorithm?
Thanks in advance, and please don't flame me.
Thanks again!
[edited by - PacketLoss on May 23, 2003 12:39:05 PM]

# Help with connect4 algorithm

Started by PacketLoss, May 23 2003 05:37 AM

10 replies to this topic

###
#2
Anonymous Poster (Guest)

Posted 23 May 2003 - 11:10 AM

Minimax is the basic algorithm generally used for what is called a game playing search, where you have two players competing against each other.

To implement it, you should have some data structure that represents the entire state of a game (don't represent the state of the game with globals, try to encapsulate it all into one data structure. This includes the board and which player's turn it is). Then you also need to be able to generate all the valid moves that a player can make from a given state. For connect 4 this is simple, since you just go through each column and if it isn't full then it is a valid move. For each valid move, you generate a new board state that represents the board after making that move, and then the algorithm gets recursive. You look at all the valid moves possible from after that state.

Of course the search space is too big in a game like connect 4, so you will want to keep track of the depth you have searched to and terminate the search after it has gone down a few levels (start with a small number like 4 and try increasing it slowly. It is a tradeoff between time and how far ahead it looks. Each level you look ahead will take approximately 7 times longer, like looking 10 levels ahead will take about 7 times longer than looking 9 levels ahead. This is because there are usually about 7 valid moves).

When it reaches the maximum depth, or one of the players has won, you need to have a function that evaluates the board, and gives a numeric estimate of how good the board is for player 1. If player 1 won, you return an extremely high value. If player 2 won, return an extremely low value.

Now here's where the minimax comes in: when the recursion unwinds, if it was player 1's turn you keep the move with the highest value. If it was player 2's turn you keep the move with the lowest value. This is because player 1 takes the moves best for player 1 that will result in the highest value, and player 2's best moves are those that result in the lowest value.

This traces all the way back to the current move, and you choose the move that would result in the highest value. In minimax, it is convention that the first player is always the one trying to maximize the value.

Minimax with alpha-beta pruning speeds things up a little bit by using a little trick that reduces the number of states that need to be evaluated. Try to understand and implement ordinary minimax first.
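As a concrete illustration of the recursion just described, here is a minimal C sketch. Everything here is illustrative, not anyone's actual program: the `Board` layout, the helper names, and the win/loss constants are all assumptions, and the evaluation is the trivial "only wins score" one discussed in this thread.

```c
#include <limits.h>

#define COLS 7
#define ROWS 6

/* Entire game state in one structure, as suggested: board plus side to move. */
typedef struct {
    int cell[COLS][ROWS];   /* 0 = empty, 1 = player 1, 2 = player 2 */
    int height[COLS];       /* pieces already stacked in each column  */
    int to_move;            /* whose turn it is: 1 or 2               */
} Board;

/* Returns 1 or 2 if that player has four in a row, else 0. */
static int winner(const Board *b)
{
    static const int dir[4][2] = { {1,0}, {0,1}, {1,1}, {1,-1} };
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++) {
            int p = b->cell[c][r];
            if (p == 0) continue;
            for (int d = 0; d < 4; d++) {
                int k, cc = c, rr = r;
                for (k = 1; k < 4; k++) {
                    cc += dir[d][0];
                    rr += dir[d][1];
                    if (cc < 0 || cc >= COLS || rr < 0 || rr >= ROWS ||
                        b->cell[cc][rr] != p)
                        break;
                }
                if (k == 4) return p;
            }
        }
    return 0;
}

/* Trivial evaluation: extremely high if player 1 won, extremely low if
   player 2 won, neutral otherwise. */
static int evaluate(const Board *b)
{
    int w = winner(b);
    if (w == 1) return  1000000;
    if (w == 2) return -1000000;
    return 0;
}

/* Plain minimax: value of the position from player 1's point of view. */
int minimax(Board *b, int depth)
{
    if (depth == 0 || winner(b) != 0)
        return evaluate(b);

    int best = (b->to_move == 1) ? INT_MIN : INT_MAX;

    for (int col = 0; col < COLS; col++) {
        if (b->height[col] == ROWS)       /* full column: not a legal move */
            continue;

        /* make the move */
        b->cell[col][b->height[col]++] = b->to_move;
        b->to_move = 3 - b->to_move;

        int score = minimax(b, depth - 1);

        /* unmake the move */
        b->to_move = 3 - b->to_move;
        b->cell[col][--b->height[col]] = 0;

        if (b->to_move == 1)              /* player 1 maximizes...  */
            best = (score > best) ? score : best;
        else                              /* ...player 2 minimizes  */
            best = (score < best) ? score : best;
    }
    if (best == INT_MIN || best == INT_MAX)
        return 0;                         /* board full, no winner: draw */
    return best;
}
```

Note the make/unmake pattern: instead of copying the board for every child state, the move is undone after the recursive call, which is cheaper and amounts to the same thing.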

###
#4
Anonymous Poster (Guest)

Posted 24 May 2003 - 10:12 PM

One more thing: You should try to get the recursive part to work right first, and then possibly work on tuning the evaluation function. A really simple evaluation function that doesn't require much work is:

if first player won: return 1000000

else if second player won: return -1000000

else return 0

Implementing minimax with this evaluation function can create a decent player, especially if you set it to search to a depth of about 8 or higher. With this evaluation function, the minimax search will always choose a move leading to a guaranteed victory within 8 moves if such a move exists, and avoid paths that will give the opponent a guaranteed victory within 8 moves. A more sophisticated evaluation function would take features of the position into account and include them as part of its numeric evaluation. For example, having 3 pieces in a row might add 10 to the evaluation.

Another tip: if from the current position, the opponent is guaranteed to win, then an evaluation function like this will act apathetic and just take the first (or last, depending on implementation) move available to it, since all moves will look the same (they all result in the opponent winning). But this is foolish, since the opponent might not be looking as many moves ahead. So the evaluation function should keep track of the number of moves made so far, so that it can evaluate a loss 8 moves in the future as being better than a loss 2 moves in the future, for example.
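That tip can be folded directly into the terminal scores. A tiny sketch, where `WIN` and the function name are illustrative constants rather than anything from the post:

```c
/* Terminal scores that prefer quick wins and distant losses.
   `moves_made` is the number of moves played from the root position. */

#define WIN 1000000

int terminal_score(int who_won, int moves_made)
{
    if (who_won == 1) return  WIN - moves_made;  /* a sooner win scores higher */
    if (who_won == 2) return -WIN + moves_made;  /* a later loss hurts less    */
    return 0;                                    /* no winner yet: neutral     */
}
```

With this, a loss 8 moves away evaluates as less bad than a loss 2 moves away, so the program keeps resisting even in lost positions.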

If you really get into it, you could try using timing to choose when to break off the search. More sophisticated implementations of minimax search to depth 1, then start over searching to depth 2, then 3, and so on, until they detect that a certain amount of time has passed. At that point they break off the search and use the results of the deepest search they did. So if it is in the middle of a depth 9 search and time runs out, it would stop and use the best move it found from the depth 8 search. Chess programs often have a feature where you force the computer to make its move early. I am pretty sure this is just making it timeout right away, rather than giving it its full time allocation to find a move.

By using time to limit the depth of the search, the program will work better on faster computers which may or may not be desirable. It will also tend to play better, since it will be able to search to a greater depth in the late game when there are less valid moves, allowing it to find a way to guarantee victory 16 moves ahead for example.
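A timed iterative-deepening driver along those lines might look like the sketch below. The names are illustrative, and note the stated simplification: a real engine would also poll the clock inside the search so an in-progress depth can be abandoned, whereas this driver only checks between depths.

```c
#include <time.h>

typedef int (*SearchFn)(int depth);   /* returns the best column for a depth */

/* Search to depth 1, then 2, then 3, ... until the time budget expires,
   returning the best move from the deepest search that fully completed. */
int iterative_deepening(SearchFn search, double seconds, int max_depth)
{
    clock_t deadline = clock() + (clock_t)(seconds * CLOCKS_PER_SEC);
    int best_move = search(1);        /* always complete at least depth 1 */

    for (int depth = 2; depth <= max_depth; depth++) {
        if (clock() >= deadline)
            break;                    /* time's up: keep the last result */
        best_move = search(depth);
    }
    return best_move;
}
```

A "move now" button like the one in chess programs then just amounts to forcing the deadline to expire early.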

Of course implementing alpha-beta pruning will also allow it to search faster, but it is a bit hard to explain the trick that makes this work, and the improvement isn't really necessary for now.

###
#5
alexjc, Members - Reputation: **457**

Posted 24 May 2003 - 10:14 PM

Connect 4 has been solved. I believe the solution uses expert features to significantly reduce the search; after that it's just a simple lookup during play. I can't find the paper for the life of me, but Google has knowledge of it.

In a perfect game, I believe the first player wins unless the second plays perfectly too.

This is another learning-based approach:

http://www.cs.tau.ac.il/~evend/Workshop/sadna-games-fun_approx.ppt

And one for the record:

http://ai-depot.com/LogicGames/MiniMax.html

Artificial Intelligence Depot - Maybe it's not all about graphics...

###
#6
alvaro, Crossbones+ - Reputation: **17464**

Posted 25 May 2003 - 01:56 AM

Yeah, the game is solved. But that doesn't mean that you can't have fun trying to make a program to play it.

Actually, the first player has a winning strategy. That means that it can win, no matter what the second player does.

I made a strong program to play this many years ago, and I learned a lot in the process. Once you get the program working, you can make tons of improvements to the basic minimax. This is what I did:

- Alpha-beta. Essential. Practically doubles your search depth.

- Clever ordering of the moves. Helps alpha-beta a lot.

- Hash tables. Avoids redundant searches and can also help to sort the moves.

- A good evaluation function. This is perhaps even more important than good searches, and requires several things:

* Knowledge of the relative importance of different threats

* Knowledge of the game result when all possible 3-piece threats have already been created (more a math problem than programming)

- Boosting the search on forced moves. They shouldn't count as moves at all.

- Opening book. There is no point in having your program thinking from scratch in positions that happen all the time. You build a database of known opening positions with the recommended moves and associated probabilities, and your program just follows the recommendation without thinking. Be careful what you put here!

- Writing the evaluation function in assembly. This was important in 1995, but today's compilers do a better job.

In the end, my program (running on a 486 at 50MHz, 12 seconds per move) only had to think for a few moves in each game. The early moves were part of the opening book, and on the last ~20 moves the program was powerful enough to search the entire tree.

I made the program for a contest in a Spanish magazine, and I won. It was a lot of fun!

###
#7
Members - Reputation: **122**

Posted 30 May 2003 - 02:56 PM

quote:Original post by alexjc

Connect 4 has been solved. I believe the solution uses expert features to significantly reduce the search, then it's just a simple lookup during play.

**Velena** was the first to solve it in this manner:

Velena, a Shannon C-type program which plays Connect Four perfectly. There are links to papers on that website.

I found a program called Mustrum that can solve the game in under an hour from the starting position on a 1.5GHz machine. I assume it does not use expert strategy, but perhaps it sorts the moves during alpha-beta.

Here's a great article, Expert Play in Connect-Four by James D. Allen, to help you make a more effective evaluation function.

Jason Doucette - online résumé page: www.jasondoucette.com

projects, real-time graphics, artificial intelligence, world records, wallpapers/desktops

*"Great minds discuss ideas, average minds discuss events, small minds discuss people." - Anna Eleanor Roosevelt, 1884-1962*

###
#8
Members - Reputation: **198**

Posted 31 May 2003 - 05:14 AM

Connect 4 may have been solved for a 7*6 board, but it's much more fun playing on a 20*10 board, for instance. I once coded a Connect 4 player based on a recursive search function. But you need a pretty good evaluation function to really tell you what's a good move and what's not.

Has anyone had a go at trying to make a Connect 4 application that can handle arbitrary boards?

Thanks,

Edo

###
#9
Members - Reputation: **122**

Posted 01 June 2003 - 11:45 AM

Yes, I have. But it was only a small project for a day or two to get used to writing C++, since at the time I hadn't written much in it. The links I gave in my last post are actually helpful for a variable-sized board, provided the height is an even number. I am not going to get into the reasons here; take a look at some of those articles if you want to see.

You are quite correct that you need a very good evaluation function for it to play properly. In some cases, a game can be lost after 4 or 5 moves, and no search will solve that in any reasonable time (it would have to see to the end of the game, in which case you have the game solved already).

Jason Doucette - online resume page: www.jasondoucette.com

projects, real-time graphics, artificial intelligence, world records, wallpapers/desktops

*"Great minds discuss ideas, average minds discuss events, small minds discuss people." - Anna Eleanor Roosevelt, 1884-1962*

###
#10
Members - Reputation: **118**

Posted 03 June 2003 - 09:10 AM

quote:Original post by alvaro

- Alpha-beta. Essential. Practically doubles your search depth.

- Clever ordering of the moves. Helps alpha-beta a lot.

- Hash tables. Avoids redundant searches and can also help to sort the moves.

Alpha-beta will not double your search depth.

Let's talk about the effective branching factor first. The effective branching factor is how much longer it takes to perform a search one ply deeper. So if it takes 1 minute to search to depth 8, and it takes 3 minutes to search to depth 9, then your effective branching factor is 3, since depth 9 took 3 times longer than depth 8. This means that to get to depth 10 it will probably take around 9 minutes (3 times longer than depth 9).

Min-max will perform a search on every single move. Connect-4 positions have at most 7 legal moves (one per column), so the effective branching factor for a min-max search of a connect-4 position is about 7. This means that to search to depth 1, you only have to look at 7 moves. Depth 2 = (7 * 7) = 49, depth 3 = (7 * 7 * 7) = 343, and so on.

Here is where alpha-beta comes in. Alpha-beta lowers your effective branching factor. The effectiveness of alpha-beta depends entirely on how well you order the moves you search. So if you can make a good guess at what the best move is, search it before the other legal moves, and your search will take less time. Basically alpha-beta works by searching the first move, then when it searches the next move it says, "The first move was better than this move, so I'm not going to spend as much time searching it." If you don't search the best move first though, then alpha-beta can't say, "I already have a better move," and it has to search each move completely until it finds the best move.

It has been proven that if you have *perfect* move ordering (meaning you search the best move first, second best second, etc.), you will lower your effective branching factor to the square root of the "real" branching factor. So if your branching factor for min-max in connect-4 was 7, and you were able to get perfect move ordering, your branching factor would go down to about 2.6, which means you get to search deeper. Using min-max, to search to depth 3 we had to look at (7 * 7 * 7) = 343 positions. Using alpha-beta, with our branching factor of 2.6, we have to look at about (2.6 * 2.6 * 2.6) = 18 positions. Obviously 18 is a LOT better than 343. The effective branching factor is not constant, so sometimes the search might take 3.0 times as long, sometimes 2.4 times as long, but on average it will be around 2.6 (the square root of 7), if you have good move ordering.

If you have random move ordering (meaning you don't try to search the best move first), alpha-beta will still help, but it won't be as effective. If you have the moves ordered from worst to best, then alpha-beta will not help you at all, because alpha-beta degenerates into pure min-max (it has to search every move).
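To make the pruning concrete, here is a hedged C sketch of alpha-beta in its common negamax form, with the cheap static move ordering described above (center columns first, since they lie on the most winning lines). The board scaffolding is repeated from scratch so the snippet stands alone; all names, constants, and the static ordering are illustrative, not anyone's actual engine.

```c
#define COLS 7
#define ROWS 6

typedef struct {
    int cell[COLS][ROWS];   /* 0 = empty, 1 = player 1, 2 = player 2 */
    int height[COLS];       /* pieces stacked in each column */
    int to_move;            /* 1 or 2 */
} Board;

/* Returns 1 or 2 if that player has four in a row, else 0. */
static int winner(const Board *b)
{
    static const int dir[4][2] = { {1,0}, {0,1}, {1,1}, {1,-1} };
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++) {
            int p = b->cell[c][r];
            if (p == 0) continue;
            for (int d = 0; d < 4; d++) {
                int k, cc = c, rr = r;
                for (k = 1; k < 4; k++) {
                    cc += dir[d][0];
                    rr += dir[d][1];
                    if (cc < 0 || cc >= COLS || rr < 0 || rr >= ROWS ||
                        b->cell[cc][rr] != p)
                        break;
                }
                if (k == 4) return p;
            }
        }
    return 0;
}

/* Try promising (central) columns first so cutoffs come early. */
static const int column_order[COLS] = { 3, 2, 4, 1, 5, 0, 6 };

/* Negamax with alpha-beta: scores are always from the side to move, so a
   single `score > alpha` test covers both players.  Call it with a finite
   window like (-2000000, 2000000); avoid INT_MIN, which cannot be negated. */
int negamax(Board *b, int depth, int alpha, int beta)
{
    int w = winner(b);
    if (w != 0)                    /* the previous move won the game */
        return (w == b->to_move) ? 1000000 : -1000000;
    if (depth == 0)
        return 0;                  /* trivial eval: only wins score */

    int moved = 0;
    for (int i = 0; i < COLS; i++) {
        int col = column_order[i];
        if (b->height[col] == ROWS)
            continue;
        moved = 1;

        b->cell[col][b->height[col]++] = b->to_move;
        b->to_move = 3 - b->to_move;

        int score = -negamax(b, depth - 1, -beta, -alpha);

        b->to_move = 3 - b->to_move;
        b->cell[col][--b->height[col]] = 0;

        if (score > alpha) {
            alpha = score;
            if (alpha >= beta)
                break;             /* cutoff: opponent avoids this line */
        }
    }
    return moved ? alpha : 0;      /* no legal moves left: draw */
}
```

The `alpha >= beta` break is the entire trick: once a move is found that the opponent would never allow, the remaining moves in that position need not be searched at all.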

The transposition table will be able to further lower your effective branching factor, because it will stop you from doing expensive searches on positions that you have already searched. The transposition table also helps you order your moves better. Before you do a search, you can look in the transposition table to see if the position has a "best move" stored (maybe from a previous search). If it does, you can search that move first, and that will help alpha-beta out.
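One standard way to key such a table is Zobrist hashing: each (column, row, player) triple gets a random 64-bit key, a position's hash is the XOR of the keys of its pieces, and the hash can be updated incrementally when making or unmaking a move. The sketch below is illustrative throughout; the table size, entry layout, and random generator are assumptions, not from the post.

```c
#include <stdint.h>

#define COLS 7
#define ROWS 6
#define TT_SIZE (1u << 16)          /* number of entries; illustrative */

static uint64_t zobrist[COLS][ROWS][2];

typedef struct {
    uint64_t key;                   /* full hash, to detect index collisions */
    int depth;                      /* depth the stored value was searched to */
    int value;
    int best_move;                  /* helps move ordering on a later visit */
} TTEntry;

static TTEntry table[TT_SIZE];

/* Deterministic splitmix-style 64-bit generator; any decent PRNG works. */
static uint64_t rand64(void)
{
    static uint64_t s = 0x9E3779B97F4A7C15ull;
    s += 0x9E3779B97F4A7C15ull;
    uint64_t z = s;
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ull;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBull;
    return z ^ (z >> 31);
}

void tt_init(void)
{
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            for (int p = 0; p < 2; p++)
                zobrist[c][r][p] = rand64();
}

/* Incremental update: XOR in (or out) the key of one placed piece. */
uint64_t hash_after_move(uint64_t hash, int col, int row, int player)
{
    return hash ^ zobrist[col][row][player - 1];
}
```

Because XOR is its own inverse, the same call both makes and unmakes a piece in the hash, and positions reached by different move orders hash to the same value, which is exactly what lets the table catch transpositions. The entry would typically be probed at `table[hash % TT_SIZE]`, checking `key` against the full hash.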

There are various other ways to get good move ordering. In chess we use history tables and killer tables. We also use iterative deepening and internal iterative deepening, which helps out a lot. And we use some special case knowledge sometimes, such as "if my opponent just captured my piece, try all of the moves that re-capture his piece", because that will probably be the best move (otherwise we lose our piece).

###
#11
Members - Reputation: **122**

Posted 05 June 2003 - 02:33 AM

Great news guys. I've got the algorithm to work, with an 8-ply search. The evaluation function isn't that great yet, but the computer is a very decent player. Thanks for all the help guys :D

I'm using negamax with alpha-beta cutoffs.

Btw my algorithm, with your help, beats the crap outta my mate's algorithm heheh :D

Again, my thanks
