
# Alpha-beta pruning algorithm, returning the best move


## Recommended Posts

What's up forum!

I am implementing a simple chess AI. The score for the best move (actually, the best score for a particular chess board) is returned by the AlphaBeta function. Code below.

```cpp
int AlphaBeta(Node *root, int nDepth, int alfa, int beta)
{
    if (nDepth == 0) return Eval(root->board);
    GenMov(root);
    if (root->children.size() == 0) return Eval(root->board);
    for (int i = 0; i < (int)root->children.size(); i++) {
        int val = -AlphaBeta(&root->children[i], nDepth - 1, -beta, -alfa);
        if (val >= beta) {
            return beta;
        }
        if (val > alfa) {
            alfa = val;
        }
        //root->children.pop_back();
    }
    return alfa;
}
```

To get the coordinates of the piece that is to be moved, I have to keep a whole level of the game tree; what's more, every node has to be assigned an Eval(board) value. Finally I search for the node whose value matches the one returned by the algorithm and extract the coordinates from its Node.
This is rather inefficient, both time- and memory-consuming.

How can I modify AlphaBeta to get the coordinates without doing all of the above?

[Edited by - tomekdd on August 28, 2010 6:37:38 AM]

##### Share on other sites
You don't need to do any such thing, and the alpha-beta algorithm should consume very little memory.

It doesn't look like your program has a notion of "move". Instead you just have boards that are children of a board. I recommend you make a class Move and have the generator create a list of these (much lighter objects than a board). You don't even need to have more than one board at all. Simply write functions to make a move and to undo it, and keep working on the same board all the time.

You are only interested in obtaining the best move at the root of the search tree. You should write a separate function to search the root, which can return a move. Once you implement iterative deepening, time control and printing of some search information, this function will be different enough from the general alpha-beta function that the little bit of code duplication won't be a problem.
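For illustration, here is a minimal sketch of that split on a toy game rather than chess: a fixed depth-2 tree with made-up leaf scores. The names (`searchRoot`, `leafScore`, the branching factor) are mine, not from the poster's code; the point is only that the root loop is the same as the inner loop, except it remembers *which* move produced the best score.

```cpp
#include <cassert>
#include <limits>

// Leaf evaluations of a toy tree: depth 2, branching factor 2.
// Node indices restart at each level; depth tells the levels apart.
static const int leafScore[4] = {3, 5, 2, 9};

// Ordinary fail-hard negamax alpha-beta: returns only a score.
int alphaBeta(int node, int depth, int alpha, int beta) {
    if (depth == 0) return leafScore[node];
    for (int child = 0; child < 2; ++child) {
        int val = -alphaBeta(node * 2 + child, depth - 1, -beta, -alpha);
        if (val >= beta) return beta;   // cutoff
        if (val > alpha) alpha = val;
    }
    return alpha;
}

// Separate root search: same loop, but it also records WHICH move
// achieved the best score, so a move (not just a number) comes back.
int searchRoot(int depth, int *bestMove) {
    int alpha = std::numeric_limits<int>::min() + 1;
    int beta  = std::numeric_limits<int>::max();
    *bestMove = -1;
    for (int child = 0; child < 2; ++child) {
        int val = -alphaBeta(child, depth - 1, -beta, -alpha);
        if (val > alpha) { alpha = val; *bestMove = child; }
    }
    return alpha;
}
```

Here the minimax value of the toy tree is max(min(3,5), min(2,9)) = 3, reached through the first root move, which is exactly what `searchRoot` reports.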

##### Share on other sites
Let's assume I don't use iterative deepening and the depth of the game tree is fixed, and let's put alpha-beta aside for a while.

Node and Move are defined as follows:

```cpp
class Node {
public:
    vector<Node> children;
    char board[64];
    char cPlayer;
    int iBoardEval;
    Move mov;
};

class Move {
public:
    int iNewPosition;
    int iOldPosition;
};
```

The function GenMoves(char *board, char cPlayer) generates all possible moves and resulting boards for the given player and stores the results in vector<Node> children.

But this way only one level of the game tree is created for a particular node.
How do I recursively create n levels of the game tree?

##### Share on other sites
You seem to have this notion that you need to have the tree in memory and then search it, but this is not the case.

The Move class I am thinking of would look something like this:
```cpp
enum MoveType {
  MT_Normal,
  MT_PawnTwoStepsForward,
  MT_EnPassant,
  MT_Castle,
  MT_PromoteQueen,
  MT_PromoteKnight,
  MT_PromoteRook,
  MT_PromoteBishop
};

struct Move {
  Square from, to;
  MoveType move_type;
};
```

When you generate moves from a position, you only create these little (2 to 12 bytes) structures that describe only the move, not the resulting board. Then you can write a function that performs a move on a board and a function that undoes the effects of a move on a board.
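A minimal sketch of what make/undo could look like on a char[64] board like the poster's. The `'.'` empty-square encoding and the extra `captured` field are my assumptions; a real implementation also has to handle castling, en passant, and promotion, which are omitted here.

```cpp
#include <cassert>
#include <cstring>

typedef int Square;  // 0..63 index into the board array (assumed)

struct Move {
    Square from, to;
    char captured;  // filled in by makeMove so undoMove can restore it
};

// Apply a move in place on the single shared board.
void makeMove(char *board, Move &m) {
    m.captured = board[m.to];       // remember for undo
    board[m.to] = board[m.from];
    board[m.from] = '.';            // '.' = empty square (assumed encoding)
}

// Exactly reverse the effects of makeMove.
void undoMove(char *board, const Move &m) {
    board[m.from] = board[m.to];
    board[m.to] = m.captured;       // put any captured piece back
}
```

After a make/undo pair the board is bit-for-bit identical to what it was, which is what lets the whole search run on one board.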

With those tools in hand, you can write your search as a depth-first traversal of the tree using a recursive function, and you don't need to ever generate the whole tree in memory.
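To make that concrete, here is a complete toy example: a made-up number-picking game (not chess), searched depth-first with alpha-beta using make/undo on a single shared state. Every name here is illustrative; the structure of `negamax` is what carries over to chess.

```cpp
#include <cassert>
#include <vector>

// Toy game: players alternately take one of the remaining numbers,
// each trying to maximize the sum of their own picks.
// diff holds (side to move's total) - (opponent's total), negamax-style.
struct State {
    std::vector<int> value;
    std::vector<bool> taken;
    int diff;
};

void makeMove(State &s, int i) {
    s.taken[i] = true;
    s.diff = -(s.diff + s.value[i]);  // flip perspective to the opponent
}

void undoMove(State &s, int i) {
    s.diff = -s.diff - s.value[i];    // exact inverse of makeMove
    s.taken[i] = false;
}

// Depth-first alpha-beta over the implicit tree: only ONE State ever
// exists; the recursion stack is the only bookkeeping memory used.
int negamax(State &s, int alfa, int beta) {
    bool anyMove = false;
    for (int i = 0; i < (int)s.value.size(); ++i) {
        if (s.taken[i]) continue;
        anyMove = true;
        makeMove(s, i);
        int val = -negamax(s, -beta, -alfa);
        undoMove(s, i);               // restore the shared state
        if (val >= beta) return beta;
        if (val > alfa) alfa = val;
    }
    if (!anyMove) return s.diff;      // no moves left: terminal evaluation
    return alfa;
}
```

With values {3, 1}, the mover takes 3, the opponent takes 1, and the result is +2 for the mover; no tree was ever stored, only one `State` mutated and restored.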

##### Share on other sites
To tell the truth, I am not a pro at implementing graph-search algorithms, but as far as I know, to use BFS the game tree has to be created at the very beginning (or a vector with all the vertices?).

So in the end, how is it going to differ from AlphaBeta? Both algorithms get the best board evaluation and return it to the root. I don't see the point.

What's more, I still can't figure out how one board is supposed to suffice. If it's player A's turn, I have to create all possible moves; then for every move I create all possible moves for player B, and so on, until eventually I reach the leaves of the game tree.

Here's my slightly modified AlphaBeta :
```cpp
Move AlfaBeta(Node *root, int nDepth, int alfa, int beta, int x, int y)
{
    Move tmp;
    if (nDepth == 0) {
        tmp.iFinalPos = y;
        tmp.iInitPos = x;
        tmp.iEval = Eval(root->board);
        return tmp;
    } else GenMoves(root);
    if (root->children.size() == 0) {
        tmp.iFinalPos = y;
        tmp.iInitPos = x;
        tmp.iEval = Eval(root->board);
        return tmp;
    }
    for (int i = 0; i < (int)root->children.size(); i++) {
        Move v = AlfaBeta(&root->children[i], nDepth - 1, -beta, -alfa,
                          root->children[i].move.iInitPos,
                          root->children[i].move.iFinalPos);
        v.iEval = -v.iEval; // NegaMax??
        if (v.iEval >= beta) {
            tmp.iEval = beta;
            tmp.iFinalPos = y;
            tmp.iInitPos = x;
            return tmp;
        }
        if (v.iEval > alfa) {
            alfa = v.iEval;
            tmp.iFinalPos = y;
            tmp.iInitPos = x;
        }
    }
    tmp.iEval = alfa;
    return tmp;
}
```

```cpp
class Move {
public:
    int iFinalPos;
    int iInitPos;
    int iEval;
    Move() {
        iFinalPos = iInitPos = iEval = 0;
    }
};
```

```cpp
class Node {
public:
    vector<Node> children;
    char board[64];
    char cPlayer;
    Move move;
    Node(char *b, char cPl, Move m) {
        memcpy(board, b, 64);
        cPlayer = cPl;
        move = m;
    }
};
```

AlphaBeta seems to return a valid evaluation of the board, but the iInitPos and iFinalPos it returns are wrong.

That is how I see the solution to my problem.

Your method is probably way better, but I just don't get it, so if you could give some code snippets or a deeper explanation of your idea, or tell me what's wrong with my code, I would really appreciate it!

##### Share on other sites
I am sort of busy this weekend, but I'll try to find some time to implement a simpler game using the ideas I described, and I'll try to keep it understandable.

##### Share on other sites
Quote:
> Original post by tomekdd: To tell the truth, I am not a pro at implementing graph-search algorithms, but as far as I know, to use BFS the game tree has to be created at the very beginning (or a vector with all the vertices?)

I'll try to clarify alvaro's point with this analogy. If you have the set of all integers from zero to two million, do you first have to put all those numbers into a std::vector to be able to go through each of them and tell which of them are prime? Of course not, since you can generate the numbers on demand: after having checked number 'i' for primality, you know you need to check 'i+1' next, unless 'i+1' > 2,000,000, in which case you stop. So you just increment the previously checked number by one and call the CheckIfPrime() function again for that newly generated number.

How did you avoid having to put all these numbers to a std::vector, but still be able to operate on them? You defined a Successor function which tells you the successor, or the next element, that comes after the element 'i', namely 'i+1'.
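The same analogy in code, counting primes without ever storing the candidates in a vector (`isPrime` and `countPrimesUpTo` are my illustrative names):

```cpp
#include <cassert>

// Trial-division primality test: checks divisors up to sqrt(n).
bool isPrime(int n) {
    if (n < 2) return false;
    for (int d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// Only the CURRENT number 'i' exists in memory; the successor
// function (i -> i+1) generates the next candidate on demand.
int countPrimesUpTo(int limit) {
    int count = 0;
    for (int i = 2; i <= limit; ++i)
        if (isPrime(i)) ++count;
    return count;
}
```

For example, `countPrimesUpTo(10)` is 4 (the primes 2, 3, 5, 7), and at no point was a container of all ten numbers built.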

With minimax game tree search, it is no different. You do not have to put your whole game tree into memory to be able to operate on the tree. At any given time, you only need to look at one board state, so all you need to keep in memory is that board state. The rules of the game give you the equivalent of the Successor function, but this time there are multiple successors (children) of a given state, so you do need to keep track of which children you have already searched and which you haven't. But this is a very small amount of memory (at minimum, just a single integer at each ancestor of the current node telling which child index to look at next).

Using the rules of the game, you can transform the current board state into a next board state, on demand, with a legal Move action. Minimax is a bit more complicated than the prime-search example above in that we have to be able to go back up the tree as well; we solve this by performing an Undo action that reverses each Move action we have made.

Using the Move and Undo actions, we can navigate the game tree without ever holding more than a single game board state in memory at once. To be able to quickly tell which node to look at next after an Undo, we keep some book-keeping information for each ancestor of the current node, but that only requires one stack entry per level of the current search depth.

I hope that was a successful analogy.

##### Share on other sites
After reading a few introductory papers on graph algorithms, and your posts over and over again, I worked out a neat solution to my problem. It works perfectly!

Thanks for help folks.