Chen Levy

Negamax with alpha-beta pruning. [SOLVED]



I'm developing a game for a school project. The game is "Hexxagon", and I decided to use the negamax algorithm. I implemented two versions of it, but neither works as it should: instead of returning a move from the first level (the root), each returns a move from one or two plies deeper whenever that move scores best. I've already tried a lot of different things, including searching for solutions to similar problems, and I need your help to solve it. Here is my Java code.

First algorithm:

public Move NegaMax(int depth, State state) {
    return NegaMax(state, depth, new Move(null, Integer.MIN_VALUE + 1), new Move(null, Integer.MAX_VALUE), 1); // negamax(origin, depth, -inf, +inf, 1)
}

int countTest = 0;

public Move NegaMax(State state, int depth, Move alpha, Move beta, int val) {
    Move possibleMoves[] = null, moveX;
    State newState = null;
    State copyS;

    if (depth == 0 || endOfGame(state)) {
        // System.out.println("Done!!!!!!");
        state.getMove().setScore(state.getMove().getScore() * val);
        return state.getMove();
    }

    // copyS = new State(state);
    possibleMoves = getAllPossibleMoves(state);

    for (int index = 0; index < 300 && possibleMoves[index] != null; index++) {
        copyS = new State(state);
        newState = copyS.executeMove(possibleMoves[index], HBoard.getNeighbors(copyS, possibleMoves[index].getTo()), HBoard.getNumOfNeighbors());
        newState.setMove(possibleMoves[index]);
        newState.getMove().setScore(newState.getNumOfPieces(2) - newState.getNumOfPieces(1));
        moveX = NegaMax(newState, depth - 1, new Move(beta, -beta.getScore()), new Move(alpha, -alpha.getScore()), -val).negateScore();
        alpha = max(alpha, moveX);
        if (alpha.getScore() >= beta.getScore()) {
            break;
        }
    }

    System.out.println(alpha + " Depth: " + depth);
    return alpha;
}

public Move max(Move X, Move Y) {
    return (X.getScore() > Y.getScore()) ? X : Y;
}

Second:

private Move BestMove = null;

public Move NegaMax(int depth, State state) {
    Move possibleMoves[] = null;
    int highScore = Integer.MIN_VALUE + 1;
    highScore = NegaMax(state, depth, Integer.MIN_VALUE + 1, Integer.MAX_VALUE, 1); // negamax(origin, depth, -inf, +inf, 1)
    System.out.println(highScore);
    return BestMove;
}

public int NegaMax(State state, int depth, int alpha, int beta, int val) {
    Move possibleMoves[] = null;
    State newState = null;
    State copyS = null;
    int bestScore = Integer.MIN_VALUE + 1, score;

    if (depth == 0 || endOfGame(state)) {
        return (state.getScore() * val);
    }

    possibleMoves = getAllPossibleMoves(state);

    for (int index = 0; index < 300 && possibleMoves[index] != null; index++) {
        copyS = new State(state);
        newState = copyS.executeMove(possibleMoves[index], HBoard.getNeighbors(copyS, possibleMoves[index].getTo()), HBoard.getNumOfNeighbors());
        newState.setScore(newState.getNumOfPieces(2) - newState.getNumOfPieces(1));
        score = -NegaMax(newState, depth - 1, -beta, -alpha, -val);
        if (score > bestScore) {
            bestScore = score;
            BestMove = possibleMoves[index];
        }
        if (bestScore > alpha) {
            alpha = bestScore;
        }
        if (alpha >= beta) {
            System.out.println("------------------------------------Cut----------------------------");
            return alpha;
        }
    }

    // System.out.println(alpha + " Depth: " + depth);
    return bestScore;
}

[Edited by - Chen Levy on April 12, 2010 7:52:40 PM]

In your first algorithm, alpha and beta are moves? From that, it's hard to tell how much of the alpha-beta algorithm you understand... but you seem to have some misconceptions.

Do you have a working negamax implementation without alpha-beta pruning?

Quote:
Original post by alvaro
In your first algorithm, alpha and beta are moves? From that, it's hard to tell how much of the alpha-beta algorithm you understand... but you seem to have some misconceptions.


Yes, alpha and beta are moves, because I want to return a Move and not just the move's score.
I know I'm doing something wrong, and my goal is to figure out what it is.
I have a basic understanding of the algorithm, and I worked through a couple of simulations to find the problem, but that didn't help much.

Quote:

Do you have a working negamax implementation without alpha-beta pruning?


No, I didn't start by implementing negamax without alpha-beta pruning.


Thanks for the reply. I'd be more than happy to hear more.

Quote:
Original post by Chen Levy
Yes alpha beta are moves because i want to return Move and not move's score.


Oh, you definitely have some misconceptions. Returning a move only makes sense from the root node, and you should have separate code for that. But the function that is called recursively should return a score, and alpha and beta should definitely be scores.
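To illustrate that split, here is a minimal sketch on a toy game tree: the recursive function returns only an int score, and the root routine is the one place that remembers which move produced the best score. The Node class and leaf scores are invented for the example; they are not the poster's Hexxagon classes.

```java
import java.util.ArrayList;
import java.util.List;

class NegamaxSketch {
    // Toy game-tree node: leaves carry a static score for the side to move.
    static class Node {
        int leafScore;                          // only meaningful at leaves
        List<Node> children = new ArrayList<>();
        Node(int leafScore) { this.leafScore = leafScore; }
    }

    // Recursive part: returns a SCORE, never a move.
    // alpha and beta are plain ints; the window is negated and swapped.
    static int negamax(Node n, int depth, int alpha, int beta) {
        if (depth == 0 || n.children.isEmpty()) {
            return n.leafScore;                 // side-to-move's viewpoint
        }
        int best = Integer.MIN_VALUE + 1;
        for (Node child : n.children) {
            int score = -negamax(child, depth - 1, -beta, -alpha);
            if (score > best) best = score;
            if (best > alpha) alpha = best;
            if (alpha >= beta) break;           // beta cutoff
        }
        return best;
    }

    // Root part: the ONLY place that tracks the best move (here, its index).
    static int bestMoveIndex(Node root, int depth) {
        int best = Integer.MIN_VALUE + 1, bestIndex = -1;
        for (int i = 0; i < root.children.size(); i++) {
            int score = -negamax(root.children.get(i), depth - 1,
                                 Integer.MIN_VALUE + 1, -best);
            if (score > best) { best = score; bestIndex = i; }
        }
        return bestIndex;
    }
}
```

With this shape there is no need to wrap alpha and beta in Move objects at all, which is where the original code's confusion starts.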

Quote:
Quote:

Do you have a working negamax implementation without alpha-beta pruning?


No, i didn't start from implementing negamax without alpha-beta pruning.


Well, I strongly recommend doing that first. alpha-beta complicates things quite a bit, so don't try it until you have a working negamax implementation.
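For reference, plain negamax without pruning is only a few lines. Here is a sketch over a toy complete binary tree stored in an array (children of node i are 2*i+1 and 2*i+2); the array representation is invented for the example and stands in for a real game state and move generator. Once this returns the same values as a hand-checked minimax, pruning can be layered on top.

```java
class PlainNegamax {
    static int negamax(int[] tree, int node, int depth) {
        if (depth == 0) {
            // Leaf: static evaluation from the viewpoint of the side to move.
            return tree[node];
        }
        int best = Integer.MIN_VALUE + 1;
        for (int child = 2 * node + 1; child <= 2 * node + 2; child++) {
            // Negate: the child's value is from the opponent's viewpoint.
            best = Math.max(best, -negamax(tree, child, depth - 1));
        }
        return best;
    }
}
```

For the tree {0, 0, 0, 3, 7, 1, 9}, the maximizing root sees min(3, 7) = 3 on the left and min(1, 9) = 1 on the right, so the search returns 3 — the same answer minimax gives.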

Thanks! That helps me a lot.

I made a few changes to the first algorithm, and now it returns a move only from the root node and works fine. The calculation time is long (2 minutes up to 10 minutes), but that is because the number of possible moves can exceed 100 per state.

So, problem solved. Have a good week.


Fixed code:


// NegaMax with alpha-beta pruning.
public Move NegaMax(int depth, State state) {
    return NegaMax(state, depth, new Move(null, Integer.MIN_VALUE + 1), new Move(null, Integer.MAX_VALUE), -1); // negamax(origin, depth, -inf, +inf, -1)
}

public Move NegaMax(State state, int depth, Move alpha, Move beta, int val) {
    Move possibleMoves[] = null, tempMove = null;
    State newState = null;
    State copyS = null;

    if (depth == 0 || endOfGame(state)) {
        tempMove = new Move(null, (state.getNumOfPieces(1) - state.getNumOfPieces(2)) * val);
        return tempMove;
    }

    possibleMoves = getAllPossibleMoves(state);

    for (int index = 0; index < 300 && possibleMoves[index] != null; index++) {
        copyS = new State(state);
        newState = copyS.executeMove(possibleMoves[index], HBoard.getNeighbors(copyS, possibleMoves[index].getTo()), HBoard.getNumOfNeighbors());
        tempMove = NegaMax(newState, depth - 1, new Move(beta, -beta.getScore()), new Move(alpha, -alpha.getScore()), -val).negateScore();
        tempMove.setMove(possibleMoves[index]);

        if (tempMove.getScore() > alpha.getScore()) {
            alpha = new Move(tempMove, tempMove.getScore());
        }
        if (alpha.getScore() >= beta.getScore()) {
            break;
        }
    }

    return alpha;
}



