BitSet

[SOLVED] NegaMax with AlphaBeta


Hello! I'm writing a chess engine from scratch and I've hit a huge problem with NegaMax and alpha-beta pruning. I've tested the following four solutions:

1. pure MiniMax
2. MiniMax with alpha-beta
3. pure NegaMax
4. NegaMax with alpha-beta

Solutions #1, #2 and #3 work properly, but unfortunately the most wanted solution, #4, doesn't. I found the same problem already discussed, without a solution, here: http://www.gamedev.net/community/forums/topic.asp?topic_id=514673

I really have no idea what I could be doing wrong if pure NegaMax works. With alpha-beta I pass -beta as alpha and -alpha as beta, and when alpha becomes greater than or equal to beta I return the best state found so far. Since the pure version works, I don't think evaluation is the problem: I return a positive value if the side to move is winning and a negative one otherwise.

I noticed that solution #4 searches far fewer nodes than solution #2. I'd expect #4 to search the same number of nodes as #2, and of course far fewer than #1 and #3. I've already browsed the source code of several engines and I don't see any difference in the algorithm, yet my implementation of #4 still doesn't work properly.

I use GCC 4.2.4. I have also checked MinGW, which probably uses GCC 3, with the same results. Please help...

[Edited by - BitSet on March 7, 2010 6:37:31 AM]

I've found the solution. It's really amazing!

I used to set the initial value for alpha to INT_MIN and for beta to INT_MAX.
I changed the initial value for alpha to -INT_MAX and kept beta at INT_MAX.

Can anyone explain why it is important for NegaMax to have a symmetrical range?

Now I have solutions #1, #2, #3 and #4 all working!

Very happy!

Negating INT_MIN doesn't do what you think it does...

Edit, now that I've finished eating :)

It's not the symmetrical range that matters. The problem is that you negate the lowest possible value of an int, -2^(num_bits - 1), and expect to get its positive counterpart, when the maximum representable value is in fact 2^(num_bits - 1) - 1.

To clarify what Beyond_Repair wrote...

I'm guessing you're writing this in Java, but it applies to most languages...

The range of a 32-bit integer runs from INT_MIN to INT_MAX.
INT_MIN is equal to -2^31.
INT_MAX is equal to 2^31 - 1.
So -INT_MIN would mathematically be 2^31, which is bigger than INT_MAX and therefore outside the range of an integer.
Thus, the result of -INT_MIN is... well, I'm not sure. But it definitely isn't what you expect it to be.

Quote:
Original post by lingfors
Thus, the result of -INT_MIN is... well, I'm not sure. But it definitely isn't what you expect it to be.


Although languages may not make any guarantees about the result of overflow (in C and C++, signed overflow is undefined behaviour), in practice virtually all implementations use a two's complement representation, in which -INT_MIN wraps around to INT_MIN itself.

No, I stay away from Java - only C, C++ and asm.

Yes, I understand now: it was a silly mistake leading to integer overflow. Overflows usually happen as a result of adding or subtracting in some loop, but here I managed to overflow in a smarter fashion - with a single negation.

