[SOLVED] NegaMax with AlphaBeta

Started by
6 comments, last by alvaro 14 years, 1 month ago
Hello! I'm writing a chess engine from scratch and I've run into a huge problem with NegaMax and alpha-beta pruning. I've tested the following four solutions:

1. pure MiniMax
2. MiniMax with alpha-beta
3. pure NegaMax
4. NegaMax with alpha-beta

Solutions #1, #2 and #3 work properly, but unfortunately the one I actually want, #4, doesn't. I found the same problem already discussed without a solution here: http://www.gamedev.net/community/forums/topic.asp?topic_id=514673

I really have no idea what I could be doing wrong if pure NegaMax works. With alpha-beta, I pass -beta as alpha and -alpha as beta in the recursive call. When alpha becomes greater than or equal to beta, I return the best state found so far. Since the pure version works, I don't think the evaluation can be the problem: I return a positive value if the side to move is winning and a negative one otherwise.

I've noticed that solution #4 examines far fewer nodes than solution #2. I'd expect #4 to examine the same number of nodes as #2, and of course far fewer than #1 and #3. I've also browsed the source code of several engines and I don't see any difference in the algorithm, yet my implementation of #4 still doesn't work properly.

I use GCC 4.2.4. I have also tried MinGW, which probably uses GCC 3, with the same results. Please help...

[Edited by - BitSet on March 7, 2010 6:37:31 AM]
As far as I know, you should rather pass -max(alpha, bestScoreYet) as beta, not -alpha.
I've found the solution. It's really amazing!

I used to set initial value for alpha to INT_MIN and for beta to INT_MAX.
I changed initial value for alpha to -INT_MAX and for beta to INT_MAX.

Can anyone explain why it is important for NegaMax to have a symmetrical range?

Now I have solutions #1, #2, #3 and #4 working!

Very happppppppppppppppppppy!
Negating INT_MIN doesn't do what you think it does...

Edit, now that I've finished eating :)

It's not the symmetrical range as such that matters. It's the fact that you negate the lowest possible value of an int, -2^(num_bits - 1), and expect to get its positive counterpart, when the maximum value is in fact 2^(num_bits - 1) - 1. That's what's wrong.
To clarify what Beyond_Repair wrote...

I'm guessing you're writing this in Java, but it should apply to most languages...

The range of an integer is from INT_MIN to INT_MAX.
INT_MIN is equal to -2^31.
INT_MAX is equal to 2^31-1.
-INT_MIN should be equal to 2^31, which is bigger than INT_MAX, which means it's outside the range of an int.
Thus, the result of -INT_MIN is... Well, I'm not sure. But it definitely isn't what you expect it to be.
Quote:Original post by lingfors
Thus, the result of -INT_MIN is... Well, I'm not sure. But it definitely isn't what you expect it to be.


Although languages may not make any guarantees about the result of overflowing, in practice all implementations use a two's complement representation, in which -INT_MIN is the same as INT_MIN.

No, I stay away from Java; only C, C++ and asm.

Yes, I understand it was a silly mistake leading to integer overflow. Usually overflows happen as the result of adding or subtracting in some loop, but here I managed to overflow in a smarter fashion: with a negation.
I got asked this in a job interview: If b and c are int variables in C and b/c results in an overflow, what are the values of b and c? I thought it was very clever.

This topic is closed to new replies.
