Chess AI with Neural Networks

If you were to code a chess AI with a neural network, how would you format the input and the output?


Directly feeding the piece positions to a neural network is unlikely to be useful. If you really want to apply neural networks to chess, I suggest using a conventional game-tree search with a neural network as the evaluation function. Instead of feeding it the raw board state, though, I'd recommend using a number of hand-crafted features, such as: the difference between you and your opponent in the number of possible moves (a crude measure of mobility); whether you or your opponent controls the four center squares; the material difference using standard values (queen = 9, rook = 5, etc.); and so forth.
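
To make that concrete, here is a rough sketch of what such a feature extractor might look like, assuming the python-chess library for board handling; the exact features and helper names are only illustrative, not a recommendation:

[code]
import chess

# Conventional material values; the king is deliberately left out.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}
CENTER = [chess.D4, chess.E4, chess.D5, chess.E5]

def extract_features(board):
    """Return a small feature vector from the side to move's point of view."""
    us = board.turn

    # Mobility: our legal moves minus the opponent's (a crude measure).
    my_moves = sum(1 for _ in board.legal_moves)
    if board.is_check():
        their_moves = my_moves  # skip the null-move trick while in check
    else:
        board.push(chess.Move.null())   # give the move to the opponent
        their_moves = sum(1 for _ in board.legal_moves)
        board.pop()
    mobility = my_moves - their_moves

    # Control of the four center squares.
    center = (sum(1 for sq in CENTER if board.is_attacked_by(us, sq)) -
              sum(1 for sq in CENTER if board.is_attacked_by(not us, sq)))

    # Material balance in conventional pawn units.
    material = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES.get(piece.piece_type, 0)
        material += value if piece.color == us else -value

    return [mobility, center, material]
[/code]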

An additional challenge with this approach is that no direct performance feedback is available. Most artificial neural networks are trained by supervised learning, in which errors are fed directly to the learning mechanism. When playing games, the win-or-loss outcome is only known at the end of the game, after many evaluations of the neural network.

I agree with that. In particular, if you use no hidden layers, you'll get a linear combination of the features, which is roughly how most chess programs work. I think of the output neuron as having a sigmoid transfer function, so the score predicts the probability of winning, or, more precisely, the expected number of points to be awarded at the end of the game (loss = 0, draw = 1/2, win = 1). During the search you don't need to apply the sigmoid, because the only thing minimax cares about is how scores compare to each other, and that ordering is not changed by the sigmoid.
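
As a minimal sketch of that idea (the weights and helper names are only illustrative), the evaluation with no hidden layers is just a dot product, and the sigmoid only matters when you interpret or train the score:

[code]
import math

def raw_score(features, weights):
    """Linear evaluation: a weighted sum of the hand-crafted features."""
    return sum(f * w for f, w in zip(features, weights))

def expected_outcome(features, weights):
    """Sigmoid of the raw score: predicted points at the end of the game
    (loss = 0, draw = 1/2, win = 1)."""
    return 1.0 / (1.0 + math.exp(-raw_score(features, weights)))
[/code]

Inside the search you can compare raw_score() values directly; since the sigmoid is monotonic, sigmoid(x) > sigmoid(y) exactly when x > y, so it never changes which move looks best.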

[quote]
An additional challenge with this approach is that no direct performance feedback is available. Most artificial neural networks are trained by supervised learning, in which errors are fed directly to the learning mechanism. When playing games, the win-or-loss outcome is only known at the end of the game, after many evaluations of the neural network.
[/quote]

There are things that can be done. The guys that made Blondie24 used genetic algorithms to select the strongest players. The fact that you can learn anything with as little feedback as just the result of the game is interesting, but not very practical.
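
For what it's worth, here is a very rough sketch of that evolutionary idea (not Blondie24's actual method): a population of weight vectors is tuned using nothing but game results. play_game() is a placeholder you would replace with real self-play between the two evaluation functions:

[code]
import random

def play_game(weights_a, weights_b):
    """Placeholder: 1 if A wins, 0 for a draw, -1 if B wins."""
    return random.choice([1, 0, -1])

def evolve(num_weights=10, population_size=8, generations=50):
    population = [[random.gauss(0, 1) for _ in range(num_weights)]
                  for _ in range(population_size)]
    for _ in range(generations):
        scores = [0.0] * population_size
        # Round-robin tournament: the only feedback is who wins.
        for i in range(population_size):
            for j in range(i + 1, population_size):
                result = play_game(population[i], population[j])
                scores[i] += result
                scores[j] -= result
        # Keep the better half, refill with mutated copies of the survivors.
        ranked = sorted(range(population_size), key=lambda k: scores[k],
                        reverse=True)
        survivors = [population[k] for k in ranked[:population_size // 2]]
        children = [[w + random.gauss(0, 0.1) for w in parent]
                    for parent in survivors]
        population = survivors + children
    return population[0]
[/code]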

Another option is to make a database of games, and then train the network to predict the result of the game when shown positions in it. This is probably what I would try.
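
A minimal sketch of that approach, assuming you have already turned the database into (features, result) pairs where the result is 0, 0.5, or 1 for the game each position came from; this is just logistic-regression-style training of the linear evaluation described earlier:

[code]
import math

def train(samples, num_weights, learning_rate=0.01, epochs=100):
    """samples: list of (features, result) pairs with result in {0, 0.5, 1}."""
    weights = [0.0] * num_weights
    for _ in range(epochs):
        for features, result in samples:
            score = sum(f * w for f, w in zip(features, weights))
            prediction = 1.0 / (1.0 + math.exp(-score))
            # Gradient step for the cross-entropy loss of a sigmoid output.
            error = result - prediction
            for i, f in enumerate(features):
                weights[i] += learning_rate * error * f
    return weights
[/code]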

Yet another option is to make the score predict the future score. The algorithm of choice here is TD(lambda).
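
A minimal sketch of TD(lambda) for the linear evaluation above (gamma = 1, as is usual for games; the helper names are only illustrative): after each position the previous estimate is nudged toward the next one, and the game result is the final target.

[code]
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def td_lambda_update(weights, positions, final_result, alpha=0.01, lam=0.7):
    """positions: feature vectors from one game; final_result: 0, 0.5 or 1."""
    traces = [0.0] * len(weights)
    for t, features in enumerate(positions):
        value = sigmoid(sum(f * w for f, w in zip(features, weights)))
        # Gradient of sigmoid(w . x) w.r.t. w, folded into decaying
        # eligibility traces.
        traces = [lam * e + value * (1.0 - value) * f
                  for e, f in zip(traces, features)]
        # Target: the value of the next position, or the result at the end.
        if t + 1 < len(positions):
            target = sigmoid(sum(f * w for f, w in
                                 zip(positions[t + 1], weights)))
        else:
            target = final_result
        delta = target - value
        weights = [w + alpha * delta * e for w, e in zip(weights, traces)]
    return weights
[/code]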
You could use the NN to build an evaluation function, as Alvaro suggested.

Basically, feed it things like 'passed pawns', 'player captures', 'opponent captures', etc. All of the things you would calculate as part of a regular evaluation function become inputs to the network. The network would output a score that determines how good (or bad) a particular position is.

In theory it should work just fine. In practice, though, it is going to be difficult to provide training data.
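
To show where such an evaluation would plug in, here is a bare-bones negamax sketch over python-chess positions; evaluate() stands in for "extract the features and run them through the trained network" and is assumed to return a score from the side to move's point of view:

[code]
import chess

def evaluate(board):
    """Placeholder: features -> trained network -> score for the side to move."""
    return 0.0

def negamax(board, depth):
    # Leaf or finished game: fall back to the network-based evaluation.
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best
[/code]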

This topic is closed to new replies.
