quote:Original post by Carrot
Searching a Chess game state-space is just about within the grasp of a modern PC
I didn't mean to imply by this that the WHOLE state-space is searched, just that searching is a viable option in Chess; in Go it is not.
quote:
A good chess program today can search maybe 20 plies in a reasonable amount of time. That means 10 moves for each side. Evenly played games between top computers will last into the hundreds of moves (translating into 200-400 plies or more). 20 plies doesn't even begin to touch that.
When comparing the two games you have to use a relative scale. A program searching ten plies ahead in Chess would be considered a much stronger Chess player than a Go program using the same technique.
This is my whole point: the strength of each computer opponent is measured by how well it plays against a human player.
quote:
Chess is far from being "solved". It's still far from being better than the best humans.
Granted, but in Go we are talking about getting to the level of an amateur player.
Finally, on the use of ANNs as a Go-board evaluator: I have used ANNs trained with Temporal Difference learning, TD(lambda), with the nets organised in a hierarchical fashion. The lower-level ANNs evaluate the board at a local level, and their outputs feed progressively more general networks in an attempt to build an abstraction of the whole board.
Although the individual ANNs were trained using TD(lambda), the inter-connections between ANNs were altered according to weightings evolved by a genetic algorithm (GA).
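A rough sketch of that hierarchy and its TD(lambda) update might look like the following. The region size, network sizes, and learning parameters are illustrative assumptions, not values from the thesis; the nets are reduced to single tanh layers, only the higher-level weights are updated, and the GA step over the inter-connections is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

BOARD = 9                             # 9x9 board for the sketch
REGION = 3                            # each local net sees a 3x3 patch
N_REGIONS = (BOARD // REGION) ** 2

# One shared local net (a single tanh unit here) is reused across all
# regions -- this is why local patterns are learned quickly: every region
# of every position is a training example for the same small weight set.
w_local = rng.normal(0, 0.1, REGION * REGION)
# The higher-level net combines the N_REGIONS local outputs.
w_global = rng.normal(0, 0.1, N_REGIONS)

def evaluate(board):
    """Return (value, local_outputs) for a BOARD x BOARD array in {-1,0,1}."""
    locals_out = []
    for r in range(0, BOARD, REGION):
        for c in range(0, BOARD, REGION):
            patch = board[r:r + REGION, c:c + REGION].ravel()
            locals_out.append(np.tanh(patch @ w_local))
    locals_out = np.array(locals_out)
    return float(np.tanh(locals_out @ w_global)), locals_out

def td_lambda_update(positions, reward, alpha=0.05, lam=0.7):
    """TD(lambda) over one game: each position's value is nudged toward
    the next position's value, with the final position nudged toward the
    game's outcome. Local weights would be updated the same way via the
    chain rule; that step is left out to keep the sketch short."""
    global w_global
    values, grads = [], []
    for board in positions:
        v, locals_out = evaluate(board)
        values.append(v)
        grads.append((1 - v ** 2) * locals_out)   # d tanh / d w_global
    trace = np.zeros_like(w_global)               # eligibility trace
    for t in range(len(positions)):
        trace = lam * trace + grads[t]
        target = reward if t == len(positions) - 1 else values[t + 1]
        w_global = w_global + alpha * (target - values[t]) * trace

# One illustrative episode: random positions, final reward +1 (a win).
episode = [rng.integers(-1, 2, size=(BOARD, BOARD)) for _ in range(5)]
before = w_global.copy()
td_lambda_update(episode, reward=1.0)
```

The point of the shared local weights is the data efficiency mentioned below: nine regions per position all train the same small net, so local evaluation converges long before the global abstraction does.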
I wrote this for my final-year thesis and it was never tested extensively, but the initial results seemed interesting.
The AI learned basic local evaluations very quickly (the local ANNs were relatively small and repeated across the board, so every position supplied many training examples), while the more abstracted higher-level ANNs appeared to know when a certain area of the board was beyond salvaging, and would move to capture another area instead.
In essence, the network as a whole WAS a move-selector as well as the evaluator: the next stone to be played was selected from a depth search of one. I got away with this (kind of) because of the nature of TD(lambda) learning.
The AI would select a move not really 'knowing' why it was the best move!
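That depth-one selection can be sketched as below. The trained hierarchy is stood in for by a placeholder linear evaluator; the 9x9 board size, the weights, and all names are illustrative assumptions, not details from the thesis.

```python
import numpy as np

BOARD = 9
rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, BOARD * BOARD)   # stand-in for the trained nets

def evaluate(board):
    """Placeholder board evaluator: higher is better for player +1."""
    return float(np.tanh(board.ravel() @ w))

def select_move(board, player):
    """Try every empty point, evaluate the resulting position, keep the
    best. A one-ply search: the evaluation alone carries all the
    'lookahead', which TD(lambda) training is meant to make viable."""
    best_move, best_value = None, -np.inf
    for r in range(BOARD):
        for c in range(BOARD):
            if board[r, c] != 0:
                continue                      # occupied point
            board[r, c] = player              # tentatively play here
            value = player * evaluate(board)  # score from player's view
            board[r, c] = 0                   # undo the trial stone
            if value > best_value:
                best_move, best_value = (r, c), value
    return best_move

board = np.zeros((BOARD, BOARD), dtype=int)
move = select_move(board, player=1)
```

Nothing in the loop explains *why* the chosen point is good; the preference is implicit in the trained weights, which is exactly the "not really knowing" behaviour described above.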