CHESS PROGRAMMING


I have read about chess programming on this site. I am a student of Mathematics & Computing at the Indian Institute of Technology. I want to know whether chess programming is still a hot topic like it was until a few years ago, especially after Deep Blue beat the world champion. How much AI is used in it, and what are the prospects for research in it? Who are the leading organisations and people in this field? Can independent research be done by oneself? Regds kaushik

Check out the Generation 5 website.

P.S. Mods don't like it when you post the same question in different forums. If you post and then find another forum that it fits better into, delete the misplaced post.


Sorry about that, sir..
I didn't know where to put it..

Chess is rather 'done' in terms of effective AI (as opposed to interesting AI approaches to chess).

If you want something interesting, challenging and a 'hot topic', look into developing AI for playing Go.

Cheers,

Timkin

Searching a Chess game state-space is just about within the grasp of a modern PC (in fact, whether you use a basic PC clone or a supercomputer doesn't really make much difference, because we are generally talking about a logarithmic scale).

So typically this is how computer chess algorithms work - just add a good heuristic for 'pruning' the search.

The branching factor in Chess (basically the number of possible moves given any board position) is about 35. The branching factor for Go is more like 200, meaning searching this game tree is not a viable option, hence a far more interesting problem (in my opinion anyway!).
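A quick back-of-the-envelope sketch (using the rough average branching factors above; the exact figures vary with the position) shows why full-width search scales so differently in the two games:

```python
# Rough game-tree sizes for a d-ply lookahead, using the average
# branching factors quoted above (Chess ~35, Go ~200).
def tree_size(branching_factor: int, plies: int) -> int:
    """Number of leaf positions in a full-width search of `plies` depth."""
    return branching_factor ** plies

chess_6ply = tree_size(35, 6)   # ~1.8 billion positions
go_6ply = tree_size(200, 6)     # ~64 trillion positions

# Go's tree at the same depth is roughly 35,000x larger.
print(f"chess 6-ply: {chess_6ply:.2e}")
print(f"go 6-ply:    {go_6ply:.2e}")
```

Even modest depths in Go blow past what any brute-force search can touch, which is exactly why pruning heuristics that work for Chess don't rescue Go.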

There is at least a second reason why Go is a more interesting problem than Chess, and that is that the rules for Go are extremely simple compared to Chess and quite often board evaluations can be estimated by a human player based solely on visual cues (as opposed to Chess where complex interactions between pieces need to be resolved in order to evaluate the board).

This is why the best computer Go player can generally play only about as well as someone just starting to learn the game. After a handful of games, though, the human player has usually learned to beat the AI.

quote:
Original post by Carrot

There is at least a second reason why Go is a more interesting problem than Chess, and that is that the rules for Go are extremely simple compared to Chess and quite often board evaluations can be estimated by a human player based solely on visual cues (as opposed to Chess where complex interactions between pieces need to be resolved in order to evaluate the board).




Sorry, you are incorrect. In the past I played Go at competition level for a number of years (never got my Dan though ;-() and I can assure you the interactions between stones can become incredibly complex. The wonderful thing about the game is that how one evaluates a board position works on many levels and is almost fractal in nature. This is hard to explain in words. It's only something you start to realize after you become fairly adept at the game, around 6 kyu-ish (and that takes a fair bit of playing for your average Joe!). Unfortunately a lot of researchers working on AI for Go do not play this well, so they remain unenlightened. I think this is a major factor restricting the development of Go AI.

Go, in general, is a very misunderstood game.

Stimulate

I realise that the interactions between the stones can be very complex, but what I meant was that it is much easier for a human player to evaluate a Go board than a Chess board, given similar levels of competence at both games.

The reason for this is that the interactions between the stones in a game of Go depend on the spatial relationships between them (hence the good visual cues). In a game of Chess each piece has its own individual characteristics, making visual cues less important (whereas in Go all stones are equal).

It's much easier to get an overall 'feel' for the game with a quick glance. In this respect the board evaluation is a little like pattern recognition (which is the point I think you are trying to make). But all this 'feel' is doing is directing your attention to several subareas of the board, which in turn resolve themselves at an even more detailed level. The final evaluation of the stones can be an extremely complex process - just as complex as any Chess game, if not more so. (But then this whole debate has been done to death by Chess and Go players for years.) Chess players think Chess is more complex; Go players vice versa. As hardly any good Chess players play Go at all well, nor do many great Go players play Chess well, this debate is likely to run on for centuries.

Stimulate

Yeah, that's true, the Go/Chess debate is an old one, but I'm not debating that as such. I'm comparing the two games within an AI context - Go is much more difficult in this respect.

You seem to think that I regard Go as a game lacking complexity! If you look back at any of my posts in this thread, I've never said anything like that. I said the rules are simpler (they are).

That 'feel' for the board you talk about really sums up my point. A computer Go player lacks the intuition you describe. Now I know you say that it just directs your attention to a general area of the board, but you underestimate that power.

If a computer algorithm could even do this, we'd be much closer to finding a decent AI player! In fact, the complex level of individual stone interactions you talk of is the area where the AI would excel.



From what I've read, most Go AIs have attempted to cull all portions of the board that aren't "relevant" to the game, but this is a tricky thing to do. For instance, a "ladder" move series could continue for ten or so moves, making an empty portion of the board crucial. In order for a computer to recognize this possibility, however, it has to look at the entire board anyway, so the benefit of the culling is fairly negligible.

Fup, this "feel" you have for the game is courtesy of your large, complex brain, and I challenge you to pick out exactly what it is that you recognize in order to get that feel. Maybe this could be accomplished through a multi-layer neural net, but that raises the whole question of what would constitute the training set. Any thoughts on this?




quote:
Original post by Mordoch Bob
Fup, this "feel" you have for the game is courtesy of your large, complex brain, and I challenge you to pick out exactly what it is that you recognize in order to get that feel. Maybe this could be accomplished through a multi-layer neural net, but that raises the whole question of what would constitute the training set. Any thoughts on this?




hehe, I'm well aware of that. I think I hinted in my first post that I have no idea how to analyse that 'feel'; it just happens when you reach a certain level of competency.

As far as I'm concerned Go is so mind-bogglingly complicated I try not to think about it anymore (although Go is what got me interested in AI in the first place). Hopefully one day I'll be able to spare the time to have a proper crack at it, but that's not going to be for a while.

I think a neural net could be used to aid play in the mid-game of Go. You could build a net which acts as an evaluation function. The net is trained using a genetic algorithm and learns through games played amongst the population (no human intervention is required). David Fogel and a colleague (whose name I can't remember) have recently shown this can be a successful technique - they applied it to checkers and evolved an ANN which played at a very high standard within just a few epochs.

I think for Go, though, a good idea would be to train two nets: one to perform the evaluation, and another to act as a kind of 'retina' which moves around the board selecting areas of interest for evaluation (size and position).
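To make the idea concrete, here is a minimal sketch of that evolutionary scheme. Everything here is illustrative: the "networks" are bare weight vectors standing in for real ANNs, and `play_game` is a hypothetical placeholder for a real game decided by search plus each player's evaluator. The point is only the structure of the loop - fitness comes purely from games within the population, with no training data.

```python
import random

POP_SIZE = 10
N_WEIGHTS = 4

def play_game(weights_a, weights_b):
    # Placeholder "game": the evaluator closer to a hidden target wins.
    # A real version would play actual checkers/Go via tree search.
    target = [1.0, -0.5, 0.25, 0.75]
    err = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
    return 1 if err(weights_a) < err(weights_b) else -1

def tournament_scores(pop):
    # Each individual's fitness = net wins in a round robin.
    scores = [0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            result = play_game(pop[i], pop[j])
            scores[i] += result
            scores[j] -= result
    return scores

def evolve(generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(N_WEIGHTS)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        scores = tournament_scores(pop)
        ranked = sorted(zip(scores, pop), key=lambda p: p[0], reverse=True)
        survivors = [w for _, w in ranked[:POP_SIZE // 2]]
        # Mutated copies of the winners refill the population.
        children = [[w + rng.gauss(0, 0.1) for w in parent]
                    for parent in survivors]
        pop = survivors + children
    return pop[0]  # best evolved evaluator
```

In Fogel's checkers work the evolved network sat inside a conventional minimax search; only the evaluation was learned.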
The main problem of course is the size of the bloomin' search tree. It's just enormous. I think some sort of breakthrough in game theory is required, tbh - one that isn't based on minimax.

One thing's for sure though: that million dollar prize isn't going to be won for a looooong time...

Carrot: I don't think you believe the game is not complex. I am simply contesting your statement that "often board evaluations can be estimated by a human player based solely on visual cues". Like I said, a human player can make an evaluation *at a certain level* based on visual cues, but then one has to dig deeper and calculate local interactions to provide a full analysis of the game's state. Both steps (and the ones in between) are equally important. To make a play based *solely* on the first depth of this process would probably be okay against a player 2/3 steps down in the ratings, but folly when playing against an equal - and disaster for sure in the endgame.

And I certainly don't underestimate the power of pattern recognition!

Stimulate

quote:
Original post by fup
One thing's for sure though: that million dollar prize isn't going to be won for a looooong time...



The prize offer lapsed in 2000... the sponsor of the prize died in '97, I think.

quote:
Original post by fup
And I certainly don't underestimate the power of pattern recognition!



But I tend to think it's actually not going to prove useful in this situation (in its current state). The human brain tends to treat pattern recognition differently to pattern classifiers such as ANNs. This is why ANNs struggle to pick out sub-structures within patterns, particularly ones that are evolving.

Humans have the benefit of being able to deduce higher-dimensional information from low-dimensional representations. We use lighting and perspective effects to see the world in 3D, whereas our retina is a 2D array (although this is not strictly true).

I believe that applying pattern recognition to Go needs to take this into account. There is higher-dimensional information about the locations of stones on a Go board that depends on their relationship to each other. This is true of Chess and other games as well. This higher-dimensional information is contained within the rules of the game, which dictate which patterns have which abilities and how changing a pattern in a subtle way can lead to a ripple effect through the board.

At least this is my interpretation of the Go-AI problem...

I'd be interested to hear people's opinions.

Cheers,

Timkin

But like I say, the technique has already been proved to be sound. The Fogel technique is pattern recognition after all. Okay, checkers is not Go, but the underlying principle behind the method is sound. The trained network *has* effectively learned a "higher dimensional" representation of the game state.

If you could apply minimax to a reasonable depth in Go then this technique would work. But, alas, as we all know, you can't.

It could, however, be applied to a 9x9 Go board - which, incidentally, is still proving harder than Chess to create an AI for.

Stimulate

I think there was a discussion of the application of neural networks to Go, given their successful implementation in checkers and backgammon, on the computer-go mailing list. However, the general consensus was that even given 81 inputs for a 9x9 board (and that's the bare minimum, and doesn't allow for consideration of previous turns, etc.), there's a snowball's chance in hell of the net converging on an intelligible solution. The reasons for this are twofold: one, there are just a whole lot of inputs, and each is randomly initialized, so the time it will take to reach an equilibrium is unknown.

More importantly, though, the strategy contained within a game of Go is extremely complicated, and it's difficult to say whether a given position (in the early to middle portions of the game at least) is definitively "good" or "bad" without either a whole lot of context (including, most likely, a personality breakdown of each player) or foreknowledge of the outcome of the game. So there's the question of whether there's a pattern there that can be recognized by anything smaller than something approaching the complexity of the human brain - but I imagine it would be worth a shot if I ever really get around to it.

Oh, and fup... sorry if I missed the point of your earlier post there, but a lack of sleep makes my brain liable to miss subtleties.

quote:
Original post by Mordoch Bob
I think there was a discussion of the application of neural networks to Go, given their successful implementation in checkers and backgammon, on the computer-go mailing list. However, the general consensus was that even given 81 inputs for a 9x9 board (and that's the bare minimum, and doesn't allow for consideration of previous turns, etc.), there's a snowball's chance in hell of the net converging on a given intelligible solution.



The network is used solely as an *evaluation* function in the technique I'm talking about. A minimax-type method is used to search the game tree, and the ANN is called to evaluate the position. The ANN *does not* decide the move. If you are interested I can email you Fogel's article?

There's no problem with a net having 81 inputs or even a thousand inputs - I don't know why they think that's a problem. It'll make the training slower, but it'll still work.
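The division of labour described above can be sketched in a few lines. All the names here are hypothetical stand-ins: `net_evaluate` would be the trained/evolved ANN (e.g. 81 inputs for a 9x9 board mapped to one score), and the toy "game" just grows a tuple of numbers so the code runs on its own.

```python
def net_evaluate(state):
    # Stand-in for a feed-forward pass of the evaluation network:
    # it only ever scores a position, it never picks a move.
    return sum(state) * 0.1

def minimax(state, depth, maximizing, moves_fn, apply_fn):
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return net_evaluate(state)       # ANN scores the leaf
    if maximizing:
        return max(minimax(apply_fn(state, m), depth - 1, False,
                           moves_fn, apply_fn) for m in moves)
    return min(minimax(apply_fn(state, m), depth - 1, True,
                       moves_fn, apply_fn) for m in moves)

# Toy "game": a state is a tuple of numbers; a move appends +1 or -1.
moves_fn = lambda s: [1, -1] if len(s) < 4 else []
apply_fn = lambda s, m: s + (m,)

score = minimax((0,), 2, True, moves_fn, apply_fn)
```

The search decides the move; the network is consulted only at the frontier, which is exactly the arrangement Fogel used for checkers.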

quote:
Original post by Mordoch Bob
Oh, and fup . . . sorry if I missed the point of your earlier post there, but a lack of sleep makes my brain liable to miss subtleties.


I'm sure most of the regulars here have that problem. I had 2 hours last night! Bloody gf snoring away as I type. Sorry if I missed yours!

Stimulate

If this is being used solely as an evaluation function, then what do you use as the training data set? How do you determine whether or not a given 9x9 configuration is good or bad? One could presumably get a bunch of games played by Go masters and then determine that every move that led to victory is good, and all that led to a loss are bad, but that's a huge oversimplification. Beyond that, though, I'm kind of at a loss on how to do this. I've always thought that it would be much more useful if one could create a Go ANN that would work to actually predict the possible movements of the opponent, and that the output could be used to cull the game tree. Once you've determined the possible movements made by the opponent, then you can minimax to your heart's content.

Obviously one net wouldn't suffice to predict everyone's choice of moves, but one could perhaps have a series of nets trained on the play of various players, and then try to correlate the current opponent to one of those nets. I don't even know if this would work, but I suppose it would be worth a shot if I have a large block of time when I've got nothing better to do. Oh, and fup, please do send me the paper you're referring to; it sounds interesting.

quote:
Original post by Mordoch Bob
If this is being used solely as an evaluation function then, what do you use as the training data set? How do you determine whether or not a given 9x9 configuration is good or bad? One could presumably get a bunch of games played by Go masters, and then determine that every move that led to victory is good, and all that led to a loss are bad, but that's a huge oversimplification. Beyond that though, I'm kind of at a loss on how to do this.



This is the credit assignment problem and it has no easy answer.

quote:
Original post by Mordoch Bob
I've always thought that it would be much more useful if one could create a Go ANN that would work to actually predict the possible movements of the opponent, and that the output could be used to cull the game tree.



A Bayesian Network would be a far more appropriate tool for this task. ANNs are pattern recognition tools (and classification tools), not decision making tools.


quote:
Original post by Mordoch Bob
Once you've determined the possible movements made by the opponent, then you can minimax to your heart's content.



Perhaps you might substitute "probable" for "possible" in this thought?

Cheers,

Timkin

Bob: A training set is not required, as the networks use a genetic algorithm to 'evolve' their weights. See my website for details.

I've just sent you an email regarding Fogel's article.

Stimulate

Guest Anonymous Poster
I think that chess programming is always a hot topic, because even though Deep Blue beat the world champion, the power of its hardware was really impressive and shows that we have not found a quick way to reach an optimal solution. You can have a look at the MTD(f) algorithm, which seems to be the most efficient algorithm now. You can also have a look at www.cinag.fr.st, which is the chess game I wrote. The software on that page has not been updated in 6 months, but development has not stopped; I have written a 3D version which is not currently released. If you want more information you can try to mail me at sauron@webmails.com, and perhaps you could help me in the development of a chess program.

quote:
Original post by Carrot
Searching a Chess game state-space is just about within the grasp of a modern


This is not true by any means. It isn't even close to being true. A good chess program today can search maybe 20 plies in a reasonable amount of time. That means 10 moves for each side. Evenly played games between top computers will last into the hundreds of moves (translating into 200-400 plies or more). 20 plies doesn't even begin to touch that.

quote:
Original post by Carrot
The branching factor in Chess (basically the number of possible moves given any board position) is about 35.


If you're using a plain minimax algorithm, you are correct. Of course, no one has used that on its own for probably 50 years. Using the alpha-beta algorithm, the effective branching factor is about 6, and using other pruning techniques such as null move, the effective branching factor is around 2, or maybe 3 sometimes, for the best programs.
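The effective-branching-factor point is easy to demonstrate: on the same tree, alpha-beta returns exactly the minimax value while visiting far fewer nodes. The tree shape and leaf values below are arbitrary toy data chosen only so the sketch is self-contained.

```python
import itertools

# Build a uniform depth-3, branching-3 tree with fixed leaf values,
# then compare nodes visited by full minimax vs alpha-beta.
def make_tree(depth, branching, values):
    if depth == 0:
        return next(values)
    return [make_tree(depth - 1, branching, values)
            for _ in range(branching)]

def minimax(node, maximizing, visited):
    visited[0] += 1
    if not isinstance(node, list):
        return node
    vals = [minimax(c, not maximizing, visited) for c in node]
    return max(vals) if maximizing else min(vals)

def alphabeta(node, alpha, beta, maximizing, visited):
    visited[0] += 1
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, visited))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                      # beta cutoff: prune siblings
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, visited))
        beta = min(beta, value)
        if alpha >= beta:
            break                          # alpha cutoff
    return value

leaf_values = itertools.cycle([3, 1, 4, 1, 5, 9, 2, 6, 5])
tree = make_tree(3, 3, leaf_values)
mm_visited, ab_visited = [0], [0]
mm = minimax(tree, True, mm_visited)
ab = alphabeta(tree, float("-inf"), float("inf"), True, ab_visited)
```

With good move ordering the savings grow dramatically with depth, which is where the "effective branching factor of 6" figure for real programs comes from.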

quote:
Original post by Carrot
There is at least a second reason why Go is a more interesting problem than Chess, and that is that the rules for Go are extremely simple compared to Chess


Perhaps, but when you take into consideration that there are many different rule sets for Go, it's not always possible to write a Go program that plays legally all of the time. This isn't a huge deal, but I think it is more annoying than having to add in some code for en passant captures or castling in Chess.

I'll comment on some remarks from other posters as well. Deep Blue wasn't all it was cracked up to be. It ran on ridiculous hardware for its time, and it still wasn't able to outplay a world champion. Yes, it won the match, but it didn't win the match due to Deep Blue. It won the match due to an opening trap that was added into Deep Blue's opening book by Grandmaster Joel Benjamin. In other words, Deep Blue had nothing to do with it. If Kasparov doesn't lose that game, he wins the match, and no one even remembers Deep Blue.

In addition to that, Deep Blue was highly unsophisticated, relying solely on its brute-force capabilities. It had a pretty bad effective branching factor as I recall.

Chess is far from being "solved". It's still far from being better than the best humans. Just recently GM Smirin won his match against 4 of the top chess programs in the world, and he did so in convincing fashion, finishing the match undefeated. In his first game against the reigning microcomputer world champion he mated the program in under 30 moves. That's domination - only it's not by the computers, it was by a human.

Generally when you see that a grandmaster lost a match to a computer, it was because that GM had no clue how to play against a computer. Computers play a highly different style than humans, and when a human tries to play his normal human style against a computer, he gets squashed. You can say "yeah yeah whatever", but the fact is that there are many weak masters, who wouldn't stand a chance against a player like Kasparov, who can get consistently successful results against the top computer chess programs. Having been seriously involved in the computer chess scene for a few years now, I have seen quite a few games where untitled players made fools out of top computer programs.

Go is of course much harder for a computer, due mainly to the fact that its branching factor is 361 in the opening position, as opposed to 20 in the opening position for Chess. One day computers will be able to search Go to many plies, and they will make the best humans look like fools. Go isn't innately "harder" to be good at; there are just too many possible continuations for a human or computer to analyze.

If you're new to all of this and want to make a game-playing program, checkers is probably a good choice. Its rules are simpler than Chess or Go, and its branching factor is much lower than in either game, so you can make a very strong program without needing much knowledge about special algorithms and such.

Another poster said something about MTD(f) being more efficient than alpha-beta. While that might technically be true, it's not significantly more efficient, and at times it can backfire. Choosing MTD(f) over alpha-beta will not make your program significantly better. Making your evaluation function more accurate and working on pruning techniques is what will make your program better.
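For reference, the core of MTD(f) is tiny: it converges on the minimax value through a series of zero-window alpha-beta probes. This sketch assumes integer scores and pairs the driver with a minimal fail-soft alpha-beta over a nested-list toy tree; in practice a transposition table is what makes the repeated re-searches cheap, and its absence is exactly where the approach can backfire as described.

```python
def alphabeta(node, alpha, beta, maximizing=True):
    # Minimal fail-soft alpha-beta over a nested-list tree
    # (inner lists = internal nodes, numbers = leaf scores).
    if not isinstance(node, list):
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

def mtdf(root, first_guess=0):
    # Narrow [lower, upper] bounds around the true minimax value
    # using zero-window (null-window) probes.
    g = first_guess
    lower, upper = float("-inf"), float("inf")
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alphabeta(root, beta - 1, beta)   # null-window search
        if g < beta:
            upper = g   # probe failed low: true value <= g
        else:
            lower = g   # probe failed high: true value >= g
    return g

value = mtdf([[3, 5], [6, 9]], first_guess=0)
```

A good first guess (say, the value from the previous iteration of iterative deepening) keeps the number of probes small.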

Russell

quote:
Original post by Timkin
Perhaps you might substitute "probable" for "possible" in this thought?


Yeah, I did that twice in there... I blame that whole "escaping from the dingy hell-hole known as high school in ten days" thing.

quote:
Original post by Timkin
A Bayesian Network would be a far more appropriate tool for this task. ANNs are pattern recognition tools (and classification tools), not decision making tools.


Good point. I was just thinking in terms of the earlier discussion regarding ANNs only. Have there been attempts to apply Bayesian nets to Go?

Fup, I'm fairly tired and just skimmed through your website, but I'm curious what you would use to determine the fitness of each neural net you evolved. If it were used to pick a move, then I suppose you could evolve it on the basis of whether or not it won, but what other than a training set could determine the "fitness" of an evaluation function? If I've missed something along the way, feel free to point it out.





quote:
Original post by Mordoch Bob
Have there been attempts to apply Bayesian nets to Go?



Not to my knowledge, but then I don't know of every effort being put into Go programming. I do know that ANNs have been applied to the local evaluation of game state.

The problem with using a BN is eliciting the probabilities. This could be done by analysis of professional games (as much ANN learning is), but this still leaves the problem of novel play - or a lack thereof, actually. If your system is only trained on a limited subset of the state space, then it will only ever play games within that subset... and will be incapable of making reasonable decisions outside that state space. At best, you could get it to give a best guess, but of course with no guarantees that this would be appropriate!

Cheers,

Timkin

quote:
Original post by Carrot
Searching a Chess game state-space is just about within the grasp of a modern PC



I didn't mean to imply by this that the WHOLE state-space is searched, just that searching is a viable option in Chess; in Go it is not.

quote:

A good chess program today can search maybe 20 plies in a reasonable amount of time. That means 10 moves for each side. Evenly played games between top computers will last into the hundreds of moves (translating into 200-400 plies or more). 20 plies doesn't even begin to touch that.



When comparing the two games you have to use a relative scale. A program searching ten plies ahead would be considered a much stronger Chess player than the same technique would make a Go opponent. This is my whole point: the strength of each computer opponent is measured by how well it plays against a human player.

quote:

Chess is far from being "solved". It's still far from being better than the best humans.



Granted, but in Go we are talking about getting to the level of an amateur player.

Finally, on the use of ANNs as a Go-board evaluator: I have used ANNs trained using 'Temporal Difference (lambda)' learning, but with the nets organised in a hierarchical fashion - the lower-level ANNs evaluating at a local level, with their outputs feeding more and more general networks in an attempt to build an abstraction of the board.

Although the ANNs were trained using TD(lambda), the inter-connections between ANNs were altered according to a GA weighting. I wrote this as a final-year thesis and it was never tested extensively, but the initial results seemed interesting. The AI learned basic local evaluations very quickly (because the ANNs were relatively small and repeated throughout the board, hence many examples), while the more abstract higher-level ANNs appeared to know when a certain area of the board was beyond salvaging, and would move to capture another area instead.

In essence, the network as a whole WAS a move-selector as well as being the evaluator. The next stone to be played was selected from a depth search of one. I got away with this (kind of) because of the nature of the TD(lambda) learning. The AI would select a move without really 'knowing' why it was the best move!

Carrot, do you have documentation, source, or an exe of that project? It sounds quite interesting.

Bob: The 'fitness' of an ANN is based solely on how many games it won/drew/lost against the rest of the population. No prior knowledge about the game is needed. That's one of the reasons I like this method.

The article I sent you the link for describes the process in enough detail. Maybe you should get some sleep and then read it again... ;0)

Stimulate
