
TicTacToe NN


Recommended Posts

I read the NN article on ai-junkie where he uses genetic algorithms to get good weights. I thought about making an NN that plays TicTacToe with that method.

The input would be 18 binary nodes that represent the board. Each square has two nodes (empty = 0,0; X = 1,0; O = 0,1), and the output is four binary nodes that represent the move (the squares are numbered 1-9 in binary, i.e. 1 = 0,0,0,1; 2 = 0,0,1,0; ... 9 = 1,0,0,1). I'll have about 200 hidden nodes (one layer) with an "upper boundary" (threshold) function: output 1 if the weighted sum is bigger than x, else 0.

To get the weights I'll have 100 TTT players in every generation that play against each other, and the ones that win the most have the best chance to go on to the next generation. Since the TTT players don't even know that they're not allowed to move into a square that is already taken, that kind of move will be an automatic loss.

After about 1000 generations I should have an unbeatable player, right? Can anyone please give me advice, warnings, or anything else? (I'm going to try to implement this in a few days, so anything would be helpful.) Thanks!!
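For what it's worth, here is a minimal Python sketch of the encoding and forward pass described above, assuming the weights are plain NumPy arrays supplied by the GA. The names (encode_board, forward, decode_move) and the random weights are placeholders for illustration, not a definitive implementation.

    import numpy as np

    # Hypothetical sketch of the encoding and forward pass described above.
    # Board squares: 0 = empty, 1 = X, 2 = O. Each square becomes two input bits:
    # empty -> (0,0), X -> (1,0), O -> (0,1), giving 18 inputs in total.

    N_INPUT, N_HIDDEN, N_OUTPUT = 18, 200, 4

    def encode_board(board):
        """board: list of 9 ints (0 empty, 1 X, 2 O) -> 18-element input vector."""
        bits = []
        for square in board:
            bits.extend([1, 0] if square == 1 else [0, 1] if square == 2 else [0, 0])
        return np.array(bits, dtype=float)

    def step(x, threshold=0.0):
        """Hard threshold activation: 1 if the weighted sum exceeds the threshold, else 0."""
        return (x > threshold).astype(float)

    def forward(inputs, w_hidden, w_output):
        """Single hidden layer with step activations; the weights are one GA individual's genome."""
        hidden = step(w_hidden @ inputs)   # shape (200,)
        output = step(w_output @ hidden)   # shape (4,)
        return output

    def decode_move(output_bits):
        """Interpret the 4 output bits as a binary number 1-9; anything else is illegal."""
        move = int("".join(str(int(b)) for b in output_bits), 2)
        return move if 1 <= move <= 9 else None   # None -> automatic loss per the rules above

    # Example: random weights stand in for one GA individual's genome.
    rng = np.random.default_rng(0)
    w_hidden = rng.normal(size=(N_HIDDEN, N_INPUT))
    w_output = rng.normal(size=(N_OUTPUT, N_HIDDEN))
    board = [1, 0, 0, 0, 2, 0, 0, 0, 0]   # X in square 1, O in square 5
    print(decode_move(forward(encode_board(board), w_hidden, w_output)))

One thing the decoder makes visible: 4 bits can encode 16 values but only 1-9 are squares, so seven of the possible outputs don't map to any square at all, on top of moves into occupied squares.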

1) Why only one layer? Two-layer nets are much more powerful. Or do you mean two layers, since you're talking about hidden nodes, which single-layer ANNs don't have?
2) I don't know how much you know about GAs, so this might be obvious to you: a good tip is to guarantee the top 10% a place in the next generation, and don't forget the crossovers etc. (a sketch of one generation follows below).

Otherwise it sounds like an interesting idea. I'm not too sure about the 1000 number though; try to create some performance measurement so you'll know when to stop.
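To illustrate point 2, here is a rough Python sketch of one GA generation with 10% elitism, uniform crossover, and Gaussian mutation. It assumes each individual's weights are flattened into a single NumPy array and that a play_match(a, b) function returning 1 / 0.5 / 0 for win / draw / loss (with an illegal move counting as a loss, as in the original post) already exists; all names and the mutation constants are illustrative.

    import numpy as np

    POP_SIZE, ELITE_FRAC, MUTATION_RATE, MUTATION_STD = 100, 0.10, 0.02, 0.3

    def fitness(population, play_match):
        """Round-robin: every player plays every other (both orders); score is total points."""
        scores = np.zeros(len(population))
        for i, a in enumerate(population):
            for j, b in enumerate(population):
                if i != j:
                    scores[i] += play_match(a, b)
        return scores

    def crossover(parent_a, parent_b, rng):
        """Uniform crossover: each weight comes from one parent, chosen at random."""
        mask = rng.random(parent_a.shape) < 0.5
        return np.where(mask, parent_a, parent_b)

    def mutate(genome, rng):
        """Perturb a small fraction of the weights with Gaussian noise."""
        mask = rng.random(genome.shape) < MUTATION_RATE
        return genome + mask * rng.normal(scale=MUTATION_STD, size=genome.shape)

    def next_generation(population, scores, rng):
        order = np.argsort(scores)[::-1]                   # best first
        n_elite = int(POP_SIZE * ELITE_FRAC)
        elites = [population[i] for i in order[:n_elite]]  # top 10% survive untouched
        # Fitness-proportional selection for the remaining parents.
        probs = scores - scores.min() + 1e-9
        probs /= probs.sum()
        children = []
        while len(elites) + len(children) < POP_SIZE:
            a, b = rng.choice(len(population), size=2, p=probs)
            children.append(mutate(crossover(population[a], population[b], rng), rng))
        return elites + children

For the stopping criterion, tracking the best individual's score against a fixed benchmark opponent (for example, a random or simple rule-based player) each generation gives a more informative signal than a hard-coded generation count.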
