
Archived

This topic is now archived and is closed to further replies.

Tic Tac Toe AI question...




OK, right now I'm relearning C++ (for the second time), and I found an old program I was working on called TicAIToe. It's basically a smart tic-tac-toe game where the computer can accurately block your moves. My question about the AI: is it bad design to write out an if statement to determine which squares the player picked and, if there is a line (combo), have the computer place its mark in the last space of that combo? I can't think of a better way to do it; it just seems kind of ridiculous to have a 200+ line tic-tac-toe game. Or am I doing it right?

200 lines could be way too much, or it might be fine. It's hard to comment on whether code is ugly without seeing the actual code.

Sounds as if a combination of 'for' loops should do the job. Check each column and see if it contains two opposing pieces and one empty space; if so, put a piece in the empty space. Do the same for the rows, and you may need to handle the two diagonals as a special case.

I would use data files... it makes things so much simpler and easier to code for (and adding learning abilities is easier, at least for some types).

Are you at all interested in doing Tic Tac Toe AI the "right way" with MiniMax? There are some good resources in the "Articles & Resources" section above on doing exactly that.

Dave Mark - President and Lead Designer
Intrinsic Algorithm -
"Reducing the world to mathematical equations!"

OK, the TTT game board is just a simple square, right? So you could represent it with a two-dimensional array in C++ (or a vector). Then access the row and column you need like this:

// A 3x3 board; 0 means an empty square.
int Board[3][3] = {0};

// in main: loop through the board, see which squares are
// available, then place the piece in the most optimal one.
for (int row = 0; row < 3; ++row)
{
    for (int col = 0; col < 3; ++col)
    {
        if (Board[row][col] == 0)
        {
            // this square is free; score it here and
            // remember the most promising one
        }
    }
}

// You could use nested for loops and vectors; this should
// give you a good idea of what you need to do though. I hope.

If-else chains are OK, but they make for a much longer program that is harder to read. This solution is a bit better in my opinion. Good luck.

My Kung Fu is stronger.
neo88

I must say, I love Tic Tac Toe AI questions, as for once I feel that I can actually be helpful. My T3AI (no reference to any Ahnold movies intended) is located here, I believe. Feel free to check it out, although I'm not sure that it's the most recent version of that code. Ahh, good times back then!


- CD
Edit: Link fixed.


// Brought to you by Code_Dark
| [( Politics Forum Beta-Test )]



quote:
Original post by InnocuousFox
Are you at all interested in doing Tic Tac Toe AI the "right way" with MiniMax?


Dave, why is Minimax the "right way" in your opinion? It is merely "one way" to solve Tic Tac Toe (and no, I'm not suggesting people use ANNs).

Timkin

quote:
Original post by Timkin
Dave, why is Minimax the "right way" in your opinion? It is merely "one way" to solve Tic Tac Toe (and no, I'm not suggesting people use ANNs).

It beats the hell out of "[typing] out an if statement to determine what blocks the player picked and if there is a line (combo) then the computer places its x into the last space in the combo." Sorry if I inadvertently excluded some of the more esoteric methods.


Dave Mark - President and Lead Designer
Intrinsic Algorithm -
"Reducing the world to mathematical equations!"

I always liked to assign weights to each possible place where the computer could place a piece, and then choose one randomly. It makes playing against the AI at least a little interesting, since it won't always make the same moves. It was quite easy and short to implement as well, if I recall correctly.

I got mine to read from data files, then generated those with a minimax!
The data files held weights for each position. The AI tries to stay just ahead of the opponent: look at the opponent's score (determined by the weightings), look at your own score, and find the one square you can pick to either win or move the scores closer together (with you winning).

Add a little randomization and it was a nice AI to play with. (I also added in-program weight changers, to let people adjust the 'Algorithmically Precision-Calculated Weightings' and see how the AI played.) A nice weekend project...

Guest Anonymous Poster
I wrote a TTT program many moons ago that would learn how to play. Alas, after a short period of time I could no longer beat it, so I stopped playing.

Bob

quote:
Original post by Timkin
Dave, why is Minimax the "right way" in your opinion? It is merely "one way" to solve Tic Tac Toe (and no, I'm not suggesting people use ANNs).


I would say that minimax is "one right way". Using a bunch of nested if statements is not really a "right way" in my opinion. It might work, but it is ugly, error-prone code, and it only works for one situation. What if you want to change the board size? You have to completely rewrite your monster if statement and figure out the new best moves to play. If the board is big enough, you won't be able to figure out the best move for every situation, and the nested if statements will begin to consume huge amounts of memory, because you're essentially storing the entire game tree. That's fine for a 3x3 game of tic-tac-toe, but what about a 50x50 game?

Also, when I say "minimax is one right way," I don't really mean plain minimax, but "minimax-based". I can't think of any situation where you should stop at minimax (except maybe to verify that your alpha-beta search is working correctly). Alpha-beta provably produces exactly the same results as minimax, it is a pure win with no downside compared to minimax, and it is not much more complex (two more if statements and two more parameters to the search function).

Tic-tac-toe doesn't need an AI; there are only so many possible setups, so you could just as well make an array of those.

Not like you can win the game anyway

Guest Anonymous Poster
The alpha-beta/minimax methods cannot really be considered AI in any sense of the word. For such small search spaces they provide a SOLUTION to the problem and are then just a problem-solving method. If you want to get yourself used to alpha-beta searching, then fine.

Tic-tac-toe is one area where this problem-solving method can be applied, though you would probably learn more about it if you implemented alpha-beta for Connect Four, checkers, or chess. The key factor here is that the evaluation of a position is not learned but hard-coded.

In that sense, even the best chess engines that use alpha-beta have at the back end a series of static IF/THEN statements that define the value of a position, which is no different from implementing tic-tac-toe with a giant IF/THEN block. The knowledge is hard-coded in both cases.

Reinforcement learning would be a true application of AI methodology here. Sometimes reinforcement learning is combined with alpha-beta: ExChess is a chess engine that does this using Temporal Difference Learning.

To recap: if you want to learn about searching game trees, then go for an alpha-beta player; if you want to learn about A.I., then go for reinforcement learning or neural networks. Perhaps be adventurous and apply both at the same time, but in that case tic-tac-toe is a horrible choice of game, since alpha-beta will solve it nearly instantly.

- Rockoon

I like ANNs... I think a while back I built a tic-tac-toe game in BASIC that used one... Worked great... then I couldn't beat it... so I wiped its memory and tried again... it was a good weekend project. :-)

I don't get why some people don't remember that A.I. stands for ARTIFICIAL Intelligence, not real intelligence. If you want to say that an alpha-beta search doesn't have much use in A.I. fields other than board-game playing, that is fine, but it is still artificially simulating some kind of intelligent behavior. I also don't get why people want to throw ANNs at every "A.I." problem. If you define "intelligence" as "the way humans do it" (which we aren't even sure of), then why bother? That's like a "scientist" who is closed-minded and doesn't consider certain theories in his "research" because he has an agenda to advance, or because he has certain preexisting beliefs, or whatever else.

I mean, can anyone even define what intelligence is?

*ducks*

Unfortunately, alpha-beta-based algorithms more or less blow away other approaches when it comes to things like playing chess.

As for information being hard-coded... we need to take small steps. If you want to create a computer program that can look at its environment, learn how to play chess, get better at it, and then challenge the best programs and humans in existence, keep on dreaming. That project will be a miserable failure. It's like trying to build a spaceship to fly to the moon before the engine had been invented. Even if you created an exact working model of the brain, most people would give up after seeing only small improvements over a few years. Of course, it takes a human being many more years than that to be competent at things, and many more years of hard, focused training to be among the best in the world at something. So maybe we are setting our goals a little bit high? Some information has to be hard-coded for something as complex as a board game. It's very complex, and things are very concrete. Often there is zero wiggle room, and ANNs are good at generalizing, not at hammering out concrete details like looking 20 moves ahead in a chess position and finding the one sequence of moves that saves the game.

Guest Anonymous Poster
quote:
Original post by Russell
I don't get why some people don't remember that A.I. stands for ARTIFICIAL Intelligence, not real intelligence. If you want to say that an alpha-beta search doesn't have much use in A.I. fields other than board-game playing, that is fine, but it is still artificially simulating some kind of intelligent behavior.



This is where you are wrong. Alpha-beta does not simulate intelligent behavior. Alpha-beta is a minimax search algorithm. You only perceive intelligence in alpha-beta when the hard-coded end-node evaluation is capable of defeating you. This isn't about neural networks being the only form of A.I.; this is about alpha-beta NOT being a form of A.I.

The litmus test for A.I. is not that it is able to beat you.

Unless... are you of the contention that, say, a string search is also intelligent behavior?

The act of searching is not intelligent behavior, be it a string search or a tree search. Generalization is intelligent behavior. Learning is intelligent behavior. Alpha-beta does neither.

quote:

As for information being hard coded... we need to take small steps.



Correction: we already made that small step a long time ago; we no longer need to make it. Alpha-beta is a nice tree-searching algorithm that relies on the ability of its end-node evaluation. It is also not the best known tree-searching algorithm. If you really meant to say that small steps in upgrading the end-node evaluation are what is needed, then you are correct. That is where reinforcement learning comes into play.

If all you have is a hard-coded end-node evaluation, then that's all you have. In such a case you are not experimenting with artificial intelligence. You are only micromanaging the end-node evaluation of a tree-search function.

quote:

If you want to create some computer program that will be able to look at its environment and learn how to play chess, and then get better at it, and then challenge the best programs and humans in existence, keep on dreaming.



So chess programs like ExChess, KnightCap, and so on don't exist? These programs are better than 99.95% of all human U.S. Chess Federation members. While they can't beat the best humans, that in no way supports your point.

Did you know that the world-champion computer backgammon player uses neural networks? And that it stomps on all humans that go against it? The best Go-playing computer also uses neural networks.

These programs are greater than the sum of their parts. They learned and continue to learn. Their learning converges to a limit imposed by their respective algorithms. The current ExChess will never get substantially better against the best opposition, because its learning algorithm is near the limit of its effectiveness given the state information it takes into account. This is where A.I. research is: more state information means slower convergence. How do we speed up convergence? What new learning algorithms await? What if the opponent isn't the best?

quote:

That project will be a miserable failure.



Last words? Can we quote you later? "640K ought to be enough for anybody" comes to mind.

quote:
Of course, it takes a human being many more years than that to be competent at things, and many more years of hard, focused training to be among the best in the world at something. So maybe we are setting our goals a little bit high?



Of course, you seem to have confused human time with computer time... and even taking your assumptions about their equivalence as correct would not validate the idea you are trying to proffer.

quote:

There has to be some information hard coded for something as complex as a board game. It's very complex, and things are very concrete. Many times there is zero wiggle room, and ANNs are good at generalizing, not hammering out concrete details like looking 20 moves ahead in a chess position and finding the one sequence of moves that saves the game.


You seem to have a black-and-white mentality, as if it's either a neural network or an alpha-beta tree search. This assumption that both can't be applied at the same time is your undoing.

This is not to say that A.I. algorithms will be able to beat alpha-beta with a hard-coded end-node evaluation over the long run; that's not the point. The alpha-beta method without learning represents only a single form of optimality, a subset of game theory, and it will not take into account the specific weaknesses of its opponent.

As an example...

Weaker chess engines such as Crafty, in a sense, perform better than the strongest engines (Fritz and Junior) against average opposition: Crafty wins faster than Fritz will.

This is because Crafty is less game-theory optimal: it doesn't have the same search depth, it has an inferior end-node evaluation, and so forth. This divergence away from game-theory optimal represents a greedier strategy. An "inferior" alpha-beta end-node evaluation is "superior" in this respect. When you apply this idea to gambling games, it becomes immediately clear that the game-theory method should be only the last resort of an automated player. The game-theory method is not A.I. at all.

ExChess, over time, will learn to beat a rookie player in fewer moves than Fritz or Crafty is capable of. ExChess will learn what a particular piece on a particular square is worth against a particular opponent. Fritz and Crafty do no such thing. They are not A.I.

quote:
Original post by Anonymous Poster
...
Oh well. It seems you didn't understand much of what I said. Maybe that's my fault.

My main beef with what you said (or whoever said it; I can't tell one AP from another) was that you assume that if anything is hard-coded, then it isn't AI. Therefore, I made the claim that creating a program that has absolutely nothing hard-coded, which can then 1) learn to play chess 100% on its own, 2) improve its play 100% on its own, and 3) be competitive with the best playing entities in the world on its own, is not a realistic goal at this point; hence my example of building a space shuttle to travel to the moon before we had simpler machines like cars (smaller steps). If someone had tried to skip a few steps between riding horses and travelling to the moon (a big step), the project would have failed miserably. I'm not saying it will never happen. I'm saying that to create some brain that can basically learn anything and be among the best in the world at anything, all through its own learning, is not a realistic goal at this point in time. If you want to quote me on that, go ahead. I'm really only stating the obvious here. I'd be thrilled to be proven wrong, really. That would be amazing.

Russell,

I didn't propose that people use a production system instead of minimax. For some reason, Dave made that suggestion when he responded to my question... don't ask me why he did, though!

Yes, I would certainly classify minimax-based methods as one way that is known to work, but that's not to suggest it's the only way, or that it's THE way to solve such problems.

If that were the case in AI, there wouldn't be much innovation now, would there!

Cheers,

Timkin

Guest Anonymous Poster
quote:
Original post by Russell
quote:
Original post by Anonymous Poster
...
Oh well. It seems you didn't understand much of what I said. Maybe that's my fault.

My main beef with what you said (or whoever said it; I can't tell one AP from another) was that you assume that if anything is hard-coded, then it isn't AI.



That wasn't the point made at all. The point made was that for tic-tac-toe, an alpha-beta search would consist entirely of hard-coded information.

No different from a series of IF/THENs defining which move to make. While the form of the code is different, they are only abstractly different. This is game theory, not A.I.

Game theory is a great subject all by itself. I believe it lessens both game theory and A.I. when an algorithm from one domain is so often touted as a member of the other. It's an understandable misconception, since chess-playing computers are so often touted by the MEDIA as examples of computer intelligence. 'Man vs. Machine' events have extremely little to do with A.I.

quote:

Therefore, I made the claim that to create a program that has absolutely nothing hard coded [...]



It was never claimed that AI needs to have nothing hard-coded.


quote:
Original post by Siaon
Tic-tac-toe doesn't need an AI; there are only so many possible setups, so you could just as well make an array of those.

Not like you can win the game anyway


If you're going to bother writing a tic-tac-toe game, you should be able to support arbitrarily large boards. Remember that in reality most games are technically 'finite'. Chess is finite, for example. Even a more free-form game like soccer could be treated as finite if you subdivided the field into regions and assumed that at any given moment there is at most one player in any given region.

quote:
Original post by Anonymous Poster
The alpha-beta/minimax methods cannot really be considered AI in any sense of the word...

The key factor here is that the evaluation of a position is not learned but hard-coded...

Reinforcement learning would be a true application of AI methodology here...

... if you want to learn about A.I., then go for reinforcement learning or neural networks.


In the statements above, you have disqualified 90% of today's "game AI" as not being AI at all. Since this is a game-development board, I think you may want to stick with the definition of AI used by the game development industry.



Dave Mark - President and Lead Designer
Intrinsic Algorithm -
"Reducing the world to mathematical equations!"

quote:
Original post by Anonymous Poster
This isn't about neural networks being the only form of A.I.; this is about alpha-beta NOT being a form of A.I.

The act of searching is not intelligent behavior, be it a string search or a tree search. Generalization is intelligent behavior. Learning is intelligent behavior. Alpha-beta does neither.

In the game-development world, this is horseshit. If your agent has a simple if/then that processes "if I am being shot at, then run", it is a small, simple chunk of AI. No generalization, no learning, but damn it if it isn't intelligent. After all, wouldn't we do the same?

Dave Mark - President and Lead Designer
Intrinsic Algorithm -
"Reducing the world to mathematical equations!"

quote:
Original post by Timkin
I didn't propose that people use a production system instead of minimax. For some reason, Dave made that suggestion when he responded to my question... don't ask me why he did, though!
I did no such thing. Don't confuse your inferences with my implications. I simply claimed that minimax et al. are completely legitimate and a much more suitable way of solving the problem than what the original poster suggested.



Dave Mark - President and Lead Designer
Intrinsic Algorithm -
"Reducing the world to mathematical equations!"
