Archived

This topic is now archived and is closed to further replies.

BeerNutts

Learning AI


Recommended Posts

Well, I've created a neat little AI game (right now it's Tic-Tac-Toe, but it can be ported to any board or card game) that begins life knowing nothing about the strategy of how to win at Tic-Tac-Toe; it only knows the rules of the game, and it "learns" how to win from experience playing a computer opponent or a human. It's a neat program that basically builds a very large state-based network, and after it's played enough games, you cannot beat it; it always plays the optimal spots. I'm in the middle of porting it over to Checkers (I have to change the interface), but I was wondering if anyone out there has experience with this. If so, could you point me to something similar, or maybe offer some advice? I'd like to see other things of this nature. Thanks, Will
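A minimal sketch of this kind of experience-table learner, assuming a simple win/loss tally per (state, move) pair. The names and the smoothing detail here are my own illustration, not necessarily how BeerNutts's TheBrain actually works:

```python
class TheBrain:
    """Tabular learner: maps (state, move) -> (wins, losses) from past games."""

    def __init__(self):
        self.memory = {}  # (state, move) -> (wins, losses)

    def choose(self, state, legal_moves):
        # Prefer the move with the best observed win rate in this state;
        # unseen moves get a neutral 0.5 via Laplace smoothing.
        def score(move):
            wins, losses = self.memory.get((state, move), (0, 0))
            return (wins + 1) / (wins + losses + 2)
        return max(legal_moves, key=score)

    def learn(self, history, won):
        # After the game, walk over every (state, move) played and mark it
        # according to the final result, as described in the post.
        for state, move in history:
            wins, losses = self.memory.get((state, move), (0, 0))
            self.memory[(state, move)] = (wins + 1, losses) if won else (wins, losses + 1)
```

After enough games, moves that historically led to wins dominate the choice in each recorded state.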

Sorry, I can't really help with the original request, but I did have a question: is your state-based learning intelligent enough to spot equivalent states? (For example, in tic-tac-toe, this state:

o | |
----------
o | x | o
----------
| x |

is effectively the same state as

| | o
----------
o | x | o
----------
| x |

In this case, you have a pair of states that are mirror images across the vertical axis. There are other symmetries (rotations and the other reflections) that would yield effectively equivalent states, too.

The benefit of spotting these equivalent states is obviously a smaller state space to search through, which should speed up any calculations.
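One common way to exploit these symmetries (a sketch of the general technique, not anything from the thread) is to reduce every board to a canonical form before storing or looking it up. Here a board is a 9-character row-major string ('.' for blank, my own convention), and the canonical form is the lexicographically smallest of its eight rotations and reflections:

```python
def rotate(b):
    # Rotate a 3x3 row-major board string 90 degrees clockwise.
    return ''.join(b[i] for i in (6, 3, 0, 7, 4, 1, 8, 5, 2))

def mirror(b):
    # Reflect across the vertical axis (swap left and right columns).
    return ''.join(b[i] for i in (2, 1, 0, 5, 4, 3, 8, 7, 6))

def canonical(b):
    # A board's equivalence class has up to 8 members (4 rotations, each
    # optionally mirrored); pick the smallest string as the shared key.
    forms = []
    for _ in range(4):
        forms += [b, mirror(b)]
        b = rotate(b)
    return min(forms)
```

The two boards from the example above canonicalize to the same key, so a learner indexing by `canonical(board)` would treat them as one state.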

Edited by - Kylotan on June 28, 2001 1:53:37 PM

You are correct, those states are effectively the same. But I wrote the AI engine (I call it TheBrain) such that it knows nothing about the particulars of a given game. This way, it can be ported to any other board game. I don't know of any other board games that are symmetrical like that (I know Checkers is not), so I don't want to optimize it for any particular game.
Besides, Tic-Tac-Toe is a simple game, and there really aren't that many states (at most 3^9 = 19683 board encodings, many of which never occur in actual play).
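The 3^9 figure comes from giving each of the nine cells three possible values, so a board packs into a single base-3 index (the '.'/'x'/'o' encoding is my own convention, and note that many of the 19683 encodings are unreachable in legal play):

```python
def state_index(board):
    # Encode a 9-cell board string as a base-3 number in [0, 3**9).
    digit = {'.': 0, 'x': 1, 'o': 2}
    n = 0
    for cell in board:  # row-major, most significant digit first
        n = n * 3 + digit[cell]
    return n
```

This gives every possible board a unique slot in an array of 19683 entries, which is one simple way to back the state table.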

Thanks for the suggestion though.

BeerNutts

Without knowing anything of the methodology of your game agent or indeed the algorithm you implemented, it is hard to comment on it or to discuss it meaningfully! One thought though...

quote:
Original post by BeerNutts
But, I wrote the AI engine (I call it TheBrain) such that it knows nothing about the particulars of a certain game.


There must be at least some level of interface between the game state and the agent. While most people do not recognise this as part of the agent, generally it is. The specific response behaviour of the agent will be determined by the perceived benefit of its actions (which are interpreted by the interface), particularly if it is a learning agent (i.e., there must be a utility function somewhere in the model, whether stated explicitly or hidden implicitly in the game model). I would be very interested to hear about your agent if you have managed to dissociate the agent architecture from its environment.

Would you mind elaborating just a little on the specific AI methodology you used?

Cheers,

Tim

Well, in a way, you could say the agent is separated from the game, but of course it can't be totally separated. There are basically two functions the agent must interact with: the game rules (where is a legal move?) and the game-over check (has someone won the game, or is it a draw?).

Basically, TheBrain plays a game by consulting its past game experiences: if it has been in the current game state before, it makes a decision based on what move it made next and what that move resulted in. Once the game is over, TheBrain traverses each move in the current game and marks each state based on whether it won or lost.
This way, all the agent needs to know is which moves are valid and whether the game has been won or lost. It does not need to know any strategies related to a particular game, only how to move and what constitutes winning.

As for the particular game state (e.g., how does it know the game board's dimensions?), it currently just uses a two-dimensional array, with the size determined at compile time. So, again, it consults outside macros (GAME_BOARD_SIZE_X, GAME_BOARD_SIZE_Y) for this property.

OK, when I said it "knows nothing about the particular game," I was only partially right. The Brain engine itself knows nothing about what game it plays; rather, it consults an outside source to find a legal move and to see if it has won (basically, it calls GameGetValidMove() and GameCheckGameOver(), where these functions are defined outside the scope of TheBrain).
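The separation described above can be sketched as a game loop that only touches the game through two supplied callbacks, mirroring the GameGetValidMove()/GameCheckGameOver() split (the function names and the random default agent here are my own illustration, not TheBrain's actual code):

```python
import random

def play_one_game(state, get_valid_moves, apply_move, check_game_over,
                  choose=lambda s, moves: random.choice(moves)):
    """Drive one game via game-supplied callbacks. The agent sees only
    legal moves and the final result, never any game-specific rules."""
    history = []
    result = check_game_over(state)
    while result is None:
        move = choose(state, get_valid_moves(state))
        history.append((state, move))  # remembered for post-game learning
        state = apply_move(state, move)
        result = check_game_over(state)  # e.g. 'win', 'loss', 'draw', or None
    return history, result
```

Porting to a new game then means supplying new callbacks, while the learning side (replaying `history` once the result is known) stays untouched.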

Nutts

Guest Anonymous Poster
Well, I've just written a simple card game, copied from my mobile phone, called Bantumi, and I'm writing some AI for it now. I'm going to implement a learning neural net and some genetic algorithms, so I can leave it running overnight while really clever NNs get generated and teach themselves.

Hey, someone mentioned NNs. I've been trying to understand how they work, why they work, and how to use them. Please explain!

Zach Dwiel

A neural network is basically an array of nodes (neurons, like in your brain) and an array of links joining the nodes up. You've got input links that feed values into the NN and output links that give the output.

Each link calculates the value it's going to send by multiplying its input by a specific weight value (a property of each link), and sends the result to its target node.

Each node then has a threshold: if the sum of all its inputs reaches this value, the node fires along its output links.

These values then hopefully filter down from input links -> nodes -> links, etc., to the output links.

Then you can get your output by, for example, taking the output channel with the greatest value.

If that sounds complicated, then it's probably my explaining, because they are really simple to get up and running. It's just the training that's the bugger.

Thanks for the info, but I guess I just don't grasp how it all works. I'll try a basic implementation of a net, but I just realized that I don't know what kinds of inputs and outputs neural nets receive and give. Thanks for the basic definition, though. Time to go surfing for neural nets, I guess!
