Archived

This topic is now archived and is closed to further replies.

VVoltz

Neural Network Learning??

Recommended Posts

Well, I'm doing a 2D turn-based strategy game, blah, blah, blah. Please, I want to know (if I decide to work with a NN): how long does it take for a NN to 'learn' enough to help the game AI make good decisions? Take, for example, a chess game with a neural network: with those variables, how long would you have to train the system before it could beat you?

The answer to your question will depend on what sort of neural network is involved, and how it is being used. Neural architectures vary widely in their behavior, and their selection involves trade-offs. What is probably more important, though, is what you want the network to learn, and how often you mean to update it.

Can you explain how you expect to use it?

-Predictor
http://will.dwinnell.com

Guest Anonymous Poster
Well, still with the chess game example, mmmm...
Let's say the NN learns about moves that do more damage to the other side and about some defensive positions (is that enough explanation?); it also learns the importance of its pieces (and their effect on winning when using certain tactics).

As for updates, it could be after each game (or, if it's better, after each turn).

Also, let's take a two-layer NN (is that the correct term? I learned it in Spanish).
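For what it's worth, "two-layer" usually means one hidden layer plus an output layer. Here is a minimal Python sketch of what such a network computes; the 64-input board encoding and the layer sizes are my own illustrative assumptions, not anything established in this thread:

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS = 64   # e.g. one value per chess square (an assumption)
N_HIDDEN = 32
N_OUTPUTS = 1   # a single evaluation score

# Randomly initialized weights; training would adjust these.
W1 = 0.1 * rng.standard_normal((N_HIDDEN, N_INPUTS))
b1 = np.zeros(N_HIDDEN)
W2 = 0.1 * rng.standard_normal((N_OUTPUTS, N_HIDDEN))
b2 = np.zeros(N_OUTPUTS)

def forward(board_vector):
    """Hidden tanh layer, then a linear output: the 'two layers'."""
    hidden = np.tanh(W1 @ board_vector + b1)
    return (W2 @ hidden + b2)[0]
```

The "learning" everyone is discussing is then just the question of how to adjust `W1`, `b1`, `W2`, and `b2` from game outcomes.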



ANNs (not NNs; NNs would be organic, so do not confuse the two)
would be quite easy to train if small, but the larger they get, the longer training takes. On the other hand, the larger the ANN, the 'smarter' it would be.

Humans are human-oriented; it is because of their nature. The design flaw: greed and jealousy. The solution: AI, which is never greedy and sticks to its ethics no matter what.

That Anonymous Poster was me, by the way.

OK, here at my university, I asked an ANN (thanks for the correction) specialist (Dr. Roberto Carranza; maybe some of you know him, as he is supposed to be the best mathematician in my country) about my problem, using this chess example, and he said:

"Why use an ANN for a problem we can solve without it? (He didn't understand that I didn't want to use any EXPERT MOVEMENTS from known attacks and defenses.) The problem is quite big and it would take a lot of effort, so I don't think an ANN would be the best way to solve it."

So I'm really considering not using an ANN for my project.

He's definitely right... it would take an incredibly long time to teach a NN to play chess without preprogramming it (by the way, it's OK to call it a NN around here; we know you aren't trying to teach some neurons in a petri dish to play chess, that's over on the bioengineering board). You certainly couldn't train it manually; you'd have to play it against recorded games... and even then you'd have to decide how "bad" a move the network just made. It wouldn't be fun and it wouldn't be fast.

Besides, chess is a pretty easy problem to figure out: it's completely deterministic and exact. It's much, much easier to do it that way (OK, relatively easier... you still sometimes need a supercomputer to beat Kasparov).

Anyway, there are many other avenues to pursue here; I just think that for the game you're describing, a NN would not be a good solution. Much like chess, I assume you will have lots of pieces on the map... there are much easier ways.

And to nice coder: sometimes a greedy and backstabbing AI is the best solution to a problem. That's why humans have done so well on Earth.

Or you could let the networks auto-train...

Make a bunch of chess boards (not necessarily displayed) and pit ANNs against ANNs. Then use genetic algorithms to breed better AIs...
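A miniature sketch of that auto-training loop, with heavy caveats: `play_match()` here is a stand-in that just compares weight sums so the code runs end to end; a real version would decode each genome into network weights and play out an actual game between the two:

```python
import numpy as np

rng = np.random.default_rng(1)
POP_SIZE, GENOME_LEN, GENERATIONS = 20, 10, 30

def play_match(a, b):
    """Toy stand-in: genome a 'wins' if its weights sum higher."""
    return 1 if a.sum() > b.sum() else 0

def evolve():
    # Each row is one individual's genome (e.g. flattened net weights).
    pop = rng.standard_normal((POP_SIZE, GENOME_LEN))
    for _ in range(GENERATIONS):
        # Round-robin fitness: wins against every other individual.
        fitness = np.array([sum(play_match(pop[i], pop[j])
                                for j in range(POP_SIZE) if j != i)
                            for i in range(POP_SIZE)])
        # Keep the fitter half, refill with mutated copies of the survivors.
        survivors = pop[np.argsort(fitness)[POP_SIZE // 2:]]
        children = survivors + 0.1 * rng.standard_normal(survivors.shape)
        pop = np.vstack([survivors, children])
    return pop

final_pop = evolve()
```

Selection plus mutation is only the simplest flavor of GA; crossover between surviving genomes would be the usual next step.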

Just a thought, I think the influence of ai-Junkie is showing...

_________________________________________
"Why, why why does everyone ask ''why'' when ''how'' is so much more fun"
-Spawn 1997

[edited by - Infuscare on January 25, 2004 1:48:42 AM]

I think that would be really, really interesting, purely from an AI-investigation/Frankenstein standpoint: your little creations going off into the world and doing OK for themselves.

Unfortunately, once again (and I'm totally not trashing your idea, just being practical), it's a matter of time. Even if you don't evaluate whole games (I guess you could drop them in at certain random points of saved games to practice the opening, middle, and endgame), it would take an incredibly long time. Imagine a population of even 100: creating the net, playing several moves (even more if you want to evolve *strategies*), 100 times, then making a new population.

Furthermore, to get a full spectrum of strategies, several separate populations would have to evolve and then be tested against one another at a much later point. And that's not to mention the huge network you'd need to make any evolved match even interesting to watch (just imagine the huge number of inputs and outputs [outputs for pieces and the moves they should make], then imagine *you* having to write the function relating the two... the problem space is mind-boggling).

Competitive populations are cool, though; they get much more vicious. Overfitting can be a problem, unless, like I said, you have many separate populations (or play against both other populations and pre-recorded games).

Something like this might be interesting in an art installation: a year later you can come back and see how good they are after 12 months of competing. But it would probably not be practical in this sense.

It still wouldn't be good to include NNs in games. We may have the technology, but it still wouldn't be much fun. I did this as one of my projects, and believe me, it takes a whole lot of your time... it is not fun having the computer play 1000 games of tic-tac-toe and still make mistakes... it did work, though.

quote:
Original post by snyp
It still wouldn't be good to include NNs in games. We may have the technology, but it still wouldn't be much fun. I did this as one of my projects, and believe me, it takes a whole lot of your time... it is not fun having the computer play 1000 games of tic-tac-toe and still make mistakes... it did work, though.


Why couldn't one ship the game with the neural network already trained?

-Predictor
http://will.dwinnell.com

quote:
Original post by TerranFury
Baked. It's been done. But it's not very interesting, n'est-ce pas?



Why couldn't the neural network continue to learn once in the field? Would that be interesting enough?

-Predictor
http://will.dwinnell.com


Guest Anonymous Poster
As for how ANNs are applied in the chess engines that use them:

Such chess engines are made up of three basic parts:

1) a Search() function, which incorporates a minimax search such as alpha-beta, MTD(f), etc., and returns the best move and the minimax score found within the search depth.

2) a static Eval(), which evaluates a static chess position (usually at the end nodes of Search()) and returns a value representing which side is winning or losing at that node. This is the function they want to replace with an ANN.

3) the self-learning algorithm.

Now consider a position P. If we Search(P, Depth) to some specific depth, we will get a minimax value V which is a better evaluation than we would get if we skipped the search and went straight to the eval.

The goal of ANNs in these engines is to learn a new eval (I'll call it NNEval(P)) that more closely matches Search(P, Depth). If you can train NNEval(P) to always match Search(P, 6), for example, then NNEval() could be said to be equal to a depth-6 minimax search.

Now replace the original Eval() with NNEval() in your Search(P, 6) and you have an 'effective' depth of 12.

In practice you will not find an NNEval() such as this, but the idea is a sound one. Most chess engines have a rather stupid end-node evaluation and rely on great search depths to find the minimax move. A smart eval 'effectively' extends the search, at the cost of taking more time to evaluate nodes, which means fewer actual nodes searched.

If NNEval() 'effectively' extends the search by more than is lost due to the end-node evaluation taking longer, then it's a better alternative than not using it.
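The Search()/Eval() split described above can be sketched with a minimal fail-soft negamax alpha-beta in which the evaluation function is pluggable; swapping in a trained NNEval() would just mean passing a different `eval_fn`. The toy game below is invented so the example runs:

```python
def alphabeta(node, depth, alpha, beta, eval_fn, children):
    """Fail-soft negamax alpha-beta; eval_fn plays the role of Eval()/NNEval()."""
    kids = children(node)
    if depth == 0 or not kids:
        return eval_fn(node)
    value = float("-inf")
    for child in kids:
        # Negamax: score the child from the opponent's view, negated.
        value = max(value, -alphabeta(child, depth - 1, -beta, -alpha,
                                      eval_fn, children))
        alpha = max(alpha, value)
        if alpha >= beta:  # cutoff: the opponent will avoid this line
            break
    return value

# Invented toy game: a node is an int, each move adds 1 or 2,
# and positions of 4 or more are terminal.
def toy_children(n):
    return [n + 1, n + 2] if n < 4 else []

def toy_eval(n):
    return n  # stand-in for a static Eval() or a trained NNEval()

best = alphabeta(0, 3, float("-inf"), float("inf"), toy_eval, toy_children)
```

Replacing `toy_eval` with a smarter evaluator is exactly the 'effective depth' trade-off discussed above: a better score at each leaf, in exchange for more time per node.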

It should be pointed out that the strongest chess engines do NOT use ANNs or any other 'learning' algorithms with regard to their searching ability. Most top engines do some simple opening-book learning, but that is the extent of it.

There could be any number of reasons for this. Perhaps it is computationally infeasible to make an ANN-based eval operate fast enough to cover the ground lost to the extra processing.

Or perhaps they just haven't implemented their ANNs in a well-optimised fashion. A lot of computer chess literature is based on techniques to make the engine a computational monster. The tricks used to make these engines faster are very extensive; one of the current best engines, 'Fritz', will search millions of nodes per second on today's single-processor desktop computers and completely obliterates chess engines that use ANNs.

For other games, ANNs have proven very competitive with non-ANN implementations. The current computer Backgammon champion uses ANNs, and the best Go-playing computers also use ANNs.

Guest Anonymous Poster
Go and backgammon use ANNs because ANNs are used primarily for detecting and evaluating patterns... this is especially necessary in Go, because searching is such a stupid idea there.

Disclaimer: I'm not an expert.

As far as I see it, I would not use an ANN as a decision system but as a representation system. An ANN would help an AI recognize context and narrow the field of the search by eliminating some moves based on 'experience'. The ANN would have to learn online, so you must find an error-evaluation function that acts as feedback regulation, something like observing whether the outcome of an action is good or bad.

Such an ANN would take input not only from the board but also from the current state of the decision system and the previous context, so that the ANN can make inferences between its own actions and the way the game evolves. It would then output a 'temperature' for each action, which would allow the system to 'feel' whether a move is good or bad.

The system would still have to learn, but I haven't removed the search ability of the minimax function; better, the ANN enhances it over time.

My favorite use of ANNs, from toying with them, is more for giving evaluation hints.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

[edited by - neoshaman on March 24, 2004 7:37:46 AM]

I am also not an expert...

Say you have an ANN; you can use positive/negative reinforcement (weight manipulation?) to retrain it (encouraging/discouraging behaviours).

You teach it by the use of a simple minimax (depth 1).
If its move is better than what it usually does, reward it (positive reinforcement); if it is worse, punish it (negative reinforcement); if it is the same, reward it up to a point (to encourage future expansion, but allow the current value to be learnt).
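Loosely, that reward/punish loop might look like the sketch below. Everything in it is an illustrative assumption: the 'network' is just a linear scorer, the features are made up, and `baseline_score()` merely pretends to be a depth-1 minimax evaluation:

```python
import numpy as np

rng = np.random.default_rng(2)
weights = 0.1 * rng.standard_normal(4)  # the toy 'network'

def features(move):
    """Hypothetical feature vector for the position after `move`."""
    return np.array([move, (move ** 2) % 5, 1.0, -move], dtype=float)

def net_score(move):
    return float(weights @ features(move))

def baseline_score(move):
    """Stand-in for a depth-1 minimax score; pretends move 2 is best."""
    return -abs(move - 2)

def reinforce(moves, lr=0.05):
    """Reward the net's pick if it matches the baseline's, punish otherwise."""
    global weights
    chosen = max(moves, key=net_score)
    best = max(moves, key=baseline_score)
    sign = 1.0 if chosen == best else -1.0
    weights = weights + sign * lr * features(chosen)

for _ in range(50):
    reinforce([0, 1, 2, 3])
```

A real version would nudge actual network weights by backpropagation, with the reward magnitude (or the user's 'vote' mentioned below) scaling the learning rate.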

Game selling point: the AI learns to play off you, *learning its own style*, or you can get it to learn by itself!

Perhaps have the rewards/punishments in steps, and allow the user to 'vote' on how good the AI's move was?
