Neural Network - Discussion



#21 IADaveMark   Moderators   -  Reputation: 2533


Posted 22 July 2008 - 02:06 PM

Quote:
Original post by ID Merlin
I don't have a reference to cite, it may have been an earlier post here, but one of the things that NNs are not good at is dealing with sequences of events.

That is correct. One way around it would be to have some of your inputs map to historical data at discrete time periods (e.g. 1 second ago, 5 seconds ago, 15 seconds ago) or to discrete points on a historical event list (e.g. CurrentEvent - 1, CurrentEvent - 2, etc.).
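A minimal sketch of that idea in C++ (the names, window size, and lag choices are illustrative assumptions, not from the post): keep a small buffer of past samples and append lagged copies of the signal to the network's input vector.

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

// Expose time-lagged copies of a signal (e.g. enemy distance sampled once per
// second) as extra network inputs so the net can react to recent history.
struct HistoryInput {
    std::deque<float> samples;            // newest sample at the front
    std::vector<int>  lags{1, 5, 15};     // "1 second ago, 5 seconds ago, 15 seconds ago"

    void record(float value) {
        samples.push_front(value);
        if (samples.size() > 16) samples.pop_back();   // keep a bounded window
    }

    // Current value plus the lagged values, ready to feed to the net each tick.
    std::vector<float> asInputs(float current) const {
        std::vector<float> in{current};
        for (int lag : lags) {
            if (samples.empty()) { in.push_back(current); continue; }
            std::size_t i = std::min<std::size_t>(lag, samples.size() - 1);
            in.push_back(samples[i]);
        }
        return in;
    }
};
```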

Quote:
I know that NNs are fairly good at learning to discern various distorted characters in a CAPTCHA image, for instance, but that is hardly a good "game", is it?

The reason for this is that NNs are suited to pattern matching. For example, an OCR system will take pixelized data and use each pixel as an input. Depending on which pixels are on and which are off, it can make a reasonable assumption of which letter or number it is. It is saying, in essence, "this kinda looks like this known pattern which I have mapped to output 'A'".

For game purposes, you are not using pixels but data inputs from the world around you. Health, damage per second, number of allies, number of enemies, proximity of powerups, etc. That's great and all, but I can build the same decision model with weighted sums and/or decision trees... and in a more custom-tailored way.
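For illustration, the kind of weighted-sum scorer being contrasted here can be this small (the inputs and weights are made-up examples):

```cpp
#include <numeric>
#include <vector>

// A hand-tuned weighted-sum decision model: each action scores the same
// normalized world inputs (health, enemy count, ...) with its own weights,
// and the agent picks the highest-scoring action.
float scoreAction(const std::vector<float>& inputs, const std::vector<float>& weights) {
    return std::inner_product(inputs.begin(), inputs.end(), weights.begin(), 0.0f);
}

// Example use (values are arbitrary):
//   float attack = scoreAction({health, enemies, allies}, { 0.5f, -0.8f,  0.3f});
//   float flee   = scoreAction({health, enemies, allies}, {-0.9f,  0.7f, -0.2f});
```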

But the NN can learn on the fly, you say? So can reinforcement learning or dynamic decision trees. And they can do it in a more controlled way.

*shrug*
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#22 BLiTZWiNG   Members   -  Reputation: 349


Posted 22 July 2008 - 04:31 PM

Colin McRae Rally 2 used a neural network to drive the AI cars.

I also believe that Battlecruiser 3000AD used one as well.

And the aforementioned Black & White.

Aside from those three games, I'm not really aware of any others, not to say there aren't any.

CMR2 is, in my mind, the perfect kind of use for a neural net, though.



#23 sion5   Members   -  Reputation: 100


Posted 22 July 2008 - 07:34 PM

OK, some interesting arguments there which I will be sure to pick up on in my writing.

Forgetting for the moment that I'm actually using NNs to automate a game agent (which evidently is not the best way to solve that particular problem): is it fair to say that NNs are good at what they do, but it's difficult to find an application for them?

André LaMothe of Xtreme Games wrote an article about NNs, and one thing that struck me was this part:

"The key to unlocking any technology is for a person or persons to create a Killer App for it. We all know how DOOM works by now, i.e. by using BSP trees. However, John Carmack didn't invent them, he read about them in a paper written in the 1960's. This paper described BSP technology. John took the next step and realized what BSP trees could be used for and DOOM was born."

So it took BSP 33 years to go from theory to practice! Don't get me wrong, I'm not expecting to create the next Doom using NNs, but maybe the argument could be that they're not being applied correctly?



#24 Hnefi   Members   -  Reputation: 386


Posted 22 July 2008 - 09:06 PM

Quote:
Original post by sion5
Forgetting for the moment that I'm actually using NNs to automate a game agent (which evidently is not the best way to solve that particular problem): is it fair to say that NNs are good at what they do, but it's difficult to find an application for them?

No, it's the other way around. It's easy to find applications for NN's - they are, in a sense, the ultimate heuristic - but they are so general that they don't perform specific operations well.

The biggest drawback is one I haven't seen mentioned here, and that is that backprop NN's tend to get stuck in local minima. This is a critical weakness. It can be alleviated with various tricks such as simulated annealing or other hill-climbing algorithms, but the more tricks you pull the less efficient the learning procedure becomes. The ultimate expression of this is using GA's, which you asked about. Using GA as a learning algorithm avoids the local minima problem almost completely, but you pay for it by throwing away all structural knowledge you have of the problem, which manifests itself as extremely poor learning rates.

NN's may become useful if we learn how to compensate for their weaknesses without sacrificing their strengths. It has been demonstrated that NN's and other statistical methods benefit greatly from appropriate preprocessing mechanisms, but they still do not approach hand-coded methods for stability, correctness or efficiency. In the end, most statistical methods disregard the actual structure of the problem they are trying to solve, which is why they fail. Classical AI techniques are typically better suited; a combination of the two may be best (but is difficult to achieve; I'm working on that problem myself).

Then there are statistical methods that allow for structured learning as a compromise. I've had some early success (didn't pursue it further) with Bayesian networks implemented with a learning algorithm. The price in runtime performance was heavy (superexponential with respect to the number of nodes, IIRC), but the rate of learning as a function of observed data was relatively impressive.
Quote:
So it took BSP 33 years to go from theory to practice! Don't get me wrong, I'm not expecting to create the next Doom using NNs, but maybe the argument could be that they're not being applied correctly?

Could be. It is evident that NN's are very powerful - just look at ourselves. As I mentioned, NN's benefit greatly from proper preprocessing. Preprocessing can be (but usually isn't) done by other NN's. This suggests that a tightly controlled hierarchical structure of NN's can achieve better results than we've seen so far. It makes sense from a theoretical perspective too, though I won't go into that here. But to actually design such a hierarchy properly is not possible at this point in time. Due to the nature of NN's there are no structural design methods to apply, and the solution space is much too large for random exploration to see any real success. I doubt much progress will be made within the next two decades in this particular area, unless a breakthrough of Einsteinian proportions is made.

#25 sion5   Members   -  Reputation: 100


Posted 22 July 2008 - 10:37 PM

Thanks Hnefi, that was a very informative post. I don't understand the latter part of this comment:

Using GA as a learning algorithm avoids the local minima problem almost completely, but you pay for it by throwing away all structural knowledge you have of the problem which manifests itself as extremely poor learning rates.

I understand the avoidance of the local minima problem, but what do you mean by saying you do so at the cost of throwing away all structural knowledge you have of the problem, which manifests itself as extremely poor learning rates?

#26 Hnefi   Members   -  Reputation: 386


Posted 22 July 2008 - 11:23 PM

Quote:
Original post by sion5
Thanks Hnefi, that was a very informative post. I don't understand the latter part of this comment:

Using GA as a learning algorithm avoids the local minima problem almost completely, but you pay for it by throwing away all structural knowledge you have of the problem which manifests itself as extremely poor learning rates.

I understand the avoidance of the local minima problem, but what do you mean by saying you do so at the cost of throwing away all structural knowledge you have of the problem, which manifests itself as extremely poor learning rates?

Structural knowledge is the knowledge you have about how to solve the problem. You can use that to make the search for a solution more efficient. A* in pathfinding is an example of this; you use the knowledge that distance is an optimistic approximation of node fitness in combination with the knowledge that an optimistic approximation is permissible for that particular algorithm to efficiently and correctly reach a solution.

Neural nets already throw away a lot of structural information, but not all. In the case of backpropagation, a typical example of this is to assign different learning rates to different parts of the network; often, you let each neuron keep track of a measure of "inertia" that makes the search for the optimum much more efficient. This "inertia" is obtained by measuring how much the neuron needed to adjust its weights in the past and let that affect the learning rate in the present. By doing so, you are using the knowledge that this particular neuron is close to or far from a stable optimum, so it makes sense to keep it more or less stable.
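In backprop terms, that "inertia" is the familiar momentum term. A minimal sketch (the constants are illustrative):

```cpp
// Weight update with a momentum ("inertia") term: part of the previous step is
// carried into the current one, which keeps stable weights stable and lets
// weights that have been moving consistently keep moving.
struct Weight {
    float value    = 0.0f;
    float lastStep = 0.0f;   // the neuron's remembered "inertia"
};

void update(Weight& w, float errorGradient,
            float learningRate = 0.1f, float momentum = 0.9f) {
    float step  = -learningRate * errorGradient + momentum * w.lastStep;
    w.value    += step;
    w.lastStep  = step;
}
```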

GA cannot do this. You cannot get a meaningful idea of what parts of the genomes should be kept stable, because recombining them may alter this completely, even to the point of inverting the fitness of the entire genome. You can make no guarantees that it is better or worse to swap one part of a genome with another. It's all random.

This is a problem with all statistical methods, but GA is one of the worst offenders. That's why it's so general and also so inefficient.

#27 sion5   Members   -  Reputation: 100


Posted 23 July 2008 - 12:13 AM

Please don't slate me on this, as I have not done enough research into the implementation, but...

For my NN I was going to introduce the GA to establish the synaptic weights. The fitness of each set of weights would be determined by how close the NN's output is to the training data output. The chromosomes with the highest fitness will, "in theory", be the weights that give the output we require.

If I haven't explained this well please let me know and I will try and elaborate further.

#28 Hnefi   Members   -  Reputation: 386


Posted 23 July 2008 - 12:22 AM

Quote:
Original post by sion5
Please don't slate me on this, as I have not done enough research into the implementation, but...

For my NN I was going to introduce the GA to establish the synaptic weights. The fitness of each set of weights would be determined by how close the NN's output is to the training data output. The chromosomes with the highest fitness will, "in theory", be the weights that give the output we require.

If I haven't explained this well please let me know and I will try and elaborate further.

That is the standard way of doing it, and it works (but inefficiently). What this solution does is make a number of copies of the neural net, evaluate all of them, and then randomly (with a preference for more fit nets) choose nets/chromosomes to "mate". Which genes get switched between two chromosomes is random; you don't know how different parts of the chromosome affect other parts (and by extension, the fitness of the entire chromosome), so you can't make a reliable statement about which parts of the chromosome should remain stable.
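In rough C++, one generation of that loop might look like this (the selection, crossover and mutation details are simplified placeholders, and evaluate() stands in for running the net over the training set; none of this is from the original posts):

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

using Genome = std::vector<float>;        // one candidate set of synaptic weights

// Assumed to exist elsewhere: runs the net with these weights over the
// training data and returns a fitness (e.g. negated total error).
float evaluate(const Genome& weights);

Genome breed(const Genome& a, const Genome& b, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> cut(0, a.size() - 1);
    std::normal_distribution<float> noise(0.0f, 0.05f);
    std::size_t c = cut(rng);
    Genome child(a.begin(), a.begin() + c);                 // single-point crossover:
    child.insert(child.end(), b.begin() + c, b.end());      // genes are swapped blindly
    for (float& w : child) w += noise(rng);                 // small random mutation
    return child;
}

void nextGeneration(std::vector<Genome>& population, std::mt19937& rng) {
    // Rank by fitness (each genome evaluated once), keep the top half,
    // and refill the rest by breeding randomly chosen survivors.
    std::vector<std::pair<float, std::size_t>> ranked;
    for (std::size_t i = 0; i < population.size(); ++i)
        ranked.emplace_back(evaluate(population[i]), i);
    std::sort(ranked.rbegin(), ranked.rend());               // fittest first

    std::vector<Genome> next;
    std::size_t survivors = population.size() / 2;
    std::uniform_int_distribution<std::size_t> pick(0, survivors - 1);
    for (std::size_t i = 0; i < survivors; ++i)
        next.push_back(population[ranked[i].second]);
    while (next.size() < population.size())
        next.push_back(breed(next[pick(rng)], next[pick(rng)], rng));
    population = std::move(next);
}
```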

It works, just don't expect it to learn anything significant in realtime.

#29 sion5   Members   -  Reputation: 100


Posted 23 July 2008 - 12:57 AM

I don't understand why you would want it to learn anything at runtime. You would have trained the NN offline; all you are doing is using it at runtime.

#30 Hnefi   Members   -  Reputation: 386


Posted 23 July 2008 - 01:09 AM

In that case, I don't really see the point. Why not simply use decision trees instead?

#31 Ohforf sake   Members   -  Reputation: 1832


Posted 23 July 2008 - 01:26 AM

I'm not that much into NNs, but my advice would be to do some testing. There are libs like FANN that can be used to set up and train NNs without much knowledge and to play with them.
Just set up a small framework and try to solve some problems. When I did this, I was amazed at how badly it actually works. It's by far not as easy as plugging some test data in and having something useful come out after 4 hours of learning.
So as I said, do some testing and you will see what they are good at and when they are simply a pain in the ass.
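For anyone who wants to try that, the basic FANN workflow looks roughly like this (the layer sizes, file names and training parameters below are placeholder values):

```cpp
#include <fann.h>

int main() {
    // A 3-layer net: 2 inputs, 3 hidden neurons, 1 output.
    struct fann* ann = fann_create_standard(3, 2, 3, 1);

    fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);

    // Train on a FANN-format data file until the desired error or epoch limit is reached.
    fann_train_on_file(ann, "train.data",
                       100000,   // max epochs
                       1000,     // epochs between progress reports
                       0.001f);  // desired error

    fann_save(ann, "trained.net");   // load again later with fann_create_from_file()
    fann_destroy(ann);
    return 0;
}
```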

As for NNs being able to respond to untrained, new scenarios: I highly doubt this works, or at least works better than the "old-school" algorithms.
As was already stated, NNs are good at pattern recognition, where "patterns" would be specific situations in your case. Now the NN will be fine at recognizing the pattern, even if there are small changes (like an enemy more or less, a bit more or less health, ...), but as soon as something completely weird happens, e.g. a pattern the developer didn't think of and therefore didn't train, it won't know what to do.
Example: if anyone here has played Crysis, go ahead, run into an enemy camp, climb a tower and wait there; the AI won't know what to do, because the devs didn't do much scripting for that scenario. Now, if an NN were used, it would be pretty much the same thing: it wouldn't know what to do for this pattern, as the devs didn't train it. I can't imagine the NN would take its chewing gum and a pair of boots and build a rocket launcher from them to blast you down that tower.

Now if you still want to stick with NNs, I would try a modular approach. Hnefi already said that preprocessing the data helps a lot. I would try a network of NNs, with an NN for each specific task. For example, a bot would have an NN for target prioritizing, an NN for abstract decision making, an NN for targeting, an NN for movement, ...
That way you can train each module individually and put all the parts together in the end, wiring outputs of higher-level modules to inputs of lower-level ones, as sketched below. Debugging also gets easier, because if something goes wrong, you can check the outputs of each module to see which one messed up.
However, what you get in the end (if it works) has the same functionality as decision trees, so there is no real benefit.
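A sketch of that wiring (the module names are invented for illustration; each module stands for an independently trained net behind a common interface):

```cpp
#include <vector>

// Common interface for independently trained modules.
struct Module {
    virtual std::vector<float> run(const std::vector<float>& inputs) = 0;
    virtual ~Module() = default;
};

// One tick of the bot: the high-level decision module's output feeds the
// lower-level targeting and movement modules. Each module's output can be
// logged on its own, which is what makes per-module debugging possible.
std::vector<float> think(Module& decision, Module& targeting, Module& movement,
                         const std::vector<float>& worldState) {
    std::vector<float> plan     = decision.run(worldState);
    std::vector<float> commands = targeting.run(plan);       // e.g. aim outputs
    std::vector<float> move     = movement.run(plan);        // e.g. steering outputs
    commands.insert(commands.end(), move.begin(), move.end());
    return commands;
}
```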

The only thing that could help NNs is to put some work into learning. If you find a good and fast way of learning from past mistakes, you could try to implement an AI for an RTS game.
Maybe again with a modular approach (e.g. one NN for enemy prediction, one NN for building, one NN for general strategy, one NN for micro-movement, ...). Or some sort of hybrid approach, with an old-school AI managing things but an NN that can steer some weights inside the hardcoded AI and serve as "intuition".
However, in both cases the AI would need the ability to learn from the match by "looking at the replay" or something similar and figure out a way to perform better in the future.
It would be really cool to have an AI that doesn't fall for the same trick over and over again.
I'm pretty sure this can be done far more easily without NNs, but you never know...


I guess this wasn't much help, but you really picked a tough topic and you should keep your expectations low.
On the other hand, I quite like these discussion threads, as they are usually an interesting read...

#32 sion5   Members   -  Reputation: 100


Posted 23 July 2008 - 02:10 AM

Hnefi, thanks again for your input in the discussion. It's a research topic that I'm investigating, so I'm going to stick with NNs :-)

Ohforf sake, thanks as well for your input; you also raised some interesting points.

So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry.

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood from experience!

#33 IADaveMark   Moderators   -  Reputation: 2533


Posted 23 July 2008 - 02:59 AM

Quote:
Original post by sion5
So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry.

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood from experience!

And now we have come full circle to my original comment regarding how the industry greets someone from academia - especially a student - with skepticism. Visit my link on that exact comment. It is to a blog post by Damian Isla (Bungie... i.e. Halo 2 & 3) where he laments that new students come to him trumpeting their prowess by having knowledge of A* and NNs. The former is the most-written-about subject in game AI except for maybe FSMs and the latter is not useful and therefore irrelevant. But the schools keep injecting the students with it anyway, telling them that it is a useful skill, and sending them off into interviews in the biz.



#34 IADaveMark   Moderators   -  Reputation: 2533


Posted 23 July 2008 - 05:35 AM

Quote:
Original post by kirkd
Just my $0.02 worth - I'll try to keep it to $0.02. 8^)
...
OK. $0.03.

And with the plummeting US dollar, it's more like $0.04. I move that the "2 cents" cliché be revised to account for inflation.



#35 shurcool   Members   -  Reputation: 439


Posted 23 July 2008 - 07:14 AM

Quote:
Quote:
Original post by Kylotan
or you can pick a method that explicitly accounts for all the scenarios a developer can envisage and get it working more reliably.

Original post by shurcool
And what happens in a scenario that the developer did not originally envisage?

After thinking a little more about this, I just wanted to bring more attention to this point.

Are there any valid responses to that question?

#36 Omid Ghavami   Members   -  Reputation: 998


Posted 23 July 2008 - 07:37 AM

Quote:
Original post by shurcool
Quote:
Quote:
Original post by Kylotan
or you can pick a method that explicitly accounts for all the scenarios a developer can envisage and get it working more reliably.

Original post by shurcool
And what happens in a scenario that the developer did not originally envisage?

After thinking a little more about this, I just wanted to bring more attention to this point.

Are there any valid responses to that question?


Patch

#37 Hnefi   Members   -  Reputation: 386


Posted 23 July 2008 - 08:13 AM

Quote:
Original post by InnocuousFox
Quote:
Original post by sion5
So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry.

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood from experience!

And now we have come full circle to my original comment regarding how the industry greets someone from academia - especially a student - with skepticism. Visit my link on that exact comment. It is to a blog post by Damian Isla (Bungie... i.e. Halo 2 & 3) where he laments that new students come to him trumpeting their prowess by having knowledge of A* and NNs. The former is the most-written-about subject in game AI except for maybe FSMs and the latter is not useful and therefore irrelevant. But the schools keep injecting the students with it anyway, telling them that it is a useful skill, and sending them off into interviews in the biz.

Actually, I must disagree here - but maybe I'm the exception that proves the rule. When I took the "Neural networks and learning systems" course at my university, we were taught that ANN's, while interesting, are not useful in practice. We were taught how they work as a theoretical foundation for and comparison to other techniques. In the AI courses I've taken, ANN's have been consistently downplayed as irrelevant; the view is that even if they did do something useful, it wouldn't matter because it doesn't help us actually solve any problems. It'd be a black box, useful for engineers wanting to build something that works but worthless for researchers who want to understand how things work. But again, maybe my university is the exception that proves the rule.

Quote:
Original post by shurcool
Quote:
Quote:
Original post by Kylotan
or you can pick a method that explicitly accounts for all the scenarios a developer can envisage and get it working more reliably.

Original post by shurcool
And what happens in a scenario that the developer did not originally envisage?

After thinking a little more about this, I just wanted to bring more attention to this point.

Are there any valid responses to that question?

Neural networks do reasonably well at generalizing; that's part of their appeal. If an unexpected situation were to occur, it is not impossible that a neural network would be able to deal with it effectively. How well it does depends on many things: the domain, the pre- and post-processing mechanisms, how well the net was trained, how the net is organized, what role it actually plays in the decision-making mechanism, etc.

#38 IADaveMark   Moderators   -  Reputation: 2533


Posted 23 July 2008 - 08:45 AM

Quote:
Original post by Hnefi
Neural networks do reasonably well at generalizing; that's part of their appeal. If an unexpected situation were to occur, it is not impossible that a neural network would be able to deal with it effectively. How well it does depends on many things: the domain, the pre- and post-processing mechanisms, how well the net was trained, how the net is organized, what role it actually plays in the decision-making mechanism, etc.

Regardless of the tool, any decision system is at the mercy of how many inputs are hooked up to it. If you fail to include an input as a possible criterion and yet that piece of information becomes the difference between two otherwise similar scenarios, your agent will not know what to do. Again, this is regardless of the tool used - NNs, BTs, HFSMs, whatever. It's a knowledge representation issue first.



#39 Hnefi   Members   -  Reputation: 386


Posted 23 July 2008 - 09:11 AM

Quote:
Original post by InnocuousFox
Regardless of the tool, any decision system is at the mercy of how many inputs are hooked up to it. If you fail to include an input as a possible criterion and yet that piece of information becomes the difference between two otherwise similar scenarios, your agent will not know what to do. Again, this is regardless of the tool used - NNs, BTs, HFSMs, whatever. It's a knowledge representation issue first.

I'm not sure I understand what you mean. Neural networks are strictly signal processors; their input domain is perfectly defined. If you attach a neural net to a camera, then any possible image sequence from that camera will be valid and defined input for the network. Attaching additional sensors is not possible without remodeling and retraining the net, but that is a weakness the same way it's a weakness of algebra that "1+cat" is undefined; it's a non-issue. It may not be able to deal with all situations intelligently, depending on previously mentioned factors, but it will always be able to make a decision.

I don't see how it can be a knowledge representation issue, because NN's do not model knowledge explicitly. NN's deal strictly with signals, not abstract representations.

#40 sion5   Members   -  Reputation: 100


Posted 23 July 2008 - 07:54 PM

Quote:
Original post by InnocuousFox
Quote:
Original post by sion5
So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry.

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood from experience!

And now we have come full circle to my original comment regarding how the industry greets someone from academia - especially a student - with skepticism. Visit my link on that exact comment. It is to a blog post by Damian Isla (Bungie... i.e. Halo 2 & 3) where he laments that new students come to him trumpeting their prowess by having knowledge of A* and NNs. The former is the most-written-about subject in game AI except for maybe FSMs and the latter is not useful and therefore irrelevant. But the schools keep injecting the students with it anyway, telling them that it is a useful skill, and sending them off into interviews in the biz.


The truth is, academia is there to encourage innovation. I'm sorry, but anyone can work in a factory pushing out the same product one after the other; it takes academics to say "Hey, wait, surely this can be done better?". Graphics and audio have come on leaps and bounds in the past few years, but where is AI? I absolutely love playing games as much as I like trying to create them, but it really irritates me that games should be at the forefront of AI development, yet subjects like statistics and robotics are way in front. If everyone's attitude is that we have found the best solution, then there will never be an advancement in this domain.

Back to the subject: are there any readers who are working on, or have worked on, high-profile games that have tried using NN technology?



