Any help required?

Started by
48 comments, last by Mathematix 22 years, 7 months ago
Greetings fellow developers. Should anyone require info on the specifics of neural nets, I'm at your service.
Hi, thanks for the offer. I'm trying to show that long-term potentiation in plastic neuron controllers can lead to homeostatic adaptation to perturbations from the environment. Basically I'd like to be able to show that transient features in the environment become intransient due to phylogenetically adapted plasticity in the controller of the simulated organism and that a tendency for internal stability leads to external stability and behavioural homeostasis.

Can you help?

Mike
I admit, I am in tears of laughter and dare not repeat the above. :D Would this be biology by any chance??

Anyway, let's start at the beginning. What specific data will be available for analysis, and what information do you hope to get from the network? Try to limit both as much as possible to keep the size of your network down.
quote:Original post by Mathematix
Greetings fellow developers.

Should any require info on the specifics of neural nets, I'm at your service.


I would be interested in reading your thoughts about how you would apply ANN to the AI
decision-making needs of the typical First Person Shooter or Real Time Strategy game.

In other words, which AI decisions that have to be made in those games would you consider
good candidates for being practically solved using an ANN? And why? For instance,
there is a need for pathfinding in both of these types of games. Would you consider that
decision to be a good ANN candidate? If so, why? What about other AI decisions?

BTW, I think one of the reasons that ANNs are rarely used in computer games is
that developers are not able to find places to use them effectively. Perhaps opinions
such as yours might give someone an idea of a good way to deploy an ANN in a game?

Thanks,

Eric
Hello! Nice to see a willing person around
I'd like to second Geta on his question; that would be interesting for me and my Master's... although I have another question that I can't find a simple answer to:

How on Earth do you know how many layers / how many nodes per layer to use? I totally understand the idea of weights; heck, I even understand the idea of splitting the space of samples in two until each class can sit in its own little part of the space (forgive the certainly awful phrasing here).
But what is the thing that makes you say: "well, we need another node on this layer, see?" I have a vague memory of a book that explained that the layers served to split the space in two, n times, where n would be the number of layers, but nothing about the nodes on a layer...

thanks for any insight you might give


Sancte Isidore ora pro nobis !
(Before you start reading: a bit about academic homeostasis at the top, a reply about ANNs in games halfway down. Wouldn't want you to bore yourselves now, would I?)

I only wrote that first reply because it's what I'm working on at the moment for my dissertation, and buggered if I can get it to work ;-). Homeostasis, by the way, is something I feel is very important in neural networks and is the basis of learning during the lifetime of an agent. To quickly spill my guts and get on with a more game-related discussion: if an agent is born into an environment that never changes, then no learning is necessary and all adaptation can be achieved phylogenetically (by evolution).
The idea of using homeostasis as a behavioural mechanism is that the agent has certain internal variables, such as heat, heart rate, pain level and hunger, that must remain within bounds for the agent to continue functioning. If the organism is evolved so that its behaviour keeps those internal variables within bounds, then adaptation and learning are adjustments to behaviour driven by transient (i.e. non-permanent) properties of the environment (where predators live, general climate changes), with changes to behaviour aimed at _continuing_ to keep the variables within bounds.
If adjustments to internal parameters (neuron firing rates, synaptic strengths) only occur when the essential variables are out of bounds, then evolving for internal stability while evolving for a specific behaviour should, in theory, produce an organism that retains external, behavioural stability by retaining internal stability in a changing environment.

Imagine an RTS AI has inputs from the world and outputs to actions. Everything is going smoothly and its troops are _not_ being slaughtered (the essential variables are in bounds), so it doesn't adjust its evolved behaviours.
Suddenly the human player retaliates, the AI's troops are being killed quicker than sin and the essential variables shoot out of bounds; now the AI's behaviour is adjusted by rules evolved to change it in a direction that will bring those essential variables back within bounds. Hopefully, by adjusting its behaviour to regain internal stability, the slaughter is halted and it gets back into a more stable defensive or offensive position.
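As a toy sketch of that rule (purely illustrative Python; the variable names, bounds and noise level are all made up), behaviour parameters are left alone while the essential variable is in bounds and random-walked when it is not:

```python
import random

LOW, HIGH = 0.2, 0.8  # bounds for an essential variable (e.g. troop attrition)

def adapt(weight, essential, rng, noise=0.1):
    """One time step of the homeostatic rule: only adjust behaviour
    (here a single weight) while the essential variable is out of bounds."""
    if LOW <= essential <= HIGH:
        return weight                            # stable: leave behaviour alone
    return weight + rng.uniform(-noise, noise)   # unstable: search for stability

rng = random.Random(42)
print(adapt(0.5, 0.5, rng))   # in bounds: weight unchanged
print(adapt(0.5, 0.95, rng))  # out of bounds: weight perturbed
```

Run long enough, the random walk keeps perturbing behaviour until the essential variable comes back in bounds, at which point the current parameters are "locked in" by the rule simply ceasing to fire.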

Sod it, I'll just post my dissertation here when I've finished it.

Anyway...

On the subject of ANNs in computer games (at last).
The problem with ANNs is that they are an abstracted form of AI. They're not easily human-understandable finite state rules; they're numbers connected to other numbers that give a numeric output. With a finite state rule you can read it, have an expected event, see the event not occur, re-read the rule, see your mistake and debug. With ANNs you can look at the network, see some numbers, stare at the numbers, cry a bit and go down the pub. If it doesn't evolve to work, you're buggered.
The good thing about ANNs is that they are very easily evolvable, as they are numeric values with a smooth genotype-phenotype mapping. With logical rules, evolving an answer has a very brittle fitness landscape: a small change in the logical statement (think LISP, adding or removing a tree) can have huge phenotypic effects. This, in evolution, is bad and can lead to low local optima with no easy way of escape. Rolling-hills fitness landscape good, Manhattan-skyline fitness landscape bad.

How to use ANNs for an FPS then? Well, if you have a number of logical facts as inputs, such as the positions of guns on the level, the distance of the agent from all those guns, the positions of enemies on the level, their weapons, your health, their speed, your speed etc., you could, in theory, feed these into a neural network with your next action as an output. Some kind of polling network, maybe: 25 possible next actions (find health, find sniping rifle, find BFG-2000, find cover, just f**king run etc.), where the one that gets the highest score from the network is kicked in to work with your hand-written code for actually performing the action. You could, of course, have 10 outputs such as amount to turn left, amount to turn right, gun to use, amount to move forward etc. and use them directly, but on a hunch that is unlikely to work without a very complex, continuous-time (i.e. internal state) network that could be written by hand much quicker. I'm not saying it would work, but I'd have a backup plan. Perhaps even have a subsumption architecture with actions such as "move from A to B" as the base levels and goals such as "find health" as the later evolved levels, all working together. Sounds a bit harsh if you ask me, but it could work if your company gives you the time for R&D.
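A minimal sketch of that polling idea (illustrative Python; the action names, inputs and weight values are invented): each candidate action gets a score and the winner is handed off to hand-written code.

```python
ACTIONS = ["find_health", "find_cover", "attack", "run"]

def score_actions(inputs, weights):
    """inputs: world facts as floats; weights: one row of weights per action."""
    return [sum(i * w for i, w in zip(inputs, row)) for row in weights]

def next_action(inputs, weights):
    """Poll the network and return the highest-scoring action."""
    scores = score_actions(inputs, weights)
    return ACTIONS[scores.index(max(scores))]

# inputs here stand for: [low_health, under_fire, enemy_close, outnumbered]
weights = [[2.0, 0.5, 0.0,  0.0],   # find_health cares about low health
           [0.0, 2.0, 0.5,  0.0],   # find_cover cares about being under fire
           [0.0, 0.0, 2.0, -1.0],   # attack, unless outnumbered
           [0.5, 0.5, 0.0,  2.0]]   # run when outnumbered
print(next_action([1.0, 0.0, 0.0, 0.0], weights))  # → find_health
```

In a real system those weight rows would be the evolved part; the point is only that the network ranks actions and everything downstream stays ordinary hand-written code.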

So, ANNs then: use them as heuristics for your next move, not necessarily directly without any other form of AI.

As to the number of layers? Personally I'd ditch the feed-forward network and evolve a fully connected network, possibly including recurrence. Allow the number of neurons to be increased by evolution (i.e. a variable-length genotype), but use a neutral addition mechanism: add the neuron but give all of the connections from it zero weighting, so it makes no difference initially but can have its weights evolved onwards, letting evolution decide.
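That neutral-addition trick can be sketched as follows (illustrative Python; I'm assuming `weights[i][j]` is the connection from neuron j to neuron i in a fully connected tanh net). The grown network behaves identically until evolution touches the new zero weights.

```python
import math

def activate(weights, state):
    """One synchronous update of a fully connected tanh network."""
    return [math.tanh(sum(w * s for w, s in zip(row, state))) for row in weights]

def add_neutral_neuron(weights):
    """Grow the genotype by one neuron whose connections all start at zero,
    so the mutation is behaviourally neutral at first."""
    n = len(weights)
    grown = [row + [0.0] for row in weights]  # existing neurons ignore the newcomer
    grown.append([0.0] * (n + 1))             # the newcomer starts silent too
    return grown

old = [[0.5, -0.3], [0.2, 0.1]]
state = [1.0, 0.5]
grown = add_neutral_neuron(old)
# identical behaviour on the shared neurons: the addition changed nothing (yet)
print(activate(old, state) == activate(grown, state + [0.0])[:2])  # → True
```

Because the new neuron contributes nothing, fitness is unchanged at the moment of addition, and later mutations can drift its weights away from zero only when that actually helps.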

This is a long post and I'm not feeling great, so, goodnight ;-)

Mike
quote:Original post by MikeD
Hi, thanks for the offer. I'm trying to show that long-term potentiation in plastic neuron controllers can lead to homeostatic adaptation to perturbations from the environment. Basically I'd like to be able to show that transient features in the environment become intransient due to phylogenetically adapted plasticity in the controller of the simulated organism and that a tendency for internal stability leads to external stability and behavioural homeostasis.

Can you help?

Mike


ROFLMAO: You're a cruel, cruel man Mike Ducker! hehehe
FAO: Geta.

Hello Geta,

The first thing to remember about neural networks is that they were specifically created to search for frequently occurring patterns in the examples presented to them. Neural nets are composed of what are called 'feature detectors', designed for this specific purpose. The problem that you wish to solve for your first-person shooter is primarily a pathfinding problem, for which neural networks are not ideal.

A standard algorithm for such problems is A*. It involves dividing the environment into a set of locations/vertices and using them to determine the lowest-cost (shortest-distance) route to the desired location. This location could be a player him/herself, or the location of the enemy flag in a CTF game. The only real application for neural networks in such a game would be if you wished for the other bots to learn the movement patterns of players and other bots around the arena. That is such a complex and time-consuming thing to achieve that I am trying my best to delay my first attempt at solving it for as long as possible!
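For anyone who hasn't met it, a minimal A* over a grid might look like this (a sketch, not production code; the heuristic is Manhattan distance and every move costs 1):

```python
import heapq

def astar(start, goal, walls, width, height):
    """Return a shortest path from start to goal as a list of (x, y) cells."""
    def h(p):  # Manhattan-distance heuristic: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (estimate, cost so far, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= step[0] < width and 0 <= step[1] < height
                    and step not in walls):
                heapq.heappush(frontier,
                               (cost + 1 + h(step), cost + 1, step, path + [step]))
    return None  # goal unreachable

print(astar((0, 0), (2, 0), walls={(1, 0)}, width=3, height=2))
```

The same skeleton works on any vertex set (waypoints, navmesh polygons); only the neighbour generation and the cost function change.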

Hope this answers your question.



Regards,
Mathematix.
FAO: ahw.

Greetings,

I gather you are talking about linear and non-linear decision boundaries. The golden rule when deciding what you wish a network to learn is that both the inputs and outputs are definite and concise: definite being 'to the point' and concise being 'in very short terms'. Adhering to these basic principles helps ensure (before you even start your network design!) that your decision boundary is linear and, hence, that the network is able to classify the patterns it recognises.
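The classic demonstration of the linear-boundary point: a single threshold unit can learn AND (linearly separable) but can never learn XOR. A quick sketch, assuming 0/1 inputs and the standard perceptron learning rule:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights by (target - output) * input."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([predict(w, b, x1, x2) for (x1, x2), _ in AND])  # → [0, 0, 0, 1]

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w2, b2 = train_perceptron(XOR, epochs=100)
print([predict(w2, b2, x1, x2) for (x1, x2), _ in XOR])  # never [0, 1, 1, 0]
```

No amount of training fixes the XOR case: no single line separates its two classes, which is exactly why you need a hidden layer there.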

There are a few formulae on offer that help you to approximate the number of hidden layers required and the number of neurons per layer. Alas, sometimes with AI, an approximation can be pretty useless if your training examples are ambiguous. A practical method for determining a suitable number of hidden layers, etc., is to start with a fairly small number and build on it, using back-propagation with supervised learning (which minimises a mean squared error and is one of the easiest to implement). Watching that mean squared error is also the best way to check for convergence in your network. And beware of what you set your learning rate parameter and momentum constant to, as these values greatly affect the performance of the network.
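The learning-rate/momentum interplay mentioned above boils down to the standard update rule Δw = η·gradient + α·Δw_previous. A tiny illustration on a one-dimensional error surface (plain gradient descent with momentum rather than a full back-propagation net; all values invented):

```python
def minimise(grad, w=5.0, lr=0.1, momentum=0.9, steps=200):
    """Gradient descent with momentum: each step blends the fresh gradient
    with the previous step, smoothing the path to the minimum."""
    delta = 0.0
    for _ in range(steps):
        delta = lr * grad(w) + momentum * delta
        w -= delta
    return w

# error E(w) = w**2 has gradient 2*w and its minimum at w = 0
print(minimise(lambda w: 2 * w))               # with momentum
print(minimise(lambda w: 2 * w, momentum=0.0)) # without
```

Too small a learning rate and convergence crawls; too large and the weights oscillate or diverge; momentum lets you use a smaller rate while still moving quickly through shallow regions of the error surface.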

In summary:

1) Is the problem solvable by a neural net? Do linear decision boundaries exist with which the network can classify examples?
2) Limit the number of inputs and outputs to your net.
3) Choose a supervised learning algorithm to keep track of events.
4) Be very careful how you choose your learning rate parameter and momentum constant values.

Some great detailed tips are offered in the masters-level neural nets text, entitled:

"Neural Networks: A Comprehensive Foundation"
Author: Simon Haykin
ISBN: 0-02-352761-7
Publisher: Prentice-Hall.

This is a very detailed text that is not suitable for the beginner, but as you are a masters student, it should make for valuable reading!

Enjoy!

Regards,
Mathematix.
MikeD! Will get back to you!

Regards,
Mathematix.

