Evolutionary Algorithms + Neural Net

Started by
7 comments, last by Emergent 11 years, 6 months ago
I am trying to create a program that models simple artificial life similar to what the guy made in this video:

[media]
[/media]

So far my program has some critters that scurry around my screen looking for food but with no "brain". I've never messed with neural networks before but have been reading up on them in books and online for the past few days. This particular case seems a bit different from your basic neural network in some aspects though.

It seems that his NN doesn't use any kind of learning or error measurement; instead, the improvement comes from the evolution of the creatures? I'm guessing the weights are randomly initialized, and the creatures that are fittest / survive have their weights copied over to the next generation (with crossover and random mutation)? He says the brain consists of a directed graph where the edges are weights, and I'm guessing this is what he means?
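For what it's worth, that select/crossover/mutate loop can be sketched in a few lines. Everything here (a fixed-topology network whose genome is just its flat weight vector, and all the function names) is a hypothetical illustration, not necessarily what the video does:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover of two flat weight vectors."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(genome, rate=0.05, scale=0.5):
    """Perturb each weight with probability `rate`."""
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in genome]

def next_generation(population, fitness, elite=2):
    """population: list of weight vectors; fitness: genome -> float.
    Keep the best `elite` genomes unchanged (elitism), then fill the
    rest of the next generation by breeding the top half."""
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:max(2, len(ranked) // 2)]
    children = ranked[:elite]
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        children.append(mutate(crossover(a, b)))
    return children
```

The fitness function would come from the simulation itself (e.g., how much food a critter ate before dying), so no error signal or backpropagation is ever computed.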
I have heard of this type of thing. More often I've heard of GAs using a simple rules-based tree where the rules get cross-bred and mutated. Essentially, any "brain" that can be summarised as a combination of numbers and logical operations can be cross-bred, although results may vary. Personally, I think it's a waste to use NNs without using one of their major features.
Genetic Algorithm
Videos
Genetic Algorithms in Games

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

If you are looking for a NN algorithm that uses evolutionary methods, have a look at NEAT (NeuroEvolution of Augmenting Topologies): http://www.cs.ucf.edu/~kstanley/neat.html. There are libraries there for different languages, so you can just plug your inputs in if you don't want to implement it yourself.
Just a suggestion, but if you manage to get it to work, you should try higher-level evolution, as in what benefits the society, not what benefits the individual.

Like have a few areas, with little traffic between them, and have inputs for the positions, directions, etc. of other beings.

So I would imagine the best society's beings would aim for food nobody else is aiming for, while the worst just go for the nearest food, slowing down their growth as they fight over it.

o3o


Just a suggestion, but if you manage to get it to work, you should try higher-level evolution, as in what benefits the society, not what benefits the individual.

Like have a few areas, with little traffic between them, and have inputs for the positions, directions, etc. of other beings.

So I would imagine the best society's beings would aim for food nobody else is aiming for, while the worst just go for the nearest food, slowing down their growth as they fight over it.


That's interesting in that it now starts to involve game theory -- especially Nash equilibria. The problem is that "benefiting society" is really hard to codify without getting completely subjective.

One interesting note is that evolutionary algorithms will benefit society as long as you encode the right parameters. For example, if you rate pieces of food not only by distance but also by the proximity of other agents to them -- and also track successes and failures in acquiring those items -- you will start to see agents compete less for food, since it benefits them to travel slightly farther for the guaranteed hit. Per Adam Smith, Nash, etc., each agent is still self-serving, but in a way that is non-destructive to the group as a whole (via Pareto improvements).
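That food-rating idea can be sketched as a simple scoring function; the `crowd_penalty` knob and the exact penalty form here are made-up assumptions for illustration:

```python
import math

def food_score(agent_pos, food_pos, rival_positions, crowd_penalty=2.0):
    """Lower is better: my travel distance to the food, plus a penalty
    for every rival that is closer to it than I am (a race I'd likely
    lose). A slightly farther but uncontested item can beat a nearby
    contested one."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    my_dist = dist(agent_pos, food_pos)
    rivals_ahead = sum(1 for r in rival_positions
                       if dist(r, food_pos) < my_dist)
    return my_dist + crowd_penalty * rivals_ahead

def pick_food(agent_pos, foods, rival_positions):
    """Choose the food item with the best (lowest) score."""
    return min(foods, key=lambda f: food_score(agent_pos, f, rival_positions))
```

With a rival sitting almost on top of the nearest food, the agent will prefer a somewhat farther uncontested item; with no rivals it simply takes the nearest.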

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC


So I would imagine the best society's beings would aim for food nobody else is aiming for, while the worst just go for the nearest food, slowing down their growth as they fight over it.


Along those lines, there is also Novelty Search: http://eplex.cs.ucf.edu/noveltysearch/userspage/
BTW, as a follow-up on the "fighting for food" and game theory bit, check out the various references to the "Hawk-Dove game" and evolutionary game theory. The basic rules are:

  • Doves will eat together
  • Hawks scare off doves and eat the food
  • Hawks fight other hawks -- and can't eat while doing so

Therefore, if the population is mostly doves, it pays to be a hawk. If the population has multiple hawks, it pays to be a dove. If there is exactly one hawk, it is a toss-up, really.
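Those rules correspond to the standard Hawk-Dove payoffs (v = value of the food, c = cost of a fight, c > v), and a tiny replicator-dynamics loop shows the population settling at the classic mixed equilibrium of v/c hawks. The particular v, c, and step-size values below are arbitrary choices for illustration:

```python
def hawk_dove_payoff(me, opponent, v=4.0, c=6.0):
    """Standard Hawk-Dove payoffs: v = value of the food, c = fight cost."""
    if me == 'H' and opponent == 'H':
        return (v - c) / 2        # fight: split the value, pay the cost
    if me == 'H' and opponent == 'D':
        return v                  # hawk scares the dove off and eats
    if me == 'D' and opponent == 'H':
        return 0.0                # dove retreats, eats nothing
    return v / 2                  # two doves share the food

def replicator_step(p_hawk, v=4.0, c=6.0, rate=0.1):
    """One discrete replicator-dynamics step on the hawk fraction:
    strategies that score above the population average grow."""
    payoff_h = (p_hawk * hawk_dove_payoff('H', 'H', v, c)
                + (1 - p_hawk) * hawk_dove_payoff('H', 'D', v, c))
    payoff_d = (p_hawk * hawk_dove_payoff('D', 'H', v, c)
                + (1 - p_hawk) * hawk_dove_payoff('D', 'D', v, c))
    avg = p_hawk * payoff_h + (1 - p_hawk) * payoff_d
    return min(1.0, max(0.0, p_hawk + rate * p_hawk * (payoff_h - avg)))
```

Starting from almost-all-doves or almost-all-hawks, the hawk fraction converges to v/c (2/3 with these numbers), which is exactly the "it pays to be the rarer strategy" effect described above.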

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC


That's interesting in that it now starts to involve game theory -- especially Nash equilibria. The problem is that "benefiting society" is really hard to codify without getting completely subjective.


Although I'd be the first to agree that the leap from real life to a game theoretic model is huge and fraught with peril, there are some standard ideas about what "social optimality" means.

The idea is just:
- You have n players
- Each player i will choose an action; we'll call it x[sub]i[/sub].
- Each player i gets a real-number reward, r[sub]i[/sub](x[sub]1[/sub], x[sub]2[/sub], ..., x[sub]n[/sub]).
- The "societal welfare" is the sum R(x[sub]1[/sub], x[sub]2[/sub], ..., x[sub]n[/sub]) = r[sub]1[/sub](x[sub]1[/sub], x[sub]2[/sub], ..., x[sub]n[/sub]) + ... + r[sub]n[/sub](x[sub]1[/sub], x[sub]2[/sub], ..., x[sub]n[/sub]) .
- The social optimum is the point (x[sub]1[/sub], x[sub]2[/sub], ..., x[sub]n[/sub]) that maximizes R.

The important thing is that the Nash equilibrium and the social optimum will not generally be the same point. The difference between the social welfare at the two points is called the price of anarchy (formally it's usually defined as the ratio of the two welfares, but the difference works fine for this example).

The classic example is the prisoner's dilemma. Referring to this payoff table (the first one to pop up in Google Image Search):
[Image: prisoner's dilemma payoff matrix -- Game-Theory-prisoners-dilemma.gif]
the Nash equilibrium is (Defect, Defect), which gives rewards of (2,2) and a social welfare of 4. But had the players chosen (Cooperate, Cooperate), they'd have gotten rewards of (3,3) and a social welfare of 6. So for this game the price of anarchy is 6-4 = 2.
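Those numbers are easy to check by brute force. The (3,3) and (2,2) entries below match the table quoted above; the off-diagonal (1,4)/(4,1) payoffs are an assumed standard choice (any temptation > 3 > 2 > sucker value keeps it a prisoner's dilemma):

```python
from itertools import product

# PAYOFF[(my action, opponent action)] = (my reward, opponent's reward)
PAYOFF = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (1, 4),
    ('D', 'C'): (4, 1),
    ('D', 'D'): (2, 2),
}
ACTIONS = ['C', 'D']

def is_nash(profile):
    """Nash equilibrium: no player can gain by deviating unilaterally."""
    for i in range(2):
        for alt in ACTIONS:
            dev = list(profile)
            dev[i] = alt
            if PAYOFF[tuple(dev)][i] > PAYOFF[profile][i]:
                return False
    return True

nash = [p for p in product(ACTIONS, repeat=2) if is_nash(p)]
welfare = {p: sum(PAYOFF[p]) for p in product(ACTIONS, repeat=2)}
optimum = max(welfare, key=welfare.get)
price_of_anarchy = welfare[optimum] - welfare[nash[0]]
```

Running this recovers exactly the values above: the unique Nash equilibrium is (Defect, Defect) with welfare 4, the social optimum is (Cooperate, Cooperate) with welfare 6, and the gap is 2.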

If you're a benign ruler who gets to choose the rules by which your society runs, you can tax certain actions (i.e., modify the individual reward functions) so that the Nash equilibrium becomes the social optimum. The people who figured this out got a Nobel prize in economics (although in retrospect the math doesn't seem so difficult).

The extent to which these models describe the behavior of real humans is of course debatable, and is a subject of empirical study in psychology and the social sciences.

This topic is closed to new replies.
