Emergent Intelligence

I've been reading "Swarm Intelligence", which talks about intelligent behaviour emerging from large groups of not-so-intelligent creatures. I would like to know more about this, specifically working examples and demos. The book uses ants a lot as an example. What other work is being done in AI that deviates from the "let's build a brain" process, and goes more towards "let's build a few million quasi-intelligent bots, and watch them better themselves"?
Well, if you want to find stuff on "swarming", try googling "flocking". You can find tons of algorithms and demos; I believe there are some on this site in the articles section.

Quote:Original post by Prozak
What other work is being done in AI that deviates from the "let's build a brain" process, and goes more towards "let's build a few million quasi-intelligent bots, and watch them better themselves"?


Heh, pretty much all of the work being done in AI deviates from the "let's build a brain" process. It's pretty much impossible with our current understanding of the brain to try and build one. We simply don't know enough about how the actual system works to build anything resembling the same complexity.

You can look at state machines, pathfinding, neural networks, and genetic algorithms to start. I'd also suggest sitting down and reading all the articles under the AI section on this site to give you some ideas about what's out there. I'm sure there are a number of good books suggested in the books section as well that can get you on the right track.

-me
"Emergent behavior" is really just a fancy way of saying "a coincidence that really looked cool". If you build an agent based model where each agent is thinking for itself using rules to define it's own behavior, sometimes these agents may do things that seem to be working with each other. In fact, there IS no cooperation happening - each agent is only secondarily aware of the other agents (if that), but they have started to do things "near" each other that makes them look like they are cooperating.

With a simple flocking behavior, this can be seen as each "boid" doing its own thing (i.e. the rules for moving with the world objects but not colliding with the world objects), and yet the perceived result is that they are moving "together".
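
Roughly, each boid's update might look like this (a minimal sketch in Python with invented constants, not any particular published implementation):

```python
import random

# Minimal boids sketch: each boid follows purely local rules
# (cohesion, alignment, separation); the "flock" is emergent.
# All constants here are illustrative.

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def neighbors(self, boids, radius=10.0):
        return [b for b in boids if b is not self
                and (b.x - self.x) ** 2 + (b.y - self.y) ** 2 < radius ** 2]

    def update(self, boids):
        near = self.neighbors(boids)
        if near:
            # Cohesion: drift toward the local center of mass.
            cx = sum(b.x for b in near) / len(near)
            cy = sum(b.y for b in near) / len(near)
            self.vx += 0.01 * (cx - self.x)
            self.vy += 0.01 * (cy - self.y)
            # Alignment: nudge velocity toward the neighbors' average.
            self.vx += 0.05 * (sum(b.vx for b in near) / len(near) - self.vx)
            self.vy += 0.05 * (sum(b.vy for b in near) / len(near) - self.vy)
            # Separation: back away from anyone too close.
            for b in near:
                if abs(b.x - self.x) + abs(b.y - self.y) < 3.0:
                    self.vx -= 0.05 * (b.x - self.x)
                    self.vy -= 0.05 * (b.y - self.y)
        self.x += self.vx
        self.y += self.vy

flock = [Boid() for _ in range(50)]
for frame in range(200):
    for boid in flock:
        boid.update(flock)
```

Note there is no "flock" object making decisions anywhere: the grouping you see on screen exists only in the observer's eye.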

In RTS games, there is a high degree of perceived emergent behavior. This happens when the simple rules for each agent (unit) are designed in such a way that they tend to complement other units' simple rules.

When I had a chance to ask Peter Molyneux and Will Wright about designing and testing emergent behavior, Peter talked about how horrifying it is to watch your little system working along, doing amazing things that you didn't really design into it... but all the while knowing that at any moment it could collapse in on itself!

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

First off, 'Swarm Intelligence' is NOT the same as flocking behaviours (cf. Craig Reynolds' Boids). It's about 'hive minds'.

As to Dave's comments...

Quote:Original post by InnocuousFox
"Emergent behavior" is really just a fancy way of saying "a coincidence that really looked cool". [ ... ] In fact, there IS no cooperation happening - each agent is only secondarily aware of the other agents (if that), but they have started to do things "near" each other that makes them look like they are cooperating.


This is absolutely NOT the definition of emergent behaviour or cooperative agents as used widely in the AI community, and with reference to those definitions, the above carries several errors.

I don't have the time at the moment to spell out everything that is wrong with the above... I urge anyone who wants to know the truth about emergent behaviour and/or cooperative agents to read some literature.

For the time being, all I will say is that emergent behaviour is (typically) the result of nonlinear feedback mechanisms within sub-units of an entity that aren't specifically encoded to produce these 'emergent' outputs... while 'cooperative agents' specifically take into account the state and actions of other agents when making their own decisions (so cooperation IS encoded in individual entities). Hive minds are examples of BOTH of these systems... the sub-units act individually yet display cooperation, and the hive as a whole displays emergent behaviours that aren't specifically encoded within singular sub-units.
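
To make that contrast concrete, here is a toy sketch (everything in it is invented purely for illustration): the first kind of agent never reads another agent's state, so any coordination that appears is emergent from feedback through the shared environment; the second explicitly does, so cooperation is encoded in the individual.

```python
import random

# Toy contrast of the two definitions above; everything here is
# invented for illustration. SIZE cells on a ring; "emergent" agents
# read only a shared trace field, never each other; "cooperative"
# agents read other agents' positions directly.

SIZE = 20
trace = [0.0] * SIZE  # shared field written and read by emergent agents

def emergent_step(pos):
    # Local, non-agent information only: follow the stronger trace.
    left, right = trace[(pos - 1) % SIZE], trace[(pos + 1) % SIZE]
    pos = (pos - 1) % SIZE if left > right else (pos + 1) % SIZE
    trace[pos] += 1.0  # nonlinear feedback: used paths get used more
    return pos

def cooperative_step(pos, others):
    # Cooperation encoded in the individual: consult other agents.
    if not others:
        return pos
    nearest = min(others, key=lambda o: abs(o - pos))
    return pos + (1 if nearest > pos else -1 if nearest < pos else 0)

agents = [random.randrange(SIZE) for _ in range(10)]
for _ in range(50):
    agents = [emergent_step(p) for p in agents]
print("emergent agents:", sorted(agents))

coop = [random.randrange(SIZE) for _ in range(10)]
for _ in range(50):
    coop = [cooperative_step(p, [q for q in coop if q != p]) for p in coop]
print("cooperative agents:", sorted(coop))
```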

Cheers,

Timkin
Thanks Timkin; either I didn't make myself clear or got completely misunderstood, but the thread was going off in the wrong direction...

I'm not looking for flocking behaviour, I'm looking for cooperative problem-solving behaviour, even if that problem solving "emerges" from the "hive". An example would be how ants solve "find the path between A and B that takes the least energy"...

...no one ant solves it; it's a group effort, although I don't think any one of them realizes that...
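
A toy sketch of how that works, as I understand it - pheromone trails plus evaporation, with the "answer" living in the trail map rather than in any single ant (all the constants below are made up for illustration):

```python
import random

# Toy pheromone model: two routes from A to B. No ant knows which is
# shorter; the colony still converges on it, because shorter trips
# deposit pheromone at a higher rate. All constants are made up.

route_length = [10.0, 25.0]   # route 0 is the cheaper one
pheromone = [1.0, 1.0]
EVAPORATION, N_ANTS = 0.1, 50

for step in range(100):
    deposits = [0.0, 0.0]
    for ant in range(N_ANTS):
        # Each ant chooses probabilistically by pheromone level --
        # purely local information.
        r = random.uniform(0, pheromone[0] + pheromone[1])
        choice = 0 if r < pheromone[0] else 1
        deposits[choice] += 1.0 / route_length[choice]
    for i in (0, 1):
        pheromone[i] = (1 - EVAPORATION) * pheromone[i] + deposits[i]

share = pheromone[0] / (pheromone[0] + pheromone[1])
print(f"pheromone share on the short route: {share:.2f}")  # approaches 1.0
```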

So I would like to know what other practical works, even "visual" simulations, are being worked on based on EI...
*shrug*

Well, I'm just repeating what game AI designers, programmers, and literature consider "emergent behavior" to be. Perhaps it is different in the "real world"... but seeing as this is a game AI board, I didn't bother addressing contexts other than that of game AI.


Look at particle swarm optimization.
There's at least one good article on it at CiteSeer.
It's rather interesting.

What it is designed for is stochastic finding of a single tuple (a maximum/minimum) over the reals.

I've worked with it this summer, so I can answer questions regarding it. ^_^
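
Here's a rough sketch of the standard update, in case it helps; the constants are common textbook defaults, not values from the CiteSeer article:

```python
import random

# Minimal particle swarm optimization sketch: minimize f over the
# reals. Standard inertia-weight update; W, C1, C2 are common
# textbook defaults, not tuned values.

def f(x):
    return (x - 3.0) ** 2  # toy objective, minimum at x = 3

W, C1, C2 = 0.7, 1.5, 1.5
N, STEPS = 20, 100

pos = [random.uniform(-10, 10) for _ in range(N)]
vel = [0.0] * N
pbest = pos[:]                  # each particle's best position so far
gbest = min(pos, key=f)         # the swarm's best position so far

for _ in range(STEPS):
    for i in range(N):
        r1, r2 = random.random(), random.random()
        # Velocity blends momentum, pull toward the personal best,
        # and pull toward the swarm's global best.
        vel[i] = (W * vel[i]
                  + C1 * r1 * (pbest[i] - pos[i])
                  + C2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
            if f(pos[i]) < f(gbest):
                gbest = pos[i]

print(f"best x found: {gbest:.4f}")  # should be close to 3.0
```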
~V'lionBugle4d
Quote:Original post by InnocuousFox
*shrug*

Well, I'm just repeating what game AI designers, programmers, and literature consider "emergent behavior" to be.


Could you provide some references, Dave, or perhaps identify these people? Either these people are wrong and are propagating erroneous information (in which case they should be corrected before they set the industry back another 5 years) or you have misunderstood what they were saying/writing. All I can say is that I've never heard a game designer or programmer talk about emergent behaviour in the way you have, nor have I seen game AI literature that refers to it as coincidence involving no contextual awareness.

Cheers,

Timkin
Quote:Original post by Timkin
All I can say is that I've never heard a game designer or programmer talk about emergent behaviour in the way you have, nor have I seen game AI literature that refers to it as coincidence involving no contextual awareness.
Perhaps in my being flippant and terse, I did not express myself fully and/or overcompensated. However, since we have determined that this is not the goal of the OP, it is not appropriate for me to detail the differences here.


Prozak: Have you read anything about neural networks or genetic algorithms? A lot of the time those can get some emergent behavior going on. NNs and GAs evolve in a way that solves their problems as well as possible.

EDIT: Here's an example of sort-of emergent behavior I got once. I made a program that pitted two armies of triangles against each other. The rules were: if you killed a guy on your own army you lost some points, and if you killed a guy on the other army you gained some points. Every few minutes, I stopped the battle, and the guys with the best scores moved on and "bred" a new army. Then I would start over. Well, what I expected was that these armies would fight, but that's not what happened. At the beginning of each battle, the triangles would always shoot all of their bullets right away and never move. Obviously, since my triangles were lined up in rows and columns, a lot of friendly fire was happening. Well, after a few generations, my armies learned to shoot all of their ammo right away but not kill each other. Right when the battle started they would all turn to a precise angle such that they could shoot everything and not kill anything. It was pretty neat, even though they weren't doing exactly what I had wanted them to do.
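
For anyone curious, the selection/breeding loop in an experiment like that boils down to something like this bare-bones sketch (the genome layout and fitness function here are placeholders, not my actual triangle code):

```python
import random

# Bare-bones GA loop of the kind described above: score everyone,
# keep the top performers, breed replacements with crossover and
# mutation. The genome (a list of floats, e.g. firing angles) and
# the fitness function are placeholders, not the actual experiment.

GENOME_LEN, POP, KEEP, MUT_RATE = 8, 30, 10, 0.1

def fitness(genome):
    # Placeholder stand-in for "points for kills minus friendly fire".
    return -sum((g - 1.0) ** 2 for g in genome)

def breed(a, b):
    # One-point crossover plus occasional Gaussian mutation.
    cut = random.randrange(GENOME_LEN)
    child = a[:cut] + b[cut:]
    return [g + random.gauss(0, 0.2) if random.random() < MUT_RATE else g
            for g in child]

population = [[random.uniform(-2, 2) for _ in range(GENOME_LEN)]
              for _ in range(POP)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:KEEP]
    population = survivors + [breed(random.choice(survivors),
                                    random.choice(survivors))
                              for _ in range(POP - KEEP)]

best = max(population, key=fitness)
print(f"best fitness after 50 generations: {fitness(best):.3f}")
```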
