Robots Evolve And Learn How to Lie



#1 JavaMava   Members   -  Reputation: 190


Posted 25 January 2008 - 01:30 PM

Here's the link: http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie and here's the text:
Quote:
Robots can evolve to communicate with each other, to help, and even to deceive each other, according to Dario Floreano of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology. Floreano and his colleagues outfitted robots with light sensors, rings of blue light, and wheels and placed them in habitats furnished with glowing “food sources” and patches of “poison” that recharged or drained their batteries. Their neural circuitry was programmed with just 30 “genes,” elements of software code that determined how much they sensed light and how they responded when they did. The robots were initially programmed both to light up randomly and to move randomly when they sensed light. To create the next generation of robots, Floreano recombined the genes of those that proved fittest—those that had managed to get the biggest charge out of the food source. The resulting code (with a little mutation added in the form of a random change) was downloaded into the robots to make what were, in essence, offspring. Then they were released into their artificial habitat. “We set up a situation common in nature—foraging with uncertainty,” Floreano says. “You have to find food, but you don’t know what food is; if you eat poison, you die.” Four different types of colonies of robots were allowed to eat, reproduce, and expire. By the 50th generation, the robots had learned to communicate—lighting up, in three out of four colonies, to alert the others when they’d found food or poison. The fourth colony sometimes evolved “cheater” robots instead, which would light up to tell the others that the poison was food, while they themselves rolled over to the food source and chowed down without emitting so much as a blink. Some robots, though, were veritable heroes. They signaled danger and died to save other robots. “Sometimes,” Floreano says, “you see that in nature—an animal that emits a cry when it sees a predator; it gets eaten, and the others get away—but I never expected to see this in robots.”
My question is: how would one even begin to program something like that? The article makes it sound like it's a well-known technique that can be put into place with varying parameters, using "evolution" to "grow" more intelligent AI. How is it done? How much could they learn? With a series of blinks could a language develop? I need to know more about this, mostly how to program AI like this.


#2 Trapper Zoid   Crossbones+   -  Reputation: 1370


Posted 25 January 2008 - 02:10 PM

From the language used in the article I'd wager it's some kind of genetic algorithm. Genetic algorithms are a method for exploring a range of possible solutions using a process modelled on evolution. You have a pool of possible solutions and a method for ranking them according to their fitness for the task. Then you "crossover" solutions by combining their simulated genes, with a bit of mutation thrown in to explore other possibilities. The Wikipedia site goes into more detail.

The tricky part of any genetic algorithm is that you need to figure out two things first: a way of representing the solution in the form of a genetic code, and a fitness function for ranking candidates in order. For these robots, it's suggested that the genetic code maps to functions for responding to light, but it can be anything you like as long as it works with the crossover and mutation steps.

One of the downsides of genetic algorithms is that you may end up with a big population of duds, such as, in this case, robots that don't do anything except sit there and "die". This happens a lot if you don't set good rules for what your genetic code is and what the fitness function does. In theory, if you have mutation you'll eventually get a better solution, but "eventually" can mean a very long time. From my dabbling with simple genetic algorithms, I found it takes a lot of intuition to choose a good genetic representation and to design a fitness function that gives a good range of scores.
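
For anyone curious how this looks in practice, here is a minimal genetic algorithm sketch in Python along the lines described above: a toy bit-string genome, single-point crossover, and mutation. The genome length, population size, and "count the 1s" fitness function are all invented for illustration; a real setup would instead score a simulated robot controller on how much "food" it gathered.

import random

GENOME_LEN = 30          # loosely mirroring the article's "30 genes"
POP_SIZE = 50
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Toy fitness: reward genomes with many 1s. A real fitness function
    # would score simulated foraging behaviour instead.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover: splice the two parents' genes together.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each gene with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def next_generation(population):
    # Rank by fitness and breed the next generation from the fitter half.
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]
    children = []
    while len(children) < POP_SIZE:
        a, b = random.sample(parents, 2)
        children.append(mutate(crossover(a, b)))
    return children

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(50):
    population = next_generation(population)
print("best fitness after 50 generations:", max(map(fitness, population)))

The two pieces flagged as tricky above are exactly the parts made up here: the genome representation and the fitness function.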

#3 steven katic   Members   -  Reputation: 274


Posted 25 January 2008 - 03:44 PM

Quote:

With a series of blinks could a language develop?


yeah only if the programmer's programmed for it?

Quote:

I need to know more about, mostly how to program AI like this.


I find it a fascinating research area, not so much because of what little has been achieved in it by way of "intelligence", or how little I know about it, but because of what it aspires to achieve. The title(s) say it all:

evolutionary robotics, AI, artificial evolution.....

Even the language in the article title is so provocative and as anthropomorphically misleading as ever to the layman:

"robots evolve and learn how to lie"

What does that mean? The robot has a switch statement that gets switched based on some highly contrived and complex trigger modelled on our neural networks?

I am sure it means something else to the AI expert.

And my goodness, a robot that lies! A bit scary: I wouldn't buy a robot if I couldn't be sure it would not lie to me. I have more than enough (human) liars around me already, including myself (now why did I have to complicate things like that?).

Great title: robots evolve and learn how to lie.
Certainly enhances the potential attraction of future funding.

Obviously, I am not an AI expert.

From the outside I may appear to be an ignorant cynic, but the robot might see me more respectably as the devil's advocate. Problem is: will the robot be lying or not?

Quote:

I need to know more about, mostly how to program AI like this.


enjoy the research

#4 iMalc   Crossbones+   -  Reputation: 2306


Posted 25 January 2008 - 04:09 PM

For a genetic algorithm, I'd say it all comes down to the fitness function. Presumably the fitness of each colony was based on how much "food" it consumed. If causing robots from other colonies to die means that they get a greater share of the food, then that would score well in the fitness function, so it makes sense that such an ability would be likely to evolve.

Furthermore, if the fitness function for an individual includes the fitness of the group, then I would fully expect the self-sacrificing behaviour to emerge as well, especially if group fitness is treated as more important than individual fitness.
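
As a rough sketch of that idea, a fitness function could blend an individual's own score with its colony's total, weighting the group more heavily; the function name and weighting below are made up for illustration, not taken from the study.

def combined_fitness(individual_energy, colony_energy, group_weight=0.75):
    # With group_weight above 0.5 the colony's total matters more than the
    # individual's own energy, so a robot that dies while warning others can
    # still score well if its warning boosts the colony as a whole.
    return (1.0 - group_weight) * individual_energy + group_weight * colony_energy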

#5 Hollower   Members   -  Reputation: 672


Posted 25 January 2008 - 04:24 PM

You might like DarwinBots. You can see some interesting behaviors emerge.

Of course, steven's cynicism isn't unwarranted. The media always plays to the public's imagination, suggesting that the kinds of AI seen in Hollywood movies are really happening in some lab somewhere. But there's no "intent" in these bots. They don't use lights to guide, warn, or deceive the other bots. They don't even know there are other bots. They flash their lights because the genes say to flash when such-and-such happens. It's "instinct". If the genes for that behavior persist, it's because they worked for the previous generation.
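
As a small sketch of that point, the lamp can be driven directly by the genome with no notion of other robots at all; the gene names below are invented purely for illustration.

def lamp_on(genes, senses_food, senses_poison):
    # Decide whether to light up, purely from the genome and local senses.
    if senses_food:
        return genes["flash_at_food"]
    if senses_poison:
        return genes["flash_at_poison"]
    return False

# Whether a given gene setting ends up looking "helpful" or "deceptive"
# depends entirely on how other robots react to the light; nothing in this
# function knows that other robots exist.
genome = {"flash_at_food": True, "flash_at_poison": False}
print(lamp_on(genome, senses_food=True, senses_poison=False))  # True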



#6 WeirdoFu   Members   -  Reputation: 205


Posted 26 January 2008 - 02:58 AM

No one ever said that robots couldn't lie in the first place. Assuming that robots are no more than intelligent agents, then based on the definition of an intelligent agent, it wouldn't be surprising if an agent could lie. Simply put, an intelligent agent is one that perceives its environment and acts on its perceptions to reach a specific goal. So, if reaching the goal requires misdirection or actively hiding information from other agents, then it will, in essence, lie, or state the truth that it wants others to believe. Conceptually (theoretically), it is as simple as that. Implementation, of course, is a different matter.
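
As a toy illustration of that definition, an agent that simply picks whichever signal maximizes its own expected payoff will "lie" whenever a false signal pays better; the function and the payoff table below are hypothetical, not from the study.

def choose_signal(found_food, payoff):
    # Pick the signal that maximizes the agent's own expected payoff.
    # Nothing here encodes honesty: if claiming "poison" at a food source
    # keeps competitors away and so pays better, that is what gets chosen.
    state = "food" if found_food else "poison"
    return max(("food", "poison", "silent"), key=lambda s: payoff(state, s))

# Hypothetical payoff model where a false "poison" signal at a food source pays best.
table = {("food", "food"): 1.0, ("food", "poison"): 2.0, ("food", "silent"): 1.5,
         ("poison", "food"): 0.5, ("poison", "poison"): 0.2, ("poison", "silent"): 0.1}
print(choose_signal(True, lambda state, signal: table[(state, signal)]))  # -> "poison"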

#7 Sneftel   Senior Moderators   -  Reputation: 1781


Posted 26 January 2008 - 03:32 AM

Exactly. So some bots flash in response to food. Others flash in response to poison. Are these just simple linkages, the inevitable outcome of a tiny genome designed to allow a specific set of behaviors? Mercy, no! The ones that flash in response to food are HEROES, valiantly giving their lives for the strength of the colony. And the ones that flash in response to poison? Eeevil liars, conniving to put themselves in power.

I always hate these articles, because I think they're bad for the field. A situation like this might use genetic algorithms and whatnot, but it's basically just a hill-climbing algorithm. The researchers set up a situation where different behaviors have different degrees of optimality, sic their NNs and their GAs on them, and then act astounded for the reporters when the system converges on the optimum. The fact that a BFS might have produced the same or better results in 0.3 milliseconds? Not sexy enough. The result is that people think of AI as an attempt to ape the vagaries of human behavior, that if we can just program a system to like the same foods as we do it'll somehow become as smart as we are. It's regressive, pandering, and a waste of time and resources.
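
For comparison, here is roughly what a plain hill climber looks like on the same kind of bit-string genome sketched earlier: repeatedly flip one gene and keep the change whenever fitness doesn't drop. The details are illustrative only, not a claim about what the researchers actually ran.

import random

def hill_climb(genome, fitness, steps=1000):
    # Greedy hill climbing: try single-gene flips, keep any that don't hurt.
    best = list(genome)
    best_score = fitness(best)
    for _ in range(steps):
        candidate = list(best)
        i = random.randrange(len(candidate))
        candidate[i] = 1 - candidate[i]          # flip one gene
        score = fitness(candidate)
        if score >= best_score:                  # accept equal or better moves
            best, best_score = candidate, score
    return best, best_score

# Example: climb from a random 30-bit genome under a toy "count the 1s" fitness.
start = [random.randint(0, 1) for _ in range(30)]
best, score = hill_climb(start, fitness=lambda g: sum(g))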

#8 ToohrVyk   Members   -  Reputation: 1591


Posted 26 January 2008 - 03:51 AM

Fun. I think this kind of thing has existed in software form for at least a decade, but it's fun to see it applied to real-world robots. Nothing really interesting or new, though.

Quote:
Original post by Sneftel
The fact that a BFS might have produced the same or better results in 0.3 milliseconds? Not sexy enough.


When you support BFS, you support creationism.



#9 IADaveMark   Moderators   -  Reputation: 2472


Posted 26 January 2008 - 03:58 AM

The thing I find fascinating here is that they used actual robots with only 30 genes instead of simply using software agents with a more complex environment and 100 genes. By using robotics, with the kinds of physical sensors and locomotion techniques you are limited to in that arena, they trimmed down the potential for research on the GAs. In fact, they even lengthened the iteration time by not being able to "speed up the world." Put it into a simple 2D world with software agents and they could have had 50 generations in minutes.

The question is, were they doing robotics research (i.e. physical) or GA research (i.e. mental)?
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#10 Sneftel   Senior Moderators   -  Reputation: 1781


Posted 26 January 2008 - 09:19 AM

Quote:
Original post by InnocuousFox
The question is, were they doing robotics research (i.e. physical) or GA research (i.e. mental)?

Grant application research (i.e. fiscal).

#11 IADaveMark   Moderators   -  Reputation: 2472


Posted 26 January 2008 - 09:50 AM

Hell... just go out and buy the Creatures series. This is hardly anything new, people.

#12 alexjc   Members   -  Reputation: 450


Posted 26 January 2008 - 10:00 AM

I found some papers on the subject and posted them in my weekly roundup:

http://aigamedev.com/links/2008-week-4

Alex

#13 ibebrett   Members   -  Reputation: 205


Posted 26 January 2008 - 10:19 AM

I think the more interesting point is how the "prisoner's dilemma" situation works. Is it better for the robots to have a strategy that's only good for them, or a strategy that helps everyone? Obviously a balance is usually reached, but I think an interesting point in this article is how quickly this shows up even in the smallest-scale examples.

#14 Timkin   Members   -  Reputation: 864


Posted 28 January 2008 - 01:21 PM

I too become annoyed when I see reports such as this one. Usually though I see this sort of stuff when I'm reviewing conference and journal papers prior to publication... researchers that claim big outcomes that are, in actuality, just a re-application of known results in a new domain.

GAs are a class of algorithms for solving blind search problems, such as certain optimisation problems. Their convergence properties are well known. They have been applied to problems in evolving populations of agents many times over (even in hardware). Nothing new here.

That the bots in this story evolved unexpected behaviours is not a clear indication of outstanding capabilities of these algorithms, but rather an indictment of the lack of clear thought applied by the researchers as to what they would expect to see. There have been innumerable papers published on evolving both socially beneficial and individually beneficial agent behaviours in populations... it all comes down to what you choose as your objective function and what capabilities the agents have to sense each other and their environment.

One final comment... steven katic wrote:
Quote:


With a series of blinks could a language develop?

yeah only if the programmer's programmed for it?


If by this you mean that the programmer has to program in a language with which to communicate, then the above statement is not true. Co-evolution of language in bots (software and hardware) has been studied and shown to be possible.

An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist, for those who don't know) belief that grammar is hard-coded in the human brain (and cannot be learned): an internet-based research project involving bots around the globe, which used video cameras to see a table top in front of them, learned to communicate about the items on the desktop. The bots were able to form their own terms for the items in front of them and, in communicating with each other, had to come to an agreement on terms. Individual dialects developed as the bots interacted with each other and formed population niches... and the bots even learned a basic grammatical structure which they shared with new bots (presumably making it easier for new bots to learn how to converse with existing bots).

So from this research I would say that what is needed is a communications channel (a way of sending and receiving symbols) and an underlying method of adjusting the syntax and semantics of internal symbols based on what is sent and received on the channel.

Anyway, I digress...

Cheers,

Timkin
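
The experiment Timkin describes sounds like a "naming game" style setup, where agents invent words for shared objects and converge on a common vocabulary through repeated interactions. Here is a minimal sketch of that idea; the object names, word-invention rule, and population size are invented for illustration and are not the actual project's code.

import random
import string

OBJECTS = ["cup", "ball", "block"]   # stand-ins for items on the table top

def new_word():
    return "".join(random.choices(string.ascii_lowercase, k=4))

class Agent:
    def __init__(self):
        # For each object, the set of words this agent associates with it.
        self.lexicon = {obj: set() for obj in OBJECTS}

    def speak(self, obj):
        if not self.lexicon[obj]:
            self.lexicon[obj].add(new_word())     # invent a term if needed
        return random.choice(sorted(self.lexicon[obj]))

def naming_game_round(speaker, hearer):
    obj = random.choice(OBJECTS)
    word = speaker.speak(obj)
    if word in hearer.lexicon[obj]:
        # Success: both agents prune their vocabularies to the agreed word.
        speaker.lexicon[obj] = {word}
        hearer.lexicon[obj] = {word}
        return True
    hearer.lexicon[obj].add(word)                 # failure: hearer adopts the word
    return False

agents = [Agent() for _ in range(10)]
for _ in range(5000):
    speaker, hearer = random.sample(agents, 2)
    naming_game_round(speaker, hearer)
# After enough rounds the population tends to converge on one shared word per object.

This captures only the "agree on terms" part of what Timkin describes; grammar and dialect formation would need considerably more machinery.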

#15 Sneftel   Senior Moderators   -  Reputation: 1781


Posted 28 January 2008 - 02:05 PM

Quote:
Original post by Timkin
An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist for those that don't know) beliefs that grammar is hard-coded in the human brain (and cannot be learned)...

Huh... very cool. Link?

#16 steven katic   Members   -  Reputation: 274


Posted 28 January 2008 - 03:02 PM

Quote:

One final comment... steven katic wrote:

Quote:

With a series of blinks could a language develop?

yeah only if the programmer's programmed for it?

and:

If by this you mean that the programmer has to program in a language with which to communicate......


The continued explanation you provide is not what I meant: that's too hard... although interesting. :)

My response was much more simple, in keeping with my ignorant cynic theme:

So what did I mean by "yeah if the programmer's programmed for it?".

I was trying to imply the following generalization:

"there is no intelligence in Artificial Intelligence, and that any
signs of intelligence that ever becomes interesting to discuss
(such as developing a language with a series of blinks) must
orginate from the human creators of the software/system(s)/experiment."

Apart from sounding a little too obvious, the huge flaw in that statement is that I gave no definition of intelligence! Will I need to?

But, hopefully, the generalization does clarify the point.
(If it doesn't, I'm sure you can safely ignore it if you wish)






#17 Roboguy   Members   -  Reputation: 794


Posted 28 January 2008 - 04:06 PM

Quote:
Original post by Sneftel
Quote:
Original post by Timkin
An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist for those that don't know) beliefs that grammar is hard-coded in the human brain (and cannot be learned)...

Huh... very cool. Link?


Indeed. I'd be interested in the link as well. Although, haven't there been other experiments that have gone against that theory as well, or just other theories?

#18 owl   Banned   -  Reputation: 364


Posted 28 January 2008 - 04:38 PM

So, @Timkin, you say that with a set of functions like:
go_left()
go_right()
go_forward()
go_backwards()
light_on()
light_off()
sense_light()
eat()

these robots can evolve a language between them? (Probably some color measurement and some timers would be required too.)

#19 Kylotan   Moderators   -  Reputation: 3338


Posted 29 January 2008 - 03:52 AM

Hmm, I quite like this sort of article, but that's because instead of looking at how it potentially overhypes the robot/AI work, I look at the implications for psychology. I think it's very interesting to think that the ideas of altruism and deception, so often wrapped up in discussions of ethics and morals, can be viewed as simply specialised optimisations when placed into a wider population.

#20 makar   Members   -  Reputation: 122


Posted 29 January 2008 - 08:41 AM

Well, I think the concept of lying to achieve some gain is actually a very likely behaviour to emerge from any learning machine. A child will learn from a very early age that lying can be beneficial.

adult: 'did you make this mess?'
child: 'yes'

*smack*

This action/response gives a negative result, and so the child will try something different next time:

adult: 'did you make this mess?'
child: .... 'no'
adult: 'hmmm ok, nevermind'

I think most learning methods would eventually learn to lie, if for nothing more than to try to avoid negative reactions.
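
A tiny reward-driven sketch of the scenario above: a learner that is punished for admitting and usually not punished for denying will converge on denying. The reward numbers and the epsilon-greedy running-average update are made up for illustration.

import random

ACTIONS = ["admit", "deny"]
values = {a: 0.0 for a in ACTIONS}       # estimated payoff of each reply
counts = {a: 0 for a in ACTIONS}

def reward(action):
    # Toy model of the adult/child exchange: admitting always earns a smack (-1),
    # denying usually avoids one, with a small chance of being caught out.
    if action == "admit":
        return -1.0
    return -1.0 if random.random() < 0.1 else 0.0

for trial in range(1000):
    # Epsilon-greedy: mostly pick the best-looking reply, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: values[a])
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]   # running average

print(values)   # "deny" ends up with the higher estimated value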





