
Robots Evolve And Learn How to Lie

JavaMava    190
Here's the link: http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie and here's the text:
Quote:
Robots can evolve to communicate with each other, to help, and even to deceive each other, according to Dario Floreano of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology. Floreano and his colleagues outfitted robots with light sensors, rings of blue light, and wheels and placed them in habitats furnished with glowing “food sources” and patches of “poison” that recharged or drained their batteries. Their neural circuitry was programmed with just 30 “genes,” elements of software code that determined how much they sensed light and how they responded when they did. The robots were initially programmed both to light up randomly and to move randomly when they sensed light. To create the next generation of robots, Floreano recombined the genes of those that proved fittest—those that had managed to get the biggest charge out of the food source. The resulting code (with a little mutation added in the form of a random change) was downloaded into the robots to make what were, in essence, offspring. Then they were released into their artificial habitat. “We set up a situation common in nature—foraging with uncertainty,” Floreano says. “You have to find food, but you don’t know what food is; if you eat poison, you die.” Four different types of colonies of robots were allowed to eat, reproduce, and expire. By the 50th generation, the robots had learned to communicate—lighting up, in three out of four colonies, to alert the others when they’d found food or poison. The fourth colony sometimes evolved “cheater” robots instead, which would light up to tell the others that the poison was food, while they themselves rolled over to the food source and chowed down without emitting so much as a blink. Some robots, though, were veritable heroes. They signaled danger and died to save other robots. “Sometimes,” Floreano says, “you see that in nature—an animal that emits a cry when it sees a predator; it gets eaten, and the others get away—but I never expected to see this in robots.”
My question is: how would one even begin to program something like that? The article makes it sound like it's a well-known technique that can be put into place with varying parameters, using "evolution" to "grow" more intelligent AI. How is it done? How much could they learn? With a series of blinks could a language develop? I need to know more about it, mostly how to program AI like this.

Trapper Zoid    1370
From the language used in the article I'd wager it's some kind of genetic algorithm. Genetic algorithms are a method for exploring a range of possible solutions using a process modelled on evolution. You have a pool of possible solutions and a method for ranking them according to their fitness for the task. Then you "crossover" solutions by combining their simulated genes, with a bit of mutation thrown in to explore other possibilities. The Wikipedia page goes into more detail.

The tricky part of any genetic algorithm is that you need to figure out two things first: a way of representing the solution in the form of a genetic code, and a fitness function for ranking candidates. For these robots, it's suggested that the genetic code maps to functions for responding to light, but it can be anything you like as long as it works with the crossover and mutation steps.

One of the downsides of genetic algorithms is that you may end up with a big population of duds, such as, in this case, robots that don't do anything except sit there and "die". This happens a lot if you don't choose a good genetic representation and fitness function. In theory, if you have mutation you'll eventually get a better solution, but "eventually" can mean a very long time. From my dabbling with simple genetic algorithms, I found it takes a lot of intuition to choose a good genetic representation and to design a fitness function that gives a good range of scores.
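To make that concrete, here is a minimal sketch of the kind of loop described above. It is not the researchers' code: the genome (a list of floats standing in for the 30 "genes"), the toy fitness function, and all the parameter values are assumptions chosen purely for illustration.

```python
import random

GENOME_LEN = 30       # the article mentions 30 "genes"; here just floats in [0, 1]
POP_SIZE = 40
MUTATION_RATE = 0.05

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def fitness(genome):
    # Placeholder fitness: in the real experiment this would be the net
    # battery charge a robot collected while foraging. Here we just reward
    # genomes whose genes sum close to an arbitrary target value.
    return -abs(sum(genome) - 15.0)

def crossover(mum, dad):
    # Single-point crossover: the child takes a prefix from one parent
    # and the rest from the other.
    point = random.randrange(1, GENOME_LEN)
    return mum[:point] + dad[point:]

def mutate(genome):
    # Each gene has a small chance of being replaced by a random value.
    return [random.random() if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(50):                      # "by the 50th generation..."
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 4]              # keep the fittest quarter
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))        # best score after 50 generations
```

All the interesting work lives in the fitness function and the genome encoding; swap in something like "net energy collected in a simulated arena" and you have the foraging setup the article describes.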

steven katic    275
Quote:

With a series of blinks could a language develop?


yeah only if the programmer's programmed for it?

Quote:

I need to know more about, mostly how to program AI like this.


I find it a fascinating research area, not so much because of what little has been achieved in it by way of "intelligence" or how little I know about it, but because of what it aspires to achieve. The titles say it all:

evolutionary robotics, AI, artificial evolution.....

Even the language of the article title is provocative and as anthropomorphically misleading to the layman as ever:

"robots evolve and learn how to lie"

What does that mean? The robot has a switch statement that gets switched based on some highly contrived and complex trigger modelled on our neural networks?

I am sure it means something else to the AI expert.

And my goodness, a robot that lies! A bit scary: I wouldn't buy a robot if I couldn't be sure it would not lie to me. I have more than enough (human) liars around me already, including myself. (Now why did I have to complicate things like that?)

Great title: robots evolve and learn how to lie.
Certainly enhances the potential attraction of future funding.

Obviously, I am not an AI expert.

From the outside I may appear to be an ignorant cynic, but the robot might see me more respectably as the devil's advocate? The problem is: will the robot be lying or not?

Quote:

I need to know more about, mostly how to program AI like this.


enjoy the research

iMalc    2466
For a genetic algorithm I'd say it all comes down to the fitness function. Presumably the fitness of each colony was based on how much "food" each colony consumed. If causing robots from other colonies to die means they get a greater share of the food, then that would score well in the fitness function, hence it makes sense that such an ability would be likely to evolve.

Furthermore, if the fitness function for an individual includes the fitness result of the group, then I would fully expect the self-sacrificing behaviour to emerge as well, especially if group fitness is treated as more important than individual fitness.
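As a hedged sketch of that idea (the weighting and the energy numbers here are invented, not taken from the experiment), a fitness function might blend the two like this:

```python
def blended_fitness(individual_energy, colony_energies, group_weight=0.7):
    """Score an individual partly by its own foraging success and partly by
    how well its whole colony did. With group_weight > 0.5 the group term
    dominates, which is the regime where self-sacrificing signallers can
    still be selected for."""
    group_score = sum(colony_energies) / len(colony_energies)
    return (1 - group_weight) * individual_energy + group_weight * group_score

# A robot that died warning others (energy 0) in a well-fed colony can
# outscore a selfish survivor in a starving one:
print(blended_fitness(0,  [80, 90, 85, 0]))    # hero in a thriving colony
print(blended_fitness(60, [10, 5, 12, 60]))    # loner in a failing colony
```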

Hollower    685
You might like DarwinBots. You can see some interesting behaviors emerge.

Of course, steven's cynicism isn't unwarranted. The media always plays to the public's imagination, suggesting that the kinds of AI seen in Hollywood movies are really happening in some lab somewhere. But there's no "intent" in these bots. They don't use lights to guide, warn, or deceive the other bots. They don't even know there are other bots. They flash their lights because the genes say to flash when such-and-such happens. It's "instinct". If the genes for that behavior remain that way, it's because it worked for the previous generation.

WeirdoFu    205
No one ever said that robots couldn't lie in the first place. Assuming that robots are no more than intelligent agents, then based on the definition of an intelligent agent, it wouldn't be surprising if an agent could lie. Simply put, an intelligent agent is one that perceives its environment and acts on its perceptions to reach a specific goal. So, if reaching the goal requires misdirection or actively hiding information from other agents, then it will, in essence, lie, or state the "truth" it wants others to believe. Conceptually (theoretically), it is as simple as that. Implementation, of course, is quite another matter.
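As a toy illustration of that definition (entirely made up, not from the article): an agent that simply picks whichever signal maximises its expected progress toward its goal will "lie" whenever deception has the higher expected payoff.

```python
def choose_signal(percept, expected_payoff):
    """Pick the signal (truthful or not) with the best expected payoff.
    expected_payoff maps (percept, signal) -> estimated reward; there is
    no notion of honesty anywhere, only goal-seeking."""
    candidates = ["signal_food", "signal_poison", "stay_silent"]
    return max(candidates, key=lambda s: expected_payoff[(percept, s)])

# If falsely signalling "food" at a poison patch lures competitors away
# from the real food, the goal-seeking agent will do exactly that:
payoffs = {
    ("at_poison", "signal_food"):   5.0,   # rivals waste time here
    ("at_poison", "signal_poison"): 1.0,
    ("at_poison", "stay_silent"):   2.0,
}
print(choose_signal("at_poison", payoffs))   # -> "signal_food"
```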

Sneftel    1788
Exactly. So some bots flash in response to food. Others flash in response to poison. Are these just simple linkages, the inevitable outcome of a tiny genome designed to allow a specific set of behaviors? Mercy, no! The ones that flash in response to food are HEROES, valiantly giving their lives for the strength of the colony. And the ones that flash in response to poison? Eeevil liars, conniving to put themselves in power.

I always hate these articles, because I think they're bad for the field. A situation like this might use genetic algorithms and whatnot, but it's basically just a hill-climbing algorithm. The researchers set up a situation where different behaviors have different degrees of optimality, sic their NNs and their GAs on them, and then act astounded for the reporters when the system converges on the optimum. The fact that a BFS might have produced the same or better results in 0.3 milliseconds? Not sexy enough. The result is that people think of AI as an attempt to ape the vagaries of human behavior, that if we can just program a system to like the same foods as we do it'll somehow become as smart as we are. It's regressive, pandering, and a waste of time and resources.

ToohrVyk    1595
Fun. I think this kind of thing has existed in software form for at least a decade, but it's fun to see it applied using real-world robots. Nothing really interesting or new, though.

Quote:
Original post by Sneftel
The fact that a BFS might have produced the same or better results in 0.3 milliseconds? Not sexy enough.


When you support BFS, you support creationism.

IADaveMark    3731
The thing I find fascinating here is that they used actual robots with only 30 genes instead of simply using software agents with a more complex environment and 100 genes. By using robotics, and the type of physical sensors and locomotion techniques you are limited to in that arena, they trimmed down the potential for research on the GAs. In fact, they even lengthened the iteration time by not being able to "speed up the world." Put it into a simple 2D world with software agents and they could have had 50 generations in minutes.

The question is, were they doing robotics research (i.e. physical) or GA research (i.e. mental)?

Sneftel    1788
Quote:
Original post by InnocuousFox
The question is, were they doing robotics research (i.e. physical) or GA research (i.e. mental)?

Grant application research (i.e. fiscal).

ibebrett    205
I think the more interesting point is how the "prisoner's dilemma" situation works out. Is it better for the robots to have a strategy that's only good for them, or a strategy that helps everyone? Obviously a balance is usually reached, but I think an interesting point in this article is how quickly this shows up even in the smallest-scale examples.
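For reference, here is a minimal sketch of that tension, with invented payoff numbers arranged in the classic prisoner's dilemma shape: misleading or hoarding ("defect") beats honest signalling ("cooperate") against either opponent, yet mutual cooperation beats mutual defection.

```python
# Payoff to the row player for each pair of strategies. The numbers are
# made up, but they keep the prisoner's-dilemma ordering:
#   temptation > reward > punishment > sucker.
PAYOFF = {
    ("cooperate", "cooperate"): 3,   # both signal food honestly
    ("cooperate", "defect"):    0,   # I signal honestly, you mislead me
    ("defect",    "cooperate"): 5,   # I mislead you and hog the charger
    ("defect",    "defect"):    1,   # nobody can trust anybody's blinks
}

for me in ("cooperate", "defect"):
    for you in ("cooperate", "defect"):
        print(me, "vs", you, "->", PAYOFF[(me, you)])
```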

Timkin    864
I too become annoyed when I see reports such as this one. Usually, though, I see this sort of stuff when I'm reviewing conference and journal papers prior to publication... researchers claiming big outcomes that are, in actuality, just a re-application of known results in a new domain.

GAs are a class of algorithms for solving blind search problems, such as certain optimisation problems. Their convergence properties are well known. They have been applied to problems in evolving populations of agents many times over (even in hardware). Nothing new here.

That the bots in this story evolved unexpected behaviours is not a clear indication of the outstanding capabilities of these algorithms, but rather an indictment of the lack of clear thought applied by the researchers as to what they should have expected to see. There have been innumerable papers published on evolving both socially beneficial and individually beneficial agent behaviours in populations... it all comes down to what you choose as your objective function and what capabilities the agents have to sense each other and their environment.

One final comment... steven katic wrote:
Quote:


With a series of blinks could a language develop?

yeah only if the programmer's programmed for it?


If by this you mean that the programmer has to program in a language with which to communicate, then the above statement is not true. Co-evolution of language in bots (software and hardware) has been studied and shown to be possible.

An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist for those that don't know) beliefs that grammar is hard-coded in the human brain (and cannot be learned)... an internet-based research project involving bots around the globe that used video cameras to see a tabletop in front of them learned to communicate about the items on the desktop. The bots were able to form their own terms for the items in front of them and, in communicating with each other, had to develop an agreement on terms. Individual dialects developed as the bots interacted with each other and formed population niches... and the bots even learned a basic grammatical structure which they shared with new bots (which presumably made it easier for new bots to learn how to converse with existing bots).

So from this research I would say that what is needed is a communications channel (a way of sending and receiving symbols) and an underlying method of adjusting the syntax and semantics of internal symbols based on what is sent and received on the channel.
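A very rough sketch of that kind of mechanism is a "naming game" over a shared channel; the objects, the word-invention scheme, and the adoption rule below are simplifications for illustration only, not the actual project's protocol.

```python
import random

OBJECTS = ["cup", "ball", "lamp"]          # items both bots can observe

def new_word():
    return "".join(random.choice("aeioubdkmt") for _ in range(4))

class Bot:
    def __init__(self):
        self.lexicon = {}                  # object -> preferred word

    def speak(self, obj):
        if obj not in self.lexicon:        # invent a term if we have none
            self.lexicon[obj] = new_word()
        return self.lexicon[obj]

    def hear(self, obj, word):
        # Naive alignment rule: adopt the speaker's term for that object.
        self.lexicon[obj] = word

bots = [Bot() for _ in range(5)]
for _ in range(200):                       # repeated pairwise interactions
    speaker, hearer = random.sample(bots, 2)
    obj = random.choice(OBJECTS)
    hearer.hear(obj, speaker.speak(obj))

# After enough interactions the population tends to converge on shared terms.
for obj in OBJECTS:
    print(obj, {b.lexicon.get(obj) for b in bots})
```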

Anyway, I digress...

Cheers,

Timkin

Sneftel    1788
Quote:
Original post by Timkin
An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist for those that don't know) beliefs that grammar is hard-coded in the human brain (and cannot be learned)...

Huh... very cool. Link?

steven katic    275
Quote:

One final comment... steven katic wrote:

Quote:

With a series of blinks could a language develop?

yeah only if the programmer's programmed for it?

and:

If by this you mean that the programmer has to program in a language with which to communicate......


The continued explanation you provide is not what I meant: that's too hard...
although interesting. :)

My response was much simpler, in keeping with my ignorant-cynic theme:

So what did I mean by "yeah if the programmer's programmed for it?".

I was trying to imply the following generalization:

"there is no intelligence in Artificial Intelligence, and that any
signs of intelligence that ever becomes interesting to discuss
(such as developing a language with a series of blinks) must
orginate from the human creators of the software/system(s)/experiment."

Apart from sounding a little too obvious, the huge flaw in that statement is that I gave no definition of intelligence! Will I need to?

But, hopefully, the generalization does clarify the point.
(If it doesn't, I'm sure you can safely ignore it if you wish)




Roboguy    794
Quote:
Original post by Sneftel
Quote:
Original post by Timkin
An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist for those that don't know) beliefs that grammar is hard-coded in the human brain (and cannot be learned)...

Huh... very cool. Link?


Indeed. I'd be interested in the link as well. Although, haven't there been other experiments that have gone against that theory as well, or just other theories?

owl    376
so, @Timkin, you say that with a set of functions like:
go_left()
go_right()
go_forward()
go_backwards()
light_on()
light_off()
sense_light()
eat()

these robots can evolve a language between them? (Probably some color measurement and some timers would be required too.)

Kylotan    9854
Hmm, I quite like this sort of article, but that's because instead of looking at how it potentially overhypes the robot/AI work, I look at the implications for psychology. I think it's very interesting to think that the ideas of altruism and deception, so often wrapped up in discussions of ethics and morals, can be viewed as simply specialised optimisations when placed into a wider population.

makar    122
Well, I think the concept of lying to achieve some gain is actually a very likely behaviour to emerge from any learning machine. A child will learn from a very early age that lying can be beneficial.

adult: 'did you make this mess?'
child: 'yes'

*smack*

This action/response would give a negative result, and so the child would try something different next time:

adult: 'did you make this mess?'
child: .... 'no'
adult: 'hmmm ok, nevermind'

I think most learning methods would eventually learn to lie, if for nothing more than to try to avoid negative reactions.
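As a toy version of that, assuming a bare-bones action-value update (nothing to do with the robots in the article): if admitting is punished and denying is not, the estimated value of lying quickly wins out.

```python
import random

values = {"admit": 0.0, "deny": 0.0}   # estimated value of each reply
ALPHA = 0.3                            # learning rate

def reward(action):
    # The environment exactly as described above: confessing earns a smack,
    # denying gets a shrug. (Whether the lie is detected is ignored here.)
    return -1.0 if action == "admit" else 0.0

for _ in range(50):
    # epsilon-greedy choice between the two replies
    if random.random() < 0.2:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    values[action] += ALPHA * (reward(action) - values[action])

print(values)   # "deny" ends up with the higher (less negative) estimate
```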

Timkin    864
Quote:
Original post by Sneftel
Quote:
Original post by Timkin
An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist for those that don't know) beliefs that grammar is hard-coded in the human brain (and cannot be learned)...

Huh... very cool. Link?


It was a New Scientist article iirc... I'll try and track it down for you.

steven: let's not get into a discussion of 'what is intelligence' just yet... the year is still too young (and we've had that discussion many times over during the past decade). As for your belief that intelligence must arise from the creator/designer, I disagree. Mostly because I believe intelligence is a functional property of systems, and so it can be learned (and improved) through adaptation of the system. Provided the designer/creator gives the system the capacity to try new strategies and evaluate their quality, the system will develop what we might call 'intelligent strategies'... i.e., those that best suit the system (taken with respect to its performance measures and beliefs).

owl: no, that's not what I'm saying. If you gave a bot/agent a sensor with which to observe an environment and a means of applying labels to objects detectable by the sensor, then you gave it the capacity to communicate these labels to another bot/agent that could observe the same environment... and then finally gave them a means of inferring the meaning of a label they receive from the other bot, then it is conceivable that you could devise an evolutionary strategy that permitted the bots to evolve a common language.

In the given experiment, the communication channel is made of both the sensor and the blinking light. The labels can be anything, but they map directly to positive and negative reinforcements in the environment. In this context it doesn't matter what one bot calls them... only the label they send to other bots (how they blink... or not at all).

The evolutionary strategy is 'survival of the power-eaters'... i.e., those that receive the most positive reinforcement are more likely to survive. However, this isn't guaranteed, since the GA's implementation includes stochastic factors (mutation and selection). Thus there will be situations in which bots will gain more by helping everyone to receive more power, rather than just themselves (altruism benefits weak individuals the most). There will also be situations in which those with a strong strategy are better off treading on the weak (altruism does not benefit the powerful).

For those interested: Kevin Korb from Monash University, along with some of his honours students, has investigated the evolution of various social behaviours in software simulations. He has noted, for example, that in certain populations euthanasia is a viable and appropriate strategy for ensuring the long-term strength and viability of the population. If you're interested in his work, you can find more information online at Monash's website.

Cheers,

Timkin

[Edited by - Timkin on January 29, 2008 7:08:38 PM]

Kylotan    9854
Quote:
Original post by makar
well i think the concept of lying to achieve some gain is actually a very likely behaviour that would emerge from any learning machine. A child will learn that from a very early age, lying can be beneficial.

adult: 'did you make this mess?'
child: 'yes'

*smack*

this action/response would give a negative result, and so the child would try something different next time:

adult: 'did you make this mess?'
child: .... 'no'
adult: 'hmmm ok, nevermind'

I think most learning methods would eventually learn to lie, if for nothing more than to try and avoid getting negative reactions


But as with many similar problems, iterated and multi-agent versions produce different results. You have to learn not just to lie, but the circumstances under which to do it. And you have to take into account the diminishing utility of lying when the other agent is aware of the possibility of you doing so, e.g. if the 'adult' agent knows a 'child' agent made the mess, but each denies it. It may be more worthwhile to have told the truth and accepted the short-term punishment in return for being able to get away with a bigger lie later! These are interesting problems.
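Here is a sketch of that iterated effect, with an invented trust model (none of this comes from the article): the listener discounts the speaker in proportion to past detected lies, so the one-shot gain from lying shrinks over repeated rounds.

```python
import random

def run(strategy, rounds=10, detect_prob=0.5):
    """strategy(trust) returns 'lie' or 'truth'. A lie pays 2 but only if the
    listener believes it (probability = current trust), and each detected lie
    halves that trust for all later rounds; telling the truth pays 1."""
    trust, total = 1.0, 0.0
    for _ in range(rounds):
        if strategy(trust) == "lie":
            if random.random() < trust:
                total += 2.0                 # the lie worked this time
            if random.random() < detect_prob:
                trust *= 0.5                 # caught out: credibility drops
        else:
            total += 1.0
    return total

always_lie = lambda trust: "lie"
rarely_lie = lambda trust: "lie" if trust > 0.9 else "truth"

trials = 2000
print(sum(run(always_lie) for _ in range(trials)) / trials)   # roughly 7.5
print(sum(run(rarely_lie) for _ in range(trials)) / trials)   # roughly 12
```

With these made-up numbers, the restrained strategy comes out ahead precisely because it preserves the trust that makes its occasional lies effective.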

steven katic    275
Quote:

from Timkin

steven: lets not get into a discussion of 'what is intelligence' just yet... the year is still too young (and we've had that discussion many times over during the past decade). As for your belief that intelligence must arise from the creator/designer, I disagree. ......


No, no, that's fine... let's not.

I am sure there is a/are clear definition(s) of the term intelligence for this field of science (that must work) for it to be sensibly explored in a civilised manner as a science (it is but a vague memory to me now). Apart from the likelihood of such a discussion digressing away from the science of AI and into philosophy, I wouldn't particularly find it pleasant to argue/discuss (i.e. I wouldn't participate anyhow). When I mentioned "defining intelligence" in that post I was feeling cheeky and baiting for bites (um... aaah): it's hard for me to play the ignorant cynic all the time, you know... but we all have our own crosses to bear, don't we? ;)

Timkin    864
Quote:
Original post by steven katic
I am sure there is a/are clear definition(s) of the term intelligence for this field of science


Hehe... but therein lies the problem... there is no universally accepted definition of intelligence! ;) It's like asking a chimpanzee to define a banana. Sure, it can pick one out of a pile of fruit... but getting it to explain to you the texture, taste and colour that enabled it to distinguish it from say, a lemon... well, that's a different story! ;) Are fruits that are yellow also bananas (for the chimp)? Are fruits that taste like bananas also bananas? Is the chimpanzee 'intelligent' because she can pick out a banana among lemons or is it just a behaviour triggered by an encoded functional mapping from observations to expected rewards?

Oh god no... I've started it now, haven't I?


(I couldn't help myself... it's Friday... it's quiet around my office... and I'm avoiding real work) ;)

Cheers,

Timkin

owl    376
Quote:
Original post by Timkin
Quote:
Original post by steven katic
I am sure there is a/are clear definition(s) of the term intelligence for this field of science


Hehe... but therein lies the problem... there is no universally accepted definition of intelligence! ;) It's like asking a chimpanzee to define a banana. Sure, it can pick one out of a pile of fruit... but getting it to explain to you the texture, taste and colour that enabled it to distinguish it from say, a lemon... well, that's a different story! ;) Are fruits that are yellow also bananas (for the chimp)? Are fruits that taste like bananas also bananas? Is the chimpanzee 'intelligent' because she can pick out a banana among lemons or is it just a behaviour triggered by an encoded functional mapping from observations to expected rewards?


Well, we know more or less what intelligence looks like. If you've ever had a dog, you know they can behave noticeably intelligently at times, as if they were capable of coming up with a solution/conclusion they weren't taught before.

To me, intelligence is being able to synthesize a solution (not previously known) to a problem on one's own. And I don't even think self-awareness is a requirement for that.

