
# Robots Evolve And Learn How to Lie


### #21 Timkin  Members   -  Reputation: 864

Posted 29 January 2008 - 01:08 PM

Quote:
Original post by Sneftel
Quote:
 Original post by Timkin: An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist, for those that don't know) belief that grammar is hard-coded in the human brain (and cannot be learned)...

Huh... very cool. Link?

It was a New Scientist article iirc... I'll try and track it down for you.

steven: let's not get into a discussion of 'what is intelligence' just yet... the year is still too young (and we've had that discussion many times over during the past decade). As for your belief that intelligence must arise from the creator/designer, I disagree, mostly because I believe intelligence is a functional property of systems and so it can be learned (and improved) through adaptation of the system. Provided the designer/creator gives the system the capacity to try new strategies and evaluate their quality, the system will develop what we might call 'intelligent strategies'... i.e., those that best suit the system (taken with respect to its performance measures and beliefs).

owl: no, that's not what I'm saying. If you gave a bot/agent a sensor with which to observe an environment and a means of applying labels to objects detectable by the sensor, then gave it the capacity to communicate these labels to another bot/agent that could observe the same environment, and finally gave them a means of inferring the meaning of a label received from the other bot, then it is conceivable that you could devise an evolutionary strategy that permitted the bots to evolve a common language.

In the given experiment, the communication channel is made up of both the sensor and the blinking light. The labels can be anything, but they map directly to positive and negative reinforcements in the environment. In this context it doesn't matter what one bot calls them... only the label it sends to other bots matters (how it blinks... or not at all).

The evolutionary strategy is 'survival of the power-eaters'... i.e., those that receive the most positive reinforcement are more likely to survive. However, this isn't guaranteed, since the GA's implementation includes stochastic factors (mutation and selection). Thus there will be situations in which bots will gain more by helping everyone to receive more power, rather than just themselves (altruism benefits weak individuals the most). There will also be situations in which those with a strong strategy are better off treading on the weak (altruism does not benefit the powerful).
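
For concreteness, here is a minimal Python sketch of the selection pressure described above. Everything in it -- the single-gene genome, the crowding model, truncation selection -- is invented for illustration and is not the setup from the actual experiment.

```python
import random

# Toy model: each bot's genome is one gene in [0, 1], the probability that
# it blinks when sitting on the food source. Blinking attracts competitors,
# which dilutes the finder's share of the energy. All numbers are invented.
POP, GENERATIONS, MUTATION_STD = 50, 200, 0.05

def fitness(blink_prob):
    expected_crowd = 1 + blink_prob * 5   # more blinking -> more company
    return 10.0 / expected_crowd          # finder's share of the energy

def evolve():
    genomes = [random.random() for _ in range(POP)]
    for _ in range(GENERATIONS):
        genomes.sort(key=fitness, reverse=True)
        survivors = genomes[: POP // 2]   # truncation selection
        children = [min(1.0, max(0.0, random.choice(survivors) +
                                  random.gauss(0, MUTATION_STD)))
                    for _ in range(POP - len(survivors))]
        genomes = survivors + children
    return sum(genomes) / len(genomes)

print("mean blink probability after evolution:", evolve())
```

Under this payoff signalling is purely costly, so the population converges on silence -- deception by omission. Making part of the energy shared across the population would push it back toward honest signalling, which is the altruism trade-off described above.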

For those interested: Kevin Korb from Monash University, along with some of his honours students, has investigated the evolution of various social behaviours in software simulations. He has noted, for example, that in certain populations euthanasia is a viable and appropriate strategy for ensuring the long-term strength and viability of the population. If you're interested in his work you can find more information online at Monash's website.

Cheers,

Timkin

[Edited by - Timkin on January 29, 2008 7:08:38 PM]

### #22 Kylotan  Moderators   -  Reputation: 5639

Posted 29 January 2008 - 11:14 PM

Quote:
 Original post by makar: well i think the concept of lying to achieve some gain is actually a very likely behaviour that would emerge from any learning machine. A child will learn from a very early age that lying can be beneficial.

adult: 'did you make this mess?'
child: 'yes'
*smack*

this action/response would give a negative result, and so the child would try something different next time:

adult: 'did you make this mess?'
child: .... 'no'
adult: 'hmmm ok, nevermind'

I think most learning methods would eventually learn to lie, if for nothing more than to try and avoid getting negative reactions

But as with many similar problems, iterative and multi-agent versions produce different results. You have to learn not just to lie, but the circumstances under which to do it. And you have to take into account the diminishing utility of lying when the other agent is aware of the possibility of you doing so, e.g. if the 'adult' agent knows a 'child' agent made the mess, but each denies it. It may be more worthwhile to have told the truth and accepted the short-term punishment in return for being able to get away with a bigger lie later! These are interesting problems.
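
A toy numerical illustration of that diminishing utility (the payoffs and the trust-decay rule below are made up, not drawn from any real experiment):

```python
# The child either always lies or always confesses. Each lie erodes the
# adult's trust, so later lies are more likely to be caught. Payoffs and
# the decay constant are invented for illustration.
def total_payoff(rounds, always_lie):
    trust, total = 1.0, 0.0
    for _ in range(rounds):
        if always_lie:
            # succeed (payoff 0) with probability `trust`, else caught (-2)
            total += trust * 0.0 + (1.0 - trust) * -2.0
            trust *= 0.7          # suspicion grows with every lie
        else:
            total += -1.0         # confess: mild punishment, trust intact
    return total

print("always lie:    ", total_payoff(10, True))   # about -13.5
print("always confess:", total_payoff(10, False))  # exactly -10.0
```

Over a long enough horizon the habitual liar does worse than the honest agent; the interesting policies are the selective ones, which spend trust only when the payoff justifies it.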

### #23 steven katic  Members   -  Reputation: 275

Posted 30 January 2008 - 04:30 PM

Quote:
 from Timkin: steven: let's not get into a discussion of 'what is intelligence' just yet... the year is still too young (and we've had that discussion many times over during the past decade). As for your belief that intelligence must arise from the creator/designer, I disagree. ......

No, no, that's fine... let's not.

I am sure there is a clear definition (or definitions) of the term intelligence for this field of science, one that must work for the field to be sensibly explored in a civilised manner as a science (it is but a vague memory to me now). Apart from the likelihood of such a discussion digressing away from the science of AI and into philosophy, I wouldn't particularly find it pleasant to argue/discuss (i.e. I wouldn't participate anyhow). When I mentioned "defining intelligence" in that post I was feeling cheeky and baiting for bites (um... aaah): it's hard for me to play the ignorant cynic all the time, you know... but we all have our own crosses to bear, don't we? ;)

### #24 Timkin  Members   -  Reputation: 864

Posted 31 January 2008 - 12:49 PM

Quote:
 Original post by steven katic: I am sure there is a clear definition (or definitions) of the term intelligence for this field of science

Hehe... but therein lies the problem... there is no universally accepted definition of intelligence! ;) It's like asking a chimpanzee to define a banana. Sure, she can pick one out of a pile of fruit... but getting her to explain the texture, taste and colour that enabled her to distinguish it from, say, a lemon... well, that's a different story! ;) Are fruits that are yellow also bananas (for the chimp)? Are fruits that taste like bananas also bananas? Is the chimpanzee 'intelligent' because she can pick out a banana among lemons, or is it just a behaviour triggered by an encoded functional mapping from observations to expected rewards?

Oh god no... I've started it now, haven't I?

(I couldn't help myself... it's Friday... it's quiet around my office... and I'm avoiding real work) ;)

Cheers,

Timkin

### #25 owl   Banned   -  Reputation: 376

Posted 01 February 2008 - 05:36 PM

Quote:
Original post by Timkin
Quote:
 Original post by steven katic: I am sure there is a clear definition (or definitions) of the term intelligence for this field of science

Hehe... but therein lies the problem... there is no universally accepted definition of intelligence! ;) It's like asking a chimpanzee to define a banana. Sure, she can pick one out of a pile of fruit... but getting her to explain the texture, taste and colour that enabled her to distinguish it from, say, a lemon... well, that's a different story! ;) Are fruits that are yellow also bananas (for the chimp)? Are fruits that taste like bananas also bananas? Is the chimpanzee 'intelligent' because she can pick out a banana among lemons, or is it just a behaviour triggered by an encoded functional mapping from observations to expected rewards?

Well, we know more or less what intelligence looks like. If you ever had a dog, you know they can behave noticeably intelligently at times, as if they were capable of coming up with a solution/conclusion they weren't taught before.

To me intelligence is being able to synthesize a solution (not previously known) to a problem on one's own. And I don't even think self-awareness is a requirement for that.

### #26 m0ng00se  Members   -  Reputation: 104

Posted 15 February 2008 - 12:08 PM

To study the principles of genetic algorithms correctly you need to actually understand gene theory. Most of you are mixing up "intelligence" with "evolution theory" and "hereditary patterns". You're comparing apples with oranges.

When I first saw AI genetic algorithms I was also fascinated, so I took a different approach to understanding them. I spent four years researching Mendel's theory of inheritance: hybridism, F2/F3/F4 cross-linking, inbreeding of genes, mutation, dominance, recessive traits, etc.

Would you say that plants are "intelligent"? Yet they will compete for food, compete for light, compete to be pollinated, etc. However, using genetic crosses you can deliberately breed the opposite, so they give up easily and have low adaptation skills. Does that make one lot "more intelligent" than the other?

All genetic algorithms do is simulate "pattern evolving" and "response evolving". It is true that it can take many, many generations to get a desired result with AI genetic modelling, BUT you can mathematically predict the outcomes depending on the amount of random mutation introduced.

To imply that the behaviour comes as a surprise would indicate either a large mutation factor or a total lack of mathematical modelling. We are talking about computers, remember, and finite algorithms. Think about that. Naturally you can run the algorithm in simulation mode and predict all possible outcomes; that is what computers do best. Even with random seeding you can still expect pattern emergence under chaos theory.

As somebody has already pointed out, the usual result if you don't understand genetic theory properly is a bunch of duds, just like the Dodo bird. Current genetic algorithms tend to fail because they pretend to imitate human or animal genetics but actually do the complete opposite. We are what we are because our DNA can combine in an almost infinite variation. You don't even need mutation if the original gene pool is large enough; that's a crappy computer-engineering shortcut.

Their AI algorithms are flawed because they do the exact opposite of what real genes do. Computer engineers write algorithms that selectively combine fewer and fewer variations until their desired pattern "emerges".

m0ng00se

### #27 steven katic  Members   -  Reputation: 275

Posted 15 February 2008 - 10:34 PM

How do you know that I am not a robot?

PS. That was a rhetorical question by the way.

### #28 Timkin  Members   -  Reputation: 864

Posted 16 February 2008 - 05:56 PM

Quote:
 Original post by m0ng00se: To study the principles of genetic algorithms correctly you need to actually understand gene theory.

That's patently false. Genetic algorithms bear little to no resemblance to biological evolutionary principles, and in particular to gene theory. To understand genetic algorithms one need only understand the schema theorem and some basic probabilistic analysis. From this and the canonical string operators (selection, crossover and mutation) you'll arrive at an asymptotic convergence law that describes why GAs are useful for solving blind optimisation problems.
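
For anyone who hasn't met them, the canonical operators Timkin lists fit in a few lines of Python. The "one-max" fitness function below (count the 1 bits) is just a placeholder for whatever the real blind optimisation problem rewards:

```python
import random

# Canonical GA over bit strings: fitness-proportionate selection,
# one-point crossover, point mutation.
LENGTH, POP, GENS, P_MUT = 20, 30, 100, 0.01

def fitness(bits):
    return sum(bits)                      # one-max placeholder objective

def select(population):
    # roulette-wheel selection; the +1 keeps every weight positive
    return random.choices(population,
                          weights=[fitness(b) + 1 for b in population])[0]

def crossover(a, b):
    point = random.randrange(1, LENGTH)   # one-point crossover
    return a[:point] + b[point:]

def mutate(bits):
    return [bit ^ 1 if random.random() < P_MUT else bit for bit in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP)]
print("best fitness:", max(fitness(b) for b in population))
```

The schema theorem's convergence argument is about exactly this loop: short, fit schemata receive exponentially increasing numbers of trials under these three operators.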

As to the rest of your post m0ng00se, I fail to see how it applies to the topic or what was written previously.

### #29 choffstein  Members   -  Reputation: 1090

Posted 21 February 2008 - 11:17 AM

I see this outcome as perfectly feasible if the simulation is run correctly -- and not just as an 'optimal solution' expected by the researchers.

Why would it not be possible to create an instruction set describing all the actions of these robots, encode it into binary, and then use each gene as a different 'cause and effect' action pair? So, given

010 111 00 101111 1 00101

That might say: 'if I see three fast pulses AND a pulse-wait-pulse, turn towards the light source AND move forward'.

You could then use genetic algorithms to splice these instruction sets, coded in the DNA, and evolve the robots. In this case it could indeed happen that lying evolves, depending on how the simulation was set up and how 'rewards' were given.

For example, imagine that only 'surviving' robots reproduce into the next generation, and that fitness is based on the amount of food collected versus the amount of food others collected. In this case, attracting attention to any food you find would be sub-optimal. In fact, it would be perfectly logical for the bots to evolve toward trying to kill each other -- that is exactly how the fitness function was defined.
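
Sketching that encoding in Python (the field widths and the sense/action tables are invented to mirror the bit string above, not taken from any real controller):

```python
# Hypothetical decoding of a bit-string genome into condition -> action
# rules: each 4-bit rule is 2 bits of condition plus 2 bits of action.
SENSES  = ['three fast pulses', 'pulse wait pulse', 'steady light', 'darkness']
ACTIONS = ['turn toward the light source', 'move forward', 'blink', 'stay still']

def decode(genome):
    rules = []
    for i in range(0, len(genome) - 3, 4):
        condition = int(genome[i:i + 2], 2)   # first 2 bits: what is seen
        action = int(genome[i + 2:i + 4], 2)  # next 2 bits: what to do
        rules.append((SENSES[condition], ACTIONS[action]))
    return rules

for seen, act in decode('01110010'):
    print(f"if I see {seen}: {act}")
```

Crossover then splices whole rules between genomes (or cuts through the middle of one, producing a novel rule), and mutation flips individual bits of a condition or an action.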

Are the bots smart? Are they learning? In a broad sense of the word, sure. They certainly are not 'aware' of anything... but they are learning and evolving the same way a dog learns to sit when you say 'sit' and give it a treat... simple reaction and reward.

Seems perfectly plausible and not too surprising to me...

### #30 steven katic  Members   -  Reputation: 275

Posted 21 February 2008 - 03:12 PM

I am skewing a bit (a lot?) off topic here, but it does scrape in under robots... I just recently found out I could buy a little robotic lawnmower to mow my lawn (it costs about $3000 AUD)! It looks like a little flying saucer (about 300-400 mm diameter) with 3 wheels to roll about on. Apparently you rope off designated areas, which it mows in random patterns, eventually covering about 100 square metres/hour (a bit Sssssslowwwwwww!); it's also nice enough to ride off (all by its lonesome) to the recharger when its battery gets too low and needs a charge to continue.

I imagine if I bought one, I would waste a fair bit of my time watching its debut on my lawn.

Ah..just some more of the fruits of AI/robotics research.

### #31 fujitsu  Members   -  Reputation: 122

Posted 24 February 2008 - 02:35 AM

Quote:
 Original post by makar: adult: 'did you make this mess?' child: 'yes' *smack*

I'd just like to point out something:

The article is titled "robots evolve and learn how to lie"; however, it is about the evolutionary process of intelligence, not about how we learn to lie (and do other cool stuff).

I'd like to propose the creation of a 'parent' robot: a robot that is incapable of learning but is programmed to be capable of survival in its environment. Would this help the development of a 'child' robot -- a robot that is capable of using the sensors and limbs attached to it and has a desire to make choices that benefit itself most?

Perhaps.

In nature parents tend to force things upon their children for the child's benefit. In time the child learns to make its own decisions (with a little guidance of course).

Though it seems possible to recreate intelligence as it exists in nature, I don't believe it is possible to code a program to make decisions based on events that have not yet been perceived.

### #32 choffstein  Members   -  Reputation: 1090

Posted 24 February 2008 - 07:18 AM

Quote:
Original post by fujitsu
Quote:
 Original post by makar: adult: 'did you make this mess?' child: 'yes' *smack*

I'd just like to point out something:

The article is titled "robots evolve and learn how to lie"; however, it is about the evolutionary process of intelligence, not about how we learn to lie (and do other cool stuff).

I'd like to propose the creation of a 'parent' robot: a robot that is incapable of learning but is programmed to be capable of survival in its environment. Would this help the development of a 'child' robot -- a robot that is capable of using the sensors and limbs attached to it and has a desire to make choices that benefit itself most?

Perhaps.

In nature parents tend to force things upon their children for the child's benefit. In time the child learns to make its own decisions (with a little guidance of course).

Though it seems possible to recreate intelligence as it exists in nature, I don't believe it is possible to code a program to make decisions based on events that have not yet been perceived.

Along these lines, it has been found that humans had an explosion of culture around 10000 years ago: all of a sudden there was a rapid acceleration of knowledge and invention. Theories about this period include the idea that traits are passed down in two ways: genetically and culturally. Genetic traits include physical traits and 'instinctual' reactions; cultural traits are essentially taught knowledge.

This exercise might be more telling if parents did not die before children were spawned, and if knowledge learned in one generation -- stored in a 'memory bank' -- could be 'passed down' and taught to children (subject to their physical capacity to learn)...

It certainly wouldn't make sense if you were using genetic algorithms as a method to find optimal solutions ... but otherwise, it might be 'interesting'...
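
A sketch of what that dual channel might look like (the Agent class and all of its details are invented for illustration):

```python
import random

# Toy dual inheritance: genes are copied with mutation (genetic channel),
# while the parent's learned 'memory bank' is taught to the child directly
# (cultural channel).
class Agent:
    def __init__(self, genome, memory=None):
        self.genome = genome               # inherited genetically
        self.memory = dict(memory or {})   # inherited culturally

    def learn(self, situation, response):
        self.memory[situation] = response  # lifetime learning

    def reproduce(self):
        child_genome = [g + random.gauss(0, 0.1) for g in self.genome]
        return Agent(child_genome, memory=self.memory)

parent = Agent([0.5, 0.2])
parent.learn('fire', 'avoid')
child = parent.reproduce()
print(child.memory)   # {'fire': 'avoid'} -- knowledge survives the generation
```

Passing learned state down like this is Lamarckian rather than Darwinian, which is exactly why it stops making sense for pure optimisation but might, as noted above, be 'interesting'.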

### #33 wodinoneeye  Members   -  Reputation: 1656

Posted 07 March 2008 - 07:57 PM

Thinking about this (setting aside the usual reaction to such AI hype: "How many dim journalists does it take to anthropomorphize something little more than a fancy lightswitch?"), it came to mind to question how complex a world mechanism and its possibilities for behavior need to be before you can start using it as a remote analogy for human-level behavior.

Consider cooperative behavior where units of similar genetics are seen as 'Us', and thus worthy of help (signaling them that food is here, or equivalent), while trying to fool/block the acquisition of that resource by 'The Others' (ones not sharing the same genes).

I suppose IDENTIFY_GENETIC_MATCH could be turned into a gene/set of genes that modify behavior, but that seems contrived because the GA system didn't create the mechanism from much simpler primitives itself.
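
A sketch of how such a hand-coded kin check might look (the threshold and the genomes are arbitrary), which also shows why it feels contrived -- the recognition mechanism is supplied by the programmer, not evolved from primitives:

```python
# Hand-coded kin recognition: a unit signals food only to units whose
# genomes are sufficiently similar to its own. The threshold is arbitrary.
def genetic_similarity(genome_a, genome_b):
    matches = sum(a == b for a, b in zip(genome_a, genome_b))
    return matches / len(genome_a)

def should_signal(me, other, threshold=0.8):
    # 'Us' gets the food signal; 'The Others' do not
    return genetic_similarity(me, other) >= threshold

me       = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
kin      = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
stranger = [0, 1, 0, 0, 1, 1, 1, 0, 0, 0]
print(should_signal(me, kin))       # True  -- 90% match
print(should_signal(me, stranger))  # False -- 20% match
```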

Too many genes and you get into combinatoric chaos-land (millions of generations of a world of billions of units to grow behaviors that are not chaotic ineffectiveness).

Too few genes and your potential for complex behaviors is virtually nil.

Thinking of the above and of the old 'amino acid soup' theory of evolution, I wonder whether anyone doing these experiments has noticed (if the mechanism is even possible) groupings of units coalescing to connect their behaviors.

An idea would be to add a gene that allows connection to one or more other units to form super-units, whereby the limited behaviors of a GA gene system can group into a next order of organism (effectively, the original units become 'super-genes', allowing much more complex behavior).
(Unfortunately this will be subject to the combinatorics effect unless some stability factor causes unit mutation to slow down and the generational cycle to occur mostly at the higher order.)

Billions and billions ....
