Robots Evolve And Learn How to Lie

Hell... just go out and buy the Creatures series. This is hardly anything new, people.


I found some papers on the subject and posted them in my weekly roundup:

http://aigamedev.com/links/2008-week-4

Alex


I think the more interesting point is how the "prisoner's dilemma" situation works. Is it better for the robots to have a strategy that's only good for them, or a strategy that helps everyone? Obviously a balance is usually reached, but I think an interesting point in this article is how quickly this shows up even in the smallest-scale examples.
I too become annoyed when I see reports such as this one. Usually, though, I see this sort of thing when I'm reviewing conference and journal papers prior to publication... researchers who claim big outcomes that are, in actuality, just a re-application of known results in a new domain.

Genetic algorithms (GAs) are a class of algorithms for solving blind search problems, such as certain optimisation problems. Their convergence properties are well known. They have been applied to evolving populations of agents many times over (even in hardware). Nothing new here.

That the bots in this story evolved unexpected behaviours is not a clear indication of outstanding capabilities of these algorithms, but rather an indictment of the lack of clear thought applied by the researchers as to what they would expect to see. There have been innumerable papers published on evolving both socially beneficial and individually beneficial agent behaviours in populations... it all comes down to what you choose as your objective function and what capabilities the agents have to sense each other and their environment.
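To make the "nothing new here" point concrete, here's a minimal sketch of the kind of GA loop these experiments boil down to. The fitness function below is a made-up placeholder; in the robot experiments it would be the agents' score in simulation, and choosing an individual versus a shared reward is precisely what tips the population toward selfish or social behaviours.

import random

GENOME_LEN = 16         # bits controlling an agent's behaviour
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(genome):
    # Placeholder objective: reward genomes with many 1-bits.
    # In the robot experiments this would be the agent's (or the group's)
    # performance in simulation.
    return sum(genome)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]          # truncation selection
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("best fitness:", max(fitness(g) for g in population))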

One final comment... steven katic wrote:
Quote:

With a series of blinks could a language develop?

yeah only if the programmer's programmed for it?


If by this you mean that the programmer has to program in a language with which to communicate, then the above statement is not true. Co-evolution of language in bots (software and hardware) has been studied and shown to be possible. An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist for those that don't know) beliefs that grammar is hard-coded in the human brain (and cannot be learned)... an internet-based research project involving bots around the globe that used video cameras to see a table top in front of them learned to communicate about the items on that table top. The bots were able to form their own terms for the items in front of them, and in communicating with each other had to develop an agreement on terms. Individual dialects developed as the bots interacted with each other and formed population niches... and the bots even learned a basic grammatical structure which they shared with new bots (which presumably made it easier for new bots to learn how to converse with existing bots). So from this research I would say that what is needed is a communications channel (a way of sending and receiving symbols) and an underlying method of adjusting the syntax and semantics of internal symbols based on what is sent and received on the channel.
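For anyone curious what "adjusting the syntax and semantics of internal symbols" can look like in the simplest case, here is a toy "naming game" sketch in the spirit of that research. The objects, word lengths and agent count are invented for illustration, not taken from the actual experiment.

import random
import string

OBJECTS = ["cup", "ball", "box"]     # stand-ins for items on the table top
N_AGENTS = 10
ROUNDS = 5000

def new_word():
    return "".join(random.choice(string.ascii_lowercase) for _ in range(4))

# Each agent maps every object to a set of candidate words for it.
agents = [{obj: set() for obj in OBJECTS} for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(OBJECTS)
    if not speaker[obj]:
        speaker[obj].add(new_word())          # invent a term if none exists
    word = random.choice(tuple(speaker[obj]))
    if word in hearer[obj]:
        speaker[obj] = {word}                 # success: both align on the word
        hearer[obj] = {word}
    else:
        hearer[obj].add(word)                 # failure: hearer adopts the word

# After enough rounds the population usually converges on one word per object.
for obj in OBJECTS:
    print(obj, "->", {w for agent in agents for w in agent[obj]})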

Anyway, I digress...

Cheers,

Timkin
Quote:Original post by Timkin
An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist for those that don't know) beliefs that grammar is hard-coded in the human brain (and cannot be learned)...

Huh... very cool. Link?
Quote:
One final comment... steven katic wrote:

Quote:

With a series of blinks could a language develop?

yeah only if the programmer's programmed for it?

and:

If by this you mean that the programmer has to program in a language with which to communicate......


The continued explanation you provide is not what I meant: that's too hard...
although interesting. :)

My response was much simpler, in keeping with my ignorant cynic theme:

So what did I mean by "yeah if the programmer's programmed for it?".

I was trying to imply the following generalization:

"there is no intelligence in Artificial Intelligence, and that any
signs of intelligence that ever becomes interesting to discuss
(such as developing a language with a series of blinks) must
orginate from the human creators of the software/system(s)/experiment."

Apart from sounding a little too obvious, the huge flaw in that statement is that I gave no definition of intelligence! Will I need to?

But, hopefully, the generalization does clarify the point.
(If it doesn't, I'm sure you can safely ignore it if you wish)




Quote:Original post by Sneftel
Quote:Original post by Timkin
An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist for those that don't know) beliefs that grammar is hard-coded in the human brain (and cannot be learned)...

Huh... very cool. Link?


Indeed. I'd be interested in the link as well. Although, haven't there been other experiments that have gone against that theory as well, or just other theories?
So, @Timkin, you say that with a set of functions like:
go_left()
go_right()
go_forward()
go_backwards()
light_on()
light_off()
sense_light()
eat()

these robots can evolve a language between them? (Probably some color measurement and some timers would be required too.)
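For what it's worth, one common way to set that up (this is my own sketch, not something from the paper) is to treat each robot's controller as a small lookup table from sensor readings to those primitives and let a GA mutate it. Whether any light-blinking "language" emerges then hinges entirely on the fitness function used to score the group in simulation, which is the hard part and is only noted as a comment here.

import random

ACTIONS = ["go_left", "go_right", "go_forward", "go_backwards",
           "light_on", "light_off", "eat"]

# Sensor state: (sees_light, on_food), each True/False -> 4 possible states.
STATES = [(light, food) for light in (False, True) for food in (False, True)]

def random_controller():
    # The "genome": one action per sensor state.
    return {state: random.choice(ACTIONS) for state in STATES}

def mutate(controller, rate=0.1):
    return {state: (random.choice(ACTIONS) if random.random() < rate else action)
            for state, action in controller.items()}

# Evaluation is the missing piece: each controller would be run in the robot
# simulation, with fitness being, say, the food eaten by the individual or
# shared across the group -- that choice is what makes honest blinking or
# "lying" pay off.
controller = random_controller()
print(controller)
print(mutate(controller))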
Hmm, I quite like this sort of article, but that's because instead of looking at how it potentially overhypes the robot/AI work, I look at the implications for psychology. I think it's very interesting to think that the ideas of altruism and deception, so often wrapped up in discussions of ethics and morals, can be viewed as simply specialised optimisations when placed into a wider population.
Well, I think the concept of lying to achieve some gain is actually a very likely behaviour to emerge from any learning machine. A child will learn from a very early age that lying can be beneficial.

adult: 'did you make this mess?'
child: 'yes'

*smack*

This action/response would give a negative result, and so the child would try something different next time:

adult: 'did you make this mess?'
child: .... 'no'
adult: 'hmmm ok, nevermind'

I think most learning methods would eventually learn to lie, if for nothing more than to avoid negative reactions.
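To illustrate, here's a toy reward-driven learner for exactly that exchange (the reward values, learning rate and exploration rate are made up): answering "yes" gets the smack, answering "no" doesn't, and the value estimates drift toward the lie.

import random

ACTIONS = ["yes", "no"]
value = {a: 0.0 for a in ACTIONS}   # estimated value of each answer
ALPHA = 0.1                         # learning rate
EPSILON = 0.2                       # exploration probability

def reward(answer):
    return -1.0 if answer == "yes" else 0.0   # *smack* vs. "ok, nevermind"

for _ in range(200):
    if random.random() < EPSILON:
        answer = random.choice(ACTIONS)       # occasionally try both answers
    else:
        answer = max(ACTIONS, key=lambda a: value[a])
    value[answer] += ALPHA * (reward(answer) - value[answer])

print(value)   # the learner ends up preferring "no", i.e. the lie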
