Followup: Thought Experiment

Started by Sander, 2 comments, last by Sander 20 years, 9 months ago
This is a followup to the "Thought Experiment" thread, which I find extremely interesting. There was one remark in there by Baylor (his first post in the thread) that I disagree with but that started me thinking "what if"s. I didn't want to lead the original thread off-topic, so I started this one.
quote: Original post by baylor
Fourth, I don't believe intelligence has to be a stimulus-response thing. For example, dreams don't have sensory inputs. Talking to yourself in your head is the same.
This is the part that I disagree with. I think this can be a case where the output is both stimulus and response. Output is input, so to speak. But it got me thinking: has this ever been simulated artificially, for example in ANNs? What would the effect be on an ANN when part of its output is also its input? Would it "think"? Could it "think" longer?

What if you applied it to an existing AI, like NLPs? Say you take two NLPs and have them talk to each other. What would they talk about? One NLP's output would be the other's input. Maybe you would have to start them talking, like in the ball experiment from Robin Williams' "Awakenings" movie. But where would the discussion go? This would be even more interesting if the two NLPs weren't the same. After all, we prefer talking to people that are different (i.e. all other people; you rarely talk to yourself a lot). Any thoughts?

Sander Maréchal
[Lone Wolves Game Development][RoboBlast][Articles][GD Emporium][Webdesign][E-mail]


quote: Original post by Sander
Output is input, so to speak. But it got me thinking: has this ever been simulated artificially, for example in ANNs?


Yes, the answer is Recurrent Neural Networks. Some, or all, of the output layer is connected back to the input layer, providing a mechanism by which functions of the form

x(t+1) = f(x(t), x(t-1), x(t-2), ..., x(t-k); p)

can be approximated (p is a parameter vector that remains constant). 't' represents time, although, looking closely, we can see that it could be any arbitrary index (in time or space).
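For the curious, here is a minimal sketch in Python of such a feedback loop. It is purely illustrative, not from any particular library: the layer sizes, the tanh activations, and the window length k are arbitrary choices, and the weights (the parameter vector p) stay fixed while the network "talks to itself":

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
k = 2                      # how many past outputs are remembered
n = 3                      # dimension of each output x(t)
W_in = rng.normal(scale=0.5, size=(8, n * (k + 1)))   # part of p (fixed)
W_out = rng.normal(scale=0.5, size=(n, 8))            # rest of p (fixed)

# start the history with zeros: x(0) = x(-1) = ... = 0
history = deque([np.zeros(n)] * (k + 1), maxlen=k + 1)

for t in range(5):
    stacked = np.concatenate(list(history))   # (x(t), x(t-1), ..., x(t-k))
    x_next = np.tanh(W_out @ np.tanh(W_in @ stacked))
    history.appendleft(x_next)                # the output becomes future input
    print(t + 1, np.round(x_next, 3))
```

Each iteration computes x(t+1) = f(x(t), ..., x(t-k); p); the print statements just let you watch the trajectory the feedback loop settles into.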

quote: Original post by Sander
What would the effect be on an ANN when part of its output is also its input? Would it "think"? Could it "think" longer?


The result is that the current classification depends on previous classifications.

quote: Original post by Sander
What if you applied it to an existing AI, like NLPs? Say you take two NLPs and have them talk to each other. What would they talk about? One NLP's output would be the other's input.

Been done. I've forgotten the name of the experiment, but essentially pairs of discourse agents would travel over the internet to a pair of computers that sat on a desktop. There were many locations to visit and many different agents, so the number of combinations of pairs and locales was exceedingly large. A person sat at the desk and manipulated objects on it. The agents conversed about these objects and asked questions of the person and of each other. The person could provide some answers.

The point was to see if the agents could establish a common language. Not only did they do this, but they also developed a very limited grammar (partially proving wrong certain famous linguists who believe that grammar is hardwired in our brains), many different dialects, and learned a few expletives along the way (which was why the project stopped its public phase).
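To get a feel for how a shared vocabulary can emerge, here is a toy "naming game" in Python. It is a drastic simplification of that experiment (my own caricature, closer to a voter model than to the real agents): each round a speaker names a random object, the hearer adopts the name on a mismatch, and the population drifts toward a common lexicon.

```python
import random

random.seed(1)
OBJECTS = ["ball", "cube", "cone"]
SYLLABLES = ["ka", "mo", "ti", "ru", "pe"]

def new_word():
    # invent a random two-syllable name
    return "".join(random.choices(SYLLABLES, k=2))

# each agent is just a mapping: object -> its preferred name
agents = [dict() for _ in range(5)]

for _ in range(300):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(OBJECTS)
    word = speaker.setdefault(obj, new_word())   # speaker invents if needed
    if hearer.get(obj) != word:
        hearer[obj] = word                       # failed game: hearer adopts

# after enough games the population should agree on one name per object
for obj in OBJECTS:
    print(obj, sorted({a[obj] for a in agents if obj in a}))
```

The real experiment grounded its words in camera input and pruned failed words, but even this stripped-down adopt-on-failure rule is usually enough for five agents to converge on a shared name per object.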

quote: Original post by Sander
This would be even more interesting if the two NLPs weren't the same. After all, we prefer talking to people that are different (i.e. all other people; you rarely talk to yourself a lot).


As MikeD explained recently, if the two agents don't have a common basis for developing their language and a common understanding of the discourse domain, then they aren't going to be able to communicate very much information. I'm sure Mike can explain it better than I.

Cheers,

Timkin
That agent experiment sounds mighty interesting, I must say. I'll try googling for it, see what I come up with, and post some links here if anyone else is interested. I know I am.

Thanks Timkin

Sander Maréchal
[Lone Wolves Game Development][RoboBlast][Articles][GD Emporium][Webdesign][E-mail]



quote: Original post by Timkin
As MikeD explained recently, if the two agents don't have a common basis for developing their language and a common understanding of the discourse domain, then they aren't going to be able to communicate very much information. I'm sure Mike can explain it better than I.
Timkin


Well, to be specific (thanks, Timkin), communication can only occur over the areas of the two agents' discourse domains, and only to the extent that those domains are similar. That is, if you have identical syntax and semantics for the word cat, you can communicate the concept of cat perfectly. If you have identical syntax and semantics for lots of concepts surrounding the word cat, you could discuss cats, within the shared semantics, perfectly. If one person did not know the word cat but understood the words predator, legs, fur, etc., the other person could describe a cat quite well, but your shared area of discourse would be limited when using the word cat: the two concepts of cat could be quite different, and the first person would not have a complete understanding of the word, rendering communication imperfect.

To extend from this: the word cat is defined by your every interaction with the delineation "cat" in the world. Given that your personal experiences of "cat" are by definition different from mine (as we are different people), communication can never be considered perfect between two individuals.
To take an extreme example, suppose I was a ranger on the African plains and you had only ever seen domestic cats. We could have a conversation about cats, their furriness, how many legs they have and their sharp teeth; then I'd say "aren't they great at bringing down antelope?" and you'd say "what the hey???". You could then ask how much communication went on in the whole discussion: when I mentioned furriness you didn't think of a lion's mane like I did, and when I mentioned legs and teeth, you didn't think of weapons that could kill you in an instant. Of course we communicated to some extent, but it was imperfect, limited communication, which we are all implicitly bound to by dint of being different people with different experiences. The more different we are and the smaller our shared discourse domain, the less communication occurs, until we might as well be speaking different languages.
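One crude way to picture this (my own illustration, not a formal model) is to treat each speaker's concept of "cat" as a set of features and take the overlap of those sets as the shared discourse domain. The feature sets below are invented for the example:

```python
# Two speakers' concepts of "cat", as feature sets (invented examples).
ranger_cat = {"fur", "four legs", "sharp teeth", "predator",
              "mane", "brings down antelope"}
urban_cat = {"fur", "four legs", "sharp teeth", "predator",
             "purrs", "sits on laps"}

shared = ranger_cat & urban_cat                        # what both can discuss
overlap = len(shared) / len(ranger_cat | urban_cat)    # Jaccard overlap

print("shared features:", sorted(shared))
print(f"overlap: {overlap:.2f}")                       # 4 shared / 8 total = 0.50
```

The smaller that overlap, the less actually gets communicated whenever the word comes up.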

Apologies if this is too off-topic for people's tastes.

Mike

