Life-like AI??

Started by
27 comments, last by j8l5s 22 years, 3 months ago
I ph34r when, and if that day comes...
delete this;
quote:Original post by Tjoppen
I ph34r when, and if that day comes...

yah, we're all screwed when SkyNet gains self awareness...

--- krez (krezisback@aol.com)
--- krez ([email="krez_AT_optonline_DOT_net"]krez_AT_optonline_DOT_net[/email])
You all make good points, but it sounds to me like everything is theoretical. How far have people actually gotten on this? This reminds me of the AIDS virus: you have thousands of people working year round trying to solve it ("cure it") but barely getting anywhere. I would think that if it was possible someone would have found it by now, but anything is possible.
Here's something else to think about: how could it choose a favorite color or food? How could it become attracted to someone?

quote:Original post by j8l5s
You all make good points, but it sounds to me like everything is theoretical. How far have people actually gotten on this? This reminds me of the AIDS virus: you have thousands of people working year round trying to solve it ("cure it") but barely getting anywhere. I would think that if it was possible someone would have found it by now, but anything is possible.
Here's something else to think about: how could it choose a favorite color or food? How could it become attracted to someone?


The questions you ask all come down to an essential difference between the computer model (that's it: model; capacity does not matter, whether it's an average 500MHz Pentium or Sun's latest number cruncher) we use today and the way our brains function.

Computers are rapidly closing in on the brain in memory capacity, and they have passed it in reaction speed: a brain neuron fires more slowly than a processor crunches data. The essential difference is that a PC has one CPU. A big machine has maybe 64, but they are very carefully designed to work together (and remember, 64 cooperating processors do not equal 64 CPUs each working for themselves; there's a law of diminishing returns).

So our brain can take in millions of data units every second and process them in parallel. That's what makes our brain very complicated, and then I'm not even mentioning the feedback in our brain (anyone who has looked at the mathematical theories concerning neural nets that loop and feed the output from turn n as input on turn n+1 knows what I mean).
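That loop-and-feed-back idea can be sketched in a few lines. This is a hypothetical single-unit example; the weights and inputs are invented purely for illustration:

```python
import math

# Hypothetical recurrent step: the output from turn n is fed back
# as part of the input on turn n+1 (weights are made-up values).
def step(x, previous_output, w_in=0.8, w_fb=0.5):
    # weighted sum of the fresh input and the previous output,
    # squashed into (-1, 1) by tanh
    return math.tanh(w_in * x + w_fb * previous_output)

output = 0.0
for n, x in enumerate([1.0, 0.5, -0.3, 0.0]):
    output = step(x, output)
    print(f"turn {n}: output = {output:.3f}")
```

Even this toy unit shows why such nets are hard to analyse: the response to an input now depends on the whole history of earlier inputs.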

So if you could give a computer millions of tiny CPUs that work in parallel, it might develop intelligence. The only thing we can do to mimic this is simulate parallelism on a small scale (every PC does this today to some extent) and hope we can create a small fragment of intelligence in a very small field.

But in the end, we'll get there... And then we can all grumble when our computer refuses to boot because it does not feel like it :-)

Greetz,



******************************
StrategicAlliance

On the day we create intelligence and consciousness, mankind becomes God.
On the day we create intelligence and consciousness, mankind becomes obsolete...
******************************
My answer to a post by krez a few posts before...
I don't think either that a human can make random decisions.
My theory is that nothing, really nothing, is random.
Everything just has so many variables that you can't calculate them all!
Yeah right.
But if you think so, a human must "experience" all these variables.
Feelings? A good question. A result of those variables, like I said.

Another thing I heard a few days ago:
I read in a magazine how scientists try to explain how the human brain
works. An interesting article. Try to find something like that on the internet. It's also about memory.

Sorry, I have school now, I must run!
cu next: C.Ruiz
quote:Original post by cruiz
Or this: Say a random number!
Why can a human say a random number?
I mean, a computer's "random" is never truly random.


Actually, they cannot. Humans are hopeless at generating random numbers. You might try to suggest that they pick a number at random from the set of possible numbers, but even then, they show bias. If you want to test this, here's a very quick thing to try. As fast as you can (being as honest as you can about not planning what sequence of digits to write), write out a very long integer (say 50 or 100 digits). Now perform a quick frequency count on the digits. You'll find some are highly favoured while others are not!
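The frequency count described above is easy to automate. A quick sketch; the 50-digit sample here is made up for illustration (note how it over-uses 7 and never uses 0 or 5, the kind of bias a real hand-written sequence tends to show):

```python
from collections import Counter

# Tally how often each digit 0-9 appears in a long hand-written integer.
def digit_frequencies(digits):
    return Counter(digits)

# Made-up 50-digit sample standing in for what a person might write.
sample = "37274917382927391738291736173829173627391727381937"
counts = digit_frequencies(sample)
for digit in "0123456789":
    print(digit, counts.get(digit, 0))
```

A uniformly random 50-digit string would give roughly 5 of each digit; large deviations from that are the bias Timkin describes.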

Timkin

There's a tonne of good discussion in this thread, which is great to see... just a couple of thoughts from me...

One way researchers judge the current accomplishments of AI is to compare the abilities of the AI to a particular level of human development. Many consider the current status of AI to be around the 3 year old mark. That is, consider the abilities of an average 3 year old child and these can be reproduced using current AI technology.

Emotions are possible in artificial agents... unfortunately there is (and has been for hundreds of years) a raging philosophical debate about whether other beings can experience emotions as we humans do. Consider this though: there is a very strong correlation between emotional states and neuro-chemical levels in the brain, suggesting a causal connection. If emotions are governed by the state of the brain then it is reasonable to assert that emotions in other agents are also governed by some internal state. If the outward behaviour of the other agent shows a correlation with our own behaviours while we are in certain emotional states, then it is reasonable to assume that while displaying these behaviours the other agent is experiencing an emotion and it could be labelled based on the behaviour of the agent, just as we humans label the emotions of each other.

Of course, there are complications. Consider edotorpedo's suggestion that crying is hard-coded into the brain to get attention. Actually, when you're an infant, crying is the physiological response to discomfort, particularly internal discomfort (stress) brought about by things like hunger, fatigue and certain emotional states. Parents and carers have a certain response (be it physiological or learned), which is to try to ease the infant's discomfort (which requires the carer to pay attention to the infant). As the infant grows, it learns the correlation between crying and attention, and throughout the following years it often uses this correlation to gain attention by crying when it chooses to. This suggests that the child may not be experiencing the same emotional state it experienced as an infant, but that it is deceiving the parent as to its emotional state.

One final thought on the issue of choice:

If you want to ask how an artificial agent might make a choice, ask how you make such choices. They are generally selected with a bias. Humans tend to make choices based on their ability to justify the 'correctness' of the choice, given input information. We would like to think that we are rational agents (selecting the action that maximises our expected utility in that situation) however humans are often irrational (possibly because we can maintain paradoxes in our internal logic and still perform reasoning - something AI struggles with). Additionally, correctness is something we judge based on our experiences.
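The "rational agent" view above can be made concrete. A tiny sketch of choice as expected-utility maximisation; the actions, probabilities and utilities below are invented purely for illustration:

```python
# Expected utility of an action: sum over possible outcomes of
# p(outcome) * utility(outcome). All numbers are made up.
OUTCOMES = {
    "take umbrella":  [(0.3, 5), (0.7, 4)],    # rain / no rain
    "leave umbrella": [(0.3, -10), (0.7, 6)],
}

def expected_utility(action):
    return sum(p * u for p, u in OUTCOMES[action])

# A rational agent selects the action maximising expected utility.
best = max(OUTCOMES, key=expected_utility)
print(best)
```

Humans, as the post notes, frequently depart from this calculation; the model describes the idealised agent, not actual behaviour.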


Ultimately, to create an artificial agent that has the same faculties and behaviours as a human, you would need to create that agent as an infant and have it live in our world, interacting with humans. If it had sufficient internal mechanisms to allow it to behave in a manner indiscernible from other humans, then you might consider that you had succeeded in creating a human-like agent. Then of course, you're stuck in the moral dilemma of deciding whether it should be considered human and have the rights of a human!

Cheers,

Timkin

Edited by - Timkin on January 8, 2002 8:34:17 PM
quote:Original post by Timkin

Consider edotorpedo's suggestion that crying is hard-coded into the brain to get attention. Actually, when you're an infant, crying is the physiological response to discomfort, particularly internal discomfort (stress) brought about by things like hunger, fatigue and certain emotional states.


I think what edotorpedo meant was that the physiological response *is* the hard-coded behaviour. As far as I know, infants' brains do cause crying. If so, then it must be the case that their brains are initially wired for crying in certain states.

With regard to the initial topic, I think it highly likely that AI programs will eventually think like humans. But you probably wouldn't want them to. Humans are dumb and often nasty.
quote:Original post by Argus
With regard to the initial topic, I think it highly likely that AI programs will eventually think like humans.

heh heh heh... the first time i read this through i thought you said "AI programmers"... heh heh heh..
--- krez ([email="krez_AT_optonline_DOT_net"]krez_AT_optonline_DOT_net[/email])
If you have 2 identical doors, symmetrical layout and so on, most people will choose the one on the right. (I read that in an article in New Scientist about a few habits of humans; in some fancy department stores, people pay extra to have their products on the right-hand side.)

Anyway, I think one of the big things that is lacking is the actual interface. Most people would try to keep things simple by just having, say, text input/output to start with, but what that means is that the computer probably won't learn a thing.

Take a human baby. After its first word, it receives praise and attention from people; even though it doesn't understand what the people are saying, it still must be able to understand that it is being rewarded and not punished. There need to be some "hard coded" things in the baby, things like pain, which aren't learnt. Otherwise, you might get people growing up who believe that pain is a reward, or whatever. But if you try to hard-code those things in, you must still be low-level enough. Taking the text example, if you program it so that the sight of the phrase "Don't be stupid" is a pain statement, then what about when it is used jokingly? The concepts of pain need to be very low-level, and then built upon; you can't skip steps.
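One way to read that: the reward/pain signal itself is innate, while the value attached to each situation is learnt from experience. A toy sketch of that split; the action names and reward values are invented for illustration:

```python
import random

# The reward signal is hard-coded ("innate"); the value the learner
# attaches to each action is learnt from repeated experience.
INNATE_REWARD = {"food": 1.0, "loud noise": -1.0, "silence": 0.0}

values = {action: 0.0 for action in INNATE_REWARD}
LEARNING_RATE = 0.5

random.seed(0)  # fixed seed so the run is repeatable
for _ in range(100):
    action = random.choice(list(values))
    reward = INNATE_REWARD[action]                                # innate
    values[action] += LEARNING_RATE * (reward - values[action])  # learnt

print(max(values, key=values.get))
```

The learner never needs "pain" explained in words: the low-level signal is wired in, and everything above it is association, which matches the point about not skipping steps.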

For the computer to become intelligent, it needs a full set of senses, so it can learn to interpret tone of voice, body language and so on. Without those, it will not be able to learn from such signals; if you yell at a baby, I don't think it would like it, and it would be a punishment. The computer would also need to understand that type of thing.

Finally, remember that a human isn't born literate. While they are intelligent, we don't really have any way to see it. Since a human takes so long to get to a level where they can talk (especially in computer years), I think that we will get together more computing power than is in the brain, and then begin to teach it, so once it reaches human levels of intelligence, it could continue on, and be even smarter than us.

And when that happens, does it get treated like a human? If we can't tell the difference between a computer and a human, then should we treat the computer as a human?

Trying is the first step towards failure.

This topic is closed to new replies.
