Archived

This topic is now archived and is closed to further replies.

kenwi

Anything like a "real" AI.

Recommended Posts

kenwi    122
Just what it says. I recently had a discussion with someone who claimed it was possible to make a "real" AI that would process and make its own decisions based on nothing. I mean that it's not possible to have an AI "think" on its own. An AI cannot have a fantasy: for example, having a word, knowing the meaning of the word, describing the word with other words, using it in different phrases, and most of all making up its own words to describe other things. An AI only draws conclusions from things it already knows, right? I'm not sure if you know what I mean, but I believe it's not possible to make an AI close to human intelligence, at least not with today's technology. In short, my opinion is: a machine can only know, it cannot understand.

Can anyone help me out here on this issue and explain some facts about why it would or wouldn't be possible for an AI to work like this?

Thanks in advance,
Kenneth Wilhelmsen

Download my little game project HERE
--------------------------
He who joyfully marches to music in rank and file has already earned my contempt. He has been given a large brain by mistake, since for him the spinal cord would fully suffice. This disgrace to civilization should be done away with at once. Heroism at command, senseless brutality, deplorable love-of-country stance, how violently I hate all this, how despicable and ignoble war is; I would rather be torn to shreds than be a part of so base an action! It is my conviction that killing under the cloak of war is nothing but an act of murder.

Edited by - kenwi on November 3, 2001 2:38:24 PM

Colin Jeanne    1114
quote:

a "real" AI that would process and make its own decisions based on nothing.



You don't make decisions based on nothing. You need stimulus (for your choices) to make decisions.

quote:

I mean, that it's not possible to have an AI "think" on its own.



If it looks like it's thinking on its own, is there really a difference?

quote:

An AI cannot have a fantasy: for example, having a word, knowing the meaning of the word, describing the word with other words, and using it in different phrases. And most of all, making its own words to describe other things.



Why not? A word, in the human sense, is an oral representation of something we can recall from memory. Save an object's most notable features to memory (to save space, just like we humans do), associate them with a sound, and now you have somewhat of an understanding of that object. The same can be done with actions or specific features. The hard part would be having the AI choose which words to use at what time (which we all struggle with sometimes anyway) in order to convey a specific idea.
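The feature-to-word association described above can be sketched in a few lines. This is a toy illustration only, with invented class and method names: notable features are stored under a word, and the best-matching word is recalled later from a partial observation.

```python
# Toy sketch: associate a word with an object's notable features,
# then recall the word from whichever features are observed later.
class Lexicon:
    def __init__(self):
        self.words = {}  # word -> set of notable features

    def learn(self, word, features):
        """Associate a word with the features that identify it."""
        self.words[word] = set(features)

    def recall(self, observed):
        """Return the word whose stored features best overlap the observation."""
        observed = set(observed)
        return max(self.words, key=lambda w: len(self.words[w] & observed))

lexicon = Lexicon()
lexicon.learn("dog", ["four legs", "fur", "barks", "tail"])
lexicon.learn("bird", ["two legs", "feathers", "sings", "wings"])

print(lexicon.recall(["fur", "barks"]))  # -> dog
```

The hard part Colin mentions, choosing which word to use when, would live on top of a structure like this, where many words partially match and context has to break the tie.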

quote:

An AI only draws conclusions from things it already knows, right?



So do you. If I talk about a Snorgoflax, then you'll have no idea what I'm talking about.

quote:

A machine can only know, it cannot understand.



I disagree. What is understanding but a fuller form of knowledge? If I say to the machine "The chicken crossed the road" and it now knows that the chicken has crossed the road and can reference that information on its own, does it not understand?
I think that if nature can create consciousness from a pile of grey matter, we can do the same.

Invader X
Invader's Realm

Guest Anonymous Poster
The problem isn't getting the AI to remember (store) facts and knowledge; it is getting it to understand the relationships between them.
A computer system can store all sorts of facts, and even store some relationships between them. But it can't intuitively figure out new relationships the way humans can (at least as far as I have heard).
If computers could do that, we would have SkyNet already, and Terminator 2 would have happened.

Nazrix    307
quote:
Original post by Invader X
Why wouldn't they be able to? What makes us so special?

Invader X
Invader's Realm


I think it's consciousness. We're more than just a bunch of neurons and brain cells, but a computer is just a bunch of ones and zeros. But this opens one huge can of worms.


A CRPG in development...

Need help? Well, go FAQ yourself.

Edited by - Nazrix on November 3, 2001 9:50:17 PM

Null and Void    1088
quote:
Original post by Nazrix
We're more than just a bunch of neurons and brain cells, but a computer is just a bunch of ones and zeros. But this opens one huge can of worms.


Can you prove that? If you can, you could probably apply the same argument to computers.

[Resist Windows XP's Invasive Production Activation Technology!]

Nazrix    307
quote:
Original post by Null and Void

Can you prove that? If you can, you could probably apply the same argument to computers.

[Resist Windows XP's Invasive Production Activation Technology!]


Okay, you have a point.

How about this: we have the ability of self-consciousness. The neurons in our brains are able to be aware of themselves and ask why they exist, among other questions.

I guess I can't prove that computers and animals don't do that, though.

Nazrix    307
quote:
Original post by Invader X
What else are we?

Invader X
Invader's Realm



That's where you delve into religion, or the lack thereof. That's where the "can of worms" is opened.

Okay... everyone can thank Nazrix for causing the topic to go completely off-course.


A CRPG in development...

Need help? Well, go FAQ yourself.

Edited by - Nazrix on November 4, 2001 1:49:31 AM

bishop_pass    109
quote:
Original post by Nazrix
How about this: we have the ability of self-consciousness. The neurons in our brains are able to be aware of themselves and ask why they exist, among other questions.


But a real AI should ultimately have knowledge of its own internal mechanisms, its own physical structure, and the ability to examine its own memory space. Of course, humans didn't always know what was in their brains or how they worked. We still don't know entirely how they work.

As for consciousness, if we actually knew how and why consciousness exists, we would be better able to answer whether a computer could have it.

Regarding an AI and knowledge of its own internal mechanisms, let's look at a desktop computer. If we are ever to interact at an intelligent level with a desktop computer, it stands to reason that an AI residing on your desktop would have knowledge of all of the following:


  • It should know that it's software in RAM and on a hard drive, in a machine on your desk, in a particular room, in a living space, in whatever town you live in, wherever you live in the world.
  • It's composed of ones and zeros, and these form a sequence of instructions.
  • It should have an understanding of how the file system on your machine works. Not just a program which gives you dialog boxes to browse files or is able to pull up directories, but a fundamental understanding of what a file is (a sequence of bytes), what recursive data structures are (the directory tree), what lists are (the lists of files in a directory), what a picture is, what file operations are (not just being able to do them), what something like DELETE really means, etc.
  • It should know what time is. I'm not talking about a clock or counter; I'm talking about time itself: its relativeness, its value, and the relationship between intervals of time then, now, and in the future. It should know what nighttime is, what daytime is, when you likely slept, when executing a long task might not be appropriate, etc.
  • It should know what a keypress is, and should be able to tap into the rhythm of your keypresses, effectively assisting you in your activities, but in a comprehending way, as the synergies of all of its knowledge work together.
  • Much more...
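As a toy illustration of the file-system item in the list above, here is a minimal sketch (all class and method names are my own invention) of a file as a sequence of bytes, a directory as a recursive data structure, and DELETE as an operation with a concrete meaning:

```python
# Toy model: a file is a sequence of bytes; a directory is a
# recursive structure whose entries are files or other directories.
class File:
    def __init__(self, name, data=b""):
        self.name = name
        self.data = data  # a file is just a sequence of bytes

class Directory:
    def __init__(self, name):
        self.name = name
        self.entries = {}  # name -> File or Directory (recursive)

    def add(self, entry):
        self.entries[entry.name] = entry

    def delete(self, name):
        # What DELETE really means: remove the entry from the tree.
        del self.entries[name]

    def walk(self, depth=0):
        """Recursively list the tree, like a directory listing."""
        yield "  " * depth + self.name + "/"
        for entry in self.entries.values():
            if isinstance(entry, Directory):
                yield from entry.walk(depth + 1)
            else:
                yield "  " * (depth + 1) + entry.name

root = Directory("root")
docs = Directory("docs")
docs.add(File("readme.txt", b"hello"))
root.add(docs)
print("\n".join(root.walk()))
```

The point of the list is that an AI should grasp the concepts this code merely encodes, not just be able to execute the operations.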


I'll give you an example of something that is NOT an AI, but rather something somewhat useless which masquerades as an AI and goes a long way towards convincing people that AIs will always be stupid: Microsoft's personal assistants in packages like Word 2000. These little creatures embody none of the concepts I listed above. They have absolutely no understanding of what you are, what you are doing, what they are, or what the computer is.




___________________________________

krez    443
quote:
Original post by Invader X
Don't forget that an AI needs to want things that benefit it. If you try to delete the AI's program or one of its main files, it would resist you by any means possible.

True... SkyNet didn't nuke the Russians until they tried to unplug it...

--- krez (krezisback@aol.com)

Colin Jeanne    1114
Also, by "wants" I don't mean automatic wants like David from the movie A.I. had. He was very fake and wouldn't pass for conscious at all. I mean wants which directly benefit the AI in some way and are thought about before they are wanted.

Emotions could be emulated with a similar beneficial/not-beneficial breakdown, making the difficult part of the AI the actual analytical portion.
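The beneficial/not-beneficial breakdown suggested above could be sketched like this. The function names and benefit numbers are purely hypothetical: actions are scored by estimated benefit to the agent, and a crude "emotion" is just a label read off the score.

```python
# Hedged sketch: pick actions by estimated benefit, and derive a
# coarse emotional label from the resulting benefit. Illustrative only.
def choose_action(actions, benefit):
    """Pick the action with the highest estimated benefit."""
    return max(actions, key=benefit)

def emotion(net_benefit):
    """Map a benefit value onto a coarse emotional label."""
    if net_benefit > 0:
        return "content"
    if net_benefit < 0:
        return "distressed"
    return "neutral"

# Invented benefit estimates for an AI protecting its own files.
benefits = {"allow deletion": -100, "back up files": 40, "do nothing": 0}
action = choose_action(benefits, benefits.get)
print(action, emotion(benefits[action]))  # -> back up files content
```

As Invader X notes, the hard part is not this bookkeeping but the analytical portion that would produce honest benefit estimates in the first place.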

Invader X
Invader's Realm

ragonastick    134
Has the idea of consciousness ever been explained? I'm not too knowledgeable on this topic, so I'm just wondering if there is a certain configuration of neurons which allows "sight", in the sense of what I see, and thought like I think... stupid English, not having the words I need to express myself.

Trying is the first step towards failure.

Dean Harding    546
quote:
Original post by ragonastick
Has the idea of consciousness ever been explained? I'm not too knowledgeable on this topic, so I'm just wondering if there is a certain configuration of neurons which allows "sight", in the sense of what I see, and thought like I think... stupid English, not having the words I need to express myself.



It's tricky, but it's possible to find out which parts of the brain map to different functions. For example, you can find out which part of the brain is responsible for sound by giving your subject some sort of sound stimulus (like ringing a bell) and seeing which parts of the brain become more active.



codeka.com - Just click it.

Timkin    864
Just a couple of points I'd like to add:

quote:
Original post by Nazrix
How about this: we have the ability of self-consciousness. The neurons in our brains are able to be aware of themselves and ask why they exist, among other questions.



Answer me this: which neurons in your brain are currently involved with interpreting the characters you are currently reading on this screen?

Our brains are NOT aware of their internal states, not by any stretch of the imagination. In fact, experiments suggest that our brains are very delicately balanced complex systems that have evolved to self-regulate. If you throw the balance out by just a small amount, the system collapses either into chaos (leading to mental disorders of varying degrees) or into a shutdown of the brain's control processes and ultimately death.

Out of this dynamic process emerges cognition and consciousness. There is a large and growing group of scientists who believe that animals have consciousness as we humans do, yet do not display the same cognitive abilities. This may suggest that consciousness and cognition are not as highly correlated as we once thought!

On the issue of whether something 'looks' intelligent as opposed to 'being' intelligent, might I suggest that interested readers obtain some of Stevan Harnad's papers on the Turing Test, the modified Turing Test, and the symbol grounding problem. They're available online.

Ultimately, we must consider that if something looks intelligent (through its actions and behaviour and, if possible, through discourse with that agent), then morally we must treat it as intelligent and accord it the same rights we expect as intelligent agents. For this is exactly what we do with each other. We have no means of ascertaining whether another human is intelligent other than observation and expectation: the observation of their behaviour (relating it to our own), and the expectation that since they are physiologically similar to ourselves, and we believe we are intelligent, they must be intelligent too. It is the presumption that physiological similarity is required before we can accept something as intelligent that makes it difficult for us to believe that other agents, be they organic or inorganic, could be 'intelligent'.

If we co-existed on this planet with another species that was intelligent and that we could readily communicate with, then I doubt very much that so many people would be so quick to assume that machines could not be intelligent.

Regards,

Timkin

Edited by - Timkin on November 5, 2001 7:59:37 PM

Nazrix    307
My point was more a matter of:

As far as we know, animals do not go around pondering life's questions or other such activities. The most they *seem* to think is "I'm hungry" or "I'm happy". Yet wouldn't you say most animals are more autonomous than computers are so far?

Is it "possible" to make computers have the level of intelligence that we humans do? Probably. But I don't think we understand enough about ourselves to do that at this point in our technological development, do we?


A CRPG in development...

Need help? Well, go FAQ yourself.

Nazrix    307
quote:
Original post by Timkin

Out of this dynamic process emerges cognition and consciousness. There is a large and growing group of scientists that believe that animals have consciousness as we humans do and yet do not display the same cognitive abilities as us. This may suggest that consciousness and cognition are not as highly correlated as we once thought!




That's pretty interesting. Actually, if you look at humans from a very objective and very macro view, we probably look like bees or ants. We travel to the same basic places each day, we travel to get food, we mate and reproduce, and we fight each other for resources.




A CRPG in development...

Need help? Well, go FAQ yourself.

MikeD    158
Timkin said: If you throw the balance out by just a small amount, the system collapses either into chaos (leading to various mental disorders of varying degrees) or to a shutdown of the control processes of the brain and ultimately death.

Just to add to this point, there's another way of viewing it. Of course the brain can only survive in a small, constrained set of physical environments (i.e. that created by our own skull), but the brain is a largely homeostatic, ultrastable piece of wetware. You can be born into a huge range of physical and social environments that affect the development of the brain. The brain rearranges its functioning to deal with its input and produce behaviour while trying to keep certain bodily (including brain) parameters within boundaries: things like happiness, depression, pain, pleasure, etc.

Should you be thrown from one social environment (say, New York City) into another (such as the African plains), you can adapt to the social constraints formed by the society you join, as well as to the flora and fauna relevant to your survival. The point where all these internal adaptations break down is when the environment doesn't allow you to keep your internal parameters within boundaries (situations of physical or mental abuse, or being trapped alone on a desert island where social interaction is prevented). It's at these points that the brain, in attempting to adapt and finding no place of equilibrium within which the parameters are conserved, breaks down, and mental illness kicks in. Mental illness is merely a state your brain enters to deal with a given environment in the best possible way.

What I'm trying to say is that the brain is very robust and can deal with the world and a plethora of environments, even those that take it to its limit. The brain may be carefully balanced, but that balance is maintained by the brain itself, even under significant perturbations from the environment.

Mike

Guest Anonymous Poster
MikeD, what are your credentials for discussing the brain? Nothing personal, but it sounds like you are making this up. If you have a degree in brainology or something, I'll let it go; but some of the things you said aren't true.

Guest Anonymous Poster
Computers can be intelligent... but CPU speeds won't allow it in real time yet. Things like pattern recognition could take a long time on a computer. Think about everyday life: you see a dog, and you instantly recognize it as a dog based on other dogs you've seen. A computer could do the same thing, but try writing a program that checks images against other images for similarities. Now realize that the dog can be in any position in a single frame, and of any size; that makes recognition much harder and much more intensive. You can make a program which sorts through images from its past and finds patterns, but it would take a LONG time to work. Now add things like sounds, smells, touch, etc., and you see why computers of today can't do "real" AI.

Transmeta is the closest I've seen to good AI so far: it is constantly figuring out ways to optimize the instructions it's given to run more efficiently, and the coders didn't have to program everything into it; it "learns" the programs you run and optimizes itself for them.
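The cost Billy describes can be made concrete with a brute-force template matcher. This is an illustrative sketch, not a real recognition system: a small template is compared against every position in a larger image by summed pixel difference.

```python
# Naive template matching: slide the template over every offset in
# the image and score each offset by summed absolute pixel difference.
def match_template(image, template):
    """Return the (row, col) offset where the template fits best."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_pos, best_score = None, float("inf")
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = sum(
                abs(image[r + y][c + x] - template[y][x])
                for y in range(th)
                for x in range(tw)
            )
            if score < best_score:
                best_pos, best_score = (r, c), score
    return best_pos

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
print(match_template(image, template))  # -> (1, 1)
```

Even this crude search costs roughly image size times template size per comparison, and handling arbitrary position, scale, rotation, and lighting multiplies that further, which is exactly the intractability the post points at.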

Billy

Shadows    133
I don't believe that it is possible to create real "AI" with the programming languages of today. I know that this is going to upset a lot of AI fanatics who think that carbon-based life forms are just another step in evolution, but let's face it: we don't understand even a scrap of how our own brains work, so how could we design a real AI? If anyone can give me a real idea of how to implement some of these examples, I'd be most... impressed =)

Can you make an AI see a word in a foreign language and understand its meaning without translating it into its own native language? (As I do with English and Swedish most of the time.)

You can tell a computer that it is a computer, that its name is AZ12124we, and that it's located in Göteborg, Sweden. You can show the computer where Göteborg is on a map, but can you make it understand why or where Göteborg is, and not simply know it as a fact?

Can you make a computer experience real feelings, not just express them? Not program it to show affection or love, but make a computer truly love someone?

And in the end, can you make a computer that begins as a single MB of information contained inside a chipset and grows and evolves into a full-grown K7 1,533? (OK, a really, really badly illustrated example, but you get the idea... don't you? =)

I could go on forever, but I think I've pretty much made my position clear. I know that a lot of people won't agree with me on this, and that some probably will. And that's the beauty of life and nature: we're all different.

// The Shadows
Fredrik Nilsson - Rogue Software
Sorry 'bout my bad English; I'm not of this world.
