Optimus Prime

Strong AI vs. Weak AI


Hello, I'm wondering: how many of you out there are working on Strong AI projects? (See below for links on Strong AI vs. Weak AI.) I ask because I seem to have run the gamut in terms of developing simple AI agents, and I'm looking for something a little more complex, in the field of Strong AI rather than Weak or Narrow AI. Dr. Ben Goertzel's venture with Novamente seems interesting; however, I think he and his team are biting off more than they can chew. A simpler approach with a simpler goal would be a better start. If you are interested, you can take a look at some of his talks:

http://video.google.com/videoplay?docid=569132223226741332&q=agi
http://video.google.com/videoplay?docid=574581743856443641&q=agi

Strong AI: http://en.wikipedia.org/wiki/Strong_AI
Weak AI: http://en.wikipedia.org/wiki/Weak_AI#Weak_AI

Several things:

First, Strong AI isn't necessarily something one programs towards, because it's more of a philosophical stance than a practical one. Anything a Strong AI can do, a Weak AI can do too. The distinction isn't in terms of what they can do; it's about what's under the hood. Is the AI actually sapient, or just producing a perfect mimicry?

The Novamente program, from what I can tell, isn't trying to head towards Strong AI. It's heading towards something that its creators claim might be able to reason intelligently, but that isn't the same as Strong AI.

From what I can tell from the lectures, the Novamente team is building a reasonable attempt at a general-purpose reasoning engine. That project might be interesting -- I didn't get much from the lectures, and there weren't any real demonstrations. Then they hope that the Singularity will arrive and magically solve all their problems in turning their engine into something that might be labeled "Strong AI".

The first lecture, by the way, was fairly hokey: it was just a catalogue of all the wishes people have been hoping will come about after the supposed Singularity. The guy also seemed very proud of having worked out that, given infinite computing power, you could program anything in only a few lines. I assume he hadn't heard of a Universal Turing Machine before, which was worked out 70 years ago.
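(To make the Universal Turing Machine point concrete: the universal part really can be just a few lines; it's the program you feed it that carries the complexity. A toy sketch in Python - a Brainfuck interpreter, chosen only because that language has just eight instructions; the input instruction is omitted for brevity:)

# A toy universal interpreter: ~20 lines that can, given enough tape and
# time, run any computable program. The interpreter stays tiny; the
# programs and data it runs are what grow.
def run_bf(code, tape_len=30000):
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    jumps, stack = {}, []
    for i, c in enumerate(code):        # pre-match the loop brackets
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return ''.join(out)

print(run_bf('++++++++[>+++++++++<-]>.+.'))   # -> "HI"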

Really, most people working in AI don't particularly care about Strong AI -- that's left to the philosophers. Since no one has yet come up with a situation in which a Strong AI would behave any differently than a Weak AI, it's a little pointless.

Guest Anonymous Poster
Quote:
Original post by Asbestos
Since no one has yet come up with a situation in which a Strong AI would behave any differently than a Weak AI, it's a little pointless.


I don't think I understand this statement.

When I think of Weak AI, I think of individual problems that an AI is able to solve. For example: theorem proving, advanced 3D pathfinding, medical diagnosis, etc.

When I think of Strong AI, I think of a well-rounded, "human-like" AI that is able to perform all of these tasks on some level - the level at which they can be performed depending on the AI's level of intelligence.

So in other words, there is no specific situation for Strong AI, because by definition Strong AI is for _all_ situations.




Oops. The post above is mine; I just didn't sign in.

Quote:
Original post by Asbestos
The distinction isn't in terms of what they can do; it's about what's under the hood. Is the AI actually sapient, or just producing a perfect mimicry?


Philosophically, I believe that no distinction can be made between sapience and a perfect mimicry of sapience. Sapience is sapience. The only way to make a distinction between two sapient AIs would be to compare source code and hardware (as you said).

Quote:
Original post by Anonymous Poster
When I think of Weak AI, I think of individual problems that an AI is able to solve. For example: theorem proving, advanced 3D pathfinding, medical diagnosis, etc.

When I think of Strong AI, I think of a well-rounded, "human-like" AI that is able to perform all of these tasks on some level - the level at which they can be performed depending on the AI's level of intelligence.


That's not quite the distinction between Strong and Weak AI.

Searle, who coined the terms "Strong AI" and "Weak AI", explains it best with the Chinese Room argument, which I'm sure you've read, but I'll just summarize it here:

Searle envisions a room in which a person is stationed with a very large rule book - in this case, a rule book for conversing in Chinese. Stimuli come into the room in the form of written Chinese texts. The person, who speaks no Chinese, is able to use the rule book to send out replies in Chinese, creating a perfect conversation. The person still does not understand Chinese, however, and so is an example of Weak AI: there's nothing but rules under the hood; there is no actual understanding.
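(A toy version of "nothing but rules under the hood", in Python - the phrasebook entries are invented for illustration; the point is that even a perfect, exhaustive version of this table would still be just rules:)

# A toy Chinese Room: canned stimulus -> response rules, no understanding.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会.",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(message):
    # The "person in the room" just looks symbols up; nothing in here
    # knows what any of the symbols mean.
    return RULE_BOOK.get(message, "请再说一遍.")   # "Please say that again."

print(chinese_room("你好吗?"))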

The point of the argument isn't about the limits of what the person in the room can do. There's no reason why, with an even bigger book, the guy inside couldn't conduct perfect conversations, do math and logic puzzles, wage war, write a book, and reason about politics, human affairs, love, art, and all the rest. Searle's point isn't that the guy wouldn't be able to do all this; it's that he would still have no actual understanding: he'd just be following rules.

So in terms of AI, the outward behaviors make no difference. What matters is whether the AI is actually sapient, or just "pretending" to be.

Now one may certainly not agree with Searle's point. Philosophers debate whether or not the person in the room actually understands Chinese; Turing proponents say that "if it looks like intelligence, then it is intelligence"; people programming just shrug their shoulders and keep programming. After all, to someone programming, there's no difference as to whether it is actually sapient, or just pretending to be.

This is why it's a philosophical point, not a programming point.

This may sound silly, but wouldn't any programmed AI be a Weak AI by definition? I mean, even neural network simulations produce answers by following blind rules, yet they're the best model of the human brain we've conceived so far. Even a reasoning engine is still following what are ultimately blind rules to create the reasoning it's simulating.

Just as a philosophical point, where would the line be drawn?

Quote:
Original post by IndigoDarkwolf
This may sound silly, but wouldn't any programmed AI be a Weak AI by definition? I mean, even neural network simulations produce answers by following blind rules, yet they're the best model of the human brain we've conceived so far. Even a reasoning engine is still following what are ultimately blind rules to create the reasoning it's simulating.

Just as a philosophical point, where would the line be drawn?


The human brain is just a big neural net too, though. The only difference is how we've been programmed, not whether we're programmed at all.

We're programmed by past experience (learning - something neural nets are good at) and by hardcoded genetics. Both are things we can do in software too.
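(A minimal sketch of "programmed by past experience" in software: a single perceptron picking up the AND function purely from labelled examples. A toy, not a brain model:)

# Learning from experience, reduced to its simplest software form:
# weight updates driven by the error on past examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                          # a few passes over the data
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                   # experience drives the update
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        b += rate * err

# After training, the perceptron reproduces AND on all four inputs.
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in examples])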

Personally, I think it's an interesting discussion. As an AI programmer I'm not necessarily so uninterested in the philosophy underlying AI; maybe that's because my first introduction to AI came from a psychology curriculum rather than a machine intelligence one.

As far as Strong AI goes, I'm certainly not working on a project to develop it, though! Don't want to build Skynet just yet... :)

Quote:
Original post by IndigoDarkwolf
This may sound silly, but wouldn't any programmed AI be a Weak AI by definition? I mean, even neural network simulations produce answers by following blind rules, yet they're the best model of the human brain we've conceived so far. Even a reasoning engine is still following what are ultimately blind rules to create the reasoning it's simulating.

Just as a philosophical point, where would the line be drawn?


Well, since I believe that consciousness is an evolved process, and that there is no single spark or neuron or anything that flips a switch from "unconscious" to "conscious" (and that humans are more conscious than ants, which are more conscious than mollusks), I don't think there is a line that can be drawn. However, we've reached the point where the discussion can't really be settled in an AI forum, and where we'd all just be repeating things said many times before. For some interesting reading, straight from EndNote:

Nagel, Thomas. 1974. "What Is It Like to Be a Bat?"
Weiskrantz, L. 1986. Blindsight: A Case Study and Its Implications.
Dennett, Daniel. 1991. Consciousness Explained.
Chalmers, David. 1995. "Facing Up to the Problem of Consciousness"
Chalmers, David. 1996. The Conscious Mind: In Search of a Fundamental Theory.
Dennett, Daniel. 1999. "The Zombic Hunch: Extinction of an Intuition?"
Harnad, S. 2005. "What Is Consciousness?"

Quote:
Original post by IndigoDarkwolf
This may sound silly, but wouldn't any programmed AI be a Weak AI by definition? I mean, even neural network simulations produce answers by following blind rules, yet they're the best model of the human brain we've conceived so far. Even a reasoning engine is still following what are ultimately blind rules to create the reasoning it's simulating.

Just as a philosophical point, where would the line be drawn?


For the nth time: the kind of neural network generally used in AI is not a model of the brain at all. The brain is much, much more than just a neural net; as far as recent biology research shows, it doesn't even function like one.

The terms "Strong AI" and "Weak AI" were probably first used by some incompetent AI researchers who were trying to give some credibility to their work.

At any rate, there is nothing in that "field" that I would consider of any use for practical game AI.

Quote:
Original post by Asbestos
That's not quite the distinction between Strong and Weak AI.

[...]

So in terms of AI, the outward behaviors make no difference. What matters is whether the AI is actually sapient, or just "pretending" to be.

This is why it's a philosophical point, not a programming point.


I agree. I think I may have just misunderstood you earlier.

The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we one day have AIs that are indistinguishable from humans, there will no doubt be a debate over whether those AIs really "understand" what they're doing.

Like you said, this gets back to the Chinese Room idea. I have my own position on that debate, but I don't want to delve into it here.

Here is what I am interested in (this goes back to my original post about Novamente): given today's technology, would it be possible to program an AI agent in a simulated 3D environment with the ability to learn via "human capabilities" (i.e., natural language processing (text), vision, touch, smell, and taste)? I know that it's possible to develop each of these abilities independently of the others. However, it would be interesting to incorporate them all into one AI in a useful way, so that knowledge from one sense would be useful to another.

As I said above, Novamente seems to be overkill for the moment. While it may take a project 10 or 100 times that size to produce an actual human-like AI, for the moment I think much, much smaller steps need to be taken with regard to incorporating several senses into one AI.
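(To give the question a concrete shape, here is a minimal sketch of one possible architecture: every sense channel writes into a shared association store, so percepts that co-occur get linked across senses. All the names here are hypothetical, just to show the idea:)

# A hypothetical multi-sensory agent: co-occurring percepts from different
# senses are linked in one shared store, so knowledge gained through one
# sense is reachable from another.
from collections import defaultdict

class SharedMemory:
    def __init__(self):
        self.links = defaultdict(set)        # percept -> associated percepts

    def record(self, percepts):
        # Percepts arriving on the same tick become associated.
        for a in percepts:
            for b in percepts:
                if a != b:
                    self.links[a].add(b)

class Agent:
    def __init__(self):
        self.memory = SharedMemory()

    def tick(self, vision=None, text=None, touch=None):
        percepts = [("vision", vision), ("text", text), ("touch", touch)]
        self.memory.record([p for p in percepts if p[1] is not None])

agent = Agent()
agent.tick(vision="red ball", text="ball", touch="smooth")
# Cross-modal lookup: the word "ball" now evokes a sight and a feel.
print(agent.memory.links[("text", "ball")])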

I'm wondering why exactly anybody would even bother making a "strong AI"? What do we get out of it? A machine that mimics humans - i.e., a temperamental machine, prone to laziness and disobedience, unable to reason logically? No thanks.

Quote:

The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we one day have AIs that are indistinguishable from humans, there will no doubt be a debate over whether those AIs really "understand" what they're doing.

Like you said, this gets back to the Chinese Room idea.


Well, yes, it does. The Chinese Room argument, IMO, isn't compelling in the least. There's no reason why we cannot say that the whole system is intelligent. Focusing on the guy inside the room is a canard. After all, something intelligent had to set the system up, produce the book, etc.

Quote:

The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we one day have AIs that are indistinguishable from humans, there will no doubt be a debate over whether those AIs really "understand" what they're doing.

Like you said, this gets back to the Chinese Room idea.

Quote:
Original post by MDI
Well, yes, it does. The Chinese Room argument, IMO, isn't compelling in the least. There's no reason why we cannot say that the whole system is intelligent. Focusing on the guy inside the room is a canard. After all, something intelligent had to set the system up, produce the book, etc.


My opinion on the Chinese Room is: yes, the human inside the room does not "really" understand Chinese. However, I also believe that the brain inside the human doesn't "really" understand English, because in reality there is nothing to understand.

This is all philosophy though. It doesn't really reach any conclusion as to whether Strong AI is technically possible.

Quote:
Why would someone want to build Strong AI?

Quote:
Original post by aphydx
Right now I am convinced that Gödel's incompleteness theorem proves that we need a radically different computational model in order to achieve strong AI.

How would it prove that?

Quote:
Original post by Optimus Prime

[...]

My opinion on the Chinese Room is: yes, the human inside the room does not "really" understand Chinese. However, I also believe that the brain inside the human doesn't "really" understand English, because in reality there is nothing to understand.

This is all philosophy though. It doesn't really reach any conclusion as to whether Strong AI is technically possible.


Going further, I would say that human "understanding" relies on rules we are not aware of.

Guest Anonymous Poster
Regarding human consciousness/sapience/whatever you want to believe in:

Check into people with 'split brain' and 'alien hand' disorders, where significant sections (even half) of the brain have been disconnected from the rest yet continue to act upon the portions of the body that they control.

The 'conscious' half of the brain (or at least the part that talks) claims that there is an 'alien' presence living with them that sometimes acts uncontrollably, contrary to what they want... Makes you wonder whether there really is a part of the brain you could point at and say, 'consciousness is right there!'

Strong AI could potentially make the production of Weaker AI easier, so while the initial investment is the difficult part, the results could produce "good" AI far more quickly and easily than programming in every eventuality, as with Weaker AI. Given an infinite amount of time, Weaker AI can always produce the same outcomes as Strong AI, but in a finite world I really can't see Weaker AI consistently beating Strong AI over time.

Guest Anonymous Poster
Regarding Strong AI and programming consciousness:

You can't program something that you don't understand, and I don't believe for a second that tossing a bunch of GAs or NNs into a blender and shaking them up will accidentally build one either.

Guest Anonymous Poster
Quote:
Original post by Graphain
Strong AI could potentially make the production of Weaker AI easier, so while the initial investment is the difficult part, the results could produce "good" AI far more quickly and easily than programming in every eventuality, as with Weaker AI. Given an infinite amount of time, Weaker AI can always produce the same outcomes as Strong AI, but in a finite world I really can't see Weaker AI consistently beating Strong AI over time.


Uh, no. Pit the Strong AI known as You against the Weak AI of Chessmaster 4000: you will consistently lose. Within its own domain, intuition (random) is no match for the expert system - by definition of "expert".
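(For what it's worth, what the chess program does under the hood is nothing like intuition - it's exhaustive rule-following. A minimal negamax sketch in Python, with the game's actual rules abstracted behind three functions supplied by the caller:)

# The core of a game-playing "expert": brute-force tree search.
def negamax(state, depth, moves, play, score):
    """Return (value, best_move) from the viewpoint of the player to move."""
    legal = moves(state)
    if depth == 0 or not legal:
        return score(state), None
    best_val, best_move = float('-inf'), None
    for m in legal:
        val, _ = negamax(play(state, m), depth - 1, moves, play, score)
        if -val > best_val:                  # opponent's gain is our loss
            best_val, best_move = -val, m
    return best_val, best_move

# Tiny usage: a Nim-like game - take 1 or 2 from a pile, taking the last wins.
moves = lambda n: [m for m in (1, 2) if m <= n]
play = lambda n, m: n - m
score = lambda n: -1 if n == 0 else 0        # the player to move at 0 has lost
print(negamax(4, 10, moves, play, score))    # -> (1, 1): take one and win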

Quote:
Original post by aphydx
Penrose's argument for the higher abilities of the human mind (compared to an algorithmic axiomatic system) is applicable here, and argues that strong AI is not technically feasible, at least not with a Turing machine.

Right now I am convinced that Gödel's incompleteness theorem proves that we need a radically different computational model in order to achieve strong AI. However, it is unclear to me whether the incompleteness theorem also proves that we, as potential machines, cannot prove the consistency of, or even explain, our own cognitive faculties.

So I will settle for the temporary argument that strong AI will never be possible, unless we stumble upon it by accident.

But I could be convinced otherwise --edit: either by argument or maybe physical force--


Does anybody in the AI community actually take Penrose seriously?

Quote:
Original post by Asbestos
The guy also seemed very proud of having worked out that, given infinite computing power, you could program anything in only a few lines. I assume he hadn't heard of a Universal Turing Machine before, which was worked out 70 years ago.


Is that actually the case? I don't see how it possibly can be - the Kolmogorov complexity argument rules it out. I don't see how you could be a serious AI researcher and not be aware of that.

http://en.wikipedia.org/wiki/Kolmogorov_complexity
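(To spell the counting argument out - this is standard Kolmogorov-complexity material, sketched here, not anything from the lectures. Write $K_U(x)$ for the length of the shortest program that makes a universal machine $U$ print $x$:)

$$K_U(x) = \min\{\, |p| : U(p) = x \,\}$$
$$K_U(x) \le K_V(x) + c_{UV} \quad \text{(switching universal machines costs only an additive constant)}$$
$$\#\{\, p : |p| < n \,\} = 2^n - 1 < 2^n = \#\{\, x : |x| = n \,\}$$

So for every length $n$ there is some string with no program shorter than itself. Infinite computing power buys speed and memory, not shorter descriptions: "anything in a few lines" fails for almost every target.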

Quote:
Original post by Optimus Prime
The Weak vs. Strong AI debate is certainly a philosophical matter. Even if we one day have AIs that are indistinguishable from humans, there will no doubt be a debate over whether those AIs really "understand" what they're doing.


Why don't we just make the smartest AI possible, and ASK it? :P

