What is really AI?

Thread started by vallentin

Hnefi    386
Quote:
Original post by BreathOfLife
It is something that living things just seem to have, and something that is very difficult to re-create outside of a very, very narrow scope (e.g. computers that learn to play chess and backgammon well). Even then, the only real way that they can "learn" is because they have a near-flawless memory and razor-sharp math skills.

Not at all. It's true that it is difficult to implement learning in a large domain, but that is only the case because the complexity of learning increases very quickly with the size of the domain (probably exponentially). Remember that it takes the most intelligent creatures we know of about a year simply to learn how to walk. Most AIs aren't given that kind of timeframe to learn.

I should also point out that "flawless memory" has nothing to do with it. In fact, most machine learning techniques forgo that advantage and use various types of heuristics and approximations. I don't know of any learning technique, except the most trivial, that actually relies on perfect memory.

Quote:
Furthermore, they can only really learn to play the game well after we tell them all the rules and then explain what "playing well" means.

And this is different from humans how? If you don't tell a person the rules or goal of chess, they will never be able to play it. They may be able to infer the rules and goal by observing several games being played, but so could a computer.
Quote:
Computers and programs need everything defined for them: variable names, types, values. Learning would be like creating a new variable type at runtime, generating a bunch of operators to manipulate the data, and then implementing them.

I used to think so as well, but it's completely wrong. You assume that the data structures in machine code correspond directly to the objects and features observed by the program. They don't. Some types of machine learning do not even have discrete variables at all for the things observed, yet they manage to be very efficient learners anyway. Other strategies use a generalistic approach, where the observed objects are first classified with some very general technique (such as self-organizing maps) and then instanced as collections of classifications, i.e. features. Such techniques can be applied to any reasonable domain, though they may not be the most efficient.
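
To make the self-organizing-map idea concrete, here is a minimal sketch in Python/NumPy. It is only an illustration of the technique, not any particular library's API, and every class and variable name in it is my own invention. Notice that the map is never given discrete variables for the things it observes; inputs are simply pulled toward whichever grid node already resembles them, so similar observations end up classified into nearby cells.

```python
import numpy as np

class SOM:
    """Toy self-organizing map: clusters input vectors onto a small grid."""

    def __init__(self, grid_size, input_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One weight vector per grid node, initialized randomly.
        self.weights = rng.random((grid_size, grid_size, input_dim))
        self.coords = np.stack(np.meshgrid(np.arange(grid_size),
                                           np.arange(grid_size),
                                           indexing="ij"), axis=-1)

    def best_matching_unit(self, x):
        # The node whose weights lie closest to the input "wins".
        dists = np.linalg.norm(self.weights - x, axis=-1)
        return np.unravel_index(np.argmin(dists), dists.shape)

    def train(self, data, epochs=20, lr=0.5, radius=2.0):
        for _ in range(epochs):
            for x in data:
                bmu = np.array(self.best_matching_unit(x))
                # Nodes near the winner are pulled toward the input, so
                # similar observations end up in nearby grid cells.
                grid_dist = np.linalg.norm(self.coords - bmu, axis=-1)
                influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
                self.weights += lr * influence[..., None] * (x - self.weights)
            lr *= 0.9       # decay the learning rate each epoch
            radius *= 0.9   # shrink the neighbourhood each epoch

# Usage: cluster 3-dimensional observations onto a 4x4 grid.
data = np.random.default_rng(1).random((100, 3))
som = SOM(grid_size=4, input_dim=3)
som.train(data)
print(som.best_matching_unit(data[0]))  # grid cell for one observation
```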

Quote:
Basically, in reference to the OP's original post, my 2 cents is that the difference between intelligence and artificial intelligence is that one exists, and the other is something countless people are trying to reproduce as best they can, against a completely impossible goal.

You seem to have fallen for the same unrealistic expectations that the early promoters of the field did - that it's just a matter of computing power. But unlike them, you realize that we can never have enough computing power to solve the problem of AI with trivial methods.

But the field of AI is much bigger than that. Sure, we are far from implementing sentience. That's not exactly surprising. But the efforts made to implement it have yielded a lot of valuable knowledge in machine learning, planning, searching, knowledge representation and so on that is used heavily in software today.

Maybe we'll never implement a "real" AI, even if we manage to define the term. But to say that it's impossible, given what we know today, is overly pessimistic - and the knowledge gained by trying sure makes it worth the effort.

BreathOfLife    188
Heuristics. That's my new stance. It's the heuristic that is my "overly pessimistic" point of origin. I do not believe that a computer could ever implement a heuristic unless at some point we give it one, or somehow explain to it what a heuristic is.

Truly intelligent AI seems totally unreal; that doesn't mean the AI we have come up with thus far isn't really, really close. In some cases, it is. But they all need a heuristic from us to start the process.


This might come out sounding entirely ignorant, and I'm not entirely sure that it's true, but I bet that most if not all AI development involves spending a lot of time tweaking heuristics.

Say we make a machine capable of feeling pain. It would not be able to avoid situations that cause pain unless we somehow tell it pain is "bad". It would know pain, but wouldn't even bother trying to avoid it.

You could counter with a "happy meter" style stance: it's not "bad", it is just less "happy". But that only works if we first explain to it that "happy" is what it wants to be.

If we could avoid having to do so, we'd have a machine that we could teach how to play backgammon, and midway through have it refuse to do so because it doesn't want to play.

Hnefi    386
Quote:
Original post by BreathOfLife
Heuristics. That's my new stance. It's the heuristic that is my "overly pessimistic" point of origin. I do not believe that a computer could ever implement a heuristic unless at some point we give it one, or somehow explain to it what a heuristic is.

Truly intelligent AI seems totally unreal; that doesn't mean the AI we have come up with thus far isn't really, really close. In some cases, it is. But they all need a heuristic from us to start the process.

I think you need to look a bit more closely at the current state of the field. A heuristic is just a strategy for approximation, and heuristics are, at this point, pretty trivial to implement in such a way that the computer can construct and fine-tune one from scratch for any purpose, without any help except being told what the goals corresponding to the inputs are. The most common technique for this is probably a classical backpropagating neural network. I believe ANNs are probably the most promising area of research for those looking to create "real" AI; these people are, however, in the minority among the research community. Most researchers are building automated cars and bombers, not Johnny 5.
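
As a concrete (and heavily simplified) illustration of that claim, here is a bare-bones backpropagating network in Python/NumPy. It is a sketch, not production code: given nothing but inputs and the goal outputs corresponding to them, it tunes its own evaluation function from scratch, with XOR standing in as a toy domain.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # goal outputs

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: compute the network's current guess.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate the error back through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent step on both weight matrices.
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(3))  # should approach [[0], [1], [1], [0]] for most seeds
```
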
Quote:
This might come out sounding entirely ignorant, and I'm not entirely sure that it's true, but I bet that most if not all AI development involves spending a lot of time tweaking heuristics.

Any implementation work is primarily tweaking. The actual theoretical work is a relatively minor part of the total work hours being put into most fields of research in computer science.
Quote:
Say we make a machine capable of feeling pain. It would not be able to avoid situations that cause pain unless we somehow tell it pain is "bad". It would know pain, but wouldn't even bother trying to avoid it.

You could counter with a "happy meter" style stance: it's not "bad", it is just less "happy". But that only works if we first explain to it that "happy" is what it wants to be.

What you are basically saying here is that the metric of success - knowing whether what one does is good or bad - is difficult to implement. This is sometimes true, sometimes not. In a game, it is trivial to implement (the closer you are to winning, the happier you are). In an agent that is supposed to emulate human behavior, it is much more difficult. I'm not sure whether any research is actually being done in that area, though; it seems pretty premature. Check back in a decade or two.
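
For the game case, the "closer you are to winning, the happier you are" metric really can be a few lines. Here is a toy sketch for tic-tac-toe; the scoring scheme is an arbitrary illustration, not a tuned evaluator.

```python
# Score a tic-tac-toe position for player "X": count the lines that are
# still winnable for X, weighting nearly-complete lines more heavily.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def happiness(board, me="X", them="O"):
    score = 0
    for line in LINES:
        cells = [board[i] for i in line]
        if them not in cells:               # line still winnable for me
            score += cells.count(me) ** 2   # nearly-complete lines count more
    return score

board = ["X", "O", "X",
         " ", "X", " ",
         "O", " ", " "]
print(happiness(board))  # prints 6; higher means "happier"
```
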
Quote:
If we could avoid having to do so, we'd have a machine that we could teach how to play backgammon, and midway through have it refuse to do so because it doesn't want to play.

If we trained a computer only to be as good a backgammon player as possible, it would be "happier" the better it played. The option of refusal would not even be available to it, because it does nothing to enhance its playing abilities and is not part of the relevant domain anyway.

If we built instead a computer that operated in a larger domain (say, an entire set of different boardgames) and allowed it a say in which game to play, then a generalistic approach would most likely result in a computer favouring the games it is best at, resulting in a refusal to play the games it plays poorly.
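
A sketch of that preference, under the assumption that the agent simply tracks a win rate per game (all names and numbers here are invented for illustration):

```python
# Agent that favours the games it is best at: it "refuses" games it plays
# poorly by filtering them out of consideration entirely.

win_rates = {"backgammon": 0.72, "chess": 0.41, "checkers": 0.65}

def choose_game(rates, threshold=0.5):
    # Keep only games with an acceptable expected success rate,
    # then pick the one the agent plays best.
    playable = {g: r for g, r in rates.items() if r >= threshold}
    return max(playable, key=playable.get) if playable else None

print(choose_game(win_rates))  # -> 'backgammon'; chess is refused
```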

But you must remember that whatever we implement is limited to the domain in which we construct it to operate. A game AI, built for playing only one game, should not be able to refuse to play, so we never give it that option. A car AI, built for transporting people to different places, should not be able to erase its own hard drive, so we never give it that option. All agents, artificial or otherwise (and that includes us), are limited by the choices available to them. We will never see an entirely "free" AI, able to breach the domain in which it operates (such as refusing to play a game it is built to play), because such a concept is as ridiculous as humans willing themselves to fly or to move objects with their minds. We can't break the limits built into us.

BreathOfLife    188
Us having to give an "AI" a domain is much the same as telling it to try some sort of approximation; it's just not going to happen on its own.

By this token, we could differentiate between ersatz and artificial intelligence. One is a very close re-creation of a system learning to do something; the other is a reproduction of learning at a base level, in accordance with my theory that "intelligence is the ability to learn".

EI might come very, very close to AI, and AI might come very, very close to intelligence, but AI will still never quite match intelligence.

Call it pessimistic, but I just believe that I know our bounds. Currently, I'm quite happy with them, and I love the process of creating AI (specifically within a game system). I code an engine, I code the rules of, say, RPG-style combat, and I code the AI to go through the motions of said combat and learn from the outcome. If I do a bang-up job, I've managed a very nice example of EI, but not AI as far as I'm concerned.

EI deals with a scope we ourselves can understand. AI works on an entirely different plane: it requires the ability to generate said understanding.

Hnefi    386
Quote:
Original post by BreathOfLife
Us having to give an "AI" a domain is much the same as telling it to try some sort of approximation; it's just not going to happen on its own.

Of course it won't "happen on its own". No entity can change the domain in which it operates - an AI won't be able to perform actions unavailable to it any more than we are.
Quote:
By this token, we could differentiate between ersatz and artificial intelligence. One is a very close re-creation of a system learning to do something; the other is a reproduction of learning at a base level, in accordance with my theory that "intelligence is the ability to learn".

I don't see how that's useful for differentiating between true AI and weak imitations. What, exactly, is the metric and methodology used? Given perfect knowledge of a system, how would you go about determining whether it was intelligent?

sharpnova    108
Quote:
Original post by Hinkar
(Currently) AI is a misnomer. You just put your ball in the slot and it trickles through your code and you get a correct result.


You're correct in that there are a lot of misconceptions about AI and the nature of consciousness and intelligence. Unfortunately for you, what you just said is one of the common misconceptions.

The difference between AI and your code is not in nature but simply in order of magnitude.

For example, putting a ball in the slot and letting it trickle through your code to produce a result is analogous to providing a human being with visual/audio input, letting it trickle through their neural network (the brain) and produce a result (a thought, or the action that results from that thought).

Emergent    982
Quote:
Original post by Timkin
As for Emergent's comments...
[...]
As a quick aside... machine learning and Optimal Control should not be considered side by side, and I don't believe anyone working in OC would ever claim what they were doing was AI. If, though, you meant by OC merely the problem of determining an optimal control function/regulator... then ML is just a tool for doing that, as are the formal methods of OC (what we usually call Control Theory).


Actually, I have a specific example of a control theoretician saying pretty much exactly that. I quote (roughly) a professor who does research in optimal and multi-agent control:

"Most machine learning is pretty-much optimal control."

It's not a completely fair statement, but there's a real element of truth. The example he presented was Q-Learning, which he argued was just an application of Bellman's Optimality Principle. It is, of course.
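
For readers unfamiliar with the connection: in the usual reinforcement-learning notation (state s, action a, reward r, discount factor gamma, learning rate alpha), Bellman's optimality principle characterizes the optimal action-value function, and the Q-Learning update is a sampled, incremental step toward that same fixed point:

```latex
% Bellman optimality equation for the action-value function:
Q^*(s,a) = \mathbb{E}\big[\, r + \gamma \max_{a'} Q^*(s',a') \,\big]

% Q-Learning update: a stochastic approximation of the same fixed point:
Q(s,a) \leftarrow Q(s,a) + \alpha \big[\, r + \gamma \max_{a'} Q(s',a') - Q(s,a) \big]
```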

I think the temptation, when a control theoretician realizes this, is to trivialize the algorithm. That is what this professor tried to avoid, but I think he did anyway. Me, I take both sides: I say that it's a very nice piece of work that should not be trivialized as "just a straightforward application of Optimal Control" (because it isn't; the update rule is non-obvious), but I also agree with the professor that we shouldn't wrap it in unnecessary mysticism.

The sentiment that this professor expressed is part of what is, I think, a larger movement by control theoreticians, who have begun to recognize that they can unleash the mathematical tools they've developed on problems traditionally considered "AI."

I'm very sympathetic to their point of view, as a lot of what they have accomplished by building on good theoretical foundations has been incredibly impressive. But I also keep in mind a warning from W. S. Anglin:

"Mathematics is not a careful march down a well-cleared highway, but a journey into a strange wilderness, where the explorers often get lost. Rigour should be a signal to the historian that the maps have been made, and the real explorers have gone elsewhere."

(Of course, this too is an oversimplification.)

Anyway, I'm getting off topic, so let me return to the point: I do think that many of the problems tackled by people in the AI community also have solutions with different flavors from the controls community (and vice versa), and whether something is an "intelligent system" or just a "controller" (or, even less sexy, a "regulator") depends largely on which researcher came up with it.

