|Original post by BreathOfLife|
|It is something that living things just seem to have, and something that is very difficult to re-create outside of a very, very narrow scope (e.g. computers that learn to play chess and backgammon well). Even then, the only real way that they can "learn" is because they have a near flawless memory and razor sharp math skills.|
Not at all. It's true that learning is difficult to implement in a large domain, but that's only because the complexity of learning grows very quickly with the size of the domain (probably exponentially). Remember that it takes the most intelligent creatures we know of about a year just to learn how to walk. Most AIs aren't given that kind of timeframe to learn.
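To put a rough number on that blow-up (my own back-of-the-envelope comparison; the chess figure is the commonly cited rough state-space estimate):

```python
# Back-of-the-envelope domain sizes (illustrative numbers only).
tic_tac_toe_states = 3 ** 9    # 9 squares, each X, O or empty: 19683
chess_states = 10 ** 43        # rough, commonly cited estimate

# A tabular learner that stores one value per state copes with the
# first number and is hopeless against the second -- hence the need
# for heuristics and approximation as the domain grows.
print(tic_tac_toe_states, chess_states // tic_tac_toe_states)
```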
I should also point out that "flawless memory" has nothing to do with it. In fact, most machine learning techniques forgo that advantage and use various kinds of heuristics and approximations instead. I don't know of any learning technique, except the most trivial, that actually relies on perfect memory.
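To make that concrete, here's a minimal sketch (my own toy example, not any particular library) of an online learner whose entire memory is one weight vector; each training example is used for a single update and then discarded:

```python
import random

def train_online(stream, n_features, lr=0.01):
    """Online least-squares: the weight vector is the learner's ONLY memory."""
    w = [0.0] * n_features
    for x, y in stream:                 # examples arrive one at a time
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for i in range(n_features):     # one gradient step, then forget x, y
            w[i] -= lr * err * x[i]
    return w

# Toy usage: recover y = 2*x0 - 3*x1 from noisy samples seen only once each.
def samples(n=5000):
    for _ in range(n):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        yield x, 2 * x[0] - 3 * x[1] + random.gauss(0, 0.1)

print(train_online(samples(), n_features=2))   # roughly [2.0, -3.0]
```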
|Furthermore, they can only really "learn how to play the game well" after we tell them all the rules, and then explain what "playing well" is.|
And this is different from humans how? If you don't tell a person the rules or goal of chess, they will never be able to play it. They may be able to infer the rules and goal by observing several games being played, but so could a computer.
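As a toy illustration of that last point (my own example; real rule induction is of course far harder), a program can "learn" which moves a piece may make simply by recording what it sees in play:

```python
from collections import defaultdict

def learn_moves(observed):
    """observed: iterable of (piece, (dx, dy)) moves seen in real games."""
    legal = defaultdict(set)
    for piece, delta in observed:
        legal[piece].add(delta)     # the observed set IS the learned rule
    return legal

rules = learn_moves([
    ("rook", (0, 3)), ("rook", (5, 0)), ("rook", (0, -2)),
    ("bishop", (2, 2)), ("bishop", (-3, -3)),
])
print(rules["rook"])   # axis-aligned offsets only, as far as it has seen
```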
|Computers and programs need everything defined for them... variable names, types, values. Learning would be like creating a new variable type during runtime, generating a bunch of operators to manipulate the data, and then implementing them.|
I used to think so as well, but it's simply not true. You're assuming that the data structures in the machine code correspond directly to the objects and features the program observes, and they don't have to. Some types of machine learning don't have discrete variables at all for the things being observed, yet they manage to be very efficient learners anyway. Other strategies take a more general approach: the observed objects are first classified with some very general technique (such as self-organizing maps) and then instanced as collections of those classifications, i.e. features. Such techniques can be applied to any reasonable domain, though they may not be the most efficient choice.
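Here's a minimal self-organizing map sketch to show what I mean (my own illustration; the grid size, learning rate and decay schedule are arbitrary choices). Note that the learner's only state is the map's weight grid; there isn't a per-object variable anywhere:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    w = rng.random((grid[0], grid[1], data.shape[1]))       # unit weights
    coords = np.stack(np.meshgrid(np.arange(grid[0]),
                                  np.arange(grid[1]),
                                  indexing="ij"), axis=-1)  # unit positions
    steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / steps
            lr = lr0 * (1 - frac)                  # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5      # shrinking neighbourhood
            # best matching unit: the unit whose weights are nearest the input
            bmu = np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=-1)), grid)
            # pull the BMU and its grid neighbours toward the input
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
            w += lr * g[..., None] * (x - w)
            t += 1
    return w

def classify(w, x):
    """Map an observation to its best matching unit -- its learned 'class'."""
    return np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=-1)), w.shape[:2])

colours = np.random.default_rng(1).random((200, 3))   # toy data: RGB points
som = train_som(colours)
print(classify(som, np.array([1.0, 0.0, 0.0])))       # grid cell for "red"
```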
|Basically, in reference to the OP's original post, my 2 cents is that the difference between intelligence and artificial intelligence is that one exists, and the other countless people are trying to reproduce the best they can against a completely impossible goal.|
You seem to have fallen for the same unrealistic expectation that the early promoters of the field did: that it's just a matter of computing power. Unlike them, though, you've realized that we can never have enough computing power to solve the problem of AI with trivial methods.
But the field of AI is much bigger than that. Sure, we are far from implementing sentience; that's not exactly surprising. But the efforts made to implement it have yielded a lot of valuable knowledge about machine learning, planning, searching, knowledge representation etc. that is used heavily in software today.
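To pick one concrete spin-off: here's a small A* search sketch over a grid (my own illustration; the maze, the unit costs and the heuristic are arbitrary). Variants of exactly this kind of search run route planners and game pathfinding today:

```python
import heapq

def astar(grid, start, goal):
    """grid: list of strings, '#' = wall. Returns shortest path length or None."""
    def h(p):   # Manhattan distance: an admissible heuristic on a 4-way grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier, best = [(h(start), 0, start)], {start: 0}
    while frontier:
        _, cost, pos = heapq.heappop(frontier)
        if pos == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if (0 <= r < len(grid) and 0 <= c < len(grid[0])
                    and grid[r][c] != '#' and cost + 1 < best.get((r, c), float("inf"))):
                best[(r, c)] = cost + 1
                heapq.heappush(frontier, (cost + 1 + h((r, c)), cost + 1, (r, c)))
    return None

maze = ["....#...",
        ".##.#.#.",
        ".#..#.#.",
        ".#.##.#.",
        "........"]
print(astar(maze, (0, 0), (0, 7)))   # 15: the only way through is along row 4
```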
Maybe we'll never implement a "real" AI, even if we manage to define the term. But to say that it's impossible, given what we know today, is overly pessimistic, and the knowledge gained by trying makes the effort well worth it.