AI thoughts

Started by Witchcraven
10 comments, last by losethos 15 years, 8 months ago
So I was pondering casually what exactly makes intelligence so useful, as I am sure many of us have, especially those who work in AI. One idea I had was that abstraction and intuition may be two possible, and obvious, sources.

By abstraction, I mean the mechanism that makes polymorphism so useful. It gives you something simple to work with, removing the need to think about other intermediate steps. For example, without polymorphism you would need to think about conversion between types more than you do with it. It is somehow just nicer to work with.

That one ability alone seems to give a huge mental advantage. I say intuition is important because once you have a few levels of abstraction and you need to solve a problem with concepts in a different layer of abstraction, navigating that by direct analysis can be very time consuming or difficult to do. For example, in physics a force is a fairly basic thing to work with. Many concepts build on the idea of the force and allow for ways to work with forces indirectly.

One notable example is energy conservation. As a physics student, I thank god every day that energy seems to be conserved. Some problems would be just terrible, or maybe even impossible, to solve analytically with just the idea of the force. But sometimes you do need to work backwards through the abstraction levels (from energy to force). That sort of thing can often be very hard to do. With intuition you can sort of guide yourself to solutions in situations like that, although you can't really explain why you go in some direction.

Are these ideas totally obvious? Not true? What has been done in AI that explores these aspects of mental abilities? Are there abstraction algorithms out there that abstract concepts in software?

Maybe start with a simple interpreted programming language that can take simple commands in any order at run time. This would be layer 0 of abstraction. Then a second layer, made of a finite number of containers, is created to represent the next level of abstraction. This new layer would be randomly populated with a small number of commands to the language. Any number of subsequent abstraction layers could be created and populated with references to containers in the previous layer. Then you could execute some number of containers in an abstraction layer, and each would chain down and create some sort of complex pattern in the interpreter.

The intuition would be whatever decides which containers in an abstraction layer to execute. It would probably be a genetic algorithm or a neural net of some sort. It would find which execution patterns of an abstraction layer are useful.

Has this sort of thing been done before? Does it work? I do not really spend much time in AI, so I am not sure what is out there.
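To make the layered idea a bit more concrete, here is a rough sketch of what I am imagining. The tiny command set, the container layout, and the random wiring are all just invented for illustration; a genetic algorithm or neural net would then play the "intuition" role by learning which top-level containers are worth executing.

```python
import random

# Layer 0: a tiny "interpreter" whose commands can run in any order.
COMMANDS = {
    "inc": lambda state: state + 1,
    "dec": lambda state: state - 1,
    "dbl": lambda state: state * 2,
}

def run_command(name, state):
    return COMMANDS[name](state)

# Higher layers: containers holding references into the layer below.
def build_layer(lower_names, num_containers, refs_per_container):
    return {
        f"c{i}": [random.choice(lower_names) for _ in range(refs_per_container)]
        for i in range(num_containers)
    }

def execute(name, layers, level, state):
    """Chain down: a container expands into lower-layer references
    until it bottoms out in layer-0 commands."""
    if level == 0:
        return run_command(name, state)
    for ref in layers[level][name]:
        state = execute(ref, layers, level - 1, state)
    return state

random.seed(0)
layers = {
    1: build_layer(list(COMMANDS), num_containers=4, refs_per_container=3),
    2: build_layer(["c0", "c1", "c2", "c3"], num_containers=2, refs_per_container=2),
}

# The "intuition" would pick which top-level containers to run; here we just try both.
for name in layers[2]:
    print(name, "->", execute(name, layers, level=2, state=1))
```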
--------------------------
I present for tribute this haiku:
Inane Ravings Of
The Haunting Jubilation
A Mad Engineer
© Copyright 2005 Extrarius. All Rights Reserved.
A bit of wisdom: intelligence is not trivial and will not suddenly emerge from some quack idea.
Your idea of abstraction is similar to an idea that has become quite popular in robotics in recent years, though it's been around for a while: the simple idea of a hierarchical control system. The brain, or highest level, gives the highest-level commands without knowing what will actually happen; say, for example, a walk command. The highest-level node just has a simple expectation that a walk command will make itself move, but how it is handled by the lower levels is completely up to them. So, as the command trickles down the "chain of command" it becomes more refined, and the individual components involved start getting actual concrete commands. These components are at the lowest end. So, one motor may end up only getting a command to spin for a certain amount of time in a certain direction. Note that the motor really has no clue what is going on, because its movement is coordinated by a local command center, which can be thought of as its parent node in the command hierarchy.
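Here is a minimal sketch of that trickle-down idea. The node names and the walk/step/motor breakdown are made up for illustration, not taken from any particular robotics framework.

```python
# Hypothetical hierarchical control: each node refines a command for its
# children without knowing how they will carry it out.

class MotorNode:
    """Lowest level: only understands 'spin in a direction for some time'."""
    def __init__(self, name):
        self.name = name

    def handle(self, direction, seconds):
        print(f"{self.name}: spin {direction} for {seconds}s")


class LegController:
    """Mid level: turns an abstract 'step' into concrete motor commands."""
    def __init__(self, name):
        self.hip = MotorNode(f"{name}-hip-motor")
        self.knee = MotorNode(f"{name}-knee-motor")

    def step(self):
        self.hip.handle("forward", 0.4)
        self.knee.handle("back", 0.2)


class Brain:
    """Highest level: only expects that 'walk' makes the body move."""
    def __init__(self):
        self.legs = [LegController("left"), LegController("right")]

    def walk(self, steps):
        for i in range(steps):
            # Alternate legs; the brain never sees an individual motor.
            self.legs[i % 2].step()


Brain().walk(4)
```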

As for intuition, that is a little different from what you are thinking. Human intuition is more like a greedy extrapolation from past experiences, so it really doesn't need anything as complex as a genetic algorithm or neural network; something as simple as a weighted average of past relevant experiences will probably do. Just think: intuition is usually a split-second snap decision containing no doubt, which means, from an algorithmic standpoint, the "idea" did not go through any form of "refinement." It is just a straightforward heuristic guess.
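A toy sketch of that "weighted average of past relevant experiences" idea; the features, the similarity measure, and the example data are all invented:

```python
# Toy "intuition": a similarity-weighted average of past outcomes.
# No search or refinement, just one pass over memory and a snap guess.

def intuition(situation, memory):
    """situation: feature tuple; memory: list of (features, outcome) pairs."""
    total_weight = 0.0
    weighted_sum = 0.0
    for features, outcome in memory:
        # Inverse-distance similarity: closer experiences count for more.
        distance = sum((a - b) ** 2 for a, b in zip(situation, features)) ** 0.5
        weight = 1.0 / (1.0 + distance)
        total_weight += weight
        weighted_sum += weight * outcome
    return weighted_sum / total_weight if total_weight else 0.0

# Past experiences: (features of the situation, how well it turned out, 0..1).
memory = [
    ((0.9, 0.1), 0.2),
    ((0.2, 0.8), 0.9),
    ((0.3, 0.7), 0.8),
]
print(intuition((0.25, 0.75), memory))  # instant guess, close to the similar cases
```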
I'm really surprised that you brought up polymorphism in the context of intelligence, but then passed it off as something that makes conversion between types easier.

In my opinion, polymorphism is like something from algebra: a homomorphism. It allows us to strip the details of something away and treat things according to the ways in which they are the same; to exploit some symmetry.
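A tiny illustration of that "strip the details, keep the symmetry" view; the shapes and the area interface are just made-up examples:

```python
# Polymorphism lets one piece of code treat different things by what they
# have in common (here: "has an area") and ignore every other detail.

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2


class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2


def total_area(shapes):
    # Works for anything with .area(); the differences are stripped away.
    return sum(shape.area() for shape in shapes)


print(total_area([Circle(1.0), Square(2.0)]))
```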
First-order logic was supposed to be the holy grail of logical abstraction - but it still falls short in some ways. Until there is a level of inference based on innumerable similarities and differences (including context), we are still struggling uphill.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Quote: Original post by Witchcraven
So I was pondering casually what exactly makes intelligence so useful, as I am sure many of us have, especially those who work in AI. One idea I had was that abstraction and intuition may be two possible, and obvious, sources.

By abstraction, I mean the mechanism that makes polymorphism so useful. It gives you something simple to work with, removing the need to think about other intermediate steps. For example, without polymorphism you would need to think about conversion between types more than you do with it. It is somehow just nicer to work with.

That one ability alone seems to give a huge mental advantage. I say intuition is important because once you have a few levels of abstraction and you need to solve a problem with concepts in a different layer of abstraction, navigating that by direct analysis can be very time consuming or difficult to do. For example, in physics a force is a fairly basic thing to work with. Many concepts build on the idea of the force and allow for ways to work with forces indirectly.

One notable example is energy conservation. As a physics student, I thank god every day that energy seems to be conserved. Some problems would be just terrible, or maybe even impossible, to solve analytically with just the idea of the force. But sometimes you do need to work backwards through the abstraction levels (from energy to force). That sort of thing can often be very hard to do. With intuition you can sort of guide yourself to solutions in situations like that, although you can't really explain why you go in some direction.

Are these ideas totally obvious? Not true? What has been done in AI that explores these aspects of mental abilities? Are there abstraction algorithms out there that abstract concepts in software?

Maybe start with a simple interpreted programming language that can take simple commands in any order at run time. This would be layer 0 of abstraction. Then a second layer, made of a finite number of containers, is created to represent the next level of abstraction. This new layer would be randomly populated with a small number of commands to the language. Any number of subsequent abstraction layers could be created and populated with references to containers in the previous layer. Then you could execute some number of containers in an abstraction layer, and each would chain down and create some sort of complex pattern in the interpreter.

The intuition would be whatever decides which containers in an abstraction layer to execute. It would probably be a genetic algorithm or a neural net of some sort. It would find which execution patterns of an abstraction layer are useful.

Has this sort of thing been done before? Does it work? I do not really spend much time in AI, so I am not sure what is out there.





The intuition part is one of the hard parts. Think of how many factors influence any decision you make, how much information has to be evaluated and interpreted, and how the pieces of that information interrelate, for each flavor (a continuum?) of decision to be handled.

We have 100 billion fuzzy-logic subprocessors in our brains, with a thousand times that many interconnections, and the whole system adjusts itself continually. That's where our 'intuition' comes from. You may be able to boil the same operation down to logic, but even for a fairly simple real-world problem space the quantity of logic is massive. Just entering all that logic (even with assistance from a learn-by-demonstration system) is a significant chokepoint.
--------------------------------------------
Ratings are Opinion, not Fact
Perhaps a good place to mention that my rep at Charles River Media got back to me the other day. My book proposal is approved and they are drawing up the contract now. Expect "Behavioral Mathematics for Game AI" on the shelves by GDC. (Now I just have to write 20-30 pages/week until Christmas. *sigh*)

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Congrats IF! Or should I offer my condolences for a prematurely expired vacation? :P
-------------
Please rate this post if it was useful.
Quote: Original post by InnocuousFox
Perhaps a good place to mention that my rep at Charles River Media got back to me the other day. My book proposal is approved and they are drawing up the contract now. Expect "Behavioral Mathematics for Game AI" on the shelves by GDC. (Now I just have to write 20-30 pages/week until Christmas. *sigh*)


Congratulations!

Speaking of behaviour, I am a beginner and I cannot find anything on Behaviour Trees beyond an AIGameDev post and a meta-post on IA on AI. What are they, really, in more traditional terms?
Quote: Original post by ibebrett
I'm really surprised that you brought up polymorphism in the context of intelligence, but then passed it off as something that makes conversion between types easier.

In my opinion, polymorphism is like something from algebra: a homomorphism. It allows us to strip the details of something away and treat things according to the ways in which they are the same; to exploit some symmetry.


Agreed... Polymorphism is something from algebra. [grin] Polymorphic functions are a type of natural transformation.
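For the curious, here is a toy check of the naturality condition in code; the head function and the example data are invented for illustration:

```python
# A parametrically polymorphic function like head : List[a] -> Optional[a]
# behaves as a natural transformation from the List functor to the Maybe
# functor: mapping f and then taking head equals taking head and then
# mapping f.

def head(xs):
    """Return the first element, or None for an empty list."""
    return xs[0] if xs else None

def map_optional(f, x):
    """fmap on the 'Maybe' side: apply f unless the value is missing."""
    return None if x is None else f(x)

def square(n):
    return n * n

for xs in ([1, 2, 3], []):
    left = head([square(x) for x in xs])    # fmap square, then head
    right = map_optional(square, head(xs))  # head, then fmap square
    assert left == right
print("naturality square commutes for these examples")
```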

