Witchcraven

AI thoughts


Recommended Posts

So I was pondering casually what exactly makes intelligence so useful, as I am sure many of us have, especially those who work in AI. One idea I had was that abstraction and intuition may be two obvious sources.

By abstraction, I mean the mechanism that makes polymorphism so useful. It gives you something simple to work with and removes the need to think about intermediate steps. For example, without polymorphism you would need to think about conversion between types far more than you do with it. It is somehow just nicer to work with. That one ability alone seems to give a huge mental advantage.

I say intuition is important because once you have a few levels of abstraction, and you need to solve a problem using concepts from a different layer, navigating between layers by direct analysis can be very time-consuming or difficult. For example, in physics a force is a fairly basic thing to work with. Many concepts build on the idea of the force and allow ways to work with forces indirectly. One notable example is energy conservation. As a physics student, I thank god every day that energy is conserved. Some problems would be just terrible, or maybe even impossible, to solve analytically with the idea of the force alone. But sometimes you do need to work backwards through abstraction levels (from energy to force), and that sort of thing can often be very hard to do. With intuition you can sort of guide yourself toward solutions in situations like that, although you can't really explain why you go in some direction.

Are these ideas totally obvious? Not true? What has been done in AI that explores these aspects of mental ability? Are there abstraction algorithms out there that abstract concepts in software?

Maybe start with a simple interpreted programming language that can take simple commands in any order at run time. This would be layer 0 of abstraction. Then a second layer of a finite number of containers is created to represent the next level of abstraction; this new layer would be randomly populated with small sets of commands in the language. Any number of subsequent abstraction layers could be created and populated with references to containers in the previous layer. Then you could execute some number of containers in an abstraction layer, and each would chain down and create some complex pattern in the interpreter.

The intuition would be what chooses which containers to execute in an abstraction layer. It would probably be a genetic algorithm or neural net of some sort, and it would find which execution patterns of an abstraction layer are useful.

Has this sort of thing been done before? Does it work? I do not really spend much time in AI, so I am not sure what is out there.
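Here is a minimal sketch of what I am imagining, just to make it concrete. Everything in it is hypothetical (the tiny stack "language", the container layers, all the names); the "intuition" would replace the random choice at the end:

```python
import random

# Layer 0: primitive commands of a tiny interpreted language. Each
# command mutates a shared state (here, a list used as a stack).
def cmd_push1(stack): stack.append(1)

def cmd_add(stack):
    if len(stack) >= 2:
        stack.append(stack.pop() + stack.pop())

def cmd_dup(stack):
    if stack:
        stack.append(stack[-1])

PRIMITIVES = [cmd_push1, cmd_add, cmd_dup]

def make_layer(previous_layer, n_containers, refs_per_container):
    """One abstraction layer: each container is a random list of
    references to items in the layer below."""
    return [random.choices(previous_layer, k=refs_per_container)
            for _ in range(n_containers)]

def execute(item, stack):
    """Chain down through the layers until we reach a primitive."""
    if callable(item):          # layer 0: a primitive command
        item(stack)
    else:                       # a container: run its references in order
        for ref in item:
            execute(ref, stack)

random.seed(0)
layer1 = make_layer(PRIMITIVES, n_containers=8, refs_per_container=3)
layer2 = make_layer(layer1, n_containers=4, refs_per_container=2)

stack = []
execute(random.choice(layer2), stack)  # the "intuition" would choose here
print(stack)
```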

Your idea of abstraction is similar to an idea that has become quite popular in robotics in recent years, though it's been around for a while: the simple idea of a hierarchical control system. The brain, or highest-level node, issues commands without knowing what will actually happen; say, for example, a walk command. The top node just has a simple expectation that a walk command will make the body move, but how it is handled at the lower levels is completely up to them. So, as the command trickles down the "chain of command", it becomes more refined, and the individual components involved start getting actual concrete commands. These components sit at the lowest end: one motor may end up getting nothing more than a command to spin for a certain amount of time in a certain direction. Note that the motor really has no clue what is going on, because its movement is coordinated by a local command center, which can be thought of as its parent node in the command hierarchy.
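A minimal sketch of that chain of command, with all the node, motor, and timing names invented for illustration:

```python
# A sketch of a hierarchical control system. All names here
# (Motor, LegController, "walk", the spin timings) are hypothetical.

class Motor:
    """Lowest level: only understands 'spin for t seconds in a direction'."""
    def __init__(self, name):
        self.name = name
    def spin(self, seconds, direction):
        print(f"{self.name}: spinning {direction} for {seconds:.1f}s")

class LegController:
    """Local command center: refines 'step' into concrete motor commands."""
    def __init__(self, name):
        self.hip = Motor(f"{name}-hip")
        self.knee = Motor(f"{name}-knee")
    def handle(self, command):
        if command == "step":
            self.hip.spin(0.3, "forward")
            self.knee.spin(0.2, "backward")

class Brain:
    """Highest level: issues 'walk' with no idea how it is carried out."""
    def __init__(self):
        self.legs = [LegController("left"), LegController("right")]
    def handle(self, command):
        if command == "walk":
            # The abstract command becomes more refined as it trickles down.
            for leg in self.legs:
                leg.handle("step")

Brain().handle("walk")
```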

As for intuition, that is a little different from what you are thinking. Human intuition is more like a greedy extrapolation from past experiences. So it really doesn't need anything as complex as a genetic algorithm or neural network; something as simple as a weighted average of past relevant experiences will probably do. Just think: intuition is usually a split-second snap decision containing no doubt, which means, from an algorithmic standpoint, the "idea" did not go through any form of refinement. It is just a straightforward heuristic guess.
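For instance, here is a toy sketch of that kind of greedy extrapolation. The experiences, features, and similarity measure are all made up; the point is just that a snap "intuitive" answer can be a single weighted pass over memory, with no refinement loop:

```python
# A toy "intuition" as a weighted average over past experiences.
# The data and the similarity kernel are hypothetical.

past_experiences = [
    # (situation features, outcome that was observed)
    ((0.9, 0.1), 1.0),
    ((0.8, 0.3), 0.8),
    ((0.1, 0.9), -0.5),
]

def similarity(a, b):
    # Crude inverse-distance similarity between two feature tuples.
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + dist)

def intuit(situation):
    """One greedy pass: weight each remembered outcome by how similar
    its situation was. No search, no refinement."""
    weights = [similarity(situation, s) for s, _ in past_experiences]
    total = sum(weights)
    return sum(w * outcome for w, (_, outcome)
               in zip(weights, past_experiences)) / total

print(intuit((0.85, 0.2)))  # snap judgment for a new situation
```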

I'm really surprised that you brought up polymorphism in the context of intelligence, but then passed it off as something that merely makes conversion between types easier.

In my opinion, polymorphism is like something from algebra: a homomorphism. It allows us to strip the details of something away and treat things in the ways they are the same, to exploit some symmetry.
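To make that concrete, here is a tiny sketch (the shape classes are invented for illustration): the summing function sees only the part of the structure the shapes share, and everything that makes a circle a circle has been stripped away.

```python
# Polymorphism as "forgetting details": total_area only relies on the
# shared interface. Circle/Square are placeholder names.
import math

class Circle:
    def __init__(self, r): self.r = r
    def area(self): return math.pi * self.r ** 2

class Square:
    def __init__(self, side): self.side = side
    def area(self): return self.side ** 2

def total_area(shapes):
    # Treats every shape in the way they are the same: all we keep
    # is the fact that each one has an area.
    return sum(s.area() for s in shapes)

print(total_area([Circle(1.0), Square(2.0)]))
```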

First-order logic was supposed to be the holy grail of logical abstraction - but it still falls short in some ways. Until there is a level of inference based on innumerable similarities and differences (including context), we are still struggling uphill.
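To make "logical abstraction" concrete, here is a toy sketch of the kind of rigid inference a first-order system gives you (the facts and the single rule are invented for illustration). Note that nothing in it weighs similarity or context; it only matches symbols exactly:

```python
# A toy forward-chainer over one Horn clause, to show the flavor of
# first-order inference. Facts and rule are hypothetical examples.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# Rule: parent(X, Y) and parent(Y, Z) => grandparent(X, Z)
def apply_grandparent_rule(facts):
    new = set()
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "parent" and y1 == y2:
                new.add(("grandparent", x, z))
    return new

# Forward-chain to a fixed point.
while True:
    derived = apply_grandparent_rule(facts) - facts
    if not derived:
        break
    facts |= derived

print(("grandparent", "alice", "carol") in facts)  # True, by rigid matching
```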

Quote:
Original post by Witchcraven
With intuition you can sort of guide yourself toward solutions in situations like that, although you can't really explain why you go in some direction.

The intuition part is one of the hard parts. Think of how many factors influence any decision you make, how much information has to be evaluated and interpreted, and how interrelated that information is, for each flavor (a continuum?) of decision to be handled.

We have 100 billion fuzzy-logic subprocessors in our brains, with a thousand times that many interconnections, and the whole thing self-adjusts continually. That's where our 'intuition' comes from. You may be able to boil the same operation down to logic, but even for a fairly simple real-world problem space the quantity of logic is massive. Just entering all that logic (even with an assist from a learn-by-demonstration system) is a significant chokepoint.

Perhaps a good place to mention that my rep at Charles River Media got back to me the other day. My book proposal is approved and they are drawing up the contract now. Expect "Behavioral Mathematics for Game AI" on the shelves by GDC. (Now I just have to write 20-30 pages/week until Christmas. *sigh*)

Congrats IF! Or should I offer my condolences for a prematurely expired vacation? :P

Quote:
Original post by InnocuousFox
Expect "Behavioral Mathematics for Game AI" on the shelves by GDC.

Congratulations!

Speaking of behaviour: I am a beginner and I cannot find anything on behaviour trees beyond an AiGameDev post and a meta-post on IA on AI. What are they, really, in more traditional terms?
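From the little I have pieced together, they seem to be roughly a tree of prioritized control flow over condition and action callbacks, something like this sketch (all names invented). Is that right?

```python
# My rough guess at a behaviour tree in "traditional" terms: inner
# nodes are control flow (Selector = prioritized OR, Sequence = AND),
# leaves are condition/action callbacks. Names are hypothetical.

def selector(*children):
    """Try children in priority order; succeed on the first success."""
    def run(state):
        return any(child(state) for child in children)
    return run

def sequence(*children):
    """Run children in order; fail on the first failure."""
    def run(state):
        return all(child(state) for child in children)
    return run

def enemy_visible(state): return state["enemy_visible"]
def attack(state): print("attacking"); return True
def patrol(state): print("patrolling"); return True

# Attack if an enemy is visible, otherwise fall back to patrolling.
root = selector(
    sequence(enemy_visible, attack),
    patrol,
)

root({"enemy_visible": False})  # prints "patrolling"
```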

Quote:
Original post by ibebrett
In my opinion, polymorphism is like something from algebra: a homomorphism.


Agreed... Polymorphism is something from algebra. [grin] Parametrically polymorphic functions are a kind of natural transformation.
