Theory - ultimate AI, at atomic level

Incidentally, those who cite "perfect knowledge of the laws of physics" need to google the "3-body problem" and the "butterfly effect".

On-topic, I should have locked this thread when I had the chance.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

In my very personal opinion, trying to create the "Ultimate AI" is a waste of time. I don't mean that AI is a subject that should not be developed, since it's already at a great level (like TOPIO). To make it clearer, think of a simple thing: what do humans have that computers don't? Maybe a lot of things, but I'll stick to the subject.
A human has something called "thought", which is quite easy to understand, but it cannot be abstracted with any kind of mathematical equation. You simply say "I want to go there", and there may be some obstacles in the way, but you "know" how to bypass them without saying "I have to do this maneuver" or "jump"; it's done right away without any comparing or calculation. So AI is simply a way to simulate thought with mathematics and equations, but real thought cannot be simulated that way. If you want a real example of what I mean, test yourself with a simple problem of going somewhere with obstacles on the way: what do you do in order to get there? And see if you can put down that thought with mathematics.
Although AI may reach quite a high level, "thinking with calculations" is not the real way of thinking, so it is quite limited compared to human thought.
Again, this is my personal opinion and I'm open to ideas.

- A programmer


Just because you/we can't think of a way to "put down that thought with mathematics" doesn't mean it can't be done.
The human mind exists in the physical world, and one day we will understand practically everything about the physical world (at least everything useful), and it will be "put down with mathematics". (That doesn't mean we can model and predict the world perfectly; that's a different, and impossible, thing.)
[quote]So a human has something called "thought", which is quite easy to understand, but it cannot be abstracted with any kind of mathematical equation.[/quote]

Very over-generalized. I wrote a whole bloody book on abstracting thought to mathematical equations. One example:

Two identical objects that you desire. The price of one is $2, the other is $2000. Which will you purchase? I can generally abstract that out with the complex mathematical construct known as the "less than" operator.
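For illustration, a minimal sketch of that abstraction (in Python; the prices are made up, purely for illustration):

[code]
# Minimal sketch: choosing between two identical items by price alone.
# The prices are hypothetical, purely for illustration.

def choose_cheaper(price_a: float, price_b: float) -> str:
    """Abstract "which will you purchase?" with the less-than operator."""
    return "the first" if price_a < price_b else "the second"

print(choose_cheaper(2.00, 2000.00))  # -> "the first"
[/code]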

[quote]You simply say "I want to go there", and there may be some obstacles in the way, but you "know" how to bypass them without saying "I have to do this maneuver" or "jump"; it's done right away without any comparing or calculation.[/quote]
You are wrong. Just because you aren't aware of the comparing and calculation doesn't mean that it isn't happening. In fact, there are numerous scientifically tested examples that discern how we do measure and calculate -- even on a subconscious level. Of course, sometimes our perception or belief systems are incorrect and we may, therefore, choose incorrectly. Still, there are calculations being done.

[quote]So AI is simply a way to simulate thought with mathematics and equations, but real thought cannot be simulated that way. If you want a real example of what I mean, test yourself with a simple problem of going somewhere with obstacles on the way: what do you do in order to get there? And see if you can put down that thought with mathematics.[/quote]
This is what I do, sir. This is what I do. And AI programmers need to be able to do it as well.
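For example, a minimal sketch of "go somewhere with obstacles in the way" put down as mathematics (a plain breadth-first search over a made-up grid; this is an illustration, not anything from the book):

[code]
from collections import deque

# Minimal sketch: "go somewhere, avoiding obstacles" reduced to a graph search.
# The grid, start (S), goal (G), and walls (#) are invented for illustration.
GRID = [
    "S..#.",
    ".#.#.",
    ".#...",
    "...#G",
]

def shortest_steps(grid):
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "S")
    goal = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "G")
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None  # no route around the obstacles

print(shortest_steps(GRID))  # -> 7 steps, found purely by comparison and calculation
[/code]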

I leave you with this comic:
[attachment=5161:equations.png]

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

I would suggest that an atomic representation of a brain is entirely unnecessary (for the purposes of artificial intelligence) when it is theoretically possible to create a completely functional model of the brain at a cognitive/architectural level. What we lack is a sufficient understanding of said architecture (and, more trivially to a theoretical discussion, sufficient storage). In fact neural modelling seems to be more useful as an aid in neuroscience research than in artificial intelligence research.
Ignoring the obvious scaling problem, another problem with trying to "simulate reality" at the atomic level is that we have no idea how atoms _actually_ work, and even less idea of how to generalize this in any useful way to simulate them in bulk.


We have lots of models describing how they seem to move and behave in 4D space, but we know that's just part of what an atom is. The model is incomplete, and no one knows how it "really works"; that's the reason they build things like the LHC.

So even if we did manage to simulate every atom in a brain according to our current knowledge, we have no idea whether this would produce anything useful or would just fall apart.

And again, just simulating what we _do_ know about atoms/molecules for just a few (thousand) atoms in a protein (as in Folding@HOME) is extremely costly.
Atoms are _not_ governed by "a few simple laws" like balls bouncing around.
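To put rough numbers on the cost, a back-of-the-envelope sketch (the atom counts are arbitrary; this is the crudest possible classical pairwise model and ignores quantum effects entirely):

[code]
# Crude scaling sketch: even a naive classical pairwise-interaction model
# grows as O(N^2) per timestep. The atom counts are arbitrary illustration values.

def pairwise_interactions(n_atoms: int) -> int:
    return n_atoms * (n_atoms - 1) // 2

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} atoms -> {pairwise_interactions(n):,} pair interactions per step")
[/code]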

[quote name='IADaveMark' timestamp='1314404419' post='4854258']
Incidentally, those who cite "perfect knowledge of the laws of physics" need to google the "3-body problem" and the "butterfly effect".

On-topic, I should have locked this thread when I had the chance.
[/quote]


I do not believe you understand either of these problems then.

It's easy to simulate 3 bodies. With a powerful enough computer you could even simulate them accurately. The problem asks for a closed-form expression for their positions as a function of time. This is a mathematical failing, not a physics failing.
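For illustration, a minimal sketch of stepping three gravitating bodies forward numerically (arbitrary masses, positions, and timestep; a naive Euler integrator that drifts over long runs, but it shows that simulating the system is easy even though no general closed-form position(time) exists):

[code]
# Minimal sketch: numerically integrating three gravitating point masses in 2D.
# Masses, initial conditions, and the timestep are arbitrary illustration values.
G = 1.0
DT = 0.001

bodies = [  # mass, position (x, y), velocity (vx, vy)
    {"m": 1.0, "p": [0.0, 0.0], "v": [0.0, 0.1]},
    {"m": 1.0, "p": [1.0, 0.0], "v": [0.0, -0.1]},
    {"m": 1.0, "p": [0.5, 0.8], "v": [0.1, 0.0]},
]

def step(bodies, dt):
    # accumulate gravitational acceleration on each body from every other body
    for a in bodies:
        ax = ay = 0.0
        for b in bodies:
            if a is b:
                continue
            dx, dy = b["p"][0] - a["p"][0], b["p"][1] - a["p"][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * b["m"] * dx / r3
            ay += G * b["m"] * dy / r3
        a["v"][0] += ax * dt
        a["v"][1] += ay * dt
    for a in bodies:
        a["p"][0] += a["v"][0] * dt
        a["p"][1] += a["v"][1] * dt

for _ in range(1000):
    step(bodies, DT)
print([b["p"] for b in bodies])  # positions after 1000 steps
[/code]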

The butterfly effect is exactly the same deal.

The original post was an obvious troll, made obvious not only by the implication that we could simulate a planet, but by the suggestion that the best way to simulate intelligence would be at the atomic level rather than the cellular level.

I'd also like to point out to another poster, who thinks that machines won't be able to "think", that it is not the machine's failure to think but YOUR failure to understand what thought is.

You think your thoughts are under your control, but they are just the deterministic result of the processing of your neural network. Simulating an equivalent neural network in software would result in a conscious entity of intelligence and sentience equal to your own.

(Simulating a better one would result in a conscious entity capable of understanding that thought is a deterministic result, not an underlying, driving force.)



If you close your eyes and think of a person or a picture, what is it you're seeing? That "picture" you're imagining is not a picture; it's a thought, but one that we interpret as a picture by using different parts of our brain to form it.

When you create an AI, how would it see that "thought"? If you programmed it with knowledge of a particular item, could it use that programmed knowledge to picture the item without seeing it?

Makes me think of language... language is actually a barrier that slows our thought process down. If everyone were equally intelligent, perfect beings, we would have no need for language. It wouldn't be telepathy; it would be knowing the answer because it's the right thing to do. If we had to communicate with people, we would instantly understand what they needed without exchanging words, because we could interpret the need without having to talk.

It's like a team game, either digital or athletic: you become a cohesive unit, multiple brains melding into one, to the point where you can predict what the other is doing without talking.

Pretty neat to think about.



Best advice: stay away from metaphysics or philosophy when working on AI, as they are not based in fact (not to be confused with conceptual facts in philosophy) or, more importantly, are not grounded in reality, which is where you have to make the AI function. Additionally, it's best to avoid "perfect emulation" of thought or thinking and go for a "best approximation", since perfect emulation requires magnitudes of processing power above the "best approximation".

Case in point: the SNES can be perfectly emulated on existing hardware; however, this has only recently become possible, mainly due to the amount of processing power required to perfectly emulate SNES hardware.

Another example: IBM Watson. Watson is as dumb as a box of rocks, but appears intelligent due to the way it verifies and assembles evidence based on the question asked and then ranks the results. Watson isn't a perfect emulation of question answering; it is a current "best approximation", and even then Watson takes 2800+ processors and terabytes of memory (RAM) to answer basic questions in 2-6 seconds (several hours on a lone processor).
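To illustrate the "score and rank the evidence" idea (this is NOT Watson's actual pipeline; the candidates, evidence scores, and weights are all invented):

[code]
# Toy sketch of "assemble evidence, score it, rank the candidate answers".
# Not Watson's algorithm; every name and number here is made up for illustration.

CANDIDATES = {
    "answer_a": {"keyword_match": 0.9, "source_reliability": 0.6, "type_match": 0.8},
    "answer_b": {"keyword_match": 0.7, "source_reliability": 0.9, "type_match": 0.4},
    "answer_c": {"keyword_match": 0.3, "source_reliability": 0.5, "type_match": 0.9},
}

WEIGHTS = {"keyword_match": 0.5, "source_reliability": 0.3, "type_match": 0.2}

def score(evidence):
    return sum(WEIGHTS[kind] * value for kind, value in evidence.items())

ranked = sorted(CANDIDATES.items(), key=lambda item: score(item[1]), reverse=True)
for name, evidence in ranked:
    print(f"{name}: {score(evidence):.2f}")
[/code]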

Not saying to lower hopes or expectations, but to properly set them within the confines of what is possible. Build and reach from there. Serendipity is rare; progress is unavoidable.
If the AI's neural network were structured the same as yours, it would "see" the same things you see when you "picture" something.

If it were given a neural network similar to a human infant's, with visual input into the optic nerve, auditory/gravitational input into the vestibular system, and so on, it would develop into an adult brain that thinks, conceives, and pictures things the same way you do.

There is no distinction. You are seeing a difference that doesn't exist because you don't understand intelligence.

http://en.wikipedia.org/wiki/Brain_in_a_vat

Read this.

This topic is closed to new replies.
