Theory - ultimate AI, at atomic level

60 comments, last by IADaveMark 12 years, 6 months ago

[quote]
[quote name='sjaakiejj' timestamp='1313155366' post='4848206']
[quote name='Adaline' timestamp='1313093080' post='4847869']
As far as I know, neural nets aren't used in game AI: the learning process can be very, very long, so I think it can't be integrated into games for that reason.
[/quote]


They have been used in games before: learning was done during the testing phase, and the results were just used in the released game. Though I believe current developers have stepped away from the idea, as adjusting to the player has become more important, something that other techniques lend themselves to much better.
[/quote]

Yeah, I recall a kind of AI bot for Counter-Strike that had to learn the levels, each of them one by one. One learning run took about 20 minutes. Maybe it was something similar.
[/quote]
I believe that may have been John Laird (et al) and his S.O.A.R. technology. Mixed reviews from what little I have heard.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

A trained neural net is a weighted equation. The two are identical. The difference is in how the weighted equation is developed.

I've been trying to get that into people's heads for years but the schools keep cranking out people who think just saying "neural network" is sexy. "Weighted sums" doesn't impress people enough.
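
To make that concrete, here is a minimal sketch in plain Python (the weights and inputs are invented purely for illustration) of what evaluating a trained single-layer neuron amounts to: a weighted sum of the inputs plus a bias.

[code]
# Minimal sketch: evaluating a "trained" single-layer neuron is just a weighted sum.
# The weights and inputs below are invented for illustration only.

def neuron_output(inputs, weights, bias):
    # weighted sum of the inputs, plus a bias term
    return sum(x * w for x, w in zip(inputs, weights)) + bias

inputs  = [0.5, 1.0, -2.0]   # e.g. some normalized game-state features
weights = [0.8, -0.3, 0.1]   # fixed once training is done
bias    = 0.05

print(neuron_output(inputs, weights, bias))   # -0.05: plain arithmetic, no magic
[/code]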


Hello


[quote name='Dave Weinstein' timestamp='1313167942' post='4848303']
A trained neural net is a weighted equation. The two are identical. The difference is in how the weighted equation is developed.

I've been trying to get that into people's heads for years but the schools keep cranking out people who think just saying "neural network" is sexy. "Weighted sums" doesn't impress people enough.
[/quote]

For a single-layer net such as an ADALINE or a perceptron, with the identity activation function, we can say that the output is just a 'weighted sum' of the inputs.
What about multi-layer networks?
What about models that don't use the McCulloch & Pitts model, like spiking neural networks (the Lapicque model, for instance)?

I think that 'weighted equation' is more accurate, or at least a better shortcut.
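
On the multi-layer question, here is a rough sketch (plain Python, weights invented for illustration) of the standard feed-forward case: each layer is still just a set of weighted sums, composed, with a nonlinearity such as tanh applied between layers.

[code]
# Rough sketch: a two-layer feed-forward net is nested weighted sums with a
# nonlinearity between the layers. All weights here are invented for illustration.
import math

def layer(inputs, weights, biases):
    # one weighted sum per neuron in the layer, squashed by tanh
    return [math.tanh(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                      # input features
h = layer(x, [[0.4, 0.2], [-0.7, 0.9]], [0.0, 0.1])  # hidden layer (2 neurons)
y = layer(h, [[1.2, -0.5]], [0.3])                   # output layer (1 neuron)
print(y)
[/code]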

Hello

[quote name='IADaveMark' timestamp='1313170176' post='4848335']
[quote name='Dave Weinstein' timestamp='1313167942' post='4848303']
A trained neural net is a weighted equation. The two are identical. The difference is in how the weighted equation is developed.

I've been trying to get that into people's heads for years but the schools keep cranking out people who think just saying "neural network" is sexy. "Weighted sums" doesn't impress people enough.
[/quote]

For a single-layer net such as an ADALINE or a perceptron, with the identity activation function, we can say that the output is just a 'weighted sum' of the inputs.
What about multi-layer networks?
What about models that don't use the McCulloch & Pitts model, like spiking neural networks (the Lapicque model, for instance)?

I think that 'weighted equation' is more accurate, or at least a better shortcut.
[/quote]

I refer to them as "functions with parameters", or "functions with too many parameters for their own good". :)

[quote]
"functions with too many parameters for their own good"
[/quote]

What do you mean? I don't see the point.

[quote]
"functions with too many parameters for their own good"

What do you mean? I don't see the point.
[/quote]

An ANN consisting of a single neuron with a linear activation function is multiple linear regression, which has been well understood since the time of Gauss. An ANN consisting of a single neuron with a sigmoid activation function is logistic regression, which is also well understood. A multi-layer perceptron has so many parameters that training becomes really hard (e.g., it's hard to avoid getting stuck in local minima), and trying to understand what each parameter does becomes hopeless. That's what I mean by "too many parameters for their own good".
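
To make the sigmoid case concrete, here is a tiny sketch (plain Python, weights invented for illustration): a single sigmoid neuron computes exactly the logistic-regression formula, a sigmoid applied to a weighted sum.

[code]
# Tiny sketch: a single neuron with a sigmoid activation is logistic regression.
# p = sigmoid(w . x + b); the weights below are invented for illustration.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    # weighted sum of the inputs, then squashed into a probability
    return sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)

print(predict([2.0, -1.0], [0.75, 0.5], -0.25))   # about 0.68, a probability
[/code]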


Ah ok I understand what you meant, thanks :)

EDIT:
"avoid getting stuck in local minima" -> that's why a momentum term is often added to the delta rule (but it doesn't eliminate the risk, and it works by adding yet more parameters... :( )

"understand what each parameter does becomes hopeless" -> we generally don't care what the weight values are (and, as you say, it's hopeless anyway in a 'big' network)
This "simulate everything at atomic level" madness that started with the unlimited detail thing must stop NOW!

Adding to what ApochPiQ said:

For those saying we have enough processing power: look at Folding@home. It's attempting to do what you're saying (modelling at the atomic level, although not for AI purposes but to understand protein folding), and it takes thousands of computers running for hours just to simulate ONE NANOSECOND. And last time I checked, intelligence develops over time; it takes a normal human being around two decades in the real world to become intelligent enough.

For those saying we only need to compute the atoms that are relevant: NO! We don't know what's relevant and what isn't; everything is connected to everything else. One small, tiny, minuscule thing leads to another, and to another, which in turn leads to a million chain reactions, ultimately causing a huge difference. It's called the butterfly effect.
I can think of two problems. One, even if we had sufficient processing power, the simulation would have to split time up into frames. That adds a certain amount of error that would make the rules of that world fundamentally different from those of our world (there's a toy sketch of this drift at the end of this post). Two, it presumes that we actually know more about the universe than we really do. We don't have perfect knowledge of physics yet, so we'd make assumptions, and the rules of the simulated world would drift even more.

And a third one that just occurred to me: Say you have preposterous processing power and perfect knowledge of the laws of physics. So you build your world (by the way, are you simulating just the world, or the sun as well? That's a whole other bunch of atoms to think about, and it's not like the sun doesn't have some small impact on our world, so four problems really), and you get your intelligence up and running. And now THEY want to make a computer to simulate the universe. So your computer has to simulate a world with a computer trying to simulate the world. And of course they are successful, so the intelligence that emerges in the simulated computer tries to simulate another world.

So now your computer is simulating a world that contains a computer that is simulating the simulated world, which will contain a computer that can simulate the simulated simulated world. And so on the recursion goes. And you want this to run in real time (in fact, millions of times faster than real time). You're essentially asking the computer to simulate the processing power of infinite computers just like itself. This is a logical impossibility.
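
As a toy illustration of the first point above (a sketch only, not anyone's actual simulation code): stepping even a trivially simple system forward in fixed frames drifts away from the exact answer, and the size of the drift depends on the frame length.

[code]
# Toy sketch of discretization error: step dx/dt = x in fixed "frames"
# (Euler integration) and compare against the exact solution x(t) = e^t.
import math

def euler(x0, rate, dt, steps):
    x = x0
    for _ in range(steps):
        x += rate * x * dt      # one simulation frame
    return x

exact = math.exp(1.0)                             # true value at t = 1
for dt in (0.1, 0.01, 0.001):
    approx = euler(1.0, 1.0, dt, int(round(1.0 / dt)))
    print(dt, approx, abs(approx - exact))        # smaller frames shrink the drift, but never to zero
[/code]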

This topic is closed to new replies.
