My theory on game AI

Started by Yamian
17 comments, last by Timkin 19 years, 9 months ago
Quote:Original post by Yamian
With that, if every molecule in the universe were taken into account, things could be predicted. Creepy, huh.


this is classic determinism, circa 1920s...welcome to the last 80 years of quantum mechanics.
Actually, quantum mechanics is just as deterministic as classical mechanics; it just applies that determinism not to classical paths, masses, and particles, but to the quantum wave function. If you know the wave function and the Hamiltonian of the system (roughly, all the forces acting) at some given time, you know the wave function at all other times, assuming of course that you can solve the equations.
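
To spell out what "deterministic" means here: the wave function obeys the time-dependent Schroedinger equation,

    i*hbar * d/dt psi(t) = H psi(t)

so for a Hamiltonian that doesn't itself change with time, the state at any later moment is completely fixed by the state now:

    psi(t) = exp(-i*H*t/hbar) psi(0)

Nothing random enters until you ask what a measurement will actually give.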

The fact that you cannot use the wave function to predict the position, spin, or what-have-you of a particle - other than probabilistically - is totally irrelevant.

Chaos theory does not really contradict determinism; it merely states that for some systems, you need an infinite amount of information to predict what is going to happen. Clearly, this amount of information does exist in nature; chaos, then, is merely an expression of limitations on human theory.

On the other hand, it's not totally obvious how these two theories interact. Perhaps my blithe assertion that infinite information does exist in nature is actually wrong? After all, chaos theory does operate on the crude old classical concepts of position, velocity, and whatnot - precisely those where QM can only give us statistical information. Stay tuned, folks - this is the cutting edge.
To win one hundred victories in one hundred battles is not the acme of skill. To subdue the enemy without fighting is the acme of skill.
hi stephen, is that you? from A Brief History of Time:

Quote:These quantum theories are deterministic in the sense that they give laws for the evolution of the wave with time. Thus if one knows the wave at one time, one can calculate it at any other time. The unpredictable, random element comes in only when we try to interpret the wave in terms of the positions and velocities of particles. But maybe this is our mistake: maybe there are no positions and velocities, but only waves. It is just that we try to fit the waves to our preconceived ideas of positions and velocities. The resulting mismatch is the cause of the apparent unpredictability.


sounds awfully familiar. furthermore, the positions and velocities of particles are *exactly* what we were talking about...not "totally irrelevant." i direct you to the quote that i was addressing in my last post.

anyway, back on topic.
Quote:Original post by justo
anyway, back on topic.
Which was... ? Oh yeah... coin flipping! ;)

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Quote:Original post by justo
Furthermore, the positions and velocities of particles are *exactly* what we were talking about...not "totally irrelevant." I direct you to the quote that I was addressing in my last post.


I'm not comrade Hawking, in fact I haven't even read his book, though I'm flattered at the comparison. Apparently you did not detect the position of my tongue, to wit, firmly pressed against my cheek. Obviously positions and velocities are relevant when you develop games! What has that to do with anything? I was discussing quantum mechanics.
To win one hundred victories in one hundred battles is not the acme of skill. To subdue the enemy without fighting is the acme of skill.
An infinite amount of information does not exist in nature. Thanks to black holes, information can be lost, making it impossible to gather enough information to make deterministic predictions about the universe.
The information probably leaks back out again through Hawking radiation. But in any case, I was not referring to information in the classic sense of information theory, but in the much rougher sense that, even if we can't predict it, the electron still figures out what it needs to do. Hence, the electron has sufficient, i.e. infinite, information about its own position.

Of course, I realise that this is not what is actually going on; it was a manner of speaking.
To win one hundred victories in one hundred battles is not the acme of skill. To subdue the enemy without fighting is the acme of skill.
Well, a finite being attempting to grasp an infinite universe. There are questions that we cannot answer because of our limits, and this conversation about randomization within computer programs is one of them. If you want to get technical, every single moment is completely different from the last due to the concept of infinity, which we cannot grasp.

Alright, back to reality. If you're making a PONG game, wouldn't you rather have the AI not know the exact location that the ball will be in? How about instead giving the AI the wrong information and varying the difficulty by making its guess more or less accurate? In other words, randomize the accuracy of its information about the position where the ball will end up.

You can do this with pretty much any AI: simply don't give the AI accurate information, scale the inaccuracy with a difficulty curve, and also change its reaction time. Put physical constraints on the AI. Don't make it possible for the AI to immediately move a paddle from one side of the table to the other. Have it so that it takes a random amount of time to move a certain distance.
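
A rough sketch of what that could look like (all names and numbers here are made up, just to show the shape of the idea): whenever the ball changes course, the AI predicts where it will cross its side, adds an error scaled by the difficulty, waits out its reaction time, and then chases the noisy target no faster than its paddle-speed limit allows.

#include <algorithm>
#include <cstdlib>

// Hypothetical difficulty knobs -- tune per skill level.
struct AiDifficulty {
    float aimError;      // max error (world units) added to the predicted intercept
    float reactionTime;  // seconds before the AI reacts to a change in the ball's course
    float paddleSpeed;   // max paddle speed (world units per second)
};

// Uniform random number in [-1, 1].
static float RandomSigned()
{
    return 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;
}

// Called whenever the ball's trajectory changes (e.g. after a bounce).
// trueIntercept is where the ball will actually cross the AI's side.
float PickAiTarget(float trueIntercept, const AiDifficulty& d)
{
    // Feed the AI *wrong* information: the harder the difficulty,
    // the smaller aimError and the closer this gets to the true intercept.
    return trueIntercept + RandomSigned() * d.aimError;
}

// Called every frame: move the paddle toward the (noisy) target,
// but never faster than the physical constraint allows.
void UpdateAiPaddle(float& paddleY, float targetY, float timeSinceBounce,
                    float dt, const AiDifficulty& d)
{
    if (timeSinceBounce < d.reactionTime)
        return;                              // still "reacting", do nothing yet

    float maxStep = d.paddleSpeed * dt;      // physical limit per frame
    float delta   = targetY - paddleY;
    paddleY += std::max(-maxStep, std::min(maxStep, delta));
}

On easy, aimError might be most of a paddle height and reactionTime a large fraction of a second; on hard, both shrink toward zero and the AI approaches the unbeatable perfect-information player.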

Just a thought.
*Mod hat on*
Part of me says that this thread is WAY off topic... but then the other side of my brain says that the door was opened by the original poster by discussing the randomness (or lack thereof) of computer programs. For the moment I am happy to leave things as they are, but I ask everyone to remember that this is an AI forum, so at least try to keep it related to AI! ;)
*mod hat off*

From a personal perspective, I can add a few things to this discussion. First, a correction/clarification of what KingOfMen stated about infinite information. It is far better to state that Chaos Theory indicates that certain nonlinear systems require an infinite precision in the specification of the initial state in order to predict subsequent states exactly. This better highlights why we humans have difficulty in predicting the evolution of chaotic systems... because we have only finite sensory precision with which to observe the state of the system and only finite computational precision to simulate its evolution.
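
A quick way to see that precision point for yourself (just a toy, nothing to do with any particular game): iterate the logistic map from two starting values that differ only in the tenth decimal place and watch them part company.

#include <cstdio>

// Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
// Two initial conditions differing by 1e-10 track each other for a while
// and then diverge completely -- which is why finite observational and
// computational precision defeats exact prediction.
int main()
{
    double a = 0.4;
    double b = 0.4 + 1e-10;
    const double r = 4.0;

    for (int i = 0; i <= 60; ++i) {
        if (i % 10 == 0)
            std::printf("step %2d: %.8f vs %.8f\n", i, a, b);
        a = r * a * (1.0 - a);
        b = r * b * (1.0 - b);
    }
    return 0;
}

Any rounding of the initial state (or any finite floating-point representation) eventually produces a completely different trajectory.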

Quantum mechanics expresses the view that even with infinite precision, the evolution of a wave is not deterministic (i.e., it is stochastic) and we can only discuss its properties with regards to a stochastic density (wave) function.

Further to this, recent research has offered up a result that suggests that ALL systems, be they quantum or classical, that can be described by a wave equation are in fact inherently uncertain. This arises from it being finally proven that Heisenberg's Uncertainty Principle and Goedel's Incompleteness Theorem are indeed results of the same phenomenon... and that these are related to Chaitin's Algorithmic Complexity theory.

How does this relate to AI... let's get back to the issue of predictability of AI and why I disagree that 'rnd(3) is a valid AI'. We don't want AI that is random. We want AI that has a behaviour that can be analysed and one that appears human-like, because we believe that this leads to the most fun on the part of the player (because it leads to the possibility of defeat by a superior agent). A suitable candidate for such an AI is a rational agent. This means that randomised outputs of the agent state are not acceptable. The outputs must result from a mapping of inputs to outputs that takes into account the value function of the agent, however that might be encoded. Given such an agent, the challenge on the part of the player is to induce the value function and to analyse it for weaknesses.

We can weaken our AI agents by making it harder for them to assess their environment (by either restricting the precision of their observations/computations, or adding uncertainty to their sensing of the environment, which can be shown to be somewhat the same result) and we can make it harder or easier for the player to predict the AI's behaviour by similarly limiting their observational power.
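
To make that concrete (a rough sketch with made-up state and values, not anything from a real game): the randomness goes into what the agent senses, while its action selection stays a deterministic argmax over its value function.

#include <limits>
#include <random>

// Hypothetical world and action types, purely for illustration.
struct WorldState { float enemyDistance; float enemyHealth; };
enum class Action { Attack, Retreat, Hold };

// The agent's value function: a deterministic mapping from
// (observed state, action) to utility, however it is encoded.
float Value(const WorldState& s, Action a)
{
    switch (a) {
    case Action::Attack:  return s.enemyHealth < 30.0f ? 1.0f : -0.2f;
    case Action::Retreat: return s.enemyDistance < 2.0f ? 0.5f : -0.5f;
    default:              return 0.0f;
    }
}

// Weaken the agent by corrupting what it *perceives*, not what it *decides*.
WorldState Observe(const WorldState& truth, float sensorNoise, std::mt19937& rng)
{
    if (sensorNoise <= 0.0f)
        return truth;                        // perfect information: the unweakened agent

    std::normal_distribution<float> noise(0.0f, sensorNoise);
    WorldState obs = truth;
    obs.enemyDistance += noise(rng);
    obs.enemyHealth   += noise(rng);
    return obs;
}

// Rational agent: always takes the value-maximising action for what it believes it saw.
Action ChooseAction(const WorldState& truth, float sensorNoise, std::mt19937& rng)
{
    const WorldState obs = Observe(truth, sensorNoise, rng);
    const Action actions[] = { Action::Attack, Action::Retreat, Action::Hold };

    Action best = Action::Hold;
    float bestValue = -std::numeric_limits<float>::infinity();
    for (Action a : actions) {
        float v = Value(obs, a);
        if (v > bestValue) { bestValue = v; best = a; }
    }
    return best;
}

With sensorNoise at zero the agent plays its best game; turning the noise up makes it exploitable without ever making its decisions arbitrary.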

Ultimately, my advice is to engineer the perfect solution (or as close as one can achieve) and then work at making it perform sub-optimally, offering more openings for defeat.

Cheers,

Timkin

This topic is closed to new replies.
