Quake's AI doesn't learn anything; Carmack himself tweeted just the other day that they don't use neural networks at all. The Reaper bots for Quake 1 did, though those were a 3rd-party mod.
yeah, at least the standard Quake 1/2 AIs were basically just finite state machines.
their AI logic was basically just (when angry):
turn toward the player (if the turn timer allows);
if it collided with something, head in a random direction for a random amount of time;
if it has line of sight to the player, fire its weapon.
then they have a few states: standing idle, walking idly, running after the player, attacking, ...
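that kind of FSM is small enough to sketch out. something like the following captures the shape of it (all the names, timer values, and state set here are my own invention for illustration, not from the actual Quake source):

```python
import math
import random

# hypothetical sketch of a Quake-style monster FSM
IDLE, CHASE, ATTACK = "idle", "chase", "attack"

class Monster:
    def __init__(self):
        self.state = IDLE
        self.x, self.y = 0.0, 0.0
        self.angle = 0.0
        self.wander_timer = 0   # frames left heading in a random direction
        self.turn_timer = 0     # only re-aim at the player when this expires

    def update(self, player, can_see, collided):
        if self.state == IDLE:
            if can_see:
                self.state = CHASE  # spotted something: get angry
            return None
        # angry states: turn toward the player, if the timer allows
        if self.turn_timer <= 0:
            self.angle = math.atan2(player[1] - self.y, player[0] - self.x)
            self.turn_timer = 5
        self.turn_timer -= 1
        # if collided with something, wander randomly for a random time
        if collided:
            self.angle = random.uniform(0.0, 2.0 * math.pi)
            self.wander_timer = random.randint(10, 30)
        if self.wander_timer > 0:
            self.wander_timer -= 1
        # with line of sight, fire
        if can_see:
            self.state = ATTACK
            return "fire"
        self.state = CHASE
        return None
```

the point being: a handful of states, a couple of timers, and a few conditionals cover most of what the player actually sees.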
hacking on the AIs to add things like path-finding made them considerably scarier: if you got an enemy mad and ran off somewhere, it would catch up with you (rather than just getting stuck in a corner somewhere).
there was logic for miscellaneous things though, like preventing them from running off edges, basically by disallowing any movement that would cause their bounding-box to no longer be over the ground.
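that ledge check is basically a pre-move test on the bounding-box corners. a minimal sketch (where `ground_at` is a hypothetical world query, and the corner layout is illustrative):

```python
# sketch of the "don't walk off ledges" check: only allow a move if all
# corners of the monster's bounding-box would still be over solid ground.
def move_allowed(x, y, dx, dy, half_size, ground_at):
    nx, ny = x + dx, y + dy
    corners = [(nx - half_size, ny - half_size),
               (nx + half_size, ny - half_size),
               (nx - half_size, ny + half_size),
               (nx + half_size, ny + half_size)]
    return all(ground_at(cx, cy) for cx, cy in corners)
```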
when idle, an AI could optionally follow a sequence of path nodes (generally organized into a loop); otherwise it would just stand in one spot until it saw something.
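the patrol-loop behavior amounts to "walk toward the current node; when close enough, advance to the next one, wrapping around". a sketch (speed and epsilon values made up for illustration):

```python
import math

# sketch of idle patrol: follow a loop of path nodes
def patrol_step(pos, nodes, idx, speed=1.0, eps=0.1):
    tx, ty = nodes[idx]
    dx, dy = tx - pos[0], ty - pos[1]
    dist = math.hypot(dx, dy)
    if dist < eps:                      # reached node: advance around the loop
        return pos, (idx + 1) % len(nodes)
    step = min(speed, dist)             # don't overshoot the node
    return (pos[0] + dx / dist * step, pos[1] + dy / dist * step), idx
```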
IIRC, the Quake 3 AIs were fairly similar, mostly working by randomly following waypoints, with some special nodes set up for the AI to visit (and some logic to compel it to visit all the nodes).
if it sees an enemy along the way, it will shoot at them.
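one simple way to get that "visit all the nodes" behavior (not necessarily what the actual Q3 bot code does; this is a guess at the general idea) is to bias the bot toward whichever linked waypoint it visited longest ago:

```python
# sketch of waypoint roaming that eventually covers the whole graph:
# links maps node -> list of linked nodes, last_visit maps node -> tick
# of its last visit (unvisited nodes default to -1, so they win).
def pick_next_waypoint(links, current, last_visit):
    return min(links[current], key=lambda n: last_visit.get(n, -1))
```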
FSMs can generally do a fair amount though, and don't eat the CPU as badly.
beyond this, there is sometimes the "poor man's neural net", which is basically using matrix multiplies, autocorrelation, and similar.
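by "poor man's neural net" I mean something like a couple of small hand-sized matrix multiplies mapping sensor inputs to steering outputs, with the weights tweaked by hand or by simple hill-climbing rather than backprop. a sketch (weights and shapes arbitrary):

```python
# tiny two-layer "network" as plain matrix multiplies
def mat_vec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def steer(sensors, w1, w2):
    hidden = [max(0.0, h) for h in mat_vec(w1, sensors)]  # cheap nonlinearity
    return mat_vec(w2, hidden)
```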
related to this is the use of Markov chains and Bayesian networks and similar.
these can generate interesting results without being too expensive (but are ultimately fairly limited; as in, they tend not to exhibit any real "intelligence" and only learn within a fairly narrow scope).
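for the Markov-chain case, the whole thing can be just a transition table between behavior states, sampled each tick. a sketch (the states and probabilities here are invented for illustration):

```python
import random

# Markov-chain behavior picker: each state lists (next_state, probability)
TRANSITIONS = {
    "idle":   [("idle", 0.7), ("wander", 0.3)],
    "wander": [("wander", 0.6), ("idle", 0.3), ("growl", 0.1)],
    "growl":  [("idle", 1.0)],
}

def next_state(state, rng=random.random):
    r = rng()
    for nxt, p in TRANSITIONS[state]:
        r -= p
        if r <= 0:
            return nxt
    return state  # fallback if probabilities don't quite sum to 1
```

learning within this scope is then just counting observed transitions and renormalizing the table, which is cheap but, as said, doesn't get you anything resembling actual intelligence.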
I had a few times experimented with using genetic programming for things (*1), but at least in my tests I generally got not-very-compelling results: lots of CPU time for usually not a whole lot of interest going on. similarly, all they really seem to be good at is finding ways to cheat the test (such as exploiting bugs in the test data or the interpreter), so a lot more time ends up going into beating on the tests to try to prevent cheating, ...
another limiting factor (for using GP in game AIs) is that there aren't really a lot of good situations to actually train it in (excepting something like an MMO, where the mass of players "doing stuff" could be used to train the AIs, with the AIs trying to get kills and avoid being killed).
basically, it would require something much more competitive: either actively competing against human players, or at least competing against other GP AIs.
but then (assuming no sort of barrier or similar preventing it), you would probably just end up with waves of monsters heading into newbie areas or spawn-camping and similar, or otherwise just being annoying (I suspect dumber monsters are preferable for most players, and if the monsters are too difficult or unpredictable many players would likely become frustrated).
*1: basically where one has sequences of specialized bytecode (or similar) run in a specialized interpreter, with the GP system randomly mutating and recombining them. in my past tests, this "bytecode" was typically ASCII-based (with each "program" being a collection of strings, generally with 1 or more entry points, and any other strings usable for internal purposes).
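the mutation/recombination part of such a setup is pretty simple; the hard part is the fitness tests. a sketch of the string-level operators (the op set and rates are invented, and the interpreter itself is omitted):

```python
import random

# GP over ASCII "bytecode": programs are strings of single-character ops
OPS = "+-<>."

def mutate(prog, rate=0.1, rng=random):
    out = []
    for ch in prog:
        if rng.random() < rate:
            out.append(rng.choice(OPS))   # point mutation
        else:
            out.append(ch)
    return "".join(out)

def crossover(a, b, rng=random):
    # single-point crossover: splice a prefix of one parent onto the other
    cut = rng.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]
```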
usually, for most things, it is faster/easier to just code it up directly.
don't know what all has actually been used in game AIs though.
Edited by cr88192, 03 September 2013 - 11:13 AM.