Tessellatable AI

thoughts:

one, keeping ai and graphics on separate threads might be a good idea. I'm not sure if your post was pondering whether ai would drag down the graphics, but threading (and an internal game clock) would definitely help alleviate that problem if you're worried about it. Mmm. On rereading I discovered your actual meaning. However, physics is likely to still be a bigger problem than ai in this case, especially if you're designing for a modern system - collisions for that many models would be intense, to say the least!
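
A minimal sketch of the threading idea in Python - the World class, update_ai() and the tick rate are all placeholders, not anyone's actual design:

import threading
import time

class World:
    """Stand-in game state; update_ai() is where the real thinking would go."""
    def update_ai(self):
        pass

AI_TICK = 0.1  # internal game clock: ai thinks 10 times a second, whatever the frame rate

def ai_loop(world, stop_event):
    # ai runs on its own clock, so a slow render frame never stalls the thinking
    while not stop_event.is_set():
        world.update_ai()
        time.sleep(AI_TICK)

world, stop = World(), threading.Event()
threading.Thread(target=ai_loop, args=(world, stop), daemon=True).start()
# ...the render loop stays on the main thread...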

two: given that, I don't see any reason why you couldn't use a finite-step but potentially infinite-resolution tessellation to scale an ai, given an appropriately capable adaptive representation.

How about using a group-based tessellation? You set up your units or NPCs or what have you so that they are associated with one or more groups (i.e. platoon/adventurer party, army/town, etc.), and then tessellate your ai in one of two ways (sketched in code after the list):
a) maximize the individual ai of each unit
(i.e. give each unit as much of the pie as you can, iff it needs some ai)
b) set minimal ai for each and then maximize as many brains as possible
(i.e. give each unit very basic abilities but give most of the power to one or two units)
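
A rough sketch of the two schemes in Python - the Unit and Group classes and the cost model are invented for illustration:

from dataclasses import dataclass, field

@dataclass
class Unit:
    needs_ai: bool = True
    ai_power: float = 0.0

@dataclass
class Group:
    members: list = field(default_factory=list)
    @property
    def leader(self):
        return self.members[0]  # assume the first member leads

def maximize_individuals(units, budget):
    # scheme a: share the pie among every unit that actually needs some ai
    thinkers = [u for u in units if u.needs_ai]
    for u in thinkers:
        u.ai_power = budget / max(len(thinkers), 1)

def maximize_leaders(groups, budget, min_power=1.0):
    # scheme b: everyone gets the basic brain, the leaders soak up the rest
    units = [u for g in groups for u in g.members]
    for u in units:
        u.ai_power = min_power
    leftover = max(budget - min_power * len(units), 0.0)
    for g in groups:
        g.leader.ai_power += leftover / max(len(groups), 1)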

b is how I'm thinking of doing a 3d rts unit ai system, so each unit can follow the leader, and the leader can figure out the best overall actions, just as a commander might assign orders to his troops or a navigator might pathfind through the wilderness.

As usual, mixing and matching would likely work best, so that in, say, a CRPG you might give monster parties the b treatment and villagers the a treatment.

Graphical techniques could probably be used in other ways:

'hierarchical rendering' or 'rendering planes' (depending on your visualization) are already pretty much used as I understand it, so that you have the overlord ai playing the game and all the wee little ones doing the day-to-day work, with maybe some in-between stuff

'radiosity' could put ai power into the areas where it's most needed (if you're good at designing metrics of need) [and yes, I am aware of radiosity's actual origin]
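
A sketch of the analogy, assuming you already have a per-agent need score (the Agent class and the metric itself are placeholders - designing the metric is the hard part):

from dataclasses import dataclass

@dataclass
class Agent:
    need: float          # output of your metric of need, whatever that is
    ai_power: float = 0.0

def radiate_ai(agents, budget):
    # pour ai power where the need is, the way radiosity pours light
    total_need = sum(a.need for a in agents)
    if total_need <= 0:
        return
    for a in agents:
        a.ai_power = budget * a.need / total_need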

As to the HOW:

Take the set-min-and-maximize-some-units approach, for example. You use a pared-down NN-trained FSM as your 'basic' ai. Then you begin to maximize: first, use your tessellation metric to evaluate how much you can do, then bring in as much of the partially trained NN as you can for each group leader. In the limit, you might have a fully-functional NN using GA training to generate and discard its next course of action. Of course, I'm thinking in terms of Messiah-type tessellation, where you use an insanely detailed model to begin with (effectively an infinite-detail representation) and tessellate it back down.

so you might have


avail_power = measure_available_cpu()    # the tessellation metric

set_min_ai(all_units)                    # every unit gets the basic brain
for group in all_groups:
    # spend whatever power is left on the group leaders
    tessellate_ai(group.leader, avail_power, MAXIMAL_AI)


as a minimalist treatment of the idea. Choose your favourite technique for implementing it - GAs & NNs are probably best-suited, since they're inherently adaptive.

If you could get this to work then you could also tessellate away ai-ai conflicts (iff the player's not around!) by using a prediction algorithm to find the battle outcome. You'd have to be AWFULLY good to get it just right, though. In fact, if you can do that, then you probably don't need to build an ai at all. But I could be wrong there.
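
A crude stand-in for such a predictor - aggregate strength plus a little noise; the .strength attribute is invented, and a real predictor would need to be far better, as said above:

import random

def predict_battle(side_a, side_b):
    # compare total strength with some noise; sides are lists of objects
    # with a made-up .strength attribute
    power_a = sum(u.strength for u in side_a) * random.uniform(0.8, 1.2)
    power_b = sum(u.strength for u in side_b) * random.uniform(0.8, 1.2)
    return side_a if power_a >= power_b else side_b

def resolve_offscreen(side_a, side_b, player_nearby):
    if player_nearby:
        return None                         # the player is watching: run the real ai
    return predict_battle(side_a, side_b)   # tessellate the fight away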

Also, you might implement it so that groups themselves are amorphous - with less power available, the tessellation would maximize the size of the groups, and in the infinite-power limit, every group would have a size of 1!
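
As a sketch, the sizing rule might look like this (min_ai_cost is an invented per-brain cost):

import math

def units_per_group(total_units, avail_power, min_ai_cost=1.0):
    # fewer brains when power is scarce; one brain per unit in the limit
    brains = max(1, min(total_units, int(avail_power / min_ai_cost)))
    return math.ceil(total_units / brains)

print(units_per_group(100, avail_power=2))    # 50: two big groups
print(units_per_group(100, avail_power=1e9))  # 1: every unit its own group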

Anyway,
just some thoughts.

signing off,
mikey

A question for those who work in this field or have thoughts and ideas around the development of future AI in games.
Do you think it's possible to have a single direction of AI development that could allow for tessellatable AI? Not just set levels of detail in processing NPC actions, but infinite levels of detail which account for processor power or a lack of it.
While non-tessellatable graphics and physics always ensure that the AI processing is kept discrete and in the background compared to the 50-or-so-poly minimum that models tend to be limited to, tessellatable models could drag a single NPC in a full-scale RTS battle down to one or two polys. At this point the AI takes up a significant proportion of the CPU time and becomes a problem.
How would it be possible to ensure that thinking time reduces along with the poly count?
You have your traditional AI domains - condition/action rules, pathfinding, heuristics, fuzzy logic decision making - perhaps with automatic adaptation taking place during or between battles, but there is no AI polygon which can be tagged and counted per second.
I'm trying to work out whether the idea of tessellatable AI is feasible or even possible, and any ideas or meandering thoughts are appreciated.

Thanks for listening.

Mike

Okay, as at least Niels read this and seemed interested, here are a few conclusions I came to.
In a general sense, the more information you use, if you use it correctly, the more accurate a decision you can make - given that none of the information is redundant, i.e. knowing facts about an enemy's favourite colour won't help you in making tactical decisions about troop formations (or at least, _probably_ won't help you).
However, different pieces of information carry different weight in decision making: some are weighted very highly and have a strong influence, whereas others are not very important, but if you have the time you can still use them to your advantage.
Basically, thinking is an incremental system: the more data you have, the more data you have to sift through to make a decision.
So if you need to make a snap decision you can look at the most important one or two facts and use them to decide. This is very much an idea used in expert systems: the more information you have to categorize a plant type or a medical condition, the more accurately you can categorize it, but each fact comes with an overhead of processing time. As you look at each new piece of information you can eliminate more possibilities until you have a final answer.
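
A sketch of that in Python - the facts, weights and thinking budget are all invented numbers:

def decide(facts, weights, budget):
    # look at the heaviest facts first; stop when the thinking budget runs out
    score = 0.0
    for cost, name in enumerate(sorted(weights, key=weights.get, reverse=True)):
        if cost >= budget:
            break                     # snap decision: the minor facts go unread
        score += weights[name] if facts[name] else -weights[name]
    return score > 0                  # e.g. attack if the evidence nets out positive

facts   = {"ordered": True, "attacked": False, "angry": True, "drunk": False}
weights = {"ordered": 4.0, "attacked": 3.0, "angry": 1.0, "drunk": 0.5}
print(decide(facts, weights, budget=2))  # True: with budget 2, the order carries it
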
The way information reduction works in propositional logic is simple and reduces statements fairly easily. For instance, take the simple logic functions
a and b
a or b
a xor b
If you ignore the 'a' fact then they all collapse to the single-fact approximation
b
I could not think of a real-world example this doesn't make sense in... please tell me if you can.

This can be expanded directly into condition/action rules.
Say you had four conditions:
a - Angry
b - Drunk
c - Ordered
d - Attacked
and an action
e - Attack
With the c/a rule
If a and b or a and d or c then e (with 'and' binding more tightly than 'or')
you can get rid of two factors to simplify the condition statement.
The initial statement translates as
If I'm angry and drunk
or
I'm ordered to fight
or
They attack me and I'm angry
then
Attack them
For an example of reducing the cognitive process, forget complex social factors like anger or drunkenness and concentrate on orders or physical danger.
Reducing the logic functions as suggested results in
if c or d then e
If I'm ordered to fight or I'm attacked, attack back.
This doesn't have the complexity of a great many factors, but it still makes for a rational, simplified decision.
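
Or, as code - note the reduced rule isn't logically equivalent to the full one, just a cheaper approximation:

def full_rule(angry, drunk, ordered, attacked):
    # (a and b) or (a and d) or c
    return (angry and drunk) or (angry and attacked) or ordered

def reduced_rule(ordered, attacked):
    # if c or d then e: drop the social factors entirely
    return ordered or attacked

# both versions attack when ordered to, whatever the mood
print(full_rule(angry=False, drunk=False, ordered=True, attacked=False))  # True
print(reduced_rule(ordered=True, attacked=False))                         # True
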
With the more subtle weighting of an NN, the level of processing is directly proportional to the complexity (number of hidden layers) of the net.
If you remove input a, then all its connections disappear from the input layer to the next layer (presuming we're talking about a simple layered network). However, it is unlikely that any second-layer neuron has inputs only from 'a', and if one did, then only that neuron's connections would disappear from the following layer.
So say you had 6 inputs and two hidden layers with 6 neurons each, going to a six-neuron output (keeping it simple), with every neuron in layer n connecting to every neuron in layer n+1. Then you have 36 connections from layer 1 to 2, 36 from 2 to 3 and 36 from 3 to the output layer, giving 108 connections in total.
Removing input 'a' will only remove the six connections into layer 2, dropping the total to 102, or by 1/18th.
If you had six inputs directly connected to 6 outputs, it would drop by 1/6th.
So the more hidden layers there are, the smaller the proportional saving from simplifying a neural net, at least in terms of simple layered networks.
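
A quick check of the arithmetic in Python:

def connections(layer_sizes):
    # total connections in a fully-connected layered net
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

for sizes in ([6, 6, 6, 6], [6, 6]):                # deep net, then direct net
    full = connections(sizes)
    cut  = connections([sizes[0] - 1] + sizes[1:])  # drop input 'a'
    print(full, full - cut, (full - cut) / full)    # 108, 6, 1/18 then 36, 6, 1/6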

So the easy way to reduce processing time for AI is to reduce the number of facts considered. It generally appears that the underlying structures and algorithms have to be altered very little to deal with a variable number of facts, while processing time (in simple cases at least) reduces roughly linearly.

Any comments?

