
Virtual Entities


14 replies to this topic

#1 Ashaman73   Crossbones+   -  Reputation: 5846


Posted 10 January 2011 - 04:10 AM

I've got a little issue with my AI entities. My game levels are large enough to hold a few hundred NPC entities, but CPU power will most likely only allow 60-80 active entities (empirical values).
I built a micro-threading FSM framework including time-slicing for each individual entity, a job queue for A* requests, a freezing state for idle times, etc., all for better load balancing, but it still isn't enough. The costs of planning and movement (including physics) for a single entity are just too high, so I think I need a solution other than further optimizing the AI/physics code.

The basic idea is to have a limited pool of active entities which are spawned on-the-fly in areas of activity. If you only controlled the player this would be quite easy, but you also control your own entities (up to 30). On the other hand, this game is more like a simulation. Spawning all entities on-the-fly would sacrifice the simulation aspect for non-player-controlled entities, though I could live with that decision.

So far I have the following ideas:
I) Just let up to 40 entities roam the level and try to "attract" them to the player and player-controlled entities.

II) Divide the level into sections and give each section an activity state. Inactive sections release all entity resources, whereas active sections spawn entities on-the-fly, much like L4D. The player leads to a higher activity state, whereas player-controlled entities lead to lower activity states (a rough sketch of this idea follows below the list).

III) Serialize/deserialize entities on-the-fly. The original virtual entity population still exists, but once an entity is out of range of any "active entity" (player or player-controlled entity), it is serialized and removed from the active simulation.

This must be a known problem in game development; I just don't want to reinvent the wheel. Any help will be appreciated.
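To make idea II a bit more concrete, here is a minimal sketch. The Section struct, the activity smoothing and the spawn/despawn hooks are all hypothetical placeholders, not an actual implementation:

#include <cstdio>

// Hypothetical sketch of idea II: sections carry an activity level that
// controls how many entities they are allowed to keep alive.
struct Section {
    float activity = 0.0f;   // 0 = inactive, 1 = fully active
    int   liveEntities = 0;  // entities currently spawned in this section
    int   maxEntities  = 8;  // population cap when fully active
};

void updateSection(Section& s, bool playerNearby, bool ownUnitNearby, float dt)
{
    // The player raises activity towards full, player-controlled units raise
    // it less, and an empty section cools down and frees its resources.
    float target = playerNearby ? 1.0f : (ownUnitNearby ? 0.5f : 0.0f);
    s.activity += (target - s.activity) * dt;            // simple smoothing

    int allowed = static_cast<int>(s.activity * s.maxEntities);
    while (s.liveEntities < allowed) ++s.liveEntities;    // spawnEntity(s);
    while (s.liveEntities > allowed) --s.liveEntities;    // despawnEntity(s);
}

int main()
{
    Section s;
    for (int frame = 0; frame < 5; ++frame) {
        updateSection(s, /*playerNearby=*/true, /*ownUnitNearby=*/false, 0.5f);
        std::printf("activity=%.2f live=%d\n", s.activity, s.liveEntities);
    }
}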


#2 Overdrive   Members   -  Reputation: 187


Posted 10 January 2011 - 04:32 AM

When dealing with huge numbers of entities, an AI LOD scheme for the distant entities can work.
It works much like model LOD: you have a rough version of the AI.

This means, for example, that you don't need to smooth paths when pathfinding, or that you use a higher-level graph (with a much smaller number of nodes) on which to compute the paths.
That graph wouldn't be as fine-grained as the one used when the entity is close to the player, so pathfinding is faster.
Don't update animations on them, etc.
You can think of an equivalent for every task an AI entity needs to do.

When the entity becomes relevant to the player (probably based on distance / line of sight), you switch to the "normal AI" with all the details.
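A minimal sketch of such a distance-based AI LOD switch. The Npc struct, the radius value and the two update branches are hypothetical, just to show the idea:

#include <cmath>

// Hypothetical sketch of distance-based AI LOD: far entities get the coarse
// pathfinder and no animation updates, near ones get the full update.
enum class AiLod { Full, Coarse };

struct Npc {
    float x = 0, y = 0;
    AiLod lod = AiLod::Coarse;
};

void updateAi(Npc& npc, float playerX, float playerY)
{
    const float fullDetailRadius = 50.0f;    // tune per game
    float dist = std::hypot(npc.x - playerX, npc.y - playerY);
    npc.lod = (dist < fullDetailRadius) ? AiLod::Full : AiLod::Coarse;

    if (npc.lod == AiLod::Full) {
        // fine graph pathfinding, path smoothing, animation, full sensors ...
    } else {
        // coarse graph with few nodes, no smoothing, no animation updates ...
    }
}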

#3 apatriarca   Crossbones+   -  Reputation: 1476


Posted 10 January 2011 - 04:59 AM

What about using a hierarchical approach? You may first work on groups of entities (making higher level decisions) and then on single entities. You may then always update groups, but only the individuals in active areas.
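A minimal sketch of what this two-tier update could look like; Group, Individual and the goal/behaviour hooks are hypothetical names, not a real API:

#include <vector>

// Hypothetical sketch of the hierarchical idea: groups always get a cheap
// strategic update, individual members only when their area is active.
struct Individual { bool detailedAi = false; };

struct Group {
    std::vector<Individual> members;
    bool inActiveArea = false;
    int  goalWaypoint = 0;              // group-level decision
};

void updateGroups(std::vector<Group>& groups)
{
    for (Group& g : groups) {
        // Always: pick a group goal.  chooseGroupGoal(g);  // cheap

        // Only individuals inside active areas run their full behaviour.
        if (!g.inActiveArea) continue;
        for (Individual& m : g.members) {
            m.detailedAi = true;        // runBehaviourTree(m), steering, ...
        }
    }
}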

#4 Overdrive   Members   -  Reputation: 187


Posted 10 January 2011 - 06:05 AM

What about using a hierarchical approach? You may first work on groups of entities (making higher level decisions) and then on single entities. You may then always update groups, but only the individuals in active areas.


Or you can combine the two methods (simpler pathfinding, etc.) and additionally apply them to groups of units, if your game has groups of units.
It really depends on how your AI is currently structured.

#5 hiigara   Members   -  Reputation: 108


Posted 10 January 2011 - 04:05 PM

Two levels of detail is a good solution. When searching a larger radius of terrain, ignore details; only when the monster is close to the player do you search the small radius around it in full detail.
Sharing expensive computations between monsters will also help. Group monsters that are close to each other and compute a path (or another low-level result, like predicting where the enemy will be) only once, then share that result between all monsters in the group.
Each individual monster can then decide whether to use the path or make its own high-level decisions based on the shared low-level results.
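A minimal sketch of the shared-computation idea; Path, Monster, the cluster layout and computeClusterPath are all hypothetical stand-ins for the real A* request:

#include <vector>

// Hypothetical sketch of sharing one expensive computation per cluster of
// monsters: the path is computed once per cluster and reused by its members.
struct Path { std::vector<int> waypoints; };

struct Monster {
    int clusterId = -1;
    int sharedPathIndex = -1;   // index into the path cache, -1 = none
};

Path computeClusterPath(int clusterId)     // stand-in for the real A* request
{
    return Path{ { clusterId, clusterId + 1 } };
}

void updateCluster(std::vector<Monster>& members, std::vector<Path>& pathCache, int clusterId)
{
    pathCache[clusterId] = computeClusterPath(clusterId);   // one A* per cluster
    for (Monster& m : members) {
        m.sharedPathIndex = clusterId;     // everyone reuses the same result
        // Each monster still makes its own high-level decision on top of it,
        // e.g. follow the shared path, hold position, or break off to attack.
    }
}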

#6 IADaveMark   Moderators   -  Reputation: 2222


Posted 10 January 2011 - 08:43 PM

Also remember, you don't have to do high-level AI processing every frame. For many decisions, you can get away with doing things once or twice a second.
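A minimal sketch of this kind of throttling, with a hypothetical Agent struct and an assumed 0.5 s think interval:

// Run expensive decisions at a low, fixed rate instead of every frame.
struct Agent {
    float decisionTimer = 0.0f;     // seconds until the next "think"
};

void updateAgent(Agent& a, float dt)
{
    a.decisionTimer -= dt;
    if (a.decisionTimer <= 0.0f) {
        a.decisionTimer += 0.5f;    // think twice per second
        // expensive part: target selection, behaviour tree, planning ...
    }
    // cheap part still runs every frame: steering, animation blending ...
}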
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#7 Ashaman73   Crossbones+   -  Reputation: 5846


Posted 11 January 2011 - 01:44 AM

When dealing with huge numbers of entities, an AI LOD scheme for the distant entities can work.
It works much like model LOD: you have a rough version of the AI.

My problem is tightly coupled to the physics engine. My entities move in a 3D environment controlled by a physics engine, so improving only the AI would just shift the bottleneck to the physics engine.

What about using a hierarchical approach? You may first work on groups of entities (making higher level decisions) and then on single entities. You may then always update groups, but only the individuals in active areas.

This sounds more feasible for my requirements.

Sharing expensive computations between monsters will also help. Group monsters that are close to each other and compute a path (or another low-level result, like predicting where the enemy will be) only once, then share that result between all monsters in the group.
Each individual monster can then decide whether to use the path or make its own high-level decisions based on the shared low-level results.

Pathfinding isn't the problem (at least not yet :-) ). Sensory scanning of the surroundings and a scripted behaviour tree are quite expensive, but incredibly flexible. I think the behaviour tree, written in Lua, is currently my bottleneck. Although I could optimize it, I want to solve the problem on a higher level, like L4D.


Also remember, you don't have to do high-level AI processing every frame. For many decisions, you can get away with doing things once or twice a second.

Micro-threading and time-slicing already solve this. My entities have a "heartbeat" which forces an update interval between 100 and 2000 ms. The heartbeat increases when the entity gets in contact with other entities (feels threatened or has been attacked) and decreases when nothing special happens.
Still, the entity exists and moves through the world (physics engine + steering behaviour), and that part can't be optimized in a similar way.

So far I think that improving AI performance (LOD etc.) will just leave me with a bigger physics performance problem. Currently my only hope seems to be removing the entities from the (physics) world simulation entirely, to get rid of both the AI and the physics load.

First idea (LOD):
1. The entity spawns in the world with full sensory abilities, behaviour tree and physics representation.
2. When the entity leaves the area of focus defined by the player, it loses its physics representation and most of its sensory abilities, and a more restricted behaviour tree takes over (low detail level).
3. The entity interacts on a meta-level; interaction with other meta-level entities is possible but restricted.
4. The entity recovers its sensory abilities, behaviour tree and physics representation once it enters the focus area again (it will be respawned at a plausible spot). A rough sketch of this detail switch follows after the second idea.


Second idea (hierarchy):
1. The AI controls areas of the world more on a strategy level (N units of X dwell in area Y, etc.).
2. The AI makes decisions on the strategy level.
3. Once an area gains focus, entities are spawned on-the-fly according to the area profile.
4. The entities are just representations of the strategy profile; they act according to it and give some feedback.
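A minimal sketch of the detail switch from the first idea. VirtualEntity, DetailLevel and the create/destroy hooks are hypothetical placeholders, not the actual engine code:

// An entity drops its physics body and full behaviour tree outside the focus
// area and becomes a cheap "meta" record; re-entering the focus area restores
// the full version at a plausible spot.
enum class DetailLevel { Full, Meta };

struct VirtualEntity {
    DetailLevel level = DetailLevel::Full;
    float x = 0, z = 0;             // last plausible position, kept in Meta
    void* physicsBody = nullptr;    // stand-in for the real rigid body
};

void setDetailLevel(VirtualEntity& e, bool insideFocusArea)
{
    if (!insideFocusArea && e.level == DetailLevel::Full) {
        // destroyPhysicsBody(e.physicsBody); switch to the restricted tree
        e.physicsBody = nullptr;
        e.level = DetailLevel::Meta;
    } else if (insideFocusArea && e.level == DetailLevel::Meta) {
        // e.physicsBody = createPhysicsBody(e.x, e.z);  // respawn nearby
        e.level = DetailLevel::Full;
    }
}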


#8 leopardpm   Members   -  Reputation: 148


Posted 26 January 2011 - 06:48 PM

Ashaman, how is it coming?

I was thinking about your 'heartbeat' idea. What exactly does the heartbeat control? The amount of time sliced to the entity, or does it indicate the entity's next scheduled time slice?


#9 Ashaman73   Crossbones+   -  Reputation: 5846


Posted 27 January 2011 - 12:51 AM

Ashaman, how is it coming?

I was thinking about your 'heartbeat' idea. What exactly does the heartbeat control? The amount of time sliced to the entity, or does it indicate the entity's next scheduled time slice?

The spawning on-the-fly thing has been implemented and works quite well, though it will need more testing and tweaking.

The 'heartbeat' controls the frequency of calling an entity's update method, which handles the AI part of the entity (the movement part is still updated every frame). The heartbeat varies between 100 ms and 2000 ms, which means a passive entity is only updated every 2 seconds.

This can lead to delayed reactions, but on the other hand a delayed reaction from a surprised creature feels right. The heartbeat is controlled by sensory and action events. When an entity enters combat the heartbeat becomes very high; when an entity sees a threat or an interesting item the heartbeat slowly increases. When an entity does not encounter anything interesting and is not in combat, it slowly decreases down to a 2000 ms interval.

PS: I use an earliest-deadline-first scheduling algorithm for heartbeat management.
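A minimal sketch of such a heartbeat scheduler; HeartbeatEntry, the per-update cost and the fixed interval are hypothetical and only illustrate the earliest-deadline-first idea:

#include <functional>
#include <queue>
#include <vector>

// Each entity has a deadline (now + heartbeat interval); the entity with the
// earliest deadline is updated first, within a per-frame budget.
struct HeartbeatEntry {
    double deadline;     // absolute time in seconds
    int    entityId;
    bool operator>(const HeartbeatEntry& o) const { return deadline > o.deadline; }
};

using HeartbeatQueue = std::priority_queue<HeartbeatEntry,
                                           std::vector<HeartbeatEntry>,
                                           std::greater<HeartbeatEntry>>;

void runHeartbeats(HeartbeatQueue& queue, double now, double budgetPerFrame)
{
    double spent = 0.0;
    while (!queue.empty() && queue.top().deadline <= now && spent < budgetPerFrame) {
        HeartbeatEntry e = queue.top();
        queue.pop();
        // updateEntityAi(e.entityId);          // the expensive part
        spent += 0.001;                         // assumed cost per update
        double interval = 0.1;                  // 100 ms .. 2000 ms, driven by events
        queue.push({ now + interval, e.entityId });
    }
}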

#10 ApochPiQ   Moderators   -  Reputation: 12394


Posted 03 February 2011 - 06:10 PM

Your number of active entities sounds painfully low... what exactly is the physics/steering stuff doing? What is your hardware target?

It seems to me like a simplification of the physics side would open you up to a lot of heuristic improvements on the AI side, and vice versa; I'd start with making the physics as simple as possible in the case where the player isn't actively observing anything, and ramping it up to full detail when a player-controlled entity is nearby. This is really just the LOD suggestion from earlier, except applied to your entire game simulation rather than just the AI side.


Of course, more details on the nature of your simulation would be incredibly helpful.


For comparison, X3: Terran Conflict simulates several thousand entities actively across the game universe in realtime, on a commodity PC.

#11 Ashaman73   Crossbones+   -  Reputation: 5846


Posted 09 February 2011 - 04:14 AM

Your number of active entities sounds painfully low... what exactly is the physics/steering stuff doing? What is your hardware target?

"..painfully low.." :(
Well, I'm using bullet as physics engine, have a terrain-heightmap collision handler and several hundred static collision objects (cave models) which consist of low-res model (tri-soup, 100-300 tris per model). Each entity will do collision detection only vs a small terrainmap part and only vs 1-3 of the more complex collision objects (cave models), thought the collision detection will atleast always have to handle one of the cave models, because the entity is inside its AABB. Each entity consists only of a sphere or capsule and is steered by forces in combination with one ray-cast test.

Entity vs entity collision is turned off if not one of the colliding objects is the player. For steering behaviour the closest X objects are detected and cached over several frames.

AI is based on a behaviour tree written in lua. The behaviour tree has an update frequency of 0.5 to 5 hz.

My target hardware is 2-4 year old PCs, so powerful enough.
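For reference, a minimal sketch of how such selective collision can be expressed in Bullet via collision filter groups/masks. The group names, mass and shape dimensions are made up here and this is not the poster's actual setup:

#include <btBulletDynamicsCommon.h>

// An NPC body that collides with the world and the player, but not with
// other NPCs (hypothetical groups, illustration only).
enum CollisionGroups {
    GROUP_WORLD  = 1 << 0,   // terrain, cave models
    GROUP_PLAYER = 1 << 1,
    GROUP_NPC    = 1 << 2
};

int main()
{
    btDefaultCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);

    btCapsuleShape npcShape(0.4f, 1.4f);            // radius, height
    btDefaultMotionState motion;
    btRigidBody::btRigidBodyConstructionInfo info(70.0f, &motion, &npcShape);
    btRigidBody npc(info);

    // Filter group and mask decide which pairs the broadphase even considers.
    world.addRigidBody(&npc, GROUP_NPC, GROUP_WORLD | GROUP_PLAYER);

    world.stepSimulation(1.0f / 60.0f);
    world.removeRigidBody(&npc);
    return 0;
}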

For comparison, X3: Terran Conflict simulates several thousand entities actively across the game universe in realtime, on a commodity PC.

A space game has extremely good conditions for collision detection. Objects are seldom in contact with other objects, whereas in my game each entity has to check at least against the terrain and one cave model. I think the cave models are the most expensive collision tests.

#12 ApochPiQ   Moderators   -  Reputation: 12394


Posted 09 February 2011 - 07:17 AM

Why are you doing full physics checks for entities which may be way outside any viewing/observation area?

That's the entire point of the LOD recommendation: don't do math you don't have to do, and you save a ton of CPU time. Simplify the navigation of entities using, say, navigation meshes or waypoint graphs, and you'll get a corresponding boost in performance.

#13 Ashaman73   Crossbones+   -  Reputation: 5846


Posted 11 February 2011 - 12:39 AM

In the end you all convinced me to add some kind of level of detail, combined with my own approach. The implementation isn't finished yet, but the major hurdles have been cleared.
This is what I've done:
The entire map is divided into sections. Each section contains information about creatures and a "spawn rate". Depending on these attributes, creatures start spawning when the player is close to the corresponding section. Entity distribution and count are managed and limited. Additionally, I added a light-physics simulation which disables collision detection and external forces (gravity) and moves the entity along a waypoint graph. The more expensive steering behaviours (which need to scan the surroundings) are also disabled in this mode. This mode is active for entities outside the player's visible view distance.
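A minimal sketch of what such a "light physics" update could look like; Vec3, LightEntity and the speed value are hypothetical, not the actual implementation:

#include <cmath>
#include <cstddef>
#include <vector>

// Outside the player's view distance the entity ignores collision and gravity
// and just glides along a waypoint graph at its nominal speed.
struct Vec3 { float x, y, z; };

struct LightEntity {
    Vec3 pos{};
    std::vector<Vec3> waypoints;   // path through the waypoint graph
    std::size_t target = 0;
    float speed = 2.0f;            // metres per second
};

void updateLightPhysics(LightEntity& e, float dt)
{
    if (e.target >= e.waypoints.size()) return;
    Vec3 t = e.waypoints[e.target];
    float dx = t.x - e.pos.x, dy = t.y - e.pos.y, dz = t.z - e.pos.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    float step = e.speed * dt;
    if (dist <= step) {            // reached the waypoint, aim for the next one
        e.pos = t;
        ++e.target;
    } else {                       // move straight towards it, no collision test
        e.pos.x += dx / dist * step;
        e.pos.y += dy / dist * step;
        e.pos.z += dz / dist * step;
    }
}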

I don't believe this will enable simulation of the whole level (several hundred entities), but it is very flexible and scalable, and the active-entity limit is relaxed. The best thing is that it enables new features like entities summoning new entities, which often leads to nasty issues in rogue-like games.
:D

#14 wodinoneeye   Members   -  Reputation: 618


Posted 13 February 2011 - 01:54 AM

I looked at this problem years ago, and you basically want a LOD (level of detail) mechanism that downgrades an entity's AI processing when it is out of the player's view. This can be done by increasing the granularity of the decision points and the actions taken.

When out of view, why do pathfinding when you can simply jump to the target point (after a proper delay), or move along a simplified sequence of waypoints (often from the 'coarse' pathfinder)?

It all depends on whether the simulation needs to be extremely accurate at all times or whether generalized results/interactions are good enough. If it doesn't, then 'out of sight' interactions can be generalized/abstracted to eliminate all the fine-level interactions the AI would otherwise have to calculate. The generalization might go as far as simply carrying out the entity's expected interactions with little or no positional change in the world (except that a position is maintained to check for reactivation when the player nears).

Some entities may simply be 'props' used for ambience: spawned probabilistically for an area at the edge of the player's view (radius), moving about (with full ability to interact), and disappearing as they move out of the player's view. When far enough from the player (seen in the distance), such 'props' might follow a routine script of actions that takes very little AI. This could be daily schedules that are calculated ahead of time and reused for routine activity (including saved paths/routes where finesse/realism would not be seen and thus need not have CPU wasted on it).
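A minimal sketch of such a precomputed daily schedule; ScheduleEntry, PropEntity and the lookup are hypothetical and assume a non-empty, time-sorted schedule:

#include <vector>

// An out-of-view "prop" entity just looks up where it is supposed to be at
// the current time of day instead of running any real AI.
struct ScheduleEntry {
    float startHour;     // in-game hour when this activity begins
    float x, z;          // precomputed position (or start of a saved route)
};

struct PropEntity {
    std::vector<ScheduleEntry> schedule;   // sorted by startHour, non-empty
};

ScheduleEntry currentActivity(const PropEntity& p, float hourOfDay)
{
    ScheduleEntry current = p.schedule.front();
    for (const ScheduleEntry& e : p.schedule) {
        if (e.startHour <= hourOfDay) current = e;   // last entry that started
    }
    return current;    // place the entity here if the player gets close
}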

Some parts of the simulation deal with groups of objects working/interacting cooperatively, and these can be abstracted to remove the individuals' interactions, replacing them with simplified results applied to the world.


The hardest aspect can be when the player moves and the 'full detail' window around him forces the lower-LOD entities to transition back up to full detail (preferably a little before the player can actually see/interact with them). The coarse granularity would likely leave them in 'safe' positions, which might have to be randomized a bit to look more realistic (both the positions and where the entities are in the fine-level sequences of actions that make up their activities). When several entities 'expand' in close vicinity, they might need adjustments to be properly/naturally placed and in the middle of their expected activities.

More complex AIs often use planners with continuously cached state, which might have to be rebuilt quickly for the current context (the fine-detail past having been discarded when the entity shifted to low-LOD AI). You might have to reactivate to fine detail earlier to allow for this reconstitution and for the AI to orient itself to the current situation.

Boundary end-cases would be:

Group entities (if you have them) where half are inside the player window and the rest outside -- when would the transition to coarse LOD happen? (Probably once all of them are outside, or maybe by splitting them into two groups, one in full detail and the other abstract.)

Entities in high/fine detail may have to interact in high detail with other entities also running at the high level to simulate their actions correctly (where players may see them). If entities past the boundary are forced to interact in full detail (with those inside), then they themselves, being in full detail, might interact with other adjacent entities even further out -- cascading outward and likely forcing too many entities to run too much AI processing. So it may take some clever logic to have the entities at the boundary interact at a high level with those within the player's view and abstractly with those outside, simultaneously.


Visually, players may see LARGE entities from a long distance (depending on how your graphics scheme works), and these may have to be run at high/fine detail and be seen interacting with unseen smaller entities. Do you keep a bubble of high-level simulation around those objects?

When the player has remote viewing or magnification (a telescope), whatever is viewed would need to be simulated in high/fine detail -- again possibly needing a bubble of high-level AI simulation in that vicinity -- and if the player can rapidly swing their view, you might have a serious problem reconstituting that detail fast enough.


Looking at such designs, I even foresee a 'micro' view with a very fine level of detail immediately within the player's reach, where very close interactions with small objects (or with objects manipulated by entities adjacent to the player) would be handled. This is just another case of objects that disappear with distance and are filled in when the player is near (very near, in the micro-detail case). These may not be 'entities' themselves, but the nearby entities may manipulate them and interact with the player through them when so close, and the entities' behaviour around these objects would change as you (the player) got far enough away (and interactions of that type cease). The same LOD mechanism would be used to control the AI for the simulation.
--------------------------------------------
Ratings are Opinion, not Fact

#15 IADaveMark   Moderators   -  Reputation: 2222


Posted 13 February 2011 - 08:35 AM

If you have AI Game Programming Wisdom handy, Mark Brockington (BioWare) had an article in there about how to do LOD for a large RPG (in this case, NWN). He used four layers, IIRC.
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"
