
Ender1618

Posted 07 March 2013 - 02:36 PM

So I have this mediocre sensory system for my AI agents. It's attached to the character's face and moves along with the animation. It has a frustum for main sight, a second wider one for peripheral vision, and a sphere for auditory perception. I raycast to all perceivable relevant objects (mostly just the player(s)) at a specified frequency slower than the frame rate.

So this gives me a range of confidence in my detections.

How do people typically deal with this range of confidence (or do most people not have it at all)?
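One common way to get that range of confidence is to integrate it over time instead of treating each raycast as a yes/no. Here is a minimal sketch, assuming the sensor already reports which volume (main frustum, peripheral frustum, or neither) the target fell in this tick; the volume names and gain/decay rates are illustrative, not from any particular engine:

```python
# Illustrative gain/decay rates, in confidence units per second.
RATES = {
    "main": 2.0,        # fast gain while in the main sight frustum
    "peripheral": 0.5,  # slower gain in the wider peripheral frustum
    "none": -1.0,       # decay while the target is in neither volume
}

def update_confidence(confidence, volume, dt):
    """Integrate detection confidence over dt seconds, clamped to [0, 1]."""
    confidence += RATES[volume] * dt
    return max(0.0, min(1.0, confidence))
```

With rates like these, a target centered in main sight is fully confirmed in half a second, while a peripheral glimpse takes a couple of seconds, which tends to read as more believable than instant detection.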

I have a behavior tree implementation I wrote (which is my only decision system) that essentially just does variable lookups into what I call a perception blackboard (which holds more than just what the sensory system puts in it: alertness, fear, whether I'm being shot at, etc.).

Do people typically deal with the confidence levels of some perception types within the logic laid out by the BT, or is that better handled by more custom code that interprets them and emits a detect/no-detect signal? Typically, I mean. I find my BTs getting pretty complex dealing with confidence levels along with everything else they need to do.
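One way to keep the BT simple is to put a small filter between the sensory system and the blackboard that converts the continuous confidence into a stable boolean, using two thresholds (hysteresis) so the flag doesn't flicker near a single cutoff. A hedged sketch, with illustrative threshold values:

```python
class DetectionFilter:
    """Turn a continuous confidence into a stable detected/not-detected
    flag via hysteresis, so the behavior tree only reads a boolean off
    the blackboard. Threshold values here are illustrative."""

    def __init__(self, rise=0.7, fall=0.3):
        self.rise = rise        # confidence must exceed this to switch on
        self.fall = fall        # and drop below this to switch off
        self.detected = False

    def update(self, confidence):
        if not self.detected and confidence >= self.rise:
            self.detected = True
        elif self.detected and confidence <= self.fall:
            self.detected = False
        return self.detected
```

The BT then branches on a clean "target detected" flag, while anything that genuinely needs the raw confidence (e.g. an alertness stat) can still read it directly.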

And then there is also my chase system. Say I'm an enemy melee AI and I perceive the player. I immediately rotate my head (with the sensory system attached) as fast as is believable to focus (center the sight frustum) on the player. Then I path-plan and give chase. By the way, is it customary to continuously re-plan from your current position to the player's current position at a particular frequency?
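A common pattern is to throttle re-planning rather than re-path every frame: re-plan on a fixed interval, or earlier if the target has strayed far from the goal the current path was planned to. A minimal sketch of that policy, with made-up tuning values (positions are 2D tuples here just for illustration):

```python
import math

class ReplanPolicy:
    """Throttled re-planning: re-path when a fixed interval elapses or
    the target has drifted far from the last planned goal. The interval
    and drift distance are illustrative tuning values."""

    def __init__(self, interval=0.5, max_goal_drift=2.0):
        self.interval = interval
        self.max_goal_drift = max_goal_drift
        self.time_since_plan = 0.0
        self.last_goal = None

    def should_replan(self, target_pos, dt):
        self.time_since_plan += dt
        drifted = (self.last_goal is not None and
                   math.dist(self.last_goal, target_pos) > self.max_goal_drift)
        if self.last_goal is None or drifted or self.time_since_plan >= self.interval:
            self.last_goal = target_pos
            self.time_since_plan = 0.0
            return True
        return False
```

The drift check keeps the agent responsive to a dodging player without paying the pathfinding cost every frame.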

So as I chase the player to get within melee distance, the player somehow gets out of sight (perception). What I was thinking is that since I know the player's last position and velocity, I can extrapolate where I think he will be within a certain max time delta, then re-plan a path to that location (assuming it doesn't hit something first). When I get there (or along the way), if I see him again I go back into basic chase mode; if I don't, I transition into a search mode (randomly choosing nearby points on the navmesh?). Does that sound reasonable? Or do people mostly just take the first perception and then chase with perfect knowledge of where the player is for a certain amount of time?
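The extrapolation step described above can be sketched as a one-liner: project the last seen position along the last seen velocity, capped at a max time delta so the guess can't run off forever. This is a hedged sketch with 2D tuple positions; a real version would also clamp or raycast the result against the navmesh:

```python
def extrapolate_last_known(pos, vel, time_lost, max_dt=2.0):
    """Project the last seen position along the last seen velocity,
    capped at max_dt seconds of extrapolation. max_dt is an
    illustrative tuning value."""
    dt = min(time_lost, max_dt)
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
```

The cap matters: past a second or two of lost contact, a straight-line guess is usually wrong anyway, which is exactly where falling back to a search mode makes sense.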

I have built basic systems like this before, but without the confidence-based perception system, and with a perfect-knowledge chase. It didn't quite seem believable (which could be due to my lack of other factors).

 

Also this is not necessarily a stealth game.

 

Any suggestions/experiences?

 

And yes, I have looked at the Thief perception system page; a difference is that my logic is handled almost exclusively by the BT right now.

