
priyesh

Member
  1. priyesh

    help me understand utility AI

    I ... I think I get it. I think I understand! You're right, I was overthinking it. It sounded daunting to program that many "solutions" for each agent to consider for their actions. But even if my initial design doc might try to organize those into groups of actions, that's just because it's easier to plan the game that way. In the code, it's still just a long list of actions. Culling the agent's mental model seemed so necessary before. But you're right that 100 characters each considering 100 actions is still only 10000 calculations. And even so, there's some obvious culling to do in terms of "too far away, don't bother", or "don't check this every damn second". Thanks so much everyone. Really appreciate all of this.
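
A minimal sketch of the flat-list approach being agreed on here, assuming invented Agent/Action classes, a distance cull, and a per-agent think timer (none of these names come from the thread):

```python
import math
import random

MAX_CONSIDER_DIST = 50.0   # "too far away, don't bother"
THINK_INTERVAL = 2.0       # "don't check this every second"

class Action:
    def __init__(self, name, position, score_fn):
        self.name = name
        self.position = position    # location of whatever the action targets
        self.score_fn = score_fn    # callable(agent) -> float in [0, 1]

class Agent:
    def __init__(self, name, position):
        self.name = name
        self.position = position
        self.hunger = random.random()   # 0 = full, 1 = starving
        self.next_think = 0.0

def choose_action(agent, actions, now):
    """One flat list of actions; cull cheaply, then score whatever is left."""
    if now < agent.next_think:
        return None                     # not time to re-plan yet
    agent.next_think = now + THINK_INTERVAL

    best, best_score = None, 0.0
    for action in actions:
        if math.dist(agent.position, action.position) > MAX_CONSIDER_DIST:
            continue                    # cheap cull before any real scoring
        score = action.score_fn(agent)
        if score > best_score:
            best, best_score = action, score
    return best

# 100 agents x 100 actions is still only ~10,000 cheap evaluations per think pass.
actions = [Action(f"eat_berry_bush_{i}",
                  (random.uniform(0, 200), random.uniform(0, 200)),
                  lambda agent: agent.hunger)
           for i in range(100)]
agents = [Agent(f"npc_{i}", (random.uniform(0, 200), random.uniform(0, 200)))
          for i in range(100)]

for agent in agents:
    chosen = choose_action(agent, actions, now=0.0)
```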
  2. priyesh

    help me understand utility AI

    I guess this is the thing. We're not just talking about utility in a finite combat situation, but in a grand simulation sense. (Yeah, I know this is pretty ambitious -- Dwarf Fortress, The Sims, CK2, Prison Architect, and RimWorld as references.) If I effectively multiply the number of actions by the number of characters, that's a LOT of actions. I guess the best way to cull some of the actions is to only consider killing a finite list of "enemies". Or only consider killing the top 5 people I dislike. Similarly, with food I might consider only sources within range, plus whatever I have at home.
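
To illustrate the culling idea in this post, a rough sketch of candidate generation under assumed data structures (an opinion table and food sources with positions); every name and number below is made up:

```python
import math

def top_disliked(npc, opinions, n=5):
    """Only consider killing the n people this NPC dislikes most."""
    enemies = [(score, other) for other, score in opinions[npc].items() if score < 0]
    enemies.sort()                              # most negative opinion first
    return [other for _, other in enemies[:n]]

def food_in_range(npc_pos, food_sources, home_stockpile, max_dist=75.0):
    """Only consider food sources within range, plus whatever is at home."""
    nearby = [f for f in food_sources if math.dist(npc_pos, f["pos"]) <= max_dist]
    return nearby + home_stockpile

# Entirely made-up example data:
opinions = {"arthur": {"brom": -40, "cedric": 10, "dana": -75, "edna": -5}}
food_sources = [{"name": "berry_bush", "pos": (10, 12)},
                {"name": "boar", "pos": (300, 40)}]

kill_candidates = top_disliked("arthur", opinions)     # ["dana", "brom", "edna"]
food_candidates = food_in_range((0, 0), food_sources,
                                home_stockpile=[{"name": "dried_meat"}])
```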
  3. priyesh

    help me understand utility AI

    Thanks guys... this is helping me to get past my intuitions. Working through the problem myself, I'd want to break it apart into two stages (1: what's the goal that satisfies my main needs? 2: what's the action that satisfies the goal, and maybe even knocks off a few other needs in the process). But it seems like I could just flatten it and skip right to stage two: for every action I could take, what is each action's impact on my needs/goals? Like I said, it seems like a lot of functions -- one for every action. But it seems like any method of problem solving would get to that level of detail eventually.
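
A tiny sketch of what "skipping to stage two" could look like, assuming each action simply declares its estimated impact on every need and the score is a sum weighted by current urgency (needs, actions, and numbers are invented):

```python
# Current urgency of each need, 0..1 (invented):
needs = {"food": 0.9, "safety": 0.2, "status": 0.4}

# Each action declares its estimated effect on each need (also invented):
actions = {
    "eat_ration":      {"food": 0.6},
    "hunt_boar":       {"food": 0.8, "status": 0.1},   # feeds you AND trains you a bit
    "challenge_rival": {"status": 0.7, "safety": -0.3},
    "hide_in_cellar":  {"safety": 0.8},
}

def score(effects, needs):
    """Impact on every need, weighted by how urgent that need is right now."""
    return sum(effects.get(need, 0.0) * urgency for need, urgency in needs.items())

ranked = sorted(actions, key=lambda name: score(actions[name], needs), reverse=True)
# ranked[0] is "hunt_boar" (0.76), ahead of "eat_ration" (0.54).
```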
  4. priyesh

    help me understand utility AI

    "You can, if you choose, write a unique utility function for every possible action in your game. I would advise against it, instead preferring the pattern of having everything driven by simple pieces of data. A collection of modifiers as data along with a collection of the system's state as data, the result is a scalar value. But even so, you can do it as a bunch of functions if you choose."

    Yeah, I think that makes more sense to me. Use utility to set goals, which drive a "family" of actions. Goals like "find food", "fall back", "rise in rank", and "take revenge". "Find food" might simply eat what the NPC has, or hunt or forage, or even try to ration what food the NPC has left, depending on availability. "Fall back" might just take cover or call for help if those options make sense, but full-on retreat is always a last resort. "Rise in rank" might challenge a weak peer in the hierarchy, or might build an army to ambush him, or might detour the NPC to train up until they're ready. "Take revenge" might attack someone on the spot if no enemies/guards will stop them, but otherwise they might order an assassination if they have that power, or consider them an enemy and bide their time, or simply deal with it through insult or humiliation.

    ... but then I'm sort of seeing the other point. Once I have determined a suitable goal, am I going to pick a suitable action for that goal from a behavior tree -- essentially hard-coded logic? Or should I look at a more fluid series of utility scores? The latter makes sense: once the NPC decides their goal is to rise in rank, I do a more detailed utility scoring of the top two or three people they could challenge, and discover that one of them is also a hated enemy. (I really appreciate all this guidance, btw.) I guess my last question: is this sort of "two-stage analysis" something that actually has logical problem-solving value, leading to better decisions and reducing computational complexity? Or will implementing it be mathematically equivalent to just considering it all at once -- all the ways of finding food vs all the ways of taking revenge vs all the ways of rising in rank (etc)?
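
For reference, one common data-driven way to get "modifiers as data plus state as data in, scalar out" is to describe each action as a list of considerations with response curves. The sketch below is a generic illustration of that pattern, not the quoted poster's implementation, and every name and curve shape in it is an assumption:

```python
import math

def curve(x, kind, slope=1.0):
    """Map a 0..1 input onto a 0..1 utility. Two toy curve shapes only."""
    x = max(0.0, min(1.0, x))
    if kind == "linear":
        return x * slope
    if kind == "logistic":
        return 1.0 / (1.0 + math.exp(-12.0 * slope * (x - 0.5)))
    raise ValueError(kind)

# Actions as pure data: (state key, curve kind, slope) per consideration.
action_defs = {
    "find_food": [("hunger", "logistic", 1.0), ("food_nearby", "linear", 1.0)],
    "fall_back": [("danger", "logistic", 1.0), ("health_missing", "linear", 1.0)],
}

def score_action(name, state):
    total = 1.0
    for key, kind, slope in action_defs[name]:
        total *= curve(state[key], kind, slope)   # a zero consideration vetoes the action
    return total

state = {"hunger": 0.8, "food_nearby": 0.6, "danger": 0.3, "health_missing": 0.5}
scores = {name: score_action(name, state) for name in action_defs}
# {"find_food": ~0.58, "fall_back": ~0.04} with this state.
```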
  5. priyesh

    help me understand utility AI

    I did, actually, and it's much appreciated. It seems like common practice is to come up with a utility function for every action... but that seems like a LOT of actions. Let alone if I start creating utility functions for "two-in-one" actions.
  6. priyesh

    help me understand utility AI

    That's an important distinction, yeah. Maybe 20 or so utility functions, but perhaps hundreds of "motivators". A thought occurred to me because I really like the architecture that Kylotan suggested: having separate utility functions for every action is going to be highly redundant. This might be a moment where I could bring in some of those synergies. Let's say I have a utility function called "challenge someone for rank". It's calculated as the largest expected value of victory against someone my rank or higher. If I reach a certain threshold, I trigger a behavior tree that selects between several actions. The obvious action is "challenge the best option for victory". But the system would also consider actions that challenge the second-best option under a few circumstances, effectively aggregating the motivations. Maybe the next-best challenge would also kill someone I hate, or also kill a rival on the line of succession. Similar idea for the food vs combat example. If I'm starving, trigger a behavior tree that selects from several food-finding options. But in addition to "go run around looking for food", the system would also consider "if you're in a combat quest to kill something that is made of delicious meat, just carry on". I'm reluctant to invent my own solutions because I'm not arrogant enough to think I've figured out something no one else has. But this seems in line with what a few other people are suggesting, and so probably has some basis in good design?
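
A sketch of the two-step structure proposed here: one "challenge someone for rank" utility with a threshold, then a selection pass that adds bonuses for secondary motivations. The candidates echo the captains example elsewhere in this thread; the weights and threshold are invented:

```python
CHALLENGE_THRESHOLD = 0.5

def challenge_utility(candidates):
    """Drive to challenge anyone at all = best expected value of victory."""
    return max(c["victory_chance"] for c in candidates)

def pick_target(candidates, hate_bonus=0.15, succession_bonus=0.2):
    """Rank candidates by victory chance plus bonuses for secondary motivations."""
    def aggregate(c):
        bonus = 0.0
        if c["hated"]:
            bonus += hate_bonus         # also kills someone I hate
        if c["rival_heir"]:
            bonus += succession_bonus   # also clears the line of succession
        return c["victory_chance"] + bonus
    return max(candidates, key=aggregate)

candidates = [
    {"name": "Cameron", "victory_chance": 0.70, "hated": False, "rival_heir": False},
    {"name": "Donald",  "victory_chance": 0.60, "hated": False, "rival_heir": True},
    {"name": "Ernest",  "victory_chance": 0.30, "hated": True,  "rival_heir": False},
]

if challenge_utility(candidates) >= CHALLENGE_THRESHOLD:
    target = pick_target(candidates)    # Donald: 0.60 + 0.20 beats Cameron's 0.70
```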
  7. priyesh

    help me understand utility AI

    This is incredible guys. Open to other solutions and ideas, obviously. But I can work with this. The idea that I might have special utility functions that look for synergies seems like a good enough solution. I'm getting used to the idea that I might have hundreds of utility functions. It's going to be a daunting thing to tweak and balance... but in some ways, fairly straightforward to program.
  8. priyesh

    help me understand utility AI

    I mean, something I've frequently hacked together (and frequently heard) is that randomness can sometimes appear intelligent. If the NPC occasionally picks the second best choice in a computational sense, they might stumble onto the actual best choice in a strategic sense, and appear to outwit the player (not to mention other AIs). Still, I'm curious if there is a computational way to notice when there's more utility in doing the second best thing because it actually helps advance you towards another goal.
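
A small sketch of the "randomness can look intelligent" idea: pick among the top few scored actions with probability proportional to score, rather than always taking the top one (scores here are placeholders):

```python
import random

def pick_with_flair(scored_actions, top_k=3):
    """scored_actions: list of (name, score). Weighted-random among the best few."""
    best = sorted(scored_actions, key=lambda kv: kv[1], reverse=True)[:top_k]
    names = [name for name, _ in best]
    weights = [score for _, score in best]
    return random.choices(names, weights=weights, k=1)[0]

scored = [("attack_leader", 0.72), ("flank_left", 0.68),
          ("hold_position", 0.40), ("retreat", 0.10)]
print(pick_with_flair(scored))   # usually attack_leader, sometimes flank_left instead
```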
  9. priyesh

    help me understand utility AI

    Getting some incredible feedback here. It's greatly appreciated. While I have so many brilliant people here, I wanna ask... How do you handle "two birds, one stone" types of problem solving using utility AI? That is, there are two completely different utility functions competing for an NPC's attention, and in isolation, they will act on the highest one. But in actuality, there is a solution that allows them to maximize their utility in aggregate: doing both. Imagine something inspired by the Nemesis system from Shadow of Mordor. The AI calculates the utility of fighting different peers for status, and ranks them:
    Fight Captain Cameron: 70 (not a sure thing, but fairly good chance of beating him and rising up)
    Fight Captain Donald: 60 (almost as good)
    Fight Captain Ernest: 30 (too tough)
    But in addition to fighting for rank within the military... there is also the issue of fighting for rank within your family. Sort of a Game of Thrones / Crusader Kings fight for succession.
    Kill Brother Adam: 65 (some risk, but still good)
    Kill Brother Donald: 58 (slightly less good)
    Kill Brother Garth: 20 (very easily goes wrong)
    Now, on a very superficial level, the best chance to rise in the military is to challenge Cameron, and the best chance to rise in the line of succession is to murder Adam. But there's a two-birds-one-stone solution here: challenge Donald -- a rival captain and also your brother. Challenge him for military rank, and then fight him until death/exile for your birthright. Challenging Donald doesn't make sense when you consider those goals separately, because he doesn't have the highest utility for either goal. But when I consider them together, I might realize that two fairly challenging fights don't have as much utility as going for one slightly harder fight. This might not be the best example, but I can think of others. There's a high utility on combat training, but right now I'm starving, so there's a higher utility on finding food. But in theory, if I hunted a wild boar, it might take me a little longer to find food, but I'd also get my combat training in. Another example: there's a high utility on recruiting an ally, but I have so many enemies that the utility of eliminating an enemy is higher. But in theory, there is one enemy who hates me over a misunderstanding that could be cleared up, and then he might be willing to hire himself out as my ally. I could eliminate an enemy and gain an ally within the same interaction. Is this where a planning AI needs to come in?
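
Working the numbers from this post directly: if each concrete target is scored against both goals and the aggregate is compared, Donald comes out ahead even though he wins neither goal in isolation. The straight sum below is an assumption for the sketch; in practice the per-goal utilities would likely be combined with weights or diminishing returns:

```python
# Utilities from the post, per goal:
military = {"Cameron": 70, "Donald": 60, "Ernest": 30}   # fight for military rank
succession = {"Adam": 65, "Donald": 58, "Garth": 20}     # fight for the birthright

everyone = set(military) | set(succession)
aggregate = {who: military.get(who, 0) + succession.get(who, 0) for who in everyone}

best = max(aggregate, key=aggregate.get)
# Donald: 60 + 58 = 118, ahead of Cameron (70) and Adam (65) considered alone.
```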
  10. priyesh

    help me understand utility AI

    The breakdown of The Sims makes a ton of sense and lines up with some very surface-level breakdowns I've read. It's crazy to think the most detailed explanation I've read was in a reply on a forum post less than a day after I asked. Also thanks for applying it to my example. This is supremely helpful. I like how The Sims shows you what an NPC's motives are. Now that I understand that The Sims is actually way more complex "under the hood" and only reveals a few of the key utility functions, I should just figure out which utility functions to reveal, and which to leave "under the hood".

    My last question is the best practice for managing multiple behaviors from the same utility function. For example, what if I wanted the "retreat" motive to trigger three different behaviors: "call for help", "take cover and heal", "get the actual hell out of there". Would I just wait until the "retreat" motive becomes high enough and then trigger some kind of behavior tree or planner, which would then decide from that list of three actions? Or could I do this entirely from utility, just based on different thresholds? Or is this the reason that I might separate the "retreat" motive into separate utility functions for summoning vs healing vs retreating?
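
One possible shape for the "retreat triggers one of three behaviors" question: a single retreat motive decides whether to disengage, and a second, smaller utility pass decides how. All state fields, weights, and the threshold below are assumptions:

```python
RETREAT_THRESHOLD = 0.6

def retreat_motive(state):
    """How badly this NPC wants to disengage at all."""
    return min(1.0, state["danger"] * (1.0 - state["health"]))

def pick_retreat_behavior(state):
    """Second pass: which flavor of retreating fits the situation best."""
    options = {
        "call_for_help":       state["allies_nearby"],
        "take_cover_and_heal": state["cover_nearby"] * (1.0 - state["health"]),
        "get_out_of_there":    state["escape_route"] * state["danger"],
    }
    return max(options, key=options.get)

state = {"danger": 0.9, "health": 0.25, "allies_nearby": 0.1,
         "cover_nearby": 0.7, "escape_route": 0.8}

if retreat_motive(state) >= RETREAT_THRESHOLD:   # 0.675 here, so retreat
    behavior = pick_retreat_behavior(state)      # "get_out_of_there" wins at 0.72
```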
  11. I've been designing a survival game while taking inspiration from the NPCs of RimWorld and Prison Architect (and to some extent more AAA examples like The Sims, as well as rougher examples like Dwarf Fortress). It seems like a utility- or needs-based AI might be a good way to help the AI manage a variety of goals and lead to some dynamic behaviors. Pardon my meanderings, but I'm trying to understand how this works, and whether utility AI is even the right approach.

      FOOD

      Let's just say I have a "hunger" function that helps AIs manage their food supply. On a basic level, if hunger gets low enough (high enough?), the character should eat from their food supply. But what if they have no food? Obviously they should look for some. But you'd be pretty dumb if you waited until you had absolutely no food before you started looking. The answer is that the AI should look for food BEFORE they're out of food. So now my utility function isn't hunger. My character is going to eat regularly, and this can be an automated (unintelligent) behavior. My utility function should be supply. My utility function for supply: as my food supply goes down, the utility of food goes up. At a certain threshold, it pushes a character to look for food. But is that dumb as well? The real key is whether I have enough food to make a safe journey. So I design the supply function as: only look for food if I don't have enough to make the journey. But this sounds a lot more like a very static behavior tree than a utility function. The agent isn't really figuring anything out so much as I'm figuring it out for him, and hard coding it. What am I missing?

      RETREAT

      Another example I'm having trouble with... I create a utility function for retreating from battle. On a basic level, it compares the defending NPC's hitpoints to the attacking NPC's damage per second. If the threat of death is high enough, then flee. But an NPC shouldn't flee if it can kill the enemy before the enemy kills them. So I factor that into the function. Compare the NPC's "time to die" against the other NPC's "time to die". Maybe scale the function so the difference matters more when my time to die is small, but allows more uncertainty and risk taking when my time to die is big. But then I need to factor in the other idiots on the battlefield. Being surrounded by a friendly militia is no reason to flee, whereas I'd feel differently if I saw an enemy army coming over a hill. And what happens if alliances and relationships can change? I suppose an NPC could walk into a trap, thinking they are safe around their friends, when actually it's a mutiny waiting to happen. But some smarter NPCs should be able to estimate the threat by looking at the souring mood of their estranged allies and say "I'm going to get out of here before it gets really ugly". Does this make my utility function for retreat so convoluted that it no longer represents anything? I'm not sure I understand the best way to do this.
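
As a starting point for the FOOD and RETREAT examples above, here is a rough sketch that expresses both as response curves over a couple of inputs instead of hard-coded branches; the curve shapes, the journey estimate, and the force-ratio term are all assumptions, not an established recipe:

```python
import math

def food_utility(days_of_food, journey_days):
    """Rises smoothly as the stockpile shrinks toward what the next trip needs."""
    margin = days_of_food / max(journey_days, 0.1)        # >1 means comfortable surplus
    return 1.0 / (1.0 + math.exp(4.0 * (margin - 1.0)))   # ~1 when short, ~0 when stocked

def retreat_utility(my_ttd, their_ttd, friendly_power, enemy_power):
    """Time-to-die comparison, scaled by the local balance of forces."""
    duel = their_ttd / (my_ttd + their_ttd)               # near 1.0 when they outlast me badly
    odds = enemy_power / max(friendly_power + enemy_power, 0.1)
    return 0.6 * duel + 0.4 * odds

print(food_utility(days_of_food=2, journey_days=4))       # ~0.88: go find food soon
print(food_utility(days_of_food=10, journey_days=4))      # ~0.00: plenty, don't bother yet
print(retreat_utility(my_ttd=5, their_ttd=12,             # ~0.53: losing the duel, but
                      friendly_power=8, enemy_power=3))    # friendlies nearby soften it
```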