
# NPC AI in RPG's

Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

59 replies to this topic

### #21_rpg_guy  Members   -  Reputation: 122


Posted 01 May 2001 - 05:07 AM

Some more ideas:

- NPCs have a personality that will influence all decisions

- They need to have some sort of memory system (as you've been discussing...)

- They make decisions based on what they know

- They should NOT be predictable (i.e. there needs to be an element of randomness in the decision-making process)

What I propose is to have a list of possible actions, where each action is given a weighted score based on personality and information. Actions that the NPC would not perform (score below 0, or a similar cutoff) are dropped from the list. An action is then chosen by a pseudo-random function where the scores act as frequency factors: actions with higher scores are more likely to be chosen. I have to run... (I'm at work.) I've kept the details out just to propose a general solution.
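A minimal sketch of that weighted pick (assuming the scores have already been computed from personality and knowledge; the action names and numbers below are purely illustrative):

```python
import random

def choose_action(scored_actions):
    """scored_actions: list of (action, score) pairs, where each score
    already reflects the NPC's personality and what it knows.
    Non-positive scores are dropped; the rest are sampled with
    probability proportional to their score."""
    viable = [(a, s) for a, s in scored_actions if s > 0]
    if not viable:
        return None                      # nothing the NPC is willing to do
    total = sum(s for _, s in viable)
    pick = random.uniform(0, total)
    for action, s in viable:
        pick -= s
        if pick <= 0:
            return action
    return viable[-1][0]                 # guard against float rounding
```

With e.g. `[("flee", 5.0), ("attack", -2.0), ("talk", 1.0)]`, "attack" is never chosen, and "flee" comes up about five times as often as "talk".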

### #22Timkin  Members   -  Reputation: 864


Posted 01 May 2001 - 04:07 PM

quote:
-------------------------------------------
Original post by _rpg_guy

- They make decisions based on what they know

- They should NOT be predictable (i.e. there needs to be an element of randomness in the decision-making process)
-------------------------------------------

One would want an agent to make decisions based on their goals and what is happening around them. These are two distinct planning concepts and have very different methodologies for implementation.

What we DO want, though, is predictability in our game agents. Most gamers want NPCs to make decisions in line with the characterisation of the NPC. I.e., you don't want to see a baker run off and try to forge armour with a loaf of bread. However, it wouldn't be unreasonable to see that same baker try to slay a dragon with a bread stick if their daughter had just been eaten by said dragon.

In other words, we want our game agents to act as we would if we were in their position with their knowledge. That doesn't mean that as players we can predict what they will do, but it means that when we find out what they know and their motivations, what they did makes sense. That is immersion!

Anyway, to get onto the practicalities of how to choose actions for a character: you can use something called the Principle of Maximum Expected Utility. It's a formal definition of rational action used in AI (and economics, funnily enough!). It goes like this:

1) Assume you have a utility function over world states;
2) From your current state, enumerate all possible actions;
3) Evaluate all possible next states;
4) Compute the expected utility (EU) for each state. This is the product of the likelihood of achieving that state with the utility of that state;
5) Choose the action that generates the Maximum Expected Utility.

You can assign probabilities to future states in many ways. If your game world is completely deterministic (i.e. there is no uncertainty/noise involved in its evolution) then you can apply a uniform probability distribution to all next states (so each is equally likely). Then you are choosing the action that takes you to the highest-utility state.

What is a utility function? It's simply any function that can evaluate the relative 'goodness' of any two world states. That is, given state_A and state_X, your function can return a number for each state, with the better state of the two having the higher number. If you have a finite set of world states it may be possible to assign them all a unique number. For continuous domains an analytic function would be necessary.
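Putting those pieces together, here is one possible sketch of MEU action selection. `transition` and `utility` are placeholder callables standing in for whatever state model and utility function the game defines; when an action's outcome is certain, `transition` yields a single `(state, 1.0)` pair and the sum reduces to the product described in step 4:

```python
def best_action(state, actions, transition, utility):
    """Principle of Maximum Expected Utility: for each action, sum
    P(next | state, action) * U(next) over its possible next states,
    then keep the action with the highest expected utility.
    transition(state, action) must yield (next_state, probability) pairs."""
    best, best_eu = None, float("-inf")
    for action in actions:
        eu = sum(p * utility(nxt) for nxt, p in transition(state, action))
        if eu > best_eu:
            best, best_eu = action, eu
    return best
```

For instance, an action that reaches utility 10 with probability 0.6 (EU 6.0) beats one that reaches utility 5 for certain (EU 5.0).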

Here's a simple example that demonstrates the power of this method. Let's say you have an agent and you want them to walk to the top of a hill. You could simply compute the location of that point and have them walk to it. But you'd need to compute whether that path passed through objects and then work out how to avoid them. Here's a different solution.

Assume the agent is at the bottom of the hill and can move in a finite number of different compass headings.
Compute all next positions that the agent could move to. For all positions that would result in a collision with a game object, assign that position a probability of zero. Assign all other positions a uniform probability. Use the height of each position as its utility value. Compute the expected utility for each position. Choose the action that leads to the maximum expected utility.
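That hill procedure might look something like this (the grid positions and the `blocked`/`height` callables are made up for illustration). With a uniform probability over the non-colliding moves, maximum expected utility reduces to taking the highest reachable neighbour:

```python
def hill_step(pos, moves, height, blocked):
    """One incremental step up the hill. Moves that would collide get
    probability zero (i.e. are discarded); the remainder are uniformly
    likely, so MEU reduces to taking the highest reachable neighbour."""
    candidates = [(pos[0] + dx, pos[1] + dy) for dx, dy in moves]
    reachable = [c for c in candidates if not blocked(c)]
    if not reachable:
        return pos                     # boxed in: stay where we are
    return max(reachable, key=height)  # height acts as the utility
```

Run in a loop, the agent climbs while flowing around obstacles without any explicit path planning.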

Okay, so we didn't actually need utility in that example. We could have just performed reactive planning that had our agent try to increase its height on the hill with every step, testing for collisions at each step.

However, consider the problem of getting a vehicle to follow a road. Assume there may be obstacles on the road. Assign a utility function to the region around the road that is highest on the road and decreases the further you go from it. Then apply the same iterative methodology above for incremental vehicle movements. You can see that the truck will do its best to stay on the road but will deviate a minimum distance when it has to avoid an obstacle.

There are plenty of other uses for rational action, particularly in NPC decision making. I'd be happy to discuss them further.

Tim

### #23_rpg_guy  Members   -  Reputation: 122


Posted 02 May 2001 - 09:57 AM

You started to lose me once you started talking about the "Principle of Maximum Expected Utility." I have no idea what that is. (I'll do a web search after I finish this post.) It all sounds very cool, but I have to understand all of it before I can program it. I'm still at least a month away from starting to code the AI portions of my game, but probably more. Gotta run. Keep the good ideas rolling!

### #24bishop_pass  Members   -  Reputation: 108


Posted 02 May 2001 - 01:14 PM

While you are on the subject of Web searches and discussing NPC AI in RPGs, why don't all of you bone up on the following topics. You'll be better prepared to tackle a subject such as this, you won't waste your time seeking solutions to things already solved, and you'll better understand the real problems that you can expect to encounter along the way. Here are the topics:

Belief Systems
An agent believes what it does about the world, and this is not necessarily reality. An agent should be able to model the beliefs of others, even if these beliefs contradict what the agent believes about the world.

Truth Maintenance
An agent should be able to effectively reject or accept new incoming knowledge. It is pointless, and will cause problems, for an agent to integrate new knowledge into its beliefs if this new knowledge contradicts (not necessarily directly) what the agent already knows. An example might be an agent knowing that Terry is the father of Bill, and then learning later that Terry is the first woman ruler of Krodonia.

FOPC and situation calculus
FOPC stands for First Order Predicate Calculus. Essentially, anything that can be said can be said in the formal language of predicate calculus.

For example:
Everyone is younger than their parents.
(A x y) (Parent(x y) -> Younger(y x)).

Or:
A knife is a weapon.
Instance-of(knife weapon).

Or:
There exists at least one sun in every solar system.
(A x) (Instance-of(x solar-system) -> (E y) (Instance-of(y sun) & Contains(x y)))

Predicate calculus can maintain truth. If you have provided knowledge that there is a particular solar system with no sun and you then try to assert the above statement, it can be refuted.
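As a toy illustration of that refutation idea (nothing like a full resolution prover), the facts can be stored as tuples and the "every solar system contains a sun" claim checked against them under a closed-world reading; all names below are invented:

```python
def sunless_systems(facts):
    """facts: set of ('predicate', arg1, arg2) triples. Returns the solar
    systems for which no contained sun can be proved from the facts --
    i.e. the counterexamples that would refute the universal claim,
    assuming the knowledge base is complete (a closed-world reading)."""
    systems = {x for (p, x, c) in facts
               if p == "instance-of" and c == "solar-system"}
    suns = {x for (p, x, c) in facts if p == "instance-of" and c == "sun"}
    return {s for s in systems
            if not any(p == "contains" and a == s and b in suns
                       for (p, a, b) in facts)}
```

If the returned set is non-empty, asserting "every solar system contains a sun" contradicts the knowledge base.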

Resolution Refutation and Natural Deduction
These are the two main methods of proving and disproving truth within a knowledge base.

The Twelve Dimensions of Context
As put forth by Douglas Lenat, builder of Cyc: among the twelve dimensions of context are time, belief, geographical location, hypothesis, etc.

Perception and Action
There is, or is generally believed to be, a basic cycle to any agent: perception, analysis and elaboration of perception, planning, decision, and action.

Defeasible Reasoning
Unlike monotonic reasoning, defeasible reasoning enables one to reason effectively even when given new contradictory information about the world. It enables an update of one's beliefs even with this contradictory information.

So, there you have some food for thought. Ignore these concepts at your own peril. As you dig deeper, ultimately you will encounter some very deep and philosophical problems, many of which have been solved, or at least halfway decent solutions have been created.

Edited by - bishop_pass on May 3, 2001 2:14:31 AM

### #25C-Junkie  Members   -  Reputation: 1099


Posted 25 May 2001 - 11:06 AM

Well, behavior, in my opinion, is the simplest problem to solve. Although I've given up hope of EXPLAINING my system, basically it's a bunch of abstract personality numbers/skills/linked lists & tree memory structures. (A bit more memory intensive than most systems, but... hey!) Basically, it's adaptable and lets the NPC do irrational things by factoring in abstract personality numbers as weights in the logic. Which screws it up enough to not call it logic... Ahh, humanity...
The only problem is COMMUNICATION. I don't want a menu system, where everything needs to be scripted. And I don't want the player to have to construct sentences with a bunch of list boxes... too tedious!

I've decided to combine them, sort of. I let the AI not only create a question using the "list-box system" but let the game's logic determine responses as well. (In addition to NPC AI & conversation AI, I have Skill AI using the Skill Web (COMBAT).) And I put a little "compose new sentence" button on the bottom, so if the player wants to change the subject he can. ("Did you steal the 300 gp?" "How about that battle outside town today?") I can't think of anything else except some GIGANTIC supercomputer running a neural net 'reading' program for each player... a bit expensive to run 5,000 supercomputers, huh?

### #26mad_goldfish  Members   -  Reputation: 122


Posted 30 May 2001 - 12:19 PM

quote:
Original post by _rpg_guy

You started to lose me once you started talking about the "Principle of Maximum Expected Utility." I have no idea what that is. (I'll do a web search after I finish this post.) It all sounds very cool, but I have to understand all of it before I can program it. I'm still at least a month away from starting to code the AI portions of my game, but probably more. Gotta run. Keep the good ideas rolling!

Sounds to me like Q-learning and W-learning, where an agent takes time to learn about the best possible option from all the possible options, then chooses which one to take based either on its own gain or on the expected loss to others, depending on how you want the agents to play. It was designed for a collaboration system but I'm sure it's relevant.

There's a Reinforcement Learning paper by Mark Humphrys on this at:
http://www.cl.cam.ac.uk/users/mh10006/

On the greater scheme, the communication between NPCs is an extremely complex problem both in terms of speed and resources. The only viable way I can see of doing it is via a central database that agents can only view through rose-tinted spectacles, and with some memory retardation. Agents would also need beliefs of their own, however, as there is no point storing 'I heard Albert saying something about cheap swordfish' in the central database.

Unless the game is very long, or there are very few characters, there is little or no danger of the characters knowing everything. The memory retardation is mainly to keep processing per agent down, and to help them focus on what is important to them.

Anyway, a good database would be constantly growing, by adding the 'Bongo killed a dragon' type of information, as well as updating certain information on where people are, who's dead or alive, how powerful people are, etc...

### #27 Anonymous Poster   Guests   -  Reputation:


Posted 31 May 2001 - 06:59 AM

With regard to information sharing, we've been developing something along the lines of the following:
1) Distribution of the message: close friends, acquaintances, general public.
2) Accuracy upon repeating: very accurate, downplay, exaggerate.
3) Type of message: boast, rumor, decree.
4) Duration: day, week, legend.

For example, NPC A (from the lovely town of A) is in a pub and overhears NPC B (from that haven of surly fishmongers, village B) stating that his village market sells fish at 40% less than the going rate. NPC A adds this to his list of known info (as does everybody else in the pub) with the settings of general public, downplay, boast, day assigned to it, as well as who/where it was said. Later, at yet another pub (A is for Alcoholic, apparently), when he hears somebody discussing fish, he repeats that village B sells fish at 10% below cost.
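A rough sketch of how those four dimensions plus distortion-on-retelling might be represented. The distortion factors below are invented for illustration; the post doesn't specify how "downplay" turns 40% into 10%:

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Rumor:
    topic: str          # e.g. "fish-prices"
    value: float        # the numeric claim, e.g. 40 (% below the going rate)
    distribution: str   # "close-friends" | "acquaintances" | "public"
    accuracy: str       # "accurate" | "downplay" | "exaggerate"
    kind: str           # "boast" | "rumor" | "decree"
    duration: str       # "day" | "week" | "legend"
    source: str         # who/where it was heard

def retell(rumor, teller):
    """Pass the rumor on, distorting the figure according to its
    accuracy disposition; the source is updated to the reteller."""
    factor = {"accurate": 1.0,
              "downplay": random.uniform(0.25, 0.75),
              "exaggerate": random.uniform(1.5, 3.0)}[rumor.accuracy]
    return replace(rumor, value=rumor.value * factor, source=teller)
```

Each retelling returns a new record, so the original overheard claim stays intact in the hearer's memory while the distorted version propagates.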

The NPCs share messages the same way that PCs do. If a message is Close Friends, it is whispered. Otherwise, it looks a bit like hanging about the bank in UO. When an NPC no longer has any info about the topic at hand, they will say a random rumor they heard, stirring up yet another long winded bar conversation. They will pick out key words (cities, known nouns, etc...) to use as a determination as to whether or not the rumor is relevant.

So far, we've got a pretty good simulation of a bunch of drunks in a bar. Still need to work on relevance...

### #28 Anonymous Poster   Guests   -  Reputation:


Posted 31 May 2001 - 10:13 PM

I would love to see better AI in NPCs. For years I have been tossing up whether or not to write the ultimate RPG, with continent-sized maps that zoom into precise detail, characters that run the show all round the playground, and numerous sub-quests buried in the main quests. But every time I sit down to start coding it, I scoff at the damn machine. Sure, games have come a long way since the days when I first learnt how to program (back in the days of the MicroBee and the Commodore PET), but I've got to be honest: even with cutting-edge technology, I think it will take another 10-20 years before computers are capable of producing my dreams.
I love multiplayer games, and assume that even with NPC AI you are still planning to incorporate multiplayer.
One last thing: I figured out a way to create maps of 4096x4096 blocks that span 256x256 tiles (these values are approximate) in relatively little memory. If anyone is interested in chatting about this I'm happy to discuss my ideas; my e-mail is kothos1@dingoblue.net.au
I think what people need today is a bit of laterality. Most games I see are linear: you complete one part and go on to the next. It would be so easy to incorporate a kind of pick-a-path into some games that it boggles me that they don't do it. How many more times would you be able to play a game with this incorporated? And it is just the tiniest little bit of extra work.
Anyway, I wish you the best if you are indeed thinking of working on that inverse parser; if I remember rightly they got quite good before Castle Wolfenstein caused that quake in the games market.

### #29eilenbeb  Members   -  Reputation: 122


Posted 02 June 2001 - 09:37 PM

suggestions:

information will tend to circulate between friends, within a guild, where they work, etc. npc stats could include what 'sectors' the npc belongs to and how high they are in the hierarchy.
global game information could be indexed by knowledge level, so a 1st-level npc (or equivalent) would probably never have access to the same info that a guildmaster would.

this would also help limit the spread of information. a hacker might know that Xcorp is about to overhaul its security systems, but a dealer most likely would not.

guards, for instance, would be very interested in thieves' guild information but care little about temple issues (unless a member of that temple or an adversary guild) and would 'remember' certain things better.
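A possible sketch of that sector/knowledge-level filter (the data layout and all names are guessed for illustration, not taken from any engine):

```python
def accessible_info(world_info, npc_sectors, npc_rank):
    """world_info maps an info item to (sector, required_level).
    An NPC sees an item only if it belongs to the item's sector and
    ranks at least as high in the hierarchy as the item's level."""
    return [item for item, (sector, level) in world_info.items()
            if sector in npc_sectors and level <= npc_rank]
```

Indexing global information this way means a low-ranked member and a guildmaster can share the same world database while seeing very different slices of it.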

none of this is new, just tossing out a few ideas.

AND FOR THE LOVE OF EVERYTHING DIGITAL, A LOCKPICKER CANNOT BECOME A BETTER LOCKPICKER BY KILLING ANOTHER CITY GUARD!!!!
hehe...
laters,
b

### #30Ronin_54  Members   -  Reputation: 122


Posted 03 June 2001 - 04:04 AM

UNLESS! the city guard happened to have a little 'Guide to Lockpicking' stowed away in his backpack :p

### #31Geek  Members   -  Reputation: 122


Posted 03 June 2001 - 04:34 AM

Speaking of NPC AI, why don't they get mad when you barge into their houses, like in games like Lunar 2 and Final Fantasy? And they only walk back and forth. Weird stuff man... lol
Geek

### #32Timkin  Members   -  Reputation: 864


Posted 03 June 2001 - 08:48 PM

quote:

Sounds to me like Q-Learning and W-learning, [snip]

Not quite. The Principle of Maximum Expected Utility (MEU) assumes you have a utility function defined over domain states. The MEU then provides a means for choosing actions in light of this utility function.

Q-learning (and other forms of reinforcement learning) presumes that the utility function is implicit in the reinforcement values returned by the evaluation portion of the algorithm (which takes states/actions as input). That is, considering these 'rewards' over all states constitutes a utility function.
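For comparison, a single tabular Q-learning update: notice there is no explicit utility function anywhere, only the reward signal (the learning-rate and discount values below are arbitrary defaults):

```python
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step. The utility of states is never written
    down explicitly; it emerges from the stream of reward values."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])
    return Q
```

Repeated calls drive `Q[(state, action)]` toward the expected discounted reward, which is exactly the implicit utility described above.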

Tim

### #33Timkin  Members   -  Reputation: 864


Posted 03 June 2001 - 08:52 PM

quote:
Original post by Anonymous Poster

So far, we''ve got a pretty good simulation of a bunch of drunks in a bar. Still need to work on relevance...

Sounds interesting. Perhaps you could consider including a 'context' variable that has the same scope as the 'distribution' variable. There would be one context value per conversation going on. The challenge for the player is to deduce the context of a discussion between NPCs and, conversely, the challenge for an NPC talking to a player is to determine the context of their discussion (this is an active research area).

Once a context is defined, then relevance should be fairly trivial to deduce.

Tim

### #34 Anonymous Poster   Guests   -  Reputation:


Posted 05 June 2001 - 12:00 AM

hmm... i think i should say something, because i was in charge of coding AI for moonpath (Dream Dust Oy's RPG). the thing is that i coded for 8 months and didn't get anything but 200,000 lines of MATRIX (my script language that i had made for AI coding), and that is over 1,000,000 lines of C++, and still i didn't get them to be intelligent. so after 8 months of 20-hour days i got burnout, and that was the end of moonpath. the AI was only 10-20% finished, so it had got too big. think about it: 10 million lines of C. not even the newest computers can handle that. so the starting point should be to make good AI, not perfect AI that simulates everything.

### #35Timkin  Members   -  Reputation: 864


Posted 05 June 2001 - 09:55 PM

quote:
Original post by Anonymous Poster

[snip] ...so the starting point should be to make good AI not perfect ai that simulates everything.

Okay, I'm sure I'll cop a flame or two for this one... but...

Scripting your AI is definitely NOT the way to go when you want to imbue your agents with behavioural traits or decision processes that appear intelligent. Scripts are useful for small domains where you can meaningfully write down rules of action: if condition then action.

Scripting of a storyline or high-level actions (like taking a trip to another town) would be reasonable if they are integral to the storyline, but unfortunately most game producers think everything can be solved with a script. Oh, and a script is NOT AI, it's PI (programmer intelligence!).

If you have a large, complex, dynamic domain, a script just won't cut it.

This is why we aren't seeing advancement in game AI that keeps pace with processor and memory advancement... people are using tools that were appropriate 10-20 years ago to make the games of the future.

My $0.02 worth.

Tim

Edited by - Timkin on June 6, 2001 4:57:52 AM

### #36KaneBlackflame  Members   -  Reputation: 122


Posted 06 June 2001 - 06:44 AM

### #37Nutter2000  Members   -  Reputation: 122


Posted 07 June 2001 - 12:22 AM

quote:
posted by KaneBlackflame
I have to agree scripting is not the way to go. A few months ago, I implemented an NPC AI design I have been working on for a while.....*snip*

interesting....
and how did you display the results without graphics?
it would be interesting to see a demo of that, assuming that's possible of course.

I would say that scripting shouldn't be dismissed so rapidly; even with good AI you still often need methods of triggering events, at least in games you do.

however, a few lines of good propagating self-contained AI can be worth tens of thousands of lines of equivalent scripting code.

good scripting can give you, as the world creator, a good way of directly controlling hundreds of bots.

"Bad Day... F**K it!" -Stephen Baldwin (Usual Suspects)

Edited by - Nutter2000 on June 7, 2001 7:25:12 AM

### #38KaneBlackflame  Members   -  Reputation: 122


Posted 07 June 2001 - 02:56 AM

I must reform what I said... pure scripting isn't the way to go. To be honest, all of the motor functions were scripted actions. The simulation ran the whole town a few times a second, but the town was never bigger than 132 people... I throttled the "game" area so not enough food could be made for a population much bigger than that. If the next demo goes well, I'll post a well-grown demo if you want. I'm still a few weeks away, but I would be happy to show it after I get it running again. It's not as interesting as I led on... I had a log file printed of all actions taken... I let it run for 10 minutes on my first run and stopped it to make sure the log file was working, and found a huge! log file of sleep, eat, move... I ended up only logging certain events, like going to certain places or doing certain things, and these went to separate files. When a disaster happened, I had everything logged during the event; that's how I saw my 'hero'... but most of the time, nothing but simple everyday stuff goes on. The trends and patterns that emerge, though, would surprise you... On some levels, humans are actually reasonable! I had never thought this until I saw my "families" doing what we do... someone goes to work, someone gets food for the house or cares for sub-NWebs, sub-NWebs do stuff that makes them happy. I don't know, most of it was pretty un-entertaining, but it did show me a good path... a set of NWebs to decide which script to run may provide a viable solution to mobile NPCs... of course, conversation is a little harder...

### #39Dynamite  Members   -  Reputation: 145


Posted 07 June 2001 - 02:19 PM

KaneBlackflame, what exactly is an NWeb? Something like Neural Nets?

The RPG I'm working on uses a form of event-driven behavior. Right now, it's just a real-time battle system, where you control your character's movement and aiming. When you walk, depending on your stealth rating, you may make noise. Then all people within range hear it and react. They all have a list of known things and have preferences (like A will attack an enemy before he'll heal himself, B will heal a fallen friend before attacking...). They forget after some time and move on after a while.
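The hearing test might be sketched like this (the stealth-to-radius scaling is invented for illustration; the post doesn't give one):

```python
import math

def alerted_npcs(npc_positions, source, loudness, stealth):
    """Footstep noise: the effective radius shrinks as the walker's
    stealth rating grows, and every NPC inside it hears the noise."""
    radius = loudness / max(stealth, 1)
    return [npc for npc, (x, y) in npc_positions.items()
            if math.hypot(x - source[0], y - source[1]) <= radius]
```

Each alerted NPC would then consult its own preference list (attack, heal, flee...) to decide how to react.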

I also think that not EVERY NPC has to want to travel. I think it would make things easier if only a few NPCs actually deviated a lot from their normal routine. I plan on doing something like Naz was talking about: the NPCs have agendas (just as the soldiers earlier) and go about their business.

--I don't judge, I just observe

Stuck in the Bush's, Florida

### #40Nutter2000  Members   -  Reputation: 122


Posted 08 June 2001 - 12:03 AM

quote:
Original post by KaneBlackflame
I must reform what I said... pure scripting isn't the way to go... to be honest, all of the motor functions were scripted actions.

ahh yes, now it becomes a little clearer.
to be honest, that was my point: I think it's very difficult to get good NN or non-scripted methods to do the basic functions, or at least with much success over a large spread of agents. Also, I don't believe that we "randomly" learn to do stuff like that in real life; babies seem to have the instinct to walk and to know how to walk, and I don't think it's just from watching adults. but still, that's another issue.

I'm a big fan of using networks or fuzzy logic to facilitate state changes; I think that is often a nice happy medium that gets the best from both worlds!

I for one would be interested in seeing a demo.

to get back on topic, a set of networks or other more flexible systems could be used for changing which set of mood scripts is active, e.g. angry, scared, etc., and influence the conversation that way.

for example, the player needs to get a key from a guard. now, depending on how the guard is feeling, he may or may not give you the key. the player needs to influence the guard in certain ways; if he makes the guard happier towards the player then he's more likely to give him the key.

in terms of how to do that, you COULD have, say, a network which takes inputs on current mood, player actions, etc.; this modifies which script set the guard uses at that time. then only certain scripts would have the token-trigger to get the guard to give the player the key.
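a toy version of that mood-gated script selection (the action weights and thresholds are arbitrary stand-ins for whatever the network would output):

```python
def pick_script_set(mood, player_actions):
    """Choose which script set the guard runs from his current mood and
    what the player has done; only the 'friendly' set would contain the
    token-trigger line that hands over the key."""
    effects = {"compliment": 2, "bribe": 3, "insult": -4, "threaten": -6}
    score = mood + sum(effects.get(a, 0) for a in player_actions)
    if score >= 3:
        return "friendly"       # key token reachable from here
    if score <= -3:
        return "hostile"
    return "neutral"
```

so a neutral guard who is complimented and bribed flips to the friendly script set, while an insulted one locks the player out of the key dialogue entirely.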

"Bad Day... F**K it!" -Stephen Baldwin (Usual Suspects)

