Social Interaction A.I. Based on Human Psychology?


I've been thinking of setting up a little project for fun where AI 'beings' interact with one another based on the basic principles of human/animal psychology. Normally when I think about the AI that drives NPC or enemy behavior, I think simplistically about what I can do (with relatively little effort) to create the illusion of thought, intelligence, life, etc., but I think it would be far more interesting to have a neural network set up with an actual functioning social-psychological model. I'm only just lately reading thoroughly into neural networks (in a videogame context), so let me make a little disclaimer here: there may be many people out there already thinking about and experimenting with these ideas, and I don't claim to be chasing a breakthrough. I just wanted to hear any ideas/discussion about it if anyone's interested.

My initial thought has been to construct AI neural circuits based on Leary's four circuits of consciousness (he has eight, but only four are relevant here; never mind for now that other models/schools of thought could be equally viable). For anyone unfamiliar, in a (VERY SMALL) nutshell, those four circuits are:

1. Bio-survival: shared by all animals, governing basic advance/retreat impulses. Is it safe? Do I stay or do I flee?
2. Emotional/territorial: shared by all mammals, governs pecking order, dominance/submission, and the desire to please the top dog.
3. Dexterity/symbolism: shared by all humans, related to reason and conceptual thought, the use of tools, language, and writing.
4. Social/sexual: shared by 'civilized' humans, governs culture, tribal taboos, and morality.

(That's quite simplified and questionably accurate in the fine details, but it shouldn't matter.) These are the four circuits that would play a role in the AI's neural network. Taken to great detail, this network could be an impossibly hard thing to successfully implement, but what about taking it just in simple steps at a time?
Consider this scenario: three warrior-soldiers (with equivalent attributes in strength and skill) face a large group of extremely dangerous monsters. All three warriors know that fighting the enemies brings a high risk of death.

Soldier #1 has most heavily imprinted the first circuit (it's ingrained in his psychology - part of who he is). The strongest impulses therefore come from the first circuit - "is it safe? fight/flee". He turns and runs, deserting his duty despite any later consequence or implication.

Soldier #2 has most heavily imprinted the second circuit, and is subordinate to Soldier #3. He places extreme value in the approval and appeasement of #3, and he will follow orders unreservedly.

Soldier #3 has most heavily imprinted the fourth circuit. He feels the fear impulses to flee (coming from the first circuit), but he is most heavily governed by duty and honor. He could not possibly desert his role and his duties. As long as morality dictates that all monsters threatening his people should be engaged, he will fight to the death.

So Soldiers #2 and #3 have stayed to fight, for slightly different reasons, while #1 has left the scene. Now consider these two alternate endings:

1. The two soldiers engage the monsters, and Soldier #2 is killed. Soldier #3 does not rely on Soldier #2 as his reason to fight, so he remains alone, still fighting.
2. The two soldiers engage the monsters, and Soldier #3 is killed. Soldier #2 has a shift in impulse - his circuit-two instinct has been removed, and he may now revert to circuit one and flee the scene.

That's the basic idea in essence, in a combat scenario. Imagine you as a player are controlling the monsters, and your objective is to fight the three soldiers. What you see in the game is an unpredictable foe - "why did one of them immediately run away? That didn't happen last time I fought some soldiers." Running even deeper, though, the ultimate goal for me would be to apply this to a living simulated village.
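To make the scenario concrete, here's a minimal sketch (in Python, with entirely invented numbers and names) of one way circuit imprints could drive a fight-or-flee decision: each circuit casts a vote for fighting, weighted by how strongly the soldier has imprinted that circuit.

```python
# A sketch only: "imprint" weights and vote values are made up for illustration.

CIRCUITS = ["bio_survival", "territorial", "symbolic", "social"]

def circuit_votes(soldier, allies_alive):
    # Each circuit votes in [-1, 1]: negative pushes toward fleeing,
    # positive toward fighting, for the "dangerous monsters" situation.
    return {
        "bio_survival": -1.0,  # "is it safe? flee"
        "territorial": 1.0 if soldier["superior"] in allies_alive else -0.5,
        "symbolic": 0.0,       # no tool/reasoning angle in this scene
        "social": 1.0,         # duty and honor say fight
    }

def decide(soldier, allies_alive):
    votes = circuit_votes(soldier, allies_alive)
    score = sum(soldier["imprint"][c] * votes[c] for c in CIRCUITS)
    return "fight" if score > 0 else "flee"

soldier1 = {"imprint": {"bio_survival": 0.9, "territorial": 0.1, "symbolic": 0.2, "social": 0.1}, "superior": None}
soldier2 = {"imprint": {"bio_survival": 0.4, "territorial": 0.9, "symbolic": 0.2, "social": 0.1}, "superior": "soldier3"}
soldier3 = {"imprint": {"bio_survival": 0.2, "territorial": 0.1, "symbolic": 0.2, "social": 0.9}, "superior": None}

print(decide(soldier1, {"soldier2", "soldier3"}))  # flee  (circuit one dominates)
print(decide(soldier2, {"soldier1", "soldier3"}))  # fight (superior is present)
print(decide(soldier2, {"soldier1"}))              # flee  (superior killed, reverts)
```

Note this also captures alternate ending #2 for free: Soldier #2's territorial vote flips sign the moment his superior leaves the set of living allies.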
Each occupant of the village would have randomized psychological attributes and relationships. What would be the emergent result? When each villager has a basic need for food and a desire for coins (just one crude example), how will they interact? Will occupants with extremely low morality kill other occupants to get what they want? And will that trigger other social impulses in witnesses of the attack? In all honesty I would expect hundreds of initial attempts at this kind of simulation to be spectacular (and probably humorous) failures in terms of modeling a realistic psychology, but the essence of the idea is something I would really love to develop. Any thoughts?

I like the idea of having interacting agents with different morals, but I would approach the situation in a very different way.

I don't think it is a good idea to use artificial neural networks. ANNs are basically highly parametrized functions that can be tuned (trained) to fit data. I don't see a problem of that type in any of what you described. Perhaps the word "neural" is giving you inflated expectations for the applicability of ANNs (this is a common phenomenon).

The natural paradigm to represent the kind of decision making you describe is expected utility theory, which in some sense is the solution to AI in general. Your soldiers have several actions to choose from, and they need to evaluate how happy they expect to be if they take one or the other. Each action can result in several different outcomes, with probabilities attached (the agent's prediction of what will happen). Then each outcome can be evaluated by a function that will result in a real number (called utility), which describes how happy the agent is with each outcome. The only thing left to do is compute the expected value of the utility of each action, and pick the action where the maximum is achieved.

If you implement the utility function by adding together a bunch of terms, you can model different personalities by changing the weight of each term. You can also get interesting behavior by changing the way the agents estimate the probability of the outcomes: a reckless character could be one that doesn't fear danger (utility of dying is higher than in other agents), or it could be one that doesn't see danger (the probability of death is smaller than in other agents' estimates).
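This weighted-sum idea can be sketched in a few lines of Python. All the numbers, outcome features, and personality names below are invented for illustration; the point is only that swapping the weight vector changes the chosen action.

```python
# Expected utility with a personality expressed as weights over outcome features.

def utility(outcome, personality):
    # Outcome features (hypothetical): alive (0/1), honor and approval in [0, 1].
    return (personality["w_survival"] * outcome["alive"]
            + personality["w_honor"] * outcome["honor"]
            + personality["w_approval"] * outcome["approval"])

def expected_utility(outcomes, personality):
    # outcomes: list of (probability, outcome) pairs for one action
    return sum(p * utility(o, personality) for p, o in outcomes)

def choose(actions, personality):
    # Pick the action whose expected utility is maximal.
    return max(actions, key=lambda a: expected_utility(actions[a], personality))

actions = {
    "fight": [(0.5, {"alive": 1, "honor": 1.0, "approval": 1.0}),   # survive the battle
              (0.5, {"alive": 0, "honor": 1.0, "approval": 1.0})],  # die with honor
    "flee":  [(1.0, {"alive": 1, "honor": 0.0, "approval": 0.0})],
}

coward = {"w_survival": 10.0, "w_honor": 1.0,  "w_approval": 1.0}
zealot = {"w_survival": 1.0,  "w_honor": 10.0, "w_approval": 1.0}

print(choose(actions, coward))  # flee
print(choose(actions, zealot))  # fight
```

The "reckless because he doesn't see danger" variant would instead lower the 0.5 death probability in that one agent's copy of the outcome table, leaving his weights untouched.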

I am not very fond of psychological models that seem completely arbitrary, like the one you just described, or Freud's. Anyway, if this model helps you think of how to organize a utility function, that's great. Just leave the artificial neural networks out of this, so you can actually understand and debug the behavior of your agents.

The problem with NNs is that you have to provide training data that matches in richness the situation-reaction pairs the NN is meant to solve. If subtle differences in a situation make a major difference in the correct reaction, then the training data has to include those subtleties. NNs are capable of generalization, which lets them react to similar situations, but they can over-generalize and then fail in slightly different situations that actually have a very different 'proper' answer.

Alternately, if a training set isn't used (or is only partially used as a starting point), then you need some mechanism that tells the system whether a response was good or bad. That can be very hard -- especially in temporal cases where something that happened a while back was the key factor leading to the situation's resolution and to the result being judged.

The other part of NNs that is often overlooked is how the simple inputs to the NN's mathematical model get generated in the first place. The game situation has to be interpreted and reduced to these numeric inputs. In a complex game that is usually quite difficult to do correctly. The situation has to be summarized and factored, and finding good factor generalizations usually requires a lot of manual creation and proving against the game mechanics (and likely against the specific scenarios).
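That input-encoding step might look something like the following sketch. The feature choices here (health ratio, enemy pressure, presence of a superior) are invented examples; deciding which features actually matter for a given game is exactly the hard, domain-specific work described above.

```python
# Reducing a game situation to a fixed-length numeric vector (hypothetical features).

def encode_situation(agent, world):
    enemies = [e for e in world["agents"] if e["faction"] != agent["faction"]]
    allies = [a for a in world["agents"]
              if a["faction"] == agent["faction"] and a is not agent]
    return [
        agent["health"] / agent["max_health"],  # own condition
        min(len(enemies) / 10.0, 1.0),          # enemy pressure, capped at 1
        min(len(allies) / 10.0, 1.0),           # nearby support, capped at 1
        1.0 if any(a["rank"] > agent["rank"] for a in allies) else 0.0,  # superior present?
    ]

me   = {"health": 30, "max_health": 100, "faction": "blue", "rank": 1}
boss = {"health": 80, "max_health": 100, "faction": "blue", "rank": 3}
orc  = {"health": 50, "max_health": 50,  "faction": "red",  "rank": 0}
world = {"agents": [me, boss, orc]}

print(encode_situation(me, world))  # [0.3, 0.1, 0.1, 1.0]
```

Even this toy version shows the trap: the vector cannot distinguish "ten weak enemies far away" from "ten strong enemies in melee range" unless someone thinks to add distance and strength features.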


You suddenly find (as you will in real AI work) that the program mechanism is less than 10% of the effort and the other 90% is the development of the domain-specific logic data. Building a 'self-learning' mechanism doesn't mean you can skip the subsequent guidance of the training, including the creation of test scenarios (random generation doesn't work, because real situations aren't just an assemblage of random situational factors; they form a cohesive pattern which has to be created somehow).


As the others have said, my advice would be to ditch NNs. There isn't any learning to do in the process you describe, so they would probably be useless. I do, however, like your idea of four levels of behavior with different priorities according to the personality of the NPC.

I think you would get a reasonably good result by scripting these behaviors manually. Then, if you really are dedicated to making them more complex, you could use a planning algorithm able to infer the consequences of its actions.
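The manual-scripting route could be as simple as the following sketch: each behavior has an "applies?" check, and each NPC's personality is just the order in which behaviors are tried. All names and thresholds are invented.

```python
# Hand-scripted behaviors with a per-NPC priority ordering (hypothetical example).

def flee(npc, world):    return "fleeing"
def obey(npc, world):    return "following orders"
def do_duty(npc, world): return "holding the line"

BEHAVIORS = {
    # name: (action, applicability test)
    "survival": (flee,    lambda npc, w: w["danger"] > npc["courage"]),
    "loyalty":  (obey,    lambda npc, w: npc["superior"] in w["present"]),
    "duty":     (do_duty, lambda npc, w: True),  # always applicable fallback
}

def act(npc, world):
    # Run the highest-priority behavior whose test passes.
    for name in npc["priorities"]:
        action, applies = BEHAVIORS[name]
        if applies(npc, world):
            return action(npc, world)

coward = {"courage": 0.2, "superior": "captain",
          "priorities": ["survival", "loyalty", "duty"]}
loyal  = {"courage": 0.5, "superior": "captain",
          "priorities": ["loyalty", "survival", "duty"]}
world  = {"danger": 0.8, "present": {"captain"}}

print(act(coward, world))  # fleeing
print(act(loyal, world))   # following orders
```

The nice property is debuggability: for any action you can read off exactly which rule fired and why, which is precisely what an NN would not give you.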

A game design note: I know this is unrealistic and goes against immersion, but I think the battle you describe (player-controlled monsters vs. 3 NPC soldiers) could become very interesting if the player knew the personalities of his opponents: "you face Krom the coward, Bel the brave and Olmec the loyal". Then it would make sense to use a frightening but less defensively effective stance to scare off the coward, then concentrate your efforts on the brave one.

At the scale of 100+ agents, if done correctly, I think there could really be a lot of fun. Dwarf Fortress has some basic mechanics like that, and its crowd behaviors are sometimes funny, sometimes tragic, but always explainable.

Quote:
Original post by alvaro
The natural paradigm to represent the kind of decision making you describe is expected utility theory, which in some sense is the solution to AI in general. [...]
The problem with decision theory and game theory, with their circular, tautological definition of what rationality entails, is that they have little bearing on how a person actually behaves - hence economic crises and their unpredictability. Economists will argue that a household behaves more like this rational utility-maximizing hypothetical than a single person does, but even that is a gross approximation. Assuming bounded rationality clashes just as much with reality, as psychology experiments and common-sense observation show.

Right. Most game theory exercises assume the premise of superrationality... that is, that all participants in the game are purely rational and all know that the others are purely rational. Needless to say, this is not even a remotely rational expectation.

Incidentally, Richard Evans, Phil Carlisle, and I are giving a lecture at the GDC AI Summit: "Breaking the Cookie-Cutter: Modeling Individual Personality, Mood, and Emotion in Characters".

Also, I cover the concepts of game/decision theory, its pluses and minuses, and how to model decisions based on multi-attribute decision theory in my book (which I can now link to!) Behavioral Mathematics for Game AI.

Thanks all for the replies.

Excellent food for thought.

In my post there was an inadvertent focus on NNs on my part - they're not really something I'm hung up on or overly enthusiastic about. I'm new to tackling complicated A.I., and NNs are just one model to look into. I may even have been using the term incorrectly for the way I'd attempt to implement the ideas I was thinking about.

It's great to have these other suggestions for things to read up on and look into, which I'll definitely be doing.

Yvanhoe: Now that I think about it, I really love the idea of knowing an opponent's personality and using strategy accordingly. I think that could be a fantastic class skill available to some characters - sense emotion, etc. Then again, most people are unable to hide extreme emotion, so a perceptive character would generally be able to sense when someone is wavering anyway. There's such a lot you could add to an encounter...

InnocuousFox: Can't tell you how much I'd love to be at that lecture. A little far from home sadly.

Cheers!
