
It seems like a good idea, and a promising way to simulate human social rules.

I'd like to know whether only social rules will be involved, or also personal needs and upbringing (like pain, hate, love...). I don't think a functional simulation can run without those (personal needs and wishes).

Another thing that bothers me is the comment that "agents will be searching through big decision trees (in realtime)", along with the statement that they will try to precompute them as far as they can.

In my opinion, the game must have a HUGE number of agents if searching the decision trees is going to be so CPU-consuming (or the trees themselves must be really, really huge). But this is only speculation; I'd like to hear more about it.

The point is: how can decision trees help with such a problem? There are too many factors, and the decisions surely carry weights (different importance)... even different weights for each individual, which would give each agent his own "scale of values" (personality). The decision-tree approach doesn't seem right to me: it would be very difficult to balance, since the step from one state to another would need certain requirements, and, as the article says, the decision trees might be of considerable size (if not, only a few kinds of personality will exist).
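(To make the balancing worry concrete, here is a minimal sketch, in Python, of the per-agent weighting I have in mind. It is entirely hypothetical - the factor names, the weighted-sum scoring and both example agents are my own invention, not anything from the article - but it shows how one shared set of decision factors plus a personal weight vector yields a "values scale" per individual.)

    # Hypothetical sketch: one shared set of decision factors, but each agent
    # carries his own weights over them -- his personal "scale of values".
    FACTORS = ["hunger", "sociability", "aggression", "curiosity"]

    def score_action(action_factors, agent_weights):
        # Weighted sum of an action's appeal to this particular agent.
        return sum(agent_weights[f] * action_factors.get(f, 0.0) for f in FACTORS)

    def choose_action(actions, agent_weights):
        # Pick the action whose factor profile best matches the agent's values.
        return max(actions, key=lambda a: score_action(a["factors"], agent_weights))

    # Two agents, same world, different personalities:
    timid   = {"hunger": 0.5, "sociability": 0.9, "aggression": 0.1, "curiosity": 0.4}
    brawler = {"hunger": 0.4, "sociability": 0.2, "aggression": 0.9, "curiosity": 0.3}

    actions = [
        {"name": "go to bar",  "factors": {"hunger": 0.3, "sociability": 0.8}},
        {"name": "pick fight", "factors": {"aggression": 0.9, "sociability": -0.2}},
    ]
    # choose_action(actions, timid) returns the "go to bar" action;
    # with brawler's weights it returns "pick fight".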

I'm only guessing; maybe they know that a few states and a few 'social rules', well combined, can create a big variety of personalities. IMO a complete study would be really interesting: how many personality factors, plus social processes, are needed to create a wide variety of people?

Why aren't they using some fuzzy logic to make decisions? It would probably take less computing time. Also, groups of similar agents could benefit from evaluating some of the functions per group rather than per agent, if the fuzzy functions are stacked with care.
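(Again purely as a hypothetical sketch, not anything Lionhead has described: the fuzzy approach might look like the following, where a couple of triangular membership functions and min-style rule firing replace a deep tree search. Note that the term depending only on shared state can be evaluated once per group, as suggested above.)

    # Hypothetical fuzzy-decision sketch: fuzzify a few inputs, fire simple
    # rules, and act on the strongest rule -- no deep tree search needed.
    def tri(x, lo, peak, hi):
        # Triangular membership function on [lo, hi], peaking at `peak`.
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

    def decide(thirst, crowd_tension):
        thirsty = tri(thirst, 0.2, 1.0, 1.8)          # degree of "thirsty"
        tense   = tri(crowd_tension, 0.4, 1.0, 1.6)   # degree of "crowd is tense"
        # Rule strength = min of its antecedents (a common fuzzy AND).
        rules = {
            "go for a beer": min(thirsty, 1.0 - tense),
            "stay put":      tense,
        }
        return max(rules, key=rules.get)

    # `tense` depends only on the shared crowd state, so a whole group of
    # similar agents could evaluate it once and reuse the result.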

And finally, why are they so worried about computing resources? Agents don't make decisions every frame. As far as I know, a person cannot make more than about 160 decisions per second, and even that only applies to activities like flying, driving or fighting. This problem has a more restricted domain: agents will have a variety of decisions, but a limited one. If an agent decides to go for a beer, he won't have to make many decisions until he reaches the bar. I think we can reduce this to one decision per second in most cases, which would still let the agent react to the world around him if something happens on his way to the bar.
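(A hypothetical sketch of that budget - deliberate roughly once per second, but keep a cheap per-frame reflex so the agent can still react on his way to the bar. The interval, event names and plan strings are all invented for illustration.)

    # Hypothetical scheduler: expensive deliberation once per second,
    # cheap reflexes every frame.
    class Agent:
        THINK_INTERVAL = 1.0   # seconds between deliberate decisions

        def __init__(self):
            self.since_think = 0.0
            self.plan = "walk to the bar"

        def update(self, dt, events):
            self.since_think += dt
            if "fight ahead" in events:          # cheap per-frame reflex
                self.plan = "cross the street"
            if self.since_think >= self.THINK_INTERVAL:
                self.since_think = 0.0
                self.plan = self.deliberate()    # the costly search goes here

        def deliberate(self):
            return self.plan   # placeholder for the real decision search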

Anyway, I'll buy the game as soon as it's in stores. It will have to be seen, for sure. What do you think about this?

JJpRiVaTe
Web: Private Zone

quote:
Original post by MikeD
I think it's a fantastic idea.
But then I _have_ to say that.


Yes you would Mike, wouldn't you! BTW, how long have you been at LH now? Are you actually working on Dmitry or some other project?

Cheers,

Timkin

Sad as I am to admit it, I do indeed work for Lionhead, and I am the _other_ AI programmer on Dimitri (I don't know where the press got the Dmitry spelling, but the .sln project in MSDev is very much called Dimitri). I've just passed the three-month mark and they haven't fired me yet ;-).
For another article on the idea of social processes, by Richard, see http://www.gamasutra.com/features/20020424/evans_01.htm.

I'm not sure how many questions I can answer at the moment, as this project is far from completion and pretty much under wraps, and I'm primarily working on the physical AI (pathfinding, steering, interfacing with the animation system, etc.) atm, whereas all the social AI is handled by Richard.

Anyway, the above article should clear up a few technical points.

Mike

MikeD, could you point out some books or papers that you used to gain knowledge in this subject? Is it a combination of neural nets/genetic programming/FSMs, etc.? Or is there a good book on social AI programming?

thank you.

If you've read the article, there are some references at the back. Most of the work is oriented around decision trees, although I can't really go into much detail.
Did you read the article?

Mike

Mike, perhaps you could pass on these thoughts to Richard and report back his comments?

I see a fundamental problem/shortcoming with the system as described in these two articles. Social activities are moderated by the perceptions of the individuals taking part in them. Take the example of Arthur, Belinda and Charlie mentioned in the Gamasutra article. Belinda is flirting with Charlie. Charlie is only taking part in this social activity if he understands that what Belinda is doing is flirting. If Charlie is inexperienced in such matters, or simply naive, he may be predisposed to misinterpret the nature of a particular social activity; in this case flirting. Perception of the nature of the social activity is extremely important for participation in it. This is the whole basis of social misunderstandings!

Even though Charlie may not yet understand what is going on, Arthur may indeed realise that Belinda is flirting with Charlie and may take offense, thus creating a new social activity (taking offense) which might lead to other activities (beating up Charlie).

Perhaps Belinda and Charlie are merely conversing, but Arthur sees their conversation as flirting. This leads to the issue that, if we take the Bayesian stance, an agent's posterior beliefs depend on their prior beliefs and the observations they make.
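(A quick hypothetical illustration of that Bayesian point, with made-up numbers: the same observation pushes a trusting Arthur and a jealous Arthur to very different conclusions, purely because of their priors.)

    # Bayes' rule over two hypotheses: "flirting" vs. "just talking".
    def posterior_flirting(prior_flirt, p_obs_if_flirt, p_obs_if_talk):
        joint_flirt = prior_flirt * p_obs_if_flirt
        joint_talk  = (1.0 - prior_flirt) * p_obs_if_talk
        return joint_flirt / (joint_flirt + joint_talk)

    # Same observation, two different Arthurs (all numbers invented):
    trusting = posterior_flirting(prior_flirt=0.05, p_obs_if_flirt=0.8, p_obs_if_talk=0.3)
    jealous  = posterior_flirting(prior_flirt=0.50, p_obs_if_flirt=0.8, p_obs_if_talk=0.3)
    # trusting ~ 0.12, jealous ~ 0.73 -- the prior drives the misunderstanding.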

An agent's belief structures will moderate their responses to a social activity. Perhaps Arthur is so inclined as to like the fact that Belinda is flirting with Charlie... perhaps he and his wife have been looking for a third sexual partner? So while many people might think it inappropriate for Belinda to flirt with Charlie, Arthur's prior beliefs may result in him encouraging the situation. These facts should be taken into account when determining the possible actions that Charlie could take in response to the social activity.

The model of social activity as presented in these articles does not appear to take into account either perceptions or individual belief structures in determining the response to a social activity (and the consequences associated with that response). I would be very interested to hear Richard's comments/thoughts on these points.

Cheers,

Timkin

Timkin: in reply to your points (although Richard is on holiday and might not agree 100% with my interpretation of his work): a social activity is "an 'activity-thing' [that] does not command its participants to obey - it just requests that they perform an action, and tells them the consequences of ignoring this request". The agents have belief structures based on their perception of the world, and so act on subjective knowledge. The activities merely suggest certain courses of action, and the agent decides on the best course according to his personal beliefs, desires and needs.
So Arthur might like the idea of Charlie and Belinda flirting, or he might not; however, his perception of their flirting is not based on his involvement in that activity (you'll note that Arthur is not affected by that activity), it is based on his perception of their behaviour and his deciding whether they are flirting and to what extent. His personal beliefs might later trigger a "threesome" or a "revenge" activity, which would try to recruit the other members if appropriate, but the flirting activity is separate from Arthur.
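(Purely my own reading of that description, sketched in Python - this is not Lionhead code, and every name in it is invented. The point is just that the activity only requests an action and states a consequence; whether the agent complies falls out of his own desires.)

    # An activity requests an action and states the cost of refusing; the
    # agent weighs both against his own desires and decides for himself.
    class ActivityRequest:
        def __init__(self, action, consequence_if_ignored):
            self.action = action
            self.consequence_if_ignored = consequence_if_ignored

    class Agent:
        def __init__(self, desires):
            self.desires = desires   # outcome -> how much this agent wants it

        def utility(self, outcome):
            return self.desires.get(outcome, 0.0)

        def respond(self, request):
            # Comply only if complying beats living with the stated consequence.
            if self.utility(request.action) >= self.utility(request.consequence_if_ignored):
                return request.action
            return "ignore"

    round_request = ActivityRequest("buy a round", consequence_if_ignored="lose face")
    sociable = Agent({"buy a round": 0.2, "lose face": -0.8})   # complies
    miser    = Agent({"buy a round": -0.5})                     # shrugs it off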

The social activities are an important part of the AI system, but they are far from the only part. Does this answer your questions?

Mike

But Arthur and Belinda should already be involved in a 'go out' activity themselves. This activity could then point out to Arthur that his prestige will drop if he lets the flirting go on. If only activities could talk to one another...

quote:
Original post by Diodor
But Arthur and Belinda should already be involved in a 'go out' activity themselves. This activity could then point out to Arthur that his prestige will drop if he lets the flirting go on. If only activities could talk to one another...



I think the two activities are independent of one another - if their prior activity had been 'stay in to watch TV' and Charlie happened to live with them, then the same reactions ought to be evident...



Having activities communicate doesn't make sense in context. Activities attempt to recruit agents to fill roles, not act in accordance with one another. As it is, the people outside an activity (indeed, those inside it as well) don't know the activity's intentions, any more than you know your school's (an activity) intentions apart from those it communicates to you. Each agent is given offers from activities, perceives the external world and has internal desires, which culminate in his eventual behaviour. All the mixing and matching of social and cultural behaviour emerges from the interaction of separate activities. No communication necessary.
Arthur has to notice the flirting himself to act on it; the activities certainly won't tell him.
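(To make "recruit agents to fill roles" concrete, here is a hypothetical sketch - again my own invention, not the Dimitri code. The activity fills its roles from willing agents and tells bystanders nothing; an Arthur with no role learns about the flirting only through his own perception.)

    # An activity tries to fill each of its roles with the most willing agent.
    def recruit(roles, agents):
        cast, taken = {}, set()
        for role in roles:
            candidates = [a for a in agents
                          if a["name"] not in taken
                          and a["willingness"].get(role, 0.0) > 0.0]
            if candidates:
                best = max(candidates, key=lambda a: a["willingness"][role])
                cast[role] = best["name"]
                taken.add(best["name"])
        return cast   # no broadcast to bystanders -- they must notice it themselves

    agents = [
        {"name": "Belinda", "willingness": {"flirt_initiator": 0.7}},
        {"name": "Charlie", "willingness": {"flirt_target": 0.4}},
        {"name": "Arthur",  "willingness": {}},   # never recruited, never told
    ]
    # recruit(["flirt_initiator", "flirt_target"], agents)
    #   -> {"flirt_initiator": "Belinda", "flirt_target": "Charlie"}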

quote:
Original post by MikeD
Activities attempt to recruit agents to fill roles, not act in accordance with one another.


Okay, that statement clarifies things a lot. Clearly, the roles that activities are trying to fill are predefined according to some higher belief system (that of the game designer/developer). This is the problem I have with the system as it stands...

Okay, Arthur perceives Charlie and Belinda talking, so I accept that his personal predispositions may affect how he perceives that situation... i.e., does he realise they are flirting, or perhaps misinterpret the situation? If Arthur realises that Belinda and Charlie are flirting, then presumably there is a 'flirting activity' that now suggests to Arthur one or more possible actions and the consequences of doing or not doing those actions. It would be socially correct if these choices were mediated by Arthur's belief structures, rather than predesigned in accordance with some external agent's belief structures (the game designer's). This, I believe, would be more in tune with Wittgenstein's original ideas and certainly more realistic.

Ideally, what would be desirable is a system in which agents learn their roles in social activities by trial and error (punishment and reward). This is indeed how humans and other animals (particularly primates, which have very structured social hierarchies for expressing their activities) learn to recognise and partake in social activities. You could possibly achieve this learning by utilising the emotions of shame, pride, embarrassment, etc., as a utility/reward function in a reinforcement learning scenario.
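(A minimal tabular sketch of that suggestion, with every number and name invented: emotional feedback acts as the reward, and the agent's habitual response to witnessing flirting emerges from repeated trials rather than from a designed rule.)

    import random

    ACTIONS = ["join the flirting", "take offense", "ignore it"]
    EMOTION_REWARD = {"pride": 1.0, "shame": -1.0, "embarrassment": -0.5}

    q = {a: 0.0 for a in ACTIONS}    # learned value of each response
    ALPHA, EPSILON = 0.2, 0.1        # learning rate, exploration rate

    def social_feedback(action):
        # Stand-in for the crowd's reaction; a real game would derive this
        # from the other agents' responses.
        return {"join the flirting": "embarrassment",
                "take offense": "shame",
                "ignore it": "pride"}[action]

    for _ in range(200):
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)   # explore
        else:
            action = max(q, key=q.get)        # exploit the current habit
        reward = EMOTION_REWARD[social_feedback(action)]
        q[action] += ALPHA * (reward - q[action])
    # In this toy world the agent learns that ignoring the flirting feels best.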

Anyway, just my brief thoughts on the notion of social activities, so take them with a grain of salt!

Cheers,

Timkin

quote:
Original post by MikeD
Arthur has to notice the flirting himself to act on it; the activities certainly won't tell him.


Indeed... in reality this can be seen in situations where people manage to get away with unfaithfulness for years at a time - if they're not in a position to see it themselves, or simply miss the signs, then they won't know...

As for them being in a permanent "married to" activity, the same still applies - Arthur's thoughts on the flirtation (assuming he notices it) are specific to him, as they are to all of the agents - Arthur may actually like the idea of her flirting with Charlie, and it may be Charlie who sees it as a problem (perhaps a past experience has made him wary of married women)...

quote:
Original post by Timkin
It would be socially correct if these choices were mediated by Arthur's belief structures, rather than predesigned in accordance with some external agent's belief structures (the game designer's).


That's exactly how I understood it would work... Arthur would make his decisions based on his own ideals rather than on a global set - this fits precisely with the idea that the agents are self-contained and are given no guidance by the activities they are undertaking (why should flirting always reduce an agent's prestige?) nor by anything else... Arthur's decisions should be guided by similar experiences in the past - if he's never been in a similar situation, or never witnessed one, then his decision should be essentially random (i.e. no pre-determined starting beliefs for him - unless these, themselves, are initially random)...

Timkin, you object to the future activities triggered by the flirting being shaped by the game designer. Instead you want a reward/punishment system for agents to learn their own reactions. Who would design this system? Wouldn't it suffer from the same designer-centric world view and lead to the same problems, whether it occurred from designed learning or from designed rules?

quote:
Original post by paulus_maximus
That's exactly how I understood it would work... Arthur would make his decisions based on his own ideals rather than on a global set - this fits precisely with the idea that the agents are self-contained and are given no guidance by the activities they are undertaking


Except that, as far as I can tell from the information presented, once Arthur perceives that the flirting is occurring, the flirting activity makes a recommendation of action to him. If this recommendation does depend on Arthur's personal beliefs, then that's fine: my question will have been answered, and I believe in that situation this implementation of Wittgenstein is fairly faithful (and would be well worth a look!).

If, however, this is not the case, then the system will lack the depth of variation found among other cultures/species with regard to social activities.

quote:
Original post by MikeD
Timkin, you object to the future activities triggered by the flirting being shaped by the game designer.



It's not that I object... it's just that I think the system will lack depth and won't have the same feel as real social activities among humans. I do agree, though, that this implementation is at least a first step on the road to creating meaningful interactions between NPCs, and between PCs and NPCs.

quote:
Original post by MikeD
Instead you want a reward/punishment system for agents to learn their own reactions. Who would design this system? Wouldn't it suffer from the same designer-centric world view and lead to the same problems, whether it occurred from designed learning or from designed rules?



Not necessarily. How does Arthur formulate the belief that allowing his wife to flirt with other men is a good thing? It gets down to Arthur's experiences. Let's assume that once upon a time Arthur came home to find his wife in bed with another man. Arthur killed the other man and went to gaol for a while. Perhaps he had a good lawyer and he only spent a few years in gaol. Arthur got out of gaol a rehabilitated man. He had learned that killing gets you punished. Arthur now sees his new wife, Belinda, flirting with Charlie. Realising that killing Charlie is not a viable option, because it brings a punishment, Arthur decides that it is okay to let Belinda flirt. He decides that he would rather leave her instead. This of course brings conflicting rewards and punishments (the reward of being empowered and strong along with the punishment of being alone and not getting any)! Perhaps next time Arthur will decide to allow his wife to flirt and may even join in!

Alternatively, what if Arthur's experiences in gaol did not reform him? When he sees Belinda flirting with Charlie, he gets REALLY angry and kills them both, this time going to gaol for life! This is an alternative action based on prior beliefs: that flirting is a form of betrayal, and betrayal should be punished.

As I said above, if the 'flirting activity' were to offer Arthur actions based on Arthur's beliefs, then there isn't an issue here... merely a lack of information in the given articles from which to draw this conclusion. The learning system is not necessary, although it would be closer to human experience and thus a lot more interesting!

Please don't get me wrong, Mike; I'm not putting you, Richard or LHS down for the system that has been designed... merely offering some thoughts on the matter of social activities.

Cheers,

Timkin
