
Archived

This topic is now archived and is closed to further replies.

Infinisearch

link


It seems like a good idea, and a good way to simulate human social rules.

I'd like to know whether only social rules will be involved, or also personal needs and emotions (like pain, hate, love...). I don't think a convincing simulation can run without those (personal needs and wishes).

Another thing that bothers me is the comment that "agents will be searching through big decision trees (in real time)", while they also say they will try to precompute them as far as they can.

The game must have a HUGE number of agents if searching the decision trees will be so CPU-consuming (or the trees must be really, really huge). But this is only speculation; I'd like to hear more about this.

The point is: how can decision trees help with such a problem? There are too many factors, and decisions surely have weights (different importance), even different weights for each individual, which would give each agent their own "values scale" (personality). The decision tree approach doesn't seem right to me: it would be very difficult to balance, since moving from one state to another needs certain requirements, and, as the article says, the decision trees might be of considerable size (if not, only a few kinds of personality will exist).
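The "values scale" idea above can be sketched in a few lines. This is a hypothetical illustration, not the article's actual system: each agent scores candidate actions with personal weights, so two agents facing the same options can choose differently.

```python
# Hypothetical sketch: instead of one fixed decision tree, each agent scores
# candidate actions with its own weights (a personal "values scale").
# All names and numbers here are illustrative, not from the article.

def score_action(action_factors, personality_weights):
    """Weighted sum of an action's factors using this agent's weights."""
    return sum(personality_weights.get(factor, 0.0) * value
               for factor, value in action_factors.items())

def choose_action(candidate_actions, personality_weights):
    """Pick the candidate action with the highest personal score."""
    return max(candidate_actions,
               key=lambda a: score_action(a["factors"], personality_weights))

# Two agents with different weights pick different actions from the
# same candidates, giving each a distinct personality.
actions = [
    {"name": "go_to_bar", "factors": {"social": 0.8, "rest": 0.2}},
    {"name": "stay_home", "factors": {"social": 0.1, "rest": 0.9}},
]
extrovert = {"social": 1.0, "rest": 0.3}
introvert = {"social": 0.2, "rest": 1.0}
print(choose_action(actions, extrovert)["name"])  # go_to_bar
print(choose_action(actions, introvert)["name"])  # stay_home
```

The balancing problem the poster describes shows up here too: the behaviour of the whole population depends on tuning these weights against each other.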

I'm only guessing, because maybe they know that a few states and a few 'social rules', well combined, can create a big variety of personalities. IMO a complete study would be really interesting: how many personality factors, plus social processes, are needed to create a wide variety of people?

Why aren't they using some fuzzy logic to make decisions? It would probably take less computer time. Also, similar groups of agents could benefit from calculating some of the functions per group rather than per agent, if the fuzzy functions are stacked with care.
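For readers unfamiliar with the fuzzy-logic suggestion, here is a minimal sketch of what it might look like. The membership functions, rule, and numbers are all made up for illustration:

```python
# Hypothetical sketch of the fuzzy-logic idea: membership functions map a
# crisp input (e.g. thirst in [0, 1]) to a degree of truth in [0, 1], and a
# rule combines degrees with min/max instead of walking a decision tree.

def triangular(x, lo, peak, hi):
    """Triangular membership function: 0 outside (lo, hi), 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def desire_for_beer(thirst, sociability):
    """Rule: IF thirsty AND sociable THEN want a beer (AND = min)."""
    thirsty = triangular(thirst, 0.2, 0.7, 1.0)
    sociable = triangular(sociability, 0.1, 0.6, 1.0)
    return min(thirsty, sociable)

print(round(desire_for_beer(0.7, 0.6), 2))  # 1.0
print(round(desire_for_beer(0.1, 0.6), 2))  # 0.0
```

The group optimisation the poster mentions would amount to evaluating shared membership functions once per group of similar agents and reusing the result.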

And, finally, why are they so worried about computer resources? Agents are not making decisions every frame. As far as I know, a person cannot make more than about 160 decisions per second, and even that only applies to activities like flying, driving, or fighting. This problem has a more restricted domain: agents will have a variety of decisions to make, but it will be limited. If an agent decides to go for a beer, it will not have to make many decisions until he gets to the bar. I think we can reduce this to one decision per second in most cases, which would still allow the agent to react to the world around him if something happens on his way to the bar.
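The rate-limiting idea above can be sketched as a throttled update loop. This is an illustrative sketch (the class and method names are invented, and a real game would use simulation ticks rather than raw seconds):

```python
# Hypothetical sketch of rate-limited deliberation: the expensive re-plan
# runs about once per simulated second, while a cheap reactive check runs
# every frame so the agent can still respond to interrupts immediately.

class Agent:
    DECISION_INTERVAL = 1.0  # seconds between full re-plans

    def __init__(self):
        self.next_decision_time = 0.0
        self.current_goal = None

    def update(self, now, world_event=None):
        # Cheap reactive check runs every frame...
        if world_event is not None:
            self.current_goal = self.react_to(world_event)
            return
        # ...but the expensive deliberation only ~once per second.
        if now >= self.next_decision_time:
            self.current_goal = self.deliberate()
            self.next_decision_time = now + self.DECISION_INTERVAL

    def deliberate(self):
        return "walk_to_bar"  # placeholder for the tree/utility search

    def react_to(self, event):
        return f"respond_to_{event}"

a = Agent()
a.update(0.0)            # full deliberation runs
a.update(0.5)            # skipped: too soon since last re-plan
a.update(0.6, "fight")   # interrupt handled immediately
print(a.current_goal)    # respond_to_fight
```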

Anyway, I'll buy that game as soon as they put it in stores. It remains to be seen, for sure. What do you think about this?

JJpRiVaTe
Web: Private Zone

quote:
Original post by MikeD
I think it's a fantastic idea.
But then I _have_ to say that.


Yes you would Mike, wouldn't you! BTW, how long have you been at LH now? Are you actually working on Dmitry or some other project?

Cheers,

Timkin

Sad as I am to admit it, I do indeed work for Lionhead, and I am the _other_ AI programmer on Dimitri (I don't know where the press got the "Dmitry" spelling, but the .sln project in MSDev is very much called Dimitri). I've just passed the 3-month mark and they haven't fired me yet ;-).
For another article on the idea of social processes, by Richard, see http://www.gamasutra.com/features/20020424/evans_01.htm.

I'm not sure how many questions I can answer at the moment, as this project is far from completion and pretty much under wraps. I'm primarily working on the physical AI (pathfinding, steering, interfacing with the animation system, etc.) atm, whereas all the social AI is handled by Richard.

Anyway, the above article should clear up a few technical points.

Mike

MikeD, could you point out some books or papers that you use to gain knowledge on this subject? Is it a combination of neural nets/genetic programming/FSMs etc.? Or is there a good book on social AI programming?

Thank you.

If you've read the article, there are some references at the end. Most of the work is oriented around decision trees, although I can't really go into much detail.
Did you read the article?

Mike

Mike, perhaps you could pass on these thoughts to Richard and report back his comments?

I see a fundamental problem/shortcoming with the system as described in these two articles. Social activities are moderated by the perceptions of the individuals taking part in them. Take the example of Arthur, Belinda and Charlie mentioned in the Gamasutra article. Belinda is flirting with Charlie. Charlie is only taking part in this social activity if he understands that what Belinda is doing is flirting. If Charlie is inexperienced in such matters, or simply naive, he may be predisposed to misinterpret the nature of a particular social activity; in this case flirting. Perception of the nature of the social activity is extremely important for participation in it. This is the whole basis of social misunderstandings!

Even though Charlie may not yet understand what is going on, Arthur may indeed realise that Belinda is flirting with Charlie and may take offense, thus creating a new social activity (taking offense) which might lead to other activities (beating up Charlie).

Perhaps Belinda and Charlie are merely conversing, but Arthur sees their conversation as flirting. This leads onto the issue that, if we take the Bayesian stance, an agent's posterior beliefs depend on their prior beliefs and the observations they make.
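The Bayesian point can be made concrete with a toy calculation. This is a hypothetical sketch with made-up numbers, just to illustrate how the same observation yields different posteriors under different priors:

```python
# Hypothetical sketch of the Bayesian stance: Arthur's posterior belief
# that "they are flirting" depends on his prior and on the likelihood he
# assigns to the observed behaviour. All numbers are invented.

def posterior(prior, p_obs_given_flirting, p_obs_given_talking):
    """Bayes' rule for the binary hypothesis 'they are flirting'."""
    joint_flirt = prior * p_obs_given_flirting
    joint_talk = (1.0 - prior) * p_obs_given_talking
    return joint_flirt / (joint_flirt + joint_talk)

# Same observation (laughing, standing close), different priors:
jealous_arthur = posterior(prior=0.6, p_obs_given_flirting=0.8,
                           p_obs_given_talking=0.3)
naive_charlie = posterior(prior=0.05, p_obs_given_flirting=0.8,
                          p_obs_given_talking=0.3)
print(round(jealous_arthur, 2))  # 0.8
print(round(naive_charlie, 2))   # 0.12
```

So jealous Arthur concludes "flirting" from the very same scene that naive Charlie reads as mere conversation, which is exactly the social-misunderstanding mechanism described above.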

An agent's belief structures will moderate their responses to a social activity. Perhaps Arthur is so inclined as to like the fact that Belinda is flirting with Charlie... perhaps he and his wife have been looking for a third sexual partner? So while many people might think it inappropriate for Belinda to flirt with Charlie, Arthur's prior beliefs may lead him to encourage the situation. These facts should be taken into account when determining the possible actions Charlie could take in response to the social activity.

The model of social activity as presented in these articles does not appear to take into account either perceptions or individual belief structures in determining the response to a social activity (and the consequences associated with that response). I would be very interested to hear Richard's comments/thoughts on these points.

Cheers,

Timkin

[edited by - Timkin on September 3, 2002 10:26:48 PM]

Timkin: In reply to your points (although Richard is on holiday and might not agree 100% with my interpretation of his work): a social activity is "an 'activity-thing' [that] does not command its participants to obey - it just requests that they perform an action, and tells them the consequences of ignoring this request". The agents have belief structures based on their perception of the world, and so act on subjective knowledge. The activities merely suggest certain courses of action, and the agent decides on the best course according to their personal beliefs, desires and needs.
So Arthur might like the idea of Charlie and Belinda flirting, or he might not; however, his perception of their flirting is not based on his involvement in that activity (you'll note that Arthur is not affected by that activity). It is based on his perception of their behaviour and on his deciding whether they are flirting and to what extent. His personal beliefs might later trigger a "threesome" or a "revenge" activity, which would try to recruit the other members if appropriate, but the flirting activity is separate from Arthur.
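The quoted "requests an action, tells you the consequences" model can be sketched roughly as follows. This is my own illustrative reading, not Lionhead's implementation; every class and number here is invented:

```python
# Hypothetical sketch of the quoted activity model: an activity suggests an
# action and states the cost of ignoring it; the agent weighs that request
# against its own desires and may still refuse.

class SocialActivity:
    def __init__(self, requested_action, penalty_if_ignored):
        self.requested_action = requested_action
        self.penalty_if_ignored = penalty_if_ignored

class Participant:
    def __init__(self, desires):
        # desires: action -> how much this agent wants to do it
        self.desires = desires

    def respond(self, activity):
        """Comply only if desire plus the avoided penalty beats the best alternative."""
        comply_value = (self.desires.get(activity.requested_action, 0.0)
                        + activity.penalty_if_ignored)
        best_other = max((v for a, v in self.desires.items()
                          if a != activity.requested_action), default=0.0)
        if comply_value >= best_other:
            return activity.requested_action
        return max(self.desires, key=self.desires.get)

flirting = SocialActivity("flirt_back", penalty_if_ignored=0.2)
shy_charlie = Participant({"flirt_back": 0.1, "leave": 0.9})
keen_charlie = Participant({"flirt_back": 0.7, "leave": 0.2})
print(shy_charlie.respond(flirting))   # leave
print(keen_charlie.respond(flirting))  # flirt_back
```

Note the activity never forces compliance: the decision stays with the agent's own belief and desire structure, which matches the reply above.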

The social activities are an important part of the AI system, but they are far from the only part. Does this answer your questions?

Mike
