#### Archived

This topic is now archived and is closed to further replies.

# Adaptive Virtual Game Worlds: Where to Begin?


## Recommended Posts

quote:
Original post by Timkin
irbrian, FL is about logic... and you are correct, it's a logic in which there are other values besides 0 and 1. But this is all set theory... As fup pointed out, the membership value is not a probability or likelihood of membership in a set. It's a degree as to how much the item belongs to that set. In Aristotelian logic, items belong to one set or another. Statements are either true or false. In Fuzzy Logic, statements can be partly true and partly false at the same time. This, however, has absolutely nothing to do with uncertainty in a statement. Fuzzy Logic makes statements about things in the world. Uncertainty formalisms - such as Bayesian probabilities - make statements about what we believe to be true or false in the world. Consider an example statement: John is a thief. Uncertainty in this statement might be represented by saying that there is a 70% chance that this statement is true. Fuzzy Logic, however, would say that either John is a Thief, John is not a Thief, or, to some degree, John is both a Thief and not a Thief.

Do you see the distinction?
I think I'm beginning to see the distinction now... Fuzzy Logic is about propositions being both true and false to some degree. E.g., "It is sort of hot outside" can be interpreted as "It is hot outside AND it is not hot outside." To account for the degree of truthfulness, the statements are represented by a value between 0.0 and 1.0; e.g., the first proposition, "It is hot outside," might be a 0.6, and the second, "It is not hot outside," might have a value of 0.4.
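As a rough sketch of degrees of membership (the 20-35 degree ramp below is an invented choice, purely for illustration):

```python
def hot_membership(temp_c: float) -> float:
    """Degree to which 'it is hot outside' is true, in [0, 1].
    The 20-35 C ramp is an arbitrary choice for illustration."""
    if temp_c <= 20.0:
        return 0.0
    if temp_c >= 35.0:
        return 1.0
    return (temp_c - 20.0) / 15.0

temp = 29.0
hot = hot_membership(temp)   # 0.6: "it is hot outside"
not_hot = 1.0 - hot          # 0.4: and also, to a degree, it is not
```

Both memberships hold at once, which is exactly the "partly true and partly false" reading above.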

Thus, considering the following two statements:
A) "John is sort of a thief."
B) "John might be a thief."

You are suggesting that proposition A is Fuzzy Logic, because John is both a thief and not a thief; and proposition B is more of, I dunno, a boolean probability I guess you could say.

If I'm understanding so far, I'll re-evaluate my original statement:
"I believe someone has robbed my store."

Perhaps then this would be best broken into two statements:
A) "I believe that my store was robbed.
B) "Someone robbed my store."

Seems to me the following is true:
1. The evaluation of B is predicated upon the truthfulness of A.
2. A is a boolean probability:
There is a high probability the store was robbed.
3. B is neither probability nor fuzzy logic, because it's not a true or false value. It is simply an unknown, a variable -- a Question Needing an Answer in the mind of the NPC.

Alright.. I think I get it now. Someone please tell me if I'm wrong.. otherwise, thanks for the clarification.

##### Share on other sites
Anyway so my original point was that NPCs never have enough information to form absolute conclusions about things.

That said, does Fuzzy Logic still come into play? How exactly do NPCs handle the situation when they acquire conflicting information?

Given that the NPC Frank has no opinion yet on the subject, consider the following Observations:
A) "Sal believes that John is a thief."
B) "Joe believes that John is not a thief."

So how would this be best handled in order for the NPC to begin to form an opinion?
1) "It is believed that John is a thief (50%) AND It is believed that John is not a thief. (50%)" (True FL)
2) "John might be a thief. (50%)" (Probability)
3) "I trust Joe more than I trust Sal. THUS, John is a thief (0.4) AND John is not a thief (0.6)." (FL, with weighted inputs)
4) "I trust Joe more than I trust Sal. THUS, it is probable that John is not a thief." (Probability, weighted)
etc.
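One way to see how options 2-4 differ numerically; the trust weights are the 0.4/0.6 assumed in option 3, and the folding rule is my own illustration:

```python
# Hypothetical trust weights for the two informants (invented values).
trust = {"Sal": 0.4, "Joe": 0.6}

# Each observation: (source, claims John is a thief?)
reports = [("Sal", True), ("Joe", False)]

# Options 2/4: fold the reports into one trust-weighted probability.
total = sum(trust[src] for src, _ in reports)
p_thief = sum(trust[src] for src, claims in reports if claims) / total

# Option 3: read the same numbers as fuzzy memberships instead:
# "John is a thief" to degree 0.4 AND "John is not a thief" to degree 0.6,
# held simultaneously, rather than as a single uncertain proposition.
is_thief, is_not_thief = p_thief, 1.0 - p_thief
print(p_thief)  # 0.4 -> "it is probable that John is not a thief"
```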

##### Share on other sites
I would use fuzzy logic, but not for the beliefs or the memories - for the personality. "The pre-generation of John's character defined him as greedy, lazy, opportunistic, sneaky, and cowardly, yet charismatic." In other words, John has a greediness of 0.99, an activeness of 0.05, etc. Values closer to 0.5 would represent more normal people, and extremes would represent people who would have a larger impact on this "civilization." Since almost all of John's stats are extreme, it is expected that he would have a large impact on this city, which is just what happened in your scenario.
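A minimal sketch of that idea; the trait names and values, and the notion of scoring "extremeness" as mean distance from the 0.5 midpoint, are my own illustration:

```python
# Fuzzy personality traits in [0, 1]; names and values are invented.
john = {"greediness": 0.99, "activeness": 0.05,
        "opportunism": 0.95, "sneakiness": 0.90,
        "courage": 0.10, "charisma": 0.85}

def extremeness(traits):
    """Mean distance from the 'normal' midpoint 0.5. A high score
    suggests a character who will have a big impact on the world."""
    return sum(abs(v - 0.5) for v in traits.values()) / len(traits)

print(extremeness(john))  # well above a 'normal' citizen's near-zero score
```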

--------------------------------------
I am the master of stories.....
If only I could just write them down...

##### Share on other sites
quote:
Original post by irbrian
Thus, considering the following two statements:
A) "John is sort of a thief."
B) "John might be a thief."

You are suggesting that proposition A is Fuzzy Logic, because John is both a thief and not a thief;

Yes.

quote:
Original post by irbrian
and proposition B is more of, I dunno, a boolean probability I guess you could say.

No. Boolean suggests only one of two mutually exclusive values. That would be First Order (Aristotelian) logic. Probabilities are values in the range [0,1] of a variable that satisfies the axioms of probability.

quote:
Original post by irbrian
If I'm understanding so far, I'll re-evaluate my original statement:
"I believe someone has robbed my store."

Perhaps then this would be best broken into two statements:
A) "I believe that my store was robbed.
B) "Someone robbed my store."

This is a hard example to deal with, because it's really hard to describe how a store was both robbed and not robbed. It's a little nonsensical. However, given this example, I would personally write it as:

A) My store was sort of robbed
B) Someone may have robbed my store.

A) Is clearly now a statement suggesting that the store was both robbed and not robbed.
B) Is now a clear statement relating a belief held by an agent.

quote:

Seems to me the following is true:
1. The evaluation of B is predicated upon the truthfulness of A.

Not necessarily. While it may be true that the store was not robbed, an agent can hold false beliefs. That is, they can believe something to be true, even though in reality it is not true (and vice versa).

I hope this helps to further clarify the issue for you.

Cheers,

Timkin

##### Share on other sites
quote:
Original post by Timkin
quote:
Original post by irbrian
and proposition B is more of, I dunno, a boolean probability I guess you could say.

No. Boolean suggests only one of two mutually exclusive values. That would be First Order (Aristotelian) logic. Probabilities are values in the range [0,1] of a variable that satisfies the axioms of probability.
Alright, so boolean probability is a contradiction. You didn't state it, but it seems clear that prop. B in that case was an issue of probability. Incidentally, unless it's a common usage, I wouldn't define probability as a range of values [0,1], as that really causes some confusion with the whole Fuzzy Logic 0.0-1.0 thing. Can't we just use percentages for probability like the rest of the world and make the distinction as clear as possible?
quote:
quote:
Perhaps then this would be best broken into two statements:
A) "I believe that my store was robbed.
B) "Someone robbed my store."
This is a hard example to deal with, because it's really hard to describe how a store was both robbed and not robbed. It's a little nonsensical. However, given this example, I would personally write it as:

A) My store was sort of robbed
...
A) Is clearly now a statement suggesting that the store was both robbed and not robbed.
Ugh, now you're trying to turn it back into Fuzzy Logic. I thought we agreed to stay AWAY from Fuzzy Logic in this case, as I now agree that it really doesn't apply. Seems to me it's a simple case of probability. What you're saying up there simply doesn't make any sense at all -- FL strikes me as a statement of fact, just as Boolean logic is a statement of fact. Of course it's possible I'm totally off-base (and I'm sure you'll correct me if I am), but I don't think even FL should allow two mutually exclusive conditions to co-exist.

Going back to probability-based beliefs, we could say that at a certain level of probability, an NPC forms a belief that something is true, even though the NPC will never understand something to be 100% true. For instance:

0-10% --- Invalid Range for Belief formed by AI
10-20% -- NPC Believes the proposition is FALSE
21-40% -- NPC Believes the proposition is "probably FALSE"
41-60% -- NPC Believes the proposition is EITHER True OR False -- NOT True AND False.
61-80% -- NPC Believes the proposition is "probably TRUE"
81-90% -- NPC Believes the proposition is TRUE
91-100% - Invalid Range for Belief formed by AI
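That mapping might be sketched as follows; the boundaries are taken from the table above, and the exact handling of the 10% and 90% edges is my own choice:

```python
def belief_label(p: float) -> str:
    """Map a probability estimate (0-100%) to the linguistic belief
    ranges proposed above. Values the NPC's AI should never produce
    (below ~10% or above ~90%) are flagged as invalid."""
    if p < 10 or p > 90:
        return "invalid for an NPC belief"
    if p <= 20:
        return "FALSE"
    if p <= 40:
        return "probably FALSE"
    if p <= 60:
        return "either true or false"
    if p <= 80:
        return "probably TRUE"
    return "TRUE"

print(belief_label(70))  # probably TRUE
```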

quote:
quote:
Seems to me the following is true:
1. The evaluation of B is predicated upon the truthfulness of A.
Not necessarily. While it may be true that the store was not robbed, an agent can hold false beliefs. That is, they can believe something to be true, even though in reality it is not true (and vice versa).
I agree that NPCs can believe something to be true or false. I meant that the ultimate reality of statement B is predicated upon the TRUE OR FALSE value of A.
quote:
I hope this helps to further clarify the issue for you
I sort of understand -- let's call it a 0.7.

##### Share on other sites
quote:
Original post by irbrian
Incidentally, unless it's a common usage, I wouldn't define probability as a range of values [0,1] as that really

Probabilities are most definitely described on the set [0,1]. Percentages and probabilities are NOT the same thing, since a percentage is just talking about a proportion of something. Any decent book on probability theory should be clear about this.

quote:
Original post by irbrian
causes some confusion with the whole Fuzzy Logic 0.0-1.0

Yes, it does for many people, which is why these people think it is appropriate to use Fuzzy Logic to describe uncertainty.

quote:

Can't we just use percentages for probability like the rest of the world

At least within the scientific community, the 'rest of the world' does not use percentages instead of probabilities.

quote:
Ugh, now you're trying to turn it back into Fuzzy Logic.

Sorry. I hadn't slept in a very long time when I read your post. I'm sure I simply misinterpreted what you wrote as you trying to make a distinction between FL and probability theory using that example. Sorry if it has confused the issue.

quote:

but I don't think even FL should allow two mutually exclusive conditions to co-exist.

Actually, that was the whole point of Fuzzy Logic. One common example used to teach people FL is to ask the people in an audience to "put up your hand if you are happy with your job" and then "now put down your hand if you are unhappy with your job". Anyone with their hand still up is displaying Fuzzy Logic in that they are both happy and unhappy with their job. Given only those two statements, it seems nonsensical to be both happy and unhappy about something. But clearly, hidden in that example is the possibility that they are not always happy and unhappy, but rather happy at some times and unhappy at others. The temporal aspect is withdrawn from the premises, allowing the apparently contradictory result.

quote:

0-10% --- Invalid Range for Belief formed by AI
10-20% -- NPC Believes the proposition is FALSE
21-40% -- NPC Believes the proposition is "probably FALSE"
41-60% -- NPC Believes the proposition is EITHER True OR False -- NOT True AND False.
61-80% -- NPC Believes the proposition is "probably TRUE"
81-90% -- NPC Believes the proposition is TRUE
91-100% - Invalid Range for Belief formed by AI

But here you're trying to map a continuous variable to discrete outputs, which is what happens in the final step of Fuzzy Logic (and vice versa for the input). Why is it necessary to do this? If you're looking for a way of describing confidence in the probability of an event, you might want to use Dempster-Shafer theory. If you're simply trying to relate probabilities to linguistic statements of belief, then yes, what you've done above might be quite reasonable. However, it's also quite arbitrary: if you said event X was probably true, you would mean that the probability of the event is between 0.61 and 0.8, while someone else might take it to mean the probability of the event is between 0.75 and 0.90, because they use a different mapping function. How do we decide on an 'appropriate' mapping?
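For the curious, here is a minimal sketch of Dempster's rule of combination over the two-element frame {thief, not-thief}; the mass assignments for the two sources are invented for illustration:

```python
from itertools import product

# Frame of discernment: is John a thief?
THIEF, NOT_THIEF = frozenset({"thief"}), frozenset({"not-thief"})
THETA = THIEF | NOT_THIEF  # the whole frame: uncommitted belief

# Two sources with invented mass assignments. Unlike a probability,
# each source may leave some mass uncommitted (on THETA).
m_sal = {THIEF: 0.7, THETA: 0.3}
m_joe = {NOT_THIEF: 0.6, THETA: 0.4}

def combine(m1, m2):
    """Dempster's rule: intersect focal sets, renormalise by 1 - conflict."""
    out, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # contradictory evidence is discarded
    return {s: v / (1.0 - conflict) for s, v in out.items()}

m = combine(m_sal, m_joe)
# m[THIEF] is the combined support for "thief"; the mass left on THETA
# measures how much uncertainty remains after combining the reports.
```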

Timkin

##### Share on other sites
Hello - how about a simpler system, a "neural frame"?

It has three layers:
1: All concepts are hand-coded in a net. Each concept has hard-coded relations to the others; these form inherent beliefs (mostly class-relation types).

2: An appraisal system, which values the information.

3: A thinking process that manages knowledge.

Explanation:

All the concepts needed are encoded up front, so the agent has no need to learn them: concepts like what an object is, or the name of a person - frame-like knowledge, but not only that; actions and events are also coded in the network.

Once a piece of information is known, the agent builds links between the concepts that the information activates, and the appraisal system gives each link a strength.

For example: "Job has robbed the car."
The concepts that are activated are car, Job, and rob; a link is built and given a strength according to the importance of the information. The "statement" or "fact" concept would also be activated, depending on whether someone told the agent or it was a direct observation.

If the information is given again, the strength increases, building strong relations between these concepts. If someone gives contradictory information, that decreases the strength of some links and builds up a new link to the NOT statement.

When the agent has to consider the fact, the concepts are reactivated, and it can retrieve the relations by following the strengths of the links.

Better: when considering something about Job, the concept is activated and transmits part of its activation to associated concepts, which transmit their activation to their neighbours in turn, until the activation falls to 0. Activating Job also activates the "has robbed the car" link, if the strength is sufficient; and if Job has robbed many objects, this activates the concept of thief, because the multiple ROB associations have raised the activation of the thief concept. Just as a neural network has an activation function, a concept is activated above a certain strength.
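A minimal sketch of this link-building and spreading-activation idea; the concept names, strength increments, and activation threshold are all invented:

```python
# Sketch of the "neural frame" idea: concepts linked by weighted edges.
links = {}  # (concept, concept) -> strength in [0, 1]

def observe(a, b, importance=0.3):
    """Build or reinforce a link between two co-activated concepts."""
    key = (a, b)
    links[key] = min(1.0, links.get(key, 0.0) + importance)

def spread(start, threshold=0.2):
    """Propagate activation outward until it falls below the threshold."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for (a, b), w in links.items():
            if a == node:
                act = activation[node] * w
                if act > threshold and act > activation.get(b, 0.0):
                    activation[b] = act
                    frontier.append(b)
    return activation

# Repeated ROB observations strengthen Job's association with "thief".
for _ in range(3):
    observe("Job", "rob")
observe("rob", "thief")
print(spread("Job"))  # "thief" ends up activated via the rob links
```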

Another side effect is analogy.

For example, a child says: "my car is yellow like a banana." The concept yellow is activated and has a strong relation with banana (from experience), so it automatically activates banana first, but with the "statement" concept; the concept car is also activated, with the "fact" concept. Car then has priority, and banana is discarded as not relevant to the situation; however, because it received an activation, the agent finds a relation between the two: the car is yellow (fact) LIKE a banana (statement). Now imagine what would happen if a situation reminded an agent of another strong relation (the death of his father), and look at how it would affect his actions in some interesting way!

The strength of this approach is that it's context-sensitive. The agent doesn't actually hold facts as objects, but holds a TOPOLOGY; concepts could be shared by all agents. It's built around combinatorial and implicit knowledge, so memory is not really a problem, since the memory footprint does not change with whether the agent knows or doesn't know something (it would become a problem relative to the allocated memory, which would affect the flexibility of the agent).

The third layer is not yet really tested. The agent could sometimes evaluate its knowledge in order to detect flaws (contradictions, for example): when two relation types come into conflict, the agent would make inferences to resolve the problem in a more satisfying way. This is what the appraisal system is for. You could actually see appraisals as the "temperature" of a problem, or more simply as EMOTION. Different temperatures represent the priority of a problem; for example, the agent would not make inferences about a case indefinitely, but only until the temperature drops below an acceptable state. The problem is choosing a good appraisal set; it's better to think of the brain as a system that seeks equilibrium. The sensitivity of the appraisals builds the personality of the agent, and a side effect of overfitted relations is that they build stubbornness...

The emulation of a human brain, even at low resolution, doesn't have to be 100% rational and deal perfectly with problems, since we ourselves don't. This is all the spice of life: because we hold a lot of inconsistencies, we create dramatic moments of conflict. If we were as perfect as we want AI to be, life would be peace and love. The never-ending flow of life is driven by these problems that we never really resolve, seeking only an optimal state of satisfaction rather than true understanding. That's the whole purpose of story: showing us struggling with our own imbalance with the world. Your AI would not be lesser if it exists to create story - simulate imperfection to create a perfect story.

I hope my English is not too ugly to read; sorry for my writing.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

##### Share on other sites
Wow... I'm lost again.

Maybe it's just late. I'll try reading this again tomorrow.

##### Share on other sites
Neoshaman,

I suspect that your system would suffer many of the same problems that large production systems suffer: the management and storage responsibilities of the database grow exponentially with the amount of information to be stored in it. If you want to look at other systems that try to do what you are suggesting, try Cyc, by Doug Lenat, as a starting point.

Cheers,

Timkin

##### Share on other sites
Have you got other examples?

Well, actually it doesn't seem like Cyc. Basically, my AI is designed to have inaccuracy and to be emotional rather than smart and rational; it was designed for the dramatic aspect and works in association with scripting. It's an embodied, feeling, intuitive AI, contextual to the game.

Everything revolves around emotion, and it's more like a heterogeneous neural net in which we have frozen the concepts stored in it. All I wanted was a clever, fast, and simple system to handle memory; the memory only serves as a temporal context of experience through emotion. From a given experience, past experience is used as context in the decision, which then activates the appropriate script (action).

For example, a character in the "worry" state would seek experiences that reduce the worry state, and this would output one goal to the decision system.

The key word is drama! And in drama, misunderstanding is a strong tool! I think if humans were rational, we would have no stories to tell, and utopia would wisely pop up on earth.

(Finally, some research shows that the brain stores memories as concepts in clusters. Still, an entire object is a pattern over many clusters, just as a frame has attributes, and the activation of one concept activates other concepts as well. There are also some hard-coded concepts in the mind; for example, our ability to read is said to derive from the capacity for recognizing animal footprints during the hunting stage of humanity. All of this was discovered by studying local brain lesions, which create strange results. Conclusion: memories are both clustered and diffuse in the network.)

Actually, this is a side effect of the emotional AI I designed a year ago. I adapted the construal engineering approach to game design and found a system that I had a hard time understanding (I wasn't into AI yet), and now I call it a "neural frame".

It's more like Metacat (from Hofstadter and the FARG group) crossed with a neural network, but I'm still looking at the Metacat system, or things like affordances, etc. (For example, the mind space is designed like the method given in the GameDev "annotated objects" thread, but instead of searching in a spatial object space, we search in a concept space; the activated concepts are the perception field.) Does it make sense to you, TIMKIN?

However, I have not tested the rational logic top system that manages the whole; I used a reinforcement-like approach. (Well, the system was never fully implemented, just small cases. I have to find a generic structure that would permit authors to hand-write concepts adapted to their game: hard-coded templates.) And if I include it, it would be an irrational rationality, since it would be invoked solely for some classes of problem solving, and only by browsing the activated facts.

I will try to implement it better for my social sim, but I have just finished the first design stage of the binary DM and am going on to the second part, so I have to keep this for another time.


##### Share on other sites
quote:
Original post by Neoshaman
have you got other example??

well actually it doesn't seem like Cyc

Perhaps what I wrote was misleading (sorry). I wasn't suggesting that your idea and Cyc were the same, but rather that Cyc was an attempt at trying to store lots and lots of relational information about the real world... and that both your idea and Cyc would suffer many of the same problems. Thus, looking at these issues with regards to Cyc might give you insight as to how you could handle them in your system.

Timkin

##### Share on other sites
quote:
Original post by Timkin
Perhaps what I wrote was misleading (sorry)... looking at these issues with regards to Cyc might give you insight as to how you could handle them in your system.

No need to be sorry. I think it's me who did not understand, and I still don't see the problem. Sorry, but could you explain more? I'm looking at it and can't find the problem for now...

Thanks


##### Share on other sites
quote:
Original post by Neoshaman
i'm looking at and can't find the problem for now...

It's a question of the size of your database and the complexity of adding and retrieving information efficiently. Consider a database with 10 items in it. Then the least number of links you could have is zero, because everything in it is unrelated. This isn't very likely. The worst case is that every item is related to every other item. That means 10^2 links. That means you're storing 10 items and 100 links. That doesn't sound like much, but what if you have 10,000 items? In the worst case scenario you're then storing 10,000^2 = 10^8 links. Now, ask yourself how much computation is required to extract information from a single query to the database.

Certainly, you are unlikely to have a worst case scenario where every item is related to every other item. But we might reasonably expect every item to be related to, say, 10 other items. For 10,000 items in your database, that's still 100,000 links.
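The arithmetic, spelled out:

```python
items = 10_000
worst_case_links = items * items   # fully connected: 10^8 links
typical_links = items * 10         # ~10 relations per item: 100,000 links
print(worst_case_links, typical_links)
```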

Do you see now that storage is going to become a problem very quickly?

Cheers,

Timkin

##### Share on other sites
Certainly it's good to keep general technical issues in mind, and I also realize that some may be thinking about this in terms of near-future projects, so there's no problem with pointing out practical limitations.

However, I'd just like to restate my original intention that this discussion be more or less tech-free... that is, assuming that computational requirements were no barrier. The original point was to focus on the AI theories and practices that might someday lead to implementation.

Carry on.

##### Share on other sites
quote:
Original post by Timkin
quote:
Original post by Neoshaman
i'm looking at and can't find the problem for now...

It's a question of the size of your database and the complexity of adding and retrieving information efficiently.

Well, at least this part is exactly what emotion and activation are for. By having some "energetics", we limit the knowledge to the context needed and take cost and resources into the equation.

There is a perception space (which could be shared by many agents; for example, the game is cut into "scenes"). This perception activates the corresponding perceptual concepts in the "brain" and leaves out anything that is not perceived. (This is similar to directly putting the external into the internal, and in my engine it is the same, since a game deals with abstract entities.)

Once those concepts are activated in each brain, they transmit their activation to surrounding concepts (those that are linked to them), by either inhibition or activation, BUT this activation is weighted. A neural frame is just like a neuron: it has inputs, an activation function, and outputs. Unlike neurons, they are not anonymous, and some links are hand-wired, like frames. Each neural frame has a degree of activation depending on the weights, and its activation threshold must be met. This prevents all concepts from being recalled: it eliminates concepts that are inhibited and leaves only the concepts tied to the context. This is context-sensitive; experience builds a temporal context. Mood can change the threshold of activation, generally to change perception, but it does not change the weights of links.

ARCHITECTURE
Here is the schema. There are a concept A and a concept B:

A               B
(no link: there is virtually an infinite distance between the two)

A>>>>>>.001>>>>>B
(there is now a relation, but it's weak; A is far from B, and the amount of energy that crosses the path is small and may not be sufficient to activate B)

A>>>>>>1.0>>>>>>B
(the relation is maximal; if A is activated, B certainly will be)

A>>>>>>.04>>>>>>B
(A may activate B)

Directly perceived concepts are marked as facts (perceived) and have the strongest activation.

While considering an element, the agent first follows the links it has an interest in, and then follows those with the strongest activation first (priority). For example, if the amount of resources dedicated to an action is low, it cuts the less prioritized links first and only considers the high-priority ones.

You can model it as a 3D model of concepts: XY is the lateral relation between elements, and Z is the depth. Lateral relations model beliefs about concepts and their relations, while the depth models the classification between them.

Of course, there is a time decay on links: those which are not stimulated fall below some strength, and we could add a limit on the number of links, erasing the weakest first, simulating a "forget" option.

Note that we could add some hidden concepts with anonymous neurons, which would work like the blank tile in Scrabble to add flexibility, or even hidden layers to let the agent build its own "mind dialect", depending on the design of the game and its requirements.
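The decay / "forget" rule might be sketched like this; the decay rate, strength floor, and link limit are all invented values:

```python
# Sketch of link decay and forgetting; rates and limits are invented.
links = {("A", "B"): 0.04, ("A", "C"): 0.8, ("A", "D"): 0.001}

def decay(links, rate=0.5, floor=0.01, max_links=2):
    """Weaken unstimulated links; drop those that fall below the floor,
    then erase the weakest until only max_links remain (forgetting)."""
    decayed = {k: w * rate for k, w in links.items() if w * rate >= floor}
    keep = sorted(decayed, key=decayed.get, reverse=True)[:max_links]
    return {k: decayed[k] for k in keep}

print(decay(links))  # the weakest relation has been forgotten
```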

EMOTION
Emotions are also represented as concepts and activate their representations like anything else, but emotions are not present in the scene description: they belong to the internal state of the agent, making the agent aware of the impact of emotion within a context (self-awareness). Emotions activate the agent's reactions to a given context and regulate its behaviour and actions by giving it an understanding of what is at stake. They also work as retro-regulation, since they score a particular state, and this score orients the agent's behaviour towards a better state (equilibrium).

RATING
pleasure: body state
liking: context (from participation in a better state)
satisfaction: thought process
hope: expectation
praise/blame: events
etc.

You can even create emotions to regulate whatever you want the agent to seek. What is funny is to see how the system evolves: dynamic systems evolve towards one of four states, including stasis, catalysis, oscillation, or chaos. The drama comes when an agent can't meet the equilibrium of all its emotions (an optimal score), mostly when one goes up and another goes down. Put in many agents, and the game becomes very complex and unpredictable, but not uncontrollable.

NOTE ABOUT BUILDING DRAMATIC AI INTO THE MODEL
Story is well understood, but when we come to games, we forget everything from both story and games and get stuck in the false problems of representation, focusing on the what and how of things before knowing WHY. In a story there are roles, and roles are distributed around the goal of the story. Knowing this helps manage AI in a game, because we won't give the same resources to all agents; we won't need to, because they don't have the same importance. By focusing on some agents and knowing what roles they have, we can control the experience more finely and still leave room for freedom. By providing a structure, we enhance the experience and remove the mundane; better, we can understand what is happening in almost all its detail.

A story tells of a problem; the problem must meet a solution; the solution provides a goal; to achieve the goal, someone must take the role of pursuing it and be able to solve it.

In a story, every role (and thus every behaviour) is driven by the goal. Around this goal, characters have roles, and identifying these roles gives more control over the process; it's a basic of story writing.

Around the goal, then, we have:
Protagonist: those who seek the goal
Antagonist: those who prevent the goal from being reached
Guardian: those who ease the goal's being reached
Contagonist: those who slow the goal's being reached
Sidekick: those who support the goal (positive feedback)
Skeptic: those who doubt the goal (negative feedback)
Rational: those who calm things
Emotional: those who stress things

A role can be taken by any agent, and all roles must be taken to have a complete story structure, BUT agents can change roles.

Now, I think it's obvious that not all roles need the same AI. Some only need a few words to say to pass on information, while others need a fully functional and dynamic AI to meet their requirements (the protagonist and antagonist, for example). It also depends on the situation of the agent: the antagonist could be a dragon which protects the treasure, where a common monster script could do the job and the role is still met, while an angry farmer who prevents you from crossing a river would have to think a little more and adapt to the situation. The real point is that with roles, even low-end AI for agents could work if a top-level AI tells them what to do to meet their role (saving resources for only ONE expensive AI - and you could still do this with a current RTS AI, adapted to the dramatic structure).

more about story here: www.dramatica.com (check the theory book)

NOTE
I just realized that I don't have to give full activation to the perceived concepts; if they come with strengths, this could simulate degrees of perception of a scene. However, this turns scene appraisal into something done per agent, rather than a general description passed to them all. Hmm, it's more a question of what the design needs, then...

I don't believe that Cyc will work, because it will meet fossilization and is thus exposed to catastrophe; it lacks flexibility. It's merely a toy without a purpose.

EDIT:
I just noticed that you could create a principal dramatic emotion to pass to the agents in order to keep them in their roles; they would then seek the better state that meets this emotion's optimum.
