

Adaptive Virtual Game Worlds: Where to Begin?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.
90 replies to this topic

#81 Timkin   Members   -  Reputation: 864


Posted 07 April 2004 - 01:58 PM

quote:
Original post by irbrian
Incidentally, unless its a common usage, I wouldn't define probability as a range of values [0,1] as that really


Probabilities are most definitely described on the set [0,1]. Percentages and probabilities are NOT the same thing, since a percentage is just talking about a proportion of something. Any decent book on probability theory should be clear about this.

quote:
Original post by irbrian
causes some confusion with the whole Fuzzy Logic 0.0-1.0



Yes, it does for many people, which is why these people think it is appropriate to use Fuzzy Logic to describe uncertainty.

quote:

Can't we just use percentages for probability like the rest of the world


At least within the scientific community, the 'rest of the world' does not use percentages instead of probabilities.

quote:
Ugh, now you're trying to turn it back into Fuzzy Logic.



Sorry. I hadn't slept in a very long time when I read your post. I'm sure I simply misinterpreted what you wrote as an attempt to draw a distinction between FL and probability theory using that example. Sorry if it has confused the issue.

quote:

but I don't think even FL should allow two mutually exclusive conditions to co-exist.



Actually, that was the whole point of Fuzzy Logic. One common example used to teach FL is to ask an audience, "put up your hand if you are happy with your job", and then, "now put down your hand if you are unhappy with your job". Anyone with their hand still up is displaying Fuzzy Logic: they are both happy and unhappy with their job. Given only those two statements, it seems nonsensical to be both happy and unhappy about something. But clearly, hidden in that example is the possibility that they are not always happy and unhappy, but rather happy at some times and unhappy at others. The temporal aspect is withdrawn from the premises, allowing the apparently contradictory result.
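As a minimal sketch of that non-exclusive membership (the membership degrees here are invented purely for illustration):

```python
# Minimal sketch of fuzzy (non-exclusive) set membership, as in the
# happy/unhappy example above. The degrees are invented for illustration.
# In classical logic "happy" and "unhappy" would be mutually exclusive;
# in fuzzy logic one element can belong to both sets to some degree.

happy_with_job = 0.7    # degree of membership in "happy with job"
unhappy_with_job = 0.4  # degree of membership in "unhappy with job"

# Both memberships can be nonzero at once -- the "hand still up" case.
both = min(happy_with_job, unhappy_with_job)  # fuzzy AND (min t-norm)
print(both)  # 0.4: somewhat happy AND somewhat unhappy at the same time
```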

quote:

0-10% --- Invalid Range for Belief formed by AI
10-20% -- NPC Believes the proposition is FALSE
21-40% -- NPC Believes the proposition is "probably FALSE"
41-60% -- NPC Believes the proposition is EITHER True OR False -- NOT True AND False.
61-80% -- NPC Believes the proposition is "probably TRUE"
81-90% -- NPC Believes the proposition is TRUE
91-100% - Invalid Range for Belief formed by AI



But here you're trying to map a continuous variable to discrete outputs, which is what happens in the final step of Fuzzy Logic (and vice versa for the input). Why is it necessary to do this? If you're looking for a way of describing confidence in the probability of an event, you might want to use Dempster-Shafer theory. If you're simply trying to relate probabilities to linguistic statements of belief, then yes, what you've done above might be quite reasonable; however, it's also quite arbitrary. If you said event X was probably true, you would mean that the probability of the event is between 0.61 and 0.8, but someone else might take that to mean the probability of the event is between 0.75 and 0.90, because they use a different mapping function. How do we decide on an 'appropriate' mapping?
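Such a mapping is straightforward to write down. As a sketch, irbrian's (arbitrary) thresholds from the quoted table, converted to the [0,1] scale:

```python
def belief_label(p: float) -> str:
    """Map a probability in [0, 1] to a linguistic belief label,
    using irbrian's (arbitrary) thresholds from the quoted table."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probabilities live on [0, 1]")
    if p <= 0.10 or p > 0.90:
        return "invalid range for belief formed by AI"
    if p <= 0.20:
        return "believes FALSE"
    if p <= 0.40:
        return "believes probably FALSE"
    if p <= 0.60:
        return "believes either true or false"
    if p <= 0.80:
        return "believes probably TRUE"
    return "believes TRUE"

print(belief_label(0.7))  # believes probably TRUE
```

The arbitrariness Timkin points out lives entirely in the threshold constants: a different author would pick different cut points and get a different label for the same probability.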


Timkin


#82 Neoshaman   Members   -  Reputation: 170


Posted 07 April 2004 - 05:15 PM

Hello. How about a simpler system, a "neural frame"?

It has three layers:

1: all concepts are hand-coded into a net; each concept has hard-coded relations to the others, forming inherent beliefs (mostly class-relation types)

2: an appraisal system, which values the information

3: a thinking process that manages knowledge

Explanation:

All the concepts needed are encoded in advance, so the agent has no need to learn them: concepts like what an object is or the name of a person, frame-like knowledge, but not only that; actions and events are also coded into the network.

Now, once a piece of information is known, the agent builds links between the concepts that the information activates, and the appraisal system gives each link a strength.

For example:
"Job has robbed the car."
The concepts activated are CAR, JOB and ROB; a link is built and given a strength according to the importance of the information. The STATEMENT or FACT concept would also be activated, depending on whether someone told the agent or it was a direct observation.

Now if the information is given again, the strength increases, building a strong relation between these concepts. If someone gives contradictory information, this decreases the strength of some links and increases a new link with the NOT statement.

When the agent has to consider the fact, the concepts are reactivated, and it can retrieve the relation by following the strength of the links.

Better: when considering something about Job, the JOB concept is activated and transmits part of its activation to associated concepts, which transmit their activation to their neighbours as well, until the activation falls to 0. The agent then follows the links that match the goal. Activating JOB also activates the "has robbed the car" link if the strength is sufficient; and if Job has robbed many objects, this activates the concept THIEF, because multiple ROB associations have raised the activation of the THIEF concept. Just as a neural network has an activation function, a concept is activated only above a certain strength.
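The spreading-activation idea described above can be sketched as a small weighted graph walk. Concept names, link weights and the firing threshold here are invented for illustration:

```python
# Sketch of the spreading-activation scheme described above. The
# concepts, link weights and firing threshold are invented examples.
links = {
    "job":   {"rob": 0.8},
    "rob":   {"car": 0.7, "thief": 0.5},
    "thief": {},
    "car":   {},
}
THRESHOLD = 0.2  # a concept "fires" only above this strength

def spread(start: str, energy: float = 1.0) -> dict:
    """Propagate activation from one concept to its neighbours,
    weighted by link strength, until it falls below the threshold."""
    activation = {}
    frontier = [(start, energy)]
    while frontier:
        concept, e = frontier.pop()
        if e < THRESHOLD:
            continue  # too weak to fire; stop spreading along this path
        activation[concept] = max(activation.get(concept, 0.0), e)
        for neighbour, weight in links.get(concept, {}).items():
            frontier.append((neighbour, e * weight))  # weighted transmission
    return activation

print(spread("job"))
# activating JOB also activates ROB, CAR and (more weakly) THIEF
```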

Another side effect is analogy.

For example, a child says: "my car is yellow like a banana". The concept YELLOW is activated and has a strong relation with BANANA (from experience), so it automatically activates BANANA first, but tagged with the STATEMENT concept; the concept CAR is also activated, tagged with the FACT concept. CAR then has priority and BANANA is discarded as not relevant to the situation. However, because BANANA received an activation, the agent finds a relation between the two: the car is yellow (fact) LIKE a banana (statement). Now imagine what happens if a situation reminds an agent of another strong relation (the death of his father), and look at how that could affect his actions in some interesting ways!

The strength of this approach is that it's context-sensitive. The agent doesn't actually hold a fact as an object, but holds a TOPOLOGY; concepts could be shared among all agents. It's built around combinatorial and implicit knowledge, so memory is not really a problem, since the memory does not change with whether the agent knows something or not (it would only be a problem with respect to allocated memory, which would affect the flexibility of the agent).

The third layer is not yet really tested. It's about managing link strengths under certain conditions: the agent could sometimes evaluate its knowledge in order to detect flaws (contradictions, for example). When two relation types come into conflict, the agent makes inferences to resolve the problem in a more satisfying way. This is what the appraisal system is for; you can see appraisals as the "temperature" of a problem, or more simply as EMOTION. Different temperatures represent the priority of a problem: the agent would not make inferences about a case indefinitely, but only until the temperature falls to an acceptable state. The problem is choosing a good appraisal set; it's better to think of the brain as a system that seeks equilibrium. The sensitivity of the appraisals builds the personality of the agent, and a side effect of overfitted relations is that it produces stubbornness...

An emulation of the human brain, even at low resolution, does not have to be 100% rational and deal with problems perfectly, since we don't ourselves. That's the spice of life: because we hold a lot of inconsistencies, we create dramatic moments of conflict. If we were as perfect as we want AI to be, life would be all peace and love. The never-ending flow of life is driven by problems that we never really resolve, seeking only an optimal state of satisfaction rather than true understanding. That's the whole purpose of story: showing us struggling with our own imbalance with the world. Your AI would not be lesser if its purpose is creating story; simulate imperfection to create a perfect story.

I hope my English is not too ugly to read; sorry for my writing.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

#83 irbrian   Members   -  Reputation: 130


Posted 07 April 2004 - 05:26 PM

Wow... I'm lost again.

Maybe it's just late. I'll try reading this again tomorrow.

#84 Timkin   Members   -  Reputation: 864


Posted 12 April 2004 - 02:47 PM

Neoshaman,

I suspect that your system would suffer many of the same problems that large production systems suffer: the management and storage responsibilities of the database grow exponentially with the amount of information to be stored in it. If you want to look at other systems that try to do what you are suggesting, try Cyc, by Doug Lenat, as a starting point.

Cheers,

Timkin

#85 Neoshaman   Members   -  Reputation: 170


Posted 12 April 2004 - 04:14 PM

Have you got other examples?

Well, actually it doesn't seem like Cyc. Basically my AI is meant to be inaccurate and emotional rather than smart and rational; it was designed for the dramatic aspect and works in association with scripting. It's an embodied, feeling, intuitive AI, contextual to the game.

Everything revolves around emotion, and it's more like a heterogeneous neural net in which the stored concepts have been frozen than anything else. All I wanted was a clever, fast and simple system to handle memory; the memory only serves as a temporal context of experience, through emotion. From a given experience, past experience is used as context in the decision, which then activates the appropriate script (action).

For example, a character in a worried state would seek experiences that reduce the worry, and this outputs a goal to the decision system.


The key word is drama! And in drama, misunderstanding is a strong tool! I think if humans were rational we would have no stories to tell, and utopia would wisely pop up on Earth.

(Some research shows that the brain stores memories as concepts in clusters; still, an entire object is a pattern over many clusters, just as a frame has attributes, and activating one concept activates other concepts as well. There are also some hard-coded concepts in the mind: for example, our ability to read is derived from the capacity to recognize animal footprints during the hunting stage of humanity. All of this was discovered by studying local brain lesions, which create strange results. Conclusion: memories are both clustered and diffuse across the network.)

Actually this is a side effect of the emotional AI that I designed a year ago. I adapted the construal-engineering approach to game design and found a system that I had a hard time understanding (I wasn't into AI yet); now I call it a 'neural frame'.

It's more like Metacat (from Hofstadter and the FARG group) crossed with a neural network, but I'm still looking at the Metacat system, and at things like affordances, etc. (For example, the mind space is designed like the method given in the GameDev annotated-object thread, but instead of searching a spatial object space we search a concept space; the activated concepts are the perception field.) Does that make sense to you, TIMKIN?

However, I have not tested the rational top-level logic system that manages the whole; I used a reinforcement-like approach. (The system was never fully implemented, just small cases; I have to find a generic structure that would permit authors to hand-write concepts adapted to their game: hard-coded templates.) If I include it, it would be an irrational rationality, since it would be invoked solely for certain classes of problem solving, and only browse activated facts.

I would try to implement it better for my social sim, but I've just finished the first design stage of the binary DM and am going on to the second part, so I have to keep this for another time.

If you have good references to papers (or more accurate keywords) about your example or Cyc, please share; Google returns too much noise.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

[edited by - neoshaman on April 12, 2004 11:28:45 PM]

#86 Timkin   Members   -  Reputation: 864


Posted 12 April 2004 - 07:24 PM

quote:
Original post by Neoshaman
have you got other example??

well actually it doesn't seems like CYC



Perhaps what I wrote was misleading (sorry). I wasn't suggesting that your idea and Cyc were the same, but rather that Cyc was an attempt to store lots and lots of relational information about the real world... and that both your idea and Cyc would suffer many of the same problems. Thus, looking at these issues with regard to Cyc might give you insight into how you could handle them in your system.

Timkin

#87 Neoshaman   Members   -  Reputation: 170


Posted 13 April 2004 - 03:09 AM

quote:
Original post by Timkin
quote:
Original post by Neoshaman
have you got other example??

well actually it doesn't seems like CYC



Perhaps what I wrote was misleading (sorry). I wasn't suggesting that your idea and Cyc were the same, but rather that Cyc was an attempt at trying to store lots and lots of relational information about the real world... and that both your idea and Cyc would suffer many of the same problems. Thus, looking at these issues with regards to Cyc might give you insight as to how you could handle them in your system.

Timkin


No need to be sorry; I think it's me who did not understand, and I still don't see the issue. Sorry, but could you explain more?
I'm looking at it and can't find the problem for now...

thanks

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

#88 Timkin   Members   -  Reputation: 864


Posted 13 April 2004 - 02:30 PM

quote:
Original post by Neoshaman
i'm looking at and can't find the problem for now...



It's a question of the size of your database and the complexity of adding and retrieving information efficiently. Suppose your database has 10 items in it. The least number of links you could have is zero, because everything in it is unrelated; this isn't very likely. The worst case is that every item is related to every other item: that means 10^2 = 100 links, so you're storing 10 items and 100 links. That doesn't sound like much, but what if you have 10,000 items? In the worst-case scenario you're then storing 10,000^2 = 10^8 links. Now ask yourself how much computation is required to extract information from a single query to the database.

Certainly, you are unlikely to have a worst-case scenario where every item is related to every other item. But we might well expect every item to be related to, say, 10 other items. For 10,000 items in your database, that's still 100,000 links.

Do you see now that storage is going to become a problem very quickly?
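Timkin's arithmetic is easy to verify directly (the worst case here rounds "every item related to every other" to n^2, as the post does):

```python
# Quick check of the link-count arithmetic above. The worst case
# rounds "every item related to every other" to n^2, as in the post.
def worst_case_links(n_items: int) -> int:
    return n_items ** 2

def average_case_links(n_items: int, links_per_item: int = 10) -> int:
    return n_items * links_per_item

print(worst_case_links(10))        # 100
print(worst_case_links(10_000))    # 100000000, i.e. 10^8
print(average_case_links(10_000))  # 100000
```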

Cheers,

Timkin

#89 irbrian   Members   -  Reputation: 130


Posted 14 April 2004 - 06:18 AM

Certainly it's good to keep general technical issues in mind, and I also realize that some may be thinking about this in terms of near-future projects, so there's no problem with pointing out practical limitations.

However, I'd just like to restate my original intention that this discussion be more or less tech-free... that is, assuming that computational requirements were no barrier. The original point was to focus on the AI theories and practices that might someday lead to implementation.

Carry on.

#90 Neoshaman   Members   -  Reputation: 170


Posted 15 April 2004 - 08:39 PM

quote:
Original post by Timkin
quote:
Original post by Neoshaman
i'm looking at and can't find the problem for now...



It's a question of the size of your database and the complexity of adding and retrieving information efficiently.


Well, at least this part is exactly what emotion and activation are for. By having an "energy" budget, we limit knowledge to the context needed and bring cost and resources into the equation.

There is a perception space (which could be shared by many agents; for example, the game is cut into "scenes"). This perception activates the corresponding perceptual concepts in the "brain" and leaves out anything that is not perceived (this is similar to directly mapping the external onto the internal, and in my engine it is the same, since a game deals with abstract entities).

Once those concepts are activated in each brain, they transmit their activation to the surrounding concepts (those linked to them), either by inhibition or activation, BUT this activation is weighted. A neural frame is just like a neuron: it has inputs, an activation function and outputs. Unlike a neuron, frames are not anonymous, and some links are hand-wired, like a frame's slots. Each neural frame has a degree of activation depending on the weights, and fires only if the activation threshold is met. This prevents every concept from being recalled: it eliminates concepts that are inhibited and leaves only the concepts tied to the context. This is context-sensitive; experience builds a temporal context. Mood can change the activation threshold, generally to change perception, but does not change the weights of links.

ARCHITECTURE
Here is the schema. There are two concepts, A and B:

A B
With no link, there is virtually an infinite distance between the two.

A>>>>>>.001>>>>>B
Now there is a relation, but it's weak: A is far from B, and the amount of energy that crosses the path is small and may not be sufficient to activate B.

A>>>>>>1.>>>>>>>B
The relation is maximal: if A is activated, B certainly will be.

A>>>>>>.04>>>>>>B
Now A may activate B.

Directly perceived concepts are marked as facts (perceived) and have the strongest activation.

While considering elements, the agent first follows the links it has an interest in, and then follows those with the strongest activation (priority). For example, if the amount of resources dedicated to an action is low, it cuts the less-prioritized links first and only considers the high-priority ones.

You can model this as a 3D model of concepts: XY is the lateral relation between elements, and Z is the depth. Lateral relations model beliefs about concepts and their relations, while depth models the classification between them.

Of course there is time decay on links: those which are not stimulated fall below some strength, and we could add a limit on the number of links, erasing the weakest first, simulating forgetting.

Note that we could add some hidden concepts, with anonymous neurons that work like the blank tile in Scrabble, to add flexibility, or even a hidden layer to let the agent build its own "mind dialect", depending on the design of the game and its requirements.
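The reinforcement, decay and forgetting behaviour described above can be sketched in a few lines. The decay rate, minimum strength and link cap here are invented constants:

```python
# Sketch of weighted concept links with reinforcement, time decay and a
# forgetting limit, as described above. All constants are invented.
DECAY = 0.95         # per-tick multiplicative decay of unstimulated links
MIN_STRENGTH = 0.01  # links weaker than this are forgotten entirely
MAX_LINKS = 1000     # hard cap: the weakest links are erased first

class ConceptLinks:
    def __init__(self):
        self.strength = {}  # (concept_a, concept_b) -> link strength

    def reinforce(self, a, b, amount=0.1):
        """Repeated information strengthens a link (capped at 1.0)."""
        key = (a, b)
        self.strength[key] = min(1.0, self.strength.get(key, 0.0) + amount)

    def tick(self):
        """Unstimulated links decay; the weakest are forgotten."""
        for key in list(self.strength):
            self.strength[key] *= DECAY
            if self.strength[key] < MIN_STRENGTH:
                del self.strength[key]  # fell below threshold: forget it
        excess = len(self.strength) - MAX_LINKS
        if excess > 0:  # over the cap: erase the weakest links first
            for key, _ in sorted(self.strength.items(), key=lambda kv: kv[1])[:excess]:
                del self.strength[key]

m = ConceptLinks()
m.reinforce("job", "rob")
m.reinforce("job", "rob")  # told twice: the link gets stronger
m.tick()
print(m.strength[("job", "rob")])  # 0.2 * 0.95 = 0.19 (approximately)
```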

EMOTION
Emotions are also represented as concepts and activate their representations as well, but emotions are not present in the scene description; they belong to the internal state of the agent, making the agent aware of the impact of emotion within a context (self-awareness). Emotions activate the agent's reactions to a given context and regulate its behaviour and actions by giving it an understanding of what is at stake. They also work as retro-regulation: they score a particular state, and this score orients the agent's behaviour towards a better state (equilibrium).

RATING
pleasure: body state
liking: context (from participation in a better state)
satisfaction: thought process
hope: expectation
praise/blame: events
etc...

You can even create emotions to regulate whatever you want the agent to seek. What is fun is seeing how the system evolves: dynamic systems evolve towards one of four states, stasis, catalysis, oscillation or chaos. The drama comes when an agent can't meet the equilibrium of all its emotions (the optimal score), mostly when one goes up while another goes down. Put in many agents and the game becomes very complex and unpredictable, but not uncontrollable.

NOTE ABOUT BUILDING DRAMATIC AI INTO THE MODEL
Story is well known, but when we come to games we forget everything from both story and games and get stuck in the false problem of representation, focusing on the what and how of things before knowing WHY. In a story there are roles, and roles are distributed around the goal of the story. Knowing this helps to manage AI in a game, because we won't give the same resources to all agents; we won't need to, because they don't have the same importance. By focusing on some agents and knowing what role they have, we can control the experience more finely and still leave room for freedom; by providing a structure we enhance the experience and remove the mundane, and better, we can understand what is happening in almost every detail.

A story tells about a problem; the problem must meet a solution; the solution provides a goal; and to achieve the goal, someone must take the role of pursuing it and be able to solve it.

In a story, every role (and therefore every behaviour) is driven by the goal. Around this goal, characters have roles, and identifying these roles allows more control over the process; it's a basic of storywriting.

Around the goal, then, we have:
Protagonist: those who seek the goal
Antagonist: those who prevent the goal from being reached
Guardian: those who ease the goal being reached
Contagonist: those who slow the goal being reached
Sidekick: those who support the goal (positive feedback)
Skeptic: those who doubt the goal (negative feedback)
Rational: those who calm things down
Emotional: those who stress things

A role can be taken by any agent, and all roles must be taken to have a complete story structure, BUT agents can change roles.

Now I think it's obvious that not all roles need the same AI. Some only need a few words to say to pass information, while others need a fully functional, dynamic AI to meet their requirements (protagonist and antagonist, for example). It also depends on the situation of the agent: the antagonist could be a dragon protecting a treasure, where a common monster script does the job but the role is still met, while an angry farmer preventing you from crossing a river has to think a little more and adapt to the situation. The real point is that with roles, even low-end AI for agents could work if a top-level AI tells them what to do to meet their role (saving resources for ONE expensive AI; you could even do this with current RTS AI, adapted to a dramatic structure).

More about story here: www.dramatica.com (check the theory book).

NOTE

I just realized that I don't have to give full activation to the perceived concepts; if they come with strengths, this could simulate the degree of perception of a scene. However, this turns scene appraisal into something done per agent, rather than a general description passed to them all. Hmm, it's more a question of what the design needs, then...

I don't believe that Cyc will work, because it will fossilize and then be exposed to catastrophe; it lacks flexibility. It's merely a toy without purpose.

EDIT:
I just noticed that you could create a principal dramatic emotion to pass to the agents in order to keep them in their roles; they would seek the better state that meets this emotion's optimum.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

[edited by - neoshaman on April 16, 2004 3:43:31 AM]

#91 Jamaludin   Members   -  Reputation: 151


Posted 19 April 2004 - 03:16 AM

Oooh, friggin' fabulous, spanktastic idea you have here, dude. I can't see anything wrong with it, and it's a lovely thing for a game: it would make dynamic quests that arise because of behaviours and the situations that come out of those behaviours. Neat!

I have the same idea, but my world has no NPCs except for monsters, animals and such; everything humanoid is a player. These creatures would have a similar life that can create situations, like them nesting in a valuable resource that needs to be taken, but all random and such. Cool, good luck!



