#### Archived

This topic is now archived and is closed to further replies.

# Adaptive Virtual Game Worlds: Where to Begin?


## Recommended Posts

quote:
Original post by Timkin
I'm curious as to the reference for this information. I work in the neurosciences and this doesn't sound like any accepted theory to me. It does, however, sound like a computer science view of neuronal networks from the 80s. Perhaps it's just because you've paraphrased it though, that it reads that way. If you have a specific reference, I'd be very interested to read it.
It's likely I'm misreading here, but your implication seems to be that I'm making this stuff up. Well, I don't remember the precise moment at which I claimed to be an expert on neuroscience.

What I wrote above was not merely a paraphrase; it was a paraphrasing of my interpretation of various things I've read and heard over many years, and I couldn't begin to vouch for the accuracy of any of it, let alone name the specific sources. Apparently I'm pretty far off.
quote:
There are competing theories for how memories are stored, at least from the literature I've read. One is that memories are stored by little follicles on certain cell bodies and these follicles affect the firing behaviour of the neuron under external stimulus. Another view is that memories are stored in synaptic sensitivity to inputs. The problem with this model is that it doesn't explain how one part of the brain (the same neuronal columns, for example) can store multiple memories (since they don't really store multiple synaptic sensitivities). Certainly, research into the olfactory system (which has been completely understood now for many years) tells us that smell memories are encoded in a manner that generates lower dimensional, stable attractors in the signal phase space. That is, the unstimulated state of neuronal loops in the olfactory system is a chaotic attractor. When these loops receive stimulus, the dynamics collapse to a simpler, stable, periodic or quasi-periodic behaviour, signalling a change in the attractor of the system. Unfortunately this model doesn't extend to all regions of the brain. For example, in the occipital lobes of cats, it has been shown that some information is actually encoded in the varying phase synchronisation of discrete neuronal columns (spatially distinct neuronal loops). Recent work on rat hippocampi has supported this view of information encoding. How this relates to memory, though, is not known at this time.
Sounds much too complicated to go about modeling for any game... maybe even for any AI simulation. Frankly, however inaccurate, I like my original interpretation better.
quote:
On the significance issue... I think that there are really two levels of significance of information: significance to us and our perception of significance to others. Significance to us is, I think, best measured by the emotional response that the information generates. Significance to others could be measured by how many other people mention the information to us, or ask us about it, or alternatively, how other people (and how many) responded to the information (emotionally speaking). Bill telling Mary that he finds the information significant should be different to Bill telling Mary that the whole town was in an uproar when they heard the information. This might even affect the significance of the information in Mary, depending on her empathy for others and her 'community spirit'.
I think this goes back to NPC Observation and forming of Belief relevant to the character's observation. Between the nature of the observation and the significance of the actual information to the character receiving the information, a new level of significance might be established.

[edited by - irbrian on April 2, 2004 5:35:45 AM]

##### Share on other sites
quote:
Original post by irbrian
It's likely I'm misreading here, but your implication seems to be that I'm making this stuff up.

That's certainly NOT what I'm saying. I was asking because in all likelihood you were talking about a model that I had not read about, or one that I had read about but could not identify from what you wrote. Since it's important in my job to keep up with current theory and practice, I asked for a reference. Nothing else was meant by that.

Timkin

##### Share on other sites
quote:
Applying this to the general idea of NPC memory recollection, NPCs would always keep their memories, but the less significant the memory and the less frequently the memory is accessed, the more the memory fades. Maybe the system could institute a formula that determines how significant a memory must be to be recalled efficiently after X amount of time, weighted differently for different NPCs? i.e. Significance Threshold Over Time...

On the topic of memory recall, I think we're going about it wrong. Memories should be stored in a LIFO structure: last in, first out. As you add memories, older memories get pushed to the back. Whenever you do something that needs to check memory, you start looking through memories from the front. How much time you allocate to that process determines how far back in memory you go, and how far back you can go can change, since sometimes we just try harder to remember something. Every time you access a memory successfully, you could bring it back to the front, so memories that you use constantly are always staying near the front.

Now how do you add significance to that? One thing I was thinking is that as you add memories, you take the significance of the new memory and push it down the list until it finds either an equal or lower-significance memory. This way, more significant memories don't get pushed down.

How exactly do we determine significance, though? There are a lot of different methods being thrown around, including emotional response to the memory, the number of times they've heard about it, the number of people that have told them, etc.

The number of times heard, or the number of people heard from, should really only keep that memory near the front longer. The emotional response to that memory is what should make it more significant. You can be told the same thing 20 times, and though it may stay near the front for a while (because of constantly being told), if you don't care, you'll forget it.

If you're at a party and two women tell you their phone numbers, one you like and one you don't, whose number are you going to remember?

The problem is, how do we determine the emotional significance? And what about memories tied to an emotional response that change? How do we decay the significance of a memory?
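For what it's worth, the scheme described above (a front-loaded list, significance-ordered insertion, move-to-front on successful recall) could be sketched roughly like this. The class names, the 0.0–1.0 significance scale, and the `effort` search budget are all illustrative assumptions, not a definitive design:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    description: str
    significance: float  # 0.0 (trivial) .. 1.0 (life-changing); an assumed scale

class MemoryStore:
    """Front of the list is searched first; recently used memories live there."""

    def __init__(self):
        self.memories = []

    def add(self, memory):
        # Push the new memory down the list until it finds an equal or
        # lower-significance memory, so significant memories aren't displaced.
        i = 0
        while i < len(self.memories) and self.memories[i].significance > memory.significance:
            i += 1
        self.memories.insert(i, memory)

    def recall(self, keyword, effort):
        # Only the first `effort` memories are searched; "trying harder to
        # remember" means allocating a larger effort budget.
        for i, m in enumerate(self.memories[:effort]):
            if keyword in m.description:
                # A successful recall moves the memory back to the front.
                self.memories.insert(0, self.memories.pop(i))
                return m
        return None
```

A memory that keeps being used stays cheap to find, while an unused one drifts out of the search window, which loosely matches the forgetting behaviour the post describes.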

##### Share on other sites
I really want this to move completely to the group, to make it easier to track, so I'm double-posting. Sorry to the few of you who are already in the group.

In response to all the talk about recall and memory significance: I think we simply need two values for that. They would be SigniAlpha and SigniBeta, both between 0.0 and 1.0.

In a newly created memory, SigniAlpha would be nearly 1.0 and SigniBeta would be nearly 0.0. Anytime a memory is accessed, both are raised, but SigniAlpha much more than SigniBeta. Over time, both values drop, but SigniAlpha more quickly than SigniBeta. When SigniBeta drops to 0, the memory is forgotten. Perhaps, there could also be a threshold for SigniBeta to reach where it will never drop below again (moving from short-term to long-term memory).

The recallability of a memory would be a product of SigniAlpha and SigniBeta, weighted mostly for SigniAlpha.

This would cause new memories to be forgotten quickly if they aren't thought about (apparently not important), as they wouldn't be accessed enough to keep SigniBeta above 0. After that value is raised, however, SigniAlpha can drop low enough to make the memory difficult to recall, yet the memory is still retained because of the value of SigniBeta.
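A rough sketch of the SigniAlpha/SigniBeta idea. The actual rise/decay rates, the long-term threshold of 0.5, and reading "product weighted mostly for SigniAlpha" as a weighted geometric mean are all assumptions on my part:

```python
class TwoFactorMemory:
    LONG_TERM_FLOOR = 0.5  # assumed: once SigniBeta reaches this, it never drops below it

    def __init__(self, description):
        self.description = description
        self.alpha = 0.95  # SigniAlpha: starts near 1.0, rises and falls quickly
        self.beta = 0.05   # SigniBeta: starts near 0.0, rises and falls slowly

    def access(self):
        # Accessing a memory raises both values, SigniAlpha much more than SigniBeta.
        self.alpha = min(1.0, self.alpha + 0.30)
        self.beta = min(1.0, self.beta + 0.05)

    def tick(self):
        # Called once per game day: both values decay, SigniAlpha faster.
        # round() keeps repeated float subtraction from leaving tiny residues.
        self.alpha = max(0.0, round(self.alpha - 0.10, 9))
        floor = self.LONG_TERM_FLOOR if self.beta >= self.LONG_TERM_FLOOR else 0.0
        self.beta = max(floor, round(self.beta - 0.01, 9))

    @property
    def forgotten(self):
        # When SigniBeta drops to 0, the memory is gone.
        return self.beta <= 0.0

    @property
    def recallability(self):
        # "A product of SigniAlpha and SigniBeta, weighted mostly for
        # SigniAlpha" -- interpreted here as a weighted geometric mean.
        return (self.alpha ** 0.8) * (self.beta ** 0.2)
```

With these numbers an untouched memory is forgotten in a few days, while one accessed often enough crosses the long-term floor and survives indefinitely, even though its recallability fades.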

I've been doing lots of my own work on the subject, mostly research and hypothesis, and this is one of the things I've been working with. I'm trying to mention my ideas carefully though, as I need to make sure they aren't brain-dead first :-) I'm also doing some demos of the ideas, which I'll post on the group.

##### Share on other sites
I can in no way support the move of this thread to a Yahoo group that is unrelated to GameDev. The contributions made in this thread belong here. Anyone is, of course, free to post wherever they like. Personally I'm not going to join a Yahoo group for the discussion of Game AI when I can do that here. Splintering into disassociated groups will, IMHO, only do damage to GameDev by removing important and interesting discussion from these forums.

Cheers,

Timkin

##### Share on other sites
I agree with Timkin. If you're gonna do that, at least make it possible for someone who is not registered to read the discussions... the fact that registration is required just to have a look at them is very annoying.

Anyway, great discussion =) lots of good ideas. I'd like to make a suggestion though. I found a nice way to model knowledge in a game; it's easy and very flexible, and I described it in this thread. I've got four versions of it. The first two are enough for almost anything you can think of and VERY lightweight for a game with thousands of NPCs; you can find them in that thread. I also have another one that I'm still prototyping (it's done, but I'm having trouble coding an editor that is intuitive enough =) ), and the fourth I believe should be able to model knowledge just like it is stored in the human brain (it's consistent with all the studies I read about), but it's a bit too complex for a game. OK, here's the thread; you probably already checked it out, but I thought it would be worth mentioning:

http://www.gamedev.net/community/forums/topic.asp?topic_id=211811

##### Share on other sites
I understand the desire to keep the discussion at GameDev, and don't get me wrong, I love GameDev. But a single forum thread is not a very good environment for an ongoing discussion like this, especially if any projects come out of it. The idea of the Yahoo group is to create an environment that can support something lasting that long.

Maybe if GameDev had hierarchical forum threads and an option to have threads mailed to you, it would be different. I, for one, prefer my email client to coming back here every day and trying to see what the last message was that I read, which ones are new, who is replying to what messages, etc.

I'll continue to double-post (aside from this one, obviously, where there's no need) until this thread dies down considerably, as it inevitably will. I do agree the group archive should be public; I think the members-only setting was just a default.

##### Share on other sites
Truthfully, the idea of implementing NPC memory in any type of linear fashion deeply bothers me. Humans don't think in a linear fashion, and while I realize that we're not talking about making artificial Humans here but game characters, I don't see how one can operate a truly convincing AI NPC without at least paying attention to how humans' minds operate on an abstract level.

I have relatively insignificant memories from as far back as my 3rd Christmas. Yet there are significant events I don't remember too well, if at all -- although those memories might resurface if I thought about it enough and had some help (like talking to family members about them).

I think NPCs should, at least in a general sense, have the same capability. It makes sense of course for more significant memories (i.e. my wife died) to be more easily accessible for longer periods than insignificant ones, but certainly any memory should be accessible to some degree given enough stimulation.

[Off Topic]
Regarding the Yahoo! group, I had a heck of a time trying to get access and I STILL haven't successfully been able to access the discussions there. Furthermore, I've always hated anything Yahoo! related for half a dozen reasons unrelated to this.

On the other hand, while I understand Timkin's position, I do think the topic would be best discussed in a more dynamic format. I'm all for a group of some kind, but I've officially and resolutely dismissed Yahoo! as the solution.

Oh, and I never really liked the tree-oriented post/reply structure. So between that and my general distaste for Yahoo, I think I'll keep talking about it here until someone comes with a better proposal.
[/Off Topic]

****************************************

Brian Lacy
ForeverDream Studios

"I create. Therefore I am."

[edited by - irbrian on April 4, 2004 9:47:08 PM]

##### Share on other sites
Yeah, I'm sure you could remember your third Christmas if you talked about it with family members. I'm also sure you'd be very wrong about most of the events. They probably would be, too.

I'm surprised this issue hasn't been raised before. What about remembering things incorrectly? When you have trouble remembering an old memory, rather than recalling more details, you just fill it in with things that seem right.

Should this be implemented in the kind of systems we have been discussing?

##### Share on other sites
Good point! Definitely an intriguing aspect to consider. I did actually think about this at one point. I wonder if it could be implemented naturally as a side effect of other concepts we've discussed, though. I.e., when an NPC stores a memory, he technically stores it in small chunks, and some of those chunks may be recalled more easily than others. If a chunk seems to be missing and the NPC Agent isn't able to recall the complete memory, the NPC either conveys the incomplete information as is, or treats the memory as an observation and tries to piece together what might have happened. Maybe the resulting belief will be correct, maybe not. Over time, these beliefs may even begin to replace the original beliefs, thus causing different NPCs to recall events with subtle variations.

So, it comes down to an NPC being able to take given information about which it has already made an observation or formed a belief, compare the data, and potentially draw additional conclusions.

Guess I'm talking pretty advanced stuff here -- NPCs drawing natural, logical conclusions off of incomplete information, including info drawn from their own "memories" -- but it would be cool, no?
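That chunked-recall idea could look something like this in miniature. The chunk structure, the 0.5 recall threshold, and the `fill_in` guesser are invented for illustration; the point is just that the guess overwrites the original, so later recalls repeat the confabulation:

```python
def store_event(facts, strengths):
    # Each chunk of an event gets its own recall strength (0.0..1.0).
    return [{"fact": f, "strength": s} for f, s in zip(facts, strengths)]

def recall_event(chunks, fill_in, threshold=0.5):
    """Recall the chunks still strong enough to retrieve; patch any gap with
    a plausible guess built from what was recalled so far."""
    recalled = []
    for chunk in chunks:
        if chunk["strength"] > threshold:
            recalled.append(chunk["fact"])
        else:
            guess = fill_in(recalled)   # may well be wrong
            chunk["fact"] = guess       # the belief replaces the memory
            chunk["strength"] = 0.8
            recalled.append(guess)
    return recalled
```

Two NPCs with different `fill_in` heuristics would drift apart in how they "remember" the same event, which is exactly the subtle-variation effect described above.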

****************************************

Brian Lacy
ForeverDream Studios

"I create. Therefore I am."

##### Share on other sites
Another thing that you need to do is combine similar memories. For example, Jack hears that Empire A has attacked a small village on the outskirts of Empire B. His memory now contains "Empire A attacked town of whatever in Empire B." The next day, Empire A makes another attack. And the next day. And the next. And pretty soon Jack's memory will contain "Empire A frequently attacks Empire B." Sure, if the village of Asdfgh was attacked, he would remember, and tell other people for the next few days. But only while this memory is in his short-term memories. When it enters his long-term memories, it joins with "Empire A frequently attacks Empire B." He can't keep track of every single little town that was attacked, after all.
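A minimal sketch of that merging behaviour. The threshold of three similar events and the flat (attacker, victim, town) tuples are assumptions; the idea is only that repeated specifics collapse into one general belief and the individual towns are dropped:

```python
from collections import Counter

class NpcMemory:
    GENERALIZE_AFTER = 3  # assumed: similar events needed before they merge

    def __init__(self):
        self.events = []       # specific short-term memories: (attacker, victim, town)
        self.beliefs = set()   # generalized long-term memories
        self.counts = Counter()

    def hear_attack(self, attacker, victim, town):
        self.counts[(attacker, victim)] += 1
        if self.counts[(attacker, victim)] >= self.GENERALIZE_AFTER:
            # Enough similar events: the specifics merge into one general
            # belief, and the individual towns are forgotten.
            self.beliefs.add(f"{attacker} frequently attacks {victim}")
            self.events = [e for e in self.events if (e[0], e[1]) != (attacker, victim)]
        else:
            self.events.append((attacker, victim, town))
```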

--------------------------------------
I am the master of stories.....
If only I could just write them down...

##### Share on other sites
quote:
Original post by irbrian
I.e., when an NPC stores a memory, he technically stores it in small chunks, and some of those chunks may be recalled more easily than others. If a chunk seems to be missing and the NPC Agent isn't able to recall the complete memory, the NPC either conveys the incomplete information as is, or treats the memory as an observation and tries to piece together what might have happened. Maybe the resulting belief will be correct, maybe not. Over time, these beliefs may even begin to replace the original beliefs, thus causing different NPCs to recall events with subtle variations.

You might be interested to know that accepted cognitive theory regarding memory holds that sensory information is not stored in memory as a faithful reproduction of the original information. Indeed, experiments have shown that memories display reordering, reconstruction and condensation of information: the subjects were interpreting the original material so that it made sense upon recall. It is believed that recall involves a process whereby representations of past experiences are used as clues in reconstructing a model of the event. Different cognitive strategies, like comparison, inference, guesses and suppositions, are used to generate a consistent and coherent memory.

This partly explains why memory changes with age, since our methods of recall and our ability to utilise them change with our experiences. So, a memory of your 3rd christmas, recalled when you are at age 10, would probably be different to that same memory recalled at age 50.

As to whether you want to build such a model into an NPC? Well, it would be an extremely complex system, even for basic memories. It would also need to be individual, or tunable to each agent. That's a LOT of work and a lot of computational load required to train and utilise the system.

Cheers,

Timkin

##### Share on other sites
Actually, it wouldn't be that hard. You don't need to simulate a real brain, because even though some parts really need to be close to how a brain works, others can be sort of emulated. Especially in a game world, where information is not as varied as in the real world, some approximations will work just fine. Take the difficult topic of inference and logic: with a database of a few hundred logic rules (assuming that you have a structure that models knowledge correctly), you can achieve almost the same results if you just compare some pieces of knowledge against these rules. Like: Empire A attacked Town B. Town B is close to our town. So, we could be attacked by Empire A at any moment! With a powerful scripting engine this wouldn't be that hard to do IMHO =)
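Something like a hand-authored rule database with simple forward chaining might capture the idea. The fact-string format and the rule itself are invented for the example; each rule scans the known facts and may yield new conclusions:

```python
# Facts are flat strings like "attacked:EmpireA:TownB"; rules are functions.
def rule_nearby_attack(facts):
    for fact in facts:
        if fact.startswith("attacked:"):
            _, attacker, town = fact.split(":")
            if f"near_home:{town}" in facts:
                yield f"threat:{attacker}"

RULES = [rule_nearby_attack]

def infer(facts):
    # Forward chaining: keep applying every rule until nothing new appears.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in RULES:
            # Materialize conclusions before mutating the fact set.
            for conclusion in list(rule(facts)):
                if conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
    return facts
```

With a few hundred such rules the NPC never has to "come up with" the inference itself; it only has to match its knowledge against the database, which is the shortcut being proposed here.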

##### Share on other sites
quote:
Original post by Jotaf
With a powerful scripting engine this wouldn't be that hard to do IMHO =)

Actually, it's by no means a trivial feat. One of the problems with such systems is that they cannot deal with contradictory facts. Additionally, once something has been asserted as true, it's nigh impossible to draw the conclusion that it is false given new information. That is, you cannot retract things that you currently know. There are lots of other problems, but I won't go into them... any decent AI text dealing with First Order Logic in knowledge bases should give good coverage and an explanation of why such systems are no longer in use.

Furthermore, the example you just gave involves induction as well as deduction, which is computationally difficult, particularly in FOL systems.

Probability models using Bayesian calculus solve many of these problems, however such systems (such as Bayesian Belief Networks) are computationally expensive to operate.
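For reference, the core operation of the Bayesian approach is just an evidence update like the one below; the numbers are invented for the example:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """P(belief | evidence) from Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1.0 - prior))

# The NPC starts 20% sure John robbed the store. A witness report would be
# 90% likely if he did and 10% likely if he didn't:
posterior = bayes_update(0.2, 0.9, 0.1)   # roughly 0.69
```

A single update like this is cheap; the expense Timkin refers to comes from propagating such updates through a whole network of interdependent beliefs.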

Finally, I think you might be underestimating the complexity of the task of even approximating how memories are stored and retrieved in the human brain, even for a limited domain such as an artificial world.

Cheers,

Timkin

##### Share on other sites
quote:
Original post by Timkin
One of the problems with such systems is that they cannot deal with contradictory facts. Additionally, once something has been asserted as true, it's nigh impossible to draw the conclusion that it is false given new information. That is, you cannot retract things that you currently know.

Contradictory facts should be allowable, and should be the basis for questions. If the NPC believes that John robbed the store, and then stores a new belief that John DIDN'T rob the store, all other factors being equal, he now has a question: did John rob the store or not?

For this to work though, it seems to me that NPCs should rarely be given enough information to assert something as "True" (as in Universal Law). Even we humans cannot say that something is universally true (except under very specific circumstances of manifestation, which I will not get into here).

In fact, I would suggest that Universal Truths be "hard-wired" by the programmer, and the NPC should be incapable of forming beliefs about new Truths on his own. All NPC beliefs should be based on fuzzy logic -- "I believe that someone has robbed my store." Sure, it seems pretty obvious when the NPC walks into his store and finds things missing -- or, let's say, when the NPC personally witnesses someone break into his store and steal things. That would be a pretty strong belief. But ultimately this is all based on perception and perspective. While it is probably true, it could also theoretically be the case that the whole thing is an act, and the robber was only pulling a prank and will return the items at some point in the future. It's a long shot, sure, but the NPC cannot KNOW with absolute certainty that this is not the case, or for that matter that it is not an illusion or delusion.

This way, if an NPC ever is informed that something is false when he has already assumed it to be true, he will not "break." It would take a lot of evidence to convince him, of course.

quote:
Finally, I think you might be underestimating the complexity of the task of even approximating how memories are stored and retrieved in the human brain, even for a limited domain such as an artificial world.

Everything we've been discussing here is, I believe, monumentally difficult, especially considering the sheer amount of research that has gone into getting us to our current level of collective knowledge on AI. As I stated originally, I don't expect this whole thing to come into being in the very near future, and certainly not without vast efforts made by many highly knowledgeable individuals. Additionally, the amount of processing power that I think would be required for a system like this would even today be incredibly expensive.

My idea is to begin to model small parts of this system with one and two NPCs for experimentation.

****************************************

Brian Lacy
ForeverDream Studios

"I create. Therefore I am."

##### Share on other sites
Hmm... Timkin, you didn't get my point. I know it's really hard to have an AI like that. The example I gave would be very complex for an AI agent to come up with by itself, but that's on purpose. What I'm suggesting is that there's a database with lots of rules like this, and the NPC just has to access these rules and use them to make inferences and draw conclusions. Of course they wouldn't cover everything... but it would take a lot less effort than trying to get the same results with emergent behaviour, especially if you're using neural networks. It would also solve the robbery problem: "evidence A" -> "it's likely that [John robbed the store]"; "evidence B" -> "it's not likely that [John robbed the store]". So "John robbed the store" has two conflicting pieces of information attached to it; when the NPC wants to know whether it's true or not, it sees the contradiction and answers "I'm not sure".
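That conflicting-evidence behaviour is easy to sketch. The tie margin and the exact answer strings are arbitrary choices for the example:

```python
class BeliefBase:
    TIE_MARGIN = 0.5  # assumed: how close pro/con must be to count as a tie

    def __init__(self):
        self.evidence = {}  # proposition -> list of (supports, weight)

    def add_evidence(self, proposition, supports, weight=1.0):
        self.evidence.setdefault(proposition, []).append((supports, weight))

    def query(self, proposition):
        entries = self.evidence.get(proposition, [])
        if not entries:
            return "I don't know"
        pro = sum(w for s, w in entries if s)
        con = sum(w for s, w in entries if not s)
        if abs(pro - con) < self.TIE_MARGIN:
            return "I'm not sure"
        return "yes" if pro > con else "no"
```

Attaching evidence to propositions rather than asserting them outright is also what sidesteps the retraction problem Timkin raised: nothing is ever flatly "true", so new counter-evidence just shifts the balance.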

##### Share on other sites
I'm afraid I still don't understand your point. Could you give a more detailed example?

##### Share on other sites
quote:
Original post by Jotaf
Hmm... Timkin, you didn't get my point.

Actually, I did get your point. The sort of system you are describing has long been discarded in AI as insufficient for a knowledge base for an agent. Please go and look up production systems and first order logic, rule-based systems specifically.

irbrian, you used the term fuzzy logic in your last post. If you meant Fuzzy Logic (as in Fuzzy Set Theory) then this is an inappropriate tool for this task. Fuzzy Logic is an alternative to Aristotelian (First Order) Logic and deals with set membership. FL does not offer a representation of uncertainty. If, on the other hand, you simply meant fuzzy to mean uncertain, then a) you shouldn't really use the words fuzzy logic, because they mean something else; and b) you need a representation like Dempster-Shafer Theory or Probability Theory, both of which enable the representation of uncertainty in beliefs, the incorporation of evidence to update beliefs, the ability to hold paradoxical beliefs, etc.

Given the recent discussions, I think you really should take a look at NAG. It might be a good first project to create the argumentation and inference mechanism for the NPC.

Cheers,

Timkin

##### Share on other sites
Actually, Fuzzy Logic could work, if you had the certainty represented by a fuzzy variable: 1.0 meaning absolutely certain and 0.0 meaning not certain at all. But I do agree that fuzzy logic is not very useful for the task. I just wanted to point out that it COULD be used.

You know what? More and more, I want to get my hands on a copy of AI Game Programming Wisdom. I want to buy it, but I don't want to pay $50 for a book when I am living off of my parents' good will.

--------------------------------------
I am the master of stories.....
If only I could just write them down...

##### Share on other sites
quote:
Original post by Timkin
irbrian, you used the term fuzzy logic in your last post. If you meant Fuzzy Logic (as in Fuzzy Set Theory) then this is an inappropriate tool for this task. Fuzzy Logic is an alternative to Aristotelian (First Order) Logic and deals with set membership. FL does not offer a representation of uncertainty. If, on the other hand, you simply meant fuzzy to mean uncertain, then a) you shouldn't really use the words fuzzy logic, because they mean something else; and b) you need a representation like Dempster-Shafer Theory or Probability Theory, both of which enable the representation of uncertainty in beliefs, the incorporation of evidence to update beliefs, the ability to hold paradoxical beliefs, etc.

I have always heard (and read) the term Fuzzy Logic used simply to refer to logic that is not simply True or False. I understood this to be the basic definition of FL. If that is incorrect, then so be it... it just means I've been reading a whole lot of inaccurate material, including a rather official-looking textbook on Set Theory (the name of which I cannot recall but can locate if you care).
quote:
Given the recent discussions, I think you really should take a look at NAG. It might be a good first project to create the argumentation and inference mechanism for the NPC.

I have no idea what NAG is or where to find it. Google, here I come...!

##### Share on other sites
quote:
Original post by Nathaniel Hammen
Actually, Fuzzy Logic could work, if you had the certainty represented by a fuzzy variable: 1.0 meaning absolutely certain and 0.0 meaning not certain at all. But I do agree that fuzzy logic is not very useful for the task. I just wanted to point out that it COULD be used.

How is what you're describing as Fuzzy Logic any different from the way I was describing FL as logical uncertainty?
quote:
You know what? More and more, I want to get my hands on a copy of AI Game Programming Wisdom. I want to buy it, but I don't want to pay $50 for a book when I am living off of my parents' good will.

Step 1) Get a Job
Step 2) www.amazon.com

##### Share on other sites
What is this, my fourth post in a row? Sorry guys.
quote:
Original post by Timkin
Given the recent discussions, I think you really should take a look at NAG. It might be a good first project to create the argumentation and inference mechanism for the NPC.
As luck would have it, NAG just isn't a specific enough term for Google to give me any useful results. Can't imagine why. Can you give me more info? Or a URL, if that's not too much to ask?

##### Share on other sites
quote:
Original post by irbrian
I have always heard (and read) the term Fuzzy Logic used simply to refer to logic that is not simply True or False. I understood this to be the basic definition of FL. If that is incorrect, then so be it... it just means I've been reading a whole lot of inaccurate material, including a rather official-looking textbook on Set Theory (the name of which I cannot recall but can locate if you care).

Your interpretation might have been wrong.

##### Share on other sites
A lot of people misunderstand FL. Fuzzy logic is deterministic. The membership value (between 0 and 1) of an element to a fuzzy set represents the confidence of its membership to that set, not the probability.

My Website: ai-junkie.com | My Book: AI Techniques for Game Programming

##### Share on other sites
irbrian, FL is about logic... and you are correct, it's a logic in which there are values other than 0 and 1. But this is all set theory... As fup pointed out, the membership value is not a probability or likelihood of membership in a set; it's a degree as to how much the item belongs to that set. In Aristotelian logic, items belong to one set or another: statements are either true or false. In Fuzzy logic, statements can be partly true and partly false at the same time. This, however, has absolutely nothing to do with uncertainty in a statement. Fuzzy logic makes statements about things in the world. Uncertainty formalisms -- such as Bayesian probabilities -- make statements about what we believe to be true or false in the world. Consider an example statement: John is a thief. Uncertainty in this statement might be represented by saying that there is a 70% chance that the statement is true. Fuzzy logic, however, would say that either John is a Thief, John is not a Thief, or, to some degree, John is both a Thief and not a Thief.

Do you see the distinction?
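The distinction can be made concrete in a few lines. The 0.7 value is arbitrary; min and 1-x are the standard fuzzy conjunction and negation operators:

```python
# Uncertainty: a crisp statement whose truth we are unsure about.
p_john_is_thief = 0.7    # 70% chance the statement "John is a thief" is (fully) true

# Fuzzy membership: the statement itself holds only to a degree.
thief_membership = 0.7   # John belongs to the set 'thieves' to degree 0.7

def fuzzy_and(a, b):
    return min(a, b)     # standard fuzzy conjunction

def fuzzy_not(a):
    return 1.0 - a

# Under fuzzy logic, "John is a thief AND John is not a thief" is not a
# contradiction -- it holds to degree 0.3:
both = fuzzy_and(thief_membership, fuzzy_not(thief_membership))
```

The two 0.7s are numerically identical but mean different things: the first quantifies our belief about a binary fact, the second quantifies the degree to which the fact itself holds.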

As to Nathaniel's comment that you could use Fuzzy Logic to represent uncertainty... well, I could use a hammer to crack an egg, but would it be the right tool for the job? Fuzzy Logic suffers from the same problems as all other Truth Maintenance Systems, in that the Fuzzy Calculus doesn't produce the results we expect when performing inference.

As to NAG: Check out the websites of Ingrid Zuckerman and Kevin Korb, both from Monash University. You should be able to find sufficient information and pointers there to get you going.

Cheers,

Timkin