Adaptive Virtual Game Worlds: Where to Begin?



#41 dathui   Members   -  Reputation: 122


Posted 30 March 2004 - 08:45 PM

timkin's "Recallability" should be raised each time the NPC actually thinks of the memory actively (the amount raised being affected by the different SignificanceToXXXXXX variables), up to a certain point of course. This keeps certain memories from disappearing, like "I'm married to ", because he talks to his wife every day. "Recallability" should also lower slowly with each passing day, making memories harder to remember.

With the robbery timkin mentioned, talk would die down after a while; he would think of it less often, and after a while he would forget it completely.

Another thing about memories: association. If timkin's merchant thinks about his store, he might also remember that it was robbed a while ago, so each memory object should have a list of associated memories whose "Recallability" is raised by a smaller amount each time the memory is thought of. Each association link might also have a weight of some sort, which says how closely the two memories are related.
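The reinforcement, daily decay, and association-spreading scheme described above could be sketched roughly like this. A toy Python sketch: the class name, fields, constants, and the exact spreading rule are all my own invention, not an agreed design.

```python
class Memory:
    """One remembered event. All names and constants here are
    invented for illustration -- a sketch of the scheme, not a spec."""

    def __init__(self, description, significance):
        self.description = description
        self.significance = significance   # how much each active recall reinforces
        self.recallability = 1.0           # fresh memories start easy to recall
        self.associations = []             # (Memory, weight) links

    def associate(self, other, weight):
        self.associations.append((other, weight))

    def recall(self, cap=10.0):
        # Actively thinking of the memory reinforces it, up to a cap, so
        # everyday memories ("I'm married") never fade away entirely.
        self.recallability = min(cap, self.recallability + self.significance)
        # Spread a smaller boost to associated memories, scaled by link weight.
        for other, weight in self.associations:
            other.recallability = min(cap, other.recallability + self.significance * weight)

    def end_of_day(self, decay=0.1):
        # Slow daily decay makes unrehearsed memories harder to remember.
        self.recallability = max(0.0, self.recallability - decay)


store = Memory("my store", significance=0.5)
robbery = Memory("store was robbed", significance=1.0)
store.associate(robbery, weight=0.4)
store.recall()  # thinking about the store also refreshes the robbery memory
```

Thinking of the store boosts its own recallability by its full significance and the robbery's by the weighted fraction, which matches the "smaller amount" spreading idea.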


#42 ghost007   Members   -  Reputation: 139


Posted 31 March 2004 - 02:00 AM

This is an interesting problem. I’ve never given much thought to this type of scenario, but mostly respected how companies like Mythic and Blizzard tackle the bare bones of generally-acceptable AI today.

The first problem I would think about in such a dynamic universe is speed. There's no doubt that every NPC has to be active at all times, even when no active players are about. If your entire world consists of thousands of NPCs, you will need a powerhouse computer to meet the demands. It then comes down to managing NPCs with the fewest processing requirements possible.

The second problem is deciding how dynamic you're willing to go. It's equally possible to convince John not to steal and to earn a decent job instead, but this is getting a bit too complex. On the other hand, if John is unreasonable, and even responds aggressively to players that wish to ally with him (not assault him), then you are back to the basic fate system that any current game employs.

The best way to approach this is to ask yourself how dynamic you want to take this. If you want to limit it and effectively keep control over your own game, then it would be best to adopt an event-based system that is configured with many alternatives and can be continually updated by real people to provide new quests. If you want it to be purely dynamic and have no control over your own game, then an evolutionary algorithm would have to be used per NPC. NPCs would have a sense of individuality and make decisions on their own without any human intervention, but governed by a set of rules designated by people (to prevent purely chaotic outcomes). As far as I know, no algorithm exists today that can do that; otherwise we'd have computers that can talk to us and have their own free will =P

[edited by - ghost007 on March 31, 2004 9:02:05 AM]

#43 ironfroggy   Members   -  Reputation: 122


Posted 31 March 2004 - 04:57 AM

I suggest this entire thread be moved to the new Yahoo! Group, where many of the dangling issues are being addressed.

#44 irbrian   Members   -  Reputation: 130


Posted 31 March 2004 - 06:12 AM

I'm thoroughly enjoying participating in, and especially passively reading, this discussion. It's something I'm very interested in. But I actually just realized again that my initial questions were never conclusively answered.

What we need to decide on, I think, is the general technologies that will come into play. Are NNs applicable? What other types of AI technologies are going to be involved with this kind of system?

I think I'd like to start developing a relatively simple AI Agent that would be capable of doing a few of the things we've been talking about. Nothing fancy (relatively speaking!)... just a basic simulation to start working out how an Agent could form beliefs, store memories, etc. Of course I have no idea what technologies to study, nor, for that matter, where to begin at all!

Anyone know of a really good book for this sort of thing? Particularly, I'd like a book that assumes the user will be developing in C++ and is a mathematical moron. :D

#45 dathui   Members   -  Reputation: 122


Posted 31 March 2004 - 08:36 AM

quote:
Original post by irbrian
What we need to decide on, I think, is the general technologies that will come into play. Are NNs applicable? What other types of AI technologies are going to be involved with this kind of system?


Before we do that, I think we need to specify roughly how the AI should work; then we can choose an AI technology to match.

quote:
Original post by irbrian
Anyone know of a really good book for this sort of thing? Particularly, I'd like a book that assumes the user will be developing in C++ and is a mathematical moron. :D

There are quite a lot of books on the matter; look at GameDev's book list and pick one of the 5-star books or some such.


#46 irbrian   Members   -  Reputation: 130


Posted 31 March 2004 - 09:21 AM

quote:
Original post by dathui
Before we do that, I think we need to specify roughly how the AI should work; then we can choose an AI technology to match.
True. Let me rephrase the question then. Given popular technologies and methods, what options might be applicable? And are there any we can eliminate?
quote:
There are quite a lot of books on the matter; look at GameDev's book list and pick one of the 5-star books or some such.
I'll check out the list. But while there are hundreds of books on AI out there, I'm not just interested in books on AI. I'm interested in books that might relate to this topic specifically, and ideally within the guidelines I mentioned in my last post.

[edited by - irbrian on March 31, 2004 4:21:54 PM]

#47 Timkin   Members   -  Reputation: 864


Posted 31 March 2004 - 06:14 PM

Dathui:

I think it would be better to alter the significance of a memory (wrt to people and wrt to the community) over time and keep recallability a function of significance and NPC state. This seems more natural to me and means that insignificant information can still be recalled easily if the context allows it.
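Timkin's alternative — deriving recallability on the fly from significance and the NPC's current context instead of storing and decaying it — might be expressed as something like the following. A toy formula; the function name and the weighting are entirely my invention.

```python
def recallability(significance, context_match):
    """Hypothetical sketch: recall strength is computed when needed from a
    memory's current significance and how well the NPC's present context
    matches the memory (both in [0, 1]). Either a significant memory or a
    strong contextual cue is enough to bring the memory back, so even
    insignificant information can still be recalled if the context allows."""
    return max(significance, 0.9 * context_match)
```

With this, an insignificant memory (significance 0.1) still becomes highly recallable when a matching context cue appears, which is the behaviour the post argues for.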

Ghost007:

I don't think one needs to worry about hardware when discussing such ideas. One should work out the nuts and bolts of the methodology and then work out whether it is tractable with current technology. If it isn't, then either you approximate the technique, or you develop better hardware (or wait for it to be developed).

irbrian:

I've already mentioned some enabling technologies earlier. Essentially though, you need to clearly define the problems to be solved before you'll know what algorithmic methods are going to be applicable for solving them.

Cheers,

Timkin

#48 Thesolitas   Members   -  Reputation: 122


Posted 31 March 2004 - 07:39 PM

quote:
Original post by Timkin

I think it would be better to alter the significance of a memory (wrt to people and wrt to the community) over time and keep recallability a function of significance and NPC state. This seems more natural to me and means that insignificant information can still be recalled easily if the context allows it.


I definitely agree with this. Even though a memory may seem to be gone, in the right context or with the right stimuli that memory can be as vivid as the day it happened. As you suggested, the significance of a memory should determine how well we recall it. Another thing significance does is allow us to compare significant memories against current experiences. A good use of that is in solving problems in NLP, for example.

Think about a person showing up to the hospital and seeing a friend in the waiting room and having the friend immediately say "He's doing better". How could a computer understand that sentence? Who is he? By recalling significant memories that are related to that friend, the computer could remember that their friend's father got ill, and figure out who "he" is.

quote:
Original post by Timkin

I don't think one needs to worry about hardware when discussing such ideas. One should work out the nuts and bolts of the methodology and then work out whether it is tractable with current technology. If it isn't then either you approximate the technique, or you develop better hardware (or wait for it to be developed).


That's something I brought up in the Yahoo! Group we started. One of the common trends I see (in this thread also), is people trying to solve technical problems before solving the real problem. Pretend that you have unlimited resources, solve the problem, then figure out how to make it work with what you have. Chances are by the time you actually DO figure out the problem, your resources will have changed so dramatically that you have far more to work with.

quote:
Original post by Timkin

I've already mentioned some enabling technologies earlier. Essentially though, you need to clearly define the problems to be solved before you'll know what algorithmic methods are going to be applicable for solving them.


Another good point (I feel like I'm brown-nosing lol). It's way too early to be choosing technologies at this point; it's even too early to knock some off the list. I also don't believe this can be done (at least currently) with a single technology, so I would suggest you keep your options open.

One more thing: if anyone else would like to discuss this further, as others have mentioned we have started a Yahoo! Group. Nothing much is happening yet, as it's only a day old, but maybe we can get some conversations going there and start to kick around some more ideas.

[edited by - Thesolitas on April 1, 2004 2:48:51 AM]

#49 irbrian   Members   -  Reputation: 130


Posted 01 April 2004 - 04:53 PM

Off Topic -- Yahoo has THE MOST ANNOYING user account system I have EVER dealt with (and that's saying something). *Sigh* I'm still trying to get the signing up and verification done.

Alright, alright, I understand what you guys are saying about the technologies, and for the most part I agree. But I don't see anything wrong with wanting to understand the technologies that make up AI and start thinking about them within the context of dynamic, persistent NPC communities.

Besides, it seems to me it's a good idea to start with small, practical, stepping-stone projects... don't try to walk before you can crawl, they say. Or to put it another way, why start off aiming for the stars when you can learn so much about space travel by shooting for Mars first? (oops -- too political? )

Anyway. I agree with Timkin about basing memory recollection on significance, but there may be some validity in the memories also having some level of time-sensitive Recallability.

I had some interesting thoughts about this today. Admittedly I was studying up on NNs at the time. I know, don't worry about the technology -- but this is more about the way the human brain works than ANNs.

If I remember right, the human brain stores memories in clusters of neurons. The more significant the memory, the larger the cluster, and thus the memory is easy to recall because it takes up a significant area in the brain. Also, the more frequently the memory is accessed, the stronger the path to the individual neurons. As time passes though, if those neurons aren't accessed, the path to them begins to degrade and they become less accessible. They are still there, but I guess you could say the "threshold" has increased -- it takes more effort to access those memories.

Applying this to the general idea of NPC memory recollection, NPCs would always keep their memories, but the less significant the memory and the less frequently the memory is accessed, the more the memory fades. Maybe the system could institute a formula that determines how significant a memory must be to be recalled efficiently after X amount of time, weighted differently for different NPCs? i.e. Significance Threshold Over Time...

Come to think of it, it could be as simple as each memory having a significance rating that explicitly indicates the number of hours a memory can be recalled. Each NPC has a value relative to one hour to represent their ability to recall things -- a photographic memory might be 3.0 hours while a creature with no memory at all might have a 0.0 memory hour rating.

To stir things up, though, the human brain has a short-term memory and a long-term memory. Events are stored in short-term memory as they happen and then moved to long-term memory after a while. If one wanted to make things super complex, NPCs could have both, and have two memory hour values -- one for recalling short-term memories, say the last 1-24 hours, and one for recalling long-term memories. I dunno -- it probably doesn't even matter that much unless you wanted to go so far as to simulate NPCs with severe memory-related disabilities, which is pretty extreme for any game.
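The "memory hour" idea above could be as small as this. A hedged sketch: the function names, the 24-hour short-term window, and the ratings are illustrative assumptions, not a worked-out design.

```python
def can_recall(memory_hours, npc_rating, hours_elapsed):
    """A memory's significance is expressed directly as the number of hours
    it stays recallable, scaled by a per-NPC rating -- e.g. 3.0 for a
    photographic memory, 0.0 for a creature with no memory at all."""
    return hours_elapsed <= memory_hours * npc_rating


def can_recall_two_tier(memory_hours, short_rating, long_rating, hours_elapsed):
    """Variant for the short-term/long-term split: events within the last
    24 hours use the short-term rating, older ones the long-term rating."""
    rating = short_rating if hours_elapsed <= 24 else long_rating
    return hours_elapsed <= memory_hours * rating
```

An NPC with a 0.0 rating recalls nothing, a photographic NPC stretches every memory's lifetime threefold, and the two-tier variant lets the same NPC have very different recall for recent and distant events.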

****************************************

Brian Lacy
ForeverDream Studios

Comments? Questions? Curious?


"I create. Therefore I am."

#50 Timkin   Members   -  Reputation: 864


Posted 01 April 2004 - 07:43 PM

quote:
Original post by irbrian
If I remember right, the human brain stores memories in clusters of neurons. The more significant the memory, the larger the cluster, and thus the memory is easy to recall because it takes up a significant area in the brain. Also, the more frequently the memory is accessed, the stronger the path to the individual neurons. As time passes though, if those neurons aren't accessed the path to them begins to degrade and they become less accessible. They are still there, but I guess you could say the "threshold" has increased -- it takes more effort to access those memories.



I'm curious as to the reference for this information. I work in the neurosciences and this doesn't sound like any accepted theory to me. It does, however, sound like a computer science view of neuronal networks from the 80s. Perhaps it's just because you've paraphrased it though, that it reads that way. If you have a specific reference, I'd be very interested to read it.

There are competing theories for how memories are stored, at least from the literature I've read. One is that memories are stored by little follicles on certain cells and these follicles affect the firing behaviour of the neuron under external stimulus. Another view is that memories are stored in synaptic sensitivity to inputs. The problem with this model is that it doesn't explain how one part of the brain (the same neuronal columns for example) can store multiple memories (since they don't really store multiple synaptic sensitivities). Certainly, research into the olfactory system (which has been completely understood now for many years) tells us that smell memories are encoded in a manner that generates lower dimensional, stable attractors in the signal phase space. That is, the unstimulated state of neuronal loops in the olfactory system is a chaotic attractor. When these loops receive stimulus, the dynamics collapse to a simpler, stable, periodic or quasi-periodic behaviour, signalling a change in the attractor of the system. Unfortunately this model doesn't extend to all regions of the brain. For example, in the occipital lobes of cats, it has been shown that some information is actually encoded in the varying phase synchronisation of discrete neuronal columns (spatially distinct neuronal loops). Recent work on rat hippocampi has supported this view of information encoding. How this relates to memory though is not known at this time.

On the significance issue... I think that there are really two levels of significance of information: significance to us and our perception of significance to others. Significance to us is, I think, best measured by the emotional response that the information generates. Significance to others could be measured by how many other people mention the information to us, or ask us about it, or alternatively, how other people (and how many) responded to the information (emotionally speaking). Bill telling Mary that he finds the information significant should be different to Bill telling Mary that the whole town was in an uproar when they heard the information. This might even affect the significance of the information in Mary, depending on her empathy for others and her 'community spirit'.

Cheers,

Timkin


[edited by - Timkin on April 2, 2004 6:48:30 AM]

#51 irbrian   Members   -  Reputation: 130


Posted 01 April 2004 - 10:35 PM

quote:
Original post by Timkin
I'm curious as to the reference for this information. I work in the neurosciences and this doesn't sound like any accepted theory to me. It does, however, sound like a computer science view of neuronal networks from the 80s. Perhaps it's just because you've paraphrased it though, that it reads that way. If you have a specific reference, I'd be very interested to read it.
It's likely I'm misreading here, but your implication seems to be that I'm making this stuff up. Well, I don't remember the precise moment at which I claimed to be an expert on neuroscience.

What I wrote above was not merely paraphrase, it was a paraphrasing of my interpretation of various things I've read and heard over many years, and I couldn't begin to vouch for the accuracy of any of it, let alone the specific sources. Apparently I'm pretty far off.
quote:
There are competing theories for how memories are stored, at least from the literature I've read. One is that memories are stored by little follicles on certain cell bodies and these follicles affect the firing behaviour of the neuron under external stimulus. Another view is that memories are stored in synaptic sensitivity to inputs. The problem with this model is that it doesn't explain how one part of the brain (the same neuronal columns for example) can store multiple memories (since they don't really store multiple synaptic sensitivities). Certainly, research into the olfactory system (which has been completely understood now for many years) tells us that smell memories are encoded in a manner that generates lower dimensional, stable attractors in the signal phase space. That is, the unstimulated state of neuronal loops in the olfactory system is a chaotic attractor. When these loops receive stimulus, the dynamics collapse to a simpler, stable, periodic or quasi-periodic behaviour, signalling a change in the attractor of the system. Unfortunately this model doesn't extend to all regions of the brain. For example, in the occipital lobes of cats, it has been shown that some information is actually encoded in the varying phase synchronisation of discrete neuronal columns (spatially distinct neuronal loops). Recent work on rat hippocampi has supported this view of information encoding. How this relates to memory though is not known at this time.
Sounds much too complicated to go about modeling for any game... maybe even for any AI simulation. Frankly, however inaccurate, I like my original interpretation better.
quote:
On the significance issue... I think that there are really two levels of significance of information: significance to us and our perception of significance to others. Significance to us is, I think, best measured by the emotional response that the information generates. Significance to others could be measured by how many other people mention the information to us, or ask us about it, or alternatively, how other people (and how many) responded to the information (emotionally speaking). Bill telling Mary that he finds the information significant should be different to Bill telling Mary that the whole town was in an uproar when they heard the information. This might even affect the significance of the information in Mary, depending on her empathy for others and her 'community spirit'.
I think this goes back to NPC Observation and forming of Belief relevant to the character's observation. Between the nature of the observation and the significance of the actual information to the character receiving the information, a new level of significance might be established.

[edited by - irbrian on April 2, 2004 5:35:45 AM]

#52 Timkin   Members   -  Reputation: 864


Posted 01 April 2004 - 11:52 PM

quote:
Original post by irbrian
It's likely I'm misreading here, but your implication seems to be that I'm making this stuff up.



That's certainly NOT what I'm saying. I was asking because in all likelihood you were talking about a model that I had not read about, or one that I had read about but could not identify from what you wrote. Since it's important in my job to keep up with current theory and practice, I asked for a reference. Nothing else was meant by that.

Timkin



#53 Thesolitas   Members   -  Reputation: 122


Posted 02 April 2004 - 06:29 AM

quote:
Applying this to the general idea of NPC memory recollection, NPCs would always keep their memories, but the less significant the memory and the less frequently the memory is accessed, the more the memory fades. Maybe the system could institute a formula that determines how significant a memory must be to be recalled efficiently after X amount of time, weighted differently for different NPCs? i.e. Significance Threshold Over Time...


On the topic of memory recall, I think we're going about it wrong. Memories should be stored in a LIFO structure: last in, first out. As you add memories, older memories are pushed to the back. Whenever you do something that needs to check memory, you start looking through memories from the front. How long you allocate to that process determines how far back in memory you go. How far back you can go can change, since sometimes we just try harder to remember something. Every time you access a memory successfully, you could bring it back to the front, so memories that you use constantly are always staying near the front.

Now how do you add significance to that? One thing I was thinking is that as you add memories, you take the significance of the new memory and push it down the list until it finds either an equally or less significant memory. This way, more significant memories don't get pushed down.

How exactly do we determine significance, though? There are a lot of different methods being thrown around, including emotional response to the memory, the number of times they've heard about it, the number of people that have told them, etc.

The number of times heard / number of people heard from should really only keep that memory near the front longer. The emotional response to that memory is what should make it more significant. You can be told the same thing 20 times, and though it may stay near the front for a while (because of constantly being told), if you don't care, you'll forget it.

If you're at a party and two women tell you their phone numbers, one you like and one you don't, whose number are you going to remember?

The problem is, how do we determine the emotional significance? And what about memories tied to an emotional response that change? How do we decay the significance of a memory?
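The ordered-list model described above might be prototyped like this. A rough Python sketch of my reading of the post: the class name, the `effort` search depth, and the move-to-front rule are assumptions, not a settled design.

```python
class MemoryList:
    """Front of the list = most recallable. New memories enter near the
    front but are pushed down past strictly more significant ones."""

    def __init__(self):
        self.items = []  # list of (name, significance), index 0 is the front

    def add(self, name, significance):
        # Push the new memory down until it meets an equally or less
        # significant one, so significant memories are never displaced.
        i = 0
        while i < len(self.items) and self.items[i][1] > significance:
            i += 1
        self.items.insert(i, (name, significance))

    def recall(self, name, effort):
        # Search only as deep as 'effort' allows -- trying harder means a
        # larger effort. A successful recall moves the memory back to the
        # front, keeping constantly used memories accessible.
        for i, (n, _sig) in enumerate(self.items[:effort]):
            if n == name:
                self.items.insert(0, self.items.pop(i))
                return True
        return False
```

Adding memories of significance 0.5, 0.2, and 0.9 in that order yields a front-to-back order of 0.9, 0.5, 0.2, and a successful low-effort recall then promotes the hit to the front.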

#54 ironfroggy   Members   -  Reputation: 122


Posted 03 April 2004 - 12:36 PM

I really want this to move completely to the group, to make it easier to track, so I'm double-posting. Sorry to the few of you who are already in the group.

In response to all the talk about the recalling and memory significance. I think we need two values for that, simply. They would be SigniAlpha and SigniBeta, both between 0.0 and 1.0.

In a newly created memory, SigniAlpha would be nearly 1.0 and SigniBeta would be nearly 0.0. Anytime a memory is accessed, both are raised, but SigniAlpha much more than SigniBeta. Over time, both values drop, but SigniAlpha more quickly than SigniBeta. When SigniBeta drops to 0, the memory is forgotten. Perhaps, there could also be a threshold for SigniBeta to reach where it will never drop below again (moving from short-term to long-term memory).

The recallability of a memory would be a product of SigniAlpha and SigniBeta, weighted mostly for SigniAlpha.

This would cause new memories to be forgotten quickly if they aren't thought about (apparently not important), as they wouldn't be accessed enough to keep SigniBeta above 0. After that value is raised, however, SigniAlpha can drop low enough to make the memory difficult to recall, yet the memory is still retained because of the value of SigniBeta.

I've been doing lots of my own work on the subject, mostly research and hypothesis, and this is one of the things I've been working with. I'm trying to mention my ideas carefully though, as I need to make sure they aren't brain-dead first :-) I'm also doing some demos of the ideas, which I'll post on the group.

#55 Timkin   Members   -  Reputation: 864


Posted 04 April 2004 - 03:02 AM

I can in no way support the move of this thread to a Yahoo group that is unrelated to GameDev. The contributions made in this thread belong here. Anyone is, of course, free to post wherever they like. Personally I'm not going to join a Yahoo group for the discussion of Game AI when I can do that here. Splintering into disassociated groups will, IMHO, only do damage to GameDev by removing important and interesting discussion from these forums.

Cheers,

Timkin

#56 Jotaf   Members   -  Reputation: 280


Posted 04 April 2004 - 05:32 AM

I agree with Timkin. If you're gonna do that, make it possible for someone who is not registered to at least read the discussions... the fact that registration is required just to have a look at them is very annoying.

Anyways, great discussion =) lots of good ideas. I'd like to make a suggestion though. I found a nice way to model knowledge in a game; it's easy and very flexible. I described it in this thread: I've got 4 versions of it. The first 2 are enough for almost anything you can think of and VERY lightweight for a game with thousands of NPCs; you can find them in that thread. I also have another one that I'm still working on a prototype of (it's done, but I'm having trouble coding an editor that is intuitive enough =) ), and the 4th I believe should be able to model knowledge just like it is stored in the human brain (it's consistent with all the studies I read about), but it's a bit too complex for a game. Ok, here's the thread; you probably already checked it out, but I thought it would be worth mentioning:

http://www.gamedev.net/community/forums/topic.asp?topic_id=211811

#57 ironfroggy   Members   -  Reputation: 122


Posted 04 April 2004 - 07:16 AM

I understand the desire to keep the discussion at GameDev, and don't get me wrong, I love GameDev. But a single forum thread is not a very good environment for an ongoing discussion like this, especially if any projects come out of it. The idea of the Yahoo group is to create an environment that can support something lasting so long.

Maybe if GameDev had hierarchical forum threads and an option to be mailed the threads, it would be different. I, for one, prefer my email client to coming back here every day and trying to see what the last message was that I read, which ones are new, who is replying to what messages, etc.

I'll continue to double-post (aside from this, no need obviously) until this thread dies down considerably, as it inevitably will. I do agree the group archive should be public; I think the members-only setting was just a default.

#58 irbrian   Members   -  Reputation: 130


Posted 04 April 2004 - 02:45 PM

Truthfully, the idea of implementing NPC memory in any type of linear fashion deeply bothers me. Humans don't think in a linear fashion, and while I realize that we're not talking about making artificial Humans here but game characters, I don't see how one can operate a truly convincing AI NPC without at least paying attention to how humans' minds operate on an abstract level.

I have relatively insignificant memories from as far back as my 3rd Christmas. Yet there are significant events I don't remember too well, if at all -- although those memories might resurface if I thought about it enough and had some help (like talking to family members about them).

I think NPCs should, at least in a general sense, have the same capability. It makes sense of course for more significant memories (i.e. my wife died) to be more easily accessible for longer periods than insignificant ones, but certainly any memory should be accessible to some degree given enough stimulation.

[Off Topic]
Regarding the Yahoo! group, I had a heck of a time trying to get access and I STILL haven't successfully been able to access the discussions there. Furthermore, I've always hated anything Yahoo! related for half a dozen reasons unrelated to this.

On the other hand, while I understand Timkin's position, I do think the topic would be best discussed in a more dynamic format. I'm all for a group of some kind, but I've officially and resolutely dismissed Yahoo! as the solution.

Oh, and I never really liked the tree-oriented post/reply structure. So between that and my general distaste for Yahoo, I think I'll keep talking about it here until someone comes up with a better proposal.
[/Off Topic]

****************************************

Brian Lacy
ForeverDream Studios

Comments? Questions? Curious?


"I create. Therefore I am."

[edited by - irbrian on April 4, 2004 9:47:08 PM]

#59 ironfroggy   Members   -  Reputation: 122


Posted 04 April 2004 - 03:39 PM

Yeah, I'm sure you could remember your third Christmas if you talked about it with family members. I'm also sure you'd be very wrong about most of the events. They probably would be, too.

I'm surprised this issue hasn't been raised before. What about remembering things incorrectly? When you have trouble recalling an old memory, as much as you manage to remember more details, you also just fill in the gaps with things that seem right.

Should this be implemented in the kind of systems we have been discussing?

#60 irbrian   Members   -  Reputation: 130


Posted 04 April 2004 - 04:54 PM

Good point! Definitely an intriguing aspect to consider. I did actually think about this at one point. I wonder if it could be implemented naturally as a side effect of other concepts we've discussed, though. I.e., when an NPC stores a memory, he technically stores it in small chunks, and some of those chunks may be recalled more easily than others. If a chunk seems to be missing and the NPC Agent isn't able to recall the complete memory, the NPC either conveys the incomplete information as is, or treats the memory as an observation and tries to piece together what might have happened. Maybe the resulting belief will be correct, maybe not. Over time, these beliefs may even begin to replace the original beliefs, thus causing different NPCs to recall events with subtle variations.

So, it comes down to an NPC being able to take given information about which it has already made an observation or formed a belief, compare the data, and potentially draw additional conclusions.

Guess I'm talking pretty advanced stuff here -- NPCs drawing natural, logical conclusions from incomplete information, including info drawn from their own "memories" -- but it would be cool, no?

****************************************

Brian Lacy
ForeverDream Studios

Comments? Questions? Curious?


"I create. Therefore I am."



