Quote:I could see something small (like accidentally killing the tax man of a small town) unraveling everything, leading to a complete ecological meltdown (baker takes over tax mans job, everyone goes hungry leading to attacks on other cities with food, leading to full scale civil war, ect...). Every change the hero does to correct the situation leading to even more problems. Something like out of a Groo comic.
That sounds hilarious, I'd love to play a game like that.
Although it does sound fun, I think the fun will be short lived.
In another thread (on making a trade game), I proposed an idea of an AI that would adapt to these kinds of changes.
The way the AI worked was that each NPC would look for opportunities to make a "profit".
Now, it doesn't need to be a profit in terms of money; indeed, the original AI didn't keep track of money at all and instead abstracted it to "Value" (which included reputation, money, friendship, etc. as part of the value too).
It worked by having several layers.
The first layer only concerned itself with direct value. It didn't consider long term situations or other factors. It would select an action based purely on whether or not that action would give a net profit (i.e. help the NPC). This wasn't a value of money, but a value of how much it would benefit the NPC (a complex factoring of many aspects, but it could be done with a needs based system like something in The Sims, so not just with money).
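A minimal sketch of this first layer in Python (the action names and value figures are my own, purely illustrative):

```python
def best_action(actions):
    """actions: dict mapping action name -> net value to this NPC."""
    # Only consider actions that are a net benefit (positive value).
    profitable = {a: v for a, v in actions.items() if v > 0}
    if not profitable:
        return None  # nothing worth doing right now
    # Pick the action with the highest net value.
    return max(profitable, key=profitable.get)

print(best_action({"fish": 3.0, "farm": 5.0, "rob": -2.0}))  # -> farm
```

The "value" here could itself be the output of a needs-based calculation, as the post suggests; the layer only cares about the final number per action.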
The next layer factored in the social network. It did this by looking at the other NPCs it interacted with and putting a weighting on the value of the actions it could take. It also factored in whether the action helped the NPC or hindered them (either * -1 for a hinder or * 1 for a help).
If the NPC was a friend (positive weighting) and it helped them (*1) then this would result in a positive weighting.
If the NPC was a friend (positive weighting) and it hindered them (*-1) then it would result in a negative weighting (a positive number multiplied by a negative number gives a negative).
If the NPC was an enemy (negative weighting) and it helped the NPC (*1), then it would result in a negative weighting (a negative number multiplied by 1 gives a negative).
If the NPC was an enemy (negative weighting) and it hindered them (*-1), then it would result in a positive weighting (a negative multiplied by a negative gives a positive).
So this system would give a weighting that would either increase the chance that the NPC would take a particular action or reduce it.
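The four cases above reduce to one multiplication, which a short sketch makes concrete (the weight values are invented for the example):

```python
def social_weight(relationship, effect):
    """relationship: positive for a friend, negative for an enemy.
    effect: +1 if the action helps the other NPC, -1 if it hinders them."""
    return relationship * effect

print(social_weight(0.8, +1))   # helping a friend   -> positive
print(social_weight(0.8, -1))   # hindering a friend -> negative
print(social_weight(-0.5, +1))  # helping an enemy   -> negative
print(social_weight(-0.5, -1))  # hindering an enemy -> positive
```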
This level could have several sub levels if you desire, with each sub level dealing with NPCs at increasing degrees of separation from the NPC making the decision. So the first sub level would deal with the relationships of the NPCs that this NPC directly knew, the next sub level would deal with the NPCs known by those NPCs, and so on. I would guess that you would need no more than 6 sub levels at the most (the old saying that you are no more than 6 degrees of separation from anyone else), and a more likely value would be only 2 sub levels.
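One simple way to implement those sub levels is to attenuate each relationship's contribution by its degree of separation and cut it off past a limit. The decay factor and cut-off here are assumptions, not from the original design:

```python
def separated_weight(relationship, degree, decay=0.5, max_degree=2):
    """relationship: friend/enemy weighting; degree: 1 = directly known,
    2 = friend of a friend, etc. Contributions beyond max_degree are ignored."""
    if degree > max_degree:
        return 0.0
    # Each extra degree of separation halves the influence (by assumption).
    return relationship * decay ** (degree - 1)

print(separated_weight(1.0, 1))  # direct friend      -> full weight
print(separated_weight(1.0, 2))  # friend of a friend -> half weight
print(separated_weight(1.0, 3))  # beyond the cut-off -> no weight
```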
The next level of the AI was slightly separated from the previous levels: those dealt only with a single AI, rather than groups of them.
In this next level, the AI would attempt to assign NPCs actions by giving them a weighting factor on the particular actions that were needed (making them more likely to choose those actions).
This exists as a Town AI plus a layer on each NPC AI that interprets these instructions (factoring in the weightings set by the Town AI).
Now, this Town AI is really just an implementation of the Trader AI set out above. The Town AI would also have these two levels: a personal level where it puts a weighting on actions that benefit it directly, and a social level where it puts weightings on actions based on the social ties between Town AIs.
The actions the Town AI takes would not necessarily be direct actions, but would instead be weightings for specific actions that the Trader AIs get passed.
This can then be extended to other organisational groupings (districts, counties, kingdoms, alliances, etc.); each grouping would run a version of the basic Trader AI and pass the weightings down to the next AI type in the hierarchy.
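The hierarchy can be sketched as each layer contributing a weighting dict that is merged on the way down. The action names and numbers here are illustrative:

```python
def combine_weightings(*layers):
    """Merge weighting dicts from e.g. kingdom -> county -> town -> NPC
    by summing the weighting each layer puts on an action."""
    combined = {}
    for layer in layers:
        for action, w in layer.items():
            combined[action] = combined.get(action, 0.0) + w
    return combined

town = {"become_tax_man": 2.0, "fish": 0.0}          # Town AI's weightings
npc_personal = {"fish": 1.5, "become_tax_man": -0.5}  # NPC's own weightings
print(combine_weightings(town, npc_personal))
```

The NPC still chooses its own action; the upper layers only tilt the odds.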
The beauty of this system is that the player can interact on any level (or all levels, or even switch between levels). It is also abstract enough that you can map most action types to the output of the system.
It is also an implementation of the principles that I described in my previous post. In this, the actions that can be taken are the "language" that is used for the "dialogue" between the game system and the player (or another AI).
It is also robust: if the player kills an NPC that is needed (say the Tax Man, which would be a Trader AI), then the Town AI would see that a Tax Man is needed and apply a weighting on certain AIs it controls to become a Tax Man.
If an AI is not producing much value (i.e. it is free), then it will likely become the Tax Man, whereas an AI that is essential (taking high value actions) would not be as likely to become the Tax Man. Even if it does, someone else will be able to take advantage of the opening that AI leaves and fill it (even if they are not directly under the control of the Town AI).
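A sketch of that role-replacement idea: the Town AI pushes the "freest" (least valuable) NPC toward the vacant role hardest, and essential NPCs only weakly. The value figures are invented for the example:

```python
def fill_vacancy(npcs, weighting=2.0):
    """npcs: dict of name -> current value produced.  Returns per-NPC
    weightings on the 'become the vacant role' action: the least valuable
    NPC gets the full weighting, others get a share scaled down by how
    much value they already produce."""
    least = min(npcs, key=npcs.get)
    return {name: (weighting if name == least
                   else weighting * npcs[least] / npcs[name])
            for name in npcs}

print(fill_vacancy({"baker": 8.0, "idler": 1.0}))
# the idler gets the strongest push toward becoming the Tax Man
```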
It is then possible to add in "internal" weightings for each level (and sub level). These would represent the personality of the NPC. So one NPC might like to fish, so she has an internal weighting for fishing actions which makes her more likely to fish.
Another might have a weighting that makes them prioritise social actions over personal ones (actions specified in the second level get bigger weightings), and this then changes the AI's behaviour.
You can make an Altruist AI by giving a higher weighting to actions that help people (this would be applied as a straight positive weighting on the social level after the factoring for friend or foe).
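Personality then becomes a per-NPC dict of extra weightings applied after the social factoring. A sketch, with invented action names and values:

```python
def apply_personality(social_weights, personality):
    """social_weights: action -> weight after friend/foe factoring.
    personality: action -> extra internal weighting (may be empty)."""
    return {a: w + personality.get(a, 0.0) for a, w in social_weights.items()}

weights = {"fish": 0.2, "help_neighbour": 0.5, "steal": -0.3}
angler = {"fish": 1.0}              # likes fishing
altruist = {"help_neighbour": 1.0}  # boosts actions that help people
print(apply_personality(weights, angler))
print(apply_personality(weights, altruist))
```

An altruist is just a personality whose boosts land on helpful actions, exactly as described above.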
So by manipulating the weightings (and adding in more factors for the weightings), this AI system is extremely flexible and robust, and it has the negative feedback mechanisms to create interesting behaviours.
It can also be used to generate quests, as one of the allowed actions could be for the AI to request the aid of another (such as the player) to complete a task. So if one AI wants to harm an enemy AI, it might post a quest for someone to steal something valuable from that AI, or even assassinate them; and if the AI wants to help them, the player might be given a standard "FedEx" type quest where they have to deliver something to the target NPC.
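A sketch of that quest-posting idea: the poster's weighting toward the target decides whether the posted quest harms or helps them. The quest wording and names are illustrative:

```python
def generate_quest(poster, target, relationship):
    """relationship: the poster's weighting toward the target NPC."""
    if relationship < 0:
        # Harm an enemy: post a theft quest against them.
        return f"{poster} posts: steal something valuable from {target}"
    # Help a friend: a standard delivery ("FedEx") quest.
    return f"{poster} posts: deliver a package to {target}"

print(generate_quest("Miller", "Tax Man", -0.7))
print(generate_quest("Miller", "Baker", 0.6))
```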
This brings me to the final aspect of this system.
Different actions not only have a weighting applied to them, they can also have a threshold of weighting that is required before that action can be chosen.
For example, the Assassinate action might require a high weighting (i.e. they really hate the guy) before the AI will consider it at all. This might be implemented as a straight negative value on the action, or as a hard threshold (or a combination of several methods).
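The hard-threshold variant is a one-line filter. The threshold values here are assumptions, not from the original design:

```python
THRESHOLDS = {"assassinate": 5.0, "steal": 2.0, "gossip": 0.0}

def eligible_actions(weights, thresholds=THRESHOLDS):
    """weights: action -> current weighting for this NPC.  An action is
    only eligible once its weighting clears that action's threshold."""
    return [a for a, w in weights.items() if w >= thresholds.get(a, 0.0)]

print(eligible_actions({"assassinate": 3.0, "steal": 2.5, "gossip": 0.1}))
# assassinate is blocked (3.0 < 5.0); steal and gossip pass
```

The "straight negative value" variant would instead subtract a per-action penalty before the normal highest-weight selection, so extreme actions only win when the hatred is large enough to overcome it.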