Is this agent-based "political adventure" game possible?

13 comments, last by s.Mason 11 years, 9 months ago

I'm very inexperienced when it comes to design, so please forgive me if some of my questions sound stupid - but how do I decide what qualifies for the outline, as something to decide on before "getting into (proper) design"?

Sorry, that was rather vague and simplistic of me - what I meant is to define all the types of events/systems that exist in the game. E.g. if there are bandits that attack, then there has to be some sort of combat/ambush/hostage system or something; even if it is just a simple mechanism, it has to be defined what exactly this entails, and this reveals more questions: how will it be decided when the player is ambushed? Where can this occur? What types of variables will be needed to facilitate this? So what can seem like a small incidental feature during the early design phase can be revealed as quite a large amount of work later on if it was not well defined during that design phase.
And I'm sure you have thought a lot about most of the details but just didn't write too much in the post, so my apologies if I'm stating the obvious here! xD


When I talked about realism, I was referring more to things like: How many people live in the cities? What are the ratios of the various professions? How many villages are there for each city, and how much do they pay in taxes? How are governors distributed among the settlements? Who trades what with whom, and how much? Some of these details may only live in the background of the game, in the form of area layouts, NPC appearances and so forth; but by aiming to make them realistic I hope to enhance the player's willingness to "buy into" the setting (as well as satisfying my own pedantic tendencies).

IMO the player will buy in if it's fun for a lot longer than if it's just trying to be realistic. I personally wouldn't be so concerned about population demographics and economic distributions unless I could find a direct and important way to make them impact the gameplay. This point is, I acknowledge, just my opinion, and I don't know the full extent of your idea - but for me, realism won't improve the game in a big way (it really does sound amazing to have these realistic large cities, I know). You began with a vision for the game: to put the player into interesting political and social situations/conflicts and challenge them to win trade/allies/prestige/etc. - and that's what I would stick with. I'm not sure how realistic cities would enhance this, and it could eat into valuable processing time. From my point of view, it looks as if having a select few types of NPCs that are dynamic agents (e.g. royals, consorts, diplomats, generals, merchants, nobles), and not worrying about the rest of the population, would work well - the rest could satisfactorily be left as fairly one-dimensional, randomly generated agents appearing in towns whenever the player visits them. The benefit of this in my mind is twofold: firstly, the player has a lot less to think and worry about in the game - he's concerned with individuals of status; and secondly, the complexity of the game is not increased exponentially and without much impact on the game. This may be a pertinent question: do you feel more satisfied making a game, or a historical simulator?


3. Minor aside point - the game is set in the Bronze Age - awesome - is the player trading at all in this game?

Trade will absolutely play a part in the game - for example, the player will end up negotiating trade agreements, that sort of thing. I don't know yet how much trading the player himself will be doing: he gets his social status from being related to a king, who presumably got to his position through trade or conquest - or is descended from someone who did. These days the royal family will get its wealth from collecting taxes, tributes and trade with neighbouring kingdoms, all of which (at least on paper, so to speak) pertain to the property of the king himself. The player will have access to some of that wealth as an allowance, as well as his wardrobe etc. - and might occasionally need to barter for particular items, but I don't see him taking on the role of a full-time merchant.

Interesting! Does the player have an inventory for carrying items then if they need to barter? And if so, why is item collecting/trading a necessarily limited part of the game?
Some aspects of the state might lie on a sliding scale, such as the happiness of various factions of the populace. Each agent assigns a level of desirability for each possible state, and also a level of belief that a given state is actually true (since there are many things the agent cannot directly know for certain). Finally, the other agents' goals and beliefs themselves form part of the state - so, for example, A might somewhat believe that B strongly wants C to declare war on D. Infinite regress is not necessary - a few nested levels of belief with a goal or "plain" state at the end is generally sufficient.
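The nested-belief idea above could be sketched roughly like this - a minimal, hypothetical structure (all names and numbers invented, not from the original design) where a "plain" goal or state sits at the bottom and belief nodes wrap it with a holder and a strength:

```python
from dataclasses import dataclass

# Hypothetical sketch of nested beliefs: a Goal (or plain state)
# at the bottom, wrapped by Belief nodes that record who holds
# the belief and how strongly.

@dataclass
class Goal:
    holder: str           # who wants it
    state: str            # e.g. "C declares war on D"
    desirability: float   # how strongly it is wanted, 0..1

@dataclass
class Belief:
    holder: str           # who holds this belief
    about: object         # a Goal, a plain state, or another Belief
    strength: float       # subjective probability, 0..1

# "A somewhat believes that B strongly wants C to declare war on D"
war_goal = Goal(holder="B", state="C declares war on D", desirability=0.9)
a_view = Belief(holder="A", about=war_goal, strength=0.4)
```

Because `about` can itself be another `Belief`, this nests to any depth, but as noted above a few levels ending in a goal or plain state is generally enough.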


Sounds a lot like utility-based agents; if you haven't already, investigate more on those: http://en.wikipedia....telligent_agent

I'm focused on the problem of giving the player better ways to explore the role of his character in my RPG. I have thought of introducing a class called "Landlord" that allows the player to do commerce. My idea is based on Sid Meier's Colonization, in which you define trade routes and assign transports to automatically carry goods between points. Well, there are many tycoon games that work like that. But I want to implement this into a 3D RPG. If you want to read more on my approach: http://www.gamedev.n...n-the-game-rpg/

My response to Bluefirehawk's comments leads naturally to a slightly more in-depth discussion of how agents work, so I've left them till last.


you need to consider the speed and potential combinatorial explosion


Quite true. That's why I only plan to simulate a few dozen agents in any level of detail, and as I work on implementation hopefully I'll find a way for each agent to choose only relevant beliefs to process when making decisions. Another point is that agents will be interacting quite infrequently, so they'll be able to spend many millions of operations on each individual decision. I think I can achieve an interesting level of emergent behaviour within that constraint.


The rest could be broken up by faction, e.g. which faction is the individual's personality closest to, then simulate them using that faction's motivations. Also, do you intend to only simulate the important people? If you want to simulate everybody... I'm afraid the unwashed masses would need to be simulated in a more "down is down" manner.


The faction idea is more or less what I'm planning on doing. Where possible I'll avoid simulating individual NPCs at all, instead trying to work out what sort of consensus a faction would come to. Only those characters who are important enough in their own right will get dedicated agents assigned to them. I'm also considering using a sort of "scaled-back" agent (with few beliefs and goals) to stand in for individual unimportant NPCs the player interacts with, such as merchants, artisans - everyday people on the streets. Such an agent would only need to be used for as long as the player was interacting with it, which in many cases would only be a few seconds.
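A minimal sketch of the faction-consensus plus "scaled-back agent" idea (all names, traits and numbers here are invented for illustration, not part of the actual design): the faction answers consensus queries for its members, and a throwaway agent created only for the duration of a player interaction just inherits that consensus with a little noise plus the one or two traits it actually needs.

```python
import random
from dataclasses import dataclass, field

# Hypothetical sketch: faction-level consensus, plus lightweight
# stand-in agents spawned only while the player interacts with an
# unimportant NPC such as a street merchant.

@dataclass
class Faction:
    name: str
    # shared goal weights the whole faction is assumed to hold
    goal_weights: dict = field(default_factory=dict)

    def consensus(self, proposal: str) -> float:
        """Approximate how much the faction as a whole favours a proposal."""
        return self.goal_weights.get(proposal, 0.0)

@dataclass
class ScaledBackAgent:
    faction: Faction
    haggling: float   # the one or two traits this NPC actually needs

    def attitude(self, proposal: str) -> float:
        # inherit the faction consensus, lightly perturbed per individual
        return self.faction.consensus(proposal) + random.uniform(-0.1, 0.1)

merchants = Faction("merchants", {"lower tariffs": 0.8, "declare war": 0.1})
npc = ScaledBackAgent(merchants, haggling=0.6)
```

The stand-in agent can be discarded as soon as the interaction ends, so its cost is bounded by the few seconds the player spends with it.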


What are the 7 actions that define your game? Cut it down brutally. For example, forget up/down/left/right, focus on the nuts and bolts. Perhaps "travel to location", "give tribute", "demand tribute", "negotiate", "agree", "disagree", "trade". Not great examples, but that's your job. It's not essential to be as minimal as possible, but until you do that you won't really know what the core of your game is.


Other than "travel to location" and possibly "take / give item", the important actions will all be forms of "give information", whether true or false. That information might be "I would like X", i.e. a request, where X could itself be information, an item or some other form of assistance; it could be "A has asked me to tell you Y", i.e. passing on a message; or it could be "I will (not) do this if you do that", i.e. an agreement, a refusal or a threat.
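That action taxonomy is small enough to write down directly - here's a hypothetical sketch (the enum names and the example payload are mine, not from the design) showing that beyond movement and item transfer, every action is some flavour of "give information":

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Sketch of the action taxonomy above: a handful of verbs, with
# "give information" subdivided into the three forms described.

class Verb(Enum):
    TRAVEL_TO = auto()
    GIVE_ITEM = auto()
    TAKE_ITEM = auto()
    GIVE_INFORMATION = auto()

class InfoKind(Enum):
    REQUEST = auto()      # "I would like X"
    RELAY = auto()        # "A has asked me to tell you Y"
    CONDITIONAL = auto()  # "I will (not) do this if you do that"

@dataclass
class Action:
    verb: Verb
    info_kind: Optional[InfoKind] = None   # only for GIVE_INFORMATION
    payload: str = ""

# e.g. a threat is just a CONDITIONAL give-information action:
threat = Action(Verb.GIVE_INFORMATION, InfoKind.CONDITIONAL,
                "I will withhold tribute if you raid our caravans")
```

Whether the information is true or false isn't part of the action itself - that lives in the speaker's beliefs, which is what makes lying possible.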


what I meant is to define all the types of events/systems that exist in the game


Fair enough - thanks for clearing that up. The truth is, other than general world-building, I’ve spent most of the time thinking about how the decision engine for the agents would work, at the expense of other aspects of the game - so I don’t have a clearly-defined list of features for gameplay yet (which is why I was a bit vague about trade, for example). I recognize that this is something I need to sort out before it will be possible to begin developing the game proper, but so far my thoughts have been towards prototyping the decision engine itself and seeing how plausible my ideas for it were.


IMO the player will buy in if it's fun for a lot longer than if it's just trying to be realistic.


Yeah, you’re right that it won’t make any difference to 99% of players. To be honest, this was more a matter of self-satisfaction because I came at the whole thing from a world-building point of view, and I like things to be coherent.


I'm not sure how realistic cities would enhance this, and it could eat into valuable processing time.


You make a fair point. I'll try to limit "realism-enhancing" features to the aesthetic side of things, at least until I have an idea of how more important things will use up resources. A realistic early Bronze Age city doesn't actually need to be that big, so we'll see.


From my point of view, it looks as if having a select few types of NPCs that are dynamic agents […]


I’m completely with you on this (see above comments on factions).


Interesting! Does the player have an inventory for carrying items then if they need to barter? And if so, why is item collecting/trading a necessarily limited part of the game?


I’m embarrassed to admit it, but I really haven’t thought this side of things through in enough detail to give you a good answer. I’m not saying collecting / trading is a novelty feature which will only happen once or twice, but I don’t feel like it should be too big a part of gameplay because it has the potential to change the flavour of the game quite a lot, and doesn’t feel like something a diplomat would spend most of his time doing. I’m being a bit vague and wordy, but it’s hard to convey ideas about “amount” when I don’t have that clear an idea myself yet.


Sounds a lot like utility-based agents; if you haven't already, investigate more on those: http://en.wikipedia....telligent_agent


Yes, that's the basic idea.

I'd had a look at your RPG idea already - it looks interesting, but I can't really think of any input at this stage beyond what people have already said. Good luck with it - I'll keep an eye on your thread, and post if I think of anything useful.

Now for Bluefirehawk’s comments - sorry, this is going to get a bit lengthy.


Sorry if I have sounded a bit aggressive in my last post, this wasn't my intention.


Not at all.


In your previous posts you wrote about belief differently


Sorry for the confusion - the level of belief that a particular outcome will occur is based on the levels of belief that various states are currently true. At this point I should give a more in-depth outline of how each agent works. While still not at the level of a complete model, hopefully it will clarify some issues (please note that I'm not particularly familiar with object-oriented programming and am not using any terminology in a formal sense):

The agent has three main categories of object: beliefs (a percentage assigned to each possible current state), goals (a desirability assigned to each potential future state) and planned actions (to change the state in the direction of something more desirable). When the agent witnesses an event or - more often - communicates with another agent, it updates those three types of object with each incoming piece of information. Its ability to update its own beliefs, goals and plans is based on its ability to emulate other agents' beliefs, goals and plans, which in turn is based on how accurate its beliefs about those other agents are. In effect, the agent will perform lots and lots of cost-benefit calculations, mostly on behalf of other agents - or rather, on what it believes to be the states of other agents.

Say agent A is told something by agent B. A then has to do the following:

1. Update any pre-existing beliefs regarding the subject B is talking about, and - if appropriate - create new belief objects pertaining to the new information; simultaneously, A needs to update its beliefs about all the agents which might have played a part in the message eventually arriving via B, including B itself. To do this, A will make use of its pre-existing beliefs: for example, how likely it thinks B would be a priori to lie about this particular thing, how likely another agent would be to lie to B about it and for B then to believe it and pass the message on - these things in turn are based on what A thinks various agents, including B, might stand to gain or lose by lying, i.e. what it thinks their goals are.

In practice, what A will do is perform Bayesian inference: it considers each possible current state (including other agents' beliefs, goals etc.) in turn, imagines that state was true and works out what the other agents would be most likely to have done in those circumstances; then it decides how closely each of the hypothetical outcomes matches up with what it's observed and with its prior beliefs to come up with new belief estimates for the relevant states.
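As a concrete toy example of that inference (all the numbers are invented for illustration): suppose A hears B assert some fact X. A weighs its prior belief in X against its model of how likely B would be to make that assertion in a world where X is true versus one where it is false - which is exactly where B's motives to lie enter the calculation:

```python
# Tiny numeric illustration (invented numbers) of the Bayesian
# update described above: A hears B assert X and revises its
# belief in X, taking B's possible motives into account.

prior_x = 0.30   # A's prior belief that X is true

# A's model of B: how likely B is to assert X in each world,
# given what A thinks B stands to gain by lying or truth-telling.
p_assert_given_x     = 0.90   # B usually reports true things it knows
p_assert_given_not_x = 0.25   # ...but might lie if X serves B's goals

# Bayes' rule: P(X | B asserts X)
evidence = p_assert_given_x * prior_x + p_assert_given_not_x * (1 - prior_x)
posterior_x = p_assert_given_x * prior_x / evidence

print(round(posterior_x, 3))   # belief in X rises from 0.30 to ~0.607
```

The same machinery scales up: to handle relayed messages, the likelihood terms themselves become little simulations of what the intermediate agents would have done in each hypothetical world.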

2. Update its goals. The desirability that A assigns to most potential future states will depend on what events it thinks will follow on from each of those states. That in turn depends on how it thinks other agents will act, which is determined by what it believes their current beliefs and goals to be; so once A has updated its beliefs about those things in stage 1, it needs to imagine what the other agents would be most likely to do in each of the hypothetical future scenarios, just as in stage 1. Then it can assign updated desirability levels to the future states based on updated estimates of the likely outcome of each state.

3. Update its plans. Having already worked out the desirability of the various possible future states in stage 2, A can now restrict itself to looking at those which it can immediately move to via its own actions (including talking). It can pretty much go ahead and pick the one with the highest desirability, but with the caveat that time is also a limited resource - the player might not spend hours travelling from one end of the kingdom to the other, but the characters still do. Ideally, agents' locations and the time it takes to travel between them should be part of the state used when calculating desirability in stage 2.
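The three stages can be sketched as a single update loop. This is only a skeleton with a toy agent concrete enough to run - the class, field names and numbers are all hypothetical stand-ins (stage 1 here just overwrites a belief where the real engine would do Bayesian revision, and stage 2's desirability scores are given directly rather than computed by simulating other agents):

```python
from dataclasses import dataclass, field

# Skeleton of the three-stage update: 1. revise beliefs,
# 2. re-score future states, 3. pick the best reachable plan,
# penalising states that cost a lot of travel time.

@dataclass
class ToyAgent:
    beliefs: dict = field(default_factory=dict)       # state -> probability
    desirability: dict = field(default_factory=dict)  # future state -> score
    travel_cost: dict = field(default_factory=dict)   # future state -> days
    plan: str = ""

    def handle_message(self, subject, new_belief):
        # Stage 1: belief revision (stub; a real agent would use Bayes).
        self.beliefs[subject] = new_belief
        # Stage 2: re-score futures (stub; scores are supplied directly).
        scores = dict(self.desirability)
        # Stage 3: pick the reachable future with the best score after
        # a simple per-day travel penalty.
        self.plan = max(scores,
                        key=lambda s: scores[s] - 0.1 * self.travel_cost.get(s, 0))

a = ToyAgent(
    desirability={"stay at court": 0.5, "visit ally": 0.9},
    travel_cost={"visit ally": 6},
)
a.handle_message("ally is loyal", 0.8)
print(a.plan)   # "visit ally" scores 0.9 - 0.6 = 0.3, so "stay at court" wins
```

The travel penalty illustrates the caveat above: a nominally more desirable state loses out once the days on the road are priced in.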

I'm currently at the stage of working out how all of this will work at a lower level.


Agent A goes to agent B and wants to know if he can trust agent B. […] For now you can also remove the option that B is trying to double cross A.


The thing is, in order for A to decide whether or not B is lying there has to be a possibility that he is. A needs to consider the possible states that could motivate B to lie, the possible states that could motivate him to tell the truth and then work out, on the basis of other prior beliefs, which one he thinks is actually happening. The situation is in a sense irreducible: I might be able to program in a simple scenario, with few possible states and correspondingly simple sets of beliefs and goals, but the decision engine which analyses those states will still have to have full functionality.


On the Dialog thing: you don't seem to be the guy that is happy with the easiest solution. I presume this is a time eater; you can put in as much time as you want, it is never finished and can always be better. Keep that in mind when you really want to implement this.


You’re certainly right about that. I’m an extreme perfectionist; it’s a habit I’m trying to break, but it’s not easy. I think I’ll be satisfied with anything that works, but I’ll be happier the more I can add to it.
This is definitely a math problem.
Agent A hears rumors of agent B deceiving agent A by talking to agent A's arch-enemy, agent Z. Agent A, who trusts agent C, gets agent C to confirm the rumors. Agent C, despising agent B and knowing the rumors are true, confirms to agent A that they are. Agent A's trust towards agent B decreases and agent A plans on taking action. Agent E (who was there when agent C gave agent A the information regarding agent B), who despises agent C and is more loyal to agent B rather than agent A, tells agent B of agent C's demise. Agent B, to remedy relations, immediately *lies* to agent A, saying that the agent he was seen talking to was simply agent E, not agent Z - it was a mix-up - and that agent C is the one making relations with agent Z and only wishes to create a distraction by making up rumors about agent B. Agent A is now doubtful of both agent C and agent B. Agent A then decides to ask agent E to confirm what agent B said. Agent E, being more loyal to agent B, decides to *lie* to cover up for agent B, hoping agent B's goals will be achieved. Since agent A trusts agent E, agent A makes up his mind to believe the perpetrator in this whole dilemma is agent C. And now, with demise and what not, agent A is taking out one of his own!
Oh damn, I'm just too good for this.
- political savage

