
Member Since 23 Jan 2011
Offline Last Active Feb 17 2016 07:18 PM

#4950689 Grid pathfinding with a lot of entities

Posted by on 19 June 2012 - 01:34 PM

Actually, don't loop through the 'map' itself; rather, generate an array of all unobstructed map spaces and loop through that, updating both it and the map with each iteration.

#4950688 Grid pathfinding with a lot of entities

Posted by on 19 June 2012 - 01:30 PM

interesting problem!

I was thinking about this and how to fill in the 'walk map'. The algorithm I would try first is: loop through the map, placing an incremented number in any space that borders a non-zero space, and repeat until no zero spaces remain.

Entities that are surrounded by other entities (and thus can't move) are never even checked, and a space reachable from two or more other spaces is only dealt with once! This seems like perhaps the speediest way to build the move maps; no need for any pathfinding algorithm at all!
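A minimal sketch of that fill, assuming 0 marks an open space, -1 marks an obstructed one, and the starting spaces are seeded with 1. Keeping a frontier queue of just-numbered spaces (rather than rescanning the whole map each pass, as the follow-up post above suggests) makes it a plain breadth-first flood fill:

```python
from collections import deque

def build_walk_map(grid, seeds):
    """Flood-fill walk map: each open cell gets 1 + distance to the nearest seed.
    grid: 2D list, -1 = obstructed, 0 = open. seeds: list of (row, col)."""
    frontier = deque()
    for r, c in seeds:
        grid[r][c] = 1
        frontier.append((r, c))
    while frontier:                      # loop until no reachable zero spaces remain
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                grid[nr][nc] = grid[r][c] + 1   # numbered once, even if bordered twice
                frontier.append((nr, nc))
    return grid

grid = [[0, 0, 0],
        [-1, -1, 0],
        [0, 0, 0]]
walk = build_walk_map(grid, [(0, 0)])
```

Each space is enqueued at most once, so the whole map is filled in O(spaces) with no per-entity pathfinding.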

#4932956 Turn Based Strategy AI

Posted by on 19 April 2012 - 03:19 PM

Just wanted to say that I am following your progress with great interest and encourage you to continue developing your project, as well as to keep updating this thread. It's quite enjoyable to follow your trials and tribulations!

Thank You!

#4928891 Getting your AI opinions on what makes a good AI

Posted by on 06 April 2012 - 02:07 PM

Regarding triggers vs. threads... the two are not mutually exclusive. That said, constant polling of the environment is computationally expensive if done incorrectly but also yields some more subtle behavior than having things entirely trigger-based. There are pros and cons. You can also have "immediate action" triggers show up as high-priority decisions in the landscape (so to speak) so that they can't be ignored.

Actually, this is akin to my current thinking, and kind of coincides with my lame attempts at 'memory', or 'historical environment knowledge'. As an agent blunders through the environment, all manipulatable objects and their locations are stored along with a timestamp. The timestamp is used to degrade the knowledge's 'value' over time, so that a berry bush seen at X one hour ago has more 'value' than an otherwise equal (size, type, distance, etc.) berry bush recorded at an earlier time. When accessing this array, I divide the entries into two groups: Immediate Vicinity (line of sight) and Long Distance.

Now, as Goals are formulated, any objects required to achieve a particular Goal are stored, AND then the 'memory' is evaluated and the objects in memory have their values additionally adjusted upward in importance, becoming 'triggers'. Then, as movement occurs, any trigger that comes into view causes the AI to re-evaluate the Goal that object is a part of, possibly altering the current goal to incorporate manipulation of this trigger object into the plan. I refer to these as Dynamic Triggers. I like the idea of Dynamic Triggers, and the timestamps allow me to put an upper limit on the amount of memory required by flushing out 'old' info.

BUT, the implementation of this has been more memory- and computationally intensive than I would like, and seems kind of 'clunky' to me. I am still debugging and re-balancing values, which I fear has taken up the majority of the coding time.
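For what it's worth, a minimal sketch of the timestamp-decayed memory with dynamic triggers described above; the decay rate, forget threshold, and trigger boost are all invented numbers:

```python
import time

class AgentMemory:
    """Timestamped object memory whose value decays with age; goal-relevant
    entries get boosted into 'dynamic triggers'. Constants are illustrative."""
    DECAY_PER_SEC = 0.01   # value lost per second of knowledge age
    FORGET_BELOW = 0.05    # entries whose value drops below this are flushed
    TRIGGER_BOOST = 2.0    # importance multiplier for goal-relevant objects

    def __init__(self):
        self.entries = {}   # object_id -> (location, timestamp, base_value)
        self.triggers = set()

    def remember(self, object_id, location, base_value=1.0, now=None):
        stamp = time.time() if now is None else now
        self.entries[object_id] = (location, stamp, base_value)

    def value(self, object_id, now=None):
        now = time.time() if now is None else now
        location, stamp, base = self.entries[object_id]
        decayed = max(0.0, base - (now - stamp) * self.DECAY_PER_SEC)
        boost = self.TRIGGER_BOOST if object_id in self.triggers else 1.0
        return decayed * boost

    def mark_goal_objects(self, object_ids):
        self.triggers.update(object_ids)   # these become Dynamic Triggers

    def flush_old(self, now=None):
        self.entries = {k: v for k, v in self.entries.items()
                        if self.value(k, now) >= self.FORGET_BELOW}

mem = AgentMemory()
mem.remember("berry_bush_1", location=(10, 4), now=0.0)
mem.remember("berry_bush_2", location=(30, 7), now=50.0)
```

The `flush_old` pass is what gives the hard upper bound on memory: anything stale enough simply disappears.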

#4928554 Getting your AI opinions on what makes a good AI

Posted by on 05 April 2012 - 12:15 PM

Sounds like we (you & I) are trying to achieve basically the same objective: 'realistic' NPC actions determined by individual personality traits and dynamic environment variables. In exploring this thought, I have settled, so far, on a hierarchical task network based on a version of Maslow's hierarchy of needs (modified by personality traits) to determine a small list of 'goals', which are then GOAPed through an action/skill tree to come up with a plan (a series of specific actions to undertake to achieve a particular goal), which an agent/NPC then uses to figure out 'what do I do now'.
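The top of that pipeline (needs filtered through personality traits to yield a short goal list for the planner) could be sketched like this; all need names, urgencies, and trait multipliers here are invented:

```python
def select_goals(needs, traits, top_n=2):
    """needs: {need: urgency 0..1}; traits: {need: personality multiplier}.
    Returns the top_n most pressing needs as the goal list handed to GOAP."""
    scored = {need: urgency * traits.get(need, 1.0)
              for need, urgency in needs.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

# Maslow-ish needs for one agent, plus a personality that mutes two of them
needs = {"food": 0.7, "safety": 0.4, "social": 0.6, "esteem": 0.9}
traits = {"social": 0.5, "esteem": 0.3}   # a loner who cares little for status
goals = select_goals(needs, traits)
```

The point of the trait multipliers is that two agents with identical surroundings end up chasing different goals.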

Though this is all based on known AI routines and techniques, I have come up against several obstacles which really hinder 'realism'. First and foremost is that of 'agent memory' - having each agent retain and have access to their own perception of world objects/dangers/etc. This, by itself, greatly increases the amount of physical memory each agent consumes, as well as processing time to access/interpret the data. Without memory, agents will continually repeat mistakes (choosing to go through 'dangerous' areas, etc) in an unrealistic fashion. I have yet to solve these types of memory issues.

I also recommend Dave Mark's book, "Behavioral Mathematics in Game AI" which has helped me in a variety of ways, mostly to figure out alternate ways to use the personality traits through use of different math functions so that responses are more 'human' and less linear.
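As one example of the non-linear response idea, a raw 0..1 trait or stimulus can be run through a logistic curve so the reaction is S-shaped rather than linear; the steepness and midpoint values here are just illustrative:

```python
import math

def logistic_response(x, steepness=10.0, midpoint=0.5):
    """Map a 0..1 stimulus to a 0..1 response along an S-curve, not a line."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

# a linear agent rates a 0.6 threat only slightly worse than a 0.4 one;
# an S-curve agent flips sharply from 'calm' to 'alarmed' around the midpoint
low = logistic_response(0.4)
high = logistic_response(0.6)
```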

#4764459 Figuring out economic problems

Posted by on 25 January 2011 - 08:55 AM

Of course there might be different available jobs; hunting and cooking one rabbit might be less unpleasant than 4 hours of rabbit hunting, but more unpleasant than a 2% quota of a quick $1000 armed robbery.

Remember to include the potential for success and failure, and the ramifications of each. For example, what is the risk-reward ratio of the armed robbery? If the penalty is negligible, then give it a shot. If the penalty far outweighs the gain, then pause. But what is the potential for getting caught? If the penalty is death but you have a 0.001% chance of getting caught, it doesn't matter. Fun with math, folks!

I'm of course referring to expected value, which in the case of rabbit hunting can be easily modeled as a random variable (time needed to catch one rabbit) or two (time needed to find a rabbit, and time needed to kill it) but in the case of a robbery requires a rather complex weighted tree of outcomes and random variables (successful completion, losing the loot to run away, being caught, kinds of penalty if caught, loot obtained if not caught, amount of fine if fined, length of vacation if jailed, choice of prison if jailed...).
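That weighted tree of outcomes reduces to an expected-value sum over the branches; a sketch of the robbery-vs-rabbits comparison with entirely made-up probabilities and payoffs:

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) branches; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * payoff for p, payoff in outcomes)

# all probabilities and payoffs below are invented for illustration
robbery = expected_value([
    (0.90, 1000.0),    # clean getaway with the loot
    (0.08, 0.0),       # drop the loot and run
    (0.02, -5000.0),   # caught: fine / jail time expressed as a cost
])
rabbits = expected_value([(1.0, 40.0)])  # 4 hours of hunting, valued at $10/hr
```

A richer model would make some branches subtrees of their own (caught → fined vs. jailed → length of sentence), but each subtree still collapses to a single expected payoff for the comparison.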

When I have looked at this aspect with my limited knowledge of AI, my first instinct was to apply the following approach:

(1) Make a quick estimate based on known factors (distance to goal, amount of time to work it, distance back, etc.). Remember, the 'goal' might be 'build a shelter', one part of which will be gathering the necessary resources; but not just gathering them, also transporting them to the desired location where the shelter is to be built. So the 'distance back' figure needs to be included.

(2) Check a 'memory' table (I don't know how to do this efficiently yet; need to learn) of previous attempts at achieving each part of the goal.

Step #2 (check memory) was an attempt to estimate tasks that have a random time component and a random 'danger' component; for instance, hunting rabbits! If an agent goes out hunting rabbits and spends x time looking (it's variable), then y time actually trying to kill the thing, then another z time traveling back home, then next time he can use the previous total time as a weighted average for a better guesstimate. This method also allows agents to factor into a decision whether a task is too dangerous.

I wanted to accomplish two things here: (1) be able to roughly predict an outcome that has never been attempted, and (2) better predict an outcome that has been done before, with the prediction changing when outside influences affect the task. For instance, the first 10 times he goes out to hunt rabbits it takes a running average of 100 turns; but he (and possibly others) have now decimated the local rabbit population and it takes longer and longer to find rabbits, or dangerous entities have moved into the area and rabbit hunting becomes more risky as time wears on. So each agent needs a way to constantly adjust its predictions to reflect a dynamic environment and the actions of other agents (and non-agents: players!).
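The 'weighted average of previous attempts' idea maps naturally onto an exponential moving average, which also keeps adapting as the environment drifts; the smoothing factor here is an assumption:

```python
class TaskEstimate:
    """Predicts a task's duration from experience. Recent attempts weigh more,
    so the estimate tracks a changing world (e.g. dwindling rabbits). The
    smoothing factor alpha is an invented tuning knob."""
    def __init__(self, initial_guess, alpha=0.3):
        self.estimate = initial_guess   # rough prediction before any attempt
        self.alpha = alpha              # weight given to the newest observation

    def record(self, observed_turns):
        # exponential moving average: new = old + alpha * (observed - old)
        self.estimate += self.alpha * (observed_turns - self.estimate)
        return self.estimate

hunt = TaskEstimate(initial_guess=100.0)
for turns in (100, 105, 120, 140, 170):   # rabbits getting scarcer each trip
    hunt.record(turns)
```

This needs only one stored number per task per agent, which sidesteps most of the memory-size worry; a separate `TaskEstimate` for a 'danger' score would work the same way.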

An agent who has a value of $10 an hour will pick option 2, but if the agent has a value of $100 an hour, they will go for option 3.

The same would apply to your scenario of fishing, picking berries, and buying from the market, but in that case you'd also want to adjust the result by the agent's likes. I'd probably make likes and dislikes magnifiers in this case. If an agent likes fish (x2) and hates berries (x4), then he more or less has to be standing next to the berry bush and starving before he would choose them over going fishing.
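Those magnifiers could be applied directly to the work-time portion of each option's cost; a toy sketch where every number is invented:

```python
def option_cost(travel_time, work_time, dislike_multiplier=1.0):
    """Effective cost of one way to satisfy a need: disliked work 'feels' longer.
    A multiplier below 1.0 means the agent enjoys the work."""
    return travel_time + work_time * dislike_multiplier

# agent likes fishing (work feels half as long) and hates berry picking (x4)
fishing = option_cost(travel_time=10, work_time=30, dislike_multiplier=0.5)
berries = option_cost(travel_time=2, work_time=15, dislike_multiplier=4.0)
best = min(("fishing", fishing), ("berries", berries), key=lambda kv: kv[1])
```

Even though berry picking is objectively quicker here, the magnifiers make fishing the cheaper option in 'felt' time.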

The 'time value' thing is relatively easy to calculate EXCEPT for determining the 'unit' of value to use (this is, in essence, the money unit). I can randomly choose a unit of any commodity (berries, ore, wood, chairs, sandals, etc.), but I need to figure out a way to actually make a choice between one type of unit and another based on logical agent rationale; the same determination humans have made for what commodity to use as money: (1) it must not depreciate over time, (2) it must be easily divisible, (3) it must be easy to transport and carry, (4) it must be recognized as valuable by others, (5) and I forget the fifth quality of money that humans have looked for in the past... How do I program an agent to weigh these abstract qualities of each type of commodity (potential money)? I don't know yet; it's a problem that may just need to be solved by brute force (i.e., introduce GOLD and force all agents to calculate in those terms).

#4764037 Figuring out economic problems

Posted by on 24 January 2011 - 12:36 PM

Yes, it is a common misconception for the average person to assume that prices are 'determined' by costs, when actually it is backwards: prices determine costs. The reason a particular apple is priced at $1 is NOT because it costs $0.50 to pay the laborer to pick it and the owner wants to make a profit; rather, the laborer is paid $0.50 BECAUSE the apple is only worth $1 on the market. And really, it's not 'prices' as if they are set; it's 'valuations'.

Which brings us back around to what I see as a major flaw in my method for choosing which task to attempt in order to satisfy a need: if I base the calculations on 'labor type and time', then I have built into the system an economically fraudulent assumption, Marx's Labor Theory of Value, and that would definitely be the ruin of the whole world, or at least produce a world of 'ants' instead of simulating people to some capacity...

I am also toying with the idea of not forcing a monetary system and seeing if one springs forth; but the ability to 'spring forth' involves an awareness that I don't think I can code into these agents. Maybe, though, if I focus on just commodity valuations (i.e., 10 fish = 2 ore, 20 wheat = 10 fish, 10 wheat = 1 ore, etc.), then perhaps a certain commodity will emerge as 'the easiest' in each locality to serve as a money base: fish = money in fish town, ore = money in ore town... or perhaps it ends up reversed, with ore = money in fish town and fish = money in ore town, because those units are respectively more valuable in those particular locales. I think I will push such dreaming off until I get to that bridge, and keep focusing on the individual details, the small tasks, and see what happens...
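Those pairwise valuations already imply cross rates once everything is converted through a single reference commodity; a toy sketch using the ratios quoted above (the choice of 'fish' as the unit of account is arbitrary):

```python
# Barter ratios from the post, expressed as: 1 unit of commodity = N 'fish'
# ('10 fish = 2 ore' gives ore = 5 fish; '20 wheat = 10 fish' gives wheat = 0.5)
FISH_VALUE = {"fish": 1.0, "ore": 5.0, "wheat": 0.5}

def in_fish(amount, commodity):
    """Convert a holding into the common 'fish' unit of account."""
    return amount * FISH_VALUE[commodity]

def exchange(amount, frm, to):
    """Implied cross rate between two commodities via the fish unit."""
    return in_fish(amount, frm) / FISH_VALUE[to]

# the post's third ratio, '10 wheat = 1 ore', falls out of the other two
ore_in_wheat = exchange(1, "ore", "wheat")
```

If the quoted ratios were NOT mutually consistent like this, an arbitrage loop would exist, which is exactly the kind of pressure that could push one commodity into the 'money' role.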

I have been researching this idea for over a year, and my travels have taken me to the economic simulators that economists have tried to create... but they almost always use a top-down, centrally planned approach without even thinking about it. They are trying to derive 'formulas' which model human behavior instead of focusing on the individual differences between human actors in an economic setting. I think their approach is a fool's errand: you can't calculate a person's ever-changing desires. Though I guess that is exactly what I am trying to do too!!! LOL!

we shall see.

#4763580 Figuring out economic problems

Posted by on 23 January 2011 - 02:05 PM

First off, I am an AI novice. I have researched FSMs and hierarchical behaviors, and had my mind melted while trying to understand neural nets. I have implemented FSMs (not hard) and am in the process of adding a few twists.

Here is my thought: I believe that emergent patterns mimicking life-like behaviors can and will be produced from a basic, simple ruleset. Along these lines, I am trying to program an AI driven by a basic Maslow hierarchy filtered through a set of variables particular to each agent (lazy-thru-productive, weak-thru-strong, stupid-thru-smart, cowardly-thru-courageous, follower-thru-leader, homebody-thru-adventuresome, favorites: color blue, forests > mountains, fish > meat > fruits > bread, etc.), and then watch the agent 'live' in its virtual world, scratching out a meager existence and interacting with other agents.

My biggest problem so far is determining which of multiple possible solutions to pick. I have some background in micro-economics and am trying to incorporate some of it into the decision process. One thought is to rate each possible solution based on an estimate of the time it would take to complete, weighted by the different categories of labor involved, and come up with a value that can then be compared to the other possibilities.

It would work like this: Agent A determines that it is hungry and knows that it can fish in the nearby stream or go to a distant tree to pick some fruit; either method would satisfy its 'hunger' issue. It calculates the distance to the stream and an estimate of the amount of time spent fishing, modified by how much it 'likes' to fish, and then compares that to the same calculation for the steps involved in picking fruit. OK, this works fine.

The problem I have is when economics and interaction with other agents occur. Suppose all agents are willing to sell any object they possess if the price is higher than their cost to produce it, and any agent is willing to buy if the price is less than their own cost to produce it. I am hoping that market prices will naturally arise from such a system, where the seller initially offers cost + 100% and the buyer initially offers cost - 50%, with perhaps a short series of haggling in between until they either agree or decide not to trade.

NOW, assuming this 'price system' produces some sort of market prices, how in the world do I get agents to take the possibility of purchase into account when deciding how to satisfy hunger? It's easy enough to compare 'like' variables that describe the same units, such as time and preference: if fishing takes 10 minutes and berry picking takes 5 minutes, then perhaps the choice to fish only occurs if the agent likes fishing sufficiently more (twice as much?) than picking berries. But what about going to market and buying the fish? I can total up the distance to market, use the current market price for fish, and even make sure fish are currently available in the market, but how do I work a money price into the equation?
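That opening-offer-and-haggle loop can be sketched as two offers converging; the per-round concession rate, the agreement tolerance, and the walk-away rule are all assumptions on my part:

```python
def haggle(seller_cost, buyer_cost, concession=0.25, tolerance=0.5, max_rounds=20):
    """Seller opens at cost + 100%, buyer at (their own production) cost - 50%.
    Each round both concede a fixed fraction of the remaining gap. Returns the
    agreed price, or None if either side hits its reservation price."""
    ask = seller_cost * 2.0
    bid = buyer_cost * 0.5
    for _ in range(max_rounds):
        gap = ask - bid
        if gap <= tolerance:
            return (ask + bid) / 2.0     # close enough: shake hands
        ask -= gap * concession          # seller comes down a bit
        bid += gap * concession          # buyer comes up a bit
        if ask < seller_cost or bid > buyer_cost:
            return None                  # a reservation price was crossed: no deal
    return None

# cheap producer selling to an agent whose own cost to produce is high
price = haggle(seller_cost=10.0, buyer_cost=30.0)
```

Trade only happens when the buyer's production cost exceeds the seller's, which is the condition you'd want market prices to emerge from anyway.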

Besides the interesting nature of the problem itself, the reason I am pursuing this line of simulation is that I feel the future of MMOs will depend entirely on three main things: an AI living world that players can participate in (The Sims, Dwarf Fortress, etc.), a dynamic world that can be physically affected by players (Dwarf Fortress, Minecraft, etc.), and advanced user interface design (the Wii, virtual eyewear, Xbox Kinect, etc.). Ultima Online at its release WAS going to go down this path, but they got overwhelmed by a myriad of unforeseen issues with player interaction, which put the kibosh on doing things like: the players decimate the local deer population (the bears' current food source), so the hungry bears start rampaging closer to town, looking to players as a replacement food source... I was always awe-struck by that sort of living, breathing environment.

Well, any comments or help would be greatly appreciated. Thanks for reading my ramblings!