"Collector" agent vs a collector opponent (how to collect more?)

10 replies to this topic

#1 Icebone1000  Members   -  Reputation: 1484


Posted 07 November 2012 - 10:44 AM

Hi, I'm doing a small prototype of a game (really just an excuse to implement an AI agent).
Imagine a simple situation where goodies are spread randomly over a map (a 2D grid) and you need to collect more than your opponent. (The initial idea wasn't really a race to see who collects more: the goodies were ammunition, each granting the power to shoot a single bullet, so collecting more simply put you at an enormous advantage. But just the collecting part turned out to be really interesting/complex to me, so let's treat it as a pure race to collect more goodies.)

So, thinking logically, I guess a good agent would head for the part of the map where the goodies are most concentrated (more goodies in less space), taking into account the opponent's distance to that same area and his possible future actions.

This reminds me a bit of the travelling salesman problem (the most efficient route to collect all goodies), but with a lot more dynamics added... or maybe I'm just being dense. I mean, is there any optimal solution to this?

How do I detect where the goodies are most concentrated on the map? (I can imagine doing a BFS from EACH goodie and finding which one has the most goodies around it within the fewest iterated cells... lame?)

Assuming you've detected the best part of the map to collect from, what do you do? Run there (A*) and start collecting the closest ones (A* from the closest one to the next closest)?

I have no idea what to do about the opponent.

Is this really a lot more complicated than it looks at first, or am I overcomplicating it? How would you solve this?

#2 SuperVGA  Members   -  Reputation: 1129


Posted 07 November 2012 - 10:51 AM

I'd say it's exactly like the travelling salesman problem, if the only purpose is to decrease
Time_Spent / Goodies_Collected

It's still interesting, though.
-Are there obstacles involved?

#3 Icebone1000  Members   -  Reputation: 1484


Posted 07 November 2012 - 11:03 AM

I'd say it's exactly like the travelling salesman problem, if the only purpose is to decrease

Time_Spent / Goodies_Collected

It's still interesting, though.
-Are there obstacles involved?

Not at first; my first prototype will definitely use an empty/open map. My initial idea was to keep going until I end up with the 'ammunition for bullets' game, with the maps being dungeons/rooms... but the collecting part alone might be enough to burn out my gray matter.

#4 Dave Mark

Posted 07 November 2012 - 11:29 AM

It is, indeed, the Traveling Salesman Problem. Welcome to the world of NP-Hard.

However, remember that you don't need to actually solve TSP to get your agents to look good. There are a number of ways you can go about it so that they look reasonable without being mathematically purely rational.

The most obvious way of doing it is by simply moving to the closest one at any given time. When you reach it, move to the closest one from that point. That does make for some edge cases that are really stupid, however. Imagine that one bullet is 10 west of your position and ALL the rest of the bullets are 11 east of your position. The "nearest" algorithm will move to the single west one first and then backtrack (21 spaces!) east to the rest of them. This is obviously not optimal.
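As a minimal sketch of that greedy "nearest first" strategy, in 1-D and using the example above (agent at 0, one bullet 10 west, the rest 11 east; all names here are illustrative, not from any library):

```python
# A minimal sketch of the "nearest first" strategy in 1-D, using the example
# above: agent at 0, one bullet 10 west (-10), the rest 11 east (+11).

def greedy_route(agent_pos, bullets):
    """Repeatedly walk to the nearest remaining bullet; return the visit
    order and the total distance walked."""
    pos, total, order = agent_pos, 0, []
    remaining = list(bullets)
    while remaining:
        nearest = min(remaining, key=lambda b: abs(b - pos))
        total += abs(nearest - pos)
        pos = nearest
        order.append(nearest)
        remaining.remove(nearest)
    return order, total

order, total = greedy_route(0, [-10, 11, 11])
# Greedy grabs the lone west bullet first, then backtracks 21 east:
# order is [-10, 11, 11], total distance 31.
```

Note that even when the total distance comes out acceptable, greedy reaches the big cluster last, which is exactly what loses a race against an opponent.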

Another (much better) solution is to use multi-layered influence maps that combine the locations of bullets with your own location. That combines "where is the stuff?" with "what is closest to me?" Each bullet would radiate out a decaying influence map onto the world. By adding up all those influences, you will get a "heat map" that shows where the biggest clusters of bullets are. The problem is, if you move towards that area, you might bypass "targets of opportunity" on the way. (for example, bypassing a single bullet at 5 east to get to the cluster at 15 east). However, if you also overlay information radiating out from your current position, you are adding in the part that says "and is close to me". Therefore, on your way to that cluster at 15 east, you will pass by the one at 5 east and take advantage of it simply because it is being combined with the fact that it is close to you. This approach will result in behavior that looks a lot more like how humans process the problem and, therefore, will look natural to us. It may not be perfectly mathematical... but neither are we.
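A minimal sketch of the summed influence map described above (the base value of 100 and decay of 10 per step are illustrative numbers, not from the post):

```python
# Each bullet radiates a linearly decaying influence over the grid; the
# per-cell sums form the "heat map". Base/decay values are illustrative.

def influence_map(width, height, bullets, base=100.0, decay=10.0):
    grid = [[0.0] * width for _ in range(height)]
    for bx, by in bullets:
        for y in range(height):
            for x in range(width):
                dist = abs(x - bx) + abs(y - by)  # Manhattan distance
                grid[y][x] += max(0.0, base - decay * dist)
    return grid

# One lone bullet at x=1 vs. a cluster at x=6,7: the cluster cells are hotter.
heat = influence_map(8, 1, [(1, 0), (6, 0), (7, 0)])
```

Overlaying the agent's own decaying influence as one more layer is the same loop with the agent's position as the source.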

Sorry I don't have time to make pretty pictures for you. Hope it made sense in text.
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#5 Icebone1000  Members   -  Reputation: 1484


Posted 07 November 2012 - 12:14 PM

It is, indeed, the Traveling Salesman Problem. Welcome to the world of NP-Hard.

However, remember that you don't need to actually solve TSP to get your agents to look good. There are a number of ways you can go about it so that they look reasonable without being mathematically purely rational.

The most obvious way of doing it is by simply moving to the closest one at any given time. When you reach it, move to the closest one from that point. That does make for some edge cases that are really stupid, however. Imagine that one bullet is 10 west of your position and ALL the rest of the bullets are 11 east of your position. The "nearest" algorithm will move to the single west one first and then backtrack (21 spaces!) east to the rest of them. This is obviously not optimal.

Another (much better) solution is to use multi-layered influence maps that combine the locations of bullets with your own location. That combines "where is the stuff?" with "what is closest to me?" Each bullet would radiate out a decaying influence map onto the world. By adding up all those influences, you will get a "heat map" that shows where the biggest clusters of bullets are. The problem is, if you move towards that area, you might bypass "targets of opportunity" on the way. (for example, bypassing a single bullet at 5 east to get to the cluster at 15 east). However, if you also overlay information radiating out from your current position, you are adding in the part that says "and is close to me". Therefore, on your way to that cluster at 15 east, you will pass by the one at 5 east and take advantage of it simply because it is being combined with the fact that it is close to you. This approach will result in behavior that looks a lot more like how humans process the problem and, therefore, will look natural to us. It may not be perfectly mathematical... but neither are we.

Sorry I don't have time to make pretty pictures for you. Hope it made sense in text.

This is awesome! And it looks fun to implement.
I've never worked with influence maps, so how should I use one?
Simply move the agent to the adjacent cell with the greatest heat?
Or run an A* to the hottest cell on the map, using the heat of the iterated cells as a positive cost in the A* heuristic?

#6 SuperVGA  Members   -  Reputation: 1129


Posted 07 November 2012 - 12:43 PM

There was a colourful and well-written article on influence mapping here on gdnet a while ago:
http://www.gamedev.net/page/resources/_/technical/artificial-intelligence/the-core-mechanics-of-influence-mapping-r2799
-With the waypoint nodes and such, it becomes a lot more complex than what you need,
but I found it very inspirational.

I suggest you start by painting the heatmap Dave mentioned to an image,
and creating a debug feature to display that image at runtime (so you can check that the map creation works, and tweak it if needed).

Then perhaps add sampling of the map afterwards, which could eventually involve visibility limits and such. But an omniscient approach is also a good way to start.
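A cheap intermediate step before rendering the map to an actual image (my own sketch, not from the linked article) is to dump it as ASCII shades:

```python
# Dump the heat map as ASCII shades: the brighter the character, the more
# heat in that cell. Purely illustrative debug helper.

SHADES = " .:-=+*#%@"

def render(grid):
    top = max(max(row) for row in grid) or 1.0
    return "\n".join(
        "".join(SHADES[int(v / top * (len(SHADES) - 1))] for v in row)
        for row in grid
    )

# render([[0, 5, 10]]) gives " =@": blank, mid shade, brightest.
```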

#7 Dave Mark

Posted 07 November 2012 - 06:55 PM

You don't actually use the map itself for picking a destination. Transfer the influence map to the targets themselves by using the value of the cell that each bullet sits on as the priority for that bullet. Pick the top-ranked bullet as the one to go to next. As I described, that bullet's score will be a combination of its proximity to you AND its proximity to other bullets.
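A sketch of that selection step (function and map names are mine): score each bullet by the combined map value on its own cell and chase the best one.

```python
# Score each bullet by the value of its own cell in the combined
# (bullet + agent) influence maps, and pick the top-ranked one.

def pick_target(bullets, bullet_map, agent_map):
    def score(b):
        x, y = b
        return bullet_map[y][x] + agent_map[y][x]
    return max(bullets, key=score)

# A far-away cluster cell (heat 80) loses to a nearby lone bullet (heat 50)
# once the agent's own influence (100 vs 10) is added in.
target = pick_target([(0, 0), (2, 0)], [[50, 0, 80]], [[100, 0, 10]])
```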

The tricky part is determining the decay/spread rate of each bullet onto the influence map itself. It needs to be of a great enough radius that it is likely to help identify clusters. The same applies to your character -- but even more so. My initial gut reaction would be to use a radius for your area that covers most of the map and ones for the bullets that cover about half the map. You can either do linear decay (just a linear function of the distance) or some sort of exponential function so that there is a spike ON the bullet with a drop-off that shallows out as you move away. At this point, it is more art than science, though. I work with these sorts of response curves often enough that I tend to intuit the right ones as I work. *shrug*

One last reminder... if the bullets don't move, you can create the map once and leave it for all players to use. As a bullet is picked up (by either player), you don't even have to reprocess the whole map... just decrement the values that the now missing bullet affected by the inverse of what it would have added. i.e. if that bullet would have added 0.34 to a cell, subtract 0.34 from that cell and you have negated its influence. This saves a lot of time in processing the map.
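The incremental update can be sketched like this (assuming the same illustrative linear falloff as before; base 100, decay 10 per step):

```python
# When a bullet is picked up, subtract the exact influence it contributed
# instead of rebuilding the whole map.

def bullet_influence(x, y, bx, by, base=100.0, decay=10.0):
    dist = abs(x - bx) + abs(y - by)
    return max(0.0, base - decay * dist)

def remove_bullet(grid, bx, by, base=100.0, decay=10.0):
    for y in range(len(grid)):
        for x in range(len(grid[0])):
            grid[y][x] -= bullet_influence(x, y, bx, by, base, decay)
```

Removing a bullet's contribution from a map it alone produced leaves the map at exactly zero, which is a handy sanity check for the debug view.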

As the agent moves, it only needs to take the static bullet map and apply its own influence to generate its own personal map. There are some tricks you can use to speed that process up as well, but I will leave those to you to find.

Once you get the mapping system up and running, you can have all sorts of fun with it. For example, the agent might want to take into account what the other characters/players are doing: if it realizes that the player will likely go for A, B, and C, it might instead bias away from those and grab D, E, and F. There are all sorts of things you can do!
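One way to express that bias (a sketch; the 0.5 weight is an arbitrary tuning knob of mine, not from the post) is to overlay the opponent's influence as a negative layer, so bullets the opponent will likely reach first score lower:

```python
# Combine the layers per cell: bullet heat plus my own proximity, minus a
# weighted penalty for the opponent's proximity. The weight is illustrative.

def contested_score(bullet_heat, my_heat, opponent_heat, avoid=0.5):
    return bullet_heat + my_heat - avoid * opponent_heat
```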

#8 Icebone1000  Members   -  Reputation: 1484


Posted 13 November 2012 - 03:51 PM

Hey,

My influence maps are working (one for the bullets and one for the agent), and I already have the agent going to the most "influenced" (highest-heat) bullet.
Things seem to be working just as expected, which is awesome.

But I made a stupid mistake. Even though I'm aiming for an empty map for now, I designed the influence maps with future map obstacles in mind, so I'm actually computing them with a BFS from each bullet.
I thought that by doing this the influence wouldn't pass through obstacles, but I was wrong: I'm computing the influence based on straight-line distance, so the obstacles themselves receive 0 influence, but the cells behind them keep receiving influence as usual.

example: 100 - 90 - 80 - 0 - 60 - 50 (0 is the wall)

(That's really how my influences work for now: they start at 100 with a linear decay of 10 per unit of distance. I can set it to anything I want, though, but the formula is always
current - decayRate * dist; I'm bad at figuring out smarter formulas.)

So now I know I have to consider the number of steps/cells travelled, not the straight-line distance. So I added another data member to the Node struct that I use only for this search stuff, and it works:

now: 100 - 90 - 0 - 50 - 40
----------------80 - 70 - 60

I'd like to know if this is really the approach I should be taking.
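A step-count flood fill like the one described can be sketched as follows (my own names; the decay numbers match the example above):

```python
# Spread influence by BFS step count so it flows around walls instead of
# through them. walls[y][x] is truthy for a wall cell.

from collections import deque

def spread(walls, start, base=100.0, decay=10.0):
    h, w = len(walls), len(walls[0])
    inf = [[0.0] * w for _ in range(h)]
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (x, y), steps = queue.popleft()
        inf[y][x] = max(0.0, base - decay * steps)
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and not walls[ny][nx] \
                    and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append(((nx, ny), steps + 1))
    return inf

# Corridor with a wall at x=2 in the top row: influence reaches the cells
# past the wall only by detouring through the bottom row.
walls = [[0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0]]
inf = spread(walls, (0, 0))
# Top row becomes [100, 90, 0, 50, 40], just like the example above.
```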

-------

Second, a question about BFS (not much of an AI topic in itself): is there any way to keep track of the step count without storing it in each node added to the open list, but in the algorithm itself?

I tried simply keeping an int step counter and incrementing it at each loop iteration (just before adding the adjacent cells). That seemed logical to me, but it didn't work: it produced an awkward influence map leaning towards the right... perhaps I'm doing something wrong in my BFS (although it's otherwise doing its job well).

I just want to know whether it's possible to know at what step/depth the cells/nodes currently being added during the BFS are, without storing this value inside the (previous) cells/nodes.
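One standard way to do this (a sketch, not from any post in the thread) is to expand the BFS one whole level at a time: everything in the queue at the start of a pass sits at the same depth, so a single counter is enough.

```python
# BFS depths without a per-node step field: drain exactly one level of the
# frontier per pass and bump one shared depth counter.

from collections import deque

def bfs_depths(neighbors, start):
    depth_of = {start: 0}
    frontier = deque([start])
    depth = 0
    while frontier:
        depth += 1
        for _ in range(len(frontier)):     # drain exactly one level
            node = frontier.popleft()
            for n in neighbors(node):
                if n not in depth_of:
                    depth_of[n] = depth
                    frontier.append(n)
    return depth_of

# On a line 0-1-2-3, depths from 0 are {0: 0, 1: 1, 2: 2, 3: 3}.
depths = bfs_depths(lambda i: [j for j in (i - 1, i + 1) if 0 <= j <= 3], 0)
```

Incrementing a counter on every loop iteration (per node, rather than per level) is what produces the skewed, direction-biased map described above.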

Edited by Icebone1000, 13 November 2012 - 03:55 PM.

#9 Icebone1000  Members   -  Reputation: 1484


Posted 13 November 2012 - 05:03 PM

video
check this out
The bullets have an initial influence of 100 and a decay rate of 10 per step;
the agent has an initial influence of 300 and a decay rate of 20 per step.

One important thing to note is that I don't update the agent's influence map each time he changes position; I only update it when the bullets' influence map changes (when one is collected).

I did this so that the agent doesn't "change his mind" halfway through his current path to the bullet he has already decided to go to.

So basically, each time the bullets' influence map updates, I update the agent's influence map, add it to the bullets' map, get the hottest bullet, compute a path to it, and just keep following that path until the bullet influence map updates again.

#10 Dave Mark

Posted 13 November 2012 - 07:53 PM

Video looks good except for the fact that he (she?) is ignoring targets of opportunity. That is, he is walking past bullets that are not really that far out of the way. That's the reason for using the agent's influence map. Looks like it needs to be cranked up a bit so that it is paying more attention to things closer to it.

As for your "walls" issue, rather than using straight distance, you can do a flood fill approach so that influence moves around walls rather than through them. This is, of course, important since you need to account for the pathing distance.

#11 CuriosityKills  Members   -  Reputation: 114


Posted 15 November 2012 - 10:50 AM

Second, a question about BFS (not much of an AI topic in itself): is there any way to keep track of the step count without storing it in each node added to the open list, but in the algorithm itself?

BFS deals with planning, and so is very much AI, even in the general sense.

With BFS you're going to have to store, at the very least, the parent of each state you consider in your search. If BFS is too memory-intensive for your application, consider DFS with iterative deepening. Like BFS, this is an uninformed search that finds the optimal path on an unweighted graph. If implemented with a stack, you would only need to remember very few states, but that algorithm does require a bit more processor time than BFS. That's often the trade-off when you optimize for low memory usage.
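Depth-first iterative deepening can be sketched like this (a hypothetical recursive variant of my own; a stack-based version works the same way):

```python
# DFS with iterative deepening (DFID): run a depth-limited DFS with limits
# 0, 1, 2, ... until the goal is found. Memory stays O(depth), and on an
# unweighted graph the first hit is a shortest path.

def iddfs(neighbors, start, goal, max_depth=50):
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None
        for n in neighbors(node):
            if n not in path:              # avoid walking in circles
                found = dls(n, limit - 1, path + [n])
                if found is not None:
                    return found
        return None

    for limit in range(max_depth + 1):
        result = dls(start, limit, [start])
        if result is not None:
            return result
    return None

# On a line 0-1-2-3, iddfs finds the path [0, 1, 2, 3].
path = iddfs(lambda i: [j for j in (i - 1, i + 1) if 0 <= j <= 3], 0, 3)
```

The re-exploration of shallow levels looks wasteful, but on trees with a reasonable branching factor the deepest level dominates the total work, which is the processor-time trade-off mentioned above.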

(Note: If you're ascribing different weights to different steps via an influence map, you're not actually implementing a BFS.)

Edited by CuriosityKills, 15 November 2012 - 10:58 AM.
