RTS AI built on potential fields

Started by
7 comments, last by badday 12 years, 1 month ago
Hi there,

we are currently developing an RTS game and are working on the combat implementation. That immediately leads to the field of AI, so we thought about how to realise it. We looked for a straightforward technique and finally chose the potential-field approach. Let me point out what this means:

All movement depends on potentials. The static world, i.e. the heightmap, has its own layer. So if you want to move a unit from A to B, you run a fairly regular A* search, with one difference from the usual setup: we start at the target and search from there towards the start position. The class path_potential_admin supplies the static layer, so obstacles are not considered there. It also provides some caching, so if a path is already known (identified by positions A and B) we can save time and return the previously found result. This is possible because the static world never changes.

The dynamic world is also based on potentials, e.g. a repulsive one around units or buildings so that we do not collide. We offer a few different potentials; just have a look at the code ( http://sourceforge.net/p/potentialfield/code/ci/b9930a37bc114a64601d976d0c2d774fa919789d/tree/ ). It comes with a VS project which is ready to compile and creates some pictures illustrating how it works (be aware, it might take a long time to finish).

When it comes to the moving process, we need a force that tells us where to go. You can picture it as a 3D potential field with a ball in it: the ball always rolls downwards, as if pulled by gravity, and "downwards" in this context means towards the lowest potential.

So much for the theory and method; now we come to the real world and the problems we currently face with our implementation. We have two requirements:
* do not rely on external libraries
* be as performant as possible

So we would like to discuss the following questions:
* What do you think about this idea at all?
* What improvements do you see in the implementation?
* We are working with a hex-grid currently, what do you think about this approach?
* There is an issue when mapping a square heightmap to a hex grid (the inradius and the side length of a regular hexagon are not equal, see http://upload.wikimedia.org/wikipedia/commons/e/e2/Sechseck-Zeichnung.svg ). This is a known issue and will be fixed soon.
* The A* implementation is not really efficient when the start position changes. Are there any improvements?
* Anything else you want to say.


Thanks a lot for your contributions.


Greetings,

badday
I can't look at the code right now, but I'd advise against a pure "follow the potential field" approach. Any potential field can have local optima that are poor from a global perspective. For example, suppose you combine an "attack enemy" field with an "avoid fire" field. The attack portion tells you to approach the enemy; the avoid portion tells you to stay away. Your unit may well halt at a distance where it can be fired upon but cannot fire itself, or where it can neither attack nor be attacked. I think it's important that units commit to some objective rather than risk achieving none.
I think you are talking about the "local optima problem" described here: http://aigamedev.com/open/tutorials/potential-fields/#WhatabouttheLocalOptimaProblem , aren't you?
What that article discusses is a problem resulting from an approach that doesn't rely on a path calculated with A*, so I think we can be fairly sure we won't have that problem on the static layer.

Another point is the one you mentioned. This is indeed a problem, but I'm not sure it is specific to the potential-field approach. To solve it we might use a hybrid approach (as you said, a pure potential-field approach does not seem to be the best solution), e.g. a fuzzy-logic system that determines which potential to put around an object (each object having only one potential), thereby making sure that combining various potentials without looking at the global context cannot produce such a problem. So in the end, the AI is translated into potential fields and not the other way around.
Yes, that's the one. I wasn't sure how much AI background you had, so better to mention that one. :) You're correct, A* won't be affected by that problem, unless you dynamically regenerate the path often. If your position affects the potential field, and you recalculate the path every x time steps, there is some possibility of getting stuck even with A*, e.g. when you're going left the lowest cost path goes right and vice-versa.

I think the second problem could mostly be avoided using a metric that prefers the current course of action over other actions, and maybe a pheromone-type metric to discourage going back to an action you attempted recently (particularly if it was a course of action that was never completed).
Our current implementation should make it quite unlikely (at least if we do not have really many or big obstacles) that a recalculation is necessary. However, I'm not yet sure what you mean by the connection between recalculation frequency and getting stuck, as in our implementation a recalculation from some position does not differ from an initial calculation of the path from that position. Maybe I misunderstood you.
Regarding what you say about the dynamic layer forcing us right or left: a problem which could occur, depending on the implementation, is that we are deadlocked with a net force of 0. But I think you can avoid that by weighting the layers differently, which might come close to what you said about the second problem. So we might have a smooth gradient potential around a unit up to a certain distance and then immediately an infinite value, to avoid at any price that we collide. That would, as far as I can see, prevent us from getting deadlocked.
We could also have a fallback: if we really face a zero force and are not at our target, we clear the dynamic layer of all smooth-movement potentials (all gradients) and keep only 0 or infinite values. I think it is very unlikely that the zero force remains in this case, although I'm not yet sure.

Thanks a lot for your help :)
I did some work like this a while back; my environments were large and open with relatively few obstructions, so it made a reasonable amount of sense to do micro-routing like this. You're right in saying that static solutions are better for larger-scale routing. It's *possible* to get the field systems to (say) find the bridge over the river, but it's not as good a solution as a proper macro-scale system. For example, we used potential fields to get units onto the road network, and they would then drive the road network (which is modelled as a graph) until they got close to their destination, at which point they got off the road again. Road driving uses an attractor which moves ahead of the unit to tow it down the road. It means that if units come across a partial blockage, they'll simply drive around it and then regain their correct route. It actually produces very "realistic" looking movement.

We got quite good results with "formed" infantry moving through woods/obstructions etc, but back then it all had to run on the CPU and it was actually quite expensive to run. (These days it might run nicely as a GPU job). The other issue was that it meant it was difficult to predict arrival times at destinations/waypoints along the route, because the local field-movement system might add arbitrary distance to the actual movement at the macro level. Never did work out what to do about that really.


"Your unit may well halt at a distance where it can be fired upon, but not fire itself."

This problem can (sort of) be solved by using scalings on the potential attractors/repellers. Conveniently they can be hooked up to "fuzzy logic" outputs for things like "How much do I want to seek cover" vs "how much do I want to obey instructions" although one needs to be careful to order the scalings properly and that's the part which gets difficult if there are many scalings.
"For example, we used potential fields to get units onto the road network, and they would then drive the road network (which is modelled as a graph) until they get close to their destination, at which point they get off the road again."
Hm... Our terrain is generated with a deterministic noise generator, so a modelled graph would have to be computed; I'm not sure whether this is easy to achieve.


"It's *possible* to get the field systems to (say) find the bridge over the river, but it's not as good a solution as a better macro scale system."
I agree with you on that point, but our game has no such "high interest" places, so the only things which might be a problem are more or less small obstacles (buildings or units), not things like rivers which influence the whole navigation process much more.
Yeah, if your obstacles are small relative to the scale of the movement areas, it works well. Nice realistic-looking movement, easy to code, easy to debug. No complicated giant data structures, and it doesn't even need a lot of work to amortise costs across frames the way (say) partial A* computes do.

Seriously look at trying to run this as a GPU job though, cos it really does chew up cycles.
You mean combining the potential fields/layers on the GPU (OpenCL, as we need to run it on our server too)? Are there any code snippets available which show something similar in action?

Apart from that, our A* algorithm could probably be optimised, so maybe someone who is familiar with the algorithm might have a look at the code ( http://sourceforge.net/p/potentialfield/code/ci/b9930a37bc114a64601d976d0c2d774fa919789d/tree/ ).

Thanks a lot.

This topic is closed to new replies.
