Pathfinding - moving target

Started by ragonastick
12 comments, last by ragonastick 22 years, 6 months ago
Ok, here is the problem: in an RTS, you will often have to navigate to a unit, and that unit may or may not be moving. How can you find the path with the highest probability of finding the unit while still appearing to have some common sense (acting intelligently as new information is found)?

To simplify the problem, here is what I have:

1) High intelligence - no fog of war, and I can also query the target unit as to where it is heading. But I cannot find out where it will actually be at a specific time (possibly estimate it, though).

2) Influence maps available - the pathfinding algorithm I am using creates an influence map, which is basically the "true" distance from each point on the map to a fixed point. These can be created fairly quickly (about 5 frames' work), but calculating them constantly would not be good for performance.

3) Influence map following available - if a new influence map were made which factored in the probability of finding the other unit, it would be quite easy to follow.

Of course, if you think up a better way which doesn't involve influence maps then go right ahead, but I've just used them for everything because the paths are self-repairing and can be shared by multiple units without trouble.

So, any ideas, or links, or words which will make people think I'm smart if I use them? (I found that by calling pathfinding a branch of graph theory I sound smart - sorry to all you people who are actually smart and don't need to use intelligent words to show it... you know who you are, I have the greatest respect for you.)

And to see what this is going into, you can download the tech test of my RTS from this post; it includes the standard (fixed point) pathfinding, so you can see how well it works if you are considering it. (Sorry about the blatant ad - I've just had a few compatibility problems, so I need a few more people to download it and spill the beans.)

Thank you
Trying is the first step towards failure.
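An influence map of the kind described above, plus downhill following, can be sketched roughly like this (a minimal illustration; the grid layout and function names are my own assumptions, not the actual tech-test code). The map holds the true path distance from every cell to a fixed goal, built with Dijkstra's algorithm, and a unit follows it by always stepping to the cheapest neighbouring cell:

```python
import heapq

def influence_map(grid, goal):
    """Distance field: true path cost from every passable cell to `goal`.

    `grid` is a 2-D list where True marks a passable cell;
    `goal` is a (row, col) tuple. 4-way movement, unit step cost.
    """
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    heap = [(0, goal)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]:
                if d + 1 < dist[nr][nc]:
                    dist[nr][nc] = d + 1
                    heapq.heappush(heap, (d + 1, (nr, nc)))
    return dist

def follow(dist, pos):
    """One step of influence-map following: move to the cheapest neighbour."""
    r, c = pos
    best, best_d = pos, dist[r][c]
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(dist) and 0 <= nc < len(dist[0]) \
                and dist[nr][nc] < best_d:
            best, best_d = (nr, nc), dist[nr][nc]
    return best
```

Because the map stores distances rather than one fixed route, any number of units can share it and paths "self-repair": a displaced unit just keeps stepping downhill from wherever it is.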
Offtopic:

quote:Original post by ragonastick
Trying is the first step towards failure.


Yes, I have heard that really good athletes don't TRY or think about trying, they just DO. Like the difference between walking along a plank a foot off the ground, and the same plank if it were a hundred metres from the ground!

On the other hand, if that is just a defeatist slogan then it is silly and a bit sad (even if meant as a joke - was it in the Simpsons?)... if you never try anything, you will never stand a chance of succeeding... and will just waste your time.

quote:Yes, I have heard that really good athletes don't TRY or think about trying, they just DO. Like the difference between walking along a plank a foot off the ground, and the same plank if it were a hundred metres from the ground!

On the other hand, if that is just a defeatist slogan then it is silly and a bit sad (even if meant as a joke - was it in the Simpsons?)... if you never try anything, you will never stand a chance of succeeding... and will just waste your time.


Lately I've been having more comments about my sig than my posts, so maybe I will change it... or is it a hint that my posts stink? It might be from the Simpsons - I used to watch it a fair bit - but if it is, it is unintentional and only because it was in my subconscious (scary thought). I don't watch that much TV now, so I wouldn't know. (Yes, it is meant as a joke, and if you download my tech test... see above... then you'll see that I do try, and it doesn't look like failure.)
Trying is the first step towards failure.
Homer Simpson: "Trying is the first step towards failure"
Sorry for the fourth reply that is not about the actual topic, but I just wanted to say this.

Ketchaval, it would help to realise that it was a joke. My signature's hardly optimistic, but it's a joke - something that all games programmers should have. As for "you will never stand a chance of succeeding": our team used my signature as our motto at our last sports day, and we won! Yep, I couldn't believe it either.


As for the actual question, well, I don't have a clue. Sorry.


Btw, I wouldn't mind nicking your signature if you are going to get rid of it...

~ There's no substitute for failure ~
How 'bout this: use Dijkstra's algorithm (A* where h(x) = 0) and end not when you reach a specific destination tile, but whenever you reach a tile that is part of the destination unit's path.

That ought to do it.
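A sketch of that idea, assuming a tile grid and a known set of tiles on the target's path (all names here are illustrative): run the search outward from the pursuer and stop at the first expanded tile that lies on the target's path. With unit step costs, Dijkstra's algorithm reduces to breadth-first search:

```python
from collections import deque

def path_to_moving_target(grid, start, target_path):
    """BFS (Dijkstra with unit costs) that stops as soon as it reaches ANY
    tile on the target unit's known path, not one fixed goal tile.

    `grid` is a 2-D list of booleans (True = passable), `start` a
    (row, col) tuple, `target_path` a set of (row, col) tiles.
    Returns the list of tiles from `start` to the meeting tile."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell in target_path:
            # Any tile on the target's path ends the search; walk back
            # through `prev` to reconstruct our route.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols \
                    and grid[nxt[0]][nxt[1]] and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None  # the target's path is unreachable
```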
quote:How 'bout this: use Dijkstra's algorithm (A* where h(x) = 0) and end not when you reach a specific destination tile, but whenever you reach a tile that is part of the destination unit's path.

The influence maps are already created using Dijkstra's algorithm.

The problem with that method is that it may overestimate the position of the other object. Say I am already on its path: then the object can either 1) move closer, 2) move further away, or 3) wait - with all three options satisfying that end condition.

What I have now done, after sleeping on the problem (that fixes everything), is basically this: since we can tell how far we are from the other object's destination, and compare that with the object's own remaining distance, we can decide either to follow the other unit's path (and therefore end up at the same location), or to move to the object's current location.

The beauty of this method is that if, for some reason or other, the unit is not found when this happens, the same criteria can be used to find a new path. I haven't encountered any problems with it yet, and it seems fairly processor-friendly: usually only one extra influence map has to be built, and in many cases the existing map can be reused, which means there is almost no performance hit.
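That decision rule might be sketched like this (a hypothetical reading of the method above; the function and parameter names are assumptions). Compare our influence-map distance to the target's destination against the target's own remaining distance, and pick a goal accordingly:

```python
def choose_goal(my_dist_to_dest, target_dist_to_dest, target_pos, target_dest):
    """Pick a goal tile for the pursuer.

    `my_dist_to_dest` would come from an influence map centred on the
    target's destination; `target_dist_to_dest` is the target's own
    remaining path length. Purely illustrative names and tie-break."""
    if my_dist_to_dest <= target_dist_to_dest:
        # We arrive no later than the target does: head for its
        # destination and meet it there.
        return target_dest
    # Otherwise it will be gone before we arrive: chase its current
    # tile instead (the same test is simply re-run if we miss it).
    return target_pos
```

Because the test is cheap, it can be re-evaluated whenever the pursuit fails, which matches the "same criteria can be used to find a new path" behaviour described above.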

But if anyone has any experience with this and knows a better method, then please speak up, because this time around I've actually coded things in a way that I can change... I learnt my lesson from my last project.
Trying is the first step towards failure.
*moderator hat on*
It is unfortunate that there are so many inappropriate replies in this thread... perhaps people could refrain from off-topic posts in future.
*moderator hat off*

Okay, now that I've got that out of my system...


ragonastick... the answer you want is a linear filter. It is extremely fast in implementation (a set of matrix multiplications) and fairly useful as a prediction tool over short periods.

For unit 1 trying to move to the location of unit 2, that location varies over time. Unit 1 (the AI controlling it) observes the position of unit 2 at various timesteps. The observations are used online (as they are taken) and added to the filter. The filter predicts the state vector of unit 2 (position and velocity) given the position observations. A measure of uncertainty is also computed (the covariance matrix), which can be used to define a region of the world within which the unit should be at any given time with a given probability (assuming, of course, that unit 2 doesn't change direction... but if it does, this does not matter, because it will be taken into account in the next iteration).

So, unit 1 would predict this region and then choose a plan that takes it there. Once unit 1 has line of sight to unit 2, you could change to a simple steering behaviour designed to decrease the distance and relative heading angle between the two units.

So, how do you implement the linear filter?

Let's assume we have the state vector s = [x v]' (where the ' means transpose... I had to write it like this because I cannot format the vector vertically!) to describe the state of unit 2. x is the position of unit 2 and v is its velocity. If the state is 2-dimensional - which it probably is for an RTS - then you can actually solve each dimension independently... i.e., solve the problem as two 1-dimensional filters. Below is a 1-dimensional linear filter (i.e., motion in 1 dimension... a line).

We can define a process model that predicts the state of unit 2 at some future time (t+dt) given the state at the current time (t) (yes, this is a Markov model):

s_{t+dt} = F_t s_t + w_t

where F_t is the state transition matrix at time t and w_t is the process noise. The transition matrix defines the deterministic evolution of unit 2 (which we are assuming is a straight line, because this is the least biased estimate). The process noise is added to incorporate the uncertainty in the unit's velocity. F is defined as

F_t =
|1 dt|
|0  1|

and we will assume that the process noise is Gaussian, meaning that w_t ~ N(0, Q_t) (a Gaussian with zero mean and covariance Q_t).

We can also define an observation (sensor) model:

y_t = H s_t + v_t

where H =
|1 0|
|0 0|

and v_t ~ N(0, R_t).

If you wish to assume that an observation y taken at time t is exact (i.e., there is no noise in the observation sensor) then you can set v_t = 0.

We define Q_t =
|b11 dt      0|
|0      b22 dt|

and
R_t =
|u 0|
|0 0|

(b11, b22 and u are arbitrary and determine the level of uncertainty in the model).

The Kalman filter equations (the optimal linear filter in the minimum mean squared error sense) are:

m_t = E[s_t]

V_t = E[(s_t - m_t)(s_t - m_t)']

...here E[] is the expectation operator, but you don't need to compute this...

P_t = F_t V_t (F_t)' + Q_t

m_{t+dt} = F_t m_t + K_{t+dt} (y_{t+dt} - H F_t m_t)

V_{t+dt} = (I - K_{t+dt} H) P_t

K_{t+dt} = P_t H' (H P_t H' + R)^(-1)


The system is initialised with s_0 ~ N(m_0, V_0).

For some timestep t+dt, compute the estimate of unit 2's state, given by m_{t+dt} (the first component of which is the position), and the uncertainty in this estimate, the matrix V_{t+dt}. It is this matrix that you want to use to figure out where the unit will be. The element V_{11} of V_{t+dt} represents the variance in the estimate of position. The square root of this is the standard deviation of the Gaussian distribution that represents the probability of unit 2 being at position x at time t+dt. For a Gaussian distribution, about 99.7% of the probability mass lies within 3 standard deviations of the mean. You can use this fact in each dimension to define the circle/ellipse within which you expect to find unit 2 at time t+dt.
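To make the recipe above concrete, here is a 1-dimensional sketch in Python with NumPy. It observes position only, so it uses a 1x2 observation matrix H = |1 0| (the 2x2 H above with its zero row dropped, which keeps the innovation covariance invertible); the noise levels q and r are arbitrary choices, as noted in the post:

```python
import numpy as np

def kalman_step(m, V, y, dt, q=(0.01, 0.01), r=0.1):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    m: state estimate [x, v]; V: 2x2 covariance; y: observed position
    (length-1 array). q and r set the process/observation noise levels
    (arbitrary here, as in the post)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition matrix F_t
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = np.diag([q[0] * dt, q[1] * dt])    # process noise covariance Q_t
    R = np.array([[r]])                    # observation noise covariance

    # Predict: propagate the state and covariance through the model.
    m_pred = F @ m
    P = F @ V @ F.T + Q

    # Update: fold in the new observation via the Kalman gain.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    m_new = m_pred + K @ (y - H @ m_pred)
    V_new = (np.eye(2) - K @ H) @ P
    return m_new, V_new
```

After each call, sqrt(V[0, 0]) is the position standard deviation, so the "3 standard deviations" interval mentioned above is m[0] ± 3·sqrt(V[0, 0]).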

Well, I hope this helps. I would be happy to explain any parts of the above in more detail if you have some specific questions.

Cheers and good luck,

Timkin

Edited by - Timkin on October 15, 2001 4:56:22 AM
What does it mean in English?
Mathematics does not always translate easily into English. To understand the above you require a basic knowledge of linear algebra (matrices), some probability theory and some kinematics. 1st year level University courses would be sufficient or, if you want to spend some time reading, the concepts would be accessible to good students in the final years of secondary school.

Anyway, what does the above mean...

First, let's start with some basic mechanics. For an object moving in one dimension, its position can be given by the coordinate x, and its velocity v is given by the first derivative of x with respect to time, so v = dx/dt. These two pieces of information can be combined and called the 'state' of the object. Using a vector to represent this state gives s = [x v]. Strictly speaking this should be a column vector; however, it's not possible to neatly format that within HTML. So s is a vector with two components: s[1] is the position of the object and s[2] is its velocity.

Using a simple linear model for motion, the position at any future time t+dt can be written in terms of the position at time t and the displacement of the object during the timestep dt.

Hence,

x_{t+dt} = x_t + v_t dt

You can see that this is an approximation, because v_t is assumed constant over the timestep dt. If dt is small enough, this is a reasonable assumption for many systems. The assumption that v_t remains constant over the timestep leads to the equation:

v_{t+dt} = v_t

The above equations, written in terms of the state vector, are:

s[1]_{t+dt} = s[1]_t + s[2]_t dt
s[2]_{t+dt} = s[2]_t

These two state vector equations can be written in matrix form to give

s_{t+dt} = F_t s_t

where F_t =

|1 dt|
|0  1|


This is a deterministic process model describing the evolution of the object over a timestep dt.
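As a quick numerical sanity check of that deterministic model (the function name is mine, not from the post): applying F_t to a state really does just slide the position along by v·dt while leaving the velocity untouched.

```python
import numpy as np

def advance(s, dt):
    """Apply the deterministic process model s_{t+dt} = F_t s_t."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # the transition matrix above
    return F @ s

s = np.array([0.0, 2.0])   # at x = 0, moving with v = 2
s1 = advance(s, 0.5)       # x becomes 0 + 2 * 0.5 = 1; v stays 2
```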

Now, let's assume that the object is not moving with exactly constant speed but that it fluctuates slightly about some average speed. A Gaussian distribution (yep, that bell shaped curve, also called the Normal Distribution) can be used to represent this fluctuation.

In one dimension, a Gaussian distribution has the equation

N(m, v) = (1 / sqrt(2*pi*v)) * exp{ -(x - m)^2 / (2v) }

where m represents the mean of the distribution and v the variance. The two parameters, m and v, are said to characterise the distribution. If these two parameters are known, then the entire function is known over the domain of x.
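If you want to evaluate this density numerically (say, to weight candidate tiles by how likely the unit is to be there), a direct transcription, with v as the variance, looks like this:

```python
import math

def gaussian_pdf(x, m, v):
    """Density of N(m, v) at x, where v is the VARIANCE (sigma = sqrt(v))."""
    return math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)
```

The peak value at the mean is 1/sqrt(2*pi*v), and the curve is symmetric about m, which is a handy check that the transcription is right.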

So let's use this Gaussian distribution by taking a sample from it at time t and adding it to the velocity. This 'sampling distribution' needs to have a zero mean, so that the average velocity of the object will still be given by v. The variance is arbitrary and the larger it is, the more variability there will be in the velocity in any given timestep.

Let's also add some noise to the position of the object. This noise is added to take into account the fact that the position at the next time step is not given by the simple Euler equation above, but rather by an integral equation. In other words, the noise can be used to 'explain' the error between the model of motion of the body and the actual motion of the body.

So, there are two noise variables to add: one for the position equation and one for the velocity equation. These two components can be made into a 'noise vector' and added to the state equation, giving:

s_{t+dt} = F_t s_t + w_t

where w_t = [w1 w2]', the two noise samples taken from the two Gaussian distributions (one for position noise and one for velocity noise).

Now, it turns out that if the object starts at a known point (i.e., its position is known exactly), then according to the process model above its position after a timestep will be given by a Gaussian distribution centered on the position that the object would have moved to if it had purely deterministic motion; that is, if there was no noise added to the model. This is a VERY NICE property of this model. What's more, the product of two Gaussian distributions (i.e., N1*N2) is also a Gaussian distribution, up to normalisation (the Gaussian family is closed under multiplication). This property means that after a second timestep the position will again be given by a Gaussian distribution, centered again on the position predicted by the deterministic model. Additionally, the velocity distribution will also be Gaussian.

So, the question remains, how are these Gaussian distributions (for the position and velocity) computed? Well, I'm going to leave that for another post (perhaps later today or tomorrow). Before going on to that post, you should make sure you understand the above. If anyone has any specific questions about the above explanation, please post them here.

Cheers,

Timkin

Edited by - Timkin on October 16, 2001 11:11:32 PM

This topic is closed to new replies.
