# Newtonian flight


## Recommended Posts

I am building a 2-dimensional (top-down) newtonian flight sim. I have reached the point of developing the AI (just a simple queued order system for now), but the equations for a couple of the physics problems are eluding me at the moment.

Taking the simplest case, I have a target position, and a ship, which is at rest, facing the target. The goal of this exercise is for the ship to end up at rest at the target location. The ship can only accelerate in the forwards direction, and it can either accelerate or not at a fixed rate of acceleration, A. The ship also has a fixed turn rate, R.

Now, to reach the goal, the ship must accelerate at rate A for some time interval t1, turn around to face the opposite direction (which takes time t2), and then decelerate at the same rate A for some time interval t3 = t1. Obviously, t2 = pi/R, the time to rotate by pi radians, but I don't have a clear handle on how to calculate t1. Can anyone offer insight into how best to proceed?
Edit: if we divide the distance into the segments d1, d2 and d3, which are the distances traveled in each of t1, t2 and t3, then from the basic equations of motion we have:

d1 = 0.5*A*t1^2
d2 = A*t1*t2
d3 = d1

therefore:

d = A*t1^2 + A*t1*t2

which gives the quadratic equation:

t1^2 + t1*t2 - d/A = 0

so:

t1 = [ -t2 ± sqrt(t2^2 + 4*d/A) ] / 2

and t1 must be positive, so finally:

t1 = [ -t2 + sqrt(t2^2 + 4*d/A) ] / 2

This works nicely, although I seem to have a bit of error somewhere; it might just be floating-point error.

[Edited by - swiftcoder on October 22, 2008 4:38:06 PM]
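For anyone implementing this, the result is easy to sanity-check numerically. Here is a minimal sketch (the function names are mine, not from the sim):

```python
import math

def accel_time(d, A, R):
    """Thrust time t1 for the burn-turn-burn maneuver.

    d: distance to the target, A: thrust acceleration, R: turn rate.
    The turn takes t2 = pi/R, and t1 solves t1^2 + t1*t2 - d/A = 0.
    """
    t2 = math.pi / R
    return (-t2 + math.sqrt(t2 * t2 + 4.0 * d / A)) / 2.0

def total_distance(t1, A, R):
    """Distance covered: accelerate for t1, coast through the turn, brake for t1."""
    t2 = math.pi / R
    return 0.5 * A * t1**2 + A * t1 * t2 + 0.5 * A * t1**2
```

For example, with d = 100, A = 2, R = 1, `total_distance(accel_time(100, 2, 1), 2, 1)` recovers 100 to within floating-point error, so any residual error in the sim is more likely to come from the discrete timestep than from the formula itself.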

##### Share on other sites
OK, so apparently my rusty physics skills were up to the first job [wink]

In the current setup I bring the spaceship to a complete stop before applying that procedure to calculate the new trajectory. However, what if I want to change my target in the middle of a movement (or my target moves)? It seems I need a version that takes the current velocity into account.

However, this appears to need some calculus. It will take time to rotate enough to modify the current velocity to match the new direction vector, but the new direction vector is dependent on how long that rotation takes. We could just pick an approximate v1, and thrust until our velocity is in the direction of the target, but this might be a little fragile.

Are there any general approaches to this problem?

##### Share on other sites
There really isn't a single solution to the acceleration time. For instance, if the ship wanted to, it could accelerate for only a fraction of a second 't1' and reach some distance 'd1' before it stopped accelerating. This would cause it to travel extremely slowly, but in the absence of friction it would never slow down, and would essentially have plenty of time to turn around. It would then just float through space until it was 'd1' units away from the target, at which point it would accelerate again for 't1' seconds to slow down and stop.

The problem is essentially symmetric; the only constraint on 't1' and 'd1' is that the ship must have enough time to turn around before it's 'd1' units away from the target. This introduces an upper bound on the amount of time you can accelerate, at which point you have to immediately turn around, and immediately accelerate again once you're facing the opposite direction. Setting 't2' to PI/R in the equation you derived will calculate this upper bound. The equation also implies that 't2' can increase without limit (and hence 't1' also increases) and there will always be a solution, although the amount of time it takes the ship to reach the target will also increase (my opening example assumed that 't2' was quite large).

As for your second problem, if you need to worry about a dynamic target then a proper dynamics solution would probably be the easiest to work with. The problem with a straight Calculus approach is that it doesn't take into account constraints such as minimum and maximum linear/angular velocity, acceleration, etc. (at least without getting in to some really advanced math). It tells you what your ship needs to be doing, without regard for what it actually can do. Check out Steering Behaviors For Autonomous Characters by Craig Reynolds; it covers both the logic and locomotion of a simple dynamics approach.

##### Share on other sites
Quote:
 Original post by swiftcoder: Are there any general approaches to this problem?

That's a dangerous question. ;-) The answer is Optimal Control. Unfortunately, this is something of a swiss-army-chainsaw of a mathematical theory which people normally don't study until graduate school (and I am still learning myself)! But certain parts of it really aren't that scary, so maybe we can use some of it here.

First, we need to define your problem carefully though. What do you want? My guess is that you are given an initial position, velocity, and orientation for your ship, and a goal position, and want to get to that goal position as quickly as possible and arrive at zero velocity. Your final orientation, I assume, does not matter. (Or do you want to arrive with a particular heading as well?) Are these assumptions correct?

OK. Now, let's think about this from the point of view of a state space. Assuming I understand the problem correctly, your ship has a 5-d state,

Position (x, y)
Velocity (vx, vy)
Orientation (theta).

It also has two control inputs:

Thruster (a -- takes two discrete values: A,0)
Steering (omega -- takes three discrete values: -R, +R, 0).

Finally, the dynamics -- this describes how the state of the ship changes in time -- are,

(d/dt)x = vx
(d/dt)y = vy
(d/dt)vx = a cos(theta)
(d/dt)vy = a sin(theta)
(d/dt)theta = omega

Now, there are a lot of ways to solve this problem. One is to discretize the problem (i.e., divide up the state space -- which is continuous -- into a grid) and basically "pathfind" in state space, using A* or something like it (e.g., a plain old breadth-first search). There are more sophisticated approaches as well, but this is the one I'd suggest.
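To make the discretization concrete, here is a minimal sketch of the two pieces such a planner needs: a forward-simulation step of the dynamics above, and a function that buckets a continuous state onto a grid so visited states can be compared. The names, grid resolutions, and the explicit-Euler integrator are all illustrative choices, not anything prescribed here:

```python
import math

def step(state, a, omega, dt):
    """One explicit-Euler step of the ship dynamics:
    x' = vx, y' = vy, vx' = a cos(theta), vy' = a sin(theta), theta' = omega."""
    x, y, vx, vy, theta = state
    return (x + vx * dt,
            y + vy * dt,
            vx + a * math.cos(theta) * dt,
            vy + a * math.sin(theta) * dt,
            theta + omega * dt)

def grid_key(state, res=(1.0, 1.0, 0.25, 0.25, math.pi / 8)):
    """Bucket a continuous 5-d state onto a grid cell, so a search can treat
    'close enough' states as already visited."""
    return tuple(round(s / r) for s, r in zip(state, res))
```

A breadth-first search would then expand each cell by applying `step` with each of the six (a, omega) control pairs; note that a simulated step is not guaranteed to land exactly on another grid point, which is why the bucketing is needed at all.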

Of course, all this supposes that you really care about having optimal maneuvers. If not, then there are lots of suboptimal strategies.

[Edited by - Emergent on October 25, 2008 2:45:36 PM]

##### Share on other sites
Quote:
 Original post by Zipster: There really isn't a single solution to the acceleration time. For instance, if the ship wanted to, it could accelerate for only a fraction of a second 't1' and reach some distance 'd1' before it stopped accelerating. This would cause it to travel extremely slowly, but in the absence of friction it would never slow down, and would essentially have plenty of time to turn around.
Right, I should have specified that I was looking for an optimal (or near optimal) solution. Thanks for confirming my solution.

Quote:
 Check out Steering Behaviors For Autonomous Characters by Craig Reynolds; it covers both the logic and locomotion of a simple dynamics approach.
I have seen that page before, and used the same approach for some simple steering behaviours. However, I am having trouble mapping that approach onto my case where the ship can only accelerate forwards, and it takes time to turn. All of those steering behaviours require that accelerations can be applied instantaneously in any direction. Any insight into how I could modify them to approximate my case?

Quote:
Original post by Emergent
Quote:
 Original post by swiftcoder: Are there any general approaches to this problem?
That's a dangerous question. ;-) The answer is Optimal Control. Unfortunately, this is something of a swiss-army-chainsaw of a mathematical theory which people normally don't study until graduate school!
I understand the concept, but I'm pretty sure my undergrad math isn't up to the general case ;)

Quote:
 Your final orientation, I assume, does not matter. (Or do you want to arrive with a particular heading as well?) Are these assumptions correct?
All correct.

Quote:
 Now, there are a lot of ways to solve this problem. One is to discretize the problem (i.e., divide up the state space -- which is continuous -- into a grid) and basically "pathfind" in state space, using A* or something like it (e.g., a plain old breadth-first search). There are more sophisticated approaches as well, but this is the one I'd suggest.
Can you elaborate a little on the process of discretization? From what I understand, I set up a grid, sample the equation at each intersection, and use these samples to approximate a solution. If that is correct, then it sounds like I need a 5-d grid, and 5-d pathfinding.

Quote:
 Of course, all this supposes that you really care about having optimal maneuvers. If not, then there are lots of suboptimal strategies.
Optimal would be nice, but ultimately this is for a fairly fast-paced space sim, so the appearance of optimality is sufficient. As long as the spaceship can reach its target in a reasonably close to optimal amount of time, I doubt anyone will quibble. Can you point me towards a few non-optimal strategies? Presumably non-optimal strategies can be simpler, so if the results look good enough...

##### Share on other sites
In an effort to come up with a simple approximation, I am using a steering behaviour which predicts positions a fixed time into the future, and chases the predicted point:
```python
def chase(ship, target):
    while 1:
        d = 3.0  # fixed time interval

        p1 = ship.position + ship.velocity*d      # predict ship position in the future
        p2 = target.position + target.velocity*d  # predict target position in the future

        diff = p2 - p1  # grab the vector between predicted positions

        ship.turn_toward(diff)  # turn the ship towards the predicted direction

        if abs(angle(diff) - ship.rotation) < pi/2:
            ship.thrust(dt)  # thrust if we are within 90 degrees of the predicted direction

        yield
```

This works surprisingly well, but I wonder if there is a (simple) method to improve on my choice of a constant time into the future to predict?

If I set d to a large constant, the ship will match velocities with the target, but will never match positions (it comes to a stop some distance away from the target). Small values of d (such as the 3.0 shown here) work fine up close, but if the ship is initially far away from the target, it will take a long time to match velocities (resulting in a stern chase).

It also only works between 2 entities which are moving - in particular it doesn't stop very nicely when given a static target (does loop-the-loops during its approach). Can you think of a simple way to match arbitrary velocities?

##### Share on other sites
Unfortunately I can't be a ton of help since it's a hard problem. Finding good discretizations for the Hamilton-Jacobi-Bellman equation is really nontrivial. Yes, this is pathfinding on a 5d grid essentially, but you might not want the usual "grid." (If you're at one state on a grid and simulate forward by a timestep, who's to say that you end up at another state on the grid?) You could look at Value Iteration and maybe even reinforcement learning as well, since those are both ways to solve the HJBE -- sort of. But there's no really good universal method, unfortunately -- at least that I'm aware of.


You also might want to check out Steve LaValle's planning book which is freely available online (and on the whole quite good). This I really do recommend.

As for suboptimal strategies... I was basically thinking steering behaviors, like you're already doing. :-)
[Edited by - Emergent on October 23, 2008 11:56:14 PM]

##### Share on other sites
Quote:
 Original post by Emergent: You also might want to check out Steve LaValle's planning book which is freely available online (and on the whole quite good). This I really do recommend.
That is an incredible resource - I can't thank you enough! Now I am going to retreat into my shell and read it all ;)

##### Share on other sites
Quote:
 Original post by Emergent: As for suboptimal strategies... I was basically thinking steering behaviors, like you're already doing. :-)
And they turned out to work very well, with a little creative fudging [smile]

If we replace my fixed-interval prediction from my earlier post:
		d = 3.0
With the following calculated interval:
		d = len(ship.velocity)/ship.acceleration + len(target.velocity)/target.acceleration
Everything works perfectly, both for moving and static targets. Why, I have no idea - I came up with that equation through trial and error, although it roughly equates to the sum of the times it would take each of them to decelerate to a stop.
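For completeness, here is a self-contained sketch of the behaviour with the computed interval, using complex numbers as 2-d vectors. The `Ship` class, the timestep, and the turn-clamping logic are illustrative stand-ins for the game's real entities, and the target's deceleration term is dropped because the demo target is static:

```python
import cmath, math

class Ship:
    """Minimal stand-in for the sim's ship; complex numbers act as 2-d vectors."""
    def __init__(self, position, accel=1.0, turn_rate=2.0):
        self.position = position   # complex: x + y*1j
        self.velocity = 0j
        self.rotation = 0.0        # heading, radians
        self.accel = accel
        self.turn_rate = turn_rate

    def update(self, target_pos, dt):
        # prediction interval: time to decelerate our current speed to zero
        d = abs(self.velocity) / self.accel
        predicted = self.position + self.velocity * d
        diff = target_pos - predicted
        # signed heading error in [-pi, pi), turned at most turn_rate*dt per step
        err = (cmath.phase(diff) - self.rotation + math.pi) % (2*math.pi) - math.pi
        self.rotation += max(-self.turn_rate*dt, min(self.turn_rate*dt, err))
        # thrust only when within 90 degrees of the predicted direction
        if abs(err) < math.pi/2:
            self.velocity += cmath.rect(self.accel*dt, self.rotation)
        self.position += self.velocity * dt
```

Run against a static target this closes the distance, flips, and brakes instead of orbiting, which matches the behaviour described above (for a static target the target's time-to-stop term is zero).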

##### Share on other sites
So, I've given some more thought to this problem, and some ideas are coming back to me.... Here's another approach to the optimal control problem...

(I'll refer to some equations and conditions here that I don't define; that's because I don't remember their general form; they're in a notebook in my office. For now I'll put a star next to these, and I'll post back with them later.)

Let's solve the min-time "stabilization" problem: We want to drive (x, y, vx, vy) to zero in minimum time. Assuming that the player doesn't maneuver (maybe not a good assumption, but this was the initial assumption, right?), any problem can be reduced to this by working in the frame in which the player is at the origin and stationary.

To give you a mental picture, what we WANT are "switching surfaces" in state space, which define regions; inside a given region the optimal control is some constant. We'll approximate this by finding what the optimal control is at lots of points in state space (not on a grid), ahead of time, and then while the game is running, use the nearest precomputed neighbor. That's the general idea. Here's one way to do it:

1. Find the costate equation.*

2. For each admissible final costate (every costate that satisfies the transversality condition*) and final state (i.e., sweep theta), simulate the state and costate backwards in time, for, say, ten seconds. As you do this, at every instant, the optimal control is the one that minimizes the Hamiltonian.* You simply need to plug each of the six possible controls (2 thrust options; 3 turning options) into the Hamiltonian and use whichever one makes it smallest.

3. Store in some data structure of points (in state space) the information, "at (x,y,vx,vy,theta), the optimal control is 'u.'" Maybe this is a K-D tree. (You store this for all points in all the trajectories that you simulate in #2).

(If this sounds computationally expensive, it's because it is! Do all this offline ahead of time.)

Then, when your ship is actually running, look at its state, and find the closest state in the data structure that we created in #3, and use that control.
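The runtime lookup itself can start as a brute-force nearest neighbour and be swapped for a k-d tree when the table grows. A minimal sketch (the record format is an assumption):

```python
def nearest_control(table, state):
    """table: list of (state_tuple, control) pairs precomputed offline.
    Returns the control stored at the nearest sampled state, by squared
    Euclidean distance (a k-d tree would replace this linear scan)."""
    best_state, best_u = min(
        table, key=lambda rec: sum((a - b) ** 2 for a, b in zip(rec[0], state)))
    return best_u
```

In practice the angular component theta should be compared modulo 2π, and the state components scaled so that a unit of position error and a radian of heading error are weighted sensibly against each other.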

The only thing that I've glossed over here is that sometimes a trajectory that starts (really, ends) at one angle might "get to" the same point in state space as a trajectory that starts at another angle, in which case, we want to use whichever trajectory gets there sooner. The problem is that trajectories will never really collide exactly, since this is all numerical, and sampled. So I guess the best way is to do this in a breadth-first style, simulating all trajectories by deltaT, and deciding if any hits earlier points of another trajectory (to some tolerance). If any does, you keep the points from it that you've gotten so far, but don't simulate it any more.

[Edited by - Emergent on October 25, 2008 3:26:13 PM]

##### Share on other sites
Quote:
 Original post by Emergent: Assuming that the player doesn't maneuver (maybe not a good assumption, but this was the initial assumption, right?), any problem can be reduced to this by working in the frame in which the player is at the origin and stationary.
Sounds good - I think (but haven't checked) that even if the player can manoeuvre, the problem should be decomposable to the stationary case.
Quote:
 1. Find the costate equation.*

 2. For each admissible final costate (every costate that satisfies the transversality condition*) and final state (i.e., sweep theta), simulate the state and costate backwards in time, for, say, ten seconds. As you do this, at every instant, the optimal control is the one that minimizes the Hamiltonian.* You simply need to plug each of the six possible controls (2 thrust options; 3 turning options) into the Hamiltonian and use whichever one makes it smallest.

 3. Store in some data structure of points (in state space) the information, "at (x,y,vx,vy,theta), the optimal control is 'u.'" Maybe this is a K-D tree. (You store this for all points in all the trajectories that you simulate in #2.)
Right, that seems pretty logical - not sure I can derive the costate equation though.

I am very surprised how well my trial/error steering behaviour works, so I think for this particular project I will forgo the analytical approach. Still very interested though, as I would like to build a more complex simulation in the future.

##### Share on other sites
Quote:
 Original post by swiftcoder: I am very surprised how well my trial/error steering behaviour works, so I think for this particular project I will forgo the analytical approach. Still very interested though, as I would like to build a more complex simulation in the future.

Wow, that was quick! Yeah, I've been pretty much assuming that you have the problem solved to your satisfaction by your steering behavior and that this is all academic -- so I was doing this mainly to satisfy my own interest (and for anybody else who reads this, since people don't talk about optimal control much in this forum but it seems useful).

Also, while you were replying, I edited my previous post; there's a new paragraph at the end that starts "The only thing that I've glossed over here..." While I've actually used the algorithm I've described to solve a problem with a single goal state [it was just how to stabilize a simple harmonic oscillator in minimum time with force inputs of plus or minus 1 (it's more complicated than I'd have thought!)], I haven't done this for a terminal manifold (what we have here); that's why I had to add that last paragraph.

Anyway, I'll post back with the details later. I don't think the costate equations will be that bad; there's a nice general solution; I just don't remember it off the top of my head.

##### Share on other sites
Yegods, I'm an idiot. Almost all the information I needed was staring me in the face on the Wikipedia page. I don't have the transversality condition on me (sorry, I forgot to check my notebook at the office. Tomorrow!), but we can do the rest...

We want to minimize the total time this takes, so the cost is just the integral of the constant function equal to 1 for all time (this integrates to t). That is, L = 1. And recall (defining X = [x, y, vx, vy, θ]^T),

f(X) = (d/dt) X

is defined by

(d/dt)x = vx
(d/dt)y = vy
(d/dt)vx = a cos(θ)
(d/dt)vy = a sin(θ)
(d/dt)θ = ω

so the Hamiltonian is (where λ = [λ1, λ2, ..., λ5]^T),

H = L + λ^T f(X)
  = 1 + λ1 vx + λ2 vy + λ3 a cos(θ) + λ4 a sin(θ) + λ5 ω

where

(d/dt)λ = -dH/dX

or, elementwise,

(d/dt)λ1 = -dH/dx = 0
(d/dt)λ2 = -dH/dy = 0
(d/dt)λ3 = -dH/dvx = -λ1
(d/dt)λ4 = -dH/dvy = -λ2
(d/dt)λ5 = -dH/dθ = λ3 a sin(θ) - λ4 a cos(θ)

and of course we also know the differential equation for the state, so together with the costate equations (above), we have a coupled system of 2*5 = 10 scalar differential equations (or, a 10-d vector differential equation), which we can simulate backwards through time from a goal state (and any admissible final costate at that goal state).

(Also, remember that at any instant, we choose the (a, ω) that minimizes the Hamiltonian.)
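As a concrete sketch, one backward-integration step of that scheme might look like this. The Euler integrator and the constants are illustrative; the costate derivative implements λ' = -dH/dX for the Hamiltonian above:

```python
import math

A_MAX, R_MAX = 1.0, 1.0   # example thrust and turn-rate limits
CONTROLS = [(a, w) for a in (0.0, A_MAX) for w in (-R_MAX, 0.0, R_MAX)]

def hamiltonian(X, lam, a, w):
    """H = 1 + l1 vx + l2 vy + l3 a cos(th) + l4 a sin(th) + l5 w."""
    x, y, vx, vy, th = X
    l1, l2, l3, l4, l5 = lam
    return 1.0 + l1*vx + l2*vy + l3*a*math.cos(th) + l4*a*math.sin(th) + l5*w

def backward_step(X, lam, dt):
    """Pick the control minimizing H, then Euler-integrate the coupled
    state/costate system one step backwards in time."""
    a, w = min(CONTROLS, key=lambda u: hamiltonian(X, lam, *u))
    x, y, vx, vy, th = X
    l1, l2, l3, l4, l5 = lam
    dX = (vx, vy, a*math.cos(th), a*math.sin(th), w)                    # state dynamics
    dlam = (0.0, 0.0, -l1, -l2, l3*a*math.sin(th) - l4*a*math.cos(th))  # -dH/dX
    X = tuple(s - ds*dt for s, ds in zip(X, dX))        # minus: backwards in time
    lam = tuple(s - ds*dt for s, ds in zip(lam, dlam))
    return X, lam, (a, w)
```

Starting from a goal state and an admissible final costate, repeatedly calling `backward_step` and recording the visited (X, (a, ω)) pairs builds the lookup table described above.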

This gives us almost everything we need. All that's left is to know which final costates go with which goal states; that's given by the transversality condition, and I'll post back with that later.

AWESOME! This stuff is coming back. (I should really have remembered this stuff better, but if I'm relearning it now, I'll be happy with that!)

[Edited by - Emergent on October 27, 2008 7:43:30 PM]

##### Share on other sites
Ok, FINALLY:

The transversality condition with free end times is,

(L + dφ/dT + λ^T f)|t=T = 0

where the only thing you haven't seen before is φ, the terminal cost (which in this case we do not have, so φ=0). And since also, in our case, L=1, we have,

1 + (λ^T f)|t=T = 0

or -- and if I could put this in a big bold box, I would,

(λ^T f)|t=T = -1 .

This is the pure min-time transversality condition. In the case of our asteroids spaceship,

(λ1 vx + λ2 vy + λ3 a cos(θ) + λ4 a sin(θ) + λ5 ω)|t=T = -1

w00t!

[Edited by - Emergent on October 28, 2008 3:54:50 PM]

##### Share on other sites
Oh! And since at time t=T, we require x=y=vx=vy=0, this reduces to,

(λ3 a cos(θ) + λ4 a sin(θ) + λ5 ω)|t=T = -1

This characterizes the optimal final costates.

But finally, there's one thing to note, which is a restatement of this: Since f and L are not time-varying, the Hamiltonian H is constant along optimal trajectories. ~~Since from the above, and the definition of the Hamiltonian, we see that H|t=T = 0, and we know that H is constant, then we know that, in fact, H = 0 along optimal trajectories.~~
[EDIT: The struck statement above is only true when dH/du = 0. This is what occurs when we minimize H with respect to u when u is continuous and unconstrained. But in our case, u can only take one of a few discrete values, and there is actually no guarantee that the Hamiltonian is zero along optimal trajectories -- just minimized.]

Ah hah! Wow; how had I forgotten that?

So, FINALLY, this gives us an algorithm (sort of... I've left out handling trajectories that intersect each other. When this happens, you want to discard the slower trajectory).
- For every final u = (a, ω)  // there are six
    - Solve H = 0 for λ_final  // numerically, perhaps (e.g., with Newton-Raphson)
    - For each λ_final found above [there are multiple (infinite, actually) solutions], do:
        - Using this λ_final and your goal X_final, simulate backwards in time for, say,
          10 seconds. At every timestep of this simulation, use whichever u minimizes
          the Hamiltonian. And record the (X, u) pairs in some data structure as
          outlined before.

So, I've left out how to deal with trajectories that intersect each other, but hopefully this gives the general idea.

[Edited by - Emergent on November 3, 2008 6:30:54 PM]

##### Share on other sites
Quote:
 Original post by Emergent: So, I've left out how to deal with trajectories that intersect each other, but hopefully this gives the general idea.
Sweet, that actually works out a fair bit simpler than I expected.

##### Share on other sites
Quote:
 Original post by swiftcoder: Sweet, that actually works out a fair bit simpler than I expected.

:-)

Also, FYI, I just corrected a few mistakes in the algorithm above. And, since I was kind of fuzzy about how to iterate over the values of λfinal that meet the transversality condition, let me clear that up. In this case, here's one way:

Since,

λ3 a cos(θ) + λ4 a sin(θ) + λ5 ω = -1

then let's express (λ3, λ4) in polar coordinates (A, δ), where,

A = a sqrt(λ3^2 + λ4^2)
δ = -atan2(λ4, λ3)

(the sign of δ chosen so that λ3 a cos(θ) + λ4 a sin(θ) = A cos(θ + δ))

so the transversality condition becomes,

A cos(θ + δ) + λ5 ω = -1

or

cos(θ + δ) = -(λ5 ω + 1)/A

and changing coordinates again, defining α=-1/A,

cos(θ + δ) = α λ5 ω + α

There are only two solutions in (0, 2π) for δ. Since cosine is even, they are given by,

δ1 = cos^-1(α λ5 ω + α) - θ
and
δ2 = -cos^-1(α λ5 ω + α) - θ

(each taken mod 2π)

so what we can do is grid over (α, λ5), and for each point solve for δ.
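A sketch of that sweep, with the convention δ = -atan2(λ4, λ3) so that λ3 a cos(θ) + λ4 a sin(θ) = A cos(θ + δ); the function and variable names are mine:

```python
import math

def final_costates(theta, omega, a, alpha, lam5):
    """Solve the transversality condition A cos(theta + delta) = alpha*lam5*omega + alpha
    for delta, then recover (lam3, lam4). alpha = -1/A must be negative, since A > 0.
    Returns a list of (lam3, lam4) candidates (possibly empty)."""
    c = alpha * lam5 * omega + alpha        # required value of cos(theta + delta)
    if alpha >= 0.0 or abs(c) > 1.0:
        return []                           # no admissible solution at this grid point
    A = -1.0 / alpha                        # A = a * sqrt(lam3^2 + lam4^2)
    out = []
    for delta in (math.acos(c) - theta, -math.acos(c) - theta):
        # delta = -atan2(lam4, lam3), so (lam3, lam4) lies at angle -delta
        out.append(((A / a) * math.cos(-delta), (A / a) * math.sin(-delta)))
    return out
```

Gridding over (α, λ5) and calling this for each sampled final heading θ yields the family of admissible final costates to integrate backwards from.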

##### Share on other sites
[EDIT: I made this algorithm up. There are better, more well-established techniques; one in particular is known as Value Iteration. This discussion is continued in swiftcoder's replies and mine to this newer thread. (I.e.: the following post is pretty much obsolete now.)]

A completely different approach

Here, I just made this up, but see if it makes sense. It doesn't use the calculus of variations at all, and is very intuitive. The idea depends on two different discretizations:

1. There are 'particles' in state space.
2. The state space is partitioned into small 'cells;' e.g., a uniform grid of small cubes.

Then, the idea is,

Initialization: Start with a list of particles containing feasible final states. These should be a good 'sample' of the terminal manifold. If there is a single goal state, then this is a single particle. Also, maintain a spatial structure of cells (e.g., a simple grid). Each cell has a corresponding 'cost;' this is initialized to infinity.

Then, iterate the following:

Each particle "splits" into N particles, where N is the number of possible control inputs, and each of these particles is simulated (as x_dot = f(x,u) with the corresponding constant u) until it leaves the cell it began this step in. When it reaches a new cell, if its arrival cost is lower than the cost currently stored on that cell, overwrite the stored cost and keep simulating the particle; otherwise destroy the particle.

Do this until the list of particles is empty.

Now, at the end, each cell contains an approximate value for the Bellman Value Function, V(X).
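To show the mechanics, here is a toy version of the scheme on a 2-d state space (a double integrator: position and velocity, with three thrust settings). All the constants and names are illustrative; the real ship would use the full 5-d state:

```python
import math
from collections import deque

DT, CELL, HORIZON = 0.05, 0.25, 2.0
CONTROLS = (-1.0, 0.0, 1.0)   # thrust settings for the toy 1-d ship

def cell_of(x, v):
    """Partition (position, velocity) space into square cells."""
    return (math.floor(x / CELL), math.floor(v / CELL))

def value_grid(goal=(0.0, 0.0)):
    """Approximate value function: cell -> min time-to-goal, built by
    propagating particles backwards in time from the goal."""
    cost = {cell_of(*goal): 0.0}
    particles = deque([(goal[0], goal[1], 0.0)])
    while particles:
        x, v, t = particles.popleft()
        if t >= HORIZON:
            continue
        for u in CONTROLS:                     # split into one particle per control
            px, pv, pt = x, v, t
            start = cell_of(px, pv)
            # simulate backwards until the particle leaves its starting cell
            while cell_of(px, pv) == start and pt - t < 1.0:
                px -= pv * DT                  # x' = v, integrated backwards
                pv -= u * DT                   # v' = u, integrated backwards
                pt += DT
            c = cell_of(px, pv)
            if pt < cost.get(c, math.inf):     # cheaper arrival: keep it alive
                cost[c] = pt
                particles.append((px, pv, pt))
            # otherwise the particle is destroyed (simply not re-queued)
    return cost
```

The loop terminates because every re-queued particle strictly lowers some cell's stored cost, and there are finitely many reachable cells within the horizon.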

[Edited by - Emergent on February 23, 2009 11:43:10 PM]

##### Share on other sites
Good stuff. Now I just have to find the time to try it out ;)

##### Share on other sites
If you haven't already you should look at a game called Gravity Well. They do exactly what you are talking about. I think they have source code out there somewhere.

For those of us who are a little less math-savvy, could you explain, in simpler terms, the formulas you're using? I'm working on a little home project, and you described perfectly a problem I was running up against too. I would love to see how you resolved it.
