Newtonian flight
I am building a 2-dimensional (top-down) Newtonian flight sim. I have reached the point of developing the AI (just a simple queued-order system for now), but the equations for a couple of the physics problems are eluding me at the moment.
Taking the simplest case, I have a target position, and a ship, which is at rest, facing the target. The goal of this exercise is for the ship to end up at rest at the target location.
The ship can only accelerate in the forwards direction, and it can either accelerate or not at a fixed rate of acceleration, A. The ship also has a fixed turn rate, R.
Now, to reach the goal, the ship must accelerate at rate A for some time interval t1, turn around to face the opposite direction (which takes time t2), and then decelerate at the same rate A for some time interval t3 = t1.
Now, obviously, t2 = pi/R, the time to rotate by pi radians, but I don't have a clear handle on how to calculate t1. Can anyone offer insight into how best to proceed?
Edit: if we divide the distance into the segments d1, d2 and d3 (the distances traveled during t1, t2 and t3 respectively), then from the basic equations of motion we have:
d1 = 0.5*A*t1^2, d2 = A*t1*t2 and d3 = d1
therefore: d = A*t1^2 + A*t1*t2
which gives the quadratic equation: t1^2 + t1*t2 - d/A = 0
so: t1 = [ -t2 ± sqrt(t2^2 + 4*d/A) ]/2
and t1 must be positive, so finally t1 = [ -t2 + sqrt(t2^2 + 4*d/A) ]/2
Which works nicely, although I seem to have a bit of error somewhere, might just be floating point error.
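For reference, the closed-form solution above drops straight into code. A minimal sketch in Python (`d`, `A` and `R` are the distance, acceleration and turn rate from the post; the function name is mine):

```python
import math

def burn_time(d, A, R):
    """Thrust duration t1 for distance d, acceleration A, turn rate R."""
    t2 = math.pi / R  # time to rotate by pi radians
    # positive root of t1^2 + t1*t2 - d/A = 0
    return (-t2 + math.sqrt(t2 * t2 + 4 * d / A)) / 2
```

A handy sanity check against floating-point error: with t1 from this function, the segment distances d1 = d3 = 0.5*A*t1^2 and d2 = A*t1*t2 should sum back to d.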
[Edited by - swiftcoder on October 22, 2008 4:38:06 PM]
OK, so apparently my rusty physics skills were up to the first job [wink]
In the current setup I bring the spaceship to a complete stop before applying that procedure to calculate the new trajectory, however, what if I want to change my target in the middle of movement (or my target moves)? It seems I need a version that takes into account the current velocity:
However, this appears to need some calculus. It will take time to rotate enough to modify the current velocity to match the new direction vector, but the new direction vector is dependent on the time it takes us. We could just pick an approximate v1 and thrust until our velocity is in the direction of the target, but this might be a little frail.
Are there any general approaches to this problem?
There really isn't a single solution for the acceleration time. For instance, the ship could accelerate for only a fraction of a second 't1' and reach some distance 'd1' before it stopped accelerating. It would then travel extremely slowly, but in the absence of friction it would never slow down, and would have plenty of time to turn around. It would just float through space until it was 'd1' units away from the target, at which point it would accelerate again for 't1' seconds to slow down and stop. The problem is essentially symmetric; the only constraint on 't1' and 'd1' is that the ship must have enough time to turn around before it is 'd1' units away from the target. This gives an upper bound on the amount of time you can accelerate, at which point you have to turn around immediately and accelerate again as soon as you're facing the opposite direction. Setting 't2' to PI/R in the equation you derived calculates this upper bound. The equation also implies that 't2' can increase without limit (and hence 't1' increases too) and there will always be a solution, though the time it takes the ship to reach the target increases as well (my opening example assumed that 't2' was quite large).
As for your second problem, if you need to worry about a dynamic target then a proper dynamics solution would probably be the easiest to work with. The problem with a straight Calculus approach is that it doesn't take into account constraints such as minimum and maximum linear/angular velocity, acceleration, etc. (at least without getting into some really advanced math). It tells you what your ship needs to be doing, without regard for what it actually can do. Check out Steering Behaviors For Autonomous Characters by Craig Reynolds; it covers both the logic and locomotion of a simple dynamics approach.
Quote:Original post by swiftcoder
Are there any general approaches to this problem?
That's a dangerous question. ;-) The answer is Optimal Control. Unfortunately, this is something of a swiss-army-chainsaw of a mathematical theory which people normally don't study until graduate school (and I am still learning myself)! But certain parts of it really aren't that scary, so maybe we can use some of it here.
First, we need to define your problem carefully though. What do you want? My guess is that you are given an initial position, velocity, and orientation for your ship, and a goal position, and want to get to that goal position as quickly as possible and arrive at zero velocity. Your final orientation, I assume, does not matter. (Or do you want to arrive with a particular heading as well?) Are these assumptions correct?
OK. Now, let's think about this from the point of view of a state space. Assuming I understand the problem correctly, your ship has a 5-d state,
Position (x, y)
Velocity (vx, vy)
Orientation (theta).
It also has two control inputs:
Thruster (a -- takes two discrete values: A,0)
Steering (omega -- takes three discrete values: -R, +R, 0).
Finally, the dynamics -- this describes how the state of the ship changes in time -- are,
(d/dt)x = vx
(d/dt)y = vy
(d/dt)vx = a cos(theta)
(d/dt)vy = a sin(theta)
(d/dt)theta = omega
Now, there are a lot of ways to solve this problem. One is to discretize the problem (i.e., divide up the state space -- which is continuous -- into a grid) and basically "pathfind" in state space, using A* or something like it (e.g., a plain old breadth-first search). There are more sophisticated approaches as well, but this is the one I'd suggest.
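To make the state-space idea concrete, here is one Euler step of the dynamics above, as a search in state space would apply it (a sketch; the function and argument names are mine, not from the thread):

```python
import math

def step(state, a, omega, dt):
    """One Euler step of the 5-d ship state (x, y, vx, vy, theta).
    a is the thruster input (A or 0); omega is the steering input (-R, 0 or +R)."""
    x, y, vx, vy, theta = state
    return (x + vx * dt,
            y + vy * dt,
            vx + a * math.cos(theta) * dt,
            vy + a * math.sin(theta) * dt,
            theta + omega * dt)
```

A breadth-first or A* search would repeatedly apply this step with each of the six control combinations from a given cell, snapping the resulting states back onto the grid.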
Of course, all this supposes that you really care about having optimal maneuvers. If not, then there are lots of suboptimal strategies.
[Edited by - Emergent on October 25, 2008 2:45:36 PM]
Quote:Original post by Zipster
There really isn't a single solution to the acceleration time. For instance, if the ship wanted to, it could accelerate for only a fraction of a second 't1' and reach some distance 'd1' before it stopped accelerating. This would cause it to travel extremely slowly, but in the absence of friction it would never slow down, and would essentially have plenty of time to turn around.
Right, I should have specified that I was looking for an optimal (or near optimal) solution. Thanks for confirming my solution.
Quote:Check out Steering Behaviors For Autonomous Characters by Craig Reynolds, it covers both the logic and locomotion of a simple dynamics approach.
I have seen that page before, and used the same approach for some simple steering behaviours. However, I am having trouble mapping that approach onto my case, where the ship can only accelerate forwards and takes time to turn. All of those steering behaviours assume that accelerations can be applied instantaneously in any direction. Any insight into how I could modify them to approximate my case?
Quote:Original post by Emergent
Quote:Original post by swiftcoder
Are there any general approaches to this problem?
That's a dangerous question. ;-) The answer is Optimal Control. Unfortunately, this is something of a swiss-army-chainsaw of a mathematical theory which people normally don't study until graduate school!
I understand the concept, but I'm pretty sure my undergrad math isn't up to the general case ;)
Quote:Your final orientation, I assume, does not matter. (Or do you want to arrive with a particular heading as well?) Are these assumptions correct?
All correct.
Quote:Now, there are a lot of ways to solve this problem. One is to discretize the problem (i.e., divide up the state space -- which is continuous -- into a grid) and basically "pathfind" in state space, using A* or something like it (e.g., a plain old breadth-first search). There are more sophisticated approaches as well, but this is the one I'd suggest.
Can you elaborate a little on the process of discretization? From what I understand, I set up a grid, sample the equation at each intersection, and use these samples to approximate a solution. If that is correct, then it sounds like I need a 5-d grid, and 5-d pathfinding.
Quote:Of course, all this supposes that you really care about having optimal maneuvers. If not, then there are lots of suboptimal strategies.
Optimal would be nice, but ultimately this is for a fairly fast-paced space sim, so the appearance of optimality is sufficient. As long as the spaceship reaches its target in a reasonably close-to-optimal time, I doubt anyone will quibble. Can you point me towards a few non-optimal strategies? Presumably they can be simpler, so if the results look good enough...
In an effort to come up with a simple approximation, I am using a steering behaviour which predicts positions a fixed time into the future, and chases the predicted point:
chase(ship, target):
    while 1:
        d = 3.0                                    # fixed time interval
        p1 = ship.position + ship.velocity*d       # predict ship position in the future
        p2 = target.position + target.velocity*d   # predict target position in the future
        diff = p2 - p1                             # vector between the predicted positions
        ship.turn_toward(diff)                     # turn the ship towards the predicted direction
        if abs(angle(diff) - ship.rotation) < pi/2:
            ship.thrust(dt)                        # thrust if within 90 degrees of the predicted direction
        yield
Unfortunately I can't be a ton of help since it's a hard problem. Finding good discretizations for the Hamilton-Jacobi-Bellman equation is really nontrivial. Yes, this is pathfinding on a 5d grid essentially, but you might not want the usual "grid." (If you're at one state on a grid and simulate forward by a timestep, who's to say that you end up at another state on the grid?) You could look at Value Iteration and maybe even reinforcement learning as well, since those are both ways to solve the HJBE -- sort of. But there's no really good universal method, unfortunately -- at least that I'm aware of.
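To make "Value Iteration" concrete, here is a toy version on a 1-d discretized min-time problem (my own illustrative example, nothing like the full 5-d ship problem, but the same backward-propagation of cost-to-go):

```python
def value_iteration(N, sweeps=100):
    """Minimum steps to reach state 0 on a 1-d chain of states 0..N.
    Actions: move left, move right, or stay; each step costs 1."""
    INF = float('inf')
    # V[s] is the current estimate of the cost-to-go from state s.
    V = [0.0 if s == 0 else INF for s in range(N + 1)]
    for _ in range(sweeps):
        for s in range(1, N + 1):
            neighbours = [s - 1, min(s + 1, N), s]
            # Bellman update: best one-step cost plus cost-to-go of successor.
            V[s] = min(1 + V[n] for n in neighbours)
    return V
```

The same update applied over a 5-d grid of ship states, with the six discrete controls as "actions" and time as the cost, is one way to approximate the HJB solution numerically.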
(Maybe take a look at this...)
You also might want to check out Steve LaValle's planning book which is freely available online (and on the whole quite good). This I really do recommend.
As for suboptimal strategies... I was basically thinking steering behaviors, like you're already doing. :-)
[Edited by - Emergent on October 23, 2008 11:56:14 PM]
Quote:Original post by Emergent
You also might want to check out Steve LaValle's planning book which is freely available online (and on the whole quite good). This I really do recommend.
That is an incredible resource - I can't thank you enough! Now I am going to retreat into my shell and read it all ;)
Quote:Original post by Emergent
As for suboptimal strategies... I was basically thinking steering behaviors, like you're already doing. :-)
And they turned out to work very well, with a little creative fudging [smile]
If we replace my fixed-interval prediction from my earlier post:
d = 3.0
With the following calculated interval:
d = len(ship.velocity)/ship.acceleration + len(target.velocity)/target.acceleration
Everything works perfectly, both for moving and static targets. Why, I have no idea - I came up with that equation through trial and error, although it roughly equates to the sum of the time it would take each of them to decelerate to a stop.
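For what it's worth, that empirically found horizon is just the two stopping times summed, which a helper can compute (a sketch; the function and parameter names are mine):

```python
def predict_interval(ship_speed, ship_accel, target_speed, target_accel):
    """Prediction horizon d: each body's time to brake from its current
    speed to a stop, summed.  Found empirically in the post above."""
    return ship_speed / ship_accel + target_speed / target_accel
```

Feeding this into the chase routine in place of the fixed d = 3.0 reproduces the behaviour described: a long horizon far away (so the ship commits to closing velocity early) that shrinks to near zero as both bodies slow.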
So, I've given some more thought to this problem, and some ideas are coming back to me.... Here's another approach to the optimal control problem...
(I'll refer to some equations and conditions here that I don't define; that's because I don't remember their general form; they're in a notebook in my office. For now I'll put a star next to these, and I'll post back with them later.)
Let's solve the min-time "stabilization" problem: We want to drive (x, y, vx, vy) to zero in minimum time. Assuming that the player doesn't maneuver (maybe not a good assumption, but this was the initial assumption, right?), any problem can be reduced to this by working in the frame in which the player is at the origin and stationary.
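The frame change described above amounts to subtracting the target's position and velocity from the ship's (a sketch; names assumed):

```python
def relative_state(ship_pos, ship_vel, target_pos, target_vel):
    """(x, y, vx, vy) of the ship in the frame where the target
    sits at the origin, at rest."""
    return (ship_pos[0] - target_pos[0],
            ship_pos[1] - target_pos[1],
            ship_vel[0] - target_vel[0],
            ship_vel[1] - target_vel[1])
```

Driving this relative state to zero is exactly the stabilization problem, so the precomputed table below only ever needs to be built for a target at the origin.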
To give you a mental picture, what we WANT are "switching surfaces" in state space, which define regions; inside a given region the optimal control is some constant. We'll approximate this by finding what the optimal control is at lots of points in state space (not on a grid), ahead of time, and then while the game is running, use the nearest precomputed neighbor. That's the general idea. Here's one way to do it:
1. Find the costate equation.*
2. For each admissible final costate (every costate that satisfies the transversality condition*) and final state (i.e., sweep theta), simulate the state and costate backwards in time, for, say, ten seconds. As you do this, at every instant, the optimal control is the one that minimizes the Hamiltonian.* You simply need to plug each of the six possible controls (2 thrust options; 3 turning options) into the Hamiltonian and use whichever one makes it smallest.
3. Store in some data structure of points (in state space) the information, "at (x,y,vx,vy,theta), the optimal control is 'u.'" Maybe this is a K-D tree. (You store this for all points in all the trajectories that you simulate in #2).
(If this sounds computationally expensive, it's because it is! Do all this offline ahead of time.)
Then, when your ship is actually running, look at its state, and find the closest state in the data structure that we created in #3, and use that control.
The only thing that I've glossed over here is that sometimes a trajectory that starts (really, ends) at one angle might "get to" the same point in state space as a trajectory that starts at another angle, in which case, we want to use whichever trajectory gets there sooner. The problem is that trajectories will never really collide exactly, since this is all numerical, and sampled. So I guess the best way is to do this in a breadth-first style, simulating all trajectories by deltaT, and deciding if any hits earlier points of another trajectory (to some tolerance). If any does, you keep the points from it that you've gotten so far, but don't simulate it any more.
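The runtime lookup in step 3 might look like this brute-force sketch (the sample format is assumed; a real implementation would swap the linear scan for a K-D tree query):

```python
def nearest_control(samples, state):
    """samples: list of (state, control) pairs precomputed offline,
    where state = (x, y, vx, vy, theta).  Returns the control stored
    at the nearest sampled state."""
    def dist2(s):
        # Squared Euclidean distance in state space (unweighted here;
        # in practice the position/velocity/angle axes may need scaling).
        return sum((a - b) ** 2 for a, b in zip(s, state))
    _, control = min(samples, key=lambda sc: dist2(sc[0]))
    return control
```

Note the unweighted distance is itself a design choice: positions, velocities and angles live on different scales, so some per-axis weighting is usually needed before nearest-neighbour lookup behaves sensibly.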
[Edited by - Emergent on October 25, 2008 3:26:13 PM]