# Stable Integrators and Oscillatory Motion - What the heck is going on?



This thread isn't so much for help as it is for curiosity. I understand very well that what I'm about to ask is by no means important or relevant to the bigger picture of game development. However, it just bothers me to no end.

I wanted to test the stability and accuracy of a few integrators on oscillatory motion, a(t) = -x(t). I ran tests over 157 periods with a timestep of 0.015 using double-precision floating point. With starting conditions x(0) = 1.0 and v(0) = 0.0, the exact solution is x(t) = cos(t) and v(t) = -sin(t).

Why is it that using x(0) and v(0) as starting conditions is unstable (for higher order integrators, error grows proportionately to total time passed), but using x(0) and v(dt/2) is stable? Just to satisfy my curiosity I tried using other combinations of starting conditions (i.e. x(dt/2) v(0), x(dt/2) v(dt/2), and just because those previous two worked, I tried x(dt), v(dt)). As it turns out, pretty much the only way to make RK4 unstable for oscillatory motion was to use x(0) and v(0) as starting conditions (discounting round-off errors). What the heck is going on here?

EDIT: Turns out it was a mistake on my part. But still creepy.

##### Share on other sites
You probably know more about integrators than I do, but please help me understand your question. Are you talking about RK4 only? Otherwise, I don't even know what you mean by using v(dt/2) as a starting condition. And how are you defining/measuring instability?

##### Share on other sites
I was using RK4 for the purposes of the test, but looking at the Leapfrog integrator (which resembles forward Euler in form, yet has much better stability characteristics) I assumed that it shouldn't matter which basic integration method you're using.

Stability is the rate at which error grows with the total time that has passed (versus order, which measures error as a function of the timestep). With an unstable integrator, undamped oscillatory motion will eventually "explode": the error continues to grow without bound. With a stable integrator, undamped oscillatory motion (in theory) stays bounded. I'm measuring stability by plotting the maximum deviation between the simulated and exact values of x(t) against time, and checking whether the general upward trend is too large to be explained by round-off error alone (my personal criterion is that the first 4-5 digits have not changed).

##### Share on other sites
Is it the size of the timestep that you're changing, or the initial conditions? I'd understand if the timestep affected stability, but I wouldn't expect the initial conditions to do so.

Have you done a linear stability analysis of the various integrators you're talking about?

What I mean is this: Letting z[k] = (x[k], v[k]) be the state of the integrator, Euler, leapfrog, and other linear integrators can be written as

z[k+1] = M z[k]

for some 2x2 matrix M. If both of M's eigenvalues lie inside the unit circle, your integrator will damp out the motion; if they're outside, the integrator will blow up, and if they're on the circle, then you'll happily oscillate forever, up to rounding errors.

For instance, if we want to simulate the continuous-time system,

dz/dt = A z

then Euler integration would be,

z[k+1] = z[k] + A z[k] dt = (I + dt A) z[k]

so in this case M = I + dt A. If the eigenvalues of A are y1,y2,...,yn, then the eigenvalues of M are u1=1+dt y1, ...,un= 1 + dt yn. Now if you think about the possible values that y1,...,yn could have had to keep u1,...,un in the unit circle, you'll see that the size of this circle is inversely proportional to dt. This is the role that the timestep plays in determining stability.
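To make this concrete, here is a small sketch (my own code, not from the thread) applying this analysis to the oscillator above. For dz/dt = A z with A = [[0, 1], [-1, 0]], A's eigenvalues are +/- i, so M = I + dt A has eigenvalues 1 +/- i*dt with magnitude sqrt(1 + dt^2) > 1 for any dt > 0, which is why plain forward Euler always spirals outward on an undamped oscillator:

```java
// Stability check for forward Euler on dz/dt = A z, A = [[0, 1], [-1, 0]].
// The eigenvalues of M = I + dt*A are 1 +/- i*dt, magnitude sqrt(1 + dt^2).
public class EulerStability {
    public static void main(String[] args) {
        double dt = 0.015;
        double eigMag = Math.sqrt(1.0 + dt * dt); // |1 +/- i*dt| > 1
        System.out.println("Eigenvalue magnitude: " + eigMag);

        // Confirm numerically: the "energy" x^2 + v^2 is multiplied by
        // exactly (1 + dt^2) each step, so it grows without bound.
        double x = 1.0, v = 0.0;
        for (int i = 0; i < 1000; ++i) {
            double nx = x + dt * v;  // both updates use the OLD state
            double nv = v + dt * -x;
            x = nx;
            v = nv;
        }
        double energy = x * x + v * v; // ~ (1 + dt^2)^1000
        System.out.println("Energy after 1000 steps: " + energy);
    }
}
```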

Have you done this sort of stability analysis?

##### Share on other sites
Haha, I should mention that I've so far only taken single-variable calculus (no real analysis yet) and I won't be taking linear algebra for two years, so it'll be a while before I learn how to calculate the eigenvalues of a matrix. I also wouldn't expect initial conditions to affect stability, but for some odd reason, they do. Using the following implementation, I did a few tests. I increased the number of periods run through to 49298 just to make the difference more apparent. The starting conditions are changed at the declarations of x and v.
[source lang="java"]
double dt = 0.015;
int steps = (int) (Math.PI * 98596 / dt); // 49298 periods of 2*pi

double t = 0.0;
double x = 1.0;
double v = 0.0;

double maxVar = 0.0;
double maxVarDiff = 0.0;
double prevVar = 0.0;
double startVar = 0.0;

double tick = 0.0;

for (int i = 0; i < steps; ++i) {
    // Once per period, record how much the maximum variance has grown.
    if (t >= tick) {
        // System.out.println(t + "\t" + Math.cos(t) + "\t" + x + "\t" + maxVar);
        tick += Math.PI * 2.0;
        if (prevVar != 0.0 && maxVar - prevVar > maxVarDiff) {
            maxVarDiff = maxVar - prevVar;
        }
        prevVar = maxVar;
        if (startVar == 0.0) {
            startVar = maxVar;
        }
    }

    // One RK4 step of x'' = -x.
    t += dt;
    double dx1 = dt * v;
    double dv1 = dt * -x;
    double dx2 = dt * (v + .5 * dv1);
    double dv2 = dt * -(x + .5 * dx1);
    double dx3 = dt * (v + .5 * dv2);
    double dv3 = dt * -(x + .5 * dx2);
    double dx4 = dt * (v + dv3);
    double dv4 = dt * -(x + dx3);
    x += (dx1 + 2*dx2 + 2*dx3 + dx4) / 6;
    v += (dv1 + 2*dv2 + 2*dv3 + dv4) / 6;

    double var = Math.pow(Math.cos(t) - x, 2.0);
    if (var > maxVar) {
        maxVar = var;
    }
}

System.out.println("Starting Variance: " + startVar);
System.out.println("Maximum Variance: " + maxVar);
System.out.println("Maximum Variance Diff: " + maxVarDiff);
[/source]

Starting Variance: 4.146225112688903E-18
Maximum Variance: 8.548421030412684E-8
Maximum Variance Diff: 6.354040387133647E-12
Replacing double x = 1.0; double v = 0.0; with double x = 1.0; double v = Math.sin(-dt/2); gets you this output:
Starting Variance: 5.6248594490512076E-5
Maximum Variance: 5.6248857190816965E-5
Maximum Variance Diff: 2.627003048891164E-10
Uncommenting the output line at the top of the loop (and changing it back to 157 periods to run through) and plotting sqrt(maxVar) versus time will show that the original accumulates errors linearly with time, while the latter does not.

EDIT:

I think I realized the issue. Wow, silly mistake on my part. Changing the starting condition (i.e. adding some velocity) slightly increases the amplitude of the oscillation, so every variance measurement has a constant offset added to it on top of any instability from the simulation itself. That's why I wasn't seeing an order-of-magnitude growth in inaccuracy in the version with the changed starting conditions. The resulting function is also shifted slightly in phase, which further made the variance seem to be within acceptable bounds. It's just that, compared to the other integrators I was testing (Leapfrog and Verlet), the variance was still so much smaller that I assumed the larger variance was a consequence of numerical instability.
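For what it's worth, that constant offset can be computed directly (my own arithmetic, not from the original post): with x(0) = 1 and v(0) = sin(-dt/2), the exact solution is x(t) = cos(t) + sin(-dt/2)*sin(t), so the squared deviation from cos(t) peaks at sin^2(dt/2), which for dt = 0.015 is about 5.62e-5, the same magnitude as the "Starting Variance" reported above:

```java
// Analytic floor on the variance introduced by the changed starting
// condition v(0) = sin(-dt/2): the exact solution deviates from cos(t)
// by sin(-dt/2)*sin(t), which peaks at |sin(dt/2)| in magnitude.
public class VarianceFloor {
    public static void main(String[] args) {
        double dt = 0.015;
        double floor = Math.pow(Math.sin(dt / 2.0), 2.0); // max squared deviation
        System.out.println("Analytic variance floor: " + floor);
    }
}
```

This matches the reported starting variance of 5.6248594490512076E-5 to about five digits, so the "instability" was mostly this constant offset rather than integration error.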

##### Share on other sites
Ok, well it sounds like you have a good idea for why you were seeing what you were seeing. Let me just mention two things that might be interesting:

1.) Another illuminating (and simple, really) way to look at your problem is from the state-space perspective. The idea here is that the state of your simple harmonic oscillator is a point in two dimensions (one axis is position, the other is velocity), and that point moves along a vector field. You can picture this; you can draw it on paper. So, what you can do, if you have access to a graphics library, is actually plot this point in 2d, and watch it moving around. You'll see that a stable oscillator moves the point around circles or ellipses, an unstable one spirals out, and a damped one spirals in to the origin. I think that if you were to plot a "flock" of these particles -- one for each integrator you were looking at -- this would give you some insight. In general, I think these kinds of visual exercises are very useful for developing intuition about math.

2.) Calculus isn't really a prerequisite for linear algebra. Learning about one can help inform the other (I think it mainly goes in the "learning linear algebra informs calculus" direction), and linear algebra is one of the most useful areas of math you can study. So if you want to teach yourself some, it really wouldn't hurt, and you probably know most of what you need to to start already. My favorite thing about linear algebra is that it gives you ways of solving problems directly and straightforwardly with a computer: "I do not want to have to wade through this nasty mess of equations. I do not want to sit down and solve them by hand, and make a mistake halfway through. I will notice that all the equations are linear, call them a 'matrix,' summarize them in three symbols, and let a computer do the dirty work -- and it will be reliable."
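Regarding point 1: if you don't have a graphics library handy, here is a minimal sketch (my own code, assuming nothing beyond the thread's setup) that tracks the state point (x, v) for two integrators on a = -x and just prints its radius instead of plotting it. Forward Euler's radius grows (the spiral-out case), while semi-implicit ("symplectic") Euler's stays near 1 (a nearly closed orbit):

```java
// State-space view of the harmonic oscillator x'' = -x: the state (x, v)
// should move on the unit circle. We compare forward Euler (spirals out)
// with semi-implicit Euler (stays on a nearly closed ellipse).
public class StateSpace {
    public static void main(String[] args) {
        double dt = 0.015;
        int steps = 10000;

        double ex = 1.0, ev = 0.0; // forward Euler state
        double sx = 1.0, sv = 0.0; // semi-implicit Euler state

        for (int i = 0; i < steps; ++i) {
            // Forward Euler: both updates use the old state.
            double nex = ex + dt * ev;
            double nev = ev + dt * -ex;
            ex = nex;
            ev = nev;

            // Semi-implicit Euler: update v first, then use the NEW v for x.
            sv += dt * -sx;
            sx += dt * sv;
        }

        System.out.println("Forward Euler radius:  " + Math.hypot(ex, ev));
        System.out.println("Semi-implicit radius:  " + Math.hypot(sx, sv));
    }
}
```

Plotting these two points every frame (or a whole flock of them, one per integrator) shows the spiral directly, and is a nice way to build intuition without any eigenvalue math.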
