Question about implementing the velocity Verlet method

16 comments, last by Rompa 15 years, 3 months ago
I read "Game Programming Gems 4", chapter 3-3, the article "Writing a Verlet-Based Physics Engine". I have a question about the implementation of the velocity Verlet method.

Pseudo-code in the book:

velocity += (acceleration * dt / 2)
position += (velocity * dt)
position += (acceleration * dt * dt / 2)
update acceleration
velocity += (acceleration * dt / 2)

My question is why the position calculation uses the intermediate velocity, not the previous velocity.

My pseudo-code:

position += (velocity * dt)
position += (acceleration * dt * dt / 2)
velocity += (acceleration * dt / 2)
update acceleration
velocity += (acceleration * dt / 2)

I have actually traced the code in the book. In Rigidbody.cpp, line 261:

MultiplyAccumulate(m_StateT1.m_Velocity, dt * kHalf, m_AccelPrev);
MultiplyAccumulate(m_StateT1.m_Position, dt, m_StateT1.m_Velocity);
MultiplyAccumulate(m_StateT1.m_Position, kHalf * dt * dt, m_AccelPrev);

I modified the code to:

MultiplyAccumulate(m_StateT1.m_Position, dt, m_StateT1.m_Velocity);
MultiplyAccumulate(m_StateT1.m_Position, kHalf * dt * dt, m_AccelPrev);
MultiplyAccumulate(m_StateT1.m_Velocity, dt * kHalf, m_AccelPrev);

The collision results then seem wrong, so my understanding must be off somewhere. Please help me out, thank you.
"My question is why the position calculation used the intermediate velocity, not the previous velocity."

Because that was your velocity last time. Your previous change in position. You want your change in position NOW.

If I toss a ball up with a velocity of 9.8 meters per second, upward (-9.8 would be downward), let's say I wait one sec before moving it. (Note that gravitational acceleration is -9.8.) I know from physics that the object should go about 4.9 meters in the air. If I go with my initial velocity, my position would be 9.8. If I calculate velocity first, (v + a*dt/2 = 9.8 + -9.8*1/2 = 4.9), I see that velocity is 4.9 and my position is properly updated.

When I move one more second in time, I should see my position be zero again.

"If I toss a ball up with a velocity of 9.8 meters per second, upward (-9.8 would be downward)"

Assume frame time is 1 sec. Initial condition is V = 9.8 m/s & A = -9.8 m/s^2. Let's see the condition after first calculation (condition after 1 sec).

Pseudo-code in the book:
velocity += (acceleration * dt / 2) --> V = 4.9
position += (velocity * dt) --> P = 4.9
position += (acceleration * dt * dt / 2) --> P = 0
update acceleration --> A = -9.8
velocity += (acceleration * dt / 2) --> V = 0

After 1 sec, velocity is 0 & position is 0. The ball has not moved.

in my Pseudo-code:
position += (velocity * dt) --> P = 9.8
position += (acceleration * dt * dt / 2) --> P = 4.9
velocity += (acceleration * dt / 2) --> V = 4.9
update acceleration --> A = -9.8
velocity += (acceleration * dt / 2) --> V = 0

After 1 sec, velocity is 0 & position is 4.9. The ball's position will go to zero in the next frame.

Please indicate where my Pseudo-code is wrong.
Thank you very much.

P.S. About the velocity Verlet method, refer to the bottom of this page:
http://www.fisica.uniud.it/~ercolessi/md/md/node21.html
"position += (acceleration * dt * dt / 2) --> P = 0"

If position equaled 4.9 before this (the right value, btw), acceleration equaled -9.8, and dt equaled 1, this would be equivalent to pos = 4.9 - 9.8/2 = 0. Good. But pos is supposed to equal the value BEFORE you do this acceleration step, right? What's wrong?

Well, right now, I'm realizing that the code from your book and your refactor are essentially the same. You both update velocity twice with the same equation. I think I figured out why.

v += .5*a*dt

The first update is to get how much to move position. But, when the ball is at the top of its arc, its velocity should be zero and this says it's 4.9.

v += .5*a*dt

The second is to properly update velocity. If we combine the two:

v += (a*dt)/2 + (a*dt)/2, i.e. v += a*dt

Well, vel += a*dt is the correct formula, so we know this only works because you do it twice.

So, going back to your original question: the difference is basically just doing the second velocity update at the beginning or at the end of the set of equations. You're doing the exact same thing; the only difference is how the initial values are treated. I'd stick with your method, as it seems to be more accurate.

However, I want to revisit my math in the beginning. It seems I was doing the same thing as the book, but I only updated v once, and I was assuming pos += v, not pos += v*dt. So, now you should know why you AND the book are right and why I wasn't. I want to apologize because I didn't fully understand last time. Good luck.
Quote:Original post by agan0617

After 1 sec, velocity is 0 & position is 0. The ball has not moved.


Actually it has moved since its velocity wasn't equal to zero.

I've never used the equations you've posted but I'll answer your question anyway...
Don't forget that you're computing the state of the ball AFTER one second: you haven't computed its state continuously after the ball has been launched.

So here is the answer:
this is simply due to the fact that your dt is too large for the values to be good approximations of the correct solution.
The correct solution is( by integration of continuous functions ):

x( t ) = xo + vo * ( t - to ) + 0.5 * a * ( t - to ) * ( t - to )
v( t ) = vo + a * ( t - to )

So after one second, we have for the state of the ball:

x( 1 ) = 0.0 + 9.8 * 1 + 0.5 * -9.8 * 1 * 1 = 4.9
v( 1 ) = 9.8 + -9.8 * 1 = 0.0

The algorithm (or integration scheme) is correct, or at least seems to be, since I obtained the following values

position = 4.8999951
velocity = -4.757934818e-10

from this code:

	double position, velocity, acceleration, dt, time;

	position = 0.0;
	velocity = 9.8;
	acceleration = -9.8;
	time = 0.0;
	dt = 1.0e-6;

	while( time < 1.0 )
	{
		time += dt;
		velocity += ( acceleration * dt * 0.5 );
		position += ( velocity * dt );
		position += ( acceleration * dt * dt * 0.5 );
		velocity += ( acceleration * dt * 0.5 );
	}

	cout << "position = " << position << endl;
	cout << "velocity = " << velocity << endl;


And for dt = 1.0e-3:

position = 4.8951
velocity = 1.428634039e-13

And for dt = 4.0e-2( 25 frames per second ):

position = 4.704
velocity = 6.817124447e-15

Well, you've noticed that the "approximation" of the position gets better as dt gets smaller: that's exactly what you should expect from a numerical integration scheme.

As for the question you asked previously (why is the intermediate velocity used in the position update? i.e. why position += ( prev_velocity + acceleration * dt * 0.5 ) * dt and why not position += ( prev_velocity + acceleration * dt ) * dt?), you'll need to read the literature on that specific subject. But generally such details are not important: the only thing you really need to know is how the error (on position and velocity) behaves with dt...

Well, I assume there are no mistakes in the pseudo-code: you simply didn't choose a small enough dt.

[Edited by - johnstanp on January 1, 2009 12:21:53 PM]
Quote:Original post by agan0617
I read the "Game Programming Gems 4" chapter 3-3 about the article "Writing a Verlet-Based Physics Engine". I have a question about the implement of velocity verlet method.

Pseudo-code in the book:
velocity += (acceleration * dt / 2)
position += (velocity * dt)
position += (acceleration * dt * dt / 2)
update acceleration
velocity += (acceleration * dt / 2)

My question is why the position calculation used the intermediate velocity, not the previous velocity.


To address your question: this is a characteristic of this Verlet integrator. You could update the position before you update the velocity and it would work, though differently. Splinter was correct...if you update position before you update velocity, then you would be using an old velocity. This would result in the position "lagging" by one time step. Something is always lagging when you use explicit methods (and I won't get into implicit methods here).

If you: a) update position before velocity; b) do not do the second step that also updates position with acceleration; c) do the velocity update just after the position update, in one step rather than two; and d) update acceleration at the end, then you would have something called "explicit Euler" or "simple Euler" integration. The Verlet order of operations tends to be more stable than explicit Euler, and is generally preferred for game physics.

These formulas and orders of operations are all designed to be good at different things. Some arrangements produce more accurate results. Some produce more stable results that work for larger time steps and a wider variety of models. Even the order of the steps can result in dramatic differences in the behavior of the simulation, depending on the thing you are simulating. That's not too technical an answer, but I hope it is interesting at least!
Graham Rhodes Moderator, Math & Physics forum @ gamedev.net
Quote:Original post by johnstanp
The algorithm( or integration scheme is correct, at least seems to be ) since I obtained the following values
position = 4.8999951

Bad. An integrator used in the game development should be oversensitive, or exact. Clamping the rise in energy is simple, guessing orientation of a null vector isn't.
Quote:And for dt = 4.0e-2( 25 frames per second )

The rate of the game engine (or integrator) should be independent of the rate of the 3D engine. I.e. the rules are the same for everyone, no matter how fast the GFX card is.
Quote:Original post by Raghar
Quote:Original post by johnstanp
The algorithm( or integration scheme is correct, at least seems to be ) since I obtained the following values
position = 4.8999951

Bad. An integrator used in the game development should be oversensitive, or exact. Clamping the rise in energy is simple, guessing orientation of a null vector isn't.


Who ever said anything about what an integrator used in game development should be? Numerical integrators never give exact results: if the solution of a system of equations is analytically unknown, I don't see how an exact solution could be provided... So remember: numerical integration only gives approximate values (when compared to analytical solutions), and this is why we talk about the behaviour of the "error". We don't even talk about the inevitable error related to the approximation of real numbers.

Quote:
Numerical integration is the approximate computation of an integral using numerical techniques. The numerical computation of an integral is sometimes called quadrature. Ueberhuber (1997, p. 71) uses the word "quadrature" to mean numerical computation of a univariate integral, and "cubature" to mean numerical computation of a multiple integral.

There are a wide range of methods available for numerical integration. A good source for such techniques is Press et al. (1992). Numerical integration is implemented in Mathematica as NIntegrate[f, {x, xmin, xmax}].

The most straightforward numerical integration technique uses the Newton-Cotes formulas (also called quadrature formulas), which approximate a function tabulated at a sequence of regularly spaced intervals by various degree polynomials. If the endpoints are tabulated, then the 2- and 3-point formulas are called the trapezoidal rule and Simpson's rule, respectively. The 5-point formula is called Boole's rule. A generalization of the trapezoidal rule is Romberg integration, which can yield accurate results for many fewer function evaluations.

If the functions are known analytically instead of being tabulated at equally spaced intervals, the best numerical method of integration is called Gaussian quadrature. By picking the abscissas at which to evaluate the function, Gaussian quadrature produces the most accurate approximations possible. However, given the speed of modern computers, the additional complication of the Gaussian quadrature formalism often makes it less desirable than simply brute-force calculating twice as many points on a regular grid (which also permits the already computed values of the function to be re-used). An excellent reference for Gaussian quadrature is Hildebrand (1956).

Modern numerical integrations methods based on information theory have been developed to simulate information systems such as computer controlled systems, communication systems, and control systems since in these cases, the classical methods (which are based on approximation theory) are not as efficient (Smith 1974).


http://mathworld.wolfram.com/NumericalIntegration.html

This is common sense.
It's also common sense: floating point operations are not exact. They too are approximations.

Example: 1/3 is not represented exactly as 0.3333333333333333333... (infinitely repeating, truncated for brevity). Compilers don't even attempt to optimize floating point expressions when doing so could alter their inaccuracy. I'd say 4.8999951 when the goal was 4.9 is pretty good!

However, since we're on the topic of accuracy, my implementation of RK4 integration prints this:
Position: 4.9
Velocity: -4.76837e-007
with a constant time step of 100 milliseconds and a simulation of one second. So, it seems it would be possible to be more accurate on position; however, my velocity isn't exactly zero. Interestingly enough, a time step of 1000 gives exact answers...but I haven't done extensive testing yet. The point is it won't be exact, and it's not designed to be. (The smaller time step is supposed to be more precise, I thought.)
Quote:Original post by Splinter of Chaos
It's also common sense: floating point operations are not exact. They too are approximations.


That's what I was also saying:

Quote:Original post by johnstanp
We don't even talk about the inevitable error related to the approximation of real numbers.


Quote:Original post by Splinter of Chaos
However, since we're on the topic of accuracy, my implementation of RK4 integration prints this:
Position: 4.9
Velocity: -4.76837e-007
with a constant time step of 100 milliseconds and a simulation of one second. So, it seems it would be possible to be more accurate on position, however my velocity isn't exactly zero. Interestingly enough, a time step of 1000 gives exact answers...but I haven't done extensive testing yet. Point is it won't be exact and it's not designed for it. (The smaller time step is supposed to be more precise, I thought.)


I prefer using Runge-Kutta methods (up to the eighth order, for fun): this is why I've never used the Verlet algorithm.
Raghar should have said that the implementation of a physics engine should be independent of the time step chosen by the user...

[Edited by - johnstanp on January 2, 2009 9:54:47 AM]
