
Question about implementing the velocity Verlet method


I read the "Game Programming Gems 4" chapter 3-3 about the article "Writing a Verlet-Based Physics Engine". I have a question about the implement of velocity verlet method. Pseudo-code in the book: velocity += (acceleration * dt / 2) position += (velocity * dt) position += (acceleration * dt * dt / 2) update acceleration velocity += (acceleration * dt / 2) My question is why the position calculation used the intermediate velocity, not the previous velocity. in my Pseudo-code: position += (velocity * dt) position += (acceleration * dt * dt / 2) velocity += (acceleration * dt / 2) update acceleration velocity += (acceleration * dt / 2) Actually I have traced the code in the book. In the Rigidbody.cpp line 261: MultiplyAccumulate(m_StateT1.m_Velocity, dt * kHalf, m_AccelPrev); MultiplyAccumulate(m_StateT1.m_Position, dt, m_StateT1.m_Velocity); MultiplyAccumulate(m_StateT1.m_Position, kHalf * dt * dt, m_AccelPrev); I modifide the code into: MultiplyAccumulate(m_StateT1.m_Position, dt, m_StateT1.m_Velocity); MultiplyAccumulate(m_StateT1.m_Position, kHalf * dt * dt, m_AccelPrev); MultiplyAccumulate(m_StateT1.m_Velocity, dt * kHalf, m_AccelPrev); The result of collision seems wrong. So my concept is somewhere wrong. Please give me a help, thank you.

"My question is why the position calculation used the intermediate velocity, not the previous velocity."

Because that was your velocity last time. Your previous change in position. You want your change in position NOW.

If I toss a ball up with a velocity of 9.8 meters per second upward (-9.8 would be downward), let's say I wait one second before moving it. (Note that gravitational acceleration is -9.8.) I know from physics that the object should go about 4.9 meters in the air. If I go with my initial velocity, my position would be 9.8. If I calculate velocity first (v + a*dt/2 = 9.8 + -9.8*1/2 = 4.9), I see that the velocity is 4.9 and my position is properly updated.

When I move one more second in time, I should see my position be zero again.

"If I toss a ball up with a velocity of 9.8 meters per second, upward (-9.8 would be downward)"

Assume frame time is 1 sec. Initial condition is V = 9.8 m/s & A = -9.8 m/s^2. Let's see the condition after first calculation (condition after 1 sec).

Pseudo-code in the book:
velocity += (acceleration * dt / 2) --> V = 4.9
position += (velocity * dt) --> P = 4.9
position += (acceleration * dt * dt / 2) --> P = 0
update acceleration --> A = -9.8
velocity += (acceleration * dt / 2) --> V = 0

After 1 sec, velocity is 0 & position is 0. The ball has not moved.

in my Pseudo-code:
position += (velocity * dt) --> P = 9.8
position += (acceleration * dt * dt / 2) --> P = 4.9
velocity += (acceleration * dt / 2) --> V = 4.9
update acceleration -->A = -9.8
velocity += (acceleration * dt / 2) --> V = 0

After 1 sec, velocity is 0 & position is 4.9. The ball's position will go to zero in the next frame.

Please indicate where my Pseudo-code is wrong.
Thank you very much.

P.S. About the velocity Verlet method
Refer to the bottom of the website
http://www.fisica.uniud.it/~ercolessi/md/md/node21.html

"position += (acceleration * dt * dt / 2) --> P = 0"

If position equaled 4.9 before this (the right value, btw), acceleration equaled -9.8, and dt equaled 1, this would be equivalent to pos = 4.9 - 9.8/2 = 0. Good. But pos is supposed to equal the value BEFORE you do this acceleration step, right? What's wrong?

Well, right now, I'm realizing that the code from your book and your refactor are essentially the same. You both update velocity twice with the same equation. I think I figured out why.

v += .5*a*dt

The first update is to get how much to move the position. But when the ball is at the top of its arc, its velocity should be zero, and this says it's 4.9.

v += .5*a*dt

The second is to properly update velocity. If we combine the two:

v += (a*dt)/2 + (a*dt)/2 or a*dt

Well, vel += a*dt is the correct formula, so we know this only works because you do it twice.

So, going back to your original question: the difference is basically just doing the second velocity update at the beginning or the end of the set of equations. No, you're doing the exact same thing. The only difference is how the initial values are treated. I'd stick with your method, as it seems to be more accurate.

However, I want to revisit my math from the beginning. It seems I was doing the same thing as the book, but I only updated v once, and I was assuming pos += v, not pos += v*dt. So now you should know why you AND the book are right and why I wasn't. I want to apologize because I didn't fully understand last time. Good luck.
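If it helps, here is a small self-contained C++ test (my own sketch, not the book's code) that runs both orderings side by side with a constant acceleration and prints the final states next to the analytic x = v0*t + a*t*t/2, so you can compare them for yourself:

#include <iostream>

int main()
{
    const double a  = -9.8;    // constant acceleration (gravity), so "update acceleration" is a no-op here
    const double v0 = 9.8;     // initial upward velocity
    const double dt = 1.0e-3;  // time step
    const int steps = 1000;    // one simulated second

    double xBook = 0.0, vBook = v0;   // the book's ordering
    double xYours = 0.0, vYours = v0; // your ordering

    for (int i = 0; i < steps; ++i)
    {
        // Ordering as quoted from the book
        vBook += a * dt * 0.5;
        xBook += vBook * dt;
        xBook += a * dt * dt * 0.5;
        vBook += a * dt * 0.5;

        // Your ordering
        xYours += vYours * dt;
        xYours += a * dt * dt * 0.5;
        vYours += a * dt * 0.5;
        vYours += a * dt * 0.5;
    }

    const double t = steps * dt;
    std::cout << "analytic: x = " << v0 * t + 0.5 * a * t * t << "\n";
    std::cout << "book    : x = " << xBook << ", v = " << vBook << "\n";
    std::cout << "yours   : x = " << xYours << ", v = " << vYours << "\n";
    return 0;
}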

Quote:
Original post by agan0617

After 1 sec, velocity is 0 & position is 0. The ball has not moved.


Actually it has moved since its velocity wasn't equal to zero.

I've never used the equations you've posted but I'll answer your question anyway...
Don't forget that you're computing the state of the ball AFTER one second: you haven't computed its state continuously after the ball has been launched.

So here is the answer:
this is simply due to the fact that your dt is too large for the values to be good approximations of the correct solution.
The correct solution is( by integration of continuous functions ):

x( t ) = xo + vo * ( t - to ) + 0.5 * a * ( t - to ) * ( t - to )
v( t ) = vo + a * ( t - to )

So after one second, we have for the state of the ball:

x( 1 ) = 0.0 + 9.8 * 1 + 0.5 * -9.8 * 1 * 1 = 4.9
v( 1 ) = 9.8 + -9.8 * 1 = 0.0

The algorithm (or integration scheme) is correct, or at least seems to be, since I obtained the following values

position = 4.8999951
velocity = -4.757934818e-10

from this code:



#include <iostream>

int main()
{
    double position = 0.0;
    double velocity = 9.8;
    double acceleration = -9.8; // constant, so no "update acceleration" step is needed here
    double time = 0.0;
    const double dt = 1.0e-6;

    while (time < 1.0)
    {
        time += dt;
        velocity += acceleration * dt * 0.5;
        position += velocity * dt;
        position += acceleration * dt * dt * 0.5;
        velocity += acceleration * dt * 0.5;
    }

    std::cout << "position = " << position << std::endl;
    std::cout << "velocity = " << velocity << std::endl;
    return 0;
}

And for dt = 1.0e-3:

position = 4.8951
velocity = 1.428634039e-13

And for dt = 4.0e-2( 25 frames per second ):

position = 4.704
velocity = 6.817124447e-15

Well, you've noticed that the "approximation" of the position gets better as dt gets smaller: that's exactly what you should expect from a numerical integration scheme.

As for the question you asked previously (why is the intermediate velocity used in the position update? i.e. why position += ( prev_velocity + acceleration * dt * 0.5 ) * dt and not position += ( prev_velocity + acceleration * dt ) * dt?), you'll need to read the literature on that specific subject. But generally such details are not important: the only thing you really need to know is how the error (on position, and velocity) behaves with dt...

Well, I assume there are no mistakes in the pseudo-code: you simply didn't choose a dt small enough.
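If you want to see that error behaviour directly, here is a quick sketch (mine, using the same update order as the loop above) that repeats the one-second run for several time steps and prints the position error against the analytic 4.9:

#include <cmath>
#include <initializer_list>
#include <iostream>

int main()
{
    const double a = -9.8, v0 = 9.8;
    const double exact = v0 * 1.0 + 0.5 * a * 1.0 * 1.0; // 4.9 after one second

    for (double dt : { 1.0e-1, 1.0e-2, 1.0e-3, 1.0e-4 })
    {
        const int steps = static_cast<int>(std::lround(1.0 / dt));
        double x = 0.0, v = v0;
        for (int i = 0; i < steps; ++i)
        {
            v += a * dt * 0.5;
            x += v * dt;
            x += a * dt * dt * 0.5;
            v += a * dt * 0.5;
        }
        std::cout << "dt = " << dt << "  |x - 4.9| = " << std::fabs(x - exact) << "\n";
    }
    return 0;
}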

[Edited by - johnstanp on January 1, 2009 12:21:53 PM]

Quote:
Original post by agan0617
I read the "Game Programming Gems 4" chapter 3-3 about the article "Writing a Verlet-Based Physics Engine". I have a question about the implement of velocity verlet method.

Pseudo-code in the book:
velocity += (acceleration * dt / 2)
position += (velocity * dt)
position += (acceleration * dt * dt / 2)
update acceleration
velocity += (acceleration * dt / 2)

My question is why the position calculation used the intermediate velocity, not the previous velocity.


To address your question, this is a characteristic of this Verlet integrator. You could update the position before you update the velocity and it would work, though differently. Splinter* was correct...if you update position before you update velocity, then you would be using an old velocity. This would result in the position "lagging" by one time step. Something is always lagging when you use explicit methods (and I won't get into implicit methods here).

If you: a) update position before velocity; b) do not do the second step to update position also with acceleration; and, c) do the velocity update just after the position update and in one step rather than 2; and, d) update acceleration at the end, then you would have something called "explicit Euler" or "simple Euler" integration. The Verlet order of operations tends to be more stable than explicit Euler, and is generally preferred for game physics.

These formulas and the order of operations are all designed to be good at different things. Some arrangements produce more accurate results. Some produce more stable results that work for larger time steps and a wider variety of models. Even the order of doing the steps can result in dramatic differences in the behavior of the simulation, depending on the thing you are simulating. That's not too technical of an answer, but I hope it is interesting at least!
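To make that comparison concrete, here is a rough sketch of an explicit Euler step next to a velocity Verlet step (my own illustration, not code from the Gems article; computeAcceleration is a stand-in for whatever force model you use):

#include <iostream>

// Placeholder force model for this sketch; a real engine would sum forces here.
double computeAcceleration(double /*x*/, double /*v*/)
{
    return -9.8; // constant gravity, as in the ball-toss example above
}

// Explicit ("simple") Euler: position uses the old velocity, the velocity
// then takes one full kick, and acceleration is refreshed at the end.
void eulerStep(double& x, double& v, double& a, double dt)
{
    x += v * dt;
    v += a * dt;
    a = computeAcceleration(x, v);
}

// Velocity Verlet: position also gets the 0.5*a*dt*dt term, and the velocity
// kick is split into two halves around the acceleration update.
void verletStep(double& x, double& v, double& a, double dt)
{
    x += v * dt + 0.5 * a * dt * dt;
    v += 0.5 * a * dt;
    a = computeAcceleration(x, v);
    v += 0.5 * a * dt;
}

int main()
{
    double xe = 0.0, ve = 9.8, ae = computeAcceleration(xe, ve);
    double xv = 0.0, vv = 9.8, av = ae;
    for (int i = 0; i < 10; ++i)
    {
        eulerStep(xe, ve, ae, 0.1);
        verletStep(xv, vv, av, 0.1);
    }
    std::cout << "euler : x = " << xe << "\n"; // drifts above the analytic 4.9
    std::cout << "verlet: x = " << xv << "\n"; // stays at ~4.9 here, since a is constant
    return 0;
}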

Quote:
Original post by johnstanp
The algorithm( or integration scheme is correct, at least seems to be ) since I obtained the following values
position = 4.8999951

Bad. An integrator used in game development should be oversensitive, or exact. Clamping the rise in energy is simple; guessing the orientation of a null vector isn't.
Quote:
And for dt = 4.0e-2( 25 frames per second )

The rate of a game engine (or integrator) should be independent of the rate of the 3D engine. I.e. the rules are the same for everyone, no matter how fast the GFX card is.

Quote:
Original post by Raghar
Quote:
Original post by johnstanp
The algorithm( or integration scheme is correct, at least seems to be ) since I obtained the following values
position = 4.8999951

Bad. An integrator used in the game development should be oversensitive, or exact. Clamping the rise in energy is simple, guessing orientation of a null vector isn't.


Who has ever talked about what an integrator used in game development should be? Numerical integrators never give exact results: if the solution of a system of equations is analytically unknown, I don't know how an exact solution can be provided... So remember: numerical integration only gives approximate values (when compared to analytical solutions), and this is why we talk about the behaviour of the "error". We don't even talk about the inevitable error related to the approximation of real numbers.

Quote:

Numerical integration is the approximate computation of an integral using numerical techniques. The numerical computation of an integral is sometimes called quadrature. Ueberhuber (1997, p. 71) uses the word "quadrature" to mean numerical computation of a univariate integral, and "cubature" to mean numerical computation of a multiple integral.

There are a wide range of methods available for numerical integration. A good source for such techniques is Press et al. (1992). Numerical integration is implemented in Mathematica as NIntegrate[f, {x, xmin, xmax}].

The most straightforward numerical integration technique uses the Newton-Cotes formulas (also called quadrature formulas), which approximate a function tabulated at a sequence of regularly spaced intervals by various degree polynomials. If the endpoints are tabulated, then the 2- and 3-point formulas are called the trapezoidal rule and Simpson's rule, respectively. The 5-point formula is called Boole's rule. A generalization of the trapezoidal rule is Romberg integration, which can yield accurate results for many fewer function evaluations.

If the functions are known analytically instead of being tabulated at equally spaced intervals, the best numerical method of integration is called Gaussian quadrature. By picking the abscissas at which to evaluate the function, Gaussian quadrature produces the most accurate approximations possible. However, given the speed of modern computers, the additional complication of the Gaussian quadrature formalism often makes it less desirable than simply brute-force calculating twice as many points on a regular grid (which also permits the already computed values of the function to be re-used). An excellent reference for Gaussian quadrature is Hildebrand (1956).

Modern numerical integrations methods based on information theory have been developed to simulate information systems such as computer controlled systems, communication systems, and control systems since in these cases, the classical methods (which are based on approximation theory) are not as efficient (Smith 1974).


http://mathworld.wolfram.com/NumericalIntegration.html

This is common sense.

Also common sense: floating point operations are not exact. They too are approximations.

Example: 1/3 is not represented as 0.3333333333333333333...(infinity truncated for brevity). Compilers don't even attempt to optimize floating point equations due to the possibility of altering their inaccuracy. I'd say 4.8999951 when the goal was 4.9 is pretty good!

However, since we're on the topic of accuracy, my implementation of RK4 integration prints this:
Position: 4.9
Velocity: -4.76837e-007
Press any key to continue . . .
with a constant time step of 100 milliseconds and a simulation of one second. So it seems it would be possible to be more accurate on position; however, my velocity isn't exactly zero. Interestingly enough, a time step of 1000 gives exact answers...but I haven't done extensive testing yet. The point is it won't be exact, and it's not designed for that. (The smaller time step is supposed to be more precise, I thought.)
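For reference, a minimal classic RK4 step for the same ball-toss test might look like this (a sketch only, with the acceleration hard-coded to constant gravity, so it is not representative of a full engine):

#include <iostream>

struct State { double x, v; };

// Derivative of the state: dx/dt = v, dv/dt = a (constant gravity in this test).
static State deriv(const State& s)
{
    State d;
    d.x = s.v;
    d.v = -9.8;
    return d;
}

static State rk4Step(State s, double dt)
{
    State k1 = deriv(s);
    State k2 = deriv({ s.x + 0.5 * dt * k1.x, s.v + 0.5 * dt * k1.v });
    State k3 = deriv({ s.x + 0.5 * dt * k2.x, s.v + 0.5 * dt * k2.v });
    State k4 = deriv({ s.x + dt * k3.x,       s.v + dt * k3.v });
    s.x += dt / 6.0 * (k1.x + 2.0 * k2.x + 2.0 * k3.x + k4.x);
    s.v += dt / 6.0 * (k1.v + 2.0 * k2.v + 2.0 * k3.v + k4.v);
    return s;
}

int main()
{
    State s = { 0.0, 9.8 };
    const double dt = 0.1;       // 100 ms, as in the run quoted above
    for (int i = 0; i < 10; ++i) // one simulated second
        s = rk4Step(s, dt);
    std::cout << "Position: " << s.x << "\nVelocity: " << s.v << "\n";
    return 0;
}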

Quote:
Original post by Splinter of Chaos
Also is common sense: floating point operations are not exact. They too are approximations.


That's what I was also saying:

Quote:
Original post by johnstanp
We don't even talk about the inevitable error related to the approximation of real numbers.


Quote:
Original post by Splinter of Chaos
However, since we're on the topic of accuracy, my implementation of RK4 integration prints this:
Position: 4.9
Velocity: -4.76837e-007
Press any key to continue . . .
with a constant time step of 100 milliseconds and a simulation of one second. So, it seems it would be possible to be more accurate on position, however my velocity isn't exactly zero. Interestingly enough, a time step of 1000 gives exact answers...but I haven't done extensive testing yet. Point is it won't be exact and it's not designed for it. (The smaller time step is supposed to be more precise, I thought.)


I prefer using Runge-Kutta methods (up to the eighth order, for fun): this is why I've never used the Verlet algorithm.
Raghar should have said that the implementation of a physics engine should be independent of the time step chosen by the user...

[Edited by - johnstanp on January 2, 2009 9:54:47 AM]

Quote:
Original post by johnstanp
Quote:

Numerical integration is the approximate computation of an integral using numerical techniques. The numerical computation of an integral is sometimes called quadrature. Ueberhuber (1997, p. 71) uses the word "quadrature" to mean numerical computation of a univariate integral, and "cubature" to mean numerical computation of a multiple integral.

There are a wide range of methods available for numerical integration. A good source for such techniques is Press et al. (1992). Numerical integration is implemented in Mathematica as NIntegrate[f, {x, xmin, xmax}].

The most straightforward numerical integration technique uses the Newton-Cotes formulas (also called quadrature formulas), which approximate a function tabulated at a sequence of regularly spaced intervals by various degree polynomials. If the endpoints are tabulated, then the 2- and 3-point formulas are called the trapezoidal rule and Simpson's rule, respectively. The 5-point formula is called Boole's rule. A generalization of the trapezoidal rule is Romberg integration, which can yield accurate results for many fewer function evaluations.

If the functions are known analytically instead of being tabulated at equally spaced intervals, the best numerical method of integration is called Gaussian quadrature. By picking the abscissas at which to evaluate the function, Gaussian quadrature produces the most accurate approximations possible. However, given the speed of modern computers, the additional complication of the Gaussian quadrature formalism often makes it less desirable than simply brute-force calculating twice as many points on a regular grid (which also permits the already computed values of the function to be re-used). An excellent reference for Gaussian quadrature is Hildebrand (1956).

Modern numerical integrations methods based on information theory have been developed to simulate information systems such as computer controlled systems, communication systems, and control systems since in these cases, the classical methods (which are based on approximation theory) are not as efficient (Smith 1974).


http://mathworld.wolfram.com/NumericalIntegration.html

This is common sense.


I disagree that this is common sense. It is not for everyone, including many people on these forums. It is common sense if you have the appropriate background. Such background information and references can be helpful and I welcome them here.

Quote:
Original post by grhodes_at_work
I disagree that this is common sense. It is not for everyone, including many people on these forums. It is common sense if you have the appropriate background. Such background information and references can be helpful and I welcome them here.


I'll be mindful of that...and my apologies for any inconvenience.

Quote:
Original post by johnstanp
Quote:
Original post by Raghar
Bad. An integrator used in the game development should be oversensitive, or exact. Clamping the rise in energy is simple, guessing orientation of a null vector isn't.
Who has ever talked about what an integrator used in the game development should be?

This is a game development forum, and he talked about an integrator borrowed from Game Programming Gems 4; in addition, he talked about writing a physics engine. Bad energy clamping and energy loss are common problems with physics engines.
Quote:
Numerical integrators never give exact results: if the solution of a system of equations is analytically unknown, I don't know how an exact solution can be provided...So remind it: numerical integration only give approximate values( when compared to analytical solutions ), and this is why we talk about the behaviour of the "error". We don't even talk about the inevitable error related to the approximation of real numbers.
How do you know the analytical solution isn't only an approximation, and the result of the numerical integration isn't mathematically exact? A person who invented the analytic solution could mistake a discrete problem for a continuous function.

Anyway, I used the words "oversensitive, or exact" in the sense of ">= exact". Applying a bias to the results would cause the integrator to either overestimate the result or give the exact result (when min(noise) + bias is the correct result). The result + bias is then clamped to a value that is safe for the engine.

Quote:
Original post by Splinter of Chaos
Also is common sense: floating point operations are not exact. They too are approximations.

They are an exact implementation of the standard. An exact result is mathematically inaccurate when the standard requires the inaccuracy because of physical restrictions. For the rest of the topic, see above.

Quote:
Compilers don't even attempt to optimize floating point equations due to the possibility of altering their inaccuracy. I'd say 4.8999951 when the goal was 4.9 is pretty good!

Actually they did try to optimize floating point operations, but programmers' complaints forced them to decrease the aggressiveness of the optimizations. AFAIK PCSX2 doesn't work well when compiled with the most aggressive profile.

Quote:
However, since we're on the topic of accuracy, my implementation of RK4 integration prints this:
Position: 4.9
So let's go back to the topic.

An integration with position = 0, velocity = -10, and acceleration = 20 resulted in this:
Quote:
step 1/4
integrator0 pos 2.5 v 10.0
integrator1 pos 0.0 v 10.0
integrator2 pos 0.0 v 10.0

step 1/10
integrator0 pos 0.9999999999999998 v 10.0
integrator1 pos 0.0 v 10.0
integrator2 pos 6.661338147750939E-16 v 10.000000000000005

step about 1/32
integrator0 pos 0.3125 v 10.0
integrator1 pos 0.0 v 10.0
integrator2 pos -3.969047313034935E-15 v 10.0
The first integrator is a first-order integrator, which perfectly satisfies what I said about integrators by definition. Its accuracy is sufficiently crap that the floating point noise doesn't matter.

The second integrator is second order, and solves the situation perfectly.

The third integrator has a higher order, and the deviation from the exact solution is caused by its internal behavior. It has quite a lot of constants which are themselves only approximations.

Obviously, because the problem is perfectly solvable by a second-order integrator, both the second and third integrators solved it easily.

The 4.704 looks quite bad. It nearly looks like a first order integrator...

re: agan0617
Quote:
MultiplyAccumulate(m_StateT1.m_Position, dt, m_StateT1.m_Velocity);
MultiplyAccumulate(m_StateT1.m_Position, kHalf * dt * dt, m_AccelPrev);
MultiplyAccumulate(m_StateT1.m_Velocity, dt, m_AccelPrev); // dt rather than dt * kHalf


Try to change the code into this, it might fare better.

Quote:
Original post by Raghar
Quote:
Original post by johnstanp
Quote:
Original post by Raghar
Bad. An integrator used in the game development should be oversensitive, or exact. Clamping the rise in energy is simple, guessing orientation of a null vector isn't.
Who has ever talked about what an integrator used in the game development should be?

This is a game development forum, and he talked about integrator borrowed from Game Programming Gems 4, in addition he talked about writing a physics engine. Bad energy clamping, and energy loss are common problems with physics engines.


What is true for the scientific community (I am not part of it) is true for game development.
This is pretty obvious, for me and for anyone who has taken serious courses on fluid mechanics, electricity, etc... Any integration scheme, and therefore any game engine, will never give exact results when solving a system of non-linear (or linear) equations.


Quote:
How do you know the analytical solution isn't only an approximation, and the result of the numerical integration isn't mathematically exact? A person who invented the analytic solution could mistake a discrete problem for a continuous function.


Are you serious in that last sentence?

That's not the point.
An analytical solution is a solution that does have a "mathematical form": numerical ones generally don't...No one has ever found the mathematical solution of the Navier-Stokes equations and many other systems of non-linear equations, and that's why people started to try solving them numerically, because they were not able to find the mathematical( analytical ) solution...
And generally the results of those solvers are compared to experimental ones, or to simple analytical solutions, to see their accuracy...
But an analytical solution is the correct solution unless you're smart enough to prove that the entire scientific community is wrong, and that does happen.

Quote:

Anyway, I used words oversensitive, or exact in the meaning: >= exact Applying bias to the results would cause the integrator to either overestimate result, or have the exact result. (When min(noise) + bias is the correct result.) The result + bias is then clamped into, for engine, a safe value.


Why are you talking about "noise"? It has nothing to do with this...
By the way, you're just saying that you misused the term "exact", the very reason behind my first intervention...
The real answer to the question asked by the person who created this topic is: "your time step is too great..." A time step of one second is too large for an ( explicit ) numerical integrator, no matter the order.
This is what you should have answered, not that he should expect to obtain accurate results no matter the time step chosen, for any integrator or any game engine: that is a mistake. Had you apologized for the mistake, I wouldn't have minded answering back.

[Edited by - johnstanp on January 3, 2009 3:23:19 PM]

Quote:
Original post by Raghar
Quote:
Original post by Splinter of Chaos
Also is common sense: floating point operations are not exact. They too are approximations.

They are exact implementation of the standard. An exact result is mathematically inaccurate when the standard requires the inaccuracy because of physical restrictions. For rest of the topic see above.

Quote:
Compilers don't even attempt to optimize floating point equations due to the possibility of altering their inaccuracy. I'd say 4.8999951 when the goal was 4.9 is pretty good!

Actually they tried to optimize floating point operations, but programmer's complains forced them to decrease aggressiveness of optimizations. AFAIK PCSX2 doesn't work well when compiled in the most aggressive profile.


Thanks for the historical background! Though, what do you mean by "standard"? There's the C++ standard, but I don't think it has anything to do with actual hardware, just the language. And if I program in assembly, or inline assembly, I wasn't aware of any "standard", unless you mean the CPU itself. Or is it that the standard allows inaccuracy due to physical restrictions?

Thank you for explaining the physical meaning of the code. I guess it's a method that uses the "half-current" velocity to predict the current position. Back to my original question, I have only one question now: to implement the velocity Verlet algorithm, is my pseudo-code wrong?
Thank you.

Velocity Verlet:
http://www.fisica.uniud.it/~ercolessi/md/md/node21.html

Pseudo-code
position += (velocity * dt)
position += (acceleration * dt * dt / 2)
velocity += (acceleration * dt / 2)
update acceleration
velocity += (acceleration * dt / 2)

Quote:
... I wasn't aware of any "standard" unless you mean the CPU itself.
http://en.wikipedia.org/wiki/IEEE_754
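A quick way to see what that standard implies in practice (a trivial sketch):

#include <iomanip>
#include <iostream>

int main()
{
    // Neither 0.1 nor 1.0/3.0 has an exact binary (IEEE 754) representation.
    std::cout << std::setprecision(20);
    std::cout << "0.1       stored as " << 0.1 << "\n";
    std::cout << "1.0/3.0   stored as " << 1.0 / 3.0 << "\n";
    std::cout << "0.1 + 0.2 == 0.3 ?  " << std::boolalpha << (0.1 + 0.2 == 0.3) << "\n";
    return 0;
}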
