
Runge Kutta


Recommended Posts

Hey, how are you all doing? Well, I followed your advice and started on the Runge-Kutta method for modeling, so I got these equations off the web site that I got from here:

k_1 = h*f(x_n, y_n)                          (4)
k_2 = h*f(x_n + h/2, y_n + k_1/2)            (5)
k_3 = h*f(x_n + h/2, y_n + k_2/2)            (6)
k_4 = h*f(x_n + h, y_n + k_3)                (7)
y_(n+1) = y_n + (1/6)k_1 + (1/3)k_2 + (1/3)k_3 + (1/6)k_4 + O(h^5)

and I would like you to check my logic on what they mean. The first equation we do is k_1 = h*f(x_n, y_n) (4), and what we've got here is: h = step size in seconds, and f(x_n, y_n) is the function of position (x) from the initial value x_n, at the initial time y_n. So my question is: the function itself, is it basically an Euler calculation being done there, or not? And why would I need the initial time as opposed to the time step in the function itself?
Thx
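Just to pin down what the symbols refer to, here is a minimal sketch of formulas (4)-(7) as C++ (my own illustration, not from the site; f, x_n, y_n and h stand for whatever your problem defines, with x playing the role of the independent variable, usually time, and y the quantity being integrated):

#include <cstdio>

// dy/dx = f(x, y): the derivative function being integrated.
// Here just a placeholder (dy/dx = -y); substitute your own model.
static double f(double x, double y) { (void)x; return -y; }

// One step of the classical fourth-order Runge-Kutta method, eqs. (4)-(7) above.
// x_n, y_n: current independent variable and current value; h: step size.
// Returns y_(n+1), the value one step of size h later.
static double rk4Step(double x_n, double y_n, double h)
{
    double k1 = h * f(x_n,           y_n);              // slope at the start of the step
    double k2 = h * f(x_n + h / 2.0, y_n + k1 / 2.0);   // slope at the midpoint, using k1
    double k3 = h * f(x_n + h / 2.0, y_n + k2 / 2.0);   // midpoint again, using k2
    double k4 = h * f(x_n + h,       y_n + k3);         // slope at the end, using k3
    return y_n + k1 / 6.0 + k2 / 3.0 + k3 / 3.0 + k4 / 6.0;  // weighted average of slopes
}

int main()
{
    double x = 0.0, y = 1.0, h = 0.1;
    for (int i = 0; i < 10; ++i) { y = rk4Step(x, y, h); x += h; }   // integrate to x = 1
    std::printf("y(1) ~= %f (exact exp(-1) ~= 0.367879)\n", y);
    return 0;
}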

I only know a little about this integration method and its inner workings (please correct me if I'm mistaken somewhere).

What I'm almost sure of is that it calculates the value of the function at infinitesimal distances away from a known value f(x0) at x = x0, using as many orders of derivatives as are available at x0.
This is merely a one-dimensional Taylor series. It allows you to calculate the value of a function at one point, given the values of the function and ALL its derivatives at another, known, point x0. A Taylor series is of the form:

f(x) = f(x0) + f'(x0)*(x-x0) + f''(x0)*(x-x0)^2/2! + f'''(x0)*(x-x0)^3/3! + ... + f^(n)(x0)*(x-x0)^n/n! + ...

Suppose that you know all the values of the function and its derivatives at x=x0. Replacing x with x0+dx...
f(x0+dx) = f(x0) + f'(x0)*dx + f''(x0)*dx^2/2! + f'''(x0)*dx^3/3! + ...

Therefore, you can calculate the "behaviour" of the function around a known point x0. The more derivatives you know at x0, the higher the accuracy of the result. Also, keeping dx as small as possible is a good thing to keep in mind.
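To see the idea in action, here is a small standalone C++ snippet (my own illustration, not from the original post) that approximates exp(x0 + dx) using only the value and derivatives of exp at x0; exp is convenient because every one of its derivatives at x0 is just exp(x0):

#include <cmath>
#include <cstdio>

int main()
{
    const double x0 = 1.0;            // known point
    const double dx = 0.1;            // small offset from x0
    const double fx0 = std::exp(x0);  // exp and all of its derivatives at x0

    // Truncated Taylor series: f(x0+dx) ~= sum over k of f^(k)(x0) * dx^k / k!
    double approx = 0.0;
    double term = fx0;                // k = 0 term: f(x0) * dx^0 / 0!
    for (int k = 0; k <= 4; ++k) {
        approx += term;
        term *= dx / (k + 1);         // build the (k+1)-th term from the k-th
    }

    std::printf("Taylor through dx^4: %.10f\n", approx);
    std::printf("exact exp(x0+dx):    %.10f\n", std::exp(x0 + dx));
    return 0;
}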

I hope it's a little clearer now how this method actually works.
Why it works is another subject!

edit:
the order of the derivative in each term must be equal to the number in the factorial in the denominator (and to the exponent of dx)

[Edited by - someusername on January 4, 2006 9:52:54 PM]

So the whole Runge-Kutta method is just Euler's method wrapped up in a Taylor series?
Is that correct?
And what exactly is O(h^5) in Runge-Kutta? What is this O function?
Thx

Quote:

And what exactly is O(h^5) in Runge-Kutta? What is this O function?

I suppose it's the usual O from algorithm theory. If you state that f(n) = O(g(n)), it means that f(n) <= c*g(n) for some constant c and for all n beyond some point. So this O(h^5) is a reminder that a quantity on the order of h^5 has been dropped from the formula. That quantity is the rest of the (infinite) Taylor terms.
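As a rough illustration of what that buys you (my own numbers, not tied to any particular problem): if the dropped quantity behaves like c*h^5, then with h = 0.1 it is about c*0.00001, and halving the step to h = 0.05 shrinks it by roughly a factor of 2^5 = 32.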

Well, the calculations for Euler's and Runge-Kutta's methods seem to be almost the same, the function being the same; you just do more steps in there and that's it. Therefore, can I adapt my Euler's method to fit the Runge-Kutta method?
But I am not sure what the O(h^5) in the formula is. I think that's an error factor. Can someone confirm the above statements about the function being the same, and about the O function, or deny them?
Thx
And thanks for your help, your post above was great.
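For what it's worth, a single Euler step with the same f and h would just be y_(n+1) = y_n + h*f(x_n, y_n); so RK4 does use the same derivative function, it just evaluates it four times per step, at trial points, and blends the results, where Euler evaluates it once.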

The O(h^5) is Big-O notation, one of the asymptotic notations (which you'll see used often in algorithmic analysis, for example).

It formally denotes the set of functions

O(g(x)) = { f(x) : there exist constants c > 0 and x0 such that 0 <= f(x) <= c*g(x) for all x > x0 }

Intuitively, this says that for sufficiently large x, c*g(x) is always greater than or equal to f(x), i.e., g(x) is an upper bound on f(x) up to a constant factor.

Often, however, the notation is slightly abused and interpreted thusly:

f(x) = O(g(x)) if there exist c and x0 such that f(x) <= c*g(x) for all x > x0.

Hence, in the case of O(h^5), you may interpret it as shorthand for "some quantity no bigger than c*h^5".

The notation is often used to get rid of the details of a function and keep only the important stuff, i.e. the stuff which really dictates its growth rate at the asymptotic scale (i.e., as the argument goes to infinity). In this sense, Big-O notation specifies an anonymous function with only its asymptotic growth rate (the function inside the parentheses) given.
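One small wrinkle for the ODE case (an aside, not part of the post above): for the Runge-Kutta error term the limit of interest is the step size h going to 0 rather than an argument going to infinity, but the reading is the same, the dropped term is bounded by a constant times h^5 once h is small enough. Here is a self-contained C++ check of that, reusing the same one-step RK4 sketch as earlier so it compiles on its own: it takes a single RK4 step for dy/dx = y from y(0) = 1, compares the result with the exact value exp(h), and the error should shrink by roughly 2^5 = 32 each time h is halved.

#include <cmath>
#include <cstdio>

// Right-hand side of dy/dx = y; the exact solution through (0, 1) is y = exp(x).
static double f(double x, double y) { (void)x; return y; }

// One classical fourth-order Runge-Kutta step of size h from (x, y).
static double rk4Step(double x, double y, double h)
{
    double k1 = h * f(x,           y);
    double k2 = h * f(x + h / 2.0, y + k1 / 2.0);
    double k3 = h * f(x + h / 2.0, y + k2 / 2.0);
    double k4 = h * f(x + h,       y + k3);
    return y + k1 / 6.0 + k2 / 3.0 + k3 / 3.0 + k4 / 6.0;
}

int main()
{
    double prevErr = 0.0;
    for (double h = 0.1; h > 0.005; h /= 2.0) {
        double err = std::fabs(rk4Step(0.0, 1.0, h) - std::exp(h));  // single-step error
        std::printf("h = %7.5f  error = %.3e", h, err);
        if (prevErr > 0.0)
            std::printf("  (previous error / this error = %.1f, expect ~32)", prevErr / err);
        std::printf("\n");
        prevErr = err;
    }
    return 0;
}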

Hey, I tried to implement the Runge-Kutta method in 1 dimension, but I think I made a mistake, not sure where since I followed the formulas exactly. Here is the relevant part of the source.
General data that will be used. What I am modeling here is a 1 kg object moving up at 10 m/s from Earth's surface.
Set Up:

point[0].dPosX=0;
point[0].dPosY=0;
point[0].dPosZ=0;
point[0].dVeloX=0;
point[0].dVeloY=0;
point[0].dVeloZ=0;
point[0].dMass=5.98e24;
point[1].dPosX=6.38e6;
point[1].dPosY=0;
point[1].dPosZ=0;
point[1].dVeloX=10;
point[1].dVeloY=0;
point[1].dVeloZ=0;
point[1].dMass=1;
DistanceFactor=6.38e4;
TimeFactor=0.1;




Main function for calculations:

void CDraw::Runge(){
    int x=0;
    for(;;){
        double k1=0;
        double k2=0;
        double k3=0;
        double k4=0;
        k1=TimeFactor*Step(x,iTime);
        k2=TimeFactor*Step(x+k1/2,iTime+TimeFactor/2);
        k3=TimeFactor*Step(x+k2/2,iTime+TimeFactor/2);
        k4=TimeFactor*Step(x+k3,iTime+TimeFactor);
        x=x+k1/6+k2/3+k3/3+k4/6;
        iTime+=TimeFactor;
        cout<<"Time:"<<setfill('0')<<setw(2)<<iTime<<" Velocity x:"<<setfill('0')<<setw(2)<<point[1].dVeloX<<endl;
        if(iTime>=1)system("pause");
    }
}
double CDraw::Step(double xp,double dt){
    point[1].dForceX=0;
    point[1].dForceX+=ForceX(point[1],point[0]);
    point[1].dVeloX=Velocity(point[1].dVeloX,Acceleration(point[1].dForceX,point[1].dMass),dt);
    xp+=point[1].dVeloX*dt;
    return xp;
}




Supplementary stuff that worked before on Euler's simulations, therefore most likely correct, but just in case:

//Do the time scaling using Time factor
double CDraw::Time(long double dTimeMs){
return (dTimeMs*TimeFactor);
}
//calculate the new velocity. newVelocity = oldVelocity + Acceleration*Time
double CDraw::Velocity(double dLastVelocity,double dAcceleration,double dTime){
if(bFirst){
return dLastVelocity+dAcceleration*0.5*dTime;
}
return dLastVelocity+dAcceleration*dTime;
}
//Do acceleration from f=ma
double CDraw::Acceleration(double dForce,double dMass){
return dForce/dMass;
}
//Calculate force in the X direction
//Fx=forceGravity*changeinX/radius
double CDraw::ForceX(gPOINT point,gPOINT point2){
return (Gravity(point,point2)*(point2.dPosX-point.dPosX)/Pytheg(point,point2));
}
//Calculate force in the Y direction
//Fy=forceGravity*changeiny/radius
double CDraw::ForceY(gPOINT point,gPOINT point2){
return (Gravity(point,point2)*(point2.dPosY-point.dPosY)/Pytheg(point,point2));
}
double CDraw::ForceZ(gPOINT point,gPOINT point2){
return (Gravity(point,point2)*(point2.dPosZ-point.dPosZ)/Pytheg(point,point2));
}
//Gravity, Fg=Gmm/r^2
double CDraw::Gravity(gPOINT point,gPOINT point2){
return ((6.67e-11*point.dMass*point2.dMass)/pow(Pytheg(point,point2),2));
}
//r = distance between the two points
//r = sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2)
double CDraw::Pytheg(gPOINT point,gPOINT point2){
return sqrt(pow(point2.dPosX-point.dPosX,2)+pow(point2.dPosY-point.dPosY,2)+pow(point2.dPosZ-point.dPosZ,2));
}




Thx a lot for your time, the responses I've gotten are well appreciated and very informative. Sorry for the many questions, just trying to get the hang of this stuff.
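As a general aside (not a fix for the code above, just the usual shape of the method): when RK4 is applied to a second-order problem like this one, position and velocity form one combined state, and each of the four stages evaluates both derivatives, velocity and acceleration, at that stage's provisional state rather than at stored values. A generic 1-D sketch of that pattern, with made-up struct names and the same Earth numbers as the setup above:

#include <cstdio>

struct State { double pos, vel; };
struct Deriv { double dPos, dVel; };   // dPos = velocity, dVel = acceleration

static const double G = 6.67e-11;
static const double EARTH_MASS = 5.98e24;

// Radial acceleration of a small body at distance pos from Earth's centre.
static double accel(double pos) { return -G * EARTH_MASS / (pos * pos); }

static Deriv deriv(const State& s)
{
    Deriv d;
    d.dPos = s.vel;
    d.dVel = accel(s.pos);
    return d;
}

// One RK4 step of size h: every stage uses the provisional state for that stage.
static State rk4Step(const State& s, double h)
{
    Deriv k1 = deriv(s);
    State s2 = { s.pos + 0.5 * h * k1.dPos, s.vel + 0.5 * h * k1.dVel };
    Deriv k2 = deriv(s2);
    State s3 = { s.pos + 0.5 * h * k2.dPos, s.vel + 0.5 * h * k2.dVel };
    Deriv k3 = deriv(s3);
    State s4 = { s.pos + h * k3.dPos, s.vel + h * k3.dVel };
    Deriv k4 = deriv(s4);

    State out;
    out.pos = s.pos + h * (k1.dPos + 2*k2.dPos + 2*k3.dPos + k4.dPos) / 6.0;
    out.vel = s.vel + h * (k1.dVel + 2*k2.dVel + 2*k3.dVel + k4.dVel) / 6.0;
    return out;
}

int main()
{
    State s = { 6.38e6, 10.0 };                 // on the surface, moving up at 10 m/s
    for (int i = 0; i < 10; ++i)                // one second of simulated time, h = 0.1
        s = rk4Step(s, 0.1);
    std::printf("pos = %.3f m  vel = %.6f m/s\n", s.pos, s.vel);
    return 0;
}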

Quote:
Original post by someusername
I only know very little about this integration method and its inner workings. [... Taylor-series explanation quoted in full above ...]


Forgive me, I'm going to be a little pedantic here... too much teaching [wink]

(1) Runge-Kutta methods are not "merely" a Taylor series. The formulae are consistent with a Taylor series; however, the formulae used to derive the Runge-Kutta method involve a set of coefficients that are not uniquely determined by the Taylor series, and this provides extra degrees of freedom that can improve the general performance of the method in terms of stability and accuracy.

(2) You don't use any integration method to calculate the solution an infinitesimal distance from the initial conditions. That is how integration works analytically, but you won't get very far taking infinitesimal steps [smile] Instead, an integration method takes a (generally) small but finite step.

(3) It is important to remember that higher order does not imply greater accuracy. This is often the case, but certainly not always. It depends upon the smoothness of the function you are integrating.

(4) As nilkn explained, big-O notation is asymptotic; it doesn't tell you the size of the error that you get from truncating the Taylor series.


Have fun! [smile]




Thanks for the clarifications. As I said, I only know a little about this method; I just thought I could point out the basic idea and why you need the known point.

Quote:

(3) It is important to remember that higher order does not imply greater accuracy. This is often the case, but certainly not always. It depends upon the smoothness of the function you are integrating.

Indeed, in many areas increasing the order can introduce error (approximating with polynomials, for example).
