Overview
- Runge–Kutta methods
- Vectorisation of ODEs
- Adaptive stepsize
- Subjects not covered
- Summary
Recap: The midpoint method
First order ODE
dy/dx = f(x, y)

Take a trial step to the midpoint:

y_mid = y_n + (Δ/2) f(x_n, y_n)   [Looks like Euler!]

Then use this to evaluate f at the midpoint:

y_{n+1} = y_n + Δ f(x_mid, y_mid)
Error analysis
1. Error in the derivative:
   [y_true(x_{n+1}) − y_true(x_n)] / Δ = dy_true/dx |_{x_mid} + O(Δ²) = f(x_mid, y_true(x_mid)) + O(Δ²)
2. Error in y_mid:
   y_mid = y_true(x_n + Δ/2) + O(Δ²)   [the Euler error]
3. Error in f(x_mid, y_mid):
   f(x_mid, y_mid) = f(x_mid, y_true(x_mid) + O(Δ²)) = f_true + O(Δ²)
4. Error per step: Δ · O(Δ²) = O(Δ³)
5. Total error: O(Δ²)
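A quick numerical check of these orders (a sketch; the test problem dy/dx = y, y(0) = 1 with exact solution e^x is chosen here purely for illustration): halving Δ should roughly halve the Euler error and quarter the midpoint error.

% Global error scaling: Euler ~ O(Delta), midpoint ~ O(Delta^2)
% Assumed test problem: dy/dx = y, y(0) = 1, exact solution exp(x)
f = @(x,y) y;
for dx = [0.1 0.05 0.025]
    x = 0:dx:1;
    N = length(x);
    yE = 1;  yM = 1;                       % Euler and midpoint solutions
    for n = 1:N-1
        yE = yE + dx*f(x(n),yE);           % Euler step
        ymid = yM + 0.5*dx*f(x(n),yM);     % trial step to midpoint
        yM = yM + dx*f(x(n)+0.5*dx,ymid);  % midpoint step
    end
    fprintf('dx=%6.3f  Euler error=%.2e  midpoint error=%.2e\n', ...
            dx, abs(yE-exp(1)), abs(yM-exp(1)))
end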
Runge–Kutta methods
- Midpoint method: take a trial step to evaluate the right-hand side f(x, y) at the midpoint → improved accuracy
- There are many ways of evaluating f(x, y) that agree to leading order
- Use this to eliminate higher-order error terms
Most used: Fourth-order Runge–Kutta

y_{n+1} = y_n + (Δ/6)(F1 + 2 F2 + 2 F3 + F4)

with

F1 = f(x_n, y_n)                       [Euler term]
F2 = f(x_n + Δ/2, y_n + (Δ/2) F1)      [midpoint term]
F3 = f(x_n + Δ/2, y_n + (Δ/2) F2)      [modified midpoint term]
F4 = f(x_n + Δ, y_n + Δ F3)            [endpoint term]

Step error: O(Δ⁵)
Total error: O(Δ⁴)
Note: For y′(x) = f(x), RK4 gives Simpson's rule!
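A minimal MATLAB sketch of a single RK4 step; the right-hand side f, the point (xn, yn) and the step size Delta below are placeholder choices, not part of the slides:

% One RK4 step for dy/dx = f(x,y)
f  = @(x,y) -y;      % example right-hand side (assumed for illustration)
xn = 0;  yn = 1;     % current point
Delta = 0.1;         % step size

F1 = f(xn,           yn);                % Euler term
F2 = f(xn + Delta/2, yn + Delta/2*F1);   % midpoint term
F3 = f(xn + Delta/2, yn + Delta/2*F2);   % modified midpoint term
F4 = f(xn + Delta,   yn + Delta*F3);     % endpoint term
yn1 = yn + Delta/6*(F1 + 2*F2 + 2*F3 + F4);   % y at xn+Delta, step error O(Delta^5)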
Why Runge–Kutta?
- RK4 is preferable over Euler (midpoint) if the improved accuracy allows us to take 4 (2) times longer steps, to compensate for the extra function evaluations. This is usually the case (but not always).
- Runge–Kutta does not guarantee stability, but:
  1. Accuracy is sometimes more important (short runs)
  2. Instability can be delayed by higher accuracy
  3. Runge–Kutta methods work for all types of ODEs. They can be made into black-box ODE solvers
Reduction to first order ODEs
We can transform any ODE into a set of coupled first order equations, like we did with the second order equation
Example (Nth order equation)

d^N y / dt^N = f(y, t)

With y_1 = y, y_2 = ẏ, …, y_N = d^{N−1}y/dt^{N−1}:

dy_1/dt = y_2        [= ẏ(t)]
dy_2/dt = y_3        [= ÿ(t)]
  ⋮
dy_N/dt = f(y_1, t)
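As a concrete sketch, a hypothetical third-order equation y''' = −y (made up here only to illustrate the pattern) becomes a three-component right-hand side in MATLAB:

function dy = thirdorder(t,y)
% Hypothetical example: y''' = -y, rewritten with y1 = y, y2 = y', y3 = y''
dy(1) = y(2);      % dy1/dt = y2
dy(2) = y(3);      % dy2/dt = y3
dy(3) = -y(1);     % dy3/dt = f(y1,t) = -y1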
Example (Kepler equations)
r̈ − r φ̇² = −GM/r²
r φ̈ + 2 ṙ φ̇ = 0

With y_1 = r, y_2 = ṙ, y_3 = φ, y_4 = φ̇:

dy_1/dt = dr/dt  = y_2
dy_2/dt = dṙ/dt  = y_1 y_4² − GM/y_1²
dy_3/dt = dφ/dt  = y_4
dy_4/dt = dφ̇/dt  = −2 y_2 y_4 / y_1
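The same first-order system as a MATLAB right-hand side function (a sketch; the function name kepler and the value of GM are illustrative choices):

function dy = kepler(t,y)
% y(1) = r, y(2) = dr/dt, y(3) = phi, y(4) = dphi/dt
GM = 1;                               % units chosen for illustration
dy(1) = y(2);                         % dr/dt      = rdot
dy(2) = y(1)*y(4)^2 - GM/y(1)^2;      % drdot/dt   = r*phidot^2 - GM/r^2
dy(3) = y(4);                         % dphi/dt    = phidot
dy(4) = -2*y(2)*y(4)/y(1);            % dphidot/dt = -2*rdot*phidot/r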
ODEs in vector form
We can write the Euler algorithm (and midpoint and RK4) in vector form:
dy_1/dx = f_1(x, y_1, …, y_n)
dy_2/dx = f_2(x, y_1, …, y_n)
  ⋮
dy_n/dx = f_n(x, y_1, …, y_n)

or compactly

dy/dx = f(x, y)
Implement f(x, y) as a vector function and make the algorithm call that function
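One way to realise this in MATLAB is to pass the right-hand side as a function handle, so the same loop works for any system. A sketch (the names euler_solver, tspan, y0 are illustrative, not from the slides):

function [t,y] = euler_solver(f, tspan, y0, dt)
% Forward Euler for dy/dt = f(t,y); f is a function handle returning a row vector
t = tspan(1):dt:tspan(2);
N = length(t);
y = zeros(N, length(y0));
y(1,:) = y0;                               % initial conditions
for n = 1:N-1
    y(n+1,:) = y(n,:) + dt*f(t(n), y(n,:));
end

Called e.g. as [t,y] = euler_solver(@pendulum, [0 tmax], [theta0 omega0], dt); swapping @pendulum for another right-hand side changes the problem without touching the solver.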
Pendulum
The equation of motion for a simple pendulum is

d²θ/dt² = −(g/ℓ) sin θ
Step 1: Dimensionless time variable
τ = √(g/ℓ) t   ⇒   d²θ/dτ² = −sin θ
Step 2: Reduce to first order equations
dθ/dτ = ω
dω/dτ = −sin θ

In vector form, with y = (θ, ω):

dy/dτ = ( y_2, −sin y_1 )
MATLAB implementation
function dy = pendulum(t,y)
dy(1) = y(2);        % dtheta/dtau = omega
dy(2) = -sin(y(1));  % domega/dtau = -sin(theta)
Integrating pendulum equations
Euler method
y(1,:) = [theta0 omega0];   % initial conditions
t = 0:dt:tmax;
N = length(t);
for n=1:N-1
    y(n+1,:) = y(n,:) + dt*pendulum(t(n),y(n,:));
end
Midpoint method
for n=1:N-1
    ymid = y(n,:) + 0.5*dt*pendulum(t(n),y(n,:));        % trial step to midpoint
    y(n+1,:) = y(n,:) + dt*pendulum(t(n)+0.5*dt,ymid);   % full step using midpoint slope
end
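The corresponding fourth-order Runge–Kutta loop for the pendulum looks like this (a sketch assuming the same setup as the Euler code above; only the step changes):

for n=1:N-1
    F1 = pendulum(t(n),        y(n,:));
    F2 = pendulum(t(n)+0.5*dt, y(n,:)+0.5*dt*F1);
    F3 = pendulum(t(n)+0.5*dt, y(n,:)+0.5*dt*F2);
    F4 = pendulum(t(n)+dt,     y(n,:)+dt*F3);
    y(n+1,:) = y(n,:) + dt/6*(F1 + 2*F2 + 2*F3 + F4);
end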
Slow vs fast motion
Example: Kepler problem
- Near perihelion: fast motion, r(t) and φ(t) vary rapidly → need small time steps
- Near aphelion: slow motion → can afford bigger time steps
Solution: Let the computer decide what stepsize is needed!
Adaptive stepsize
- Let the computer decide the stepsize depending on what precision we want
- Estimate the error at the current Δt
- Increase Δt when the error estimate is too small
- Decrease Δt when the error estimate is too big
How to estimate the error?
Two approaches:
- Take two steps with Δt/2 and compare with one step with Δt
- Find the difference between a higher-order and a lower-order step (e.g. RK4 and midpoint)
Adaptive stepsize
What is too small, too big?
- A fixed number is not useful: in the Kepler problem φ is in radians but r could be in metres!
- Try to keep the relative error δr/r below a relative tolerance ε_rel
- r can become zero, so we also need an absolute tolerance
- Use physical insight!

The optimal step size can be estimated using a Taylor expansion:

Δt_opt = Δt (ε_rel / ε)^(1/N)

where ε is the estimated error and N = order of the algorithm.
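A sketch of the first approach (step doubling) with the Euler method and the stepsize formula above, using the pendulum right-hand side from earlier; the initial condition, tolerance, and safety factor 0.9 are illustrative choices, not from the slides:

% Adaptive stepsize by step doubling (sketch; Euler method, so N = 1)
f = @pendulum;
y = [0.5 0];  t = 0;  tmax = 20;          % assumed initial conditions
dt = 0.01;  eps_rel = 1e-4;  Nord = 1;    % assumed tolerance and order
while t < tmax
    y1 = y + dt*f(t,y);                          % one full step
    yh = y + 0.5*dt*f(t,y);                      % two half steps
    y2 = yh + 0.5*dt*f(t+0.5*dt,yh);
    err = max(abs(y2 - y1)./(abs(y2) + 1e-12));  % relative error estimate
    if err <= eps_rel
        t = t + dt;  y = y2;                     % accept the step
    end
    dt = 0.9*dt*(eps_rel/max(err,1e-16))^(1/Nord);  % adjust stepsize using the formula above
end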
Fixed vs adaptive stepsize
Subjects not covered
- Stiff sets of equations
- Implicit methods
- Richardson extrapolation
- Boundary value problems ← next!
Summary
Runge–Kutta methods: systematic improvement in accuracy
- Midpoint method = second-order Runge–Kutta
- Fourth-order Runge–Kutta is very widely used
- Higher-order methods are used in adaptive stepsize algorithms
Adaptive stepsize
- Allows us to vary the stepsize locally during the integration
- Based on estimating the error in the algorithm
Vectorising coupled ODEs and using function handles allows us to write universal ODE solvers