
MA209 Part 2: Methods for ODEs

Dr Dmitry Savostyanov
d.savostyanov@essex.ac.uk
University of Essex, 29 January 2024

Learning objectives
▶ To introduce the modelling cycle
▶ To distinguish and compare analytic and numerical methods
▶ To introduce and develop accurate numerical methods for first–order ODEs
▶ To apply numerical techniques to solve higher–order ODEs and systems of ODEs

Further reading
▶ Cleve Moler, Numerical Computing with MATLAB, Chapter 7:
uk.mathworks.com/content/dam/mathworks/mathworks-dot-com/moler/odes.pdf
▶ Alfio Quarteroni et al., Scientific Computing with MATLAB and Octave, Sections 8.3, 8.7–8.11

MA209 Part 2: Real → Ideal

Mathematical modelling
▶ Idealistic — we use ideal concepts (like numbers and functions) to describe real-world objects.
▶ Simplistic — we start with simple assumptions about the object of study, not with an attempt to describe it precisely.
▶ Focused — we attempt to reproduce a specific phenomenon, not all complex behaviour of the real object.

All models are wrong, but some are useful.

MA209 Part 2: Observations → Model

The nature of derivatives


▶ y(t) (coordinate)
▶ ẏ(t) = y'(t) = dy/dt (velocity)
▶ ÿ(t) = y''(t) = d^2y/dt^2 (acceleration)

Newton’s 2nd law of motion
▶ m·a⃗ = F⃗ = m·g⃗ − k·v⃗ (the ‘law’, with gravity m·g⃗ and air resistance F⃗air = −k·v⃗)
▶ y''(t) = −g − (k/m)·y'(t) (differential equation)
▶ y(0) = y_0 (initial condition)
▶ y'(0) = v(0) = 0 (initial condition)

MA209 Part 2: Analytic methods

Example 1 (Free fall)


y''(t) = −g,   y(0) = y_0,   y'(0) = 0

▶ integrate directly:
y'(t) = −gt + c_1
▶ use the initial condition:
y'(0) = 0  ⇒  −g·0 + c_1 = 0  ⇒  c_1 = 0
▶ integrate directly again:
y(t) = −g·t^2/2 + c_2
▶ use the other initial condition:
y(0) = y_0  ⇒  −g·0^2/2 + c_2 = y_0  ⇒  c_2 = y_0

Hence
y(t) = y_0 − g·t^2/2
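This result is easy to cross-check symbolically. A minimal sketch, assuming MATLAB’s Symbolic Math Toolbox is available:

syms y(t) g y0                       % symbolic function and parameters
Dy  = diff(y, t);                    % y'(t), needed to state the initial condition
sol = dsolve(diff(y,t,2) == -g, y(0) == y0, Dy(0) == 0);
simplify(sol)                        % returns y0 - (g*t^2)/2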


Example 2 (Air–suspended fall)


y''(t) + α·y'(t) = −g,   y(0) = y_0,   y'(0) = 0

▶ integrate directly:
y'(t) + α·y(t) = −gt + c_1
▶ use the initial conditions:
y'(0) + α·y(0) = −g·0 + c_1  ⇒  0 + α·y_0 = c_1
y'(t) + α·y(t) = α·y_0 − gt
▶ compute the integrating factor:
I.F. = exp(∫ α dt) = exp(αt) = e^{αt}
▶ multiply both sides by the I.F.:
y'(t)·e^{αt} + α·e^{αt}·y(t) = (α·y_0 − gt)·e^{αt}
(y(t)·e^{αt})' = α·y_0·e^{αt} − g·t·e^{αt}


Example 2 (continued)

y''(t) + α·y'(t) = −g,   y(0) = y_0,   y'(0) = 0

(y(t)·e^{αt})' = α·y_0·e^{αt} − g·t·e^{αt}

▶ integrate by parts:
y(t)·e^{αt} = ∫ (α·y_0·e^{αt} − g·t·e^{αt}) dt
            = α·y_0·e^{αt}/α − g·(t·e^{αt}/α − ∫ e^{αt}/α dt)
            = y_0·e^{αt} − g·t·e^{αt}/α + g·e^{αt}/α^2 + c_2
y(t) = y_0 − g·t/α + g/α^2 + c_2·e^{−αt}
▶ apply the initial condition:
y(0) = y_0 − g·0/α + g/α^2 + c_2·e^{−α·0}
y_0  = y_0 + g/α^2 + c_2
c_2  = −g/α^2

Hence
y(t) = y_0 − g·t/α + (g/α^2)·(1 − e^{−αt})

MA209 Part 2: Model → Reality

Interpretation and verification


▶ free fall:
y⋆(t) = y_0 − g·t^2/2
▶ air–suspended fall:
y(t) = y_0 − g·t/α + (g/α^2)·(1 − e^{−αt})
▶ consistency check: when α → 0,
e^{−αt} = 1 − αt + α^2·t^2/2 − O(α^3·t^3)
y(t) = y_0 − g·t/α + (g/α^2)·(αt − α^2·t^2/2 + O(α^3·t^3)) → y_0 − g·t^2/2 = y⋆(t)

[Figure: trajectories y(t) for α = 1, 2, 3, 5.]
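The consistency check can also be delegated to the computer. A hedged sketch, again assuming the Symbolic Math Toolbox (the variable a plays the role of α):

syms g t y0 a positive
y = y0 - g*t/a + g/a^2*(1 - exp(-a*t));
limit(y, a, 0)                       % returns y0 - (g*t^2)/2, the free-fall solution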
MA209 Part 2: Euler’s method

Air–suspended fall

▶ Laws of physics → ODE:
m·a⃗ = m·g⃗ − k·v⃗
y''(t) = −g − (k/m)·y'(t)
y'(t) = −gt − (k/m)·(y(t) − y_0) = f(t, y)
▶ ODE → numerical method:
(y(t + ∆t) − y(t))/∆t ≈ y'(t) = f(t, y(t))
y(t + ∆t) ≈ y(t) + ∆t·f(t, y(t))
y_{n+1} = y_n + h·f(t_n, y_n)


(forward) Euler’s method


▶ collocation: write the equation at t = t_n:
y'(t) = f(t, y)  ⇒  y'(t_n) = f(t_n, y(t_n))
▶ approximate the derivative with a forward difference:
(y(t_{n+1}) − y(t_n)) / (t_{n+1} − t_n) ≈ y'(t_n) = f(t_n, y(t_n))
▶ replace the approximate equations with exact ones:
(y_{n+1} − y_n) / (t_{n+1} − t_n) = f(t_n, y_n)
▶ compute y_{n+1} = y_n + (t_{n+1} − t_n)·f(t_n, y_n), where the step-size t_{n+1} − t_n is denoted ∆t, τ, or h
▶ expect y_n ≈ y(t_n)
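A minimal Matlab sketch of the scheme above, in the same style as the snippets that follow; the right-hand side f and the problem data are illustrative assumptions:

f  = @(t,y) -9.81*t;                 % example: free fall recast as y'(t) = -g*t
t0 = 0; T = 2; y0 = 100; N = 100;    % problem data (assumed values)
h  = (T - t0)/N;                     % uniform step-size
t  = t0 + h*(0:N);                   % grid t_0, ..., t_N
y  = zeros(1, N+1);
y(1) = y0;                           % Matlab indices start at 1, so y(1) = y_0
for n = 1:N
    y(n+1) = y(n) + h*f(t(n), y(n)); % y_{n+1} = y_n + h*f(t_n, y_n)
end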

Accuracy analysis

y'(t) = f(t, y),   y(t_0) = y_0

▶ Taylor expansion for y(t) around t = t_0 gives
y(t_1) = y(t_0) + y'(t_0)·(t_1 − t_0) + (y''(ξ)/2)·(t_1 − t_0)^2
       = y(t_0) + f(t_0, y_0)·h + O(h^2),   where h = t_1 − t_0
▶ The first step of Euler’s method produces
y_1 = y_0 + f(t_0, y_0)·h
▶ Accuracy of one step is |y_1 − y(t_1)| ≲ O(h^2)
▶ After N = T/h steps, |y_N − y(t_N)| ≲ N·O(h^2) = O(T·h) = O(h)

MA209 Part 2: Heun’s methods

Heun’s method aka improved Euler

▶ Gradient k_1 is too slow.
▶ Gradient k_2 is too steep.
▶ Let’s take their average:
y_{n+1} = y_n + (1/2)·(k_1 + k_2)·h

for n=1:N
  k1 = f(t(n), y(n));
  k2 = f(t(n)+h, y(n)+h*k1);
  y(n+1) = y(n) + h*(k1+k2)/2;
end

Accuracy analysis
▶ y(t_1) = y(t_0) + y'(t_0)·h + (1/2)·y''(t_0)·h^2 + O(h^3), where y'(t_0) = f(t_0, y_0) and
y''(t_0) = df/dt|_{t=t_0} = (∂f/∂t + ∂f/∂y · dy/dt)|_{t=t_0} = (∂f/∂t + ∂f/∂y · f)|_{t=t_0}
▶ k_1 = f(t_0, y_0)
▶ k_2 = f(t_0 + h, y_0 + h·k_1) = f(t_0, y_0) + ∂f/∂t|_{t=t_0}·h + ∂f/∂y|_{t=t_0}·h·k_1 + O(h^2)

Finally, y(t_1) = y_0 + k_1·h + (1/2)·(k_2 − k_1)·h + O(h^3) = y_1 + O(h^3),
so |y_1 − y(t_1)| ≲ O(h^3) and |y_N − y(t_N)| ≲ O(h^2).

Heun’s method generalised

▶ k_1 = f(t_n, y_n)
▶ k_2 = f(t_n + a·h, y_n + b·h·k_1)
▶ y_{n+1} = y_n + (c_1·k_1 + c_2·k_2)·h

Accuracy analysis
▶ y(t_1) = y(t_0) + f(t_0, y_0)·h + (1/2)·(∂f/∂t|_0 + ∂f/∂y|_0 · f(t_0, y_0))·h^2 + O(h^3)
▶ k_1 = f(t_0, y_0)
▶ k_2 = f(t_0 + a·h, y_0 + b·h·k_1) = f(t_0, y_0) + ∂f/∂t|_0·a·h + ∂f/∂y|_0·b·h·k_1 + O(h^2)
▶ y_1 = y_0 + c_1·k_1·h + c_2·(k_1 + ∂f/∂t|_0·a·h + ∂f/∂y|_0·b·h·k_1)·h + O(h^3)

Matching the two expansions term by term:

k_1·h         = c_1·k_1·h + c_2·k_1·h        c_1 + c_2 = 1               a = b
(1/2)·h^2     = c_2·a·h^2             ⇔      c_2·a = 1/2          ⇔      c_1 = 1 − 1/(2a)
(1/2)·k_1·h^2 = c_2·b·k_1·h^2                c_2·b = 1/2                 c_2 = 1/(2a)
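Any a ≠ 0 satisfying these conditions gives a second-order method. A minimal sketch, assuming f, t, h, and N are set up as in the earlier snippets:

a  = 1/2;                            % free parameter: a = 1 gives Heun, a = 1/2 the midpoint rule
c2 = 1/(2*a);                        % order conditions: c2*a = 1/2
c1 = 1 - c2;                         %                   c1 + c2 = 1, and b = a
for n = 1:N
    k1 = f(t(n), y(n));
    k2 = f(t(n) + a*h, y(n) + a*h*k1);
    y(n+1) = y(n) + (c1*k1 + c2*k2)*h;
end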
MA209 Part 2: Runge–Kutta methods

The classical Runge–Kutta method

▶ k_1 = f(t_n, y_n)
▶ k_2 = f(t_n + h/2, y_n + (h/2)·k_1)
▶ k_3 = f(t_n + h/2, y_n + (h/2)·k_2)
▶ k_4 = f(t_n + h, y_n + h·k_3)
▶ y_{n+1} = y_n + (1/6)·(k_1 + 2k_2 + 2k_3 + k_4)·h

for n=1:N
  k1 = f(t(n), y(n));
  k2 = f(t(n)+h/2, y(n)+h*k1/2);
  k3 = f(t(n)+h/2, y(n)+h*k2/2);
  k4 = f(t(n)+h, y(n)+h*k3);
  y(n+1) = y(n)+h*(k1+2*k2+2*k3+k4)/6;
end

Comparison of accuracy for different methods

algorithm                       cost (# func. eval. per step)   accuracy (one step)   accuracy (final)
Euler’s method                  1                               O(h^2)                O(h)
Heun’s method                   2                               O(h^3)                O(h^2)
Heun’s third–order rule         3                               O(h^4)                O(h^3)
classical Runge–Kutta method    4                               O(h^5)                O(h^4)
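The final-accuracy column can be checked empirically. A hedged sketch (the test problem y' = −y with exact solution e^{−t}, and all names, are local assumptions); halving h should roughly halve the Euler error, while substituting the Heun or Runge–Kutta update from the snippets above should shrink it by a factor of 4 or 16 instead:

f = @(t,y) -y; T = 1;                % test problem y' = -y, y(0) = 1
for N = [10 20 40 80]
    h = T/N; y = 1;
    for n = 1:N
        y = y + h*f((n-1)*h, y);     % forward Euler update
    end
    fprintf('h = %.4f   error = %.2e\n', h, abs(y - exp(-T)));
end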

MA209 Part 2: Adaptive choice of step–size

Heun’s = modified Euler’s method

▶ k_1 = f(t_n, y_n)
▶ k_2 = f(t_n + h/2, y_n + (h/2)·k_1)
▶ y_{n+1} = y_n + k_2·h, with one-step accuracy |y_1 − y(t_1)| ≲ O(h^3)

Euler’s method with step-size h/2 (starting from z_0 = y_0 and reusing the same two gradients):
▶ z_1 = z_0 + (1/2)·k_1·h
▶ z_2 = z_1 + (1/2)·k_2·h = z_0 + (1/2)·k_1·h + (1/2)·k_2·h, with |z_2 − y(t_1)| ≲ O(h^2)

Error estimate:
e_1 = |y_1 − y(t_1)| ≤ |y_1 − z_2| + |z_2 − y(t_1)| ≃ 2·|y_1 − z_2| = ẽ_1,
since |y_1 − y(t_1)| ≲ O(h^3), while |y_1 − z_2| and |z_2 − y(t_1)| are both ≲ O(h^2).

If ẽ_1 is large, reduce h and repeat the step. If ẽ_1 is small, increase h and continue.

Heun’s method with adaptive step–size

▶ Implementation caveats:
  ▶ Since h varies, the number of steps N is not known in advance. Hence, the vectors t and y cannot be pre-allocated and need to be appended at each step.
  ▶ Extra care is needed for the last point t_n to land exactly at the desired position T.
▶ A test with a large air-suspension parameter shows that grid points are dense in some areas and sparse in others. How can we explain it?
▶ If we apply the same algorithm with air suspension small or absent, we can expect grid points to be distributed more uniformly along the line. (Why?)
A sketch of the adaptive loop is given below.
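A minimal sketch of the adaptive loop; the names (f, t0, T, y0, tol, h) and the shrink/grow factors are assumptions, not a definitive implementation:

f = @(t,y) -9.81 - 5*y;              % example: velocity of an air-suspended fall
t0 = 0; T = 2; y0 = 0;               % problem data (assumed values)
tol = 1e-4; h = 0.1;                 % error tolerance and initial step-size
t = t0; y = y0; n = 1;               % t and y grow dynamically, since N is unknown
while t(n) < T
    h  = min(h, T - t(n));           % make the last point land exactly on T
    k1 = f(t(n), y(n));
    k2 = f(t(n) + h/2, y(n) + h*k1/2);
    y1 = y(n) + k2*h;                % Heun step, one-step accuracy O(h^3)
    z2 = y(n) + (k1 + k2)*h/2;       % two Euler half-steps, accuracy O(h^2)
    err = 2*abs(y1 - z2);            % the error estimate derived above
    if err > tol
        h = h/2;                     % too large: shrink h and repeat the step
    else
        t(n+1) = t(n) + h;           % small enough: accept the step
        y(n+1) = y1;
        n = n + 1;
        if err < tol/4, h = 2*h; end % comfortably small: grow h and continue
    end
end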

Runge–Kutta–Fehlberg method (RK45)

▶ Ideas of implementation:
  ▶ Use 6 gradients per step to calculate two approximations: the main one with one-step accuracy O(h^6) and a test one with one-step accuracy O(h^5).
  ▶ Evaluate the mismatch between them and use it to estimate the error and adapt the step-size.
▶ Reason for the name: overall, the two algorithms have final accuracy O(h^4) and O(h^5), hence RK45.
▶ Due to the high order of the method, very few steps are required, so the algorithm is computationally efficient.
▶ A free-fall problem can be solved exactly. (Why?)
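In practice one rarely codes such pairs by hand: MATLAB ships an adaptive embedded Runge–Kutta 4(5) pair as ode45. A minimal usage sketch on the air-suspended velocity equation v'(t) = −g − αv(t); the parameter values are illustrative assumptions:

g = 9.81; alpha = 2;                 % illustrative values
rhs = @(t,v) -g - alpha*v;           % right-hand side f(t,v)
[t, v] = ode45(rhs, [0 5], 0);       % v(0) = 0; steps are chosen adaptively
plot(t, v)                           % v(t) approaches the terminal velocity -g/alpha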
MA209 Part 2: Linear multi–step methods

Adams–Bashforth two–step method

y'(t) = f(t, y),   y(t_0) = y_0

▶ y(t_{n+1}) = y(t_n) + y'(t_n)·h + (1/2)·y''(t_n)·h^2 + O(h^3)
▶ y''(t_n) = (y'(t_n) − y'(t_{n−1}))/h + O(h)
▶ First we do one step of Euler:
y_1 = y_0 + f(t_0, y_0)·h + O(h^2)
▶ Then we switch to the Adams–Bashforth two-step method:
y_{n+1} = y_n + (3/2)·f(t_n, y_n)·h − (1/2)·f(t_{n−1}, y_{n−1})·h + O(h^3)
▶ Error estimate at the end is
|y_N − y(t_N)| ≲ O(h^2) + (N − 1)·O(h^3) ≈ (T + 1)·h^2 = O(h^2)
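A minimal sketch of this scheme, assuming f, the uniform grid t with step h, and N are set up as in the earlier snippets:

y = zeros(1, N+1);
y(1) = y0;                           % Matlab indices are shifted by one: y(1) = y_0
y(2) = y(1) + h*f(t(1), y(1));       % start-up: one step of Euler
for n = 2:N                          % then the two-step Adams-Bashforth update
    y(n+1) = y(n) + (3*f(t(n), y(n)) - f(t(n-1), y(n-1)))*h/2;
end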

MA209 Part 2: Systems of ODEs

Lotka–Volterra prey–predator model

▶ Differential equations (x is the prey population, y the predator population):

x'(t) = ax − bxy,   x(0) = x_0,
y'(t) = −cy + dxy,  y(0) = y_0.

▶ Euler’s method:

x_{n+1} = x_n + (a·x_n − b·x_n·y_n)·h,
y_{n+1} = y_n + (−c·y_n + d·x_n·y_n)·h.

▶ Matlab code snippet:

for n=1:N
  x(n+1) = x(n) + ( a*x(n) - b*x(n)*y(n))*h;
  y(n+1) = y(n) + (-c*y(n) + d*x(n)*y(n))*h;
end

[Figure: populations x(t) and y(t) oscillating against t.]
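A hedged sketch of the same model solved with the built-in adaptive solver, in vector form; the parameter values are illustrative assumptions:

a = 2; b = 1; c = 1; d = 1;                 % illustrative model parameters
u0  = [1; 1];                               % u = [x; y], prey and predator
rhs = @(t,u) [ a*u(1) - b*u(1)*u(2); ...    % x' =  ax - bxy
              -c*u(2) + d*u(1)*u(2) ];      % y' = -cy + dxy
[t, u] = ode45(rhs, [0 10], u0);
plot(t, u(:,1), t, u(:,2))                  % both populations against time
% plot(u(:,1), u(:,2)) shows the phase-plane view instead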


[Figure: the same solution plotted in the phase plane, predator population y against prey population x.]

Higher–order ODEs

▶ Differential equation, recast as a system of first-order ODEs:

y''(t) = −g − α·y'(t)   ⇔   v'(t) = −g − α·v(t),
                            y'(t) = v(t),

with y(0) = y_0 and v(0) = v_0 = 0.

▶ Euler’s method:

v_{n+1} = v_n + (−g − α·v_n)·h,
y_{n+1} = y_n + v_n·h.

▶ Matlab code snippet:

for n=1:N
  v(n+1) = v(n) + h*(-g - alpha*v(n));   % v' = -g - alpha*v
  y(n+1) = y(n) + h*v(n);                % y' = v
end
MA209 Part 2: Summary

Numerical methods for ordinary differential equations


▶ Approximation: A smooth movement is replaced with a sequence of straight ‘jumps’ from the current grid point to the next one. A continuous function y(t) is approximated on a numerical grid {t_n}, n = 0, 1, ..., N, by a discrete set of values y_n ≈ y(t_n).
▶ Accuracy: The basic forward Euler’s method follows the gradient direction at the current point and hence correctly approximates the linear trend of the solution y(t). More advanced methods, e.g. Heun and Runge–Kutta, combine several gradients computed within one step to approximate higher-order terms of the Taylor expansion of y(t) and hence provide better accuracy. Multi-step methods use gradient directions from several previous steps instead.
▶ Step adaptation: Several numerical methods can be run in parallel and their results cross-compared to estimate the numerical error and reduce or increase the step-size accordingly.
▶ Applications: The methods are designed for first-order ODEs but can also be applied to systems of first-order ODEs and to higher-order ODEs recast as such systems.
