Institute for Analysis and Scientific Computing
Lothar Nannen, Edoardo Bonetti, Stephanie Blaha
                        Numerical analysis of ODEs – Exercise sheet 4
Due date: 3.4.2025                                                                                    March 26, 2025
Exercise 16 (Programming example):
Expand the program from last week by adding adaptive step-size control via embedded Runge-
Kutta methods. Apply the scheme RK5(4) of Dormand and Prince (as given in the lecture notes) to
the predator-prey model. Compare the choice of step size with the behavior of the solution.
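The adaptive loop can be sketched as follows (a sketch, not part of the required solution). To keep it short, it uses the simple embedded Heun/Euler pair RK2(1) instead of the RK5(4) coefficients of Dormand and Prince, and illustrative predator-prey parameters; the tableau from the lecture notes plugs into the same `adaptive_step` skeleton. All function names and parameter values here are hypothetical.

```python
import numpy as np

def lotka_volterra(t, y, a=1.0, b=0.1, c=1.5, d=0.075):
    # predator-prey right-hand side (illustrative parameter values)
    x, z = y
    return np.array([a*x - b*x*z, -c*z + d*x*z])

def adaptive_step(f, t, y, h, tol):
    """One step of the embedded Heun/Euler pair RK2(1).

    The two approximations of orders 2 and 1 share the stage k1;
    their difference serves as the error estimate.
    """
    k1 = f(t, y)
    k2 = f(t + h, y + h*k1)
    y_high = y + h*(k1 + k2)/2            # order 2 (Heun)
    y_low = y + h*k1                      # order 1 (explicit Euler)
    err = np.linalg.norm(y_high - y_low)
    # standard controller: new step size from the error estimate, p = 1
    h_new = h * min(2.0, max(0.1, 0.9*(tol/max(err, 1e-16))**0.5))
    accepted = err <= tol
    return (t + h, y_high, h_new, True) if accepted else (t, y, h_new, False)

def integrate(f, t0, T, y0, h0=0.1, tol=1e-4):
    t, y, h = t0, np.asarray(y0, float), h0
    ts, ys, hs = [t], [y], []
    while t < T:
        h = min(h, T - t)                 # do not step past the end time
        t, y, h_new, ok = adaptive_step(f, t, y, h, tol)
        if ok:
            ts.append(t); ys.append(y); hs.append(h)
        h = h_new
    return np.array(ts), np.array(ys), np.array(hs)

ts, ys, hs = integrate(lotka_volterra, 0.0, 10.0, [10.0, 5.0])
```

Plotting `hs` against `ts[1:]` reveals how the controller shrinks the step where the solution varies rapidly, which is exactly the comparison the exercise asks for.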
Exercise 17:
In Lemma 2.40, we applied Richardson extrapolation of a one-step method to obtain an approximation of higher order.
   Apply the method of (two-step) Richardson extrapolation to the explicit Euler method.
 (a) Which method do you obtain? Formulate the Butcher tableau of the method.
 (b) Independently of the analysis of Section 2.7, prove that this method has consistency order
     p = 2.
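A quick numerical sanity check (no substitute for the proof in (b)): extrapolating explicit Euler as y_R = 2 y_{h/2} − y_h, i.e. Lemma 2.40 with p = 1, and measuring the error on the hypothetical test problem y′ = y, y(0) = 1 should exhibit convergence rates close to 2.

```python
import numpy as np

def euler_step(f, t, y, h):
    # one step of the explicit Euler method
    return y + h * f(t, y)

def richardson_step(f, t, y, h):
    # combine one full step with two half steps: y_R = 2*y_{h/2} - y_h
    y_full = euler_step(f, t, y, h)
    y_half = euler_step(f, t + h/2, euler_step(f, t, y, h/2), h/2)
    return 2.0 * y_half - y_full

# test problem y' = y, y(0) = 1 with exact solution e^t
f = lambda t, y: y
errors = []
for n in [10, 20, 40, 80]:
    h, y = 1.0 / n, 1.0
    for k in range(n):
        y = richardson_step(f, k*h, y, h)
    errors.append(abs(y - np.e))
# observed convergence rates; they approach 2, consistent with order p = 2
rates = [np.log2(errors[i]/errors[i+1]) for i in range(len(errors)-1)]
```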
Exercise 18:
Implicit Runge-Kutta methods do not directly fit the setting of “implicit methods” as introduced
in Section 3.2, because all the increments are defined implicitly, instead of only the approximation
yℓ+1. Nevertheless, we prove convergence of implicit Runge-Kutta methods of the correct order by
considering them as explicit timestepping methods.
 (a) We look at the map

                         K : R+ × Rn × R+ → Rn×m : (t, y, h) ↦ K(t, y, h),

     where the matrix K(t, y, h) collects the stages implicitly defined by

                 [K(t, y, h)]j := kj   with   kj = f(t + h cj, y + h ∑_{ℓ=1}^{m} Ajℓ kℓ).
     Show that for 0 < h < H with H sufficiently small, the map is well-posed and Lipschitz
     continuous with respect to y, i.e., there exists L > 0 such that

        ∀t ∈ [t0, T], 0 < h < min(H, T − t0), ∀y, ỹ ∈ Rn:   ∥K(t, y, h) − K(t, ỹ, h)∥∞ ≤ L∥y − ỹ∥.

     Note: see Proposition 3.10.
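The contraction argument behind (a) can be tried out numerically: for h small enough that h·Lf·∥A∥ < 1, the fixed-point iteration k ↦ f(t + hc, y + hAk) for the stages converges. A minimal sketch, assuming the implicit midpoint rule (m = 1, A = (1/2), c = (1/2)) and the test right-hand side f(y) = −y; these choices are illustrative, not prescribed by the exercise.

```python
import numpy as np

def stages(f, t, y, h, A, c, tol=1e-12, maxit=100):
    # fixed-point iteration for the stage matrix K(t, y, h);
    # converges when h times the Lipschitz constant of f is small enough
    m, n = len(c), len(y)
    K = np.zeros((m, n))
    for _ in range(maxit):
        K_new = np.array([f(t + h*c[j], y + h*(A[j] @ K)) for j in range(m)])
        if np.max(np.abs(K_new - K)) < tol:
            return K_new
        K = K_new
    raise RuntimeError("fixed-point iteration did not converge")

f = lambda t, y: -y                     # test right-hand side
A, c = np.array([[0.5]]), np.array([0.5])
K = stages(f, 0.0, np.array([1.0]), 0.1, A, c)
# the stage solves k = -(y + (h/2) k), i.e. k = -y / (1 + h/2)
```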
 (b) The implicit RK-method can be written as an explicit one-step method:

                        yℓ+1 = yℓ + hℓ ∑_{j=1}^{m} bj kj(tℓ, yℓ, hℓ),

     i.e. using Φ(t, y, h) = ∑_{j=1}^{m} bj kj(t, y, h).
      Using (a), show that under suitable conditions the method is stable in the sense of (2.14).
 (c) Assuming that the method has consistency order p ≥ 1 in the sense of Definition 2.7, conclude
     that under suitable conditions the following error estimate holds:

                           max_{ℓ=0,...,N} ∥y(tℓ) − yℓ∥ ≤ C h_∆^p.
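To see the estimate of (c) in action, one can run the implicit midpoint rule (consistency order p = 2) on the linear test problem y′ = λy, where the single stage has a closed form, so no iteration is needed. A sketch under these illustrative assumptions:

```python
import numpy as np

# Implicit midpoint rule as an explicit one-step method y_{l+1} = y_l + h*Phi:
# for y' = lam*y the stage satisfies k = lam*(y + (h/2)*k), hence
# k = lam*y / (1 - h*lam/2) in closed form.
lam, T, y0 = -1.0, 1.0, 1.0
errs = []
for n in [16, 32, 64, 128]:
    h, y = T/n, y0
    for _ in range(n):
        k = lam*y / (1 - h*lam/2)       # the single stage k_1(t, y, h)
        y = y + h*k                     # Phi(t, y, h) = b_1*k_1 with b_1 = 1
    errs.append(abs(y - y0*np.exp(lam*T)))
# observed rates approach p = 2, matching max |y(t_l) - y_l| <= C h^2
rates = [np.log2(errs[i]/errs[i+1]) for i in range(len(errs)-1)]
```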
Exercise 19:
Implicit Runge-Kutta methods lead to a nonlinear system of equations, which can be very costly to
solve; for an autonomous differential equation y′(t) = f (y(t)), one can use the following simplified
method.
    Let A ∈ Rm×m be a strictly lower-triangular (Aij = 0 for i ≤ j) matrix, let B ∈ Rm×m be a
lower-triangular (Bij = 0 for i < j) matrix, and define the stages ki ∈ Rd of the method as follows:
        ki = h ∇f(yℓ) ∑_{j=1}^{i} (Bij − Aij) kj + f(yℓ + h ∑_{j=1}^{i−1} Aij kj)   for i = 1, . . . , m   (1)
Here ∇f ∈ Rd×d is the Jacobian matrix of f .
 (a) Show that, for a linear function f (y) = Λy, the equations (1) define the stages of an implicit
     Runge-Kutta method.
 (b) Show that, for this method, only m linear systems of equations have to be solved (in particular,
     no nonlinear system of equations has to be solved).
 (c) What is the overall cost for solving these linear systems of equations if Bii = β for all
     i = 1, . . . , m?
 (d) Show that, if Bii = β > 0 for all i = 1, . . . , m and ∇f (yℓ ) has only negative eigenvalues, the
     linear systems of equations are uniquely solvable for all h > 0.
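For m = 1, B = (β), A = 0 and β = 1, scheme (1) reduces to the linear system (I − hβ∇f(yℓ)) k1 = f(yℓ): one linear solve per step, as (b) asserts. The update yℓ+1 = yℓ + h k1 used below is an illustrative choice of weights, not given in the exercise; the stiff test problem has negative eigenvalues, matching the setting of (d).

```python
import numpy as np

def rosenbrock_euler_step(f, Jf, y, h, beta=1.0):
    # one step of scheme (1) with m = 1, B = (beta), A = 0:
    # solve the LINEAR system (I - h*beta*Jf(y)) k1 = f(y), no nonlinear solve
    I = np.eye(len(y))
    k1 = np.linalg.solve(I - h*beta*Jf(y), f(y))
    return y + h*k1          # illustrative update with weight b_1 = 1

# stiff linear test problem y' = Lam*y with negative eigenvalues
Lam = np.diag([-1.0, -1000.0])
f = lambda y: Lam @ y
Jf = lambda y: Lam           # Jacobian of f, constant here
y = np.array([1.0, 1.0])
for _ in range(10):
    y = rosenbrock_euler_step(f, Jf, y, 0.1)
# the iteration stays bounded although h = 0.1 is huge for the stiff mode
```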
Exercise 20:
Let r ∈ N, h > 0, and 0 ≤ x0 ≤ x1 < x2 < · · · < xr ≤ h. For arbitrary g ∈ C^{r+1}([0, h]), let p ∈ Πr be
the solution of
                        p(x0 ) = g(x0 ),     p′ (xj ) = g ′ (xj ), j = 1, . . . , r.               (2)
 (a) Show that this problem is uniquely solvable.
 (b) Prove the following upper bound for the error:
                  sup_{x∈[0,h]} |p(x) − g(x)| ≤ (h^{r+1} / r!) sup_{x∈[0,h]} |g^{(r+1)}(x)|.      (3)
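A numerical illustration (no substitute for the proofs in (a) and (b)): for the illustrative choices r = 3, h = 1/2, g = sin and equidistant nodes, build p from the linear system in the monomial basis and compare the measured error with the bound (3).

```python
import math
import numpy as np

# Solve (2) for r = 3, h = 1/2, g = sin: one value condition at x0 and
# derivative conditions at x1, ..., xr determine p in Pi_r.
r, h = 3, 0.5
x = np.linspace(0.0, h, r + 1)          # nodes 0 = x0 < x1 < ... < xr = h
g, dg = np.sin, np.cos                  # g^(4) = sin, so sup|g^(4)| <= 1

M = np.zeros((r + 1, r + 1))
M[0] = [x[0]**k for k in range(r + 1)]                        # p(x0) = g(x0)
for j in range(1, r + 1):                                     # p'(xj) = g'(xj)
    M[j] = [k * x[j]**(k - 1) if k >= 1 else 0.0 for k in range(r + 1)]
rhs = np.concatenate(([g(x[0])], dg(x[1:])))
coef = np.linalg.solve(M, rhs)          # unique by part (a)

xx = np.linspace(0.0, h, 1001)
p = sum(c * xx**k for k, c in enumerate(coef))
err = np.max(np.abs(p - g(xx)))
bound = h**(r + 1) / math.factorial(r)  # bound (3) with sup|g^(4)| <= 1
# err stays below bound, consistent with (3)
```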