NSPDE
\[
\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} = -2\,, \qquad (x, y) \in A
\]
with ψ = 0 on ∂A, the boundary of A, has exact solution
\[
\psi(x, y) = 1 - y^2 - \frac{32}{\pi^3} \sum_{k=0}^{\infty}
\frac{(-1)^k \cosh\bigl(\tfrac{(2k+1)\pi x}{2}\bigr)\,\cos\bigl(\tfrac{(2k+1)\pi y}{2}\bigr)}
{(2k+1)^3 \cosh\bigl(\tfrac{(2k+1)\pi}{2}\bigr)}\,.
\]
The merit of this exact solution is questionable for three important reasons.
The evaluation of ψ(x, y) at any given point (x, y) requires the computation of a sum. In this case several hundred terms are required to achieve acceptable numerical accuracy, but often the number of terms can run into many thousands.

More subtly, the terms in this sum are not trivial to calculate, despite the fact that each term is expressed in terms of simple functions. For example, how does one calculate the quotient
\[
\frac{\cosh\bigl(\tfrac{(2k+1)\pi x}{2}\bigr)}{\cosh\bigl(\tfrac{(2k+1)\pi}{2}\bigr)}
\]
8   CHAPTER  1.   NUMERICAL  AND  MATHEMATICAL  PRELIMINARIES
when k is large? Certainly one cannot compute the numerator and denominator separately and divide! This action will very quickly induce numerical overflow for large k, since cosh(kx) → ∞ as k → ∞ whenever x is non-zero.
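One safe strategy is to factor out the dominant exponentials so that only decaying exponentials are ever evaluated. A minimal Python sketch (the function name `cosh_ratio` is illustrative, not from the notes):

```python
import math

def cosh_ratio(a, x):
    """Stable evaluation of cosh(a*x)/cosh(a) for |x| <= 1.

    Writing cosh in terms of exponentials and cancelling the dominant
    factor e^|a| gives a form in which every exponential that is
    evaluated has a non-positive argument, so nothing can overflow.
    """
    ax, A = abs(a * x), abs(a)
    # cosh(ax)/cosh(A) = e^(ax - A) * (1 + e^(-2ax)) / (1 + e^(-2A))
    return math.exp(ax - A) * (1.0 + math.exp(-2.0 * ax)) / (1.0 + math.exp(-2.0 * A))

# Modest argument: agrees with the naive quotient.
print(cosh_ratio(1.0, 0.5))          # same value as cosh(0.5)/cosh(1.0)

# Large k: the naive quotient would overflow (cosh of an argument ~3000),
# but the factored form simply underflows harmlessly towards zero.
k = 1000
print(cosh_ratio((2 * k + 1) * math.pi / 2, 0.5))
```

The same idea (cancelling the common growing exponential before dividing) works for ratios of sinh and of exponential sums generally.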
A minor modification to the boundary conditions means a new analytical solution. By contrast, such a change has little impact on the construction of a numerical solution to this PDE.
In conclusion, the commonly held belief that a numerical solution is inferior to an analytical solution is often false. This example illustrates that in many circumstances the reverse may be the case. The important contributions made by analysis in the theory of ordinary and partial differential equations more often lie in the derivation of conditions to ensure the existence and uniqueness of solutions rather than in their analytical construction. It is only after existence and uniqueness of solution are guaranteed that it makes sense to use a numerical procedure to construct the solution of the ODE or PDE.
1.2   Classification problem
The specification of a problem in terms of a differential equation comes in two parts. First, there is the differential equation itself, which may be viewed as an algebraic relationship connecting a function and its derivatives. Second, there are two different types of additional conditions which must be supplied in order to select the desired solution from the many possible solutions. These additional conditions are called initial conditions and boundary conditions.

Initial conditions are conditions to be satisfied everywhere for a fixed value of an independent variable, usually interpreted as time and denoted by t.

Boundary conditions are conditions to be satisfied everywhere on the boundary ∂A of the region A on which the differential equation is defined.

Different types of PDE require different combinations of initial and boundary conditions, and this in turn means a different numerical procedure. Consider, for example, the family of second order partial differential equations
\[
A\,\frac{\partial^2 u}{\partial x^2} + B\,\frac{\partial^2 u}{\partial x\,\partial y} + C\,\frac{\partial^2 u}{\partial y^2}
+ D\,\frac{\partial u}{\partial x} + E\,\frac{\partial u}{\partial y} + F u = g \tag{1.1}
\]
in which A = A(x, y, u_x, u_y, u), …, F = F(x, y, u_x, u_y, u). Equations of this type can be classified in one of three ways depending on the value of B² − 4AC.
1.2.1   Elliptic  equation
Equation (1.1) is of Elliptic type whenever B² − 4AC < 0. Elliptic partial differential equations give rise to Boundary Value Problems (BVPs); that is, only boundary conditions must be supplied to
complete the specification of the solution of an elliptic PDE. Elliptic equations arise typically in the formulation of static (time independent) problems in, for example, Potential theory, Fluid Mechanics and Elasticity. Poisson's equation
\[
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = g(x, y)\,, \qquad (x, y) \in A \tag{1.2}
\]
is the archetypal elliptic equation. Here A is a simply connected region with boundary curve ∂A. The task is to find a solution of equation (1.2) satisfying the boundary conditions
\[
\begin{aligned}
u &= f_1(x, y) & (x, y) &\in C_1\\[2pt]
\frac{\partial u}{\partial n} &= f_2(x, y) & (x, y) &\in C_2\\[2pt]
\frac{\partial u}{\partial n} + f_3(x, y)\,u &= f_4(x, y) & (x, y) &\in C_3
\end{aligned}
\]
where f_1(x, y), …, f_4(x, y) are known functions and ∂A = C_1 ∪ C_2 ∪ C_3. The case C_1 = ∂A is the Dirichlet Problem, the case C_2 = ∂A is the Neumann Problem and the case C_3 = ∂A is the Robin Problem. It is easy to show that the Dirichlet problem for equation (1.2) has a unique solution, but that the solution of the Neumann problem is not unique, and may not even exist.
1.2.2   Parabolic  equation
Equation (1.1) is of Parabolic type whenever B² − 4AC = 0. Parabolic partial differential equations give rise to Initial Boundary Value Problems (IBVPs); that is, initial conditions are required to start the solution at t = 0, and thereafter boundary conditions must be supplied to complete the specification of the solution for t > 0. Parabolic equations arise typically in problems involving heat, magnetic fields and flow processes based on randomness. The Diffusion equation
\[
\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + g(x, t)\,, \qquad (t, x) \in (0, \infty) \times (a, b) \tag{1.3}
\]
is the archetypal parabolic equation, where (a, b) ⊂ R. A solution of equation (1.3) is sought satisfying the respective initial and boundary conditions
\[
u(0, x) = u_0(x)\,, \qquad
\begin{aligned}
\alpha_a(t)\,u(t, a) + \beta_a(t)\,u_x(t, a) &= f_1(t) & t &> 0\\
\alpha_b(t)\,u(t, b) + \beta_b(t)\,u_x(t, b) &= f_2(t) & t &> 0
\end{aligned}
\]
where α_a(t), α_b(t), β_a(t), β_b(t), f_1(t) and f_2(t) are given functions of t. Again there are well established existence and uniqueness results for parabolic equations of type (1.3).
1.2.3   Hyperbolic  equation
Equation (1.1) is of Hyperbolic type whenever B² − 4AC > 0. Hyperbolic partial differential equations give rise to Initial Value Problems (IVPs); that is, initial conditions are required to start the solution at t = 0. Hyperbolic equations arise typically in problems involving the propagation of disturbances, for example, in Wave Mechanics. The Wave equation
\[
\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2} + g(x, t)\,, \qquad (t, x) \in (0, \infty) \times (a, b) \tag{1.4}
\]
is the archetypal hyperbolic equation. A solution of equation (1.4) is sought satisfying the initial conditions
\[
u(0, x) = \phi(x)\,, \qquad \frac{\partial u}{\partial t}(0, x) = V(x)
\]
where φ(x) and V(x) are given functions of x.
1.3   Solution  procedures
Numerical procedures to solve differential equations largely decompose into either techniques which aim to find the value of the solution at a finite number of nodes within the domain of definition of the equation and its boundary, or alternatively, techniques which expand the solution as a sum of basis functions defined on the domain of definition of the equation, and then aim to find the coefficients in this expansion. The algorithms to be examined in these lectures are now described briefly.
1.3.1   Finite differences
Finite difference procedures approximate the derivatives appearing in a PDE by sums and differences of function values at a set of discrete points, usually uniformly spaced with respect to each independent variable. The idea is most effectively illustrated in one dimension. Let x and h be real numbers, where h is usually regarded as a small positive step length; then
\[
\begin{aligned}
f'(x) &= \frac{f(x+h) - f(x-h)}{2h} + O(h^2)\,,\\[4pt]
f''(x) &= \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + O(h^2)
\end{aligned}
\]
are two expressions which are often used to compute the first and second order derivatives of f at x from knowledge of f(x − h), f(x) and f(x + h). The motivation for these results is based on the realisation that if f is suitably differentiable in a neighbourhood of x then f has a Taylor Series expansion in the vicinity of x, so that
\[
\begin{aligned}
f(x+h) &= f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(x) + O(h^4)\,,\\[4pt]
f(x-h) &= f(x) - h f'(x) + \frac{h^2}{2} f''(x) - \frac{h^3}{6} f'''(x) + O(h^4)\,.
\end{aligned}
\]
Trivially,
\[
\begin{aligned}
f(x+h) - f(x-h) &= 2h f'(x) + \frac{h^3}{3} f'''(x) + O(h^5)\,,\\[4pt]
f(x+h) - 2f(x) + f(x-h) &= h^2 f''(x) + O(h^4)
\end{aligned}
\]
from which it follows that
\[
\begin{aligned}
f'(x) &= \frac{f(x+h) - f(x-h)}{2h} + O(h^2)\,,\\[4pt]
f''(x) &= \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + O(h^2)\,.
\end{aligned}
\]
These are known as the central difference formulae for the first and second derivatives of f. The formulae are second order accurate, by which we mean that the error incurred by approximating f'(x) or f''(x) in this way is proportional to h². At the ends of an interval, one-sided formulae are needed; similar Taylor series manipulations give
\[
\begin{aligned}
4f(x+h) - f(x+2h) - 3f(x) &= 2h f'(x) + O(h^3)\,,\\[4pt]
f(x-2h) + 3f(x) - 4f(x-h) &= 2h f'(x) + O(h^3)\,,
\end{aligned}
\]
leading to the left-handed and right-handed formulae
\[
\begin{aligned}
f'(x) &= \frac{4f(x+h) - f(x+2h) - 3f(x)}{2h} + O(h^2)\,,\\[4pt]
f'(x) &= \frac{f(x-2h) + 3f(x) - 4f(x-h)}{2h} + O(h^2)\,.
\end{aligned}
\]
Example   Use the table

   x       0.5       0.6       0.7
   sin x   0.47943   0.56464   0.64422

to estimate the first derivative of sin x at each mesh point. Compare your estimates with the exact answer. Estimate the second derivative of sin x at x = 0.6.

Solution   Take h = 0.1; then
\[
\begin{aligned}
f'(0.5) &= \frac{4(0.56464) - 3(0.47943) - (0.64422)}{0.2} \approx 0.88025 &&\text{(Correct ans 0.87758)}\\[4pt]
f'(0.6) &= \frac{(0.64422) - (0.47943)}{0.2} \approx 0.82395 &&\text{(Correct ans 0.82533)}\\[4pt]
f'(0.7) &= \frac{3(0.64422) + (0.47943) - 4(0.56464)}{0.2} \approx 0.76765 &&\text{(Correct ans 0.76484)}
\end{aligned}
\]
Further
\[
f''(0.6) = \frac{(0.64422) - 2(0.56464) + (0.47943)}{0.01} \approx -0.56300 \quad \text{(Correct ans } -0.56464)\,.
\]
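These hand calculations are easily checked mechanically. A short Python sketch applying the same three formulae to the tabulated values:

```python
# Difference estimates of d(sin x)/dx from the three tabulated values,
# using the left-handed, central and right-handed formulae above.
f = {0.5: 0.47943, 0.6: 0.56464, 0.7: 0.64422}
h = 0.1

d_left = (4 * f[0.6] - f[0.7] - 3 * f[0.5]) / (2 * h)    # estimate of f'(0.5)
d_mid = (f[0.7] - f[0.5]) / (2 * h)                      # estimate of f'(0.6)
d_right = (f[0.5] + 3 * f[0.7] - 4 * f[0.6]) / (2 * h)   # estimate of f'(0.7)
dd_mid = (f[0.7] - 2 * f[0.6] + f[0.5]) / h**2           # estimate of f''(0.6)

print(d_left, d_mid, d_right, dd_mid)
```

Comparing with cos(0.5), cos(0.6), cos(0.7) and −sin(0.6) confirms that the estimates carry roughly two to three correct figures, consistent with O(h²) accuracy at h = 0.1.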
Numerical formulation of finite difference formulae

Let the interval [a, b] be subdivided into n smaller intervals of uniform size h = (b − a)/n by the dissection points x_k = a + kh, k = 0, 1, …, n. Let f_k = f(x_k) be the value of f at the point x_k and suppose that f_0, f_1, …, f_n are known. The finite difference formulae for the first and second order derivatives of f at an interior point x_k of [a, b] are
\[
\begin{aligned}
f'(x_k) &= \frac{f_{k+1} - f_{k-1}}{2h} + O(h^2)\,,\\[4pt]
f''(x_k) &= \frac{f_{k+1} - 2f_k + f_{k-1}}{h^2} + O(h^2)\,.
\end{aligned}
\]
The corresponding finite difference formulae for the derivative of f at the left hand side and right hand side of the interval [a, b] are
\[
\begin{aligned}
f'(x_0) &= \frac{4f_1 - f_2 - 3f_0}{2h} + O(h^2)\,,\\[4pt]
f'(x_n) &= \frac{f_{n-2} + 3f_n - 4f_{n-1}}{2h} + O(h^2)\,.
\end{aligned}
\]
Example   Use the method of finite differences with five nodes to estimate the solution of the ordinary differential equation
\[
\frac{d^2 u}{dx^2} = x\,, \qquad u(0) = u(1) = 0\,.
\]
Solution   Let u_0 = u(0) = 0, u_1 = u(1/4), u_2 = u(1/2), u_3 = u(3/4) and u_4 = u(1) = 0; then the unknown values u_1, u_2 and u_3 are determined by the equations
\[
\frac{d^2 u}{dx^2}\Big|_{x=1/4} = 1/4\,, \qquad
\frac{d^2 u}{dx^2}\Big|_{x=1/2} = 1/2\,, \qquad
\frac{d^2 u}{dx^2}\Big|_{x=3/4} = 3/4\,.
\]
With h = 1/4, the finite difference formulae give
\[
\frac{u_2 - 2u_1 + u_0}{h^2} = 1/4\,, \qquad
\frac{u_3 - 2u_2 + u_1}{h^2} = 1/2\,, \qquad
\frac{u_4 - 2u_3 + u_2}{h^2} = 3/4\,.
\]
Since u_0 = u_4 = 0 and h = 1/4, it now follows that
\[
\begin{aligned}
u_2 - 2u_1 &= 1/64\\
u_3 - 2u_2 + u_1 &= 1/32\\
-2u_3 + u_2 &= 3/64\,,
\end{aligned}
\]
or, in matrix form,
\[
\begin{bmatrix} -2 & 1 & 0\\ 1 & -2 & 1\\ 0 & 1 & -2 \end{bmatrix}
\begin{bmatrix} u_1\\ u_2\\ u_3 \end{bmatrix}
=
\begin{bmatrix} 1/64\\ 1/32\\ 3/64 \end{bmatrix}\,.
\]
The exact solution of this problem is u(x) = (x³ − x)/6, which in turn gives the exact values u_1 = −5/128, u_2 = −1/16 and u_3 = −7/128. In fact, these are also the values taken by the estimated solutions. Why are the numerical solutions exact? The answer lies in the fact that the numerical error in estimating d²u/dx² as a central difference depends on the fourth derivative of u, which is zero for the exact solution in this case.
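The 3 × 3 system above is small enough to solve in exact arithmetic, which makes this claim easy to verify. A sketch using Python's `fractions` module (the elimination loop is ordinary Gaussian elimination, not a tuned tridiagonal solver):

```python
from fractions import Fraction as F

# Solve the 3x3 system from the example u'' = x, u(0) = u(1) = 0
# with h = 1/4, using exact rational arithmetic.
A = [[F(-2), F(1),  F(0)],
     [F(1),  F(-2), F(1)],
     [F(0),  F(1),  F(-2)]]
b = [F(1, 64), F(1, 32), F(3, 64)]

# Forward elimination followed by back substitution.
n = 3
for i in range(1, n):
    m = A[i][i - 1] / A[i - 1][i - 1]
    for j in range(n):
        A[i][j] -= m * A[i - 1][j]
    b[i] -= m * b[i - 1]
u = [F(0)] * n
for i in range(n - 1, -1, -1):
    s = b[i] - sum(A[i][j] * u[j] for j in range(i + 1, n))
    u[i] = s / A[i][i]

print(u)  # [Fraction(-5, 128), Fraction(-1, 16), Fraction(-7, 128)]
```

The rational answers coincide exactly with the nodal values of u(x) = (x³ − x)/6, as the argument about the vanishing fourth derivative predicts.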
1.3.2   Finite  elements
In a finite element procedure the solution is expanded in terms of a set of basis functions superimposed on a family of elements which in turn span the region in which the solution of the differential equation is sought. The idea is illustrated easily by considering the representation of a function of a single variable.

Let f(x) be a function defined on the interval [a, b] and let a = x_0 < x_1 < ⋯ < x_n = b be a set of user-specified nodes. In this one-dimensional case, the intervals [x_0, x_1], [x_1, x_2], …, [x_{n−1}, x_n] define the elements which span [a, b]. These elements are overlaid with a family of user-supplied basis functions. The most common family of basis functions is the set of triangular (or tent-like) basis functions
\[
u_k(x) =
\begin{cases}
\dfrac{x - x_{k-1}}{x_k - x_{k-1}} & x \in [x_{k-1}, x_k]\\[8pt]
\dfrac{x_{k+1} - x}{x_{k+1} - x_k} & x \in (x_k, x_{k+1}]\\[8pt]
0 & \text{otherwise}\,.
\end{cases}
\]
These are illustrated in Figure 1.1 with u_k(x) highlighted as the shaded triangle.

[Figure 1.1: Basis functions u_0(x), u_1(x), …, u_n(x) with member u_k(x) shaded.]
With respect to these basis functions, the function f(x) is represented by the finite element expansion
\[
\tilde{f}(x) = \sum_{k=0}^{n} f_k\, u_k(x)
\]
in which f_0, …, f_n is a sequence of coefficients to be determined. It is not immediately obvious how these should be chosen in an optimal way. The Least Squares criterion is used frequently for this purpose. The objective of this criterion is to minimise
\[
E(f_0, f_1, \dots, f_n) = \int_0^L \bigl(f(x) - \tilde{f}(x)\bigr)^2 dx
= \int_0^L \Bigl(f(x) - \sum_{k=0}^{n} f_k\, u_k(x)\Bigr)^2 dx\,,
\]
by an appropriate choice of the coefficients f_0, …, f_n. The minimum of E(f_0, f_1, …, f_n) is attained when f_0, …, f_n are solutions of the equations
\[
\frac{\partial E}{\partial f_j} = -2 \int_0^L \Bigl(f(x) - \sum_{k=0}^{n} f_k\, u_k(x)\Bigr) u_j(x)\, dx = 0\,, \qquad j = 0, \dots, n,
\]
which in turn asserts that f_0, …, f_n satisfy the (n + 1) simultaneous equations
\[
\sum_{k=0}^{n} f_k \int_0^L u_j(x)\, u_k(x)\, dx = \int_0^L f(x)\, u_j(x)\, dx\,, \qquad j = 0, \dots, n.
\]
Numerical work involving the finite element procedure necessarily requires the evaluation of various integrals with integrands formed from products of basis functions and their derivatives. Triangular basis functions are particularly popular because they yield integrals that are straightforward to evaluate. However, the use of triangular basis functions is necessarily restricted to low order (second order) partial differential equations, as will become obvious on further investigation of the approach.
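As an illustration, the simultaneous equations above can be assembled and solved numerically. The following Python sketch (using NumPy, with the target f(x) = x(1 − x) chosen purely for illustration) fits hat functions on a uniform mesh over [0, 1]; for hat functions the matrix of integrals ∫u_j u_k dx is tridiagonal, with entries 2h/3 (interior diagonal), h/3 (end diagonal) and h/6 (off-diagonals):

```python
import numpy as np

# Least-squares fit of f(x) = x*(1 - x) on [0, 1] with triangular
# (hat) basis functions on a uniform mesh.
n = 8                                   # number of elements
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]

# Matrix M[j, k] = integral of u_j(x) u_k(x) dx for hat functions.
M = np.zeros((n + 1, n + 1))
for j in range(n + 1):
    M[j, j] = 2 * h / 3 if 0 < j < n else h / 3
    if j < n:
        M[j, j + 1] = M[j + 1, j] = h / 6

# Right-hand side b[j] = integral of f(x) u_j(x) dx, computed by
# Gauss-Legendre quadrature element by element.
f = lambda t: t * (1.0 - t)
q, w = np.polynomial.legendre.leggauss(4)
b = np.zeros(n + 1)
for k in range(n):                      # element [x_k, x_{k+1}]
    mid, half = 0.5 * (x[k] + x[k + 1]), 0.5 * h
    t = mid + half * q                  # mapped quadrature points
    left = (x[k + 1] - t) / h           # u_k restricted to this element
    right = (t - x[k]) / h              # u_{k+1} restricted to this element
    b[k] += half * np.dot(w, f(t) * left)
    b[k + 1] += half * np.dot(w, f(t) * right)

coeff = np.linalg.solve(M, b)
# For a smooth f the least-squares coefficients should track the
# nodal values of f to within O(h^2).
print(np.max(np.abs(coeff - f(x))))
```

Note that the coefficients of the least-squares expansion are close to, but not identical with, the nodal values f(x_j); interpolation and L² projection are different criteria.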
1.3.3   Spectral  expansion
The spectral approach to the solution of a differential equation appears superficially similar to the finite element approach in the respect that the solution is expressed as a sum over basis functions. For example, in one dimension the function f(x) is represented by the finite sum
\[
f(x) = \sum_{k=0}^{n} f_k\, \phi_k(x)
\]
where φ_0, φ_1, … are an infinite family of spectral functions, the first (n + 1) of which are used to approximate f(x). The term "spectral" comes from the fact that the basis functions φ_k are almost invariably chosen to be the eigenfunctions of a Sturm–Liouville problem. In particular, the members of the family form a complete basis for the solution space and are also mutually orthogonal with respect to a suitable weight function. For example, the Fourier series over [0, L] corresponds to the spectral expansion
\[
\sum_{k=-n/2}^{n/2-1} c_k\, e^{2\pi i k x / L}\,, \qquad x \in [0, L]\,.
\]
Chebyshev polynomials and Legendre polynomials are other common choices of basis functions.
There are various ways to construct a spectral approximation to the solution of a differential equation. For example, one approach requires the spectral solution to minimise some user-supplied criterion, such as a weighted least squares measure of the residual.

Question 1.   Use the Taylor series expansions of f(x ± h) to show that for suitably differentiable functions f(x) there exist intermediate points ξ such that
\[
\begin{aligned}
f'(x) &= \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{6} f'''(\xi)\,,\\[4pt]
f''(x) &= \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} - \frac{h^2}{12} f^{\mathrm{iv}}(\xi)\,.
\end{aligned}
\]
Question 2.   Use the Taylor series expansion of f(x + h) up to and including terms of order h⁴ to show that for sufficiently differentiable functions f(x)
\[
\frac{df}{dx} = \frac{4f(x+h) - f(x+2h) - 3f(x)}{2h} + O(h^2)\,.
\]
Deduce a similar expression for df/dx in terms of f(x − h) and f(x − 2h).
Question 3.   Use the Taylor series expansion of f(x + h) to derive the formulae
\[
\begin{aligned}
\frac{d^3 u}{dx^3} &= \frac{u(x+2h) - 2u(x+h) + 2u(x-h) - u(x-2h)}{2h^3} + O(h^2)\,,\\[4pt]
\frac{d^4 u}{dx^4} &= \frac{u(x+2h) - 4u(x+h) + 6u(x) - 4u(x-h) + u(x-2h)}{h^4} + O(h^2)\,.
\end{aligned}
\]
Question 4.   Construct a finite difference approximation for
\[
\phi(x) = \frac{d}{dx}\Bigl(a(x)\, \frac{df}{dx}\Bigr)
\]
that is second order accurate; that is, φ(x) is estimated to order h² in terms of f(x − h), a(x − h), f(x), a(x), f(x + h), and a(x + h). Prove your assertion.
Question 5.   Given the value of the function f at the points x − h and x + αh, where 0 < α < 1, find a first order difference formula for the derivative of f at x.

Question 6.   Given the value of the function f at x − h, x and x + αh, where 0 < α < 1, find a second order difference formula for the derivative of f at x.
Question 7.   Given the value of the function f at the points x − 2h, x − h, x and x + αh, where 0 < α ≤ 1, derive the finite difference formula
\[
f''(x) = \frac{6 f_1 + (\alpha - 3)(\alpha + 1)(\alpha + 2)\, f_0 - 2\alpha(\alpha^2 - 4)\, f_{-1} + \alpha(\alpha^2 - 1)\, f_{-2}}{\alpha(\alpha + 1)(\alpha + 2)\, h^2} + O(h^2)
\]
in which f_1 = f(x + αh), f_0 = f(x), f_{−1} = f(x − h) and f_{−2} = f(x − 2h).
Question 8.   Derive the higher order central difference formulae
\[
\begin{aligned}
\frac{du}{dx} &= \frac{-u(x+2h) + 8u(x+h) - 8u(x-h) + u(x-2h)}{12h} + O(h^4)\,,\\[4pt]
\frac{d^2 u}{dx^2} &= \frac{-u(x+2h) + 16u(x+h) - 30u(x) + 16u(x-h) - u(x-2h)}{12h^2} + O(h^4)\,.
\end{aligned}
\]
Why do you suppose that these are not used instead of the less accurate second order formulae?
Question 9.   For the triangular basis functions
\[
u_k(x) =
\begin{cases}
\dfrac{x - x_{k-1}}{x_k - x_{k-1}} & x \in [x_{k-1}, x_k]\\[8pt]
\dfrac{x_{k+1} - x}{x_{k+1} - x_k} & x \in (x_k, x_{k+1}]\\[8pt]
0 & \text{otherwise}
\end{cases}
\]
in the finite element procedure, compute the values of
\[
\text{(a)}\ \int_0^L u_j(x)\, u_k(x)\, dx\,, \qquad
\text{(b)}\ \int_0^L \frac{du_j}{dx}\, \frac{du_k}{dx}\, dx\,,
\]
\[
\text{(c)}\ \int_0^L u_i(x)\, u_j(x)\, u_k(x)\, dx\,, \qquad
\text{(d)}\ \int_0^L u_i\, \frac{du_j}{dx}\, \frac{du_k}{dx}\, dx\,.
\]
Question 10.   Fourier analysis in one dimension represents the solution f(x) of a differential equation by the partial sum
\[
f(x) = \sum_{k=-n/2}^{n/2-1} c_k\, e^{2\pi i k x / L}\,, \qquad x \in [0, L]\,.
\]
If f_j = f(jL/n), write down the equations connecting f_0, …, f_{n−1} with the coefficients c_k. Demonstrate that these equations have solution
\[
c_k = \frac{1}{n} \sum_{j=0}^{n-1} f_j\, e^{-2\pi i j k / n}\,.
\]
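The stated inversion can be checked numerically. The sketch below (Python/NumPy, with randomly chosen coefficients) synthesises samples f_j from prescribed c_k and recovers the coefficients by the formula of Question 10:

```python
import numpy as np

# Check: if f_j = sum_k c_k exp(2*pi*i*j*k/n) with k = -n/2, ..., n/2 - 1,
# then c_k = (1/n) sum_j f_j exp(-2*pi*i*j*k/n).
rng = np.random.default_rng(0)
n = 8
k = np.arange(-n // 2, n // 2)          # spectral indices
j = np.arange(n)                        # sample indices
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Synthesise the samples f_j from the coefficients c_k ...
f = np.exp(2.0 * np.pi * 1j * np.outer(j, k) / n) @ c
# ... and recover the coefficients by the stated formula.
c_rec = np.exp(-2.0 * np.pi * 1j * np.outer(k, j) / n) @ f / n

print(np.max(np.abs(c_rec - c)))   # at round-off level
```

The recovery works because the exponentials are orthogonal over the n sample points: the index differences k′ − k lie strictly between −n and n, so they are congruent to zero mod n only when k′ = k.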
Chapter 2

Finite Difference procedure in one dimension
The finite difference procedure is illustrated using examples with known solutions. Consider the boundary value problem
\[
u_{xx} = f(x)\,, \qquad u(0) = U_0\,, \quad u(1) = U_1
\]
in which f(x) is a prescribed function and U_0 and U_1 are given. The finite difference approximation to the solution of this differential equation requires a uniform dissection of [0, 1] by (n + 1) evenly spaced nodes 0 = x_0 < x_1 < ⋯ < x_n = 1, where x_k = k/n. For convenience, let h = 1/n so that x_k = kh. In the usual way let u_k denote u(x_k); then
\[
u_{xx}(x_k) = \frac{u_{k+1} - 2u_k + u_{k-1}}{h^2} + O(h^2)
\]
at each interior point of [0, 1]. The boundary conditions are satisfied by asserting that u_0 = U_0 and u_n = U_1. The finite difference solution for u(x) is now obtained by enforcing the original equation at each interior node of [0, 1], that is,
\[
\frac{u_{k+1} - 2u_k + u_{k-1}}{h^2} + O(h^2) = f_k\,, \qquad k = 1, 2, \dots, (n-1)
\]
where f_k = f(x_k). When arranged in sequence, these equations become
\[
\begin{aligned}
u_2 - 2u_1 + u_0 &= h^2 f_1 + O(h^4)\\
u_3 - 2u_2 + u_1 &= h^2 f_2 + O(h^4)\\
u_4 - 2u_3 + u_2 &= h^2 f_3 + O(h^4)\\
&\ \,\vdots\\
u_n - 2u_{n-1} + u_{n-2} &= h^2 f_{n-1} + O(h^4)\,.
\end{aligned}
\]
However, u_0 and u_n are known, and so the first and last equations are re-expressed in the form
\[
\begin{aligned}
u_2 - 2u_1 &= h^2 f_1 - U_0 + O(h^4)\\
-2u_{n-1} + u_{n-2} &= h^2 f_{n-1} - U_1 + O(h^4)\,.
\end{aligned}
\]
Thus the unknown values u_1, u_2, …, u_{n−1} are the solution of the system of linear equations
\[
\begin{bmatrix}
-2 & 1 & 0 & \cdots & 0\\
1 & -2 & 1 & & \vdots\\
0 & \ddots & \ddots & \ddots & 0\\
\vdots & & 1 & -2 & 1\\
0 & \cdots & 0 & 1 & -2
\end{bmatrix}
\begin{bmatrix}
u_1\\ u_2\\ \vdots\\ u_{n-2}\\ u_{n-1}
\end{bmatrix}
=
\begin{bmatrix}
h^2 f_1 - U_0\\ h^2 f_2\\ \vdots\\ h^2 f_{n-2}\\ h^2 f_{n-1} - U_1
\end{bmatrix}
+ O(h^4)\,. \tag{2.1}
\]
The coefficient matrix of this system has a main diagonal in which each entry is −2 and sub- and super-diagonals in which each entry is 1. All other entries are zero. Matrices of this type are called tri-diagonal matrices. The solution of the equation TX = B is now examined when T is a tri-diagonal matrix.
2.1   Tri-diagonal  algorithm
Let T be the general tri-diagonal matrix
\[
T = \begin{bmatrix}
a_1 & b_1 & 0 & \cdots & 0\\
c_2 & a_2 & b_2 & & \vdots\\
0 & c_3 & a_3 & b_3 & \\
\vdots & & \ddots & \ddots & \ddots\\
0 & \cdots & & c_{n-1} & a_{n-1}
\end{bmatrix}
\]
then we describe the steps by which the tri-diagonal system of equations TX = F is solved for X. First, the row operation R_2 ← R_2 − (c_2/a_1)R_1 is used to remove c_2, to give the tri-diagonal matrix
\[
T^{(1)} = \begin{bmatrix}
a_1 & b_1 & 0 & \cdots & 0\\
0 & \tilde{a}_2 & b_2 & & \vdots\\
0 & c_3 & a_3 & b_3 & \\
\vdots & & \ddots & \ddots & \ddots\\
0 & \cdots & & c_{n-1} & a_{n-1}
\end{bmatrix}
\]
where ã_2 and f̃_2 are defined by
\[
\tilde{a}_2 = a_2 - \frac{c_2}{a_1}\, b_1\,, \qquad \tilde{f}_2 = f_2 - \frac{c_2}{a_1}\, f_1\,.
\]
This algorithm is recursive. The operation affects only a_2 and f_2; the second row of T^{(1)} can now be used to remove c_3, provided ã_2 is non-zero. This will always be true provided T is non-singular. After (n − 1) repetitions of this algorithm, in which the k-th repetition operates on the k-th and (k + 1)-th rows of T^{(k)}, the original equations TX = F will be reduced to the new system of equations T^{(n−1)}X = F̃, where
\[
T^{(n-1)} = \begin{bmatrix}
a_1 & b_1 & 0 & \cdots & 0\\
0 & \tilde{a}_2 & b_2 & & \vdots\\
0 & 0 & \tilde{a}_3 & b_3 & \\
\vdots & & \ddots & \ddots & \ddots\\
0 & \cdots & & 0 & \tilde{a}_{n-1}
\end{bmatrix}\,.
\]
Since T^{(n−1)} is now an upper triangular matrix, T^{(n−1)}X = F̃ can be solved for X by backward substitution. The computational effort in this algorithm is directly proportional to the number of equations in the system. Another way to think of the tri-diagonal algorithm is that every non-singular tri-diagonal matrix T can be expressed as the product LU, where L is a lower triangular matrix and U is an upper triangular matrix. Recall that in general L holds the row operations used in the Gaussian elimination algorithm, and that U is the Gaussian reduced form of T. Therefore, all the non-zero entries of L are located in its main diagonal and sub-diagonal. Similarly, all the non-zero entries of U are located in its main diagonal and super-diagonal.
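The forward sweep and backward substitution just described can be written down in a few lines. A minimal Python sketch (`thomas` is an illustrative name for the tri-diagonal algorithm; there is no pivoting, so non-zero pivots are assumed throughout, as for a non-singular T):

```python
def thomas(c, a, b, f):
    """Solve a tridiagonal system T x = f.

    a: main diagonal (length m); b: super-diagonal (length m - 1);
    c: sub-diagonal (length m - 1, c[i] sits in row i + 1).
    Assumes every pivot produced by the sweep is non-zero.
    """
    m = len(a)
    a, f = list(a), list(f)               # work on copies
    # Forward sweep: remove the sub-diagonal, updating a and f in place.
    for i in range(1, m):
        r = c[i - 1] / a[i - 1]
        a[i] -= r * b[i - 1]
        f[i] -= r * f[i - 1]
    # Backward substitution on the resulting upper bidiagonal system.
    x = [0.0] * m
    x[m - 1] = f[m - 1] / a[m - 1]
    for i in range(m - 2, -1, -1):
        x[i] = (f[i] - b[i] * x[i + 1]) / a[i]
    return x

# The system of the earlier example u'' = x with h = 1/4:
u = thomas([1.0, 1.0], [-2.0, -2.0, -2.0], [1.0, 1.0], [1/64, 1/32, 3/64])
print(u)   # close to [-5/128, -1/16, -7/128]
```

The cost is plainly linear in the number of equations: one division, one multiply-subtract pair per row on the way down, and the same on the way back up.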
2.1.1   Measurement of numerical error

There are two measures of numerical accuracy in common use. To make these measures specific, suppose that the numerical solution and the exact solution of u at node x_k are denoted by û_k and u_k respectively. The first measure, usually called the root-mean-square or L² error, is defined to be
\[
E_n(\hat{u} - u) = \Bigl[\frac{1}{n+1}\sum_{k=0}^{n} (\hat{u}_k - u_k)^2\Bigr]^{1/2}\,. \tag{2.2}
\]
This summation is approximately proportional to the square root of the integral of the squared difference of the exact and numerical solutions. The second measure, usually called the mean-absolute-error or L¹ error, is defined to be
\[
E_n(\hat{u} - u) = \frac{1}{n+1}\sum_{k=0}^{n} \lvert \hat{u}_k - u_k \rvert\,. \tag{2.3}
\]
This summation is approximately proportional to the integral of the absolute difference between the exact and numerical solutions. Both measures usually paint the same picture, but it is well known that the L¹ error is more discriminating. For convenience we shall always use the L² norm for measuring errors and use the notation E_n to denote the value of the L² norm when applied to a dissection containing 2ⁿ intervals of uniform length h.
Example   Let u(x) be the solution of the boundary value problem
\[
u_{xx} = \sin(\lambda x)\,, \qquad u(0) = u(1) = 0
\]
in which λ is an arbitrary non-zero parameter. This corresponds to the choices f(x) = sin(λx) and U_0 = U_1 = 0. The exact solution of this differential equation is
\[
u(x) = \frac{x \sin \lambda - \sin(\lambda x)}{\lambda^2}\,.
\]
Table 2.1 gives the L¹ and L² norms of the numerical solution of this differential equation for various choices of n when λ = 1 and λ = 50 respectively.

Table 2.1: Comparison of L² errors in the solution of u_xx = sin(λx) for the boundary conditions u(0) = u(1) = 0 when λ = 1 and when λ = 50.

            λ = 1                    λ = 50
   n        E_n        E_{n−1}/E_n   E_n        E_{n−1}/E_n
   4        0.0002242     —          0.0121297     —
   8        0.0000561   3.904        0.0121549   0.960
   16       0.0000140   4.001        0.0001235   0.990
   32       0.0000035   4.002        0.0000663   98.35
   64       0.0000009   4.000        0.0000151   1.860
   128      0.0000002   4.000        0.0000037   4.401
   256      0.0000001   4.000        0.0000009   4.090
   512      0.0000000   4.000        0.0000002   4.020
   1024     0.0000000   4.000        0.0000001   4.010
Points to note   First, an accurate solution for λ = 1 and λ = 50 can be obtained by using a suitably fine mesh. Of course, in practice the finite numerical precision available in the computation will intervene and provide a practical limit to the maximum available accuracy. Second, the ratio of errors has limiting value 4 independent of the value of λ. As n is doubled, h is halved, and the error decreases by a factor of 4; that is, the error is proportional to h². This is in complete agreement with the presumed accuracy of the finite difference approximation of the second derivative.
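This convergence behaviour is easy to reproduce. A Python/NumPy sketch for λ = 1 (a dense solver is used purely for brevity, where the tri-diagonal algorithm would normally be preferred):

```python
import numpy as np

# Empirical convergence check for u'' = sin(lam*x), u(0) = u(1) = 0:
# the root-mean-square error should fall by ~4 each time n doubles.
lam = 1.0
exact = lambda t: (t * np.sin(lam) - np.sin(lam * t)) / lam**2

def solve(n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # u_{k+1} - 2 u_k + u_{k-1} = h^2 f_k at the interior nodes,
    # with zero boundary values.
    T = (np.diag(-2.0 * np.ones(n - 1))
         + np.diag(np.ones(n - 2), 1) + np.diag(np.ones(n - 2), -1))
    rhs = h**2 * np.sin(lam * x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(T, rhs)
    return x, u

errs = []
for n in (4, 8, 16, 32):
    x, u = solve(n)
    errs.append(np.sqrt(np.mean((u - exact(x)) ** 2)))
ratios = [errs[i - 1] / errs[i] for i in range(1, len(errs))]
print(ratios)   # each ratio close to 4, i.e. error = O(h^2)
```

Changing `lam` to 50 reproduces the other feature of Table 2.1: the pre-asymptotic errors are erratic until h is small enough to resolve the oscillations of sin(50x).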
2.1.2   Loss of significance - Pivoting

To appreciate how loss of numerical accuracy can occur in the solution of simultaneous equations, consider the use of a computer holding 3 significant figures to solve the equations
\[
\begin{aligned}
0.0001\, x_1 + 1.00\, x_2 &= 1.00\\
1.00\, x_1 + 1.00\, x_2 &= 2.00
\end{aligned}
\]
[The actual solution is x_1 ≅ 1.00010 and x_2 ≅ 0.99990 to 5 decimal places.]

The matrix formulation of the problem is
\[
\begin{bmatrix} 0.0001 & 1.00\\ 1.00 & 1.00 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 1.00\\ 2.00 \end{bmatrix}\,.
\]
Using exact arithmetic the row operation R_2 ← R_2 − 10000 R_1 yields
\[
\begin{bmatrix} 0.0001 & 1.00\\ 0.0000 & -9999.0 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 1.00\\ -9998.0 \end{bmatrix}\,.
\]
However, on a computing machine with 3 significant figures, the second row of these equations cannot be held to the required accuracy and is approximated. The numerical approximation is
\[
\begin{bmatrix} 0.0001 & 1.00\\ 0.0000 & -1.00 \times 10^4 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 1.00\\ -1.00 \times 10^4 \end{bmatrix}\,.
\]
The solution of this system is x_2 = 1.00 followed by x_1 = 0.00. A catastrophic error has occurred due to finite precision arithmetic. The correct procedure is to interchange rows 1 and 2 of the original matrix to obtain
\[
\begin{bmatrix} 1.00 & 1.00\\ 0.0001 & 1.00 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 2.00\\ 1.00 \end{bmatrix}\,.
\]
Now perform the row operation R_2 ← R_2 − 0.0001 R_1 to obtain in exact arithmetic
\[
\begin{bmatrix} 1.00 & 1.00\\ 0.00 & 0.9999 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 2.00\\ 0.9998 \end{bmatrix}\,.
\]
As previously, on a computing machine with 3 significant figures, the second row of these equations cannot be held to the required accuracy and is approximated. The numerical approximation is
\[
\begin{bmatrix} 1.00 & 1.00\\ 0.00 & 1.00 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 2.00\\ 1.00 \end{bmatrix}
\]
which in turn yields the approximate solutions x_2 = 1.00 followed by x_1 = 1.00. The process of interchanging rows is called pivoting. In general, the row in the matrix with the largest entry (in modulus) in the column for which the row is to be used to eliminate other entries is chosen as the pivotal row.
Example   Use Gaussian elimination with partial pivoting to solve the equations
\[
\begin{aligned}
-x_1 + 2x_2 + 3x_3 &= 2\\
-x_1 - 2x_2 + x_3 &= 1\\
2x_1 + x_2 - x_3 &= 0\,.
\end{aligned}
\]
Solution   The augmented matrix is
\[
\left[\begin{array}{ccc|c} -1 & 2 & 3 & 2\\ -1 & -2 & 1 & 1\\ 2 & 1 & -1 & 0 \end{array}\right]
\;\longrightarrow\;
\left[\begin{array}{ccc|c} 2 & 1 & -1 & 0\\ -1 & -2 & 1 & 1\\ -1 & 2 & 3 & 2 \end{array}\right]
\quad R_1 \leftrightarrow R_3
\]
\[
\left[\begin{array}{ccc|c} 2 & 1 & -1 & 0\\ 0 & -\tfrac{3}{2} & \tfrac{1}{2} & 1\\ 0 & \tfrac{5}{2} & \tfrac{5}{2} & 2 \end{array}\right]
\quad
\begin{aligned}
R_2 &\leftarrow R_2 + \tfrac{1}{2} R_1\\
R_3 &\leftarrow R_3 + \tfrac{1}{2} R_1
\end{aligned}
\]
\[
\left[\begin{array}{ccc|c} 2 & 1 & -1 & 0\\ 0 & \tfrac{5}{2} & \tfrac{5}{2} & 2\\ 0 & -\tfrac{3}{2} & \tfrac{1}{2} & 1 \end{array}\right]
\quad R_2 \leftrightarrow R_3
\qquad
\left[\begin{array}{ccc|c} 2 & 1 & -1 & 0\\ 0 & \tfrac{5}{2} & \tfrac{5}{2} & 2\\ 0 & 0 & 2 & \tfrac{11}{5} \end{array}\right]
\quad R_3 \leftarrow R_3 + \tfrac{3}{5} R_2
\]
The solution is now obtained by back substitution in the order x_3 = 1.1, x_2 = −0.3 and x_1 = 0.7.
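The elimination above can be automated. A minimal Python sketch of Gaussian elimination with partial pivoting (dense and unoptimised), applied to the system of the worked example:

```python
def solve_pivot(A, b):
    """Gaussian elimination with partial pivoting, then back substitution.

    At each stage the row with the largest entry (in modulus) in the
    pivot column is swapped into the pivotal position.
    """
    n = len(b)
    A = [row[:] for row in A]            # work on copies
    b = b[:]
    for col in range(n):
        # choose the pivotal row and interchange
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        # eliminate the entries below the pivot
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= m * A[col][j]
            b[r] -= m * b[col]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

A = [[-1.0, 2.0, 3.0], [-1.0, -2.0, 1.0], [2.0, 1.0, -1.0]]
x = solve_pivot(A, [2.0, 1.0, 0.0])
print(x)   # close to [0.7, -0.3, 1.1]
```

The first pivot chosen is the third row (entry 2), exactly the interchange R_1 ↔ R_3 performed by hand above.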
2.2   Equations with non-constant coefficients
The numerical solution of ordinary differential equations with non-constant coefficients follows a similar pattern to the constant coefficient case. The procedure is again illustrated by example. Let u(x) be the solution of the boundary value problem
\[
a(x)\, u_{xx} + b(x)\, u_x + c(x)\, u = f(x)\,, \qquad u(x_0) = U_0\,, \quad u(x_n) = U_1
\]
where a(x), b(x), c(x) and f(x) are known functions of x, the constants U_0 and U_1 are given, and finally, x_0 and x_n are the left hand and right hand endpoints of an interval which is uniformly dissected by the points x_k = x_0 + kh, where h = (x_n − x_0)/n. At the k-th internal node of the interval [x_0, x_n], the finite difference representation of the differential equation is
\[
a_k \Bigl[\frac{u_{k+1} - 2u_k + u_{k-1}}{h^2} + O(h^2)\Bigr]
+ b_k \Bigl[\frac{u_{k+1} - u_{k-1}}{2h} + O(h^2)\Bigr]
+ c_k u_k = f_k
\]
where a_k = a(x_k), b_k = b(x_k), c_k = c(x_k) and f_k = f(x_k). This equation is now multiplied by 2h² and the terms re-arranged to give
\[
(2a_k + h b_k)\, u_{k+1} - (4a_k - 2h^2 c_k)\, u_k + (2a_k - h b_k)\, u_{k-1} = 2h^2 f_k + O(h^4)
\]
for k = 1, 2, …, (n − 1). As previously, the first and last equations take advantage of the boundary conditions, giving the final system of equations
\[
\begin{aligned}
-(4a_1 - 2h^2 c_1)\, u_1 + (2a_1 + h b_1)\, u_2 &= 2h^2 f_1 - (2a_1 - h b_1)\, U_0 + O(h^4)\\
&\ \,\vdots\\
(2a_k - h b_k)\, u_{k-1} - (4a_k - 2h^2 c_k)\, u_k + (2a_k + h b_k)\, u_{k+1} &= 2h^2 f_k + O(h^4)\\
&\ \,\vdots\\
(2a_{n-1} - h b_{n-1})\, u_{n-2} - (4a_{n-1} - 2h^2 c_{n-1})\, u_{n-1} &= 2h^2 f_{n-1} - (2a_{n-1} + h b_{n-1})\, U_1 + O(h^4)\,.
\end{aligned}
\]
Example   Let u(x) be the solution of the boundary value problem
\[
x^2 u_{xx} + x u_x + u = x\,, \qquad u(\varepsilon) = u(1) = 0
\]
in which ε ∈ (0, 1). This equation corresponds to the choices a(x) = x², b(x) = f(x) = x, c(x) = 1 and U_0 = U_1 = 0. The exact solution of this differential equation is
\[
u(x) = \frac{1}{2}\Bigl[\, x + \frac{\sin(\log(x/\varepsilon)) - \varepsilon \sin(\log x)}{\sin(\log \varepsilon)} \Bigr]\,.
\]
Table 2.2 gives the L¹ and L² norms of the numerical solution of this differential equation for various choices of n when ε = 0.01 and when ε = 0.001.

Table 2.2: Comparison of L² errors in the solution of x²u_xx + xu_x + u = x for the boundary conditions u(ε) = u(1) = 0 when ε = 0.01 and when ε = 0.001.

            ε = 0.01                 ε = 0.001
   n        E_n        E_{n−1}/E_n   E_n        E_{n−1}/E_n
   4        1.8429306     —          4.4799446     —
   8        0.6984934   0.141        0.0312476   0.127
   16       0.2161714   2.638        0.3327640   143.4
   32       0.0752496   3.231        0.5445338   0.094
   64       0.0206367   2.873        0.8809907   0.611
   128      0.0045843   3.646        3.9401618   0.618
   256      0.0010199   4.502        0.5773118   0.224
   512      0.0002431   4.495        0.1126978   6.825
   1024     0.0000599   4.195        0.0236637   5.123
   4096     0.0000037   4.056        0.0011824   4.762
   16384    0.0000002   4.004        0.0000719   4.292
   65536    0.0000000   4.000        0.0000045   4.023
As previously, it is clear that the finite difference approach based on central difference formulae has error O(h²) provided h is suitably small. The difficulty in this application stems from the fact that the differential equation is singular at x = 0. The closer ε approaches zero, the more troublesome the numerical solution will be. All numerical procedures to solve this differential equation will experience the same difficulty. For example, it is clear that in the vicinity of x = ε, u_xx/u has order ε⁻², which is very large when ε is small. Consequently the finite difference approximation to the derivatives of u around x = ε will experience a true numerical error of order (h/ε)². This is only small when h is very small. The standard way to eliminate problems of this sort is to re-express the differential equation in terms of another independent variable, say z = log(x) in this particular example.
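The scheme of this section can be exercised directly. A Python/NumPy sketch for the example above (with a milder ε = 0.1 and a dense solve, both purely for brevity of illustration):

```python
import numpy as np

# Variable-coefficient scheme applied to x^2 u'' + x u' + u = x,
# u(eps) = u(1) = 0, compared with the exact solution quoted above.
eps = 0.1                       # milder than the text's 0.01
n = 256
x = np.linspace(eps, 1.0, n + 1)
h = (1.0 - eps) / n
a, b, c, f = x**2, x, np.ones_like(x), x

# Assemble (2a_k - h b_k) u_{k-1} - (4a_k - 2h^2 c_k) u_k
#        + (2a_k + h b_k) u_{k+1} = 2 h^2 f_k at the interior nodes.
# Both boundary values are zero, so no right-hand-side corrections arise.
A = np.zeros((n - 1, n - 1))
rhs = 2 * h**2 * f[1:-1]
for i, k in enumerate(range(1, n)):
    A[i, i] = -(4 * a[k] - 2 * h**2 * c[k])
    if i > 0:
        A[i, i - 1] = 2 * a[k] - h * b[k]
    if i < n - 2:
        A[i, i + 1] = 2 * a[k] + h * b[k]
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(A, rhs)

ln = np.log
exact = 0.5 * (x + (np.sin(ln(x / eps)) - eps * np.sin(ln(x))) / np.sin(ln(eps)))
print(np.max(np.abs(u - exact)))
```

Shrinking `eps` towards the text's 0.01 with `n` held fixed makes the error grow in line with the (h/ε)² estimate above.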
2.3   Gradient boundary conditions

To date, all boundary conditions have specified the value of the solution on the boundary. Often, however, it is the derivative of the solution that is prescribed on a boundary. Suppose it is required to solve an ODE in [0, 1] with a gradient boundary condition at x = 1. We explore several ways to do this.
2.3.1   First  order  approximation
The first approach is to write

    u(x + h) = u(x) + h u_x(x) + (h^2 / 2) u_xx(x) + O(h^3)

and observe that this equation can be re-expressed in the form

    u_x(x) = ( u(x + h) - u(x) ) / h + O(h) .
The finite difference representation of a gradient boundary condition at a left or right endpoint is

    Left hand boundary:    u_x(x_0) = ( u_1 - u_0 ) / h + O(h)
    Right hand boundary:   u_x(x_n) = ( u_n - u_{n-1} ) / h + O(h)
Therefore to use the finite difference algorithm to solve the differential equation

    a(x) u_xx + b(x) u_x + c(x) u = f(x) ,   u(x_0) = U_0 ,   u_x(x_n) = G_1

where a(x), b(x), c(x) and f(x) are known functions of x, and the constants U_0 and G_1 are given, one enforces the ordinary differential equation at each interior node of [x_0, x_n] and uses the finite difference representation of the boundary condition at x = x_n. Note that one further equation is now
2.3.   GRADIENT  BOUNDARY  CONDITIONS   27
required because u_n is no longer known and has to be determined as part of the solution process. The finite difference equations are
    -(4a_1 - 2h^2 c_1) u_1 + (2a_1 + h b_1) u_2 = 2h^2 f_1 - (2a_1 - h b_1) U_0 + O(h^4)
        ...
    (2a_k - h b_k) u_{k-1} - (4a_k - 2h^2 c_k) u_k + (2a_k + h b_k) u_{k+1} = 2h^2 f_k + O(h^4)
        ...
    (2a_{n-1} - h b_{n-1}) u_{n-2} - (4a_{n-1} - 2h^2 c_{n-1}) u_{n-1} + (2a_{n-1} + h b_{n-1}) u_n = 2h^2 f_{n-1} + O(h^4)
    -u_{n-1} + u_n = h G_1 + O(h^2) .
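As an illustrative sketch (not code from the text), the system above can be assembled and solved in Python with the Thomas algorithm; the function names and interface are assumptions made here for the example.

```python
def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm: row i is sub[i]*x[i-1] + diag[i]*x[i] + sup[i]*x[i+1] = rhs[i]."""
    n = len(diag)
    d, r = diag[:], rhs[:]
    for i in range(1, n):                       # forward elimination
        m = sub[i] / d[i - 1]
        d[i] -= m * sup[i - 1]
        r[i] -= m * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = (r[i] - sup[i] * x[i + 1]) / d[i]
    return x

def solve_bvp_first_order(a, b, c, f, x0, xn, U0, G1, n):
    """Solve a(x)u'' + b(x)u' + c(x)u = f(x), u(x0) = U0, u'(xn) = G1,
    closing the system with the O(h) equation -u_{n-1} + u_n = h*G1."""
    h = (xn - x0) / n
    xs = [x0 + k * h for k in range(n + 1)]
    # unknowns u_1 .. u_n
    sub = [0.0] * n; diag = [0.0] * n; sup = [0.0] * n; rhs = [0.0] * n
    for k in range(1, n):                       # interior equations
        x = xs[k]
        sub[k - 1] = 2 * a(x) - h * b(x)        # multiplies u_{k-1}
        diag[k - 1] = -(4 * a(x) - 2 * h * h * c(x))
        sup[k - 1] = 2 * a(x) + h * b(x)
        rhs[k - 1] = 2 * h * h * f(x)
    rhs[0] -= (2 * a(xs[1]) - h * b(xs[1])) * U0  # known boundary value folded in
    sub[n - 1], diag[n - 1], rhs[n - 1] = -1.0, 1.0, h * G1
    return xs, [U0] + solve_tridiag(sub, diag, sup, rhs)
```

For the model problem u'' = 0 with u(0) = 0 and u'(1) = 1, the scheme reproduces the exact solution u = x at every node, since all truncation errors vanish for linear solutions.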
To appreciate how the inferior accuracy of the last equation degrades the accuracy of the entire solution, it is sufficient to find the numerical solution of

    x^2 u_xx + x u_x + u = x ,   u(ε) = u_x(1) = 0

with ε = 0.01. This equation corresponds to the choices a(x) = x^2, b(x) = f(x) = x, c(x) = 1 and U_0 = G_1 = 0, and has exact solution

    u(x) = (1/2) [ x + ( sin(log(ε/x)) - ε cos(log x) ) / cos(log ε) ] .
Table 2.3 illustrates the error structure of the numerical solution based on the finite difference method.
    n         E_n         E_{n-1}/E_n
    4         3.7369443   --
    8         3.4181676   1.246
    16        3.1350210   1.093
    32        2.5472682   1.090
    64        1.4788262   1.231
    128       0.5404178   1.722
    256       0.1713286   2.736
    512       0.0598533   3.154
    1024      0.0235584   2.862
    2048      0.0101889   2.541
    4096      0.0046959   2.312
    8192      0.0022481   2.169
    32768     0.0005433   2.089

Table 2.3: Comparison of L_2 errors in the solution of x^2 u_xx + x u_x + u = x where u(0.01) = u_x(1) = 0. The gradient boundary condition is implemented to O(h) accuracy.
It is clear from Table 2.3 that the accuracy of the finite difference scheme is degraded to the accuracy of its worst equation, in this case the boundary condition equation, which is only O(h) accurate. We need O(h^2) accurate forward and backward difference formulae for gradients.
2.3.2   Second order approximation

The O(h^2) accurate forward and backward finite difference formulae for u_x(x) are respectively

    -3u(x) + 4u(x + h) - u(x + 2h) = 2h u_x(x) + O(h^3) ,
     u(x - 2h) - 4u(x - h) + 3u(x) = 2h u_x(x) + O(h^3) .

Thus the O(h^2) accurate finite difference representation of a gradient at a left or a right endpoint is

    Left hand boundary:    u_x(x_0) = ( -3u_0 + 4u_1 - u_2 ) / (2h) + O(h^2)
    Right hand boundary:   u_x(x_n) = ( u_{n-2} - 4u_{n-1} + 3u_n ) / (2h) + O(h^2)
The new finite difference formulation of the original problem only differs from the original formulation in respect of one equation, namely that contributed by the boundary condition at x = 1. The numerical form is
    -(4a_1 - 2h^2 c_1) u_1 + (2a_1 + h b_1) u_2 = 2h^2 f_1 - (2a_1 - h b_1) U_0 + O(h^4)
        ...
    (2a_k - h b_k) u_{k-1} - (4a_k - 2h^2 c_k) u_k + (2a_k + h b_k) u_{k+1} = 2h^2 f_k + O(h^4)
        ...
    (2a_{n-1} - h b_{n-1}) u_{n-2} - (4a_{n-1} - 2h^2 c_{n-1}) u_{n-1} + (2a_{n-1} + h b_{n-1}) u_n = 2h^2 f_{n-1} + O(h^4)
    u_{n-2} - 4u_{n-1} + 3u_n = 2h G_1 + O(h^3) .
Unfortunately, the matrix of the resulting system of equations is no longer tri-diagonal, but this causes no difficulty since the matrix can be converted to tri-diagonal form by a row operation in which the penultimate row of the matrix is used to eliminate u_{n-2} from the last row of the matrix.
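The row operation can be sketched as follows; the helper below and its argument names are illustrative assumptions, not from the text.

```python
def fold_last_row(p_sub, p_diag, p_sup, p_rhs, h, G1):
    """Eliminate u_{n-2} from  u_{n-2} - 4u_{n-1} + 3u_n = 2h*G1  using the
    penultimate row  p_sub*u_{n-2} + p_diag*u_{n-1} + p_sup*u_n = p_rhs."""
    m = 1.0 / p_sub                    # multiplier that cancels u_{n-2}
    return (-4.0 - m * p_diag,         # new coefficient of u_{n-1}
            3.0 - m * p_sup,           # new coefficient of u_n
            2.0 * h * G1 - m * p_rhs)  # new right-hand side
```

For u'' = 0 (penultimate row 2u_{n-2} - 4u_{n-1} + 2u_n = 0) the folded row reduces to -2u_{n-1} + 2u_n = 2h G_1, that is, the exact relation u_n - u_{n-1} = h G_1.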
    n     E_n         E_{n-1}/E_n        n       E_n         E_{n-1}/E_n
    4     3.7567566   --                 512     0.0261418   4.060
    8     3.4135978   1.116              1024    0.0064812   4.097
    16    3.1071541   1.101              2048    0.0016165   4.033
    32    2.4869724   1.099              4096    0.0004039   4.009
    64    1.3708621   1.249              8192    0.0001009   4.002
    128   0.4348676   1.814              16384   0.0000252   4.001
    256   0.1071061   3.152              32768   0.0000063   4.000

Table 2.4: L_2 errors in the solution of x^2 u_xx + x u_x + u = x with boundary conditions u(0.01) = u_x(1) = 0 are compared. The gradient boundary condition is implemented to O(h^2) accuracy.
Table 2.4 illustrates the behaviour of the numerical solution based on the finite difference method in which the gradient boundary condition is implemented to O(h^2) accuracy. Clearly O(h^2) accuracy is now recovered by using the second order accurate implementation of the boundary condition.
2.3.3   Fictitious nodes

The most direct way to derive a second order accurate boundary condition is to use the central difference formulae

    u_x(x_0) = ( u_1 - u_{-1} ) / (2h) + O(h^2) ,   u_x(x_n) = ( u_{n+1} - u_{n-1} ) / (2h) + O(h^2) .
The difficulty with these approximations is that they introduce fictitious solutions u_{-1} and u_{n+1} at the fictitious nodes x_{-1} (for u_x(x_0)) and x_{n+1} (for u_x(x_n)) respectively. Both nodes are fictitious because they lie outside the region in which the differential equation is valid. The introduction of a fictitious node creates another unknown to be determined. The procedure is as follows. The differential equation is assumed to be valid at the node x_n to obtain
    a_n [ ( u_{n+1} - 2u_n + u_{n-1} ) / h^2 + O(h^2) ] + b_n G_1 + c_n u_n = f_n ,   ( u_{n+1} - u_{n-1} ) / (2h) = G_1 + O(h^2) .
The task is now to eliminate u_{n+1} between both equations. To do this, multiply the first equation by h^2 and the second equation by 2h a_n and subtract to get

    2a_n u_{n-1} + ( h^2 c_n - 2a_n ) u_n = h^2 f_n - ( 2a_n + h b_n ) h G_1 + O(h^3) .
This equation is now used as the final equation of the tri-diagonal system, which becomes

    -(4a_1 - 2h^2 c_1) u_1 + (2a_1 + h b_1) u_2 = 2h^2 f_1 - (2a_1 - h b_1) U_0 + O(h^4)
        ...
    (2a_k - h b_k) u_{k-1} - (4a_k - 2h^2 c_k) u_k + (2a_k + h b_k) u_{k+1} = 2h^2 f_k + O(h^4)
        ...
    (2a_{n-1} - h b_{n-1}) u_{n-2} - (4a_{n-1} - 2h^2 c_{n-1}) u_{n-1} + (2a_{n-1} + h b_{n-1}) u_n = 2h^2 f_{n-1} + O(h^4)
    2a_n u_{n-1} + (h^2 c_n - 2a_n) u_n = h^2 f_n - (2a_n + h b_n) h G_1 + O(h^3) .
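As a sketch in code, the closing row's coefficients follow directly from this equation; the helper below is an illustrative assumption, with an, bn, cn, fn denoting a(x_n), b(x_n), c(x_n), f(x_n).

```python
def fictitious_last_row(an, bn, cn, fn, h, G1):
    """Closing equation 2*a_n*u_{n-1} + (h^2*c_n - 2*a_n)*u_n = RHS, obtained
    by eliminating the fictitious value u_{n+1}.  Returns the two coefficients
    (of u_{n-1} and u_n) and the right-hand side."""
    return (2.0 * an,
            h * h * cn - 2.0 * an,
            h * h * fn - (2.0 * an + h * bn) * h * G1)
```

For u'' = 0 with u'(1) = 1 and h = 0.1 the row becomes 2u_{n-1} - 2u_n = -0.2, the exact relation u_n - u_{n-1} = h.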
Table 2.5 compares the error structure of the backward difference and fictitious node implementations of the gradient boundary condition when ε = 0.01.
              Backward difference         Fictitious nodes
    n         E_n         E_{n-1}/E_n     E_n         E_{n-1}/E_n
    4         3.7567566   --              3.7573398   --
    8         3.4135978   1.116           3.4144871   1.234
    16        3.1071541   1.101           3.1108516   1.100
    32        2.4869724   1.099           2.4917194   1.098
    64        1.3708621   1.249           1.3755215   1.248
    128       0.4348676   1.814           0.4371938   1.811
    256       0.1071061   3.152           0.1078104   3.146
    512       0.0261418   4.060           0.0263256   4.055
    1024      0.0064812   4.097           0.0065275   4.095
    2048      0.0016165   4.033           0.0016281   4.033
    4096      0.0004039   4.009           0.0004068   4.009
    8192      0.0001009   4.002           0.0001017   4.002
    16384     0.0000252   4.001           0.0000254   4.001
    32768     0.0000063   4.000           0.0000064   4.000
Table 2.5: Comparison of L_2 errors in the solution of x^2 u_xx + x u_x + u = x for the boundary conditions u(0.01) = u_x(1) = 0, using the backward difference and fictitious node implementations of the gradient boundary condition.

Example   Solve the boundary value problem

    u_xx - u_x tan x = -1 ,   u(0) = 0 ,   u_x(π/4) = -1 .

The exact solution is u(x) = log cos x.
Solution   Take h = π/(4n); then a(x) = 1, b(x) = -tan x, c(x) = 0 and f(x) = -1. The boundary conditions are U_0 = 0 and G_1 = -1. The finite difference equations based on the fictitious nodes
procedure are therefore

    -4u_1 + (2 + h b_1) u_2 = 2h^2 f_1 + O(h^4)
        ...
    (2 - h b_k) u_{k-1} - 4u_k + (2 + h b_k) u_{k+1} = 2h^2 f_k + O(h^4)
        ...
    (2 - h b_{n-1}) u_{n-2} - 4u_{n-1} + (2 + h b_{n-1}) u_n = 2h^2 f_{n-1} + O(h^4)
    2u_{n-1} - 2u_n = h^2 f_n + (2 + h b_n) h + O(h^3) .
Suppose initially that n = 40; the initial choice of n is arbitrary but should be sensible. Let the finite difference solution computed at these nodes be u^(1)_0, ..., u^(1)_n. Now replace n by 2n and recompute the solution. Let this solution be u^(2)_0, ..., u^(2)_{2n}. The solution u^(1)_k may be compared directly with the solution u^(2)_{2k} since both apply at the same node. The root mean square
    E_n = [ (1/(n+1)) Σ_{k=0}^{n} ( u^(1)_k - u^(2)_{2k} )^2 ]^{1/2}
is a measure of the error incurred in computing the original solution. In this case E_40 is determined. The procedure may be continued by doubling n again, so that E_80 is computed. The algorithm continues to double n and terminates whenever E_n < ε, where ε is a user-supplied error bound. Moreover, since we already know that the procedure is O(h^2), we expect the ratio E_{n-1}/E_n to have a limiting value of 4, so that every iteration of this loop reduces E_n by a factor of approximately 4. Table 2.6 illustrates the convergence properties of E_n.
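The error measure above can be sketched as a short helper (an illustrative name, not from the text):

```python
import math

def rms_difference(u_coarse, u_fine):
    """E_n = sqrt( (1/(n+1)) * sum_k (u1_k - u2_{2k})^2 ), comparing a solution
    on n intervals with one on 2n intervals at the shared nodes."""
    n = len(u_coarse) - 1
    s = sum((u_coarse[k] - u_fine[2 * k]) ** 2 for k in range(n + 1))
    return math.sqrt(s / (n + 1))
```

If the two solutions agree at every shared node the measure is zero; a constant nodewise difference of 1 gives E_n = 1.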
    n     E_n         E_{n-1}/E_n
    40    0.0001026   --
    80    0.0000261   3.922
    160   0.0000066   3.960
    320   0.0000017   3.980
    640   0.0000004   3.990

Table 2.6: Estimated L_2 norm of the errors incurred in the solution of u_xx - u_x tan x = -1 with boundary conditions u(0) = 0 and u_x(π/4) = -1.
Richardson extrapolation takes advantage of the error structure of the solution. Suppose u^(1) and u^(2) are two approximations of the exact solution u corresponding to dissection refinements h_1 and h_2 respectively. Since the error is O(h^2), then

    u^(1) = u + c h_1^2 + O(h_1^3)
    u^(2) = u + c h_2^2 + O(h_2^3)

    ⟹   u = [ h_2^2 u^(1) - h_1^2 u^(2) ] / ( h_2^2 - h_1^2 ) + O( ( h_2^2 h_1^3 - h_1^2 h_2^3 ) / ( h_2^2 - h_1^2 ) ) .
The combination of u^(1) and u^(2) defined by ū is more accurate than either of u^(1) or u^(2). In the case in which h_2 = h_1/2, it is easy to see that the extrapolated solution is

    ū = ( 4u^(2) - u^(1) ) / 3 + O(h^3) .
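For two nodal solutions with h_2 = h_1/2 the extrapolation can be sketched as (illustrative helper):

```python
def richardson(u1, u2):
    """Nodewise Richardson extrapolation with h2 = h1/2: (4*u2 - u1)/3 at the
    nodes the coarse mesh shares with the doubled mesh."""
    return [(4.0 * u2[2 * k] - u1[k]) / 3.0 for k in range(len(u1))]
```

If both approximations carry a pure c h^2 error, the combination removes it exactly: with u = x^2 and c = 1, u1 carries error h_1^2 = 0.25 and u2 carries error h_2^2 = 0.0625, and the extrapolated values are exact.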
To test this idea, Richardson extrapolation is used to improve the estimated solution, and the L_2 norm is computed with respect to the exact solution so as to check the improved convergence. To be specific, u^(1) and u^(2) are combined to give the more accurate solution ū^(1), which is then used for the norm computation. In the next stage, u^(2) and u^(3) are combined to give the more accurate solution ū^(2), which is then used for the norm computation. The procedure is repeated. Table 2.7 records the results of these calculations.
    n     E_n              E_{n-1}/E_n
    40    0.1166 x 10^-5   --
    80    0.1493 x 10^-6   7.810
    160   0.1890 x 10^-7   7.902
    320   0.2378 x 10^-8   7.946
    640   0.3029 x 10^-9   7.851

Table 2.7: Estimates of the L_2 norm of the errors incurred in the solution of u_xx - u_x tan x = -1 with boundary conditions u(0) = 0 and u_x(π/4) = -1. The norm is computed from the exact solution and the Richardson extrapolation of the finite difference solution.
2.4   Non-linear  equations
The application of the finite difference algorithm to non-linear equations is similar to that for linear equations in the respect that the non-linear equation is enforced at each node of the dissection by replacing each derivative appearing in the equation with its finite difference representation. However, the resulting equations are no longer linear. The general solution strategy is based on iteration. The algebraic equations resulting from the replacement of derivatives by finite differences are expressed in a fixed-point format that is known to converge.
2.4.1   Fixed  point  procedure
One  dimension
The general idea of a fixed point procedure is first illustrated for the solution of f(x) = 0, assuming the equation has a solution near x_0. The aim is to construct a fixed point scheme which converges to the true solution, assumed to be near x_0. Let λ be an arbitrary constant; then the identity

    x = x + λ f(x) = g(x)
2.4.   NON-LINEAR  EQUATIONS   33
may be used to construct the iterative scheme x_{k+1} = g(x_k) with initial value x_0. If this scheme converges, then the limit point is a solution of f(x) = 0. The usual way to promote convergence is to choose λ so that g(x) is a contraction mapping near the fixed point. If x_0 is a good approximation to this fixed point, then choose λ so that g'(x_0) = 1 + λ f'(x_0) = 0, that is, λ = -1/f'(x_0). The iteration is now

    x_{k+1} = x_k - f(x_k) / f'(x_0) ,   x_0 given.

Note that this algorithm is similar to the Newton-Raphson algorithm, the latter simply updating λ as improved estimates of the fixed point are found.
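A minimal sketch of this iteration in Python (the function name and interface are illustrative):

```python
def fixed_point_root(f, fprime_x0, x0, tol=1e-12, max_iter=100):
    """Iterate x_{k+1} = x_k - f(x_k)/f'(x_0): Newton's slope frozen at x_0."""
    lam = -1.0 / fprime_x0        # lambda = -1/f'(x_0)
    x = x0
    for _ in range(max_iter):
        step = lam * f(x)
        x += step
        if abs(step) < tol:
            break
    return x
```

For f(x) = x^2 - 2 with x_0 = 1.5 (so f'(x_0) = 3) the scheme converges linearly to sqrt(2); re-evaluating the slope at every step would recover the Newton-Raphson algorithm proper.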
Many  dimensions
Let F(x) = 0 be a system of n equations in n unknowns, and suppose that X
0
 is an approximation to
this solution.   The generalisation of the one dimensional algorithm to many dimensions begins with
the identity
X = X +AF(X) = G(X)
in which  X  is a vector of dimension  n and  A is an arbitrary constant  n n matrix.   This identity in
turn leads to the iterative algorithm  X
k+1
 = X
k
 +AF(X
k
).   To make this a contraction mapping in
the vicinity of  X
0
, it is enough to choose  A such that
(X +AF(X))
X
j
X=X
0
= 0      I +AJ(X
0
) = 0
where J(X
0
) is the Jacobian of F(X) evaluated at X
0
.   Clearly A = J
1
(X
0
) and the nal iterative
scheme is
X
k+1
 = X
k
J
1
(X
0
)F(X
k
) ,   X
0
 given.
Provided X
0
 is a good guess of the original solution then this algorithm should converge to the exact
solution.   The  Newton-Raphson  variant  of  this  scheme  is  obtained  by  upgrading  A  whenever  each
new estimate of the zero becomes available.   Of course, J
1
(X
0
) is never computed in this algorithm;
instead we solve the simultaneous equations
J(X
0
)(X
k+1
X
k
) = F(X
k
)   X
0
 given.
Example   Find the solutions of the simultaneous equations

    x y + sin x - y^2 = 0 ,   x^2 y^3 - tan y + 1 = 0

using the Newton-Raphson algorithm.

Solution   Here F(X) = [ x y + sin x - y^2 , x^2 y^3 - tan y + 1 ] and the Jacobian of this vector is

    J(X) = [ y + cos x     x - 2y
             2x y^3        3x^2 y^2 - sec^2 y ]
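A sketch of the Newton-Raphson iteration for this system, starting from X = [0.0, 1.0] and solving each 2 x 2 linear step by Cramer's rule (an illustrative implementation, not the text's code):

```python
import math

def newton_2x2(x, y, tol=1e-12, max_iter=50):
    """Newton-Raphson for F1 = x*y + sin(x) - y^2, F2 = x^2*y^3 - tan(y) + 1."""
    for _ in range(max_iter):
        f1 = x * y + math.sin(x) - y * y
        f2 = x * x * y ** 3 - math.tan(y) + 1.0
        j11 = y + math.cos(x);  j12 = x - 2.0 * y
        j21 = 2.0 * x * y ** 3; j22 = 3.0 * x * x * y * y - 1.0 / math.cos(y) ** 2
        det = j11 * j22 - j12 * j21
        dx = (f1 * j22 - f2 * j12) / det   # Cramer's rule for J d = F
        dy = (j11 * f2 - j21 * f1) / det
        x, y = x - dx, y - dy              # X_{k+1} = X_k - J^{-1} F(X_k)
        if abs(dx) + abs(dy) < tol:
            break
    return x, y
```

The first step from (0, 1) lands on (0.3372779, 0.8372779) and the iteration settles on the root (0.3767515, 0.8235034), in agreement with Table 2.8.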
With starting values X = [0.0, 1.0], the iterations assuming a fixed A, and with A updated as the iteration proceeds (Newton-Raphson), are given in Table 2.8.
         A not updated              Newton-Raphson
    n    x           y              x           y
    1    0.3372779   0.8372779      0.3372779   0.8372779
    2    0.3686518   0.8247921      0.3762218   0.8235464
    3    0.3748875   0.8230965      0.3767514   0.8235034
    4    0.3762579   0.8230912      0.3767515   0.8235034
    ...
    15   0.3767514   0.8235032      0.3767515   0.8235034
    16   0.3767515   0.8235034      0.3767515   0.8235034

Table 2.8: Estimated solutions of the simultaneous equations x y + sin x - y^2 = 0 and x^2 y^3 - tan y + 1 = 0 using a fixed A and a continually updated A (Newton-Raphson).
Example   Use the finite difference algorithm to solve the boundary value problem

    u_xx = u^2 ,   u(0) = 0 ,   u(1) = 1 .

Solution   In the usual notation, the finite difference approximation to this equation is

    u_{k+1} - 2u_k + u_{k-1} - h^2 u_k^2 = 0 ,   k = 1, ..., (n-1)

where u_0 = 0 and u_n = 1. These values appear in the first and last equations and so F is
    F_1 = u_2 - 2u_1 - h^2 u_1^2
        ...
    F_k = u_{k+1} - 2u_k + u_{k-1} - h^2 u_k^2
        ...
    F_{n-1} = 1 - 2u_{n-1} + u_{n-2} - h^2 u_{n-1}^2 .
The Jacobian matrix is J_{ik} = ∂F_i/∂u_k. In this example, it is clear that J is tri-diagonal. To be specific,
    J = [ -2 - 2h^2 u_1    1                 0                 0    ...   0
          1                -2 - 2h^2 u_2     1                 0    ...   0
          0                1                 -2 - 2h^2 u_3     1    ...
          ...                        ...                       ...
          ...              1                 -2 - 2h^2 u_k     1    ...
          ...                        ...                       ...
          0                ...               0                 1    -2 - 2h^2 u_{n-1} ]
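The whole Newton-Raphson procedure for this boundary value problem can be sketched as follows, starting from the straight-line guess u(x) = x connecting the boundary data; the tri-diagonal solver and function names are illustrative assumptions.

```python
def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm: row i is sub[i]*x[i-1] + diag[i]*x[i] + sup[i]*x[i+1] = rhs[i]."""
    n = len(diag)
    d, r = diag[:], rhs[:]
    for i in range(1, n):
        m = sub[i] / d[i - 1]
        d[i] -= m * sup[i - 1]
        r[i] -= m * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - sup[i] * x[i + 1]) / d[i]
    return x

def newton_bvp(n, tol=1e-12, max_iter=12):
    """Newton-Raphson for u_xx = u^2, u(0) = 0, u(1) = 1, on n intervals."""
    h = 1.0 / n
    u = [k * h for k in range(n + 1)]           # straight-line initial guess
    for _ in range(max_iter):
        F = [u[k + 1] - 2 * u[k] + u[k - 1] - h * h * u[k] ** 2 for k in range(1, n)]
        sub = [1.0] * (n - 1)
        diag = [-2.0 - 2 * h * h * u[k] for k in range(1, n)]
        sup = [1.0] * (n - 1)
        delta = solve_tridiag(sub, diag, sup, [-f for f in F])   # J*delta = -F
        for k in range(1, n):
            u[k] += delta[k - 1]
        if max(abs(d) for d in delta) < tol:
            break
    return u
```

Convergence is quadratic; a handful of iterations drives the corrections below the stopping tolerance, after which the discrete residuals are negligible.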
The algorithm is started by guessing the initial solution. Often a suitable guess is obtained by assuming any sensible function of x connecting the boundary data, say u(x) = x in this instance. The refinement of the mesh is set by choosing n, and the initial estimate of u_0, u_1, ..., u_n is determined from the guessed solution. The value of n is then doubled and the computation repeated. The root mean square error norm

    E_n = [ (1/(n+1)) Σ_{k=0}^{n} ( u^(1)_k - u^(2)_{2k} )^2 ]^{1/2}
is then used to decide when the algorithm should terminate. With a stopping criterion of E_n < 10^{-8}, Table 2.9 shows the result of twelve iterations of the Newton-Raphson algorithm.
    n    E_n         E_{n-1}/E_n
    4    0.0001037   --
    8    0.0000262   3.959
    16   0.0000066   3.992
    32   0.0000016   3.998
    64   0.0000004   4.000

Table 2.9: Estimated errors in the solution of u_xx = u^2 with boundary conditions u(0) = 0 and u(1) = 1. The accuracy is based on 12 iterations of the Newton-Raphson algorithm.
Exercises
Question 11.   Use the O(h^2) method of finite differences to solve the boundary value problem

    y'' + y = 0 ,   y(0) = 1 ,   y'(1) = 0

taking h = 1/4. [Use the backward difference representation of a first derivative to deal with the gradient boundary condition.]

Show that y(x) = sec 1 cos(1 - x) is the exact solution of this equation and use it to check the accuracy of your answers.
Question 13.   Use the O(h^2) method of finite differences with h = 1/3 to solve the boundary value problem

    x^2 y'' - x y' + y = x^2 ,   y(1) = 0 ,   y(2) = 4 .

Verify that the exact solution to this boundary value problem is

    y(x) = x^2 - x + x log x / log 2

and use this exact solution to check the accuracy of your numerical calculations.
Question 14.   Use the O(h^2) method of finite differences with h = 1/3 to solve the boundary value problem

    x^2 y'' - x y' + y = x^2 ,   y'(1) = 0 ,   y(2) = 1 .

Use both forward differences and fictitious nodes to deal with y'(1) = 0.

...

    ∂^2 u(x, y)/∂x∂y ≈ [ u(x+h, y+h) - u(x+h, y-h) - u(x-h, y+h) + u(x-h, y-h) ] / (4h^2) ?

[Think about it before expanding; also see degree exam 2000!]
Question 17.   Consider the differential equation x^2 u_xx + x u_x + u = x with boundary conditions u(ε) = u(1) = 0. Use the substitution x = e^t to reformulate the problem as one for u(t). Now discretise in t (using second order central differences). Using N = 2, 4, ... (just a few by hand/calculator, or quite a few using a PC) compare results with those given in the lecture. Why is this better?
Question 18.   Write down the central difference/Newton-Raphson method for the non-linear equation

    u_xx + 2u u_x + u = sin 2x ,   u(0) = u(π) = 0 .

[Can you spot one exact solution?]
Chapter  3
Elliptic  equations  in  two  dimensions
The finite difference algorithm can be applied to the solution of partial differential equations. The procedure will be demonstrated with respect to partial differential equations in two dimensions. Let D ⊂ R^2 be such that f = f(x, y) has Taylor series expansion

    f(x+h, y+k) = f(x, y) + h f_x + k f_y + (1/2) [ h^2 f_xx + 2hk f_xy + k^2 f_yy ]
                  + (1/6) [ h^3 f_xxx + 3h^2 k f_xxy + 3h k^2 f_xyy + k^3 f_yyy ] + ...
at (x, y) ∈ D for suitable values of h and k. Suppose that D can be divided into rectangles by the nodes (x_0 + ih, y_0 + jk) where i = 0, 1, ..., n and j = 0, 1, ..., m. In particular, the usual central difference approximations are valid and give
    ∂f(x_i, y_j)/∂x = ( f_{i+1,j} - f_{i-1,j} ) / (2h) + O(h^2)
    ∂f(x_i, y_j)/∂y = ( f_{i,j+1} - f_{i,j-1} ) / (2k) + O(k^2)
    ∂^2 f(x_i, y_j)/∂x^2 = ( f_{i+1,j} - 2f_{i,j} + f_{i-1,j} ) / h^2 + O(h^2)
    ∂^2 f(x_i, y_j)/∂y^2 = ( f_{i,j+1} - 2f_{i,j} + f_{i,j-1} ) / k^2 + O(k^2)
where f_{i,j} = f(x_0 + ih, y_0 + jk). The only new feature to arise in two dimensions is the mixed derivative f_xy. It may be shown that

    ∂^2 f(x_i, y_j)/∂x∂y = ( f_{i+1,j+1} - f_{i-1,j+1} - f_{i+1,j-1} + f_{i-1,j-1} ) / (4hk) + O(hk) .
To maintain simplicity, it will henceforth be assumed that h = k, that is, the resolutions of the mesh in the x and y directions are identical, so that the region in which the solution is sought is now partitioned into squares of side h by the nodes. The simplified finite difference formulae for the first and second order partial derivatives of f become
    ∂f(x_i, y_j)/∂x = ( f_{i+1,j} - f_{i-1,j} ) / (2h) + O(h^2)
    ∂f(x_i, y_j)/∂y = ( f_{i,j+1} - f_{i,j-1} ) / (2h) + O(h^2)
    ∂^2 f(x_i, y_j)/∂x^2 = ( f_{i+1,j} - 2f_{i,j} + f_{i-1,j} ) / h^2 + O(h^2)
    ∂^2 f(x_i, y_j)/∂y^2 = ( f_{i,j+1} - 2f_{i,j} + f_{i,j-1} ) / h^2 + O(h^2)
    ∂^2 f(x_i, y_j)/∂x∂y = ( f_{i+1,j+1} - f_{i-1,j+1} - f_{i+1,j-1} + f_{i-1,j-1} ) / (4h^2) + O(h^2) .
3.1   Computational  molecule
The computational molecule is a convenient way to describe the complex mix of variables necessary to construct the finite difference representation of a differential operator. For example, Laplace's equation at (x_i, y_j) in two dimensions has finite difference form

    u_xx + u_yy = ( u_{i+1,j} - 2u_{i,j} + u_{i-1,j} ) / h^2 + ( u_{i,j+1} - 2u_{i,j} + u_{i,j-1} ) / h^2 + O(h^2)
                = ( u_{i+1,j} + u_{i,j+1} - 4u_{i,j} + u_{i,j-1} + u_{i-1,j} ) / h^2 + O(h^2) = 0 .
The centre of the molecule is positioned at the node at which the operator is to be expressed, and the entries of the molecule indicate the proportion of the surrounding nodes to be mixed in order to obtain the finite difference representation of the operator. Assuming the molecular orientation

    (i-1, j+1)   (i, j+1)   (i+1, j+1)
    (i-1, j)     (i, j)     (i+1, j)
    (i-1, j-1)   (i, j-1)   (i+1, j-1)

the Laplace operator corresponds to the computational molecule

    u_xx + u_yy = (1/h^2) [  0    1    0
                             1   -4    1
                             0    1    0  ]

Similarly, the mixed derivative operator corresponds to the computational molecule

    u_xy = (1/(4h^2)) [ -1    0    1
                         0    0    0
                         1    0   -1  ]
Example   Construct the finite difference scheme to solve the Poisson equation

    u_xx + u_yy = f(x, y)

in the domain D = (0, 1) x (0, 1) subject to the boundary condition u = 0 on ∂D.
Solution   The computational molecule at each interior node gives the set of equations

    u_{i+1,j} + u_{i,j+1} - 4u_{i,j} + u_{i,j-1} + u_{i-1,j} = h^2 f_{i,j}

where (i, j) ∈ (1, ..., n-1) x (1, ..., n-1). One must solve a set of (n-1)^2 linear equations for the solution at each internal node of D. The fundamental difference between the one-dimensional and higher dimensional finite difference schemes lies in the fact that any node has more than two neighbours contributing to the numerical solution (in fact 4 in the case of the Poisson equation). Consequently, the tri-diagonal structure of the one-dimensional problem is lost, since nodes geographically close to each other can no longer be numbered sequentially. An enumeration scheme for the unknowns u_{i,j} is required. The choice is ultimately arbitrary, but one obvious possibility is

    v_{i + (n-1)(j-1)} = u_{i,j} ,   (i, j) ∈ (1, ..., n-1) x (1, ..., n-1) .

The vector V of unknowns has dimension (n-1)^2 and enumerates the nodes from left to right (in the direction of increasing x) and from bottom to top (in the direction of increasing y). The vector V is the solution of the linear system AV = F where A is a constant matrix and F is a vector of dimension (n-1)^2 with kth entry F_k = h^2 f_{i,j}. The kth row of A will contain the elements of the Laplacian computational molecule appropriately positioned. The different possibilities are
(a)   If i = 1 then A_{k,k} = -4 and A_{k,k+1} = 1.

(b)   If 1 < i < (n-1) then A_{k,k-1} = 1, A_{k,k} = -4 and A_{k,k+1} = 1.

(c)   If i = n-1 then A_{k,k-1} = 1 and A_{k,k} = -4.

For each value of k, the value of j is examined. If j = 1 then A_{k,k+n-1} = 1, while if 1 < j < (n-1) then A_{k,k-n+1} = A_{k,k+n-1} = 1, and finally if j = n-1 then A_{k,k-n+1} = 1. The structure of A may be represented efficiently by the block matrix form
    A = [ T   I   0   0   ...   0
          I   T   I   0   ...   0
          0   I   T   I   ...
          ...       ...       ...
          0   ...       0   I   T ]

where I is the (n-1) x (n-1) identity matrix and T is the (n-1) x (n-1) tri-diagonal matrix

    T = [ -4    1    0    0   ...   0
           1   -4    1    0   ...   0
           0    1   -4    1   ...
          ...       ...       ...
           0   ...        1   -4 ] .
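The positioning rules (a)-(c) can be checked with a short assembly routine; dense storage and the function name are illustrative choices made here so the structure is visible (the row index k is 0-based in the code):

```python
def poisson_matrix(n):
    """Dense (n-1)^2 x (n-1)^2 matrix for the 5-point Laplacian, nodes
    numbered left-to-right then bottom-to-top: k = (i-1) + (n-1)*(j-1)."""
    m = n - 1
    A = [[0.0] * (m * m) for _ in range(m * m)]
    for j in range(1, n):
        for i in range(1, n):
            k = (i - 1) + m * (j - 1)
            A[k][k] = -4.0
            if i > 1: A[k][k - 1] = 1.0     # west neighbour
            if i < m: A[k][k + 1] = 1.0     # east neighbour
            if j > 1: A[k][k - m] = 1.0     # south neighbour
            if j < m: A[k][k + m] = 1.0     # north neighbour
    return A
```

For n = 3 the routine reproduces the expected 4 x 4 block pattern with T on the diagonal and identity blocks beside it.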
Suppose now that Gaussian elimination is used to solve these equations. It is clear that the process generates non-zero entries where none existed previously. Thus Gaussian elimination is no longer appropriate, and another approach, one that can take advantage of the sparsity of A, is required.
3.2   Iterative  methods
Iterative methods are most appropriate for the solution of equations AX = B when A is a sparse matrix. A matrix A is sparse whenever it is beneficial to store only the non-zero elements of A. Various procedures are used to store sparse matrices; some are general and some are specific to particular matrix types (e.g. tri-diagonal matrices).
3.2.1   A  general  representation  of  a  sparse  matrix
The non-zero elements of a general sparse matrix are often stored as a vector together with the row and column indices in which they appear. However, the way in which this is achieved involves a degree of subtlety. The entries of A are stored sequentially row-by-row together with the associated column index. An auxiliary vector, say r, of dimension one more than the number of rows of A stores the number of non-zero elements in the kth row of A indirectly in the difference r_{k+1} - r_k. This representation allows efficient multiplication by the matrix A.
Example   Find the sparse representation of the matrix

    A = [ 0.0   1.0   2.3   0.0   0.0
          2.0   0.0   0.3   3.1   0.0
          0.0   1.0   0.4   0.0   1.5
          0.0   0.0   0.0   3.4   0.0
          0.2   0.0   2.0   0.0   0.0 ]
Solution   The representation consists of the vectors

    a = [ 1.0, 2.3, 2.0, 0.3, 3.1, 1.0, 0.4, 1.5, 3.4, 0.2, 2.0 ]
    c = [ 2, 3, 1, 3, 4, 2, 3, 5, 4, 1, 3 ]
    r = [ 0, 2, 5, 8, 9, 11 ]

In practice, these would be enclosed in a structure.
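Multiplication by a matrix stored in this way can be sketched as (illustrative helper; the column indices are 1-based and the row pointer 0-based, as in the example):

```python
def csr_matvec(a, c, r, x):
    """y = A x for A stored as values a, 1-based column indices c and
    row pointer r, where row i occupies positions r[i] .. r[i+1]-1."""
    n = len(r) - 1
    y = [0.0] * n
    for i in range(n):
        for p in range(r[i], r[i + 1]):
            y[i] += a[p] * x[c[p] - 1]
    return y
```

Applying this to the example matrix with x = (1, 1, 1, 1, 1) returns the row sums, which provides a quick consistency check of the stored representation.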
3.2.2   The  algorithms
General iterative methods for the solution of AX = B usually begin by rewriting this equation in the iterative form

    P X + (A - P) X = B

where P is an arbitrary non-singular matrix of the same dimension as A. Thereafter the solution of AX = B is derived by iterating

    P X_{k+1} = B - (A - P) X_k ,   X_0 given.   (3.1)

The efficacy of this iterative algorithm requires that P possess two important properties.

- Equation (3.1) must be easy to solve for X_{k+1}. For example, this is true if P is a diagonal matrix (Gauss-Jacobi), an upper triangular matrix (solution by backward substitution) or a lower triangular matrix (Gauss-Seidel), for which the solution is obtained by forward substitution.

- The matrix I - P^{-1}A must be a contraction mapping. This requires the eigenvalues of I - P^{-1}A to lie within the unit circle. Clearly a particularly good choice for P occurs when P^{-1} is a good estimate for the inverse of A.

Let A = D + L + U where D is the main diagonal of A, and L and U denote respectively the lower and upper triangular sections of A. Two important choices for P are:

    Algorithm       P        H = I - P^{-1}A
    Gauss-Jacobi    D        -D^{-1}(L + U)
    Gauss-Seidel    D + L    -(D + L)^{-1}U

The iterative scheme (3.1) will converge to the solution of AX = B from any starting value X_0 provided || I - P^{-1}A || < 1, that is, (I - P^{-1}A) is a contraction mapping in the vector space of the solution. This requires that all the eigenvalues of (I - P^{-1}A) lie within the unit circle. The proof of this result relies on the fact that every matrix is unitarily similar to an upper triangular matrix.
Gauss-Jacobi algorithm   The Gauss-Jacobi algorithm corresponds to the choice P = D. With this choice of P, it follows that H = I - P^{-1}A = -D^{-1}(L + U) and the iterative scheme is

    X_{n+1} = D^{-1} B - D^{-1} (L + U) X_n ,   X_0 given.

Prior to implementing this algorithm, the equations are often re-ordered so as to maximise the product of the elements of D. In this way, || D^{-1}(L + U) || is minimised in a straightforward way.
Gauss-Seidel algorithm   The Gauss-Seidel algorithm corresponds to the choice P = D + L. With this choice of P, it follows that H = I - P^{-1}A = -(D + L)^{-1} U and the iterative scheme is

    X^{(n+1)} = (D + L)^{-1} B - (D + L)^{-1} U X^{(n)} ,   X^{(0)} given.

This scheme will converge provided (D + L)^{-1} U is a contraction mapping.
Example   Use the Gauss-Jacobi and Gauss-Seidel algorithms to solve the system of equations

    8x  +   y  -   z  =   8
    2x  +   y  +  9z  =  12
     x  -  7y  +  2z  =  -4 .

Solution   The original equations are re-ordered to give

    8x  +   y  -   z  =   8          x = 1 - y/8 + z/8
     x  -  7y  +  2z  =  -4    ⟹    y = 4/7 + x/7 + 2z/7
    2x  +   y  +  9z  =  12          z = 4/3 - 2x/9 - y/9
The re-ordered equations form the basis of the Gauss-Jacobi and Gauss-Seidel iterative schemes, which in this instance are respectively

    Gauss-Jacobi
        x_1^{(n+1)} = 1 - x_2^{(n)}/8 + x_3^{(n)}/8
        x_2^{(n+1)} = 4/7 + x_1^{(n)}/7 + 2 x_3^{(n)}/7
        x_3^{(n+1)} = 4/3 - 2 x_1^{(n)}/9 - x_2^{(n)}/9

    Gauss-Seidel
        x_1^{(n+1)} = 1 - x_2^{(n)}/8 + x_3^{(n)}/8
        x_2^{(n+1)} = 4/7 + x_1^{(n+1)}/7 + 2 x_3^{(n)}/7
        x_3^{(n+1)} = 4/3 - 2 x_1^{(n+1)}/9 - x_2^{(n+1)}/9

Both schemes take the starting values x_1^{(0)} = x_2^{(0)} = x_3^{(0)} = 0. Convergence is rapid, with the solution
x_1 = x_2 = x_3 = 1 being obtained approximately after 3 iterations.

                Gauss-Jacobi                 Gauss-Seidel
    Iteration   x_1     x_2     x_3          x_1       x_2         x_3
    0           0       0       0            0         0           0
    1           1       4/7     4/3          1         5/7         65/63
    2           53/56   23/21   26/27        253/252   1781/1764   ≈ 1
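Both schemes can be sketched directly from the re-ordered equations (illustrative functions, not from the text):

```python
def gauss_jacobi(x, y, z, iters=50):
    """Gauss-Jacobi: every update uses only the previous sweep's values."""
    for _ in range(iters):
        x, y, z = (1.0 - y / 8.0 + z / 8.0,
                   4.0 / 7.0 + x / 7.0 + 2.0 * z / 7.0,
                   4.0 / 3.0 - 2.0 * x / 9.0 - y / 9.0)
    return x, y, z

def gauss_seidel(x, y, z, iters=50):
    """Gauss-Seidel: each update uses the newest available values."""
    for _ in range(iters):
        x = 1.0 - y / 8.0 + z / 8.0
        y = 4.0 / 7.0 + x / 7.0 + 2.0 * z / 7.0
        z = 4.0 / 3.0 - 2.0 * x / 9.0 - y / 9.0
    return x, y, z
```

Starting from (0, 0, 0), both iterations settle on the solution x_1 = x_2 = x_3 = 1; Gauss-Seidel does so in fewer sweeps, as the theory of the next section suggests for matrices of this type.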
3.2.3   Comparison  of  Gauss-Jacobi  and  Gauss-Seidel
Although no general comparisons between the Gauss-Jacobi and the Gauss-Seidel algorithms are known, comparisons exist for some particular classes of matrices which occur often in practice. Perhaps the best known of these results is the Stein-Rosenberg theorem. The theorem applies to matrices A with the property that either each diagonal entry of A is positive and all off-diagonal entries of A are non-positive, or alternatively, each diagonal entry of A is negative and all off-diagonal entries of A are non-negative. For this class of matrices, the Gauss-Jacobi and Gauss-Seidel algorithms either both converge or both diverge. If both converge then Gauss-Seidel converges faster than Gauss-Jacobi; alternatively, if both diverge then Gauss-Seidel diverges faster than Gauss-Jacobi. One point to note, however, is that Gauss-Seidel is intrinsically a serial algorithm while Gauss-Jacobi is intrinsically a parallel algorithm. The optimal choice of algorithm may therefore depend on the properties of the computing machine.
Example   Determine whether or not the Gauss-Jacobi algorithm can be made to converge for the matrix

    A = [ 1   3   0
          2   2   1
          0   1   2 ] .
Solution   It is necessary to compute the eigenvalues of $D^{-1}(L+U)$, that is, to solve the equation $\det(D^{-1}(L+U) + \lambda I) = 0$. Since $D$ is non-singular, $\lambda$ satisfies $\det(L + U + \lambda D) = 0$. The rows of $A$ are re-ordered by interchanging rows 1 and 2 to maximise the likelihood that $D^{-1}(L+U)$ is a contraction matrix. The eigenvalues $\lambda$ satisfy
\[
\begin{vmatrix} 2\lambda & 2 & 1 \\ 1 & 3\lambda & 0 \\ 0 & 1 & 2\lambda \end{vmatrix}
= 12\lambda^3 - 4\lambda + 1 = 0 .
\]
The stationary values of this cubic satisfy $36\lambda^2 - 4 = 0$, from which it follows that the cubic has a minimum turning point at $(1/3,\,1/9)$ and a maximum turning point at $(-1/3,\,17/9)$. Since both turning values are positive, the cubic has exactly one real root, and it lies in the interval $(-1, -1/3)$. The remaining roots form a complex conjugate pair. The product of the three roots is $-1/12$, and since the modulus of the real root exceeds $1/3$, the product of the conjugate pair has modulus less than $1/4$; hence each member of the pair lies within a circle of radius $1/2$ about the origin. Therefore all eigenvalues lie within the unit circle, and so the Gauss-Jacobi algorithm based on the re-arranged matrix
\[
\begin{pmatrix} 2 & 2 & 1 \\ 1 & 3 & 0 \\ 0 & 1 & 2 \end{pmatrix}
\]
will converge.   You should show that the Gauss-Jacobi algorithm based on the original matrix diverges.
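The eigenvalue criterion is easy to confirm numerically. The sketch below (assuming numpy is available; the function name is ours) computes the spectral radius of the Gauss-Jacobi iteration matrix $-D^{-1}(L+U)$ for both orderings of the rows:

```python
import numpy as np

def jacobi_spectral_radius(A):
    # Gauss-Jacobi iteration matrix is -D^{-1}(L + U), where D is the
    # diagonal of A and L + U holds the off-diagonal entries.
    D = np.diag(np.diag(A))
    M = -np.linalg.solve(D, A - D)
    return max(abs(np.linalg.eigvals(M)))

A_original  = np.array([[1.0, 3.0, 0.0], [2.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
A_reordered = np.array([[2.0, 2.0, 1.0], [1.0, 3.0, 0.0], [0.0, 1.0, 2.0]])

print(jacobi_spectral_radius(A_original))    # greater than 1: Gauss-Jacobi diverges
print(jacobi_spectral_radius(A_reordered))   # less than 1: Gauss-Jacobi converges
```

The re-ordered matrix gives a spectral radius of roughly 0.68, the modulus of the real root of $12\lambda^3 - 4\lambda + 1 = 0$.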
3.2.4   Diagonal  dominance
An $n \times n$ matrix $A$ is said to be diagonally dominated if, in each row, the sum of the moduli of the off-diagonal elements is less than the modulus of the diagonal element. Let $V$ be a vector of dimension $n$; then the supremum norm of $V$ is defined by
\[
\| V \| = \max_{1 \le j \le n} | V_j | .
\]
In effect, this norm measures the distance between two vectors as the modulus of the largest difference between their paired elements. If a matrix is diagonally dominated, it can be shown that the Gauss-Jacobi and Gauss-Seidel schemes are guaranteed to converge in the supremum norm. To prove these results, it is convenient to define $E = -D^{-1}L$ and $F = -D^{-1}U$; then $E$ is a strictly lower triangular matrix and $F$ is a strictly upper triangular matrix.
Gauss-Jacobi
Suppose $A$ is diagonally dominated. Since $-D^{-1}(L+U) = E+F$, it is necessary to demonstrate that $\| D^{-1}(L+U)V \| = \| (E+F)V \| < \| V \|$ for any non-zero vector $V$. The definition of $D$, $L$ and $U$ from $A$ ensures that $D^{-1}A = I + D^{-1}L + D^{-1}U = I - (E+F)$. Therefore $E+F = I - D^{-1}A$, so that
\[
[(E+F)V]_i = \sum_{q=1}^{n} (E+F)_{iq}\, V_q = -\sum_{q=1,\;q\ne i}^{n} \frac{a_{iq}}{a_{ii}}\, V_q
\]
for all vectors $V$. Thus
\[
\bigl| [(E+F)V]_i \bigr|
\le \sum_{q=1,\;q\ne i}^{n} \Bigl| \frac{a_{iq}}{a_{ii}} \Bigr|\, | V_q |
< \max_{1 \le q \le n} | V_q | .
\]
The supremum norm is calculated by considering all values of $i$. Since $\| (E+F)V \| < \| V \|$, it follows immediately that $E+F$ is a contraction mapping if $A$ is a diagonally dominated matrix.
If diagonal dominance is weakened by allowing the sum of the absolute values of the off-diagonal elements in any row to equal the modulus of the diagonal element in that row, then it can no longer be asserted that $D^{-1}(L+U)$ is a contraction mapping, and the eigenvalues of $D^{-1}(L+U)$ must now be calculated.
Gauss-Seidel
Suppose $A$ is diagonally dominated. Observe first that
\[
(D+L)^{-1}U = [D(I + D^{-1}L)]^{-1}U = (I + D^{-1}L)^{-1} D^{-1}U = -(I-E)^{-1}F .
\]
It must be shown that $\| (D+L)^{-1}U V \| = \| (I-E)^{-1}F V \| < \| V \|$ for any non-zero vector $V$. Since $E$ is a strictly lower triangular $n \times n$ matrix, $E^n = 0$ and it now follows that
\[
(I-E)^{-1} = I + E + E^2 + \cdots + E^{n-1} .
\]
The individual elements of $(I-E)^{-1}$ may be compared to get
\[
| (I-E)^{-1} | = | I + E + E^2 + \cdots + E^{n-1} |
\le I + |E| + |E|^2 + \cdots + |E|^{n-1}
= (I - |E|)^{-1} ,
\]
where $|\cdot|$ denotes the entry-wise absolute value. Recall from the analysis of the Gauss-Jacobi algorithm that $(|E| + |F|)\,|V| \le |V|$. This inequality may be re-expressed in the form $|F|\,|V| \le (I - |E|)\,|V|$, which in turn gives $(I - |E|)^{-1} |F|\,|V| \le |V|$. Therefore, given any vector $V$, it follows that
\[
\| (I-E)^{-1} F V \| < \| V \| ,
\]
and so $(D+L)^{-1}U$ is a contraction mapping if $A$ is a diagonally dominated matrix.
Example   Investigate the convergence of the Gauss-Seidel algorithm for the coefficient matrices
\[
\text{(i)} \ \begin{pmatrix} 3 & 1 & 1 \\ 1 & 2 & 0 \\ 1 & -2 & 4 \end{pmatrix}
\qquad
\text{(ii)} \ \begin{pmatrix} 3 & 2 & 1 \\ 1 & 2 & 0 \\ 1 & -2 & 4 \end{pmatrix} .
\]
Solution   The first matrix exhibits strict diagonal dominance and so the Gauss-Seidel algorithm is guaranteed to converge. On the other hand, diagonal dominance is not strict for the second matrix and so it is necessary to compute the eigenvalues of $(D+L)^{-1}U$. Suppose $(D+L)^{-1}U V = \lambda V$; then $[\lambda(D+L) + U]V = 0$ and so the eigenvalues are solutions of
\[
\det[\lambda(D+L) + U] =
\begin{vmatrix} 3\lambda & 2 & 1 \\ \lambda & 2\lambda & 0 \\ \lambda & -2\lambda & 4\lambda \end{vmatrix}
= 12\lambda^2 (2\lambda - 1) = 0 .
\]
The eigenvalues are $\lambda = 0$ (twice) and $\lambda = 1/2$. Since all eigenvalues lie within the unit circle, the Gauss-Seidel algorithm converges on this occasion.
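The same kind of check can be automated. As a sketch (assuming numpy; the function name is ours), the Gauss-Seidel spectral radius can be computed for the re-ordered matrix of the earlier Gauss-Jacobi example; it turns out far smaller than the Gauss-Jacobi value of roughly 0.68, in keeping with the Stein-Rosenberg theorem:

```python
import numpy as np

def gauss_seidel_spectral_radius(A):
    # Gauss-Seidel iteration matrix is -(D + L)^{-1} U.
    DL = np.tril(A)            # D + L
    U = np.triu(A, 1)
    M = -np.linalg.solve(DL, U)
    return max(abs(np.linalg.eigvals(M)))

A = np.array([[2.0, 2.0, 1.0], [1.0, 3.0, 0.0], [0.0, 1.0, 2.0]])
print(gauss_seidel_spectral_radius(A))   # 0.25, versus roughly 0.68 for Gauss-Jacobi
```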
Example   Construct the finite difference scheme for the solution of the partial differential equation
\[
u_{xx} + u_{yy} + a(x,y)\,u_x + b(x,y)\,u_y + c(x,y)\,u = f
\]
with Dirichlet boundary conditions. Investigate conditions under which the resulting system of linear equations will be diagonally dominated.
Solution   The central difference representation of the differential equation is
\[
\frac{u_{i-1,j} + u_{i,j-1} - 4u_{i,j} + u_{i,j+1} + u_{i+1,j}}{h^2}
+ a_{i,j}\,\frac{u_{i+1,j} - u_{i-1,j}}{2h}
+ b_{i,j}\,\frac{u_{i,j+1} - u_{i,j-1}}{2h}
+ c_{i,j}\,u_{i,j} = f_{i,j} .
\]
This equation is multiplied by $2h^2$ and like terms collected together to obtain
\[
(2 - h a_{i,j})\,u_{i-1,j} + (2 - h b_{i,j})\,u_{i,j-1} - (8 - 2h^2 c_{i,j})\,u_{i,j}
+ (2 + h b_{i,j})\,u_{i,j+1} + (2 + h a_{i,j})\,u_{i+1,j} = 2h^2 f_{i,j} .
\]
Strict diagonal dominance requires that
\[
| 8 - 2h^2 c_{i,j} | > | 2 - h a_{i,j} | + | 2 - h b_{i,j} | + | 2 + h b_{i,j} | + | 2 + h a_{i,j} | .
\]
For suitably small $h$, this inequality becomes
\[
8 - 2h^2 c_{i,j} > (2 - h a_{i,j}) + (2 - h b_{i,j}) + (2 + h b_{i,j}) + (2 + h a_{i,j}) = 8
\quad\Longrightarrow\quad
c_{i,j} < 0 .
\]
Thus the finite difference representation of the partial differential equation will be strictly diagonally dominated provided $c(x,y) < 0$ in the region in which the solution is sought.
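The dominance condition can be tested numerically at a representative node. In the sketch below (the function name and the sample coefficient values are ours), the inequality holds when $c < 0$ and fails when $c > 0$:

```python
def stencil_is_dominated(a, b, c, h):
    """Check strict diagonal dominance of the stencil row
    (2 - h a) u_{i-1,j} + (2 - h b) u_{i,j-1} - (8 - 2 h^2 c) u_{i,j}
        + (2 + h b) u_{i,j+1} + (2 + h a) u_{i+1,j} = 2 h^2 f."""
    off_diagonal = abs(2 - h*a) + abs(2 - h*b) + abs(2 + h*b) + abs(2 + h*a)
    return abs(8 - 2*h*h*c) > off_diagonal

# c < 0 gives dominance for small h; c > 0 destroys it.
print(stencil_is_dominated(a=1.0, b=-2.0, c=-1.0, h=0.1))   # True
print(stencil_is_dominated(a=1.0, b=-2.0, c=+1.0, h=0.1))   # False
```

Note that for $h|a|, h|b| < 2$ the four off-diagonal moduli sum to exactly 8, so the sign of $c$ alone decides the outcome.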
Example   Construct the finite difference scheme for the solution of the partial differential equation
\[
u_{xx} + u_{yy} = -2 , \qquad (x,y) \in D , \qquad u = 0 \ \text{on} \ \partial D ,
\]
where $D = (-1,1) \times (-1,1)$. Iterate the finite difference scheme for chosen $h$, and use the exact solution
\[
u(x,y) = 1 - y^2 - \frac{32}{\pi^3} \sum_{k=0}^{\infty}
\frac{(-1)^k \cosh\bigl(\frac{(2k+1)\pi x}{2}\bigr) \cos\bigl(\frac{(2k+1)\pi y}{2}\bigr)}
{(2k+1)^3 \cosh\bigl(\frac{(2k+1)\pi}{2}\bigr)}
\]
for the boundary value problem to investigate the convergence of the finite difference scheme.
Solution   The finite difference scheme to solve $u_{xx} + u_{yy} = -2$ is
\[
u_{i-1,j} + u_{i,j-1} - 4u_{i,j} + u_{i,j+1} + u_{i+1,j} = -2h^2 ,
\]
which may be re-expressed in the form
\[
u_{i,j} = \frac{1}{4}\bigl( u_{i-1,j} + u_{i,j-1} + u_{i,j+1} + u_{i+1,j} \bigr) + \frac{h^2}{2}
\qquad (3.2)
\]
with $h = 2/n$, $x_i = -1 + ih$ and $y_j = -1 + jh$. Equation (3.2) may be used as the basis of the iterative algorithm
\[
u^{(k+1)}_{i,j} = \frac{1}{4}\bigl( u^{(k)}_{i-1,j} + u^{(k)}_{i,j-1} + u^{(k)}_{i,j+1} + u^{(k)}_{i+1,j} \bigr) + \frac{h^2}{2} ,
\qquad u^{(0)}_{i,j} = 0
\qquad (3.3)
\]
where the initial solution is taken to be $u = 0$, which satisfies the boundary condition. For example, when $n = 2$ then $u^{(1)}_{1,1} = u^{(2)}_{1,1} = \cdots = h^2/2$ and the iterative algorithm clearly terminates after two iterations. To control the number of Gauss-Seidel iterations, the root mean square error norm
\[
E_n = \left[ \frac{1}{(n+1)^2} \sum_{i,j=0}^{n} \bigl( u^{\text{new}}_{i,j} - u^{\text{old}}_{i,j} \bigr)^2 \right]^{1/2}
= \frac{1}{n+1} \left[ \sum_{i,j=0}^{n} \bigl( u^{\text{new}}_{i,j} - u^{\text{old}}_{i,j} \bigr)^2 \right]^{1/2}
\]
is used to terminate the Gauss-Seidel iteration whenever $E_n < 10^{-6}$. The result of this procedure, now measuring the numerical error against the true solution, is given in Table 3.1.
   n    ITER     E_n        E_{n-1}/E_n
   2       2    0.0893708       -
   4      19    0.0207940     4.298
   8      70    0.0048225     4.312
  16     245    0.0011738     4.109
  32     837    0.0003828     3.067
  64    2770    0.0004880     0.784
 128    8779    0.0016882     0.289

Table 3.1: Comparison of $L_2$ errors in the solution of $u_{xx} + u_{yy} = -2$ when using Gauss-Seidel iteration and the termination condition $E_n < 10^{-6}$.
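The computation behind Table 3.1 can be reproduced with a short routine. The following is a sketch (assuming numpy; function and variable names are ours) of the Gauss-Seidel sweep (3.3) with the root mean square stopping rule:

```python
import numpy as np

def gauss_seidel_poisson(n, tol=1e-6, max_iter=50000):
    """Solve u_xx + u_yy = -2 on (-1,1)^2 with u = 0 on the boundary
    by Gauss-Seidel sweeps of equation (3.3)."""
    h = 2.0 / n
    u = np.zeros((n + 1, n + 1))   # u[i, j] ~ u(-1 + i h, -1 + j h)
    for it in range(1, max_iter + 1):
        diff2 = 0.0
        for i in range(1, n):
            for j in range(1, n):
                new = 0.25 * (u[i-1, j] + u[i, j-1] + u[i, j+1] + u[i+1, j]) + 0.5 * h * h
                diff2 += (new - u[i, j]) ** 2
                u[i, j] = new            # in-place update: Gauss-Seidel
        if np.sqrt(diff2) / (n + 1) < tol:
            break
    return u, it

u, iters = gauss_seidel_poisson(8)
print(iters, u[4, 4])   # centre value; the exact u(0,0) is about 0.589
```

The exact iteration counts depend on the sweep ordering and may differ slightly from the table.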
Clearly the Gauss-Seidel algorithm is using many more iterations to produce an inferior result as the mesh is refined by increasing $n$. This is a counter-intuitive result. The difficulty lies in the stopping criterion. The use of an absolute termination condition can be problematic when an iterative procedure converges very slowly, as happens here. The usual way to ameliorate this hazard is to use a termination condition which becomes increasingly demanding as the number of iterations increases. One popular choice is to make the termination condition inversely proportional to the number of iterations performed. The same calculation repeated with the termination condition $E_n < 10^{-5}/\text{ITER}$, where ITER counts the number of iterations of the Gauss-Seidel algorithm, gives
   n    ITER     E_n        E_{n-1}/E_n    ITER/n^2
   2       2    0.0893708       -           0.500
   4      20    0.0207934     4.298         1.250
   8      83    0.0048172     4.316         1.297
  16     336    0.0011487     4.194         1.313
  32    1344    0.0002802     4.100         1.313
  64    5379    0.0000696     4.024         1.313
 128   21516    0.0000178     3.901         1.313

Table 3.2: Comparison of $L_2$ errors in the solution of $u_{xx} + u_{yy} = -2$ when using Gauss-Seidel iteration and the termination condition $E_n < 10^{-5}/\text{ITER}$.
The fact that ITER$/n^2$ is effectively constant is simply another manifestation of the fact that the numerical accuracy is proportional to $h^2$, that is, to $1/n^2$.
Multi-grid  procedure
The single grid procedure can be improved by using a multi-grid approach in which the mesh is progressively refined by halving $h$, or equivalently, doubling $n$. The solution for $n$ can be used as the starting configuration for an iteration based on $2n$. Values of the solution at the intermediate points introduced by the multi-grid approach can be obtained by linear interpolation of the existing grid points. Specifically,
\[
u_{2i+1,j} = \frac{u_{2i,j} + u_{2i+2,j}}{2} ,
\qquad
u_{i,2j+1} = \frac{u_{i,2j} + u_{i,2j+2}}{2} .
\]
Table 3.3 illustrates the use of this procedure for the problem under discussion.
   n    ITER     E_n        E_{n-1}/E_n    ITER/n^2
   2       2    0.0893708       -           0.500
   4      20    0.0207933     4.298         1.250
   8      80    0.0048171     4.317         1.250
  16     316    0.0011487     4.193         1.234
  32    1252    0.0002802     4.100         1.223
  64    4989    0.0000697     4.022         1.218
 128   19923    0.0000179     3.892         1.216

Table 3.3: Comparison of $L_2$ errors in the solution of $u_{xx} + u_{yy} = -2$ when using Gauss-Seidel iteration, a multi-grid procedure and the termination condition $|\text{Error}| < 10^{-5}/\text{ITER}$.
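The interpolation step can be written compactly. The sketch below (assuming numpy; the function name is ours) transfers a solution from an $(n+1)\times(n+1)$ grid to a $(2n+1)\times(2n+1)$ grid:

```python
import numpy as np

def prolong(u):
    """Transfer a solution on an (n+1)x(n+1) grid to a (2n+1)x(2n+1) grid,
    filling the new intermediate points by linear interpolation."""
    n = u.shape[0] - 1
    v = np.zeros((2 * n + 1, 2 * n + 1))
    v[::2, ::2] = u                                        # coincident points
    v[1::2, ::2] = 0.5 * (v[:-2:2, ::2] + v[2::2, ::2])    # u_{2i+1,j}
    v[:, 1::2] = 0.5 * (v[:, :-2:2] + v[:, 2::2])          # u_{i,2j+1}
    return v
```

Because the fill-in is linear interpolation, any function that is linear in $x$ and $y$ is transferred exactly.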
3.3   Non-rectangular  domains
The finite difference procedure can be used directly to solve the Dirichlet boundary value problem on irregular domains with boundary curves formed by connecting line segments oriented at right angles to each other, for example, an I-beam. Other non-rectilinear regions can also be treated easily by the finite difference procedure if their boundary curves are coordinate lines, for example, equations expressed in polar coordinates over the interior of a circle. Laplace's equation in polar coordinates is
\[
\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\,\frac{\partial u}{\partial r}
+ \frac{1}{r^2}\,\frac{\partial^2 u}{\partial \theta^2} = f(r,\theta) .
\qquad (3.4)
\]
This equation may be solved using the finite difference procedure by taking $r_i = ih$ and $\theta_j = jk$, in which $h = R/n$ and $k = 2\pi/m$. Each derivative appearing in equation (3.4) may be replaced by its finite difference representation to get
\[
\frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2}
+ \frac{1}{r_i}\,\frac{u_{i+1,j} - u_{i-1,j}}{2h}
+ \frac{1}{r_i^2}\,\frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{k^2} = f_{i,j}
\]
except at the origin. The finite difference equation contributed by the origin may be constructed by returning to the original Cartesian version of the Laplace molecule. This molecule can be used provided nodes of $\theta$ fall on the coordinate axes, that is, when $m$ is a multiple of 4.
Finally, domains that exhibit little or no regularity are most effectively treated using finite element methods.
Example   (May 2001) Consider the boundary value problem
\[
u_{xx} + u_{yy} = -2 , \qquad (x,y) \in D ,
\]
where $D$ is the interior of the rectangle with vertices at $(0,0)$, $(0,1)$, $(1/2,0)$ and $(1/2,1)$, and the solution is required to satisfy the boundary conditions
\[
u(x,0) = 1 , \quad u(x,1) = 0 , \quad u(0,y) = 1-y , \quad u_x(1/2,y) = u^2(1/2,y) .
\]
(i)   Construct a finite difference representation of this problem for general $h$ using the fictitious nodes procedure to describe the gradient boundary condition.
(ii)   Write out the specific equations to be solved in the case in which $h = 1/4$.
(iii)   Restructure these equations in a form suitable for iteration.
Solution
(i)   Take $x_i = ih$ and $y_j = jh$ where $h = 1/(2n)$. In this case $i = 0,1,\ldots,n$ and $j = 0,1,\ldots,2n$. The Laplace molecule gives
\[
u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} - 4u_{i,j} = -2h^2
\]
where $i = 1,\ldots,(n-1)$ and $j = 1,\ldots,(2n-1)$. The boundary conditions yield
\[
u_{0,j} = 1 - jh \quad (j = 0,\ldots,2n) , \qquad
u_{i,0} = 1 \quad (i = 0,\ldots,n) , \qquad
u_{i,2n} = 0 \quad (i = 0,\ldots,n) .
\]
The boundary condition on $x = 1/2$ requires more work. The fictitious nodes procedure gives
\[
\left.
\begin{aligned}
u_{n+1,j} - u_{n-1,j} &= 2h\,u_{n,j}^2 \\
u_{n-1,j} + u_{n+1,j} + u_{n,j-1} + u_{n,j+1} - 4u_{n,j} &= -2h^2
\end{aligned}
\;\right\}
\qquad j = 1,\ldots,2n-1 .
\]
The fictitious value $u_{n+1,j}$ is now eliminated between these equations to obtain
\[
2u_{n-1,j} + 2h\,u_{n,j}^2 + u_{n,j-1} + u_{n,j+1} - 4u_{n,j} = -2h^2 ,
\qquad j = 1,\ldots,2n-1 .
\]
This is the final form for the boundary condition on $x = 1/2$.
(ii)   Now take $h = 1/4$, that is, $n = 2$. Let $u_1,\ldots,u_6$ be the unknown values of $u$ at the interior and right-hand boundary nodes, namely
\[
u_1 = u(1/4, 1/4) , \quad u_2 = u(1/2, 1/4) , \quad
u_3 = u(1/4, 1/2) , \quad u_4 = u(1/2, 1/2) , \quad
u_5 = u(1/4, 3/4) , \quad u_6 = u(1/2, 3/4) ,
\]
with $u = 1$ on $y = 0$, $u = 0$ on $y = 1$, $u = 1-y$ on $x = 0$ and the gradient condition $u_x = u^2$ on $x = 1/2$.
The finite difference equations for $h = 1/4$ are therefore
\[
\begin{aligned}
u_2 + \tfrac{3}{4} + 1 + u_3 - 4u_1 &= -\tfrac{1}{8} \\
2u_1 + \tfrac{1}{2}u_2^2 + 1 + u_4 - 4u_2 &= -\tfrac{1}{8} \\
u_4 + \tfrac{1}{2} + u_1 + u_5 - 4u_3 &= -\tfrac{1}{8} \\
2u_3 + \tfrac{1}{2}u_4^2 + u_2 + u_6 - 4u_4 &= -\tfrac{1}{8} \\
u_6 + \tfrac{1}{4} + u_3 + 0 - 4u_5 &= -\tfrac{1}{8} \\
2u_5 + \tfrac{1}{2}u_6^2 + u_4 - 4u_6 &= -\tfrac{1}{8}
\end{aligned}
\qquad\Longrightarrow\qquad
\begin{aligned}
u_2 + u_3 - 4u_1 &= -\tfrac{15}{8} \\
4u_1 + u_2^2 + 2u_4 - 8u_2 &= -\tfrac{9}{4} \\
u_1 + u_4 + u_5 - 4u_3 &= -\tfrac{5}{8} \\
4u_3 + u_4^2 + 2u_2 + 2u_6 - 8u_4 &= -\tfrac{1}{4} \\
u_6 + u_3 - 4u_5 &= -\tfrac{3}{8} \\
4u_5 + u_6^2 + 2u_4 - 8u_6 &= -\tfrac{1}{4}
\end{aligned}
\]
(iii)   The $j^{\text{th}}$ equation is solved for $u_j$ to get the basic formulation of the numerical problem, which is subsequently rewritten in iterative form and leads to the Gauss-Seidel algorithm
\[
\begin{aligned}
u_1 &= \tfrac{1}{4}\bigl( u_2 + u_3 + \tfrac{15}{8} \bigr) \\
u_2 &= \frac{2}{8 - u_2}\bigl( 2u_1 + u_4 + \tfrac{9}{8} \bigr) \\
u_3 &= \tfrac{1}{4}\bigl( u_1 + u_4 + u_5 + \tfrac{5}{8} \bigr) \\
u_4 &= \frac{2}{8 - u_4}\bigl( u_2 + 2u_3 + u_6 + \tfrac{1}{8} \bigr) \\
u_5 &= \tfrac{1}{4}\bigl( u_3 + u_6 + \tfrac{3}{8} \bigr) \\
u_6 &= \frac{2}{8 - u_6}\bigl( u_4 + 2u_5 + \tfrac{1}{8} \bigr)
\end{aligned}
\qquad\Longrightarrow\qquad
\begin{aligned}
u_1^{(k+1)} &= \tfrac{1}{4}\bigl( u_2^{(k)} + u_3^{(k)} + \tfrac{15}{8} \bigr) \\
u_2^{(k+1)} &= \frac{2}{8 - u_2^{(k)}}\bigl( 2u_1^{(k+1)} + u_4^{(k)} + \tfrac{9}{8} \bigr) \\
u_3^{(k+1)} &= \tfrac{1}{4}\bigl( u_1^{(k+1)} + u_4^{(k)} + u_5^{(k)} + \tfrac{5}{8} \bigr) \\
u_4^{(k+1)} &= \frac{2}{8 - u_4^{(k)}}\bigl( u_2^{(k+1)} + 2u_3^{(k+1)} + u_6^{(k)} + \tfrac{1}{8} \bigr) \\
u_5^{(k+1)} &= \tfrac{1}{4}\bigl( u_3^{(k+1)} + u_6^{(k)} + \tfrac{3}{8} \bigr) \\
u_6^{(k+1)} &= \frac{2}{8 - u_6^{(k)}}\bigl( u_4^{(k+1)} + 2u_5^{(k+1)} + \tfrac{1}{8} \bigr)
\end{aligned}
\]
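A direct transcription of this Gauss-Seidel scheme into Python (variable names are ours) converges quickly from the zero start:

```python
# Gauss-Seidel iteration for the six nonlinear equations above.
u = [0.0] * 7        # u[1], ..., u[6]; u[0] unused
for _ in range(200):
    u[1] = (u[2] + u[3] + 15/8) / 4
    u[2] = 2 * (2*u[1] + u[4] + 9/8) / (8 - u[2])
    u[3] = (u[1] + u[4] + u[5] + 5/8) / 4
    u[4] = 2 * (u[2] + 2*u[3] + u[6] + 1/8) / (8 - u[4])
    u[5] = (u[3] + u[6] + 3/8) / 4
    u[6] = 2 * (u[4] + 2*u[5] + 1/8) / (8 - u[6])
print([round(v, 6) for v in u[1:]])
```

At convergence the six residuals of the original nonlinear equations vanish to machine precision.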
Exercises
Question 19.   Two less common molecules for the computation of $u_{xx} + u_{yy}$ are
\[
\text{(a)} \ \frac{1}{2h^2} \begin{pmatrix} 1 & 0 & 1 \\ 0 & -4 & 0 \\ 1 & 0 & 1 \end{pmatrix}
\qquad
\text{(b)} \ \frac{1}{6h^2} \begin{pmatrix} 1 & 4 & 1 \\ 4 & -20 & 4 \\ 1 & 4 & 1 \end{pmatrix} .
\]
Determine the order of the error term associated with each of these molecules. Does this change if the scheme is used to solve Laplace's equation rather than a more general partial differential equation of which the Laplacian is a component part?
Question 20.   Prove that Jacobi iteration is convergent but Gauss-Seidel iteration is divergent for the equations
\[
\begin{pmatrix} 1 & 2 & -4 \\ 1/8 & 1 & 1 \\ 1 & 4 & 1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= \begin{pmatrix} 1 \\ 3 \\ 7 \end{pmatrix} .
\]
Question 21.   The system of equations
\[
\begin{pmatrix} 3 & 2 & 1 \\ 2 & 3 & 2 \\ 1 & 2 & 3 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= \begin{pmatrix} 6 \\ 7 \\ 6 \end{pmatrix}
\]
is not diagonally dominant. Calculate the spectral radius of the iteration matrix for both the Gauss-Jacobi and the Gauss-Seidel algorithms. What can you say about the convergence of these procedures?
Question 22.   The coefficient matrix of a system of linear equations $AX = B$ is defined by
\[
A = \begin{pmatrix} 1 & c & 1 \\ c & 1 & c \\ c^2 & c & 1 \end{pmatrix} ,
\]
where $c$ is real and non-zero. Find the range of values for which Gauss-Seidel iteration converges. Will the Jacobi iteration converge when $c = 2$?
Question 23.   Prove that pivoting is unnecessary in the $LU$ factorisation of $A$ if $A^T$ is diagonally dominated. To establish this result you may find it useful to establish the block matrix identity
\[
\begin{pmatrix} \alpha & W^T \\ V & C \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ \alpha^{-1}V & I \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ 0 & C - \alpha^{-1} V W^T \end{pmatrix}
\begin{pmatrix} \alpha & W^T \\ 0 & I \end{pmatrix} .
\]
Question 24.   Let $A$ be a symmetric and positive definite $n \times n$ matrix.
(a)   Show that each diagonal entry of $A$ is positive.
(b)   Show that $H = -(D+L)^{-1}U$ satisfies the property
\[
D^{1/2} H D^{-1/2} = -(I + L_1)^{-1} L_1^T
\]
where $L_1$ is a strictly lower triangular matrix to be identified and $D$ and $L$ are defined from $A$ in the usual way.
(c)   Show further that
\[
D^{-1/2} A D^{-1/2} = I + L_1 + L_1^T .
\]
(d)   By considering the eigenvalues of $H$, show that the Gauss-Seidel algorithm converges for positive definite matrices.
Question 25.   The torsion of an elastic beam of square cross-section requires the solution of the boundary value problem
\[
\phi_{xx} + \phi_{yy} + 2 = 0 , \qquad (x,y) \in (-1,1) \times (-1,1) ,
\]
with $\phi = 0$ on the boundary of the square. First, write out the discretised scheme using a step length $h = 1/2$. Now use the symmetry of the problem to reduce the number of unknowns to three. Finally, solve the equations to show that $\phi(0,0) \approx 0.562$ [the exact solution is 0.589].
Question 26.   (May 1999) Consider the equation
\[
u_{xx} + u_{yy} = u^3 , \qquad (x,y) \in D
\]
where $D$ is the interior of the square with vertices $(-1,-1)$, $(1,-1)$, $(1,1)$ and $(-1,1)$, and $u$ is required to satisfy the boundary conditions
\[
u_x = 0 \ \text{on} \ x = \pm 1 , \qquad u = c \ \text{on} \ y = \pm 1 ,
\]
where $c$ is a constant. You may take advantage of symmetry and assume that the solution is an even function in $x$ and $y$. Use this information to discretise the partial differential equation and obtain equations for $u_1,\ldots,u_6$ as illustrated in the diagram.
The diagram shows the square $(-1,1) \times (-1,1)$ with $u = c$ on the top and bottom edges, $u_x = 0$ on the left and right edges, and two rows of three nodes carrying the unknowns $u_1$, $u_2$, $u_3$ and $u_4$, $u_5$, $u_6$ at which the solution is to be found.
You may again implement the gradient boundary conditions by either a backward difference approximation of appropriate order, or by introducing fictitious nodes beyond $x = 1$.
Having obtained the (nonlinear) equations for the unknowns $u_1,\ldots,u_6$, devise an iterative scheme to solve them. Ensure that your scheme is as robust as possible, that is, try to ensure that it will converge for any initial guess and for any choice of $c$.
Chapter  4
Parabolic  equations
4.1   Introduction
The canonical parabolic equation in one dimension is the diffusion equation
\[
u_t = u_{xx} , \qquad (x,t) \in (0,1) \times (0,\infty)
\]
with initial condition $u(x,0) = f(x)$ and boundary conditions
\[
u(0,t) = g_0(t) , \qquad u(1,t) = g_1(t) .
\]
By contrast with elliptic equations, where the same step length could often be used in each direction, this is impossible for a parabolic equation since time and space are different physical variables and therefore cannot be equated. Let $x_i = ih$ be a dissection of $[0,1]$ with $h = 1/n$; then the finite difference representation of the diffusion equation is
\[
\frac{du_i}{dt} = \frac{u_{i+1} - 2u_i + u_{i-1}}{h^2} ,
\qquad u_0 = g_0(t) , \quad u_n = g_1(t) .
\]
Thus the solution of the diffusion equation is constructed with respect to time by integrating a family of $(n-1)$ ordinary differential equations in which the $i^{\text{th}}$ unknown is the time course of $u(x_i,t)$. These equations are solved with the initial condition $u_i(0) = f(x_i)$. In order to proceed further, it is necessary to develop efficient procedures to integrate large families of ordinary differential equations.
4.1.1   Euler's method
Let $y(t)$ be the solution of the initial value problem
\[
\frac{dy}{dt} = f(y,t) , \qquad t > 0 , \qquad y(0) = y_0 .
\]
Let $t_j = jk$, where $k$ is the discretisation of time, and let $y_j = y(t_j)$; then Euler's method asserts that
\[
y_{j+1} = y_j + k\,f(y_j, t_j) .
\]
The scheme is derived from the Taylor series expansion
\[
y(t+k) = y(t) + k\,\frac{dy(t)}{dt} + \frac{k^2}{2}\,\frac{d^2 y(t)}{dt^2} + O(k^3)
\]
by replacing $dy/dt$ with $f(y,t)$ and ignoring subsequent terms. Euler's scheme is an explicit algorithm because $y(t+k)$ appears only on one side of the scheme. Let $u_{i,j} = u(x_i, t_j)$; then Euler's scheme applied to the diffusion equation gives
\[
u_{i,j+1} = u_{i,j} + r\,( u_{i+1,j} - 2u_{i,j} + u_{i-1,j} )
\]
where $r = k/h^2$ is called the Courant number. This finite difference scheme may be rewritten
\[
u_{i,j+1} - r\,u_{i+1,j} - (1-2r)\,u_{i,j} - r\,u_{i-1,j} = 0 ,
\]
which in turn leads to the computational molecule
\[
u_t - u_{xx} \;=\;
\begin{array}{ccc}
0 & 1 & 0 \\
-r & -(1-2r) & -r \\
0 & 0 & 0
\end{array}
\]
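The explicit Euler scheme takes only a few lines to implement. The sketch below (assuming numpy; the function name is ours) applies it to the model problem $u(x,0) = \sin(2\pi x)$, $u(0,t) = u(1,t) = 0$ treated in the example which follows:

```python
import numpy as np

def euler_diffusion(n, k, t_end):
    """Explicit Euler scheme u_{i,j+1} = u_{i,j} + r (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})
    for u_t = u_xx on (0,1) with u(x,0) = sin(2*pi*x) and u(0,t) = u(1,t) = 0."""
    h = 1.0 / n
    r = k / h**2
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.sin(2 * np.pi * x)
    for _ in range(int(round(t_end / k))):
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return x, u

x, u = euler_diffusion(n=8, k=1e-4, t_end=0.05)
exact = np.exp(-4 * np.pi**2 * 0.05) * np.sin(2 * np.pi * x)
print(abs(u - exact).max())
```

With $n = 8$ and $k = 10^{-4}$ the Courant number is well below $1/2$ and the scheme behaves as in the table below; raising $k$ to $10^{-2}$ makes it blow up.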
Example
Solve the diffusion equation $u_t = u_{xx}$ for $x \in (0,1)$ with initial condition $u(x,0) = \sin(2\pi x)$ and boundary conditions $u(0,t) = u(1,t) = 0$. The exact solution is $u(x,t) = e^{-4\pi^2 t} \sin(2\pi x)$.
Solution

                   k values for n = 8       k values for n = 16      k values for n = 32
   t   u(1/4,t)  10^-2   10^-3   10^-4    10^-2   10^-3   10^-4    10^-2   10^-3   10^-4
 0.01    0.67     0.63    0.68    0.69     0.56    0.62    0.63     0.59    0.66    0.66
 0.02    0.45     0.39    0.47    0.47     0.34    0.42    0.42     0.36    0.44    0.45
 0.03    0.31     0.24    0.32    0.32     0.21    0.28    0.29     0.22    0.29    0.30
 0.04    0.21     0.15    0.22    0.22     0.13    0.19    0.19     0.13    23.6    0.20
 0.05    0.14     0.10    0.15    0.15     0.08    0.13    0.13     0.08     -      0.14
 0.06    0.09     0.06    0.10    0.11     0.05    0.09    0.09     0.05     -      0.09
 0.07    0.06     0.04    0.07    0.07     0.03    0.06    0.06     0.03     -      0.06
 0.08    0.04     0.02    0.05    0.05     0.02    0.04    0.04     0.02     -      0.04
 0.09    0.03     0.01    0.03    0.03     0.01    0.03    0.03     0.01     -      0.03
 0.10    0.02     0.01    0.02    0.02     0.01    0.02    0.02     0.03     -      0.02
 0.11    0.01     0.01    0.01    0.02     0.00    0.01    0.01     1.54     -      0.01
 0.12    0.01     0.00    0.01    0.01     0.00    0.01    0.01     59.6     -      0.01
 0.13    0.01     0.00    0.01    0.01     0.00    0.01    0.01      -       -      0.01
 0.14    0.00     0.00    0.00    0.01     0.00    0.00    0.00      -       -      0.00
 0.15    0.00     0.00    0.00    0.00     0.00    0.00    0.00      -       -      0.00

Note how the efficacy of the method depends on the size of $r$: small $h$ entails an even smaller $k$.
4.1.2   Richardson's method
Richardson's method replaces the time derivative by its central difference formula
\[
\frac{du_i}{dt} = \frac{u_{i,j+1} - u_{i,j-1}}{2k} + O(k^2) .
\]
The result is the Richardson finite difference scheme
\[
u_{i,j+1} - u_{i,j-1} = \frac{2k}{h^2}\,( u_{i+1,j} - 2u_{i,j} + u_{i-1,j} )
= 2r\,( u_{i+1,j} - 2u_{i,j} + u_{i-1,j} )
\]
with computational molecule
\[
u_t - u_{xx} \;=\;
\begin{array}{ccc}
0 & 1 & 0 \\
-2r & 4r & -2r \\
0 & -1 & 0
\end{array}
\]
One difficulty with the Richardson scheme arises at $j = 0$, since the backward solution $u_{i,-1}$ does not exist. One way around this difficulty is to use the Euler scheme for the first step of the Richardson scheme. To investigate the effectiveness of the Richardson algorithm, the scheme is started with $j = 1$. The initial condition is used to obtain values of $u_{i,0}$ and the exact solution is used to get values for $u_{i,1}$. Values for $u_{i,j}$ with $j > 1$ are obtained by iterating the scheme. This arrangement overstates the effectiveness of the Richardson scheme since the solutions at $j = 0$ and $j = 1$ are exact.
                   k values for n = 8       k values for n = 16      k values for n = 32
   t   u(1/4,t)  10^-2   10^-3   10^-4    10^-2   10^-3   10^-4    10^-2   10^-3   10^-4
 0.01    0.67     0.49    0.66    0.68     0.44    0.60    0.62     0.46    0.64     -
 0.02    0.45     0.30    0.45    0.47     0.28    0.41    0.42     0.30    23.4     -
 0.03    0.31     0.27    0.31    0.32     0.22    0.28    0.29     0.23     -       -
 0.04    0.21     0.10    0.21    0.22     0.11    0.16    2.08     0.12     -       -
 0.05    0.14     0.19    0.14    0.15     0.13     -       -       0.13     -       -
 0.06    0.09     0.04    0.09    0.10     0.00     -       -       0.02     -       -
 0.07    0.06     0.22    0.06    0.07     0.13     -       -       0.12     -       -
 0.08    0.04     0.21    0.03    0.05     0.10     -       -       0.09     -       -
 0.09    0.03     0.38    0.00    0.03     0.21     -       -       1.14     -       -
 0.10    0.02     0.49    0.02    0.02     0.26     -       -       73.7     -       -
 0.11    0.01     0.75    0.05    0.01     0.40     -       -        -       -       -
 0.12    0.01     1.05    0.08    0.00     0.46     -       -        -       -       -
 0.13    0.01     1.53    0.12    0.01     1.62     -       -        -       -       -
 0.14    0.00     2.20    0.19    0.01     48.1     -       -        -       -       -
 0.15    0.00     3.18    0.34    0.02      -       -       -        -       -       -

Table 4.1: Calculations illustrating the use of Richardson's algorithm to solve $u_t = u_{xx}$ with initial condition $u(x,0) = \sin(2\pi x)$ and boundary conditions $u(0,t) = u(1,t) = 0$.
Table 4.1 illustrates the use of Richardson's algorithm to solve the initial boundary value problem discussed previously. Richardson's algorithm is unstable for the choices of $h$ and $k$ in this table. In fact, it will be shown later that the algorithm is unstable for all choices of $h$ and $k$. This result, however, is not obvious a priori, and is counter-intuitive in the respect that the treatment of the time derivative is more accurate than in the Euler scheme, and so one would expect the Richardson algorithm to be superior to the Euler algorithm.
4.1.3   Dufort-Frankel  method
The Dufort-Frankel method is a modification of the Richardson method in which $u_{i,j}$ is replaced by its time averaged value, that is,
\[
u_{i,j} = \frac{u_{i,j+1} + u_{i,j-1}}{2} + O(k^2) .
\]
With this modification of the Richardson algorithm, we get the Dufort-Frankel algorithm
\[
\frac{u_{i,j+1} - u_{i,j-1}}{2k}
= \frac{1}{h^2} \Bigl[ u_{i+1,j} - 2 \cdot \tfrac{1}{2}\,( u_{i,j+1} + u_{i,j-1} ) + u_{i-1,j} \Bigr] .
\]
Taking $r = k/h^2$, the Dufort-Frankel algorithm becomes
\[
(1+2r)\,u_{i,j+1} - (1-2r)\,u_{i,j-1} - 2r\,u_{i+1,j} - 2r\,u_{i-1,j} = 0
\]
with computational molecule
\[
u_t - u_{xx} \;=\;
\begin{array}{ccc}
0 & 1+2r & 0 \\
-2r & 0 & -2r \\
0 & -(1-2r) & 0
\end{array}
\]
Table 4.2 illustrates the result of using the Dufort-Frankel algorithm to solve the original partial differential equation by iterating
\[
u_{i,j+1} = \frac{1-2r}{1+2r}\,u_{i,j-1} + \frac{2r}{1+2r}\,u_{i+1,j} + \frac{2r}{1+2r}\,u_{i-1,j} ,
\]
taking $u_{i,0}$ from the initial condition and $u_{i,1}$ from the exact solution. The numerical scheme is clearly more stable, with no evidence of numerical blow-up.
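The Dufort-Frankel iteration can be sketched as follows (assuming numpy; names are ours), with $u_{i,1}$ taken from the exact solution as in the text:

```python
import numpy as np

def dufort_frankel(n, k, t_end):
    """Dufort-Frankel scheme for u_t = u_xx on (0,1) with u(x,0) = sin(2*pi*x)
    and u(0,t) = u(1,t) = 0; the first level u_{i,1} is taken from the exact
    solution, as in the text."""
    h = 1.0 / n
    r = k / h**2
    x = np.linspace(0.0, 1.0, n + 1)
    u_old = np.sin(2 * np.pi * x)                           # u_{i,0}
    u = np.exp(-4 * np.pi**2 * k) * np.sin(2 * np.pi * x)   # u_{i,1}
    for _ in range(int(round(t_end / k)) - 1):
        u_new = np.zeros_like(u)
        u_new[1:-1] = ((1 - 2*r) * u_old[1:-1] + 2*r * (u[2:] + u[:-2])) / (1 + 2*r)
        u_old, u = u, u_new
    return x, u

x, u = dufort_frankel(n=8, k=1e-3, t_end=0.05)
print(u[2])   # u(1/4, 0.05); the exact value is about 0.139
```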
                   k values for n = 8       k values for n = 16      k values for n = 32
   t   u(1/4,t)  10^-2   10^-3   10^-4    10^-2   10^-3   10^-4    10^-2   10^-3   10^-4
 0.01    0.67     0.41    0.66    0.68     0.34    0.60    0.62     0.35    0.63    0.66
 0.02    0.45     0.24    0.45    0.47     0.11    0.40    0.42     0.05    0.42    0.44
 0.03    0.31     0.14    0.31    0.32     0.06    0.27    0.29     0.22    0.28    0.30
 0.04    0.21     0.08    0.21    0.22     0.17    0.18    0.19     0.46    0.18    0.20
 0.05    0.14     0.05    0.15    0.15     0.22    0.12    0.13     0.66    0.12    0.14
 0.06    0.09     0.03    0.10    0.11     0.23    0.08    0.09     0.82    0.08    0.09
 0.07    0.06     0.02    0.07    0.07     0.20    0.06    0.06     0.93    0.05    0.06
 0.08    0.04     0.01    0.05    0.05     0.16    0.04    0.04     1.00    0.04    0.04
 0.09    0.03     0.01    0.03    0.03     0.11    0.03    0.03     1.02    0.02    0.03
 0.10    0.02     0.00    0.02    0.02     0.06    0.02    0.02     1.01    0.02    0.02
 0.11    0.01     0.00    0.02    0.02     0.02    0.01    0.01     0.96    0.01    0.01
 0.12    0.01     0.00    0.01    0.01     0.01    0.01    0.01     0.88    0.01    0.01
 0.13    0.01     0.00    0.01    0.01     0.03    0.01    0.01     0.77    0.00    0.01
 0.14    0.00     0.00    0.00    0.01     0.03    0.00    0.00     0.65    0.00    0.00
 0.15    0.00     0.00    0.00    0.00     0.04    0.00    0.00     0.51    0.00    0.00

Table 4.2: Calculations illustrating the use of the Dufort-Frankel algorithm to solve $u_t = u_{xx}$ with initial condition $u(x,0) = \sin(2\pi x)$ and boundary conditions $u(0,t) = u(1,t) = 0$.
However, it would appear that the process of increasing $n$ does not in itself ensure a more accurate solution. Spatial refinement cannot be achieved without an accompanying temporal refinement. Table 4.3 repeats some of these calculations with a wide range of spatial discretisations. The most important point to observe from this table is that the scheme appears to converge as $n$ increases, but the convergence is not to the true solution of the partial differential equation.
   t    u(1/4,t)   n = 32, k = 10^-2   n = 320, k = 10^-3   n = 3200, k = 10^-4
 0.01    0.6738         0.3477               0.5742                0.6021
 0.02    0.4540         0.0218               0.1873                0.2083
 0.03    0.3059         0.3038               0.1993                0.1853
 0.04    0.2062         0.6290               0.5854                0.5783
 0.05    0.1389         0.9537               0.9710                0.9708

Table 4.3: The exact and numerical values of $u(1/4,t)$ for the solution of the diffusion equation $u_t = u_{xx}$ with initial condition $u(x,0) = \sin(2\pi x)$ and boundary conditions $u(0,t) = u(1,t) = 0$.
This is the worst possible scenario. Without knowing the exact solution, one might (erroneously) take the result of this numerical calculation to be a reasonable approximation to the exact solution of the partial differential equation.
4.1.4   Crank-Nicolson  method
The Crank-Nicolson algorithm averages not only $u_{i,j}$ but also $u_{i+1,j}$ and $u_{i-1,j}$. Therefore the Crank-Nicolson formulation of the diffusion equation is
\[
\frac{u_{i,j+1} - u_{i,j-1}}{2k}
= \frac{1}{2h^2} \bigl[ u_{i+1,j+1} + u_{i+1,j-1} - 2( u_{i,j+1} + u_{i,j-1} ) + u_{i-1,j+1} + u_{i-1,j-1} \bigr] .
\]
This equation can be re-expressed in the form
\[
\frac{u_{i,(j-1)+2} - u_{i,(j-1)}}{2k}
= \frac{1}{2h^2} \bigl[ u_{i+1,(j-1)+2} + u_{i+1,(j-1)} - 2( u_{i,(j-1)+2} + u_{i,(j-1)} ) + u_{i-1,(j-1)+2} + u_{i-1,(j-1)} \bigr] .
\]
In particular, the value of the solution at time $t_j$ nowhere enters the algorithm, which in practice advances the solution from $t_{j-1}$ to $t_{j+1}$ through a time step of length $2k$. By reinterpreting $k$ to be $2k$, the Crank-Nicolson algorithm becomes
\[
\frac{u_{i,j+1} - u_{i,j}}{k}
= \frac{1}{2h^2} \bigl[ u_{i+1,j+1} + u_{i+1,j} - 2( u_{i,j+1} + u_{i,j} ) + u_{i-1,j+1} + u_{i-1,j} \bigr] .
\]
The Courant number $r = k/h^2$ is now introduced to obtain
\[
u_{i,j+1} - u_{i,j} = \frac{r}{2} \bigl[ u_{i+1,j+1} + u_{i+1,j} - 2( u_{i,j+1} + u_{i,j} ) + u_{i-1,j+1} + u_{i-1,j} \bigr] ,
\]
which may be re-arranged to give
\[
-\frac{r}{2}\,u_{i+1,j+1} + (1+r)\,u_{i,j+1} - \frac{r}{2}\,u_{i-1,j+1}
= \frac{r}{2}\,u_{i+1,j} + (1-r)\,u_{i,j} + \frac{r}{2}\,u_{i-1,j} .
\]
The computational molecule corresponding to the Crank-Nicolson algorithm is therefore
\[
u_t - u_{xx} \;=\;
\begin{array}{ccc}
-r/2 & 1+r & -r/2 \\
-r/2 & -(1-r) & -r/2 \\
0 & 0 & 0
\end{array}
\]
Let $U^{(j)} = ( u_{0,j}, u_{1,j}, \ldots, u_{n,j} )$ be the vector of $u$ values at time $t_j$; then the Crank-Nicolson algorithm for the solution of the diffusion equation $u_t = u_{xx}$ with Dirichlet boundary conditions may be expressed in the form
\[
T_L\, U^{(j+1)} = T_R\, U^{(j)} + F^{(j)}
\]
in which $T_L$ and $T_R$ are tri-diagonal matrices. The first and last rows of these matrices contain the boundary conditions, which may also arise in the expression for $F^{(j)}$. The initial solution is determined from the initial conditions, and subsequent solutions are obtained by solving a tri-diagonal system of equations. Table 4.4 gives the solution of the original problem using the Crank-Nicolson algorithm.
                   k values for n = 8       k values for n = 16      k values for n = 32
   t   u(1/4,t)  10^-2   10^-3   10^-4    10^-2   10^-3   10^-4    10^-2   10^-3   10^-4
 0.01    0.67     0.68    0.69    0.69     0.62    0.63    0.63     0.66    0.66    0.66
 0.02    0.45     0.47    0.47    0.47     0.42    0.42    0.42     0.44    0.45    0.45
 0.03    0.31     0.32    0.32    0.32     0.28    0.29    0.29     0.30    0.30    0.30
 0.04    0.21     0.22    0.22    0.22     0.19    0.19    0.19     0.20    0.20    0.20
 0.05    0.14     0.15    0.15    0.15     0.13    0.13    0.13     0.13    0.14    0.14
 0.06    0.09     0.10    0.11    0.11     0.09    0.09    0.09     0.09    0.09    0.09
 0.07    0.06     0.07    0.07    0.07     0.06    0.06    0.06     0.06    0.06    0.06
 0.08    0.04     0.05    0.05    0.05     0.04    0.04    0.04     0.04    0.04    0.04
 0.09    0.03     0.03    0.03    0.03     0.03    0.03    0.03     0.03    0.03    0.03
 0.10    0.02     0.02    0.02    0.02     0.02    0.02    0.02     0.02    0.02    0.02
 0.11    0.01     0.02    0.02    0.02     0.01    0.01    0.01     0.01    0.01    0.01
 0.12    0.01     0.01    0.01    0.01     0.01    0.01    0.01     0.01    0.01    0.01
 0.13    0.01     0.01    0.01    0.01     0.01    0.01    0.01     0.01    0.01    0.01
 0.14    0.00     0.00    0.01    0.01     0.00    0.00    0.00     0.00    0.00    0.00
 0.15    0.00     0.00    0.00    0.00     0.00    0.00    0.00     0.00    0.00    0.00

Table 4.4: The Crank-Nicolson algorithm is used to calculate the numerical solution of the diffusion equation $u_t = u_{xx}$ with initial condition $u(x,0) = \sin(2\pi x)$ and boundary conditions $u(0,t) = u(1,t) = 0$.
Table 4.5 illustrates the convergence of this solution as the spatial resolution is refined.

                  n = 32        n = 320       n = 3200
  t   u(1/4,t)    k = 10^-2     k = 10^-3     k = 10^-4
 0.01  0.67382545  0.67030550    0.67379098    0.67382511
 0.02  0.45404074  0.44930947    0.45399428    0.45404027
 0.03  0.30594421  0.30117461    0.30589725    0.30594374
 0.04  0.20615299  0.20187900    0.20611081    0.20615257
 0.05  0.13891113  0.13532060    0.13887560    0.13891078
 0.06  0.09360186  0.09070614    0.09357313    0.09360157
 0.09  0.02863695  0.02731839    0.02862376    0.02863681
 0.12  0.00876131  0.00822760    0.00875593    0.00876125
 0.15  0.00268047  0.00247795    0.00267842    0.00268045

Table 4.5: Calculations illustrating the convergence of the Crank-Nicolson algorithm for various combinations of parameters.
4.1.5   Summary
Several numerical schemes have been examined with varying degrees of success. The performance of each scheme is now summarised.

Euler: converges if $k$ is sufficiently small, but otherwise blows up.
Richardson: seems to be unstable for all choices of the Courant number $r$.
Dufort-Frankel: converges, but not necessarily to the correct solution!
Crank-Nicolson: seems to exhibit stable behaviour for all choices of the Courant number, and converges to the correct solution as the spatial resolution is improved.
4.2 Numerical consistency

The properties required by a numerical scheme to enable it to solve a partial differential equation can be summarised in terms of the consistency, stability and convergence of the scheme.

A scheme is said to be consistent if it solves the problem it purports to solve. In the context of the diffusion equation, the numerical scheme must be consistent with the partial differential equation $u_t = u_{xx}$. It would appear, for example, that the Dufort-Frankel algorithm is not consistent.

A numerical algorithm is said to be stable provided small errors in arithmetic remain bounded. For example, particular combinations of temporal and spatial discretisation may cause the scheme to blow up, as happens with the Euler scheme. It would appear that there are no combinations of $h$ and $k$ for which the Richardson scheme is stable. On the other hand, calculation would suggest that the Crank-Nicolson scheme is stable for all combinations of temporal and spatial discretisation.

A scheme converges if it is both consistent and stable as the temporal and spatial discretisation is reduced.
4.2.1 Consistency

The Dufort-Frankel and Crank-Nicolson algorithms are examined for consistency. In both cases, the analysis takes advantage of the Taylor series expansions

$$ u_{i,j\pm1} = u_{i,j} \pm k\,\frac{\partial u_{i,j}}{\partial t} + \frac{k^2}{2}\,\frac{\partial^2 u_{i,j}}{\partial t^2} + O(k^3)\,, $$

$$ u_{i\pm1,j} = u_{i,j} \pm h\,\frac{\partial u_{i,j}}{\partial x} + \frac{h^2}{2}\,\frac{\partial^2 u_{i,j}}{\partial x^2} \pm \frac{h^3}{6}\,\frac{\partial^3 u_{i,j}}{\partial x^3} + O(h^4)\,. $$
Dufort-Frankel
The Dufort-Frankel algorithm is

$$ \frac{u_{i,j+1} - u_{i,j-1}}{2k} = \frac{u_{i+1,j} - u_{i,j+1} - u_{i,j-1} + u_{i-1,j}}{h^2}\,. $$

Each term is now replaced by its Taylor series to obtain, after some algebra,

$$ \frac{\partial u_{i,j}}{\partial t} - \frac{\partial^2 u_{i,j}}{\partial x^2} = -\left(\frac{k}{h}\right)^{\!2}\frac{\partial^2 u_{i,j}}{\partial t^2} + O(h^2) + O(k^2) + O(k^3/h^2)\,. $$

Any implementation of the Dufort-Frankel algorithm in which $h$ and $k$ individually tend to zero with $k/h = c$, a constant, will solve $u_t = u_{xx} - c^2 u_{tt}$. Consequently the Dufort-Frankel algorithm is not consistent.
Crank-Nicolson
The Crank-Nicolson algorithm is

$$ \frac{u_{i,j+1} - u_{i,j}}{k} = \frac{1}{2h^2}\Big[ u_{i+1,j+1} + u_{i+1,j} - 2(u_{i,j+1} + u_{i,j}) + u_{i-1,j+1} + u_{i-1,j} \Big]\,. $$
As with the Dufort-Frankel algorithm, each term is now replaced by its Taylor series. The algebra is more complicated, so the individual components are listed as follows:

$$ \frac{u_{i,j+1} - u_{i,j}}{k} = \frac{\partial u_{i,j}}{\partial t} + \frac{k}{2}\,\frac{\partial^2 u_{i,j}}{\partial t^2} + O(k^2)\,, $$

$$ u_{i+1,j+1} - 2u_{i,j+1} + u_{i-1,j+1} = h^2\,\frac{\partial^2 u_{i,j+1}}{\partial x^2} + O(h^4)\,, $$

$$ u_{i+1,j} - 2u_{i,j} + u_{i-1,j} = h^2\,\frac{\partial^2 u_{i,j}}{\partial x^2} + O(h^4)\,. $$
When these components are inserted into the Crank-Nicolson algorithm, the result is

$$ \frac{\partial u_{i,j}}{\partial t} + \frac{k}{2}\,\frac{\partial^2 u_{i,j}}{\partial t^2} + O(k^2) = \frac{1}{2h^2}\Big[ h^2\,\frac{\partial^2 u_{i,j+1}}{\partial x^2} + h^2\,\frac{\partial^2 u_{i,j}}{\partial x^2} + O(h^4) \Big] $$
which in turn simplifies to

$$ \frac{\partial u_{i,j}}{\partial t} + \frac{k}{2}\,\frac{\partial^2 u_{i,j}}{\partial t^2} + O(k^2) = \frac{1}{2}\Big[ 2\,\frac{\partial^2 u_{i,j}}{\partial x^2} + k\,\frac{\partial^3 u_{i,j}}{\partial x^2\,\partial t} + O(k^2) + O(h^2) \Big]\,. $$
It is now straightforward algebra to observe that this equation can be rewritten in the form

$$ \frac{\partial u_{i,j}}{\partial t} - \frac{\partial^2 u_{i,j}}{\partial x^2} = \frac{k}{2}\Big[ \frac{\partial^3 u_{i,j}}{\partial x^2\,\partial t} - \frac{\partial^2 u_{i,j}}{\partial t^2} \Big] + O(k^2) + O(h^2)\,. $$

Thus the Crank-Nicolson algorithm is consistent as $h$ and $k$ tend to zero.
4.3 Numerical stability

Two methods, one including the effect of boundary conditions and the other excluding the effect of boundary conditions, are used to investigate stability. Both methods are attributed to John von Neumann. The approach which excludes the effect of boundary conditions is usually called the Fourier method, while that which includes the effect of boundary conditions is usually called the matrix method. In practice, it is normally assumed that the boundary conditions have a negligible impact on the stability of the numerical procedure.
4.3.1   Fourier  method
The primary observation in the Fourier method is that the numerical scheme is linear, and therefore it will have solutions of the form $u_{p,q} = \lambda^q e^{i\mu p h}$. The numerical scheme is stable provided $|\lambda| \le 1$ and unstable whenever $|\lambda| > 1$.
Euler  method
The Euler method uses the numerical scheme

$$ u_{p,q+1} = u_{p,q} + r\,(u_{p-1,q} - 2u_{p,q} + u_{p+1,q}) $$

and therefore

$$ \lambda^{q+1} e^{i\mu p h} = \lambda^{q} e^{i\mu p h} + r\big( \lambda^{q} e^{i\mu (p-1)h} - 2\lambda^{q} e^{i\mu p h} + \lambda^{q} e^{i\mu (p+1)h} \big)\,. $$

This equation simplifies immediately to

$$ \lambda = 1 + r\big( e^{-i\mu h} - 2 + e^{i\mu h} \big) = 1 + 2r\,(\cos\mu h - 1) = 1 - 4r\sin^2(\mu h/2)\,. $$

Clearly $\lambda$ is real-valued and satisfies $\lambda \le 1$. Therefore Euler's method is stable provided $\lambda > -1$, which in turn requires that

$$ 1 - 4r\sin^2(\mu h/2) > -1 \quad\Longleftrightarrow\quad \frac{1}{2\sin^2(\mu h/2)} > r $$

for all choices of $\mu$. This condition can be satisfied only provided $r < 1/2$, and therefore the numerical stability of the Euler method requires that $r < 1/2$.
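The bound just derived is easy to probe numerically. The sketch below (numpy assumed; not part of the text) sweeps the amplification factor over a range of Fourier modes.

```python
# Amplification factor lambda = 1 - 4 r sin^2(mu h / 2) of the Euler scheme:
# every mode satisfies |lambda| <= 1 when r <= 1/2, while some mode has
# |lambda| > 1 as soon as r exceeds 1/2.
import numpy as np

def euler_amplification(r, n_modes=200):
    theta = np.linspace(0.0, np.pi, n_modes)   # theta plays the role of mu*h/2
    return 1.0 - 4.0 * r * np.sin(theta) ** 2

stable = np.max(np.abs(euler_amplification(0.5)))
unstable = np.max(np.abs(euler_amplification(0.6)))
print(stable, unstable)
```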
Richardson  method
The Richardson method uses the numerical scheme

$$ u_{p,q+1} = u_{p,q-1} + 2r\,(u_{p+1,q} - 2u_{p,q} + u_{p-1,q}) $$

and therefore

$$ \lambda^{q+1} e^{i\mu p h} = \lambda^{q-1} e^{i\mu p h} + 2r\big( \lambda^{q} e^{i\mu (p+1)h} - 2\lambda^{q} e^{i\mu p h} + \lambda^{q} e^{i\mu (p-1)h} \big)\,. $$

This equation simplifies immediately to

$$ \lambda = \lambda^{-1} + 2r\big( e^{i\mu h} - 2 + e^{-i\mu h} \big) = \lambda^{-1} + 4r\,(\cos\mu h - 1) = \lambda^{-1} - 8r\sin^2(\mu h/2)\,. $$

In conclusion, $\lambda$ is a solution of the quadratic equation

$$ \lambda^2 + 8r\sin^2(\mu h/2)\,\lambda - 1 = 0 $$

with solutions

$$ \lambda = -4r\sin^2(\mu h/2) \pm \sqrt{\,1 + 16r^2\sin^4(\mu h/2)\,}\,. $$

Clearly both solutions are real, but one solution satisfies $|\lambda| > 1$. Therefore Richardson's method is never stable (as was observed in the numerical example). The oscillatory nature of blow-up in Richardson's algorithm is due to the fact that instability ensues through a negative value of $\lambda$.
Dufort-Frankel  method
The Dufort-Frankel method uses the numerical scheme

$$ u_{p,q+1} = u_{p,q-1} + 2r\,(u_{p+1,q} - u_{p,q+1} - u_{p,q-1} + u_{p-1,q}) $$

and therefore

$$ \lambda^{q+1} e^{i\mu p h} = \lambda^{q-1} e^{i\mu p h} + 2r\big( \lambda^{q} e^{i\mu (p+1)h} - \lambda^{q+1} e^{i\mu p h} - \lambda^{q-1} e^{i\mu p h} + \lambda^{q} e^{i\mu (p-1)h} \big)\,. $$

This equation simplifies immediately to

$$ \lambda = \lambda^{-1} + 2r\,( e^{i\mu h} - \lambda - \lambda^{-1} + e^{-i\mu h} ) = -2r\lambda + (1-2r)\,\lambda^{-1} + 4r\cos(\mu h)\,. $$

In conclusion, $\lambda$ is a solution of the quadratic equation

$$ (1+2r)\,\lambda^2 - 4r\cos\mu h\,\lambda - (1-2r) = 0 $$

with solutions

$$ \lambda = \frac{2r\cos\mu h \pm \sqrt{\,1 - 4r^2\sin^2\mu h\,}}{1+2r}\,. $$

The roots of this quadratic are either both real or form a complex conjugate pair. If the roots are real, the triangle inequality gives

$$ |\lambda| = \frac{\big|\,2r\cos\mu h \pm \sqrt{1 - 4r^2\sin^2\mu h}\,\big|}{1+2r} \le \frac{|\,2r\cos\mu h\,| + \sqrt{1 - 4r^2\sin^2\mu h}}{1+2r} \le \frac{2r+1}{1+2r} = 1\,. $$

On the other hand, if the roots are complex then $2r > 1$ and

$$ |\lambda|^2 = \frac{4r^2\cos^2\mu h + 4r^2\sin^2\mu h - 1}{(1+2r)^2} = \frac{4r^2 - 1}{(1+2r)^2} = \frac{2r-1}{2r+1} \le 1\,. $$

Either way, the Dufort-Frankel scheme is stable. This means that it will not blow up due to the presence of rounding error, although it has already been shown to be inconsistent. It is now clear why this scheme converges, but not necessarily to the desired solution.
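The two cases above can be cross-checked by solving the quadratic for a sample of Courant numbers and modes; a small sketch assuming numpy:

```python
# The Dufort-Frankel amplification factors are the roots of
# (1+2r) L^2 - 4 r cos(mu h) L - (1-2r) = 0; their moduli never exceed 1,
# whatever the Courant number r.
import numpy as np

max_mod = 0.0
for r in (0.1, 0.5, 2.0, 10.0):
    for angle in np.linspace(0.0, np.pi, 181):          # angle = mu*h
        roots = np.roots([1 + 2 * r, -4 * r * np.cos(angle), -(1 - 2 * r)])
        max_mod = max(max_mod, float(np.max(np.abs(roots))))
print(max_mod)
```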
68   CHAPTER  4.   PARABOLIC  EQUATIONS
Crank-Nicolson  method
The Crank-Nicolson method uses the numerical scheme

$$ u_{p,q+1} = u_{p,q} + \frac{r}{2}\Big[ u_{p+1,q+1} + u_{p+1,q} - 2(u_{p,q+1} + u_{p,q}) + u_{p-1,q+1} + u_{p-1,q} \Big] $$

and therefore

$$ \lambda^{q+1} e^{i\mu p h} = \lambda^{q} e^{i\mu p h} + \frac{r}{2}\Big[ \lambda^{q+1} e^{i\mu (p+1)h} + \lambda^{q} e^{i\mu (p+1)h} - 2\big( \lambda^{q+1} e^{i\mu p h} + \lambda^{q} e^{i\mu p h} \big) + \lambda^{q+1} e^{i\mu (p-1)h} + \lambda^{q} e^{i\mu (p-1)h} \Big]\,. $$

This equation simplifies immediately to

$$ \lambda = 1 + \frac{r}{2}\Big[ \lambda e^{i\mu h} + e^{i\mu h} - 2(\lambda + 1) + \lambda e^{-i\mu h} + e^{-i\mu h} \Big]\,. $$

Further simplification gives

$$ \lambda = 1 + r\,( \lambda\cos\mu h + \cos\mu h - \lambda - 1 ) \qquad\Longrightarrow\qquad \lambda = \frac{1 - 2r\sin^2(\mu h/2)}{1 + 2r\sin^2(\mu h/2)}\,. $$

Irrespective of the choice of $k$ or $h$, it is clear that $|\lambda| \le 1$, and therefore the Crank-Nicolson scheme is stable. This means that it will not blow up due to the presence of rounding error.
4.3.2   Matrix  method
The matrix method relies on Gershgorin's circle theorem regarding the location of matrix eigenvalues. The theorem asserts that if $A$ is an $n \times n$ matrix and $C_1, \ldots, C_n$ are the circles defined by

$$ C_k = \Big\{\, X \in \mathbb{C} \;:\; |X - A_{kk}| \le \sum_{j=1,\,j\ne k}^{n} |A_{kj}| \,\Big\} $$

then all the eigenvalues of $A$ are contained within the region $D = C_1 \cup C_2 \cup \cdots \cup C_n$. In particular, if $m$ ($m \le n$) of these circles form a subregion of $D$ which does not intersect the remaining $(n-m)$ circles, then precisely $m$ eigenvalues are contained within that subregion.
Example
Use Gershgorin's theorem to locate the eigenvalues of the matrix

$$ A = \begin{bmatrix} 9 & 1 & 1 \\ 0 & 1 & 1 \\ 2 & 4 & 0 \end{bmatrix}. $$

Solution

The Gershgorin circles are

$$ C_1 = \{ Z \in \mathbb{C} : |Z - 9| \le 2 \}\,, \quad C_2 = \{ Z \in \mathbb{C} : |Z - 1| \le 1 \}\,, \quad C_3 = \{ Z \in \mathbb{C} : |Z| \le 6 \}\,. $$

Thus one eigenvalue lies in $|Z - 9| \le 2$ and two eigenvalues lie in $|Z| \le 6$.
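A quick numerical cross-check of the worked example (numpy assumed; the matrix entries are taken exactly as printed above):

```python
# Every eigenvalue of A must lie in the union of the three Gershgorin circles
# |Z - 9| <= 2, |Z - 1| <= 1 and |Z| <= 6 read off from the rows of A.
import numpy as np

A = np.array([[9.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 4.0, 0.0]])
eigs = np.linalg.eigvals(A)
circles = [(9.0, 2.0), (1.0, 1.0), (0.0, 6.0)]   # (centre, radius) per row
in_union = [any(abs(z - c) <= rad + 1e-9 for c, rad in circles) for z in eigs]
print(eigs, all(in_union))
```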
Euler  method
The Euler method is the algorithm

$$ u_{p,q+1} = u_{p,q} + r\,(u_{p+1,q} - 2u_{p,q} + u_{p-1,q}) = (1-2r)\,u_{p,q} + r\,u_{p+1,q} + r\,u_{p-1,q} $$

where the values of $u_{0,q}$ and $u_{n,q}$ are given by the boundary conditions in the Dirichlet problem. The aim of the algorithm is to compute $u_{1,q}, \ldots, u_{n-1,q}$ for all values of $q$. Let $U^{(q)} = (u_{1,q}, \ldots, u_{n-1,q})^T$; then Euler's algorithm corresponds to the matrix operation

$$ U^{(q+1)} = T\,U^{(q)} + r\,F^{(q)} \qquad (4.1) $$

where the $(n-1) \times (n-1)$ tri-diagonal matrix $T$ and the $(n-1)$-dimensional vector $F^{(q)}$ are

$$ T = \begin{bmatrix} 1-2r & r & 0 & \cdots & 0 \\ r & 1-2r & r & & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & & r & 1-2r & r \\ 0 & \cdots & 0 & r & 1-2r \end{bmatrix}, \qquad F^{(q)} = \begin{bmatrix} u_{0,q} \\ 0 \\ \vdots \\ 0 \\ u_{n,q} \end{bmatrix}. $$
Thus $U^{(1)} = T\,U^{(0)} + r\,F^{(0)}$ and

$$ U^{(2)} = T\,U^{(1)} + r\,F^{(1)} = T\,\big( T\,U^{(0)} + r\,F^{(0)} \big) + r\,F^{(1)} = T^2\,U^{(0)} + r\,T\,F^{(0)} + r\,F^{(1)}\,. $$

These calculations suggest that equation (4.1) has general solution

$$ U^{(q+1)} = T^{q+1}\,U^{(0)} + r \sum_{j=0}^{q} T^{j}\,F^{(q-j)}\,. $$
This result can be established by induction. Assuming that the boundary conditions are bounded functions of time, the solution $U^{(q+1)}$ remains bounded only provided $T^{q+1} U^{(0)}$ does not blow up. Since $T$ is symmetric, it has a complete set of (real) eigenvalues $\lambda_1, \ldots, \lambda_{n-1}$ with corresponding normalised eigenvectors $E_1, \ldots, E_{n-1}$. Let $U^{(0)} = c_1 E_1 + \cdots + c_{n-1} E_{n-1}$; then

$$ T^{q+1}\,U^{(0)} = \sum_{j=1}^{n-1} c_j\,\lambda_j^{\,q+1}\,E_j\,, $$

which will remain bounded as $t \to \infty$ only provided each eigenvalue of $T$ lies within the unit circle. On the unit circle is not good enough in the presence of non-zero boundary information. Gershgorin's circle theorem asserts that all the eigenvalues of $T$ (which are known to be real) lie within the region

$$ D = \{ z \in \mathbb{C} : |z - (1-2r)| \le r \} \cup \{ z \in \mathbb{C} : |z - (1-2r)| \le 2r \} = \{ z \in \mathbb{C} : |z - (1-2r)| \le 2r \}\,. $$

This is a circle with centre $(1-2r, 0)$ and radius $2r$. The extremities of the circle on the $x$-axis are the points $(1-4r, 0)$ and $(1, 0)$. Stability is therefore guaranteed in the Euler algorithm provided $1 - 4r > -1$, that is, $r < 1/2$.
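The Gershgorin bound for $T$ is easily confirmed by direct computation of its spectrum; a sketch assuming numpy:

```python
# Spectral radius of the (n-1)x(n-1) Euler matrix T: every eigenvalue lies in
# the circle of centre 1-2r and radius 2r, so the iteration remains bounded
# for r < 1/2 and an eigenvalue leaves the unit interval once r > 1/2.
import numpy as np

def spectral_radius(n, r):
    m = n - 1
    T = (np.diag(np.full(m, 1 - 2 * r))
         + np.diag(np.full(m - 1, r), 1)
         + np.diag(np.full(m - 1, r), -1))
    return np.max(np.abs(np.linalg.eigvalsh(T)))   # T is symmetric

rho_stable, rho_unstable = spectral_radius(32, 0.4), spectral_radius(32, 0.6)
print(rho_stable, rho_unstable)
```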
4.3.3   Gradient  boundary  conditions
Consider now the diffusion equation

$$ u_t = u_{xx}\,, \qquad (t,x) \in (0,\infty) \times (0,1) $$

with initial condition $u(x,0) = f(x)$ and the gradient conditions $u_x(0,t) = u_x(1,t) = 0$. As previously, $u_t$ is replaced by its forward difference approximation and $u_{xx}$ by its central difference approximation to obtain

$$ u_{i,j+1} = u_{i,j} + r\,( u_{i+1,j} - 2u_{i,j} + u_{i-1,j} ) + O(k^2, kh^2)\,, \qquad i = 1, \ldots, (n-1) $$

in which $r$ is the Courant number. The boundary conditions, when expressed in terms of forward/backward differences of $u$, give

$$ \frac{-3u_{0,j} + 4u_{1,j} - u_{2,j}}{2h} = O(h^2)\,, \qquad \frac{3u_{n,j} - 4u_{n-1,j} + u_{n-2,j}}{2h} = O(h^2)\,. $$
The values of the solution at time $j+1$ can be computed from the solution at time $j$ using the results

$$ \begin{aligned} u_{1,j+1} &= (1-2r)\,u_{1,j} + r\,u_{2,j} + r\,u_{0,j} + O(kh^2, k^2) \\ u_{2,j+1} &= (1-2r)\,u_{2,j} + r\,u_{3,j} + r\,u_{1,j} + O(kh^2, k^2) \\ u_{n-1,j+1} &= (1-2r)\,u_{n-1,j} + r\,u_{n,j} + r\,u_{n-2,j} + O(kh^2, k^2) \\ u_{n-2,j+1} &= (1-2r)\,u_{n-2,j} + r\,u_{n-1,j} + r\,u_{n-3,j} + O(kh^2, k^2)\,. \end{aligned} $$
These results are now used to compute $u_{0,j+1}$ and $u_{n,j+1}$ as follows:

$$ \begin{aligned} u_{0,j+1} &= \tfrac{1}{3}\big( 4u_{1,j+1} - u_{2,j+1} \big) + O(h^3) \\ &= \tfrac{1}{3}\big[ (4-9r)\,u_{1,j} + (6r-1)\,u_{2,j} + 4r\,u_{0,j} - r\,u_{3,j} \big] + O(kh^2, k^2, h^3) \\[4pt] u_{n,j+1} &= \tfrac{1}{3}\big( 4u_{n-1,j+1} - u_{n-2,j+1} \big) + O(h^3) \\ &= \tfrac{1}{3}\big[ (4-9r)\,u_{n-1,j} + 4r\,u_{n,j} + (6r-1)\,u_{n-2,j} - r\,u_{n-3,j} \big] + O(kh^2, k^2, h^3)\,. \end{aligned} $$
Now let $U^{(j)} = (u_{0,j}, \ldots, u_{n,j})^T$; then the Euler scheme for the determination of the solution becomes

$$ U^{(j+1)} = A\,U^{(j)} $$
where $A$ is the $(n+1) \times (n+1)$ matrix

$$ A = \begin{bmatrix} \dfrac{4r}{3} & \dfrac{4-9r}{3} & \dfrac{6r-1}{3} & -\dfrac{r}{3} & 0 & \cdots & 0 \\ r & 1-2r & r & 0 & \cdots & & 0 \\ 0 & r & 1-2r & r & 0 & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & \\ 0 & \cdots & 0 & -\dfrac{r}{3} & \dfrac{6r-1}{3} & \dfrac{4-9r}{3} & \dfrac{4r}{3} \end{bmatrix}. $$
Clearly $A$ is neither symmetric nor tri-diagonal. Nevertheless, Gershgorin's circle theorem indicates that all the eigenvalues of $A$ are located within the region

$$ D = \{ z \in \mathbb{C} : |z - (1-2r)| \le 2r \} \cup \{ z \in \mathbb{C} : |3z - 4r| \le r + |4-9r| + |6r-1| \}\,. $$

As previously, the circle with centre $(1-2r, 0)$ lies within the unit circle provided $r < 1/2$. However, the analysis of the second circle requires the treatment of several cases. The boundary values are $r = 4/9$ and $r = 1/6$.

Case 1. Suppose $0 < r < 1/6$; then $0 < 9r < 3/2 < 4$. The second circle is

$$ |3z - 4r| \le r + 4 - 9r + 1 - 6r = 5 - 14r\,. $$

The line segment of the $x$-axis lying within this circle is $-5 + 14r \le 3x - 4r \le 5 - 14r$, which becomes after simplification $-5/3 + 6r \le x \le 5/3 - 10r/3$. This line segment lies within the unit circle provided $5/3 - 10r/3 < 1$ and $-1 < -5/3 + 6r$, that is, provided $1/5 < r$ and $1/9 < r$. But $r < 1/6$ by assumption, and so these conditions cannot be satisfied.

Case 2. When $1/6 < r < 4/9$, the second circle has equation

$$ |3z - 4r| \le r + 4 - 9r + 6r - 1 = 3 - 2r\,. $$

The line segment of the $x$-axis lying within this circle is $-3 + 2r \le 3x - 4r \le 3 - 2r$, which becomes after simplification $-1 + 2r \le x \le 1 + 2r/3$. Clearly $1 + 2r/3 > 1$ for $r > 0$, and so this line segment never lies within the unit circle.

Case 3. When $4/9 < r < 1/2$, the second circle has equation

$$ |3z - 4r| \le r + 9r - 4 + 6r - 1 = 16r - 5\,. $$

The line segment of the $x$-axis lying within this circle is $-16r + 5 \le 3x - 4r \le 16r - 5$, which becomes after simplification $5/3 - 4r \le x \le 20r/3 - 5/3$. This line segment lies within the unit circle provided $20r/3 - 5/3 < 1$ and $-1 < 5/3 - 4r$, that is, provided $r < 2/5$ and $r < 2/3$. But $r > 4/9$, and so again these conditions cannot be satisfied.
In conclusion, Gershgorin's theorem on this occasion provides no useful information regarding the stability of the algorithm. In fact, the issue of stability is of secondary importance in this problem to the issue of accuracy. A detailed analysis of the error structure in this Neumann problem indicates that the numerical scheme no longer has the expected $O(kh^2, k^2)$ accuracy of the Dirichlet problem. The error at each iteration is dominated by $O(h^3)$, which behaves like $O(kh)$ assuming that $k = rh^2$ with $r = O(1)$. The treatment of the Neumann boundary value problem for the diffusion equation is non-trivial and is not pursued further here.
4.4 Numerical convergence - The Lax equivalence theorem

Given a properly posed linear initial boundary value problem and a linear finite difference approximation to it that is consistent and stable, the scheme is convergent.

The problem is properly posed if

(i) the solution is unique when it exists;
(ii) the solution depends continuously on the initial data;
(iii) a solution always exists for initial data arbitrarily close to initial data for which a solution does not exist.
Example. Let $e_{i,j} = u_{i,j} - \bar u(x_i, t_j)$, where $\bar u(x,t)$ is the exact solution at $(x,t)$, and write $\bar u_{i,j} = \bar u(x_i, t_j)$. Euler's algorithm

$$ u_{i,j+1} = u_{i,j} + r\,(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) $$

can be re-expressed in the form

$$ \bar u_{i,j+1} + e_{i,j+1} = \bar u_{i,j} + e_{i,j} + r\,( \bar u_{i+1,j} - 2\bar u_{i,j} + \bar u_{i-1,j} ) + r\,( e_{i+1,j} - 2e_{i,j} + e_{i-1,j} )\,. $$
Taylor's theorem may now be used to assert that

$$ \begin{aligned} \bar u_{i,j+1} &= \bar u + k\,\bar u_t + \frac{k^2}{2}\,\bar u_{tt} + O(k^3) \\ \bar u_{i\pm1,j} &= \bar u \pm h\,\bar u_x + \frac{h^2}{2}\,\bar u_{xx} \pm \frac{h^3}{6}\,\bar u_{xxx} + \frac{h^4}{24}\,\bar u_{xxxx} + O(h^5) \end{aligned} $$

where $\bar u = \bar u(x_i, t_j)$. These formulae are now inserted into the numerical scheme to obtain
$$ k\,\bar u_t + \frac{k^2}{2}\,\bar u_{tt} + O(k^3) + e_{i,j+1} = e_{i,j} + r\Big[ h^2\,\bar u_{xx} + \frac{h^4}{12}\,\bar u_{xxxx} + O(h^6) \Big] + r\,( e_{i+1,j} - 2e_{i,j} + e_{i-1,j} ) $$

after all elementary simplifications have been done. The Courant number in the second term on the right-hand side of this equation is now replaced by its definition to get
$$ e_{i,j+1} = (1-2r)\,e_{i,j} + k\,( \bar u_{xx} - \bar u_t ) + \frac{kh^2}{12}\,\bar u_{xxxx} - \frac{k^2}{2}\,\bar u_{tt} + r\,e_{i+1,j} + r\,e_{i-1,j} + O(k^3, kh^4)\,. $$
Since $\bar u$ satisfies $\bar u_t = \bar u_{xx}$, it also follows that $\bar u_{tt} = \bar u_{xxxx}$. With these further simplifications, it is clear that

$$ e_{i,j+1} = (1-2r)\,e_{i,j} + \frac{kh^2}{12}\,(1-6r)\,\bar u_{tt} + r\,e_{i+1,j} + r\,e_{i-1,j} + O(k^3, kh^4)\,. $$
Now define $E_j$ to be the maximum spatial discretisation error at time $t_j$, and let $M$ be the maximum value of $|\bar u_{tt}|$ across space and time; then the triangle inequality yields initially

$$ |e_{i,j+1}| \le |1-2r|\,|e_{i,j}| + \frac{kh^2}{12}\,|1-6r|\,M + r\,|e_{i+1,j}| + r\,|e_{i-1,j}| + O(k^3, kh^4)\,, $$

which in turn indicates that $E_j$ satisfies

$$ E_{j+1} \le \big( 2r + |1-2r| \big)\,E_j + \frac{kh^2}{12}\,|1-6r|\,M + O(k^3, kh^4)\,. $$
Assuming that $r \le 1/2$, then $|1-2r| = 1-2r$ and it is now obvious that

$$ E_{j+1} \le E_j + \frac{kh^2}{12}\,|1-6r|\,M + O(k^3, kh^4) $$

with solution

$$ E_j \le E_0 + \frac{jkh^2}{12}\,|1-6r|\,M + jk\,O(k^2, h^4)\,. $$

Since $jk = T$, the current time interval covered by the calculation, it follows immediately that

$$ E(T) \le E(0) + T\Big[ \frac{M h^2\,|1-6r|}{12} + O(k^2, h^4) \Big]\,. $$

Thus, provided $r \le 1/2$, the maximum spatial discretisation error in the Euler scheme grows at most in direct proportion to $T$. In particular, the Euler scheme is $O(k)$ more accurate than might otherwise have been anticipated when $r = 1/6$.
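The special role of $r = 1/6$ is simple to observe in practice. The sketch below (numpy assumed; the trial problem $u_t = u_{xx}$ with $u(x,0) = \sin\pi x$ and exact solution $e^{-\pi^2 t}\sin\pi x$ is chosen here for convenience) compares a generic Courant number with $r = 1/6$.

```python
# Euler marches at a generic Courant number and at r = 1/6, where the kh^2/12
# term of the error bound vanishes and the computed solution is markedly
# more accurate.
import numpy as np

def euler_error(r, n=20, t_end=0.1):
    h = 1.0 / n
    k = r * h**2
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.sin(np.pi * x)
    steps = round(t_end / k)
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
    exact = np.exp(-np.pi**2 * steps * k) * np.sin(np.pi * x)
    return float(np.max(np.abs(u - exact)))

err_generic, err_optimal = euler_error(0.4), euler_error(1.0 / 6.0)
print(err_generic, err_optimal)
```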
Exercises
Question 27.

The equation $u_t = u_{xx}$ is approximated at the point $(ih, jk)$ by

$$ \theta\left[ \frac{u_{i,j+1} - u_{i,j-1}}{2k} \right] + (1-\theta)\left[ \frac{u_{i,j} - u_{i,j-1}}{k} \right] - \left[ \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2} \right] = 0\,. $$

Show that the truncation error is given by

$$ \frac{k\,(1-\theta)}{2}\,\frac{\partial^2 u}{\partial t^2} - \frac{h^2}{12}\,\frac{\partial^4 u}{\partial x^4} + O(k^2, h^4)\,. $$

What is the optimal value for $\theta$?
Question 28.

The function $u(x,t)$ satisfies the equation

$$ u_t = u_{xx}\,, \qquad (x,t) \in (0,1) \times (0,\infty) $$

with initial condition $u(x,0) = \sin\pi x$ and boundary conditions $u(0,t) = u(1,t) = 0$ for $t > 0$.

Write out the discretised scheme using the explicit Euler method with a step length $h = 1/4$ and a time step $k = 0.05$. Use the symmetry of the problem to reduce the number of unknowns. Solve the equations to find an approximate value for $u(1/2, 0.1)$. [You could compare your numerical solution with the exact one obtained via separation of variables.]
Question  29.
Derive the Crank-Nicolson equations for the previous problem.   Use these with  h = 1/4,   k = 0.1 to
estimate the value of  u(1/2, 0.1) [exact answer 0.3727].   Also, with  h = 1/4 and  k = 0.025 estimate
u(0.5, 0.05).
Question  30.
The function $u(x,t)$ satisfies the equation

$$ u_t = x\,u_{xx}\,, \qquad (x,t) \in (0, 1/2) \times (0,\infty) $$

with initial condition $u(x,0) = x(1-x)$ and boundary conditions $u(0,t) = 0$ and $\partial u(1/2, t)/\partial x = -u(1/2, t)/2$.

Write out the discretised scheme using the explicit Euler method with a step length $h = 0.1$ and a time step $k = 0.005$, using the fictitious point approach to the derivative boundary condition. [No need to solve.] What will happen around $(1/2, 0)$? Why?
Question  31.
Use Gershgorin circles to prove that the Euler method used in the previous question with a general $(N+1)$-node mesh will not grow exponentially provided the Courant number $r$ satisfies

$$ r \le \frac{4}{4+h}\,. $$
Question  32.
The equation $u_t + a\,u_x - f(x,t) = 0$, in which $a$ is a constant, is approximated at the point $(ih, jk)$ by

$$ \frac{1}{2k}\Big[ 2u_{i,j+1} - \big( u_{i+1,j} + u_{i-1,j} \big) \Big] + \frac{a\,\big( u_{i+1,j} - u_{i-1,j} \big)}{2h} - f_{i,j} = 0\,. $$

Investigate the consistency of this method for $r = k/h$ and for $r = k/h^2$. If either is inconsistent, what equation does the method solve?
Question  33.
The equation $u_t = u_{xx}$ with $(x,t) \in (0,1) \times (0,\infty)$ is approximated at the point $(ih, jk)$ by the scheme

$$ u_{i,j+1} - u_{i,j} = r\Big[ \theta\,\big( u_{i-1,j+1} - 2u_{i,j+1} + u_{i+1,j+1} \big) + (1-\theta)\,\big( u_{i-1,j} - 2u_{i,j} + u_{i+1,j} \big) \Big] $$

where $\theta \in [0,1]$, $r = k/h^2$ is the Courant number and $h = 1/N$. Assuming that the boundary conditions and initial conditions are known, investigate the Fourier stability of this method.
Question  34.
Show that the $n \times n$ tri-diagonal matrix

$$ \begin{bmatrix} a & b & 0 & \cdots & 0 \\ c & a & b & & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & & c & a & b \\ 0 & \cdots & 0 & c & a \end{bmatrix} $$

has eigenvalues

$$ \lambda_k = a + 2\sqrt{bc}\,\cos\theta_k\,, \qquad k = 1, \ldots, n $$

with corresponding eigenvector

$$ X_k = \Big[ \big(\tfrac{c}{b}\big)^{1/2} \sin\theta_k\,,\ \big(\tfrac{c}{b}\big) \sin 2\theta_k\,,\ \ldots,\ \big(\tfrac{c}{b}\big)^{j/2} \sin j\theta_k\,,\ \ldots,\ \big(\tfrac{c}{b}\big)^{n/2} \sin n\theta_k \Big]^T $$

where $\theta_k = k\pi/(n+1)$.
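The stated formula is easy to test numerically; the sketch below (numpy assumed) picks $b$ and $c$ of the same sign so that the eigenvalues are real.

```python
# Eigenvalues of the constant tri-diagonal matrix tridiag(c, a, b) compared
# against the formula lambda_k = a + 2*sqrt(b*c)*cos(k*pi/(n+1)).
import numpy as np

n, a, b, c = 8, 2.0, 1.0, 3.0
A = (np.diag(np.full(n, a))
     + np.diag(np.full(n - 1, b), 1)
     + np.diag(np.full(n - 1, c), -1))
computed = np.sort(np.linalg.eigvals(A).real)
k = np.arange(1, n + 1)
predicted = np.sort(a + 2.0 * np.sqrt(b * c) * np.cos(k * np.pi / (n + 1)))
max_diff = float(np.max(np.abs(computed - predicted)))
print(max_diff)
```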
Question  35.
Let $\alpha$ be a constant. The equation $u_t = \alpha u_{xx}$ is to be solved in the region $(x,t) \in (0,1) \times (0,\infty)$ with boundary conditions $u(0,t) = u(1,t) = 0$. The solution is approximated at the point $(ih, jk)$ by the fully implicit backward Euler scheme

$$ u_{i,j+1} - u_{i,j} = r\,\big( u_{i-1,j+1} - 2u_{i,j+1} + u_{i+1,j+1} \big)\,, $$

where $r = \alpha k/h^2$ and $h = 1/N$. Assuming that the initial conditions are known, investigate the stability of this method using both the matrix and Fourier methods.

[Recall that the eigenvalues of a constant $n \times n$ tri-diagonal matrix are

$$ \lambda_i = a + 2\sqrt{bc}\,\cos\frac{i\pi}{n+1} $$

where $a$ is the diagonal entry and $b$ and $c$ are the super- and sub-diagonal entries.]
Question  36.
Show that the Euler and Richardson methods are both consistent with the original equation $u_t = u_{xx}$, so that their difficulties are entirely due to their instabilities.
Chapter  5
Parabolic  equations  in  two  dimensions
5.1   Introduction
The canonical parabolic equation in two dimensions is

$$ u_t = u_{xx} + u_{yy}\,, \qquad (x,y) \in D\,, \quad t \in (0,\infty) $$

with initial condition $u(x,y,0) = u_0(x,y)$ and boundary conditions along $\partial D$ which may be either Dirichlet (i.e. $u$ given), Neumann (i.e. $\partial u/\partial n$ given) or Robin (i.e. $\partial u/\partial n + \alpha u = 0$). As with elliptic equations in two dimensions, it will be assumed a priori that the spatial discretisation in the $x$ and $y$ directions is $h$. The finite difference formulation of $u_t = u_{xx} + u_{yy}$ is

$$ \frac{du_{i,j}}{dt} = \frac{1}{h^2}\Big[ \big( u_{i+1,j} - 2u_{i,j} + u_{i-1,j} \big) + \big( u_{i,j+1} - 2u_{i,j} + u_{i,j-1} \big) \Big]\,. $$
The Crank-Nicolson re-formulation of this equation is obtained by integrating the equation over $[t_q, t_{q+1}]$ and approximating each integral on the right-hand side by the trapezoidal rule to obtain

$$ u^{q+1}_{i,j} = u^{q}_{i,j} + \frac{r}{2}\Big[ \big( u^{q+1}_{i+1,j} - 2u^{q+1}_{i,j} + u^{q+1}_{i-1,j} \big) + \big( u^{q+1}_{i,j+1} - 2u^{q+1}_{i,j} + u^{q+1}_{i,j-1} \big) + \big( u^{q}_{i+1,j} - 2u^{q}_{i,j} + u^{q}_{i-1,j} \big) + \big( u^{q}_{i,j+1} - 2u^{q}_{i,j} + u^{q}_{i,j-1} \big) \Big] $$
which is subsequently re-arranged to give

$$ u^{q+1}_{i,j} - \frac{r}{2}\big( u^{q+1}_{i+1,j} - 2u^{q+1}_{i,j} + u^{q+1}_{i-1,j} \big) - \frac{r}{2}\big( u^{q+1}_{i,j+1} - 2u^{q+1}_{i,j} + u^{q+1}_{i,j-1} \big) = u^{q}_{i,j} + \frac{r}{2}\big( u^{q}_{i+1,j} - 2u^{q}_{i,j} + u^{q}_{i-1,j} \big) + \frac{r}{2}\big( u^{q}_{i,j+1} - 2u^{q}_{i,j} + u^{q}_{i,j-1} \big)\,. $$
As with the Crank-Nicolson formulation of the one-dimensional problem, it may be demonstrated that this scheme is consistent. However, the matrix of the system is no longer tri-diagonal, and so an iterative procedure is required to advance the solution through each time step. To simplify the representation of the scheme, the finite difference operator $\delta$ is introduced through the definition

$$ \delta(x_p) = x_{p+1/2} - x_{p-1/2}\,. $$
Thus

$$ \delta^2(x_p) = \delta\big( x_{p+1/2} - x_{p-1/2} \big) = \big( x_{p+1} - x_p \big) - \big( x_p - x_{p-1} \big) = x_{p+1} - 2x_p + x_{p-1}\,. $$
With this definition, the Crank-Nicolson scheme becomes

$$ u^{q+1}_{i,j} - \frac{r}{2}\,\delta^2_x\big( u^{q+1}_{i,j} \big) - \frac{r}{2}\,\delta^2_y\big( u^{q+1}_{i,j} \big) = u^{q}_{i,j} + \frac{r}{2}\,\delta^2_x\big( u^{q}_{i,j} \big) + \frac{r}{2}\,\delta^2_y\big( u^{q}_{i,j} \big)\,. $$
5.2   Alternating  Direction  Implicit
The construction of the alternating direction implicit (ADI) scheme begins with the finite difference scheme

$$ u^{q+1}_{i,j} - u^{q}_{i,j} = r\Big[ \delta^2_x\big( u^{q}_{i,j} \big) + \delta^2_y\big( u^{q}_{i,j} \big) \Big] $$

in which the time derivative in the original equation has been replaced by its forward difference. A further time step gives

$$ u^{q+2}_{i,j} - u^{q+1}_{i,j} = r\Big[ \delta^2_x\big( u^{q+1}_{i,j} \big) + \delta^2_y\big( u^{q+1}_{i,j} \big) \Big]\,. $$
The term $\delta^2_x( u^{q}_{i,j} )$ in the first of these equations is replaced by $\delta^2_x( u^{q+1}_{i,j} )$, and the term $\delta^2_y( u^{q+1}_{i,j} )$ in the second of these equations is replaced by $\delta^2_y( u^{q+2}_{i,j} )$. This procedure gives the two-step algorithm

$$ \begin{aligned} u^{q+1}_{i,j} - r\,\delta^2_x\big( u^{q+1}_{i,j} \big) &= u^{q}_{i,j} + r\,\delta^2_y\big( u^{q}_{i,j} \big) \\ u^{q+2}_{i,j} - r\,\delta^2_y\big( u^{q+2}_{i,j} \big) &= u^{q+1}_{i,j} + r\,\delta^2_x\big( u^{q+1}_{i,j} \big)\,. \end{aligned} $$
By this methodology, $\delta^2_y( u^{q}_{i,j} )$ is explicit and $u^{q+1}_{i,j}$ is determined implicitly. The roles are reversed in the second equation: here $\delta^2_x( u^{q+1}_{i,j} )$ is explicit and $u^{q+2}_{i,j}$ is determined implicitly. This is the origin of the alternating direction description of the algorithm. The important property of the ADI algorithm is that both steps require only the solution of tri-diagonal systems of linear equations, the first to find $u^{q+1}_{i,j}$ and the second to find $u^{q+2}_{i,j}$.
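A compact sketch of the double step follows (numpy assumed; dense solves stand in for the tri-diagonal ones to keep the code short). The initial data $\sin\pi x\,\sin\pi y$ is an exact eigenmode, decaying like $e^{-2\pi^2 t}$, which gives a convenient accuracy check.

```python
# One hundred ADI double steps for u_t = u_xx + u_yy on the unit square with
# homogeneous Dirichlet data; each half-step is implicit in one direction only.
import numpy as np

n = 24
h = 1.0 / n
r = 0.4                      # one choice of Courant number; ADI allows any r
k = r * h**2
x = np.linspace(0.0, 1.0, n + 1)
U = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # first index is x

m = n - 1
D2 = (np.diag(np.full(m, -2.0)) + np.diag(np.ones(m - 1), 1)
      + np.diag(np.ones(m - 1), -1))          # delta^2 on interior nodes
L = np.eye(m) - r * D2                         # implicit factor 1 - r*delta^2
R = np.eye(m) + r * D2                         # explicit factor 1 + r*delta^2

t = 0.0
for _ in range(100):
    V = U[1:-1, 1:-1]
    V = np.linalg.solve(L, V @ R)              # x implicit, y explicit
    V = np.linalg.solve(L, (R @ V).T).T        # y implicit, x explicit
    U[1:-1, 1:-1] = V
    t += 2 * k

exact = np.exp(-2 * np.pi**2 * t) * np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
err = float(np.max(np.abs(U - exact)))
print(err)
```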
5.2.1   Consistency  of  ADI
For representational convenience, the subscripts are dropped from $u$. The ADI algorithm is

$$ \begin{aligned} (1 - r\,\delta^2_x)\,u^{q+1} &= (1 + r\,\delta^2_y)\,u^{q} \\ (1 - r\,\delta^2_y)\,u^{q+2} &= (1 + r\,\delta^2_x)\,u^{q+1} \end{aligned} $$

which becomes, on elimination of the terms at $t_{q+1}$,

$$ (1 - r\,\delta^2_x)(1 - r\,\delta^2_y)\,u^{q+2} = (1 + r\,\delta^2_x)(1 + r\,\delta^2_y)\,u^{q}\,. $$
It is now straightforward algebra to verify that

$$ (1 + r^2\,\delta^2_x\,\delta^2_y)\big( u^{q+2} - u^{q} \big) = r\,\big( \delta^2_x + \delta^2_y \big)\big( u^{q+2} + u^{q} \big)\,. $$
This equation is now divided by $2k$ and $r$ replaced by its definition to obtain

$$ (1 + r^2\,\delta^2_x\,\delta^2_y)\,\frac{u^{q+2} - u^{q}}{2k} = \big( \delta^2_x + \delta^2_y \big)\,\frac{u^{q+2} + u^{q}}{2h^2}\,. $$
Each term appearing in this equation is now replaced by its Taylor series expansion taken about $(x_i, y_j)$ at time $t_{q+1}$. It is easy to see that $(u^{q+2} - u^{q})/2k$ is the central difference formula for $u_t$ at time $t_{q+1}$, and therefore

$$ \frac{u^{q+2} - u^{q}}{2k} = u^{q+1}_t + O(k^2)\,, \qquad u^{q+2} + u^{q} = 2u^{q+1} + O(k^2)\,. $$
Consequently,

$$ \big( \delta^2_x + \delta^2_y \big)\,\frac{u^{q+2} + u^{q}}{2h^2} = \frac{\big( \delta^2_x + \delta^2_y \big)\,u^{q+1}}{h^2} + O(k^2) = u^{q+1}_{xx} + u^{q+1}_{yy} + O(k^2, h^2)\,. $$
Finally,

$$ \frac{r^2\,\delta^2_x\,\delta^2_y\,\big( u^{q+2} - u^{q} \big)}{2k} = \frac{k^2}{h^4}\,\delta^2_x\,\delta^2_y\,\big( u^{q+1}_t \big) + O(k^2) = k^2\,u^{q+1}_{txxyy} + O(k^2, h^2)\,. $$
The component parts are now assembled to give

$$ u^{q+1}_t + k^2\,u^{q+1}_{txxyy} + O(k^2) = u^{q+1}_{xx} + u^{q+1}_{yy} + O(k^2, h^2) $$

which may be rearranged to give

$$ u^{q+1}_t - u^{q+1}_{xx} - u^{q+1}_{yy} = O(k^2, h^2)\,. $$

Consequently the ADI algorithm is unconditionally consistent.
5.2.2   Stability  of  the  ADI  algorithm
The stability of the ADI algorithm is investigated by the substitution

$$ u^{n}_{p,q} = \lambda^{n} e^{i(p\alpha + q\beta)h}\,. $$

The calculation is based on the auxiliary observation that

$$ \begin{aligned} \delta^2_x\,u &= \big( e^{i\alpha h} - 2 + e^{-i\alpha h} \big)\,u = 2(\cos\alpha h - 1)\,u = -4\sin^2(\alpha h/2)\,u\,, \\ \delta^2_y\,u &= \big( e^{i\beta h} - 2 + e^{-i\beta h} \big)\,u = 2(\cos\beta h - 1)\,u = -4\sin^2(\beta h/2)\,u\,. \end{aligned} $$

These formulae are now substituted into the numerical scheme

$$ (1 - r\,\delta^2_x)(1 - r\,\delta^2_y)\,u^{q+2} = (1 + r\,\delta^2_x)(1 + r\,\delta^2_y)\,u^{q} $$

to obtain

$$ \lambda^2 = \frac{\big(1 - 4r\sin^2(\alpha h/2)\big)\big(1 - 4r\sin^2(\beta h/2)\big)}{\big(1 + 4r\sin^2(\alpha h/2)\big)\big(1 + 4r\sin^2(\beta h/2)\big)}\,. $$

The inequality $|1-s| \le 1+s$ for $s \ge 0$, applied to each factor in the expression for $\lambda^2$, yields immediately that $|\lambda|^2 \le 1$, and so the ADI algorithm is unconditionally stable for all choices of $r$.
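A direct numerical sweep of this expression (numpy assumed) confirms the bound:

```python
# Squared ADI amplification factor over sampled Courant numbers and mode pairs:
# each factor (1 - s)/(1 + s) with s >= 0 has modulus at most one, so the
# product never exceeds one in modulus.
import numpy as np

def adi_lambda_sq(r, a, b):                     # a = alpha*h, b = beta*h
    sa = 4.0 * r * np.sin(a / 2.0) ** 2
    sb = 4.0 * r * np.sin(b / 2.0) ** 2
    return ((1.0 - sa) * (1.0 - sb)) / ((1.0 + sa) * (1.0 + sb))

angles = np.linspace(0.0, np.pi, 50)
worst = max(abs(adi_lambda_sq(r, a, b))
            for r in (0.1, 1.0, 25.0) for a in angles for b in angles)
print(worst)
```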
Exercises
Question  37.
Show that the 2-D ADI method

$$ \begin{aligned} (1 - r\,\delta^2_x)\,\tilde u^{\,n+1} &= (1 + r\,\delta^2_y)\,u^{n}\,, \\ (1 - r\,\delta^2_y)\,u^{n+1} &= \tilde u^{\,n+1} - r\,\delta^2_y\,u^{n} \end{aligned} $$

is unconditionally stable. [Start by eliminating $\tilde u^{\,n+1}$.]
Question  38.
Show that the 3-D ADI method

$$ \begin{aligned} (1 - r\,\delta^2_x)\,u^{n+1} &= (1 + r\,\delta^2_y + r\,\delta^2_z)\,u^{n}\,, \\ (1 - r\,\delta^2_y)\,u^{n+2} &= (1 + r\,\delta^2_x + r\,\delta^2_z)\,u^{n+1}\,, \\ (1 - r\,\delta^2_z)\,u^{n+3} &= (1 + r\,\delta^2_x + r\,\delta^2_y)\,u^{n+2} \end{aligned} $$

is stable provided that $r \le 1/2$. [Start by eliminating $u^{n+1}$ and $u^{n+2}$.]
Question  39.
The function $u(x,t)$ satisfies the equation

$$ u_t = u_{xx} + f(x,t)\,, \qquad (x,t) \in (0,1) \times (0,\infty) $$

with given initial condition and boundary conditions

$$ u(0,t) = 0\,, \qquad \frac{\partial u(1,t)}{\partial x} = g(t)\,. $$

Show by careful analysis of the truncation error that the gradient boundary condition derived by the fictitious nodes procedure usually gives rise to a system of finite difference equations that are $O(h)$ accurate.

Show that $O(h^3)$ accuracy is recovered provided

$$ \frac{dg}{dt} = \frac{\partial f(x_n, t)}{\partial x}\,. $$
Chapter  6
NSPDE  solutions
Solution  1.
(a) Define $g(h) = 2hf'(x) - f(x+h) + f(x-h)$; then clearly $g(0) = 0$. Three differentiations with respect to $h$ give

$$ \begin{aligned} \frac{dg}{dh} &= 2f'(x) - f'(x+h) - f'(x-h) \\ \frac{d^2 g}{dh^2} &= -f''(x+h) + f''(x-h) \\ \frac{d^3 g}{dh^3} &= -f'''(x+h) - f'''(x-h)\,. \end{aligned} $$

Since $g(0) = g'(0) = g''(0) = 0$,

$$ g(h) = g(0) + h\,g'(0) + \frac{h^2}{2}\,g''(0) + \frac{h^3}{6}\,g'''(\theta h) = \frac{h^3}{6}\,g'''(\theta h)\,, \qquad \theta \in (0,1)\,. $$

Given any continuous function $y(x)$, the task is now to show that the equation $q(\lambda) = y(x+h) + y(x-h) - 2y(x-\lambda h) = 0$ has a solution in the interval $[-1,1]$. To establish this result, it is enough to observe that $q(1) = y(x+h) - y(x-h)$ and $q(-1) = y(x-h) - y(x+h)$. Clearly $q(1) + q(-1) = 0$, and therefore $q(\lambda)$ has at least one zero in $[-1,1]$. Consequently there exists $\xi$ such that

$$ \frac{d^3 g}{dh^3}\bigg|_{h=\theta h} = -f'''(x+\theta h) - f'''(x-\theta h) = -2f'''(\xi)\,, \qquad \xi \in [x-h, x+h]\,, $$

and the result is proved by asserting that

$$ g(h) = -\frac{h^3}{3}\,f'''(\xi)\,, $$

which in turn simplifies to the answer

$$ f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{6}\,f'''(\xi)\,. $$
(b) Define $g(h) = h^2 f''(x) - f(x+h) + 2f(x) - f(x-h)$; then $g(0) = 0$ and

$$ \begin{aligned} \frac{dg}{dh} &= 2h\,f''(x) - f'(x+h) + f'(x-h) \\ \frac{d^2 g}{dh^2} &= 2f''(x) - f''(x+h) - f''(x-h) \\ \frac{d^3 g}{dh^3} &= -f'''(x+h) + f'''(x-h) \\ \frac{d^4 g}{dh^4} &= -f^{(4)}(x+h) - f^{(4)}(x-h)\,. \end{aligned} $$

Since $g(0) = g'(0) = g''(0) = g'''(0) = 0$,

$$ g(h) = \frac{h^4}{24}\,g^{(4)}(\theta h)\,, \qquad \theta \in (0,1)\,. $$

Taking $y(x) = f^{(4)}(x)$ in the intermediate value argument of part (a) gives

$$ f^{(4)}(x+\theta h) + f^{(4)}(x-\theta h) = 2f^{(4)}(\xi)\,, \qquad \xi \in [x-h, x+h]\,, $$

and the result is proved by asserting that

$$ g(h) = -\frac{h^4}{12}\,f^{(4)}(\xi)\,, $$

which in turn simplifies to the answer

$$ f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} - \frac{h^2}{12}\,f^{(4)}(\xi)\,. $$
Solution  2.
The Taylor series expansion of  f(x +h) is
f(x +h) = f(x) +hf
(x) +
  h
2
2
  f
(x) +
  h
3
6
  f
(x) +O(h
4
) .
The Taylor series of 4f(x +h) f(x + 2h) 3f(x) is therefore
4
_
f(x) +hf
(x) +
  h
2
2
  f
(x) +
  h
3
6
  f
(x) +O(h
4
)
_
_
f(x) + 2hf
(x) +
 (2h)
2
2
  f
(x) +
 (2h)
3
6
  f
(x) +O(h
4
)
_
3f(x)
=   4f(x) + 4hf
(x) +
 4h
2
2
  f
(x) +
 4h
3
6
  f
(x)
 f(x) 2hf
(x) 
 4h
2
2
  f
(x) 
 8h
3
6
  f
(x) 
 2h
3
3
  f
(x) +O(h
4
) .
83
It now follows by straightforward algebra that

    f'(x) = [4f(x+h) - f(x+2h) - 3f(x)]/(2h) + (h²/3) f'''(x) + O(h³)
          = [4f(x+h) - f(x+2h) - 3f(x)]/(2h) + O(h²) .

This is the forward difference formula for f'(x). The corresponding backward difference formula follows on replacing h by -h:

    f'(x) = [4f(x-h) - f(x-2h) - 3f(x)]/(-2h) + (h²/3) f'''(x) + O(h³)
          = [f(x-2h) - 4f(x-h) + 3f(x)]/(2h) + O(h²) .
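The O(h²) behaviour of the two one-sided formulae can be checked numerically; in the following Python sketch the test function exp and the sample point 0.3 are arbitrary choices, used only to compare the error at h with the error at h/2.

```python
import math

def forward_one_sided(f, x, h):
    # f'(x) ~ [4f(x+h) - f(x+2h) - 3f(x)] / (2h), second order accurate
    return (4 * f(x + h) - f(x + 2 * h) - 3 * f(x)) / (2 * h)

def backward_one_sided(f, x, h):
    # f'(x) ~ [f(x-2h) - 4f(x-h) + 3f(x)] / (2h), second order accurate
    return (f(x - 2 * h) - 4 * f(x - h) + 3 * f(x)) / (2 * h)

x = 0.3
e_fwd = [abs(forward_one_sided(math.exp, x, h) - math.exp(x)) for h in (0.1, 0.05)]
e_bwd = [abs(backward_one_sided(math.exp, x, h) - math.exp(x)) for h in (0.1, 0.05)]

# second order accuracy: error ratio close to 4 when h is halved
ratio_fwd = e_fwd[0] / e_fwd[1]
ratio_bwd = e_bwd[0] / e_bwd[1]
```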
Solution  3.
These results can be established directly and efficiently using Taylor series by recognising that the finite difference expressions involve very specific combinations of function values. Define

    φ(x, h) = u(x+h) - u(x-h) ,   ψ(x, h) = u(x+h) + u(x-h) ;

then Taylor's theorem gives immediately that

    φ(x, h) = 2h u⁽¹⁾(x) + (h³/3) u⁽³⁾(x) + (h⁵/60) u⁽⁵⁾(x) + O(h⁷)
    ψ(x, h) = 2u(x) + h² u⁽²⁾(x) + (h⁴/12) u⁽⁴⁾(x) + O(h⁶) .

(a) It now follows by straightforward algebra that

    u(x+2h) - 2u(x+h) + 2u(x-h) - u(x-2h) = φ(x, 2h) - 2φ(x, h)
    = [ 4h u⁽¹⁾(x) + (8h³/3) u⁽³⁾(x) ] - 2[ 2h u⁽¹⁾(x) + (h³/3) u⁽³⁾(x) ] + O(h⁵)
    = 2h³ u⁽³⁾(x) + O(h⁵) .

This result and the definition of Big O indicate that

    u⁽³⁾(x) = [u(x+2h) - 2u(x+h) + 2u(x-h) - u(x-2h)]/(2h³) + O(h²) .

(b) A similar piece of straightforward algebra gives

    u(x+2h) - 4u(x+h) + 6u(x) - 4u(x-h) + u(x-2h) = ψ(x, 2h) - 4ψ(x, h) + 6u(x)
    = [ 2u(x) + 4h² u⁽²⁾(x) + (16h⁴/12) u⁽⁴⁾(x) ] - 4[ 2u(x) + h² u⁽²⁾(x) + (h⁴/12) u⁽⁴⁾(x) ] + 6u(x) + O(h⁶)
    = h⁴ u⁽⁴⁾(x) + O(h⁶) .

This result and the definition of Big O indicate that

    u⁽⁴⁾(x) = [u(x+2h) - 4u(x+h) + 6u(x) - 4u(x-h) + u(x-2h)]/h⁴ + O(h²) .
Solution  4.
These results are established directly using the Taylor series expansions

    φ(x, h) = u(x+h) - u(x-h) = 2h u⁽¹⁾(x) + (h³/3) u⁽³⁾(x) + (h⁵/60) u⁽⁵⁾(x) + O(h⁷)
    ψ(x, h) = u(x+h) + u(x-h) = 2u(x) + h² u⁽²⁾(x) + (h⁴/12) u⁽⁴⁾(x) + O(h⁶) .

(a) It follows by straightforward algebra that

    u(x-2h) - 8u(x-h) + 8u(x+h) - u(x+2h) = 8φ(x, h) - φ(x, 2h)
    = 8[ 2h u⁽¹⁾(x) + (h³/3) u⁽³⁾(x) ] - [ 4h u⁽¹⁾(x) + (8h³/3) u⁽³⁾(x) ] + O(h⁵)
    = 12h u⁽¹⁾(x) + O(h⁵) .

This result and the definition of Big O indicate that

    u⁽¹⁾(x) = [u(x-2h) - 8u(x-h) + 8u(x+h) - u(x+2h)]/(12h) + O(h⁴) .

(b) It follows by straightforward algebra that

    -u(x-2h) + 16u(x-h) - 30u(x) + 16u(x+h) - u(x+2h) = 16ψ(x, h) - ψ(x, 2h) - 30u(x)
    = 16[ 2u(x) + h² u⁽²⁾(x) + (h⁴/12) u⁽⁴⁾(x) ] - [ 2u(x) + 4h² u⁽²⁾(x) + (16h⁴/12) u⁽⁴⁾(x) ] - 30u(x) + O(h⁶)
    = 12h² u⁽²⁾(x) + O(h⁶) .

This result and the definition of Big O indicate that

    u⁽²⁾(x) = [-u(x-2h) + 16u(x-h) - 30u(x) + 16u(x+h) - u(x+2h)]/(12h²) + O(h⁴) .
These finite difference formulae are rarely used, for several reasons. First, nodes neighbouring a boundary cannot be treated easily, even when the value of the solution is prescribed on the boundary (Dirichlet boundary conditions). Second, the underlying system of equations gives rise to a penta-diagonal system which is not so easy to solve. Third, the matrix of this penta-diagonal system of equations is not diagonally dominant, and consequently pivoting may be required.
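Whatever their practical drawbacks, the two five-point formulae are genuinely fourth order accurate, so halving h should divide the error by about sixteen. A Python sketch (sin and the sample point 1.0 are arbitrary choices) confirms this.

```python
import math

def d1_fourth_order(u, x, h):
    # five-point, fourth order approximation to u'(x)
    return (u(x - 2*h) - 8*u(x - h) + 8*u(x + h) - u(x + 2*h)) / (12 * h)

def d2_fourth_order(u, x, h):
    # five-point, fourth order approximation to u''(x)
    return (-u(x - 2*h) + 16*u(x - h) - 30*u(x)
            + 16*u(x + h) - u(x + 2*h)) / (12 * h**2)

x = 1.0
err1 = [abs(d1_fourth_order(math.sin, x, h) - math.cos(x)) for h in (0.2, 0.1)]
err2 = [abs(d2_fourth_order(math.sin, x, h) + math.sin(x)) for h in (0.2, 0.1)]

# fourth order accuracy: error ratio close to 16 when h is halved
r1 = err1[0] / err1[1]
r2 = err2[0] / err2[1]
```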
Solution  5.
The combination of derivatives defined by Φ occurs frequently in problems involving non-uniform material or geometrical properties. The second order central difference expression for Φ may be written down using the identity

    d/dx[ a(x) df/dx ] = (da/dx)(df/dx) + a(x) d²f/dx²
                       = (1/2)[ d²(a(x)f(x))/dx² - f(x) d²a/dx² + a(x) d²f/dx² ] .

Consequently,

    Φ_k = (1/2)[ (a_{k+1}f_{k+1} - 2a_k f_k + a_{k-1}f_{k-1})/h²
                 - f_k (a_{k+1} - 2a_k + a_{k-1})/h²
                 + a_k (f_{k+1} - 2f_k + f_{k-1})/h² ]
        = (1/2h²)[ a_{k+1}f_{k+1} - 2a_k f_k + a_{k-1}f_{k-1}
                   - f_k a_{k+1} + 2f_k a_k - f_k a_{k-1}
                   + a_k f_{k+1} - 2a_k f_k + a_k f_{k-1} ]
        = (1/2h²)[ (a_{k+1} + a_k) f_{k+1} - (a_{k+1} + 2a_k + a_{k-1}) f_k + (a_{k-1} + a_k) f_{k-1} ] .

It is obvious from its construction that this finite difference expression for Φ_k is second order accurate.
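The claimed second order accuracy is easy to exhibit numerically. In the Python sketch below, a(x) = 1 + x² and f(x) = sin x are arbitrary test data, for which the exact value is (a f')' = 2x cos x - (1 + x²) sin x.

```python
import math

def phi_fd(a, f, x, h):
    # second order central approximation to d/dx( a(x) df/dx )
    ap, a0, am = a(x + h), a(x), a(x - h)
    fp, f0, fm = f(x + h), f(x), f(x - h)
    return ((ap + a0) * fp - (ap + 2 * a0 + am) * f0 + (am + a0) * fm) / (2 * h**2)

a = lambda x: 1.0 + x**2
f = math.sin
exact = lambda x: 2 * x * math.cos(x) - (1 + x**2) * math.sin(x)

x = 0.8
errs = [abs(phi_fd(a, f, x, h) - exact(x)) for h in (0.1, 0.05)]
ratio = errs[0] / errs[1]   # should be close to 4 for a second order scheme
```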
Solution  6.
The Taylor series expansions of f(x-h) and f(x+λh) are

    f(x-h)  = f(x) - hf'(x) + (h²/2) f''(x) + O(h³)
    f(x+λh) = f(x) + λhf'(x) + (λ²h²/2) f''(x) + O(h³) .

Subtracting these expressions gives

    f(x+λh) - f(x-h) = h(1+λ) f'(x) + (h²(λ²-1)/2) f''(x) + O(h³)

which in turn leads to the result

    f'(x) = [f(x+λh) - f(x-h)]/((λ+1)h) + O(h) .
Solution  7.
Unlike the previous example, there is now no need to eliminate f(x) between the equations

    f(x-h)  = f(x) - hf'(x) + (h²/2) f''(x) + O(h³)
    f(x+λh) = f(x) + λhf'(x) + (λ²h²/2) f''(x) + O(h³)

since its value is known. The idea is instead to eliminate the term involving f''(x). Thus

    f(x+λh) - λ²f(x-h) = (1-λ²) f(x) + (λ+λ²) h f'(x) + O(h³) .

Straightforward algebra now gives

    f'(x) = [f(x+λh) - (1-λ²)f(x) - λ²f(x-h)]/(λ(1+λ)h) + O(h²) .
Solution  8.
The Taylor series expansions of f(x-2h), f(x-h) and f(x+λh) are

    f(x-2h) = f(x) - 2hf'(x) + 2h² f''(x) - (4h³/3) f'''(x) + O(h⁴)
    f(x-h)  = f(x) - hf'(x) + (h²/2) f''(x) - (h³/6) f'''(x) + O(h⁴)
    f(x+λh) = f(x) + λhf'(x) + (λ²h²/2) f''(x) + (λ³h³/6) f'''(x) + O(h⁴) .

The value of f'(x) is first eliminated between equations 1 and 2 and between equations 2 and 3 to obtain

    f(x-2h) - 2f(x-h) = -f(x) + h² f''(x) - h³ f'''(x) + O(h⁴)
    6(λf(x-h) + f(x+λh)) = 6(1+λ) f(x) + 3(λ+λ²) h² f''(x) + (λ³-λ) h³ f'''(x) + O(h⁴) .

The final answer is constructed by multiplying the first of the previous equations by (λ³-λ) and adding to obtain

    (λ³-λ)(f(x-2h) - 2f(x-h)) + 6(λf(x-h) + f(x+λh)) =
        6(1+λ) f(x) - (λ³-λ) f(x) + 3(λ+λ²) h² f''(x) + (λ³-λ) h² f''(x) + O(h⁴)

and this in turn simplifies to

    λ(λ²-1) f(x-2h) - 2λ(λ²-4) f(x-h) + 6f(x+λh) =
        (λ+1)(6+λ-λ²) f(x) + (2λ + 3λ² + λ³) h² f''(x) + O(h⁴) .

Further simplification gives

    λ(λ²-1) f(x-2h) - 2λ(λ²-4) f(x-h) + 6f(x+λh) =
        (λ+1)(2+λ)(3-λ) f(x) + λ(λ+1)(λ+2) h² f''(x) + O(h⁴) ,

from which it follows by straightforward algebra that

    f''(x) = [6f(x+λh) + (λ-3)(λ+1)(λ+2) f(x) - 2λ(λ²-4) f(x-h) + λ(λ²-1) f(x-2h)]
             / (λ(λ+1)(λ+2) h²) + O(h²) .
Solution  9.
(a) There are three products of two basis functions that contribute to integrals over the element occupying [x_j, x_{j+1}], namely u_j²(x), u_j(x)u_{j+1}(x) and u²_{j+1}(x). Under the change of variable x = x_j + ξ(x_{j+1} - x_j), it is clear that u_j(x) = 1 - ξ and u_{j+1}(x) = ξ. Consequently,

    ∫_{x_j}^{x_{j+1}} u_j²(x) dx        = (x_{j+1} - x_j) ∫_0^1 (1-ξ)² dξ  = (x_{j+1} - x_j)/3 ,
    ∫_{x_j}^{x_{j+1}} u_j(x)u_{j+1}(x) dx = (x_{j+1} - x_j) ∫_0^1 ξ(1-ξ) dξ = (x_{j+1} - x_j)/6 ,
    ∫_{x_j}^{x_{j+1}} u²_{j+1}(x) dx    = (x_{j+1} - x_j) ∫_0^1 ξ² dξ      = (x_{j+1} - x_j)/3 .

Since basis functions interior to a segment span two elements and those at the ends of a segment span only one element (see Figure 1.1), it follows immediately that

    ∫_0^L u_k(x)u_j(x) dx = ∫_{x_{j-1}}^{x_j} u_k(x)u_j(x) dx + ∫_{x_j}^{x_{j+1}} u_k(x)u_j(x) dx

where each integral on the right hand side is present only if the underlying element exists.
(b) There are three possible products of two differentiated basis functions that contribute to integrals over the element occupying [x_j, x_{j+1}], namely

    ∫_{x_j}^{x_{j+1}} (du_j/dx)² dx ,   ∫_{x_j}^{x_{j+1}} (du_j/dx)(du_{j+1}/dx) dx ,   ∫_{x_j}^{x_{j+1}} (du_{j+1}/dx)² dx .

The computation of these integrals is immediate once it is recognised that

    du_j/dx = -1/(x_{j+1} - x_j) ,   du_{j+1}/dx = 1/(x_{j+1} - x_j) .

The result of these computations is

    ∫_{x_j}^{x_{j+1}} (du_j/dx)² dx = ∫_{x_j}^{x_{j+1}} (du_{j+1}/dx)² dx = 1/(x_{j+1} - x_j) ,
    ∫_{x_j}^{x_{j+1}} (du_j/dx)(du_{j+1}/dx) dx = -1/(x_{j+1} - x_j) .

In the computation of

    ∫_0^L (du_k/dx)(du_j/dx) dx = ∫_{x_{j-1}}^{x_j} (du_k/dx)(du_j/dx) dx + ∫_{x_j}^{x_{j+1}} (du_k/dx)(du_j/dx) dx

it is again understood that each integral on the right hand side is present only if the underlying element exists.
(c) There are four possible products of three basis functions that contribute to integrals over the element occupying [x_j, x_{j+1}], namely u_j³(x), u_j²(x)u_{j+1}(x), u_j(x)u²_{j+1}(x) and u³_{j+1}(x). The change of variable x = x_j + ξ(x_{j+1} - x_j) now yields

    ∫_{x_j}^{x_{j+1}} u_j³(x) dx          = (x_{j+1} - x_j) ∫_0^1 (1-ξ)³ dξ   = (x_{j+1} - x_j)/4 ,
    ∫_{x_j}^{x_{j+1}} u_j²(x)u_{j+1}(x) dx = (x_{j+1} - x_j) ∫_0^1 ξ(1-ξ)² dξ = (x_{j+1} - x_j)/12 ,
    ∫_{x_j}^{x_{j+1}} u_j(x)u²_{j+1}(x) dx = (x_{j+1} - x_j) ∫_0^1 ξ²(1-ξ) dξ = (x_{j+1} - x_j)/12 ,
    ∫_{x_j}^{x_{j+1}} u³_{j+1}(x) dx      = (x_{j+1} - x_j) ∫_0^1 ξ³ dξ       = (x_{j+1} - x_j)/4 .

Since basis functions interior to a segment span two elements and those at the ends of a segment span only one element, it again follows immediately that

    ∫_0^L u_i(x)u_k(x)u_j(x) dx = ∫_{x_{j-1}}^{x_j} u_i(x)u_k(x)u_j(x) dx + ∫_{x_j}^{x_{j+1}} u_i(x)u_k(x)u_j(x) dx

where each integral on the right hand side is present only if the underlying element exists.

(d) There are six possible products of one basis function and two differentiated basis functions that contribute to integrals over the element occupying [x_j, x_{j+1}], namely

    ∫ u_j (du_j/dx)² dx ,     ∫ u_j (du_j/dx)(du_{j+1}/dx) dx ,     ∫ u_j (du_{j+1}/dx)² dx ,
    ∫ u_{j+1} (du_j/dx)² dx , ∫ u_{j+1} (du_j/dx)(du_{j+1}/dx) dx , ∫ u_{j+1} (du_{j+1}/dx)² dx ,

all taken over [x_j, x_{j+1}]. The computation of these integrals is immediate once it is recognised that

    du_j/dx = -1/(x_{j+1} - x_j) ,   du_{j+1}/dx = 1/(x_{j+1} - x_j) ,
    ∫_{x_j}^{x_{j+1}} u_j(x) dx = ∫_{x_j}^{x_{j+1}} u_{j+1}(x) dx = (x_{j+1} - x_j)/2 .

The result of these computations is

    ∫ u_j (du_j/dx)² dx = ∫ u_{j+1} (du_j/dx)² dx
      = ∫ u_j (du_{j+1}/dx)² dx = ∫ u_{j+1} (du_{j+1}/dx)² dx = (1/2) · 1/(x_{j+1} - x_j) ,
    ∫ u_j (du_j/dx)(du_{j+1}/dx) dx = ∫ u_{j+1} (du_j/dx)(du_{j+1}/dx) dx = -(1/2) · 1/(x_{j+1} - x_j) .

In the computation of

    ∫_0^L u_i(x) (du_k/dx)(du_j/dx) dx = ∫_{x_{j-1}}^{x_j} u_i (du_k/dx)(du_j/dx) dx + ∫_{x_j}^{x_{j+1}} u_i (du_k/dx)(du_j/dx) dx

it is again understood that each integral on the right hand side is present only if the underlying element exists.
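The element integrals quoted in parts (a), (c) and (d) are easy to confirm by quadrature, since composite Simpson's rule is exact for polynomials of degree up to three. In the sketch below the element length 0.35 is an arbitrary choice.

```python
def simpson(g, a, b, n=200):
    # composite Simpson rule (n must be even); exact for cubics
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

h = 0.35                      # element length x_{j+1} - x_j
uj = lambda x: 1 - x / h      # u_j restricted to the element, local coordinate in [0, h]
uj1 = lambda x: x / h         # u_{j+1} restricted to the element

i_sq = simpson(lambda x: uj(x)**2, 0, h)               # expected h/3
i_cross = simpson(lambda x: uj(x) * uj1(x), 0, h)      # expected h/6
i_cube = simpson(lambda x: uj(x)**3, 0, h)             # expected h/4
i_sq_cross = simpson(lambda x: uj(x)**2 * uj1(x), 0, h)  # expected h/12
```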
Solution  10.
Let x_j = jL/n and define f_j = f̃(x_j) where f̃(x) is given by the partial sum

    f̃(x) = Σ_{k=-n/2}^{n/2-1} c_k e^{2πikx/L} ,   x ∈ [0, L] .

The value f_j is obtained from this summation by replacing x by x_j to obtain

    f_j = f̃(x_j) = Σ_{k=-n/2}^{n/2-1} c_k exp( 2πik jL/(nL) ) = Σ_{k=-n/2}^{n/2-1} c_k e^{2πikj/n} .

Since the solution is given a priori, there is no need to solve these equations; it is enough to show that the above expression for f_j satisfies

    c_k = (1/n) Σ_{j=0}^{n-1} f_j e^{-2πijk/n} .

With this action in mind, consider

    Σ_{j=0}^{n-1} f_j e^{-2πijk/n} = Σ_{j=0}^{n-1} [ Σ_{m=-n/2}^{n/2-1} c_m e^{2πimj/n} ] e^{-2πijk/n}
        = Σ_{m=-n/2}^{n/2-1} c_m Σ_{j=0}^{n-1} e^{2πi(m-k)j/n} .

Now consider the value of the second summation on the right hand side of this equation. If k = m then each term in this summation is one and the value of the sum is n. The same comment applies if k - m is a multiple of n, but this is impossible in this particular problem. So if k - m ≠ 0 then

    Σ_{j=0}^{n-1} e^{2πi(m-k)j/n} = [1 - e^{2πi(m-k)}]/[1 - e^{2πi(m-k)/n}] = 0

since e^{2πi(m-k)} = 1. In conclusion,

    Σ_{j=0}^{n-1} e^{2πi(m-k)j/n} = n if k = m ,   0 if k ≠ m ,

and therefore

    Σ_{j=0}^{n-1} f_j e^{-2πijk/n} = Σ_{m=-n/2}^{n/2-1} c_m Σ_{j=0}^{n-1} e^{2πi(m-k)j/n} = n c_k ,

as required.
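The orthogonality argument is easily exercised numerically: sample a trigonometric sum at the n equally spaced points and recover its coefficients from the conjugate sum. In the Python sketch below, n = 8 and the particular coefficients c_k are arbitrary choices (the length L cancels at the sample points).

```python
import cmath

n = 8
# arbitrary complex coefficients c_k for k = -n/2, ..., n/2 - 1
c = {k: complex(k, 0.5 * k * k) for k in range(-n // 2, n // 2)}

# sample the trigonometric sum at x_j = jL/n
f = [sum(c[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in c)
     for j in range(n)]

def coeff(k):
    # recover c_k = (1/n) sum_j f_j e^{-2 pi i jk/n}
    return sum(f[j] * cmath.exp(-2j * cmath.pi * j * k / n)
               for j in range(n)) / n

max_err = max(abs(coeff(k) - c[k]) for k in c)
```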
Solution  11.
The aim is to find y_1 = y(1/4), y_2 = y(1/2) and y_3 = y(3/4), bearing in mind that y_0 = 0 and y_4 = 1 from the boundary conditions. The central difference formulae indicate that y_1, y_2 and y_3 are the solutions of the linear equations

    | -127/64     1        0    | | y_1 |   |  1/16  |
    |    1     -63/32      1    | | y_2 | = |  1/16  |
    |    0        1    -125/64  | | y_3 |   | -15/16 |

These equations have approximate solution y_1 = 0.173, y_2 = 0.405 and y_3 = 0.687. The Maple code to solve this system of equations is
>   with(linalg):
>   A:=array([[-127/64,1,0],[1,-63/32,1],[0,1,-125/64]]);
>   b:=array([1/16,1/16,-15/16]);
>   x:=linsolve(A,b);
The Maple solution to this problem is

    x = [ 83572/484029 , 196090/484029 , 332732/484029 ] .
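The Maple computation can be cross-checked without Maple. The following Python sketch performs the same elimination in exact rational arithmetic using the standard-library fractions module, and should reproduce the solution of the same 3 x 3 system.

```python
from fractions import Fraction as Fr

A = [[Fr(-127, 64), Fr(1), Fr(0)],
     [Fr(1), Fr(-63, 32), Fr(1)],
     [Fr(0), Fr(1), Fr(-125, 64)]]
b = [Fr(1, 16), Fr(1, 16), Fr(-15, 16)]

def solve3(A, b):
    # Gaussian elimination with back substitution, exact rational arithmetic
    A = [row[:] for row in A]
    b = b[:]
    n = len(b)
    for i in range(n):
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            for col in range(i, n):
                A[r][col] -= m * A[i][col]
            b[r] -= m * b[i]
    x = [Fr(0)] * n
    for i in reversed(range(n)):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

y = solve3(A, b)   # exact rational solution
```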
Solution  12.
The aim is to find y_1 = y(1/4), y_2 = y(1/2), y_3 = y(3/4) and y_4 = y(1), bearing in mind that only y_0 = 1 is known from the boundary conditions. The central difference formulae indicate that y_1, y_2, y_3 and y_4 are the solutions of the linear equations

    | -31/16     1       0      0 | | y_1 |   | -1 |
    |    1    -31/16     1      0 | | y_2 | = |  0 |
    |    0       1    -31/16    1 | | y_3 |   |  0 |
    |    0       1      -4      3 | | y_4 |   |  0 |

These equations have approximate solution y_1 = 1.354, y_2 = 1.623, y_3 = 1.791 and y_4 = 1.847. The Maple code to solve this system of equations is
>   with(linalg):
>   A:=array([[-31/16,1,0,0],[1,-31/16,1,0],[0,1,-31/16,1],[0,1,-4,3]]);
>   b:=array([-1,0,0,0]);
>   x:=linsolve(A,b);
The Maple solution to this problem is

    x = [ 6192/4573 , 7424/4573 , 8192/4573 , 8448/4573 ]
      ≈ [ 1.354034551 , 1.623441942 , 1.791384212 , 1.847364968 ] .

It is straightforward to demonstrate that

    y(x) = cos(1 - x)/cos 1

satisfies the differential equation and the boundary conditions. The exact values of y_1, y_2, y_3 and y_4 are compared with the approximate solution to get

             y_1           y_2           y_3           y_4
    Exact    1.354221259   1.624243599   1.793278339   1.850815718
    Approx   1.354034551   1.623441942   1.791384212   1.847364968
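As an alternative to the Maple session, the following Python sketch solves the same 4 x 4 system by Gaussian elimination with partial pivoting and compares the result with the exact solution y(x) = cos(1 - x)/cos 1.

```python
import math

A = [[-31/16, 1, 0, 0],
     [1, -31/16, 1, 0],
     [0, 1, -31/16, 1],
     [0, 1, -4, 3]]
b = [-1.0, 0.0, 0.0, 0.0]

def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            m = M[r][i] / M[i][i]
            for col in range(i, n + 1):
                M[r][col] -= m * M[i][col]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

y = gauss_solve(A, b)
exact = [math.cos(1 - x) / math.cos(1) for x in (0.25, 0.5, 0.75, 1.0)]
max_err = max(abs(a - e) for a, e in zip(y, exact))
```

The discrepancy max_err reflects the O(h²) truncation error of the difference scheme, not the linear algebra.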
Hand calculation   The hand calculation of the solution to this problem, working to an accuracy of 3dp, involves the manipulation of the augmented matrix as follows:

    | -1.938   1.000   0.000   0.000 | -1.000 |
    |  1.000  -1.938   1.000   0.000 |  0.000 |
    |  0.000   1.000  -1.938   1.000 |  0.000 |
    |  0.000   1.000  -4.000   3.000 |  0.000 |

    | -1.938   1.000   0.000   0.000 | -1.000 |
    |  1.000  -1.938   1.000   0.000 |  0.000 |      (R4 ← R4 - R3)
    |  0.000   1.000  -1.938   1.000 |  0.000 |
    |  0.000   0.000  -2.062   2.000 |  0.000 |

    | -1.938   1.000   0.000   0.000 | -1.000 |
    |  0.000  -1.422   1.000   0.000 | -0.516 |      (R2 ← R2 + R1/1.938)
    |  0.000   1.000  -1.938   1.000 |  0.000 |
    |  0.000   0.000  -2.062   2.000 |  0.000 |

    | -1.938   1.000   0.000   0.000 | -1.000 |
    |  0.000  -1.422   1.000   0.000 | -0.516 |      (R3 ← R3 + R2/1.422)
    |  0.000   0.000  -1.235   1.000 | -0.363 |
    |  0.000   0.000  -2.062   2.000 |  0.000 |

    | -1.938   1.000   0.000   0.000 | -1.000 |
    |  0.000  -1.422   1.000   0.000 | -0.516 |      (rows 3 and 4 interchanged)
    |  0.000   0.000  -2.062   2.000 |  0.000 |
    |  0.000   0.000  -1.235   1.000 | -0.363 |

The final step of this calculation gives the upper triangular form

    | -1.938   1.000   0.000   0.000 | -1.000 |
    |  0.000  -1.422   1.000   0.000 | -0.516 |
    |  0.000   0.000  -2.062   2.000 |  0.000 |
    |  0.000   0.000   0.000  -0.198 | -0.363 |

which in turn leads to the solutions

    y_4 = 1.833 ,   y_3 = 1.778 ,   y_2 = 1.613 ,   y_1 = 1.348 .
Solution  13.
The dissection has nodes x_0 = 1, x_1 = 4/3, x_2 = 5/3 and x_3 = 2. The boundary conditions indicate that y_0 = 0 and y_3 = 4. The remaining unknowns y_1, y_2 are the subject of the linear equations

    | -31    14  | | y_1 |   |  16/9  |
    | 55/2  -49  | | y_2 | = | -785/9 |

with augmented matrix

    B = | -31    14  |  16/9  |     | -31      14     |   16/9   |
        | 55/2  -49  | -785/9 |  →  |  0   -1134/31   | -2655/31 | .

Using back substitution, the solutions are

    y_2 = 2655/1134 = 295/126 ≈ 2.3412 ,   y_1 ≈ 1.0000 .

The exact solution to four decimal places is y(4/3) = 0.9978 and y(5/3) = 2.3394.
Solution  14.
Forward difference   The dissection has nodes x_0 = 1, x_1 = 4/3, x_2 = 5/3 and x_3 = 2. The boundary conditions indicate that only y_3 = 1 is known. The remaining unknowns y_0, y_1 and y_2 are the subject of the linear equations

    | -3     4    -1 | | y_0 |   |    0    |
    | 18   -31    14 | | y_1 | = |  16/9   |
    |  0   55/2  -49 | | y_2 |   | -355/18 |

in which the forward difference formula has been used for y'(1). Alternatively,

    | -17   18     0 | | y_0 |   |    1    |
    |  18  -31    14 | | y_1 | = |  16/9   |
    |   0  55/2  -49 | | y_2 |   | -355/18 |

where the fictitious nodes procedure has been used for y'(1).
Solution  15.
The second order central difference formula for the second derivative carries the remainder

    u''(x) = [u(x+h) - 2u(x) + u(x-h)]/h² - (h²/12) u''''(x) + O(h⁴) .

If u_xx = f(x) then it is clear from this identity that

    f(x) = [u(x+h) - 2u(x) + u(x-h)]/h² - (h²/12) f''(x) + O(h⁴)

which in turn gives

    [u(x+h) - 2u(x) + u(x-h)]/h²
        = f(x) + (h²/12)[ (f(x+h) - 2f(x) + f(x-h))/h² + O(h²) ] + O(h⁴)
        = (1/12)[ f(x-h) + 10f(x) + f(x+h) ] + O(h⁴) .

The equations written out in the question may be derived directly from this identity. In particular, it is clear that this technique may be applied to any second order differential equation of the type u_xx = f(x, u, u_x), and will allow it to be solved to higher accuracy than might initially be imagined.
Solution  16.
Let Φ(h) = u(x+h, y+h) - u(x+h, y-h) - u(x-h, y+h) + u(x-h, y-h); then simple algebra gives

    Φ(h) = 2h [ (u(x+h, y+h) - u(x+h, y-h))/(2h) - (u(x-h, y+h) - u(x-h, y-h))/(2h) ] .

Each fraction is now expanded as a power series about y using the idea that

    (u(z, y+h) - u(z, y-h))/(2h) = u_y(z, y) + (h²/6) u_yyy(z, η) .

The result of this operation is that

    Φ(h) = 2h [ u_y(x+h, y) + (h²/6) u_yyy(x+h, η_1) - u_y(x-h, y) - (h²/6) u_yyy(x-h, η_2) ] .

Applying the same idea to u_y(x+h, y) - u_y(x-h, y) yields

    u_y(x+h, y) - u_y(x-h, y) = 2h u_xy(x, y) + (h³/3) u_xxxy(ξ_1, y) .

This formula is now used to simplify Φ to obtain

    Φ(h) = 4h² u_xy(x, y) + O(h⁴)

from which it is transparent that

    ∂²u(x, y)/∂x∂y = [u(x+h, y+h) - u(x+h, y-h) - u(x-h, y+h) + u(x-h, y-h)]/(4h²) + O(h²) .
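The cross-difference formula just derived is second order accurate, which is easy to observe numerically. In the sketch below the test function sin x e^{2y}, for which u_xy = 2 cos x e^{2y}, and the sample point are arbitrary choices.

```python
import math

def mixed_partial(u, x, y, h):
    # second order cross-difference approximation to u_xy
    return (u(x + h, y + h) - u(x + h, y - h)
            - u(x - h, y + h) + u(x - h, y - h)) / (4 * h**2)

u = lambda x, y: math.sin(x) * math.exp(2 * y)   # u_xy = 2 cos(x) e^{2y}
x, y = 0.4, 0.2
exact = 2 * math.cos(x) * math.exp(2 * y)

errs = [abs(mixed_partial(u, x, y, h) - exact) for h in (0.1, 0.05)]
ratio = errs[0] / errs[1]   # close to 4 for a second order formula
```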
Solution  17.
Under the change of variable x = e^t, it is easy to see that x u_x = u_t. The original equation can be re-expressed in the form x(x u_x)_x + u = x, which then becomes u_tt + u = e^t under the given change of variable. The points x = ε and x = 1 become respectively t = log ε and t = 0. The general solution for u(t) is

    u(t) = A sin t + B cos t + e^t/2

where A and B are determined by the conditions u(log ε) = u(0) = 0. The discretised equation for u is

    (u_{k+1} - 2u_k + u_{k-1})/h² + u_k = e^{t_k}   ⟹   u_{k+1} - (2 - h²) u_k + u_{k-1} = h² e^{t_k}

where u_0 = u_n = 0. The unknown values u_1, ..., u_{n-1} satisfy the linear system

    -(2 - h²) u_1 + u_2                  = h² e^{t_1}
    u_1 - (2 - h²) u_2 + u_3             = h² e^{t_2}
        ...
    u_{k-1} - (2 - h²) u_k + u_{k+1}     = h² e^{t_k}
        ...
    u_{n-2} - (2 - h²) u_{n-1}           = h² e^{t_{n-1}} .

This approach is superior because the transformed differential equation is no longer singular.
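The tridiagonal system above is solved efficiently by the Thomas algorithm (tridiagonal Gaussian elimination). The following Python sketch does this for ε = 0.1 (an arbitrary choice) and compares the result with the exact solution u(t) = A sin t + B cos t + e^t/2, where A and B are fixed by u(log ε) = u(0) = 0.

```python
import math

def thomas(a, b, c, d):
    # solve a tridiagonal system: a = sub-, b = main, c = super-diagonal
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

eps = 0.1
T = math.log(eps)             # left endpoint in the t variable
n = 200
h = -T / n
t = [T + k * h for k in range(n + 1)]

# interior equations: u_{k+1} - (2 - h^2) u_k + u_{k-1} = h^2 e^{t_k}
m = n - 1
sub = [1.0] * m
diag = [-(2 - h * h)] * m
sup = [1.0] * m
rhs = [h * h * math.exp(t[k]) for k in range(1, n)]
u = thomas(sub, diag, sup, rhs)

# exact solution with u(T) = u(0) = 0
B = -0.5
A = (math.cos(T) - math.exp(T)) / (2 * math.sin(T))
exact = [A * math.sin(tk) + B * math.cos(tk) + math.exp(tk) / 2 for tk in t[1:-1]]
max_err = max(abs(ue - ux) for ue, ux in zip(u, exact))
```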
Solution  18.
Clearly u = sin x satisfies the boundary conditions and the differential equation.
In the usual notation, the central difference representation of the differential equation is

    (u_{k+1} - 2u_k + u_{k-1})/h² + 2u_k (u_{k+1} - u_{k-1})/(2h) + u_k = sin 2x_k

where u_0 = u_n = 0. This equation further simplifies to

    u_{k+1} - 2u_k + u_{k-1} + h u_k (u_{k+1} - u_{k-1}) + h² u_k = h² sin 2x_k .

Taking account of the boundary conditions, the unknowns u_1, ..., u_{n-1} now satisfy F_1 = F_2 = ... = F_{n-1} = 0 where

    F_1     = u_2 - 2u_1 + h u_1 u_2 + h² u_1 - h² sin 2x_1
        ...
    F_k     = u_{k+1} - 2u_k + u_{k-1} + h u_k (u_{k+1} - u_{k-1}) + h² u_k - h² sin 2x_k
        ...
    F_{n-1} = -2u_{n-1} + u_{n-2} - h u_{n-1} u_{n-2} + h² u_{n-1} - h² sin 2x_{n-1} .

The Jacobian matrix of F is the tridiagonal matrix J with entries

    ∂F_k/∂u_{k-1} = 1 - h u_k ,
    ∂F_k/∂u_k     = h(u_{k+1} - u_{k-1}) + h² - 2 ,
    ∂F_k/∂u_{k+1} = 1 + h u_k ,

with the conventions u_0 = u_n = 0, so that in the first row ∂F_1/∂u_1 = h u_2 + h² - 2 and in the last row ∂F_{n-1}/∂u_{n-1} = -h u_{n-2} + h² - 2.

Another possible approach is to rewrite the equations F_1 = F_2 = ... = F_{n-1} = 0 in the iterative form

    u_1     = [ u_2 + h u_1 u_2 - h² sin 2x_1 ] / (2 - h²)
        ...
    u_k     = [ u_{k+1} + u_{k-1} + h u_k (u_{k+1} - u_{k-1}) - h² sin 2x_k ] / (2 - h²)
        ...
    u_{n-1} = [ u_{n-2} - h u_{n-1} u_{n-2} - h² sin 2x_{n-1} ] / (2 - h²) .
Solution  19.
(a) To establish this result first recognise that

    u(x-h, y) + u(x+h, y) = 2u(x, y) + h² ∂²u(x, y)/∂x² + (h⁴/12) ∂⁴u(x, y)/∂x⁴ + O(h⁶) .

This result is now used with y replaced by y+h and y-h to get

    [u(x-h, y+h) + u(x+h, y+h)] + [u(x-h, y-h) + u(x+h, y-h)]
        = 2u(x, y+h) + 2u(x, y-h) + h² ∂²u(x, y+h)/∂x² + h² ∂²u(x, y-h)/∂x²
          + (h⁴/12) ∂⁴u(x, y+h)/∂x⁴ + (h⁴/12) ∂⁴u(x, y-h)/∂x⁴ + O(h⁶) .           (6.1)

Taylor's theorem is now used to assert that

    u(x, y-h) + u(x, y+h) = 2u(x, y) + h² ∂²u/∂y² + (h⁴/12) ∂⁴u/∂y⁴ + O(h⁶)
    ∂²u(x, y+h)/∂x² + ∂²u(x, y-h)/∂x² = 2 ∂²u/∂x² + h² ∂⁴u/∂x²∂y² + O(h⁴)
    ∂⁴u(x, y+h)/∂x⁴ + ∂⁴u(x, y-h)/∂x⁴ = 2 ∂⁴u/∂x⁴ + O(h²) .

These results are now inserted into (6.1) to obtain

    u(x-h, y+h) + u(x+h, y+h) - 4u(x, y) + u(x-h, y-h) + u(x+h, y-h)
        = 2h² [ ∂²u/∂y² + ∂²u/∂x² ] + (h⁴/6) [ ∂⁴u/∂y⁴ + 6 ∂⁴u/∂x²∂y² + ∂⁴u/∂x⁴ ] + O(h⁶)   (6.2)

where all derivatives are evaluated at (x, y). The immediate conclusion from this calculation is

    ∂²u/∂y² + ∂²u/∂x² = (1/2h²)[ u(x-h, y+h) + u(x+h, y+h) - 4u(x, y)
                                 + u(x-h, y-h) + u(x+h, y-h) ] + O(h²) .             (6.3)

In particular, if u satisfies Laplace's equation then u_xxxx + 2u_xxyy + u_yyyy = 0, so that the bracketed fourth order term in (6.2) reduces to 4 ∂⁴u/∂x²∂y². This simplification leads to the particular result

    ∂²u/∂y² + ∂²u/∂x² = (1/2h²)[ u(x-h, y+h) + u(x+h, y+h) - 4u(x, y)
                                 + u(x-h, y-h) + u(x+h, y-h) ]
                        - (h²/3) ∂⁴u/∂x²∂y² + O(h⁴) .                                (6.4)
(b) This result is a combination of result (a) and the standard finite difference approximation for the Laplacian of u. Consider first the Taylor series approximations used in the derivation of the standard central difference formula for the Laplacian of u. These are

    u(x-h, y) + u(x+h, y) = 2u(x, y) + h² ∂²u/∂x² + (h⁴/12) ∂⁴u/∂x⁴ + O(h⁶) ,
    u(x, y-h) + u(x, y+h) = 2u(x, y) + h² ∂²u/∂y² + (h⁴/12) ∂⁴u/∂y⁴ + O(h⁶) .

These results are added together to give

    u(x-h, y) + u(x+h, y) - 4u(x, y) + u(x, y-h) + u(x, y+h)
        = h² [ ∂²u/∂x² + ∂²u/∂y² ] + (h⁴/12) [ ∂⁴u/∂x⁴ + ∂⁴u/∂y⁴ ] + O(h⁶)

where all derivatives are evaluated at (x, y). The immediate conclusion from this calculation is

    ∂²u/∂y² + ∂²u/∂x² = (1/h²)[ u(x-h, y) + u(x, y+h) - 4u(x, y)
                                + u(x, y-h) + u(x+h, y) ] + O(h²) .                  (6.5)

When u satisfies Laplace's equation then u_xxxx + 2u_xxyy + u_yyyy = 0 and equation (6.5) takes on the particular form

    ∂²u/∂y² + ∂²u/∂x² = (1/h²)[ u(x-h, y) + u(x, y+h) - 4u(x, y)
                                + u(x, y-h) + u(x+h, y) ]
                        + (h²/6) ∂⁴u/∂x²∂y² + O(h⁴) .                                (6.6)

The result quoted in the question is constructed by multiplying equation (6.3) by 2h², adding 4h² times equation (6.5), and dividing by 6h². The truncation error is O(h²) in general; when u satisfies Laplace's equation, the same combination applied to (6.4) and (6.6) shows that the h² error terms cancel, and the truncation error improves to O(h⁴).
Solution  20.
It is required to find the eigenvalues of H_GJ = -D⁻¹(L + U) and H_GS = -(D + L)⁻¹U when

    A = L + D + U = |  1    2   -4 |
                    | 1/8   1    1 |
                    |  1    4    1 | .

Gauss-Jacobi
If H_GJ X = λX then (λD + L + U)X = 0. Thus λ satisfies the equation

    |  λ    2   -4 |
    | 1/8   λ    1 | = λ³ - λ/4 = 0 .
    |  1    4    λ |

The eigenvalues are λ = 0 and λ = ±1/2. All eigenvalues lie within the unit circle and so the Gauss-Jacobi algorithm converges for these equations.

Gauss-Seidel
If H_GS X = λX then (λD + λL + U)X = 0. Thus λ satisfies the equation

    |  λ    2   -4 |
    | λ/8   λ    1 | = λ³ + (7/4)λ² - 2λ = 0 .
    |  λ    4    λ |

The eigenvalues are λ = 0 and λ = (-7 ± √177)/8. Not all eigenvalues lie within the unit circle and so the Gauss-Seidel algorithm diverges for these equations.
Solution  21.
This is a repetition of the previous problem. It is required to find the eigenvalues of H_GJ = -D⁻¹(L + U) and H_GS = -(D + L)⁻¹U when

    A = L + D + U = | 3  2  1 |
                    | 2  3  2 |
                    | 1  2  3 | .

Gauss-Jacobi
If H_GJ X = λX then (λD + L + U)X = 0. Thus λ satisfies the equation

    | 3λ  2   1  |
    | 2   3λ  2  | = 27λ³ - 27λ + 8 = 0 .
    | 1   2   3λ |

This cubic factorises into the product (3λ - 1)(9λ² + 3λ - 8). Consequently, the eigenvalues of H_GJ are λ = 1/3 and λ = (-1 ± √33)/6. Since (-1 - √33)/6 ≈ -1.124 lies outside the unit circle, the Gauss-Jacobi algorithm diverges for these equations.

Gauss-Seidel
If H_GS X = λX then (λD + λL + U)X = 0. Thus λ satisfies the equation

    | 3λ  2   1  |
    | 2λ  3λ  2  | = 27λ³ - 23λ² + 4λ = 0 .
    | λ   2λ  3λ |

The eigenvalues are λ = 0 and λ = (23 ± √97)/54. All eigenvalues lie within the unit circle and so the Gauss-Seidel algorithm converges for these equations.

Solution  22.
For the matrix of this problem,

    A = |  1   c  1 |
        |  c   1  c |
        | -c²  c  1 | ,

the Gauss-Seidel eigenvalues satisfy (λD + λL + U)X = 0, that is,

    |  λ    c   1 |
    |  cλ   λ   c | = λ³ - c⁴λ = 0 .
    | -c²λ  cλ  λ |

The eigenvalues are λ = 0 and λ = ±c². The spectral radius is ρ = c², and therefore the Gauss-Seidel algorithm converges for these equations provided |c| < 1.

The Gauss-Jacobi algorithm for these equations with c = 2 requires that the solutions of the equation

    |  λ   2  1 |
    |  2   λ  2 | = λ³ - 4λ - 12 = 0
    | -4   2  λ |

lie within the unit circle. If g(λ) = λ³ - 4λ - 12 then g(1) = -15 and g(λ) → ∞ as λ → ∞. Clearly there is a zero outside the unit circle and so the Gauss-Jacobi algorithm diverges for these equations.
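A quick numerical check of the two characteristic cubics quoted in Solution 21 confirms both factorisations and the convergence verdicts (Gauss-Jacobi diverges, Gauss-Seidel converges for that matrix).

```python
import math

# Gauss-Jacobi characteristic cubic for A = [[3,2,1],[2,3,2],[1,2,3]]
p = lambda lam: 27 * lam**3 - 27 * lam + 8
gj_roots = [1 / 3, (-1 + math.sqrt(33)) / 6, (-1 - math.sqrt(33)) / 6]
gj_residuals = [abs(p(r)) for r in gj_roots]
rho_gj = max(abs(r) for r in gj_roots)    # spectral radius of H_GJ

# Gauss-Seidel characteristic cubic for the same matrix
q = lambda lam: 27 * lam**3 - 23 * lam**2 + 4 * lam
gs_roots = [0.0, (23 + math.sqrt(97)) / 54, (23 - math.sqrt(97)) / 54]
gs_residuals = [abs(q(r)) for r in gs_roots]
rho_gs = max(abs(r) for r in gs_roots)    # spectral radius of H_GS
```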
Solution  23.
Consider the matrix identity

    | 1    0 | | 1        0         | | α  Wᵀ |   | 1      0         | | α  Wᵀ |   | α  Wᵀ |
    | V/α  I | | 0   C₁ - VWᵀ/α    | | 0  I  | = | V/α  C₁ - VWᵀ/α | | 0  I  | = | V  C₁ | .

Suppose that C₁ - VWᵀ/α = L₁U₁; then it follows directly from the identity that

    | α  Wᵀ |   | 1    0 | | 1    0   | | α  Wᵀ |   | 1    0  | | α  Wᵀ |
    | V  C₁ | = | V/α  I | | 0  L₁U₁ | | 0  I  | = | V/α  L₁ | | 0  U₁ | .

In particular, if Aᵀ is diagonally dominant then the first row of A is the pivotal row. The task is now to show that if Aᵀ is diagonally dominant then so is C₂ᵀ, where C₂ = L₁U₁ = C₁ - VWᵀ/α. Now

    Σ_{i=1, i≠j}^{n-1} |(C₂)_{ij}| = Σ_{i=1, i≠j}^{n-1} |(C₁)_{ij} - v_i w_j/α|
        ≤ Σ_{i=1, i≠j}^{n-1} |(C₁)_{ij}| + (|w_j|/|α|) Σ_{i=1, i≠j}^{n-1} |v_i|
        = [ Σ_{i=1, i≠j+1}^{n} |A_{i,j+1}| - |w_j| ] + (|w_j|/|α|) [ Σ_{i=1}^{n-1} |v_i| - |v_j| ]
        < [ |(C₁)_{jj}| - |w_j| ] + (|w_j|/|α|) [ |α| - |v_j| ]
        = |(C₁)_{jj}| - |w_j v_j|/|α|
        ≤ | (C₁)_{jj} - w_j v_j/α | = |(C₂)_{jj}|

where the last inequality is an application of |a| - |b| ≤ ||a| - |b|| ≤ |a - b|. Thus the matrix C₂ᵀ is diagonally dominant. This analysis forms the basis of an induction argument demonstrating that the k-th iteration of the LU decomposition of A may take the k-th row as pivotal row; in effect, no pivoting is required in the LU factorisation of A provided Aᵀ is diagonally dominant.
Solution  24.
(a) To appreciate why each diagonal entry of a symmetric positive definite matrix is positive, it is enough to note that every such matrix can be represented in the form LDLᵀ, where L is a unit lower triangular matrix and D is a diagonal matrix with positive diagonal entries. [Of course, D and L in the representation LDLᵀ are different from their usage in the main body of this solution.] Furthermore, LDLᵀ = (LD^{1/2})(LD^{1/2})ᵀ = L₁L₁ᵀ with L₁ = LD^{1/2} non-singular. Thus

    A_{ii} = Σ_{k=1}^{n} (L₁)_{ik} (L₁ᵀ)_{ki} = Σ_{k=1}^{n} (L₁)²_{ik} > 0 ,

the sum being positive because no row of the non-singular matrix L₁ can vanish. Thus each diagonal entry of A is positive.

(b) Since A = L + D + U is symmetric, U = Lᵀ. Take D = S² and construct H₁ = SHS⁻¹. Clearly H and H₁ are similar matrices and therefore have the same eigenvalues. Also

    H₁ = SHS⁻¹ = -S(D + L)⁻¹ Lᵀ S⁻¹
       = -S(S² + L)⁻¹ Lᵀ S⁻¹
       = -S[ S(I + S⁻¹LS⁻¹)S ]⁻¹ Lᵀ S⁻¹
       = -SS⁻¹ (I + S⁻¹LS⁻¹)⁻¹ S⁻¹ Lᵀ S⁻¹
       = -(I + S⁻¹LS⁻¹)⁻¹ S⁻¹ Lᵀ S⁻¹
       = -(I + L₁)⁻¹ L₁ᵀ

where L₁ = S⁻¹LS⁻¹.

(c) Furthermore, D^{-1/2} A D^{-1/2} = S⁻¹(D + L + Lᵀ)S⁻¹. By writing D = S², it follows that

    D^{-1/2} A D^{-1/2} = I + S⁻¹LS⁻¹ + S⁻¹LᵀS⁻¹ = I + L₁ + L₁ᵀ .

(d) To show that the Gauss-Seidel algorithm always converges, it is required to show that all the eigenvalues of H = -(D + L)⁻¹U = -(D + L)⁻¹Lᵀ lie within the unit circle. Let X be an eigenvector of H₁ with eigenvalue λ; then H₁X = λX. Therefore,

    -(I + L₁)⁻¹ L₁ᵀ X = λX   ⟹   -L₁ᵀ X = λ(I + L₁)X   ⟹   -Xᴴ L₁ᵀ X = λ Xᴴ(I + L₁)X

where Xᴴ denotes the transpose of the complex conjugate of X. Take Y = X/|X|; then it follows that -Yᴴ L₁ᵀ Y = λ(1 + Yᴴ L₁ Y). Let Yᴴ L₁ᵀ Y = a + ib; then Yᴴ L₁ Y = a - ib and

    λ = -(a + ib)/(1 + a - ib)   ⟹   |λ|² - 1 = -(1 + 2a)/(1 + 2a + a² + b²) .

Returning to result (c), it follows immediately that

    Xᴴ D^{-1/2} A D^{-1/2} X = Xᴴ(I + L₁ + L₁ᵀ)X = XᴴX + Xᴴ L₁ X + Xᴴ L₁ᵀ X > 0 .

On dividing by XᴴX, we obtain

    1 + Yᴴ L₁ Y + Yᴴ L₁ᵀ Y = 1 + (a - ib) + (a + ib) = 1 + 2a > 0

and therefore |λ|² - 1 < 0, leading to |λ| < 1. Thus all the eigenvalues of H₁, and so of H, lie within the unit circle, which in turn ensures that the Gauss-Seidel algorithm always converges if A is a symmetric and positive definite matrix.
Solution  25.
The region in which the solution is sought consists of sixteen squares as illustrated in the gure.
[Figure: the square -1 ≤ x, y ≤ 1 with u = 0 on each side of the boundary; the interior nodes carry the unknowns u_1, u_2, u_3, u_4.]
The discretised form of the partial differential equation is

    u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j} = -2h²

where h = 1/2 and -2 ≤ i ≤ 2, -2 ≤ j ≤ 2. We consider each internal node in turn.

    Node   Position     Equation
    1      (0, 0)       u_{1,0} + u_{-1,0} + u_{0,1} + u_{0,-1} - 4u_{0,0} = -2h²
    2      (1/2, 0)     u_{2,0} + u_{0,0} + u_{1,1} + u_{1,-1} - 4u_{1,0} = -2h²
    4      (0, 1/2)     u_{1,1} + u_{-1,1} + u_{0,2} + u_{0,0} - 4u_{0,1} = -2h²
    5      (1/2, 1/2)   u_{2,1} + u_{0,1} + u_{1,2} + u_{1,0} - 4u_{1,1} = -2h²

Furthermore, symmetry requires that u_1 = u_{0,0}, u_2 = u_{1,0} = u_{-1,0}, u_3 = u_{0,1} = u_{0,-1} and u_4 = u_{1,1} = u_{-1,1} = u_{1,-1} = u_{-1,-1}. The boundary conditions are now introduced into the finite difference equations to obtain

    Node   Equation
    1      2u_2 + 2u_3 - 4u_1 = -2h²
    2      u_1 + 2u_4 - 4u_2 = -2h²
    4      2u_4 + u_1 - 4u_3 = -2h²
    5      u_3 + u_2 - 4u_4 = -2h²

It is clear from the second and third equations that u_2 = u_3. With this simplification, u_1, u_2 and u_4 satisfy

    u_1 - u_2 = 1/8 ,   u_1 - 4u_2 + 2u_4 = -1/2 ,   u_2 - 2u_4 = -1/4 .

Eliminating u_1 between equations 1 and 2 gives

    -3u_2 + 2u_4 = -5/8
     u_2 - 2u_4  = -1/4     ⟹   u_1 = 9/16 ,   u_2 = 7/16 ,   u_4 = 11/32 .
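The values u_1 = 9/16, u_2 = u_3 = 7/16 and u_4 = 11/32 can be verified in exact arithmetic against the three reduced equations (with the signs as reconstructed here: u_1 - u_2 = 1/8, u_1 - 4u_2 + 2u_4 = -1/2 and u_2 - 2u_4 = -1/4).

```python
from fractions import Fraction as Fr

u1, u2, u4 = Fr(9, 16), Fr(7, 16), Fr(11, 32)

# residuals of the three reduced equations; each should vanish exactly
r1 = u1 - u2 - Fr(1, 8)
r2 = u1 - 4 * u2 + 2 * u4 + Fr(1, 2)
r3 = u2 - 2 * u4 + Fr(1, 4)
```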
Solution  26.
The discretised form of the partial differential equation is

    u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j} = h² u³_{i,j}

where h = 1/2. We consider each internal node in turn.

    Node   Position     Equation
    1      (0, 0)       u_{1,0} + u_{-1,0} + u_{0,1} + u_{0,-1} - 4u_{0,0} = h²u³_{0,0}
    2      (1/2, 0)     u_{2,0} + u_{0,0} + u_{1,1} + u_{1,-1} - 4u_{1,0} = h²u³_{1,0}
    4      (0, 1/2)     u_{1,1} + u_{-1,1} + u_{0,2} + u_{0,0} - 4u_{0,1} = h²u³_{0,1}
    5      (1/2, 1/2)   u_{2,1} + u_{0,1} + u_{1,2} + u_{1,0} - 4u_{1,1} = h²u³_{1,1}

Taking advantage of symmetry and the boundary conditions, it is clear that

    u_1 = u_{0,0} ,   u_2 = u_{1,0} = u_{-1,0} ,   u_3 = u_{2,0} ,
    u_4 = u_{0,1} = u_{0,-1} ,   u_5 = u_{1,1} = u_{-1,1} = u_{1,-1} ,   u_6 = u_{2,1} .

The simplified form of the finite difference equations is now

    Node   Equation
    1      2u_2 + 2u_4 - 4u_1 = h²u_1³
    2      u_3 + u_1 + 2u_5 - 4u_2 = h²u_2³
    4      2u_5 + c + u_1 - 4u_4 = h²u_4³
    5      u_6 + u_4 + c + u_2 - 4u_5 = h²u_5³ .

The treatment of the gradient boundary condition can be done in two ways.

Fictitious nodes   Using this approach, it follows that u_{3,0} = u_{1,0} = u_2 and u_{3,1} = u_{1,1} = u_5. The partial differential equation applied at nodes 3 and 6 yields

    Node   Equation
    3      u_{3,0} + u_{1,0} + u_{2,1} + u_{2,-1} - 4u_{2,0} = h²u³_{2,0}   ⟹   2u_2 + 2u_6 - 4u_3 = h²u_3³
    6      u_{3,1} + u_{1,1} + u_{2,2} + u_{2,0} - 4u_{2,1} = h²u³_{2,1}   ⟹   2u_5 + c + u_3 - 4u_6 = h²u_6³ .

The finite difference equations are now rewritten in iterative form by solving the equation derived from node k for u_k to get

    Node   Fixed point form                        Gauss-Seidel iteration
    1      u_1 = (2u_2 + 2u_4)/(4 + h²u_1²)        u_1^{(k+1)} = (2u_2^{(k)} + 2u_4^{(k)})/(4 + h²[u_1^{(k)}]²)
    2      u_2 = (u_3 + u_1 + 2u_5)/(4 + h²u_2²)   u_2^{(k+1)} = (u_3^{(k)} + u_1^{(k+1)} + 2u_5^{(k)})/(4 + h²[u_2^{(k)}]²)
    3      u_3 = (2u_2 + 2u_6)/(4 + h²u_3²)        u_3^{(k+1)} = (2u_2^{(k+1)} + 2u_6^{(k)})/(4 + h²[u_3^{(k)}]²)
    4      u_4 = (2u_5 + c + u_1)/(4 + h²u_4²)     u_4^{(k+1)} = (2u_5^{(k)} + c + u_1^{(k+1)})/(4 + h²[u_4^{(k)}]²)
    5      u_5 = (u_6 + u_4 + c + u_2)/(4 + h²u_5²)  u_5^{(k+1)} = (u_6^{(k)} + u_4^{(k+1)} + c + u_2^{(k+1)})/(4 + h²[u_5^{(k)}]²)
    6      u_6 = (2u_5 + c + u_3)/(4 + h²u_6²)     u_6^{(k+1)} = (2u_5^{(k+1)} + c + u_3^{(k+1)})/(4 + h²[u_6^{(k)}]²)

Take as starting condition u_1^{(0)} = u_2^{(0)} = u_3^{(0)} = u_4^{(0)} = u_5^{(0)} = u_6^{(0)} = c.

Backward difference   Using this approach, it follows that

    Node 3:   u_x(1, 0) = [u_{0,0} - 4u_{1,0} + 3u_{2,0}]/(2h) = 0   ⟹   u_1 - 4u_2 + 3u_3 = 0
    Node 6:   u_x(1, 1/2) = [u_{0,1} - 4u_{1,1} + 3u_{2,1}]/(2h) = 0   ⟹   u_4 - 4u_5 + 3u_6 = 0 .

These equations are now used to eliminate u_3 and u_6 to obtain

    h²u_1³ = 2u_2 + 2u_4 - 4u_1
    h²u_2³ = (2/3)u_1 + 2u_5 - (8/3)u_2
    h²u_4³ = 2u_5 + c + u_1 - 4u_4
    h²u_5³ = (2/3)u_4 + c + u_2 - (8/3)u_5 .

These equations are rewritten in fixed point form for u_1, u_2, u_4 and u_5.
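The Gauss-Seidel iteration in the fictitious-node table above is easy to drive to a fixed point. In the Python sketch below the boundary value c = 1 is an arbitrary choice of data; the residuals of the six finite difference equations are checked at the computed fixed point.

```python
c = 1.0
h2 = 0.25                     # h = 1/2, so h^2 = 1/4
u = [c] * 6                   # u[0..5] = u_1..u_6, starting from the boundary value

def sweep(u):
    # one Gauss-Seidel sweep of the six fixed point equations
    u1, u2, u3, u4, u5, u6 = u
    u1 = (2*u2 + 2*u4) / (4 + h2 * u1**2)
    u2 = (u3 + u1 + 2*u5) / (4 + h2 * u2**2)
    u3 = (2*u2 + 2*u6) / (4 + h2 * u3**2)
    u4 = (2*u5 + c + u1) / (4 + h2 * u4**2)
    u5 = (u6 + u4 + c + u2) / (4 + h2 * u5**2)
    u6 = (2*u5 + c + u3) / (4 + h2 * u6**2)
    return [u1, u2, u3, u4, u5, u6]

for _ in range(200):
    new = sweep(u)
    done = max(abs(a - b) for a, b in zip(new, u)) < 1e-12
    u = new
    if done:
        break

# residuals of the six finite difference equations at the fixed point
res = [
    2*u[1] + 2*u[3] - 4*u[0] - h2 * u[0]**3,
    u[2] + u[0] + 2*u[4] - 4*u[1] - h2 * u[1]**3,
    2*u[1] + 2*u[5] - 4*u[2] - h2 * u[2]**3,
    2*u[4] + c + u[0] - 4*u[3] - h2 * u[3]**3,
    u[5] + u[3] + c + u[1] - 4*u[4] - h2 * u[4]**3,
    2*u[4] + c + u[2] - 4*u[5] - h2 * u[5]**3,
]
max_res = max(abs(r) for r in res)
```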
Solution  27.
Each term in the finite difference scheme is replaced by its appropriate Taylor series. The component parts are

    (u_{i,j+1} - u_{i,j-1})/(2k) = ∂u_{i,j}/∂t + O(k²)
    (u_{i,j} - u_{i,j-1})/k = ∂u_{i,j}/∂t - (k/2) ∂²u_{i,j}/∂t² + O(k²)
    (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h² = ∂²u_{i,j}/∂x² + (h²/12) ∂⁴u_{i,j}/∂x⁴ + O(h⁴) .

The component parts are now assembled and lead to the conclusion that

    θ [ (u_{i,j+1} - u_{i,j-1})/(2k) ] + (1-θ) [ (u_{i,j} - u_{i,j-1})/k ]
        - [ (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h² ]
    = θ ∂u_{i,j}/∂t + (1-θ) [ ∂u_{i,j}/∂t - (k/2) ∂²u_{i,j}/∂t² ]
        - [ ∂²u_{i,j}/∂x² + (h²/12) ∂⁴u_{i,j}/∂x⁴ ] + O(k², h⁴)
    = -(k(1-θ)/2) ∂²u_{i,j}/∂t² - (h²/12) ∂⁴u_{i,j}/∂x⁴ + O(k², h⁴)

taking account of the fact that u_t = u_xx. Thus the truncation error is

    -(k(1-θ)/2) ∂²u/∂t² - (h²/12) ∂⁴u/∂x⁴ + O(k², h⁴) .

Furthermore, u_tt = u_txx = u_xxxx, and so this error term further simplifies to

    -(h²/12) [ 6r(1-θ) + 1 ] ∂²u/∂t² + O(k², h⁴) ,   r = k/h² .

The leading term is annihilated by choosing θ so that 6r(1-θ) + 1 = 0, that is, θ = 1 + 1/(6r).
106   CHAPTER  6.   NSPDE  SOLUTIONS
Solution 28.

The explicit Euler representation of the partial differential equation is

\[
u_{i,j+1} = r\,u_{i+1,j} + (1 - 2r)\,u_{i,j} + r\,u_{i-1,j}
\]

where r = k/h².  In this application k = 1/20 and h = 1/4, and therefore r = 4/5 and the Euler algorithm becomes

\[
u_{i,j+1} = \frac{1}{5}\big(4u_{i+1,j} - 3u_{i,j} + 4u_{i-1,j}\big).
\]

The boundary conditions are u_{0,j} = u_{4,j} = 0.  The system of difference equations to be solved is

\[
\begin{aligned}
u_{1,j+1} &= \frac{1}{5}\big(4u_{2,j} - 3u_{1,j}\big),\\
u_{2,j+1} &= \frac{1}{5}\big(4u_{3,j} - 3u_{2,j} + 4u_{1,j}\big),\\
u_{3,j+1} &= \frac{1}{5}\big(-3u_{3,j} + 4u_{2,j}\big)
\end{aligned}
\]

with initial conditions u_{1,0} = 1/\sqrt{2}, u_{2,0} = 1 and u_{3,0} = 1/\sqrt{2}.  It is immediately obvious by subtracting the first and third equations that

\[
u_{3,j+1} - u_{1,j+1} = -\frac{3}{5}\big(u_{3,j} - u_{1,j}\big).
\]

Since the initial value of the difference is zero, the difference remains zero, that is, u_{1,j} = u_{3,j}.  Thus u_{1,j} and u_{2,j} satisfy

\[
u_{1,j+1} = \frac{1}{5}\big(4u_{2,j} - 3u_{1,j}\big), \qquad
u_{2,j+1} = \frac{1}{5}\big(8u_{1,j} - 3u_{2,j}\big)
\]

with initial conditions u_{1,0} = 1/\sqrt{2}, u_{2,0} = 1.  We are required to estimate u_{2,2}.  The first integration gives

\[
u_{1,1} = \frac{1}{5}\Big(4 - \frac{3}{\sqrt 2}\Big) = \frac{1}{5\sqrt 2}\big(4\sqrt 2 - 3\big), \qquad
u_{2,1} = \frac{1}{5}\big(4\sqrt 2 - 3\big).
\]

The second integration gives

\[
u_{1,2} = \frac{1}{5}\big(4u_{2,1} - 3u_{1,1}\big) = \frac{1}{25\sqrt 2}\big(4\sqrt 2 - 3\big)^2, \qquad
u_{2,2} = \frac{1}{5}\big(8u_{1,1} - 3u_{2,1}\big) = \frac{1}{25}\big(4\sqrt 2 - 3\big)^2.
\]

Therefore u_{1,2} ≈ 0.1997 (0.2635) and u_{2,2} ≈ 0.2824 (0.3727), where the bracketed values come from the exact solution u(x, t) = e^{-\pi^2 t} \sin \pi x.
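The two Euler steps above are easy to check in code; the snippet below is a direct transcription of the reduced two-variable recurrence, with nothing new assumed.

```python
import math

# Two steps of the reduced explicit Euler recurrence (r = 4/5, u_{1,j} = u_{3,j}).
u1, u2 = 1/math.sqrt(2), 1.0     # initial conditions u_{1,0}, u_{2,0}
for _ in range(2):               # two time steps of size k = 1/20
    u1, u2 = (4*u2 - 3*u1)/5, (8*u1 - 3*u2)/5

# u1 and u2 now hold u_{1,2} and u_{2,2}.
```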
Solution 29.

(a)  The Crank-Nicolson algorithm with h = 1/4 and k = 0.1 corresponds to the Courant number r = 1.6.  The re-arranged Crank-Nicolson iterative scheme is therefore

\[
-u_{i+1,j+1} + \frac{13}{4}\,u_{i,j+1} - u_{i-1,j+1}
= u_{i+1,j} - \frac{3}{4}\,u_{i,j} + u_{i-1,j}\,.
\]

The boundary conditions are u_{0,j} = u_{4,j} = 0, giving the simplified iterative algorithm

\[
\begin{aligned}
-u_{2,j+1} + \tfrac{13}{4}\,u_{1,j+1} &= u_{2,j} - \tfrac{3}{4}\,u_{1,j},\\
-u_{3,j+1} + \tfrac{13}{4}\,u_{2,j+1} - u_{1,j+1} &= u_{3,j} - \tfrac{3}{4}\,u_{2,j} + u_{1,j},\\
\tfrac{13}{4}\,u_{3,j+1} - u_{2,j+1} &= -\tfrac{3}{4}\,u_{3,j} + u_{2,j}.
\end{aligned}
\]

As in the previous example, the solution is symmetric about x = 1/2 so that u_{1,j} = u_{3,j}.  With this additional simplification, it follows that

\[
\left.\begin{aligned}
\tfrac{13}{4}\,u_{1,j+1} - u_{2,j+1} &= -\tfrac{3}{4}\,u_{1,j} + u_{2,j}\\
-2u_{1,j+1} + \tfrac{13}{4}\,u_{2,j+1} &= 2u_{1,j} - \tfrac{3}{4}\,u_{2,j}
\end{aligned}\right\}
\quad\Rightarrow\quad
\begin{aligned}
u_{1,j+1} &= -\frac{7u_{1,j}}{137} + \frac{40u_{2,j}}{137},\\
u_{2,j+1} &= \frac{80u_{1,j}}{137} - \frac{7u_{2,j}}{137}.
\end{aligned}
\]

With initial conditions u_{1,0} = 1/\sqrt{2} and u_{2,0} = 1, it is simple arithmetic to verify that u_{1,1} = 0.2558 (0.2635) and u_{2,1} = 0.3618 (0.3727).

(b)  The Crank-Nicolson algorithm with h = 1/4 and k = 0.025 corresponds to the Courant number r = 0.4.  The re-arranged Crank-Nicolson iterative scheme is therefore

\[
-u_{i+1,j+1} + 7u_{i,j+1} - u_{i-1,j+1} = u_{i+1,j} + 3u_{i,j} + u_{i-1,j}\,.
\]

The boundary conditions are u_{0,j} = u_{4,j} = 0 and give rise to the simplified iterative algorithm

\[
\begin{aligned}
-u_{2,j+1} + 7u_{1,j+1} &= u_{2,j} + 3u_{1,j},\\
-u_{3,j+1} + 7u_{2,j+1} - u_{1,j+1} &= u_{3,j} + 3u_{2,j} + u_{1,j},\\
7u_{3,j+1} - u_{2,j+1} &= 3u_{3,j} + u_{2,j}.
\end{aligned}
\]

The symmetry of the solution about x = 1/2 means that u_{1,j} = u_{3,j}.  With this additional simplification, it follows that

\[
\left.\begin{aligned}
7u_{1,j+1} - u_{2,j+1} &= 3u_{1,j} + u_{2,j}\\
-2u_{1,j+1} + 7u_{2,j+1} &= 2u_{1,j} + 3u_{2,j}
\end{aligned}\right\}
\quad\Rightarrow\quad
\begin{aligned}
u_{1,j+1} &= \frac{23u_{1,j}}{47} + \frac{10u_{2,j}}{47},\\
u_{2,j+1} &= \frac{20u_{1,j}}{47} + \frac{23u_{2,j}}{47}.
\end{aligned}
\]

With initial conditions u_{1,0} = 1/\sqrt{2} and u_{2,0} = 1, it is straightforward arithmetic to verify that u_{1,1} = 0.55880 and u_{2,1} = 0.79026.  One further calculation gives u_{2,2} = 0.62451 (0.61050).
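The single Crank-Nicolson step of part (a) is a one-line computation once the 2x2 system has been inverted; the code below simply replays the reduced recurrence derived above.

```python
import math

# One Crank-Nicolson step of the reduced pair from part (a) with r = 1.6.
u1, u2 = 1/math.sqrt(2), 1.0     # u_{1,0}, u_{2,0}
u1, u2 = (-7*u1 + 40*u2)/137, (80*u1 - 7*u2)/137
```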
Solution 30.

The explicit Euler representation of the partial differential equation is

\[
u_{i,j+1} - u_{i,j} = r\,x_i\,\big(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}\big)
\]

where r = k/h².  This formula may in turn be re-written in the explicit form

\[
u_{i,j+1} = r\,x_i\,u_{i+1,j} + (1 - 2r\,x_i)\,u_{i,j} + r\,x_i\,u_{i-1,j}.
\]

The fictitious nodes procedure for treating the gradient boundary condition introduces a fictitious solution u_{n+1,j} and removes this solution by eliminating it between the equations

\[
\frac{\partial u(1/2, t)}{\partial x} = -\frac{u_{n,j}}{2}
= \frac{u_{n+1,j} - u_{n-1,j}}{2h}
\quad\Rightarrow\quad
u_{n+1,j} = u_{n-1,j} - h\,u_{n,j},
\]
\[
\frac{\partial u(1/2, t)}{\partial t} = \frac{1}{2}\,\frac{\partial^2 u(1/2, t)}{\partial x^2}
\quad\Rightarrow\quad
u_{n,j+1} = \frac{r}{2}\,u_{n+1,j} + (1 - r)\,u_{n,j} + \frac{r}{2}\,u_{n-1,j}.
\]

The result of this elimination is the condition

\[
u_{n,j+1} = \Big(1 - r - \frac{rh}{2}\Big)u_{n,j} + r\,u_{n-1,j}.
\]

The boundary condition at x = 0 is u_{0,j} = 0, and therefore the numerical problem to be solved consists of the equations

\[
\begin{aligned}
i = 1:&\quad u_{1,j+1} = (1 - 2rh)\,u_{1,j} + rh\,u_{2,j},\\
1 < i < n:&\quad u_{i,j+1} = r\,x_i\,u_{i+1,j} + (1 - 2r\,x_i)\,u_{i,j} + r\,x_i\,u_{i-1,j},\\
i = n:&\quad u_{n,j+1} = \Big(1 - r - \frac{rh}{2}\Big)u_{n,j} + r\,u_{n-1,j}.
\end{aligned}
\]

In this example k = 0.005 and h = 1/10, and therefore r = 1/2 and n = 5.  The explicit Euler algorithm is now particularised to the equations

\[
\begin{aligned}
u_{1,j+1} &= \frac{18u_{1,j} + u_{2,j}}{20},\\
u_{2,j+1} &= \frac{u_{1,j} + 8u_{2,j} + u_{3,j}}{10},\\
u_{3,j+1} &= \frac{3u_{2,j} + 14u_{3,j} + 3u_{4,j}}{20},\\
u_{4,j+1} &= \frac{u_{3,j} + 3u_{4,j} + u_{5,j}}{5},\\
u_{5,j+1} &= \frac{20u_{4,j} + 19u_{5,j}}{40}
\end{aligned}
\]

which are iterated starting with

\[
u_{1,0} = 0.09, \quad u_{2,0} = 0.16, \quad u_{3,0} = 0.21, \quad u_{4,0} = 0.24.
\]

To determine whether or not the solution is well-behaved at (1/2, 0), we check if the gradient boundary condition is satisfied there.  Here u(1/2, 0) = 0.25 while the gradient of the initial data is zero there, so the condition u_x = −u/2 is not satisfied instantaneously at t = 0, and we therefore expect trouble with the numerical scheme.
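One step of the particularised equations makes the boundary behaviour visible.  The value u_{5,0} = 0.25 used below is an assumption, taken from the apparent initial data u(x, 0) = x(1 − x) suggested by the listed values.

```python
# One step of the particularised explicit Euler equations above.
# u5 = 0.25 is an assumed initial value, consistent with u(x,0) = x(1-x).
u1, u2, u3, u4, u5 = 0.09, 0.16, 0.21, 0.24, 0.25
u1, u2, u3, u4, u5 = (
    (18*u1 + u2)/20,
    (u1 + 8*u2 + u3)/10,
    (3*u2 + 14*u3 + 3*u4)/20,
    (u3 + 3*u4 + u5)/5,
    (20*u4 + 19*u5)/40,
)
```

The boundary value drops from 0.25 to 0.23875 in a single step of size k = 0.005, a much larger relative change than at the interior nodes: the first symptom of the incompatible boundary data.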
Solution 31.

The general iterative scheme in the previous question is

\[
\begin{aligned}
i = 1:&\quad u_{1,j+1} = (1 - 2rh)\,u_{1,j} + rh\,u_{2,j},\\
1 < i < n:&\quad u_{i,j+1} = r\,x_i\,u_{i+1,j} + (1 - 2r\,x_i)\,u_{i,j} + r\,x_i\,u_{i-1,j},\\
i = n:&\quad u_{n,j+1} = \Big(1 - r - \frac{rh}{2}\Big)u_{n,j} + r\,u_{n-1,j}.
\end{aligned}
\]

The solution of this difference equation may therefore be written in the form U_{j+1} = A U_j where the matrix A has form

\[
A = \begin{bmatrix}
1-2rh & rh & 0 & \cdots & & 0\\
& \ddots & \ddots & \ddots & &\\
& r\,x_i & 1-2r\,x_i & r\,x_i & &\\
& & \ddots & \ddots & \ddots &\\
0 & & \cdots & 0 & r & 1-r-\tfrac{rh}{2}
\end{bmatrix}.
\]

We require the eigenvalues of this matrix to lie within the unit circle.  The first Gershgorin circle is |z − (1 − 2rh)| = rh.  This circle has centre on the x axis and the extremities of its diameter are at the points (1 − rh, 0) and (1 − 3rh, 0).  The second Gershgorin circle is |z − (1 − 2rx_i)| = 2rx_i.  This circle also has centre on the x axis and the extremities of its diameter are at the points (1, 0) and (1 − 4rx_i, 0), where 0 < x_i < 1/2.  These two circles are contained within the unit circle centred on the origin provided

\[
\left.\begin{aligned} 1 - 3rh &> -1\\ 1 - 4r\,x_i &> -1 \end{aligned}\right\}
\quad\Rightarrow\quad
\left.\begin{aligned} 2 &> 3rh\\ 1 &> 2r\,x_i \end{aligned}\right\}
\quad\Rightarrow\quad r < 1
\]

since 0 < x_i < 1/2.  The third Gershgorin circle is

\[
\Big|\,z - \Big(1 - r - \frac{rh}{2}\Big)\Big| = r.
\]

This circle also has centre on the x axis and its diameter extends over the interval

\[
1 - 2r - \frac{rh}{2} < x < 1 - \frac{rh}{2} < 1.
\]

Thus the third Gershgorin circle is entirely contained within the unit circle provided

\[
1 - 2r - \frac{rh}{2} > -1
\quad\Rightarrow\quad
r < \frac{4}{4+h}.
\]
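The Gershgorin argument can be checked numerically by building the three families of discs.  In this sketch n is an illustrative choice and x_i = ih with h = 1/(2n), so that 0 < x_i < 1/2; note that the middle discs always touch the unit circle at (1, 0), so containment is tested with a small tolerance.

```python
def discs(n, r):
    """Gershgorin discs (centre, radius) of the matrix A above, h = 1/(2n)."""
    h = 0.5/n
    d = [(1 - 2*r*h, r*h)]                               # i = 1
    d += [(1 - 2*r*i*h, 2*r*i*h) for i in range(2, n)]   # 1 < i < n
    d += [(1 - r - r*h/2, r)]                            # i = n
    return d

def contained(n, r):
    """All discs inside the closed unit disc (centres lie on the real axis)."""
    return all(abs(c) + rad <= 1 + 1e-12 for c, rad in discs(n, r))

n = 5
h = 0.5/n
ok = contained(n, 0.999*4/(4 + h))      # just inside the bound r < 4/(4+h)
bad = contained(n, 1.05*4/(4 + h))      # just outside: third disc escapes
```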
Solution 32.

Some useful Taylor series expansions of u(x + h, t + k) are

\[
\begin{aligned}
\frac{u_{i,j+1} - u_{i,j-1}}{2k} &= \frac{\partial u_{i,j}}{\partial t} + O(k^2),\\
\frac{u_{i,j+1} - u_{i,j}}{k} &= \frac{\partial u_{i,j}}{\partial t} + \frac{k}{2}\frac{\partial^2 u_{i,j}}{\partial t^2} + O(k^2),\\
\frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2} &= \frac{\partial^2 u_{i,j}}{\partial x^2} + \frac{h^2}{12}\frac{\partial^4 u_{i,j}}{\partial x^4} + O(h^4).
\end{aligned}
\]

The left hand side of the iterative scheme may be rewritten

\[
\begin{aligned}
&\frac{1}{2k}\Big(2u_{i,j+1} - (u_{i+1,j} + u_{i-1,j})\Big)
+ \frac{u_{i+1,j} - u_{i-1,j}}{2h} - f_{i,j}\\
&\quad= \frac{1}{2k}\Big(2(u_{i,j+1} - u_{i,j}) - (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})\Big)
+ \frac{u_{i+1,j} - u_{i-1,j}}{2h} - f_{i,j}\\
&\quad= \frac{1}{2k}\Big(2k\frac{\partial u_{i,j}}{\partial t} + k^2\frac{\partial^2 u_{i,j}}{\partial t^2}
- h^2\frac{\partial^2 u_{i,j}}{\partial x^2} - \frac{h^4}{12}\frac{\partial^4 u_{i,j}}{\partial x^4}
+ O(k^3, h^6)\Big) + \frac{\partial u_{i,j}}{\partial x} + O(h^2) - f_{i,j}\\
&\quad= \frac{\partial u_{i,j}}{\partial t} + \frac{\partial u_{i,j}}{\partial x} - f_{i,j}
+ \frac{k}{2}\Big(\frac{\partial^2 u_{i,j}}{\partial t^2}
- \frac{h^2}{k^2}\frac{\partial^2 u_{i,j}}{\partial x^2}
- \frac{h^4}{12k^2}\frac{\partial^4 u_{i,j}}{\partial x^4}\Big) + O(k^2, h^6/k, h^2)\\
&\quad= \frac{k}{2}\Big(\frac{\partial^2 u_{i,j}}{\partial t^2}
- \frac{h^2}{k^2}\frac{\partial^2 u_{i,j}}{\partial x^2}
- \frac{h^4}{12k^2}\frac{\partial^4 u_{i,j}}{\partial x^4}\Big) + O(k^2, h^6/k, h^2)
\end{aligned}
\]

where the last equality uses the differential equation u_t + u_x = f.

(a)  Take r = k/h; then the remainder is

\[
\frac{k}{2}\Big(\frac{\partial^2 u_{i,j}}{\partial t^2}
- \frac{1}{r^2}\frac{\partial^2 u_{i,j}}{\partial x^2}
- \frac{h^2}{12r^2}\frac{\partial^4 u_{i,j}}{\partial x^4}\Big) + O(k^2, h^5, h^2).
\]

As h → 0 and k → 0 such that r is constant, the remainder also tends to zero and the scheme is consistent.

(b)  Take r = k/h²; then the remainder is

\[
\frac{1}{2}\Big(k\frac{\partial^2 u_{i,j}}{\partial t^2}
- \frac{1}{r}\frac{\partial^2 u_{i,j}}{\partial x^2}
- \frac{h^2}{12r}\frac{\partial^4 u_{i,j}}{\partial x^4}\Big) + O(k^2, h^4, h^2).
\]

When h → 0 and k → 0 such that r is constant, the remainder no longer tends to zero and so the scheme is not consistent.  The scheme now gives the solution to the equation

\[
\frac{\partial u}{\partial t} + \frac{\partial u}{\partial x}
- \frac{c}{2}\,\frac{\partial^2 u}{\partial x^2} - f(x,t) = 0
\]

where c = 1/r is a positive constant.
Solution 33.

The numerical scheme is

\[
u_{i,j+1} - u_{i,j} = r\Big(\theta\,(u_{i-1,j+1} - 2u_{i,j+1} + u_{i+1,j+1})
+ (1-\theta)\,(u_{i-1,j} - 2u_{i,j} + u_{i+1,j})\Big)
\]

in the usual notation where θ ∈ [0, 1], h = 1/n and r = k/h² is the Courant number.  The stability of this numerical scheme is investigated by substituting

\[
u_{p,q} = \xi^q e^{i\beta p h}
\]

to obtain

\[
\begin{aligned}
\xi^{q+1}e^{i\beta ph} - \xi^q e^{i\beta ph}
= r\Big(&\theta\,(\xi^{q+1}e^{i\beta(p-1)h} - 2\xi^{q+1}e^{i\beta ph} + \xi^{q+1}e^{i\beta(p+1)h})\\
&+ (1-\theta)\,(\xi^q e^{i\beta(p-1)h} - 2\xi^q e^{i\beta ph} + \xi^q e^{i\beta(p+1)h})\Big).
\end{aligned}
\]

Cancellation of ξ^q e^{iβph} now simplifies the previous expression to give

\[
\begin{aligned}
\xi - 1 &= r\Big(\theta\xi\,(e^{-i\beta h} - 2 + e^{i\beta h}) + (1-\theta)\,(e^{-i\beta h} - 2 + e^{i\beta h})\Big)\\
&= r\,(1 - \theta + \theta\xi)\,(e^{-i\beta h} - 2 + e^{i\beta h})\\
&= 2r\,(1 - \theta + \theta\xi)\,(\cos\beta h - 1).
\end{aligned}
\]

This is a linear equation for ξ with solution

\[
\xi = \frac{1 - 2r(1-\theta)(1 - \cos\beta h)}{1 + 2r\theta\,(1 - \cos\beta h)}.
\]

The denominator of this fraction is always positive and so |ξ| < 1 provided

\[
\begin{aligned}
& -1 - 2r\theta\,(1 - \cos\beta h) < 1 - 2r(1-\theta)(1 - \cos\beta h) < 1 + 2r\theta\,(1 - \cos\beta h)\\
\Rightarrow\;& -2 - 2r\theta\,(1 - \cos\beta h) < -2r(1-\theta)(1 - \cos\beta h) < 2r\theta\,(1 - \cos\beta h)\\
\Rightarrow\;& -1 - r\theta\,(1 - \cos\beta h) < -r(1-\theta)(1 - \cos\beta h) < r\theta\,(1 - \cos\beta h)\\
\Rightarrow\;& -1 + r(1 - 2\theta)(1 - \cos\beta h) < 0 < r\,(1 - \cos\beta h).
\end{aligned}
\]

The right hand inequality simply asserts that r must be positive, and so this inequality is satisfied automatically.  The left hand inequality asserts that

\[
r < \frac{1}{2(1 - 2\theta)\sin^2(\beta h/2)}.
\]

In particular, θ ∈ [0, 1/2) is required for this upper bound to be positive.  Subject to this constraint on θ, r must be less than the minimum upper bound as β varies, that is,

\[
0 < r < \frac{1}{2(1 - 2\theta)}\,, \qquad \theta \in [0, 1/2).
\]
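The derived bound can be probed numerically.  Writing A = 1 − cos βh ∈ [0, 2], the amplification factor is sampled for several θ ∈ [0, 1/2) with r just inside the bound 1/(2(1 − 2θ)), and then at one point outside it; the sampled values of θ and r are arbitrary choices.

```python
def xi(r, theta, A):
    """Amplification factor of the theta scheme; A = 1 - cos(beta*h) in [0, 2]."""
    return (1 - 2*r*(1 - theta)*A) / (1 + 2*r*theta*A)

stable = True
for theta in (0.0, 0.25, 0.4):
    r = 0.999 / (2*(1 - 2*theta))          # just inside the bound
    for j in range(101):
        if abs(xi(r, theta, 2*j/100)) > 1:
            stable = False

unstable = abs(xi(0.6, 0.0, 2.0)) > 1      # r = 0.6 > 1/2 fails for theta = 0
```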
Solution 34.

Let T denote the tri-diagonal matrix of the question, then

\[
T = \begin{bmatrix}
a & b & 0 & \cdots & 0\\
c & a & b & & \vdots\\
0 & \ddots & \ddots & \ddots & 0\\
\vdots & & \ddots & \ddots & b\\
0 & \cdots & 0 & c & a
\end{bmatrix}
\]

and let the column vector X_k be defined by the formula

\[
X_k = \Big[\,\Big(\frac{c}{b}\Big)^{1/2}\sin\theta_k,\;
\Big(\frac{c}{b}\Big)\sin 2\theta_k,\;\cdots,\;
\Big(\frac{c}{b}\Big)^{j/2}\sin j\theta_k,\;\cdots,\;
\Big(\frac{c}{b}\Big)^{n/2}\sin n\theta_k\,\Big]^T.
\]

To establish the required result, it is enough to demonstrate that T X_k = λ_k X_k for any suitable integer k.  The computation of the first and last entries of T X_k requires special attention.

The first row of T X_k gives the matrix product

\[
\begin{aligned}
a\Big(\frac{c}{b}\Big)^{1/2}\sin\theta_k + b\Big(\frac{c}{b}\Big)\sin 2\theta_k
&= \Big(\frac{c}{b}\Big)^{1/2}\Big(a\sin\theta_k + 2b\Big(\frac{c}{b}\Big)^{1/2}\sin\theta_k\cos\theta_k\Big)\\
&= \Big(\frac{c}{b}\Big)^{1/2}\sin\theta_k\,\big(a + 2\sqrt{bc}\,\cos\theta_k\big)\\
&= \lambda_k\Big(\frac{c}{b}\Big)^{1/2}\sin\theta_k\,.
\end{aligned}
\]

Intermediate rows of T X_k, say the j-th row, give the matrix product

\[
\begin{aligned}
&c\Big(\frac{c}{b}\Big)^{(j-1)/2}\sin(j-1)\theta_k
+ a\Big(\frac{c}{b}\Big)^{j/2}\sin j\theta_k
+ b\Big(\frac{c}{b}\Big)^{(j+1)/2}\sin(j+1)\theta_k\\
&\quad= \Big(\frac{c}{b}\Big)^{j/2}\Big(\sqrt{bc}\,\sin(j-1)\theta_k + a\sin j\theta_k + \sqrt{bc}\,\sin(j+1)\theta_k\Big)\\
&\quad= \Big(\frac{c}{b}\Big)^{j/2}\Big(2\sqrt{bc}\,\sin j\theta_k\cos\theta_k + a\sin j\theta_k\Big)\\
&\quad= \Big(\frac{c}{b}\Big)^{j/2}\sin j\theta_k\,\big(a + 2\sqrt{bc}\,\cos\theta_k\big)
= \lambda_k\Big(\frac{c}{b}\Big)^{j/2}\sin j\theta_k\,.
\end{aligned}
\]

The last row of T X_k gives the matrix product

\[
c\Big(\frac{c}{b}\Big)^{(n-1)/2}\sin(n-1)\theta_k + a\Big(\frac{c}{b}\Big)^{n/2}\sin n\theta_k
= \Big(\frac{c}{b}\Big)^{n/2}\big(\sqrt{bc}\,\sin(n-1)\theta_k + a\sin n\theta_k\big).
\]

The identity sin(n−1)θ_k + sin(n+1)θ_k = 2 sin nθ_k cos θ_k implies that sin(n−1)θ_k = 2 sin nθ_k cos θ_k, in view of the definition of θ_k (which guarantees sin(n+1)θ_k = 0).  Therefore,

\[
c\Big(\frac{c}{b}\Big)^{(n-1)/2}\sin(n-1)\theta_k + a\Big(\frac{c}{b}\Big)^{n/2}\sin n\theta_k
= \lambda_k\Big(\frac{c}{b}\Big)^{n/2}\sin n\theta_k\,.
\]

In conclusion, λ_k is an eigenvalue of T with eigenvector X_k.
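The eigenvalue identity T X_k = λ_k X_k is easy to confirm numerically for sample entries with bc > 0, assuming the usual definitions θ_k = kπ/(n + 1) and λ_k = a + 2√(bc) cos θ_k from the question; the values of a, b, c and n below are arbitrary illustrative choices.

```python
import math

# Verify T X_k = lambda_k X_k row by row for a sample tri-diagonal matrix.
a, b, c, n = 2.0, 0.5, 1.5, 6
err = 0.0
for k in range(1, n + 1):
    th = k*math.pi/(n + 1)                      # theta_k
    lam = a + 2*math.sqrt(b*c)*math.cos(th)     # lambda_k
    x = [(c/b)**(j/2)*math.sin(j*th) for j in range(1, n + 1)]
    for j in range(n):                          # row j of T applied to x
        lower = c*x[j-1] if j > 0 else 0.0
        upper = b*x[j+1] if j < n - 1 else 0.0
        err = max(err, abs(lower + a*x[j] + upper - lam*x[j]))
```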
Solution 35.

The numerical scheme is

\[
u_{i,j+1} - u_{i,j} = r\,(u_{i-1,j+1} - 2u_{i,j+1} + u_{i+1,j+1})
\]

in the usual notation where h = 1/n and r = k/h² is the Courant number.

(a)  The Fourier method investigates the stability of the numerical scheme by substituting

\[
u_{p,q} = \xi^q e^{i\beta p h}
\]

in the scheme to obtain

\[
\xi^{q+1}e^{i\beta ph} - \xi^q e^{i\beta ph}
= r\,(\xi^{q+1}e^{i\beta(p-1)h} - 2\xi^{q+1}e^{i\beta ph} + \xi^{q+1}e^{i\beta(p+1)h})\,.
\]

After cancellation of ξ^q e^{iβph} this equation simplifies to

\[
\xi - 1 = r\xi\,(e^{-i\beta h} - 2 + e^{i\beta h})
= 2r\xi\,(\cos\beta h - 1) = -4r\xi\sin^2(\beta h/2)\,.
\]

Thus it is seen that

\[
\xi = \frac{1}{1 + 4r\sin^2(\beta h/2)}\,.
\]

Since 0 < ξ < 1 for all choices of r > 0 and all β with sin²(βh/2) > 0, the numerical algorithm is unconditionally stable.

(b)  The Matrix method investigates the stability of the numerical scheme

\[
u_{i,j+1} - u_{i,j} = r\,(u_{i-1,j+1} - 2u_{i,j+1} + u_{i+1,j+1})
\]

by rewriting it in the form

\[
-r\,u_{i-1,j+1} + (1 + 2r)\,u_{i,j+1} - r\,u_{i+1,j+1} = u_{i,j}\,.
\]

The boundary conditions are u_{0,j} = u_{n,j} = 0 and therefore the equations to be solved are

\[
\begin{aligned}
i = 1:&\quad (1+2r)\,u_{1,j+1} - r\,u_{2,j+1} = u_{1,j},\\
1 < i < n-1:&\quad -r\,u_{i-1,j+1} + (1+2r)\,u_{i,j+1} - r\,u_{i+1,j+1} = u_{i,j},\\
i = n-1:&\quad -r\,u_{n-2,j+1} + (1+2r)\,u_{n-1,j+1} = u_{n-1,j}\,.
\end{aligned}
\]

These equations can be assembled into the tri-diagonal form T U_{j+1} = U_j where T is the tri-diagonal matrix

\[
T = \begin{bmatrix}
1+2r & -r & & & \\
-r & 1+2r & -r & & \\
& \ddots & \ddots & \ddots & \\
& & -r & 1+2r & -r\\
& & & -r & 1+2r
\end{bmatrix}.
\]

The numerical scheme T U_{j+1} = U_j is stable provided the eigenvalues of T^{-1} all lie within the unit circle, or equivalently, provided the eigenvalues of T all lie outside the unit circle.  The eigenvalues of T are

\[
\lambda_i = 1 + 2r - 2r\cos(i\pi/n) = 1 + 2r\,(1 - \cos(i\pi/n)) = 1 + 4r\sin^2(i\pi/2n) > 1
\]

and so the previous condition is satisfied and the numerical scheme is unconditionally stable.
Solution 36.

(a)  The Euler algorithm is

\[
u_{i,j+1} - r\,u_{i+1,j} - (1 - 2r)\,u_{i,j} - r\,u_{i-1,j} = 0\,.
\]

To establish the consistency of this scheme, it is convenient to re-express the scheme in the form

\[
u_{i,j+1} - u_{i,j} - r\,(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) = 0\,.
\]

The component quantities appearing in this algorithm are now replaced by their Taylor series forms

\[
\begin{aligned}
u_{i,j+1} - u_{i,j} &= k\frac{\partial u_{i,j}}{\partial t} + \frac{k^2}{2}\frac{\partial^2 u_{i,j}}{\partial t^2} + O(k^3),\\
u_{i+1,j} - 2u_{i,j} + u_{i-1,j} &= h^2\frac{\partial^2 u_{i,j}}{\partial x^2} + \frac{h^4}{12}\frac{\partial^4 u_{i,j}}{\partial x^4} + O(h^6)
\end{aligned}
\]

to obtain

\[
\begin{aligned}
&u_{i,j+1} - u_{i,j} - r\,(u_{i+1,j} - 2u_{i,j} + u_{i-1,j})\\
&\quad= k\frac{\partial u_{i,j}}{\partial t} + \frac{k^2}{2}\frac{\partial^2 u_{i,j}}{\partial t^2}
- r\Big(h^2\frac{\partial^2 u_{i,j}}{\partial x^2} + \frac{h^4}{12}\frac{\partial^4 u_{i,j}}{\partial x^4}\Big) + O(k^3, rh^6)\\
&\quad= k\Big(\frac{\partial u_{i,j}}{\partial t} - \frac{\partial^2 u_{i,j}}{\partial x^2}\Big)
+ \frac{k^2}{2}\frac{\partial^2 u_{i,j}}{\partial t^2} - \frac{kh^2}{12}\frac{\partial^4 u_{i,j}}{\partial x^4} + O(k^3, kh^4)\\
&\quad= \frac{k^2}{2}\frac{\partial^2 u_{i,j}}{\partial t^2} - \frac{kh^2}{12}\frac{\partial^4 u_{i,j}}{\partial x^4} + O(k^3, kh^4)\,.
\end{aligned}
\]

This tends towards zero as h → 0⁺ and k → 0⁺, and so the Euler algorithm is consistent.  Note again that the choice r = 1/6 is particularly beneficial in the case of the diffusion equation.

(b)  The Richardson algorithm is

\[
u_{i,j+1} - u_{i,j-1} - 2r\,(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) = 0\,.
\]

The component quantities appearing in this algorithm are now replaced by their Taylor series forms

\[
\begin{aligned}
u_{i,j+1} - u_{i,j-1} &= 2k\frac{\partial u_{i,j}}{\partial t} + \frac{k^3}{3}\frac{\partial^3 u_{i,j}}{\partial t^3} + O(k^5),\\
u_{i+1,j} - 2u_{i,j} + u_{i-1,j} &= h^2\frac{\partial^2 u_{i,j}}{\partial x^2} + \frac{h^4}{12}\frac{\partial^4 u_{i,j}}{\partial x^4} + O(h^6)
\end{aligned}
\]

to obtain

\[
\begin{aligned}
&u_{i,j+1} - u_{i,j-1} - 2r\,(u_{i+1,j} - 2u_{i,j} + u_{i-1,j})\\
&\quad= 2k\frac{\partial u_{i,j}}{\partial t}
- 2r\Big(h^2\frac{\partial^2 u_{i,j}}{\partial x^2} + \frac{h^4}{12}\frac{\partial^4 u_{i,j}}{\partial x^4}\Big) + O(k^3, rh^6)\\
&\quad= 2k\Big(\frac{\partial u_{i,j}}{\partial t} - \frac{\partial^2 u_{i,j}}{\partial x^2}\Big)
- \frac{kh^2}{6}\frac{\partial^4 u_{i,j}}{\partial x^4} + O(k^3, kh^4)\\
&\quad= -\frac{kh^2}{6}\frac{\partial^4 u_{i,j}}{\partial x^4} + O(k^3, kh^4)\,.
\end{aligned}
\]

This tends towards zero as h → 0⁺ and k → 0⁺, and so the Richardson algorithm is consistent.
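The consistency of the Euler scheme can also be observed numerically: apply the scheme's residual operator to the smooth exact solution u = e^{−t} sin x of u_t = u_xx and watch the residual shrink like the leading term k²/2 u_tt − kh²/12 u_xxxx, which behaves like h⁴ when r = k/h² is held fixed.  The sample point and step sizes below are arbitrary choices.

```python
import math

def euler_residual(h, k):
    """Residual of the Euler scheme applied to the exact solution e^{-t} sin x."""
    r = k / h**2
    u = lambda x, t: math.exp(-t) * math.sin(x)
    x0, t0 = 0.7, 0.3                      # arbitrary sample point
    return (u(x0, t0 + k) - u(x0, t0)
            - r*(u(x0 + h, t0) - 2*u(x0, t0) + u(x0 - h, t0)))

R1 = euler_residual(0.1, 0.001)            # r = 0.1
R2 = euler_residual(0.05, 0.00025)         # h halved, k quartered: same r
ratio = R1 / R2                            # expect roughly 2^4 = 16
```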
Solution 37.

The stability of the 2-D ADI algorithm is investigated by the substitution

\[
u^n_{p,q} = \xi^n e^{i(p\alpha + q\beta)h}.
\]

The calculation is based on the auxiliary observations that

\[
\begin{aligned}
\delta_x^2 u &= (e^{i\alpha h} - 2 + e^{-i\alpha h})\,u = 2(\cos\alpha h - 1)\,u = -4\sin^2(\alpha h/2)\,u,\\
\delta_y^2 u &= (e^{i\beta h} - 2 + e^{-i\beta h})\,u = 2(\cos\beta h - 1)\,u = -4\sin^2(\beta h/2)\,u.
\end{aligned}
\]

The intermediate quantity \bar u^{\,n+1} is first eliminated between the equations

\[
(1 - r\delta_x^2)\,\bar u^{\,n+1} = (1 + r\delta_y^2)\,u^n, \qquad
(1 - r\delta_y^2)\,u^{n+1} = \bar u^{\,n+1} - r\delta_y^2\,u^n
\]

by applying the operator (1 − rδ_x²) to the second equation and then taking advantage of the first equation.  The result of this procedure is

\[
(1 - r\delta_x^2)(1 - r\delta_y^2)\,u^{n+1}
= (1 + r\delta_y^2)\,u^n - r(1 - r\delta_x^2)\,\delta_y^2\,u^n.
\]

Taking account of the auxiliary observations,

\[
\begin{aligned}
\big(1 + 4r\sin^2(\alpha h/2)\big)\big(1 + 4r\sin^2(\beta h/2)\big)\,\xi
&= \big(1 - 4r\sin^2(\beta h/2)\big) + 4r\sin^2(\beta h/2)\big(1 + 4r\sin^2(\alpha h/2)\big)\\
&= 1 + 16r^2\sin^2(\alpha h/2)\sin^2(\beta h/2)\,.
\end{aligned}
\]

Thus we see that

\[
0 < \xi = \frac{1 + 16r^2\sin^2(\alpha h/2)\sin^2(\beta h/2)}
{1 + 4r\sin^2(\alpha h/2) + 4r\sin^2(\beta h/2) + 16r^2\sin^2(\alpha h/2)\sin^2(\beta h/2)} < 1\,.
\]

Thus the 2-D ADI method is unconditionally stable.
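The closed form for ξ can be scanned over r and the two sin² factors to confirm 0 < ξ ≤ 1, with ξ = 1 only when both factors vanish; the sampled values of r are arbitrary.

```python
# Scan the 2-D ADI amplification factor xi over r and
# sa = sin^2(alpha h/2), sb = sin^2(beta h/2) in [0, 1].
def xi(r, sa, sb):
    num = 1 + 16*r**2*sa*sb
    den = 1 + 4*r*sa + 4*r*sb + 16*r**2*sa*sb
    return num/den

worst = 0.0
for r in (0.1, 1.0, 10.0):            # any r > 0
    for i in range(21):
        for j in range(21):
            worst = max(worst, xi(r, i/20, j/20))
```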
Solution  38.
The the 3-D ADI method is given by the iterative scheme
(1 r
2
x
)u
n+1
=   (1 +r
2
y
 +r
2
z
)u
n
,
(1 r
2
y
)u
n+2
=   (1 +r
2
x
 +r
2
z
)u
n+1
,
(1 r
2
z
)u
n+3
=   (1 +r
2
x
 +r
2
y
)u
n+2
.
The quantity  u
n+1
is rst eliminated by multiplying the second equation by the operator (1  r
2
x
)
and then using the rst equation.   The result of this procedure is
(1 r
2
x
)(1 r
2
y
)u
n+2
= (1 +r
2
x
 +r
2
z
)(1 r
2
x
)u
n+1
= (1 +r
2
x
 +r
2
z
)(1 +r
2
y
 +r
2
z
)u
n
.
The quantity u
n+2
is now eliminated by multiplying the third equation by the operator (1r
2
x
)(1 
r
2
y
) and then using the equation above.   The result of this procedure is
(1 r
2
x
)(1 r
2
y
)(1 r
2
z
)u
n+3
= (1 +r
2
x
 +r
2
y
)(1 +r
2
x
 +r
2
z
)(1 +r
2
y
 +r
2
z
)u
n
.
The stability of the 3-D ADI algorithm is investigated by the substitution
u
n
p,q
 = 
n
e
i(p+q+s)h
.
117
The calculation is based on the auxiliary observation that
2
x
u   =   (e
ih
2 +e
ih
) u = 2(cos h 1) u = 4 sin
2
(h/2) u,
2
y
 u   =   (e
ih
2 +e
ih
) u = 2(cos h 1) u = 4 sin
2
(h/2) u,
2
z
 u   =   (e
ih
2 +e
ih
) u = 2(cos h 1) u = 4 sin
2
(h/2) u.
As in the previous question, it is straight forward algebra to conrm that   satises
3
=
  [1 4r(sin
2
(h/2) + sin
2
(h/2))][1 4r(sin
2
(h/2) + sin
2
(h/2)][1 4r(sin
2
(h/2) + sin
2
(h/2)]
(1 + 4r sin
2
(h/2))(1 + 4r sin
2
(h/2))(1 + 4r sin
2
(h/2))
  .
In order to facilitate the discussion of  
3
, write
x = 4r sin
2
(h/2) ,   y = 4r sin
2
(h/2) ,   z = 4r sin
2
(h/2)
so that
3
=
  [1 x y][1 x z][1 y z]
[1 +x][1 +y][1 +z]
  .
By symmetry, the minimum/maximum values of  
3
occur where  x = y = z.   In this case
3
=
_
 1 2x
1 +x
_
3
,
and this value will lie within || < 1 provided
1 <
  1 2x
1 +x
  < 1 ,      1 x < 1 2x < 1 +x,      2 +x < 0 < 3x.
Since  x  0 by construction then we require 4r sin
2
(h/2) < 2 for all choices of  .   Thus  r  1/2 to
guarantee stability.
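The bound r ≤ 1/2 can be probed by scanning ξ³ over a grid of the substituted variables x, y, z ∈ [0, 4r]; the grid resolution below is an arbitrary choice.

```python
# Scan the 3-D ADI amplification factor over a grid of x, y, z values,
# where x = 4 r sin^2(alpha h/2) etc., so each variable ranges over [0, 4r].
def xi_cubed(x, y, z):
    return ((1 - x - y)*(1 - x - z)*(1 - y - z)) / ((1 + x)*(1 + y)*(1 + z))

def worst(r, m=20):
    vals = [4*r*i/m for i in range(m + 1)]
    return max(abs(xi_cubed(x, y, z)) for x in vals for y in vals for z in vals)

w_half = worst(0.5)   # r = 1/2: |xi^3| never exceeds 1 (it touches 1 at x=y=z=2)
w_big = worst(0.6)    # r = 0.6: the bound fails
```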
Solution 39.

The central difference numerical scheme for the solution of the diffusion equation is

\[
\frac{du_k}{dt} = \frac{1}{h^2}\big(u_{k+1} - 2u_k + u_{k-1}\big) + f_k(t) + O(h^2),
\qquad k = 1, 2, \cdots, n-1
\]

where f_k(t) = f(kh, t).  The solution at x = 0 is given by the boundary condition u_0(t) = 0, and therefore it remains to construct the differential equation for u_n(t) using the boundary condition.  In the fictitious nodes procedure, this equation is determined by eliminating the fictitious solution u_{n+1}(t) between the equations

\[
\begin{aligned}
\frac{du_n}{dt} &= \frac{1}{h^2}\big(u_{n+1} - 2u_n + u_{n-1}\big) + f_n(t) + O(h^2),\\
\frac{u_{n+1} - u_{n-1}}{2h} &= \frac{\partial u(x_n, t)}{\partial x}
+ \frac{h^2}{6}\frac{\partial^3 u(x_n, t)}{\partial x^3} + O(h^4)
= g(t) + \frac{h^2}{6}\frac{\partial^3 u(x_n, t)}{\partial x^3} + O(h^4).
\end{aligned}
\tag{6.7}
\]

The second equation yields

\[
u_{n+1} = u_{n-1} + 2h\,\frac{\partial u(x_n, t)}{\partial x} + O(h^3)
= u_{n-1} + 2h\,g(t) + \frac{h^3}{3}\frac{\partial^3 u(x_n, t)}{\partial x^3} + O(h^5)
\]

and when this identity is substituted into the first of equations (6.7), the result is that u_n(t) satisfies

\[
\begin{aligned}
\frac{du_n}{dt} &= \frac{1}{h^2}\Big(2u_{n-1} + 2h\,g(t)
+ \frac{h^3}{3}\frac{\partial^3 u(x_n, t)}{\partial x^3} + O(h^5) - 2u_n\Big) + f_n(t) + O(h^2)\\
&= \frac{2(u_{n-1} - u_n)}{h^2} + \frac{2g(t)}{h} + f_n(t)
+ \frac{h}{3}\frac{\partial^3 u(x_n, t)}{\partial x^3} + O(h^2)\,.
\end{aligned}
\]

Thus the differential equation satisfied by u_n is only O(h) accurate, and so the solution of the finite difference equations in this case is O(h) accurate.

Clearly O(h³) accuracy is recovered whenever ∂³u(x_n, t)/∂x³ = 0.  It follows directly from the differential equation that

\[
\frac{\partial^3 u(x_n, t)}{\partial x^3}
= \frac{\partial}{\partial x}\Big(\frac{\partial u(x_n, t)}{\partial t} - f(x, t)\Big)
= \frac{\partial}{\partial t}\Big(\frac{\partial u(x_n, t)}{\partial x}\Big) - \frac{\partial f(x_n, t)}{\partial x}
= \frac{dg}{dt} - \frac{\partial f(x_n, t)}{\partial x}\,.
\]

Thus O(h³) accuracy is recovered provided

\[
\frac{dg}{dt} = \frac{\partial f(x_n, t)}{\partial x}\,.
\]
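The semi-discrete system above admits a minimal method-of-lines sketch; the function names are illustrative.  As a sanity check, u(x, t) = x is a steady solution of u_t = u_xx with f = 0 and gradient boundary data g(t) = 1, so every du_k/dt should vanish.

```python
# Semi-discrete right-hand side for the method-of-lines system above.
# u[0] holds u_1, ..., u[n-1] holds u_n; the boundary value u_0(t) = 0,
# and the last row is the fictitious-node formula
# du_n/dt = 2(u_{n-1} - u_n)/h^2 + 2 g(t)/h + f_n(t).
def rhs(u, t, h, f, g):
    n = len(u)
    du = [0.0]*n
    du[0] = (u[1] - 2*u[0] + 0.0)/h**2 + f(h, t)          # uses u_0(t) = 0
    for k in range(1, n - 1):
        du[k] = (u[k+1] - 2*u[k] + u[k-1])/h**2 + f((k+1)*h, t)
    du[n-1] = 2*(u[n-2] - u[n-1])/h**2 + 2*g(t)/h + f(n*h, t)
    return du

# Sanity check with the steady solution u(x, t) = x, f = 0, g(t) = 1.
h = 0.1
u = [(k + 1)*h for k in range(10)]
du = rhs(u, 0.0, h, lambda x, t: 0.0, lambda t: 1.0)
```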