
Chapter 08

INTEGRATION

EXISTENCE, UNIQUENESS, AND
CONDITIONING
 For $f: \mathbb{R} \to \mathbb{R}$, definite integral over interval $[a, b]$
$$I(f) = \int_a^b f(x)\,dx$$
is defined by limit of Riemann sums
$$R_n = \sum_{i=1}^{n} (x_{i+1} - x_i)\, f(\xi_i)$$
 Riemann integral exists provided integrand 𝑓 is bounded
and continuous almost everywhere
 Absolute condition number of integration with respect to
perturbations in integrand is $b - a$:
$$|I(\hat{f}) - I(f)| = \left| \int_a^b \hat{f}(x)\,dx - \int_a^b f(x)\,dx \right| \le \int_a^b |\hat{f}(x) - f(x)|\,dx \le (b - a)\,\|\hat{f} - f\|_\infty$$
 An n-point quadrature rule has form
$$Q_n(f) = \sum_{i=1}^{n} w_i\, f(x_i)$$
 Points 𝑥𝑖 are called nodes or abscissas
 Multipliers 𝑤𝑖 are called weights

 Quadrature rule is
 Open if 𝑎 < 𝑥1 and 𝑥𝑛 < 𝑏
 Closed if 𝑥1 = 𝑎 and 𝑥𝑛 = 𝑏
 Quadrature rules are based on polynomial interpolation
 Integrand function 𝑓 is sampled at finite set of points
 Polynomial interpolating those points is determined
 Integral of interpolant is taken as estimate for integral of original function
 In practice, interpolating polynomial is not determined explicitly but used
to determine weights corresponding to nodes
 If Lagrange interpolation is used, then weights are given by
$$w_i = \int_a^b \ell_i(x)\,dx, \quad i = 1, \dots, n$$
 Quadrature rule is weighted sum of finite number of sample
values of integrand function
 To obtain desired level of accuracy at low cost,
 How should sample points be chosen?
 How should their contributions be weighted?
 Computational work is measured by number of evaluations of
integrand function required
 Alternative derivation of quadrature rule uses method of
undetermined coefficients
 To derive n-point rule on interval [𝑎, 𝑏], take nodes 𝑥1 , … , 𝑥𝑛
as given and consider weights 𝑤1 , … , 𝑤𝑛 as coefficients to be
determined
 Force quadrature rule to integrate first 𝑛 polynomial basis
functions exactly, and by linearity, it will then integrate any
polynomial of degree 𝑛 − 1 exactly
 Thus we obtain system of moment equations that determines
weights for quadrature rule
 Derive 3-point rule $Q_3(f) = w_1 f(x_1) + w_2 f(x_2) + w_3 f(x_3)$
on interval $[a, b]$ using monomial basis
 Take $x_1 = a$, $x_2 = \frac{a+b}{2}$, $x_3 = b$ as nodes
 First three monomials are $1, x, x^2$
 Resulting system of moment equations is
$$w_1 \cdot 1 + w_2 \cdot 1 + w_3 \cdot 1 = \int_a^b 1\,dx = b - a$$
$$w_1 \cdot a + w_2 \cdot \frac{a+b}{2} + w_3 \cdot b = \int_a^b x\,dx = (b^2 - a^2)/2$$
$$w_1 \cdot a^2 + w_2 \cdot \left(\frac{a+b}{2}\right)^2 + w_3 \cdot b^2 = \int_a^b x^2\,dx = (b^3 - a^3)/3$$
 In matrix form, linear system is
$$\begin{bmatrix} 1 & 1 & 1 \\ a & \frac{a+b}{2} & b \\ a^2 & \left(\frac{a+b}{2}\right)^2 & b^2 \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} = \begin{bmatrix} b - a \\ (b^2 - a^2)/2 \\ (b^3 - a^3)/3 \end{bmatrix}$$
 Solving system, we obtain weights
$$w_1 = \frac{b-a}{6}, \quad w_2 = \frac{2(b-a)}{3}, \quad w_3 = \frac{b-a}{6}$$
which is known as Simpson’s rule
 More generally, for any $n$ and choice of nodes $x_1, \dots, x_n$,
Vandermonde system
$$\begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ x_1^{n-1} & \cdots & x_n^{n-1} \end{bmatrix} \begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix} = \begin{bmatrix} b - a \\ \vdots \\ (b^n - a^n)/n \end{bmatrix}$$
determines weights $w_1, \dots, w_n$
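As an illustrative sketch (the helper names `solve` and `moment_weights` are ours, not from the slides), the moment system can be solved numerically; for nodes $a$, $(a+b)/2$, $b$ on $[0, 1]$, the solution reproduces Simpson's weights $1/6$, $2/3$, $1/6$:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # pivot row
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            factor = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= factor * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):  # back substitution
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def moment_weights(nodes, a, b):
    """Weights from the Vandermonde moment system: row i forces x**i to integrate exactly."""
    n = len(nodes)
    V = [[x**i for x in nodes] for i in range(n)]
    moments = [(b**(i + 1) - a**(i + 1)) / (i + 1) for i in range(n)]
    return solve(V, moments)

# Nodes a, (a+b)/2, b on [0, 1] give Simpson's weights
w = moment_weights([0.0, 0.5, 1.0], 0.0, 1.0)
print(w)  # ≈ [1/6, 2/3, 1/6]
```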
 Quadrature rule is of degree $d$ if it is exact for every
polynomial of degree $d$, but not exact for some
polynomial of degree $d + 1$
 By construction, n-point interpolatory quadrature rule is
of degree at least $n - 1$
 Review: interpolation error bound
$$\max_t |f(t) - p_{n-1}(t)| \le \frac{\|f^{(n)}\|_\infty\, h^n}{4n}$$
 Rough error bound
$$|I(f) - Q_n(f)| \le \frac{1}{4}\, h^{n+1} \|f^{(n)}\|_\infty$$
where $h = \max\{x_{i+1} - x_i\}$, shows that $Q_n(f) \to I(f)$ as $n \to \infty$, provided $f^{(n)}$ remains
well behaved
 Higher accuracy can be obtained by increasing n or by decreasing h


 Absolute condition number of quadrature rule is sum of
magnitudes of weights, $\sum_{i=1}^{n} |w_i|$
 If weights are all nonnegative, then absolute condition
number of quadrature rule is 𝑏 − 𝑎, same as that of
underlying integral, so rule is stable
 If any weights are negative, then absolute condition number
can be much larger, and rule can be unstable
NUMERICAL QUADRATURE
NEWTON-COTES QUADRATURE
 Newton-Cotes quadrature rules use equally spaced nodes in
interval $[a, b]$
 Midpoint rule: $M(f) = (b - a)\, f\!\left(\frac{a+b}{2}\right)$
 Trapezoid rule: $T(f) = \frac{b-a}{2}\,\big(f(a) + f(b)\big)$
 Simpson’s rule: $S(f) = \frac{b-a}{6}\,\big(f(a) + 4 f\!\left(\frac{a+b}{2}\right) + f(b)\big)$
 Approximate integral $\int_0^1 \exp(-x^2)\,dx$
 $M(f) = (1 - 0)\exp\!\left(-\frac{1}{4}\right) \approx 0.778801$
 $T(f) = \frac{1}{2}\big(\exp(0) + \exp(-1)\big) \approx 0.683940$
 $S(f) = \frac{1}{6}\big(\exp(0) + 4\exp\!\left(-\frac{1}{4}\right) + \exp(-1)\big) \approx 0.747180$
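These three values are easy to check with a few lines of Python (a sketch; the function names are ours):

```python
from math import exp

def midpoint(f, a, b):
    # M(f) = (b - a) f((a + b)/2)
    return (b - a) * f((a + b) / 2)

def trapezoid(f, a, b):
    # T(f) = (b - a)/2 (f(a) + f(b))
    return (b - a) / 2 * (f(a) + f(b))

def simpson(f, a, b):
    # S(f) = (b - a)/6 (f(a) + 4 f((a + b)/2) + f(b))
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

f = lambda x: exp(-x * x)
print(midpoint(f, 0, 1))   # ≈ 0.778801
print(trapezoid(f, 0, 1))  # ≈ 0.683940
print(simpson(f, 0, 1))    # ≈ 0.747180
```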
 Expanding integrand $f$ in Taylor series about midpoint $m = (a + b)/2$ of interval $[a, b]$
$$f(x) = f(m) + f'(m)(x - m) + \frac{f''(m)}{2!}(x - m)^2 + \frac{f'''(m)}{3!}(x - m)^3 + \frac{f^{(4)}(m)}{4!}(x - m)^4 + \cdots$$
 Integrating from $a$ to $b$, odd-order terms drop out, yielding
$$I(f) = f(m)(b - a) + \frac{f''(m)}{24}(b - a)^3 + \frac{f^{(4)}(m)}{1920}(b - a)^5 + \cdots = M(f) + E(f) + F(f) + \cdots$$
where $E(f)$ and $F(f)$ represent first two terms in error expansion for midpoint rule
 If we substitute $x = a$ and $x = b$ into Taylor series, add two
series together, observe once again that odd-order terms
drop out, solve for $f(m)$, and substitute into midpoint rule,
we obtain
$$I(f) = T(f) - 2E(f) - 4F(f) - \cdots$$
 Thus, provided length of interval is sufficiently small and $f^{(4)}$
is well behaved, midpoint rule is about twice as accurate as
trapezoid rule
 Halving length of interval decreases error in either rule by
factor of about 1/8
 Difference between midpoint and trapezoid rules provides estimate for error in
either of them
$$T(f) - M(f) = 3E(f) + 5F(f) + \cdots$$
so
$$E(f) \approx \frac{T(f) - M(f)}{3}$$
 Weighted combination of midpoint and trapezoid rules eliminates $E(f)$ term from
error expansion
$$I(f) = \frac{2}{3} M(f) + \frac{1}{3} T(f) - \frac{2}{3} F(f) + \cdots = S(f) - \frac{2}{3} F(f) + \cdots$$
which gives alternate derivation for Simpson’s rule and estimate for its error
 We illustrate error estimation by computing approximate
value for integral $\int_0^1 x^2\,dx = \frac{1}{3}$
 $M(f) = (1 - 0)\left(\frac{1}{2}\right)^2 = \frac{1}{4}$, error: $1/12$
 $T(f) = \frac{1-0}{2}(0^2 + 1^2) = \frac{1}{2}$, error: $-1/6$
 $E(f) \approx \frac{T(f) - M(f)}{3} = 1/12$, error: $0$
 Since n-point Newton-Cotes rule is based on polynomial interpolant of degree $n - 1$, we
expect rule to have degree $n - 1$
 Thus, we expect midpoint rule to have degree 0, trapezoid rule degree 1, Simpson’s rule
degree 2, etc.
 From Taylor series expansion, error for midpoint rule depends on second and higher
derivatives of integrand, which vanish for linear as well as constant polynomials
 So midpoint rule integrates linear polynomials exactly, hence its degree is 1 rather than 0
 Similarly, error for Simpson’s rule depends on fourth and higher derivatives, which vanish for
cubics as well as quadratic polynomials, so Simpson’s rule is of degree 3

 In general, odd-order Newton-Cotes rule gains extra
degree beyond that of polynomial interpolant on which it
is based
 n-point Newton-Cotes rule is of degree 𝑛 − 1 if 𝑛 is even,
but of degree 𝑛 if 𝑛 is odd
 This phenomenon is due to cancellation of positive and
negative errors
 Newton-Cotes quadrature rules are simple and often effective,
but they have drawbacks
 Using large number of equally spaced nodes may incur erratic
behavior associated with high-degree polynomial interpolation
(e.g., weights may be negative)
 Indeed, every n-point Newton-Cotes rule with $n \ge 11$ has at
least one negative weight, and $\sum_{i=1}^{n} |w_i| \to \infty$ as $n \to \infty$, so
Newton-Cotes rules become arbitrarily ill-conditioned
 Newton-Cotes rules are not of highest degree possible for
number of nodes used
 As with polynomial interpolation, use of Chebyshev points produces better
results
 Improved accuracy results from good approximation properties of
interpolation at Chebyshev points
 Weights are always positive and approximate integral always converges to
exact integral as 𝑛 → ∞
 Quadrature rules using Chebyshev points are known as Clenshaw-Curtis
quadrature, which can be implemented very efficiently
 Clenshaw-Curtis quadrature has many attractive features, but still does
not have maximum possible degree for number of nodes used
NUMERICAL QUADRATURE
GAUSSIAN QUADRATURE
 Gaussian quadrature rules are based on polynomial interpolation, but nodes as well
as weights are chosen to maximize degree of resulting rule
 With $2n$ parameters, we can attain degree of $2n - 1$
 Gaussian quadrature rules can be derived by method of undetermined coefficients,
but resulting system of moment equations that determines nodes and weights is
nonlinear
 Also, nodes are usually irrational, even if endpoints of interval are rational
 Although inconvenient for hand computation, nodes and weights are tabulated in
advance and stored in subroutine for use on computer
 Derive two-point Gaussian rule on $[-1, 1]$
$$G_2(f) = w_1 f(x_1) + w_2 f(x_2)$$
where nodes $x_i$ and weights $w_i$ are chosen to maximize
degree of resulting rule
 We use method of undetermined coefficients, but now nodes
as well as weights are unknown parameters to be determined
 Four parameters are to be determined, so we expect to be
able to integrate cubic polynomials exactly, since cubics
depend on four parameters
 Requiring rule to integrate first four monomials exactly
gives moment equations
$$w_1 + w_2 = \int_{-1}^{1} 1\,dx = x \,\Big|_{-1}^{1} = 2$$
$$w_1 x_1 + w_2 x_2 = \int_{-1}^{1} x\,dx = \frac{x^2}{2}\Big|_{-1}^{1} = 0$$
$$w_1 x_1^2 + w_2 x_2^2 = \int_{-1}^{1} x^2\,dx = \frac{x^3}{3}\Big|_{-1}^{1} = \frac{2}{3}$$
$$w_1 x_1^3 + w_2 x_2^3 = \int_{-1}^{1} x^3\,dx = \frac{x^4}{4}\Big|_{-1}^{1} = 0$$
 One solution of this system of four nonlinear equations in four unknowns is given by
$$x_1 = -\frac{1}{\sqrt{3}}, \quad x_2 = \frac{1}{\sqrt{3}}, \quad w_1 = 1, \quad w_2 = 1$$
 Another solution reverses signs of 𝑥1 and 𝑥2
 Resulting two-point Gaussian rule has form
$$G_2(f) = f\!\left(-\frac{1}{\sqrt{3}}\right) + f\!\left(\frac{1}{\sqrt{3}}\right)$$
and by construction it has degree three
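A quick Python check (illustrative; the helper name `gauss2` is ours) confirms the rule is exact through degree three but not for degree four:

```python
from math import sqrt

def gauss2(f):
    # two-point Gaussian rule on [-1, 1]: nodes ±1/sqrt(3), unit weights
    return f(-1 / sqrt(3)) + f(1 / sqrt(3))

# exact for monomials through degree 3 ...
for k, exact in [(0, 2.0), (1, 0.0), (2, 2 / 3), (3, 0.0)]:
    assert abs(gauss2(lambda x: x**k) - exact) < 1e-12
# ... but not degree 4: rule gives 2/9, whereas exact integral of x^4 is 2/5
print(gauss2(lambda x: x**4))  # ≈ 0.2222
```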
 In general, for each $n$ there is unique n-point Gaussian rule, and it is of degree $2n - 1$
 Gaussian quadrature rules can also be derived using orthogonal polynomials
https://en.wikipedia.org/wiki/Gaussian_quadratur
 Gaussian rules are somewhat more difficult to apply than Newton-Cotes rules because
weights and nodes are usually derived for some specific interval, such as $[-1, 1]$
 Given interval of integration [𝑎, 𝑏] must be transformed into standard interval for which
nodes and weights have been tabulated
 To use quadrature rule tabulated on interval $[\alpha, \beta]$,
$$\int_\alpha^\beta f(x)\,dx \approx \sum_{i=1}^{n} w_i f(x_i)$$
to approximate integral on interval $[a, b]$,
$$I(g) = \int_a^b g(t)\,dt$$
we must change variable from $x$ in $[\alpha, \beta]$ to $t$ in $[a, b]$
 Many transformations are possible, but simple linear
transformation
$$t = \frac{(b - a)x + a\beta - b\alpha}{\beta - \alpha}$$
has advantage of preserving degree of quadrature rule
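A sketch of this transformation (the helper name `transform_rule` is ours): each weight is rescaled by $(b - a)/(\beta - \alpha)$, the Jacobian of the linear map, and since the map is linear, polynomial degree is preserved. Mapping the two-point Gaussian rule from $[-1, 1]$ to $[0, 1]$:

```python
from math import sqrt

def transform_rule(nodes, weights, a, b, alpha=-1.0, beta=1.0):
    """Map a rule tabulated on [alpha, beta] to [a, b] via the linear change
    of variable t = ((b - a) x + a beta - b alpha) / (beta - alpha)."""
    scale = (b - a) / (beta - alpha)  # dt/dx, rescales the weights
    t = [((b - a) * x + a * beta - b * alpha) / (beta - alpha) for x in nodes]
    w = [scale * wi for wi in weights]
    return t, w

# two-point Gaussian rule on [-1, 1], mapped to [0, 1]
nodes, weights = transform_rule([-1 / sqrt(3), 1 / sqrt(3)], [1.0, 1.0], 0.0, 1.0)
approx = sum(wi * ti**3 for ti, wi in zip(nodes, weights))
# integral of t^3 on [0, 1] is 1/4; the mapped rule is still exact for cubics
print(approx)  # ≈ 0.25
```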
 Gaussian quadrature rules have maximal degree and optimal
accuracy for number of nodes used
 Weights are always positive and approximate integral always
converges to exact integral as 𝑛 → ∞
 Unfortunately, Gaussian rules of different orders have no
nodes in common (except possibly midpoint), so Gaussian
rules are not progressive
 Thus, estimating error using Gaussian rules of different order
requires evaluating integrand function at full set of nodes of
both rules
NUMERICAL QUADRATURE
COMPOSITE QUADRATURE
 Alternative to using more nodes and higher degree rule is to subdivide
original interval into subintervals, then apply simple quadrature rule in
each subinterval
 Summing partial results then yields approximation to overall integral
 This approach is equivalent to using piecewise interpolation to derive
composite quadrature rule
 Composite rule is always stable if underlying simple rule is stable
 Approximate integral converges to exact integral as number of
subintervals goes to infinity provided underlying simple rule has degree at
least zero
 Subdivide interval $[a, b]$ into $k$ subintervals of length $h = (b - a)/k$, letting $x_j = a + jh$, $j = 0, \dots, k$
 Composite midpoint rule
$$M_k(f) = \sum_{j=1}^{k} (x_j - x_{j-1})\, f\!\left(\frac{x_{j-1} + x_j}{2}\right) = h \sum_{j=1}^{k} f\!\left(\frac{x_{j-1} + x_j}{2}\right)$$
 Composite trapezoid rule
$$T_k(f) = \sum_{j=1}^{k} \frac{x_j - x_{j-1}}{2}\,\big(f(x_{j-1}) + f(x_j)\big) = h \left( \frac{1}{2} f(a) + f(x_1) + \cdots + f(x_{k-1}) + \frac{1}{2} f(b) \right)$$
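The composite rules can be sketched as follows (illustrative Python; for $\int_0^\pi \sin x\,dx = 2$, the midpoint error comes out roughly half the trapezoid error, with opposite sign):

```python
from math import sin, pi

def composite_midpoint(f, a, b, k):
    # h * sum of f at subinterval midpoints
    h = (b - a) / k
    return h * sum(f(a + (j + 0.5) * h) for j in range(k))

def composite_trapezoid(f, a, b, k):
    # h * (f(a)/2 + interior values + f(b)/2)
    h = (b - a) / k
    interior = sum(f(a + j * h) for j in range(1, k))
    return h * (f(a) / 2 + interior + f(b) / 2)

# integral of sin on [0, pi] is exactly 2
print(composite_midpoint(sin, 0, pi, 100))   # ≈ 2.000082
print(composite_trapezoid(sin, 0, pi, 100))  # ≈ 1.999836
```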
 Composite quadrature offers simple means of estimating error by
using two different levels of subdivision, which is easily made
progressive
 For example, halving interval length reduces error in midpoint or
trapezoid rule by factor of about 1/8
 Halving width of each subinterval means twice as many subintervals
are required, so overall reduction in error is by factor of about 1/4
 If $h$ denotes subinterval length, then dominant term in error of
composite midpoint or trapezoid rules is $O(h^2)$
 Dominant term in error of composite Simpson’s rule is $O(h^4)$, so
halving subinterval length reduces error by factor of about 1/16
NUMERICAL QUADRATURE
ADAPTIVE QUADRATURE
 Composite quadrature rule with error estimate suggests
simple automatic quadrature procedure
 Continue to subdivide all subintervals, say by half, until
overall error estimate falls below desired tolerance
 Such uniform subdivision is grossly inefficient for many
integrands, however
 More intelligent approach is adaptive quadrature, in which
domain of integration is selectively refined to reflect
behavior of particular integrand function
 Start with pair of quadrature rules whose difference gives
error estimate
 Apply both rules on initial interval 𝑎, 𝑏
 If difference between rules exceeds error tolerance, subdivide
interval and apply rules in each subinterval
 Continue subdividing subintervals, as necessary, until
tolerance is met on all subintervals
 Integrand is sampled densely in regions where it is difficult to
integrate and sparsely in regions where it is easy
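One common realization of this idea (a sketch, not the slides' specific algorithm) pairs Simpson's rule on an interval with Simpson's rule on its two halves, using their difference as the error estimate and recursing only where the tolerance is not yet met:

```python
from math import exp

def adaptive_simpson(f, a, b, tol=1e-8):
    """Recursively refine until the difference between Simpson's rule on the
    whole interval and on its two halves falls below the tolerance."""
    def simpson(a, b):
        m = (a + b) / 2
        return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

    def recurse(a, b, whole, tol):
        m = (a + b) / 2
        left, right = simpson(a, m), simpson(m, b)
        # difference between coarse and fine estimates serves as error estimate
        if abs(left + right - whole) < 15 * tol:
            return left + right + (left + right - whole) / 15
        return recurse(a, m, left, tol / 2) + recurse(m, b, right, tol / 2)

    return recurse(a, b, simpson(a, b), tol)

print(adaptive_simpson(lambda x: exp(-x * x), 0, 1))  # ≈ 0.746824
```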
 Adaptive quadrature tends to be effective in practice, but it can be
fooled: both approximate integral and error estimate can be
completely wrong
 Integrand function is sampled at only finite number of points, so
significant features of integrand may be missed
 For example, interval of integration may be very wide but
“interesting” behavior of integrand may be confined to narrow range
 Sampling by automatic routine may miss interesting part of
integrand behavior, and resulting value for integral may be
completely wrong
 Adaptive quadrature routine may be inefficient in
handling discontinuities in integrand
 For example, adaptive routine may use many function
evaluations refining region around discontinuity of
integrand
 To prevent this, call quadrature routine separately to
compute integral on either side of discontinuity, avoiding
need to resolve discontinuity
NUMERICAL DIFFERENTIATION
 Differentiation is inherently sensitive, as small
perturbations in data can cause large changes in result
 Differentiation is inverse of integration, which is
inherently stable because of its smoothing effect
 For example, two functions shown below have very
similar definite integrals but very different derivatives
 To approximate derivative of function whose values are
known only at discrete set of points, good approach is to
fit some smooth function to given data and then
differentiate approximating function
 If given data are sufficiently smooth, then interpolation
may be appropriate, but if data are noisy, then smoothing
approximating function, such as least squares spline, is
more appropriate
 Given smooth function $f: \mathbb{R} \to \mathbb{R}$, we wish to approximate its first and second
derivatives at point $x$
 Consider Taylor series expansions
$$f(x + h) = f(x) + f'(x)\, h + \frac{f''(x)}{2} h^2 + \frac{f'''(x)}{6} h^3 + \cdots$$
$$f(x - h) = f(x) - f'(x)\, h + \frac{f''(x)}{2} h^2 - \frac{f'''(x)}{6} h^3 + \cdots$$
 Solving for $f'(x)$ in first series, obtain forward difference approximation
$$f'(x) = \frac{f(x + h) - f(x)}{h} - \frac{f''(x)}{2} h + \cdots \approx \frac{f(x + h) - f(x)}{h}$$
which is first order accurate since dominant term in remainder of series is $O(h)$
 Similarly, from second series derive backward difference
approximation
$$f'(x) = \frac{f(x) - f(x - h)}{h} + \frac{f''(x)}{2} h + \cdots \approx \frac{f(x) - f(x - h)}{h}$$
which is also first order accurate
 Subtracting second series from first series gives centered difference
approximation
$$f'(x) = \frac{f(x + h) - f(x - h)}{2h} - \frac{f'''(x)}{6} h^2 + \cdots \approx \frac{f(x + h) - f(x - h)}{2h}$$
which is second order accurate
 Adding both series together gives centered difference approximation for
second derivative
$$f''(x) = \frac{f(x + h) - 2f(x) + f(x - h)}{h^2} - \frac{f^{(4)}(x)}{12} h^2 + \cdots \approx \frac{f(x + h) - 2f(x) + f(x - h)}{h^2}$$
which is also second order accurate
 Finite difference approximations can also be derived by polynomial
interpolation, which is less cumbersome than Taylor series for higher-
order accuracy or higher-order derivatives, and is more easily generalized
to unequally spaced points
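The finite difference formulas above can be sketched directly (illustrative Python; for $f = \sin$ at $x = 1$, the exact values are $f'(1) = \cos 1 \approx 0.540302$ and $f''(1) = -\sin 1 \approx -0.841471$):

```python
from math import sin, cos

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h              # O(h)

def backward_diff(f, x, h):
    return (f(x) - f(x - h)) / h              # O(h)

def centered_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)    # O(h^2)

def centered_second(f, x, h):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # O(h^2)

h = 1e-4
print(forward_diff(sin, 1, h))     # ≈ 0.540260, off by about f''(1)/2 * h
print(centered_diff(sin, 1, h))    # ≈ 0.540302, error O(h^2)
print(centered_second(sin, 1, h))  # ≈ -0.841471
```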
RICHARDSON EXTRAPOLATION
 In many problems, such as numerical integration or
differentiation, approximate value for some quantity is
computed based on some step size
 Ideally, we would like to obtain limiting value as step size
approaches zero, but we cannot take step size arbitrarily
small because of excessive cost or rounding error
 Based on values for nonzero step size, however, we may
be able to estimate value for step size of zero
 Let 𝐹(ℎ) denote value obtained with step size ℎ
 If we compute value of 𝐹 for some nonzero step sizes, and if
we know theoretical behavior of 𝐹(ℎ) as ℎ → 0, then we can
extrapolate from known values to obtain approximate value
for 𝐹 0
 Suppose that
𝐹 ℎ = 𝑎0 + 𝑎1 ℎ𝑝 + 𝑂 ℎ𝑟
as ℎ → 0 for some 𝑝 and 𝑟, with 𝑟 > 𝑝
 Assume we know values of 𝑝 and 𝑟, but not 𝑎0 or 𝑎1 (indeed,
𝐹 0 = 𝑎0 is what we seek)
 Suppose we have computed 𝐹 for two step sizes, say ℎ and
ℎ/𝑞 for some positive integer 𝑞
 Then we have
$$F(h) = a_0 + a_1 h^p + O(h^r)$$
$$F(h/q) = a_0 + a_1 (h/q)^p + O(h^r)$$
 This system of two linear equations in two unknowns $a_0$ and
$a_1$ is easily solved to obtain
$$a_0 = F(h) + \frac{F(h) - F(h/q)}{q^{-p} - 1} + O(h^r)$$
 Accuracy of improved value, $a_0$, is $O(h^r)$
 Extrapolated value, though improved, is still only
approximate, not exact, and its accuracy is still limited by
step size and arithmetic precision used
 If 𝐹(ℎ) is known for several values for ℎ, then
extrapolation process can be repeated to produce still
more accurate approximations, up to limitations imposed
by finite-precision arithmetic
 Use Richardson extrapolation to improve accuracy of finite difference
approximation to derivative of function sin 𝑥 at 𝑥 = 1
 Using first-order accurate forward difference approximation, we have
𝐹 ℎ = 𝑎0 + 𝑎1 ℎ + 𝑂 ℎ2
so 𝑝 = 1 and 𝑟 = 2 in this instance
 Using step sizes of $h = 0.5$ and $h/2 = 0.25$ (i.e., $q = 2$), we obtain
$$F(h) = \frac{\sin 1.5 - \sin 1}{0.5} = 0.312048$$
$$F(h/2) = \frac{\sin 1.25 - \sin 1}{0.25} = 0.430055$$
 Extrapolated value is then given by
$$F(0) = a_0 = F(h) + \frac{F(h) - F(h/2)}{1/2 - 1} = 2 F(h/2) - F(h) = 0.548061$$
 For comparison, correctly rounded result is $\cos 1 = 0.540302$
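This computation can be reproduced in a few lines (a sketch; the helper name `richardson` is ours):

```python
from math import sin, cos

def richardson(F, h, p, q=2):
    """One step of Richardson extrapolation: combine F(h) and F(h/q) to
    cancel the leading O(h^p) error term."""
    Fh, Fhq = F(h), F(h / q)
    return Fh + (Fh - Fhq) / (q**(-p) - 1)

# forward difference for d/dx sin at x = 1 (first order accurate, so p = 1)
F = lambda h: (sin(1 + h) - sin(1)) / h
print(F(0.5))                   # ≈ 0.312048
print(F(0.25))                  # ≈ 0.430055
print(richardson(F, 0.5, p=1))  # ≈ 0.548061, vs cos(1) ≈ 0.540302
```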
 As another example, evaluate
$$\int_0^{\pi/2} \sin x \,dx$$
 Using composite trapezoid rule, we have
$$F(h) = a_0 + a_1 h^2 + O(h^4)$$
so $p = 2$ and $r = 4$ in this instance
 With $h = \pi/2$, $F(h) = F(\pi/2) = 0.785398$
 With $q = 2$, $F(h/2) = F(\pi/4) = 0.948059$
 Extrapolated value is then given by
$$F(0) = a_0 = F(h) + \frac{F(h) - F(h/2)}{2^{-2} - 1} = \frac{4 F(h/2) - F(h)}{3} = 1.002280$$
which is substantially more accurate than values
previously computed (exact answer is 1)
 Continued Richardson extrapolations using composite
trapezoid rule with successively halved step sizes is called
Romberg integration
 It is capable of producing very high accuracy (up to limit
imposed by arithmetic precision) for very smooth
integrands
 It is often implemented in automatic (though nonadaptive)
fashion, with extrapolations continuing until change in
successive values falls below specified error tolerance
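A minimal Romberg sketch (illustrative Python, not a library routine): each row halves the trapezoid step size, reusing previously computed function values, and then Richardson-extrapolates across the row with $p = 2, 4, 6, \dots$:

```python
from math import sin, pi

def romberg(f, a, b, levels=5):
    """Romberg integration: composite trapezoid with successively halved
    step sizes, repeatedly Richardson-extrapolated."""
    R = [[(b - a) / 2 * (f(a) + f(b))]]  # trapezoid rule with one interval
    for i in range(1, levels):
        k = 2**i
        h = (b - a) / k
        # trapezoid with k subintervals: halve previous value, add new midpoints
        trap = R[i - 1][0] / 2 + h * sum(f(a + j * h) for j in range(1, k, 2))
        row = [trap]
        for m in range(1, i + 1):
            # cancel the O(h^(2m)) error term
            row.append(row[m - 1] + (row[m - 1] - R[i - 1][m - 1]) / (4**m - 1))
        R.append(row)
    return R[-1][-1]

print(romberg(sin, 0, pi / 2))  # ≈ 1.0 (exact answer is 1)
```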
 This lecture material was created and posted for instructional purposes for students of Yonsei University. It may not be used for any purpose other than the course and may not be shared with others. Legal responsibility for any violation rests with the individual.
