Numerical Integration Guide

1) Romberg integration and Gauss quadrature are two numerical techniques for integrating functions. 2) Romberg integration uses Richardson extrapolation applied to the trapezoidal rule to successively improve the accuracy of the integral approximation. 3) Gauss quadrature chooses integration points and weights to exactly integrate polynomials up to a certain order over an interval, yielding high accuracy with relatively few function evaluations.

Uploaded by razlan ghazali

Numerical Integration & Differentiation

Introduction
 Integration of Equations
 Numerical Differentiation
Integration of Equations
 Functions to be integrated numerically come in two forms:

 Table of values (discrete data):
 only a limited number of points is available.

 Function:
 use the function to generate a table of values;
 as many data values as required can be generated to attain acceptable accuracy!

 The composite Simpson’s 1/3 rule can be used to integrate a function, BUT a more
efficient method is available.

 Two techniques that are designed to integrate functions:


 Romberg integration (based on Richardson extrapolation)
 Gauss quadrature
1. Romberg Integration
 Based on successive applications of the trapezoidal rule to attain efficient
numerical integrals of functions.
 Applies Richardson extrapolation to the trapezoidal rule.
 Gives a better approximation of the integral → lowers the true error faster.

Richardson’s Extrapolation
 Uses two estimates of an integral to compute a third and more accurate
approximation
1. Romberg Integration
 The estimate and error associated with a multiple-application trapezoidal rule can
be represented generally as

    I = I(h) + E(h)

where I is the exact value of the integral, I(h) is the approximation from an
n-segment application of the trapezoidal rule with step size h = (b - a)/n, and
E(h) is the truncation error.

 If two separate estimates are made using step sizes h1 and h2, then

    I(h1) + E(h1) = I(h2) + E(h2)

 The truncation error of the multiple-application trapezoidal rule is

    E ≅ -[(b - a)/12] h² f″avg

where f″avg (the average second derivative) is assumed constant regardless of
step size, so

    E(h1)/E(h2) ≅ (h1/h2)²    →    E(h1) ≅ E(h2) (h1/h2)²
1. Romberg Integration
 Substituting into I(h1) + E(h1) = I(h2) + E(h2):

    I(h1) + E(h2) (h1/h2)² ≅ I(h2) + E(h2)

 which can be solved for the error of the finer estimate:

    E(h2) ≅ [I(h2) - I(h1)] / [(h1/h2)² - 1]

 Substituting this into I ≅ I(h2) + E(h2) gives the improved estimate of the
integral:

    I ≅ I(h2) + [1 / ((h1/h2)² - 1)] [I(h2) - I(h1)]
1. Romberg Integration
 If two O(h²) estimates I(h1) and I(h2) are calculated for an integral using step sizes of
h1 and h2, respectively, an improved O(h⁴) estimate may be formed using (Ralston &
Rabinowitz, 1978):

    I ≅ I(h2) + [1 / ((h1/h2)² - 1)] [I(h2) - I(h1)]

 For the special case where the interval is halved (h2 = h1/2), this becomes:

    I ≅ (4/3) I(h2) - (1/3) I(h1)        ← coefficients sum to 1
1. Romberg Integration
 For the case where there are two O(h⁴) estimates and the interval is halved
(hm = hl/2), an improved O(h⁶) estimate may be formed using:

    I ≅ (16/15) Im - (1/15) Il        ← coefficients sum to 1

 For the case where there are two O(h⁶) estimates and the interval is halved
(hm = hl/2), an improved O(h⁸) estimate may be formed using:

    I ≅ (64/63) Im - (1/63) Il        ← coefficients sum to 1

where Im and Il are the more and less accurate estimates, respectively.
1. Romberg Integration - Example
 Example: Single and multiple applications of the trapezoidal rule yielded estimates of
0.1728 (one segment), 1.0688 (two segments), and 1.4848 (four segments).

 The estimates for one and two segments can be combined to give:

    I ≅ (4/3)(1.0688) - (1/3)(0.1728) = 1.367467    (εt = 16.6%)

 The estimates for two and four segments can be combined to give:

    I ≅ (4/3)(1.4848) - (1/3)(1.0688) = 1.623467    (εt = 1.0%)

 These two O(h⁴) estimates can then be combined for a further iteration:

    I ≅ (16/15)(1.623467) - (1/15)(1.367467) = 1.640533    (εt ≅ 0.0%)
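The combinations above can be checked with a short script; this is a minimal sketch that uses only the trapezoidal estimates quoted in the example:

```python
# Trapezoidal estimates from the example: 1, 2, and 4 segments
I1, I2, I4 = 0.1728, 1.0688, 1.4848

# Combine pairs of O(h²) estimates into O(h⁴) estimates
I_12 = (4 / 3) * I2 - (1 / 3) * I1   # ≅ 1.367467
I_24 = (4 / 3) * I4 - (1 / 3) * I2   # ≅ 1.623467

# Combine the two O(h⁴) estimates into an O(h⁶) estimate
I_best = (16 / 15) * I_24 - (1 / 15) * I_12   # ≅ 1.640533
```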
1. Romberg Integration - Algorithm
 Note that the weighting factors for the Richardson extrapolation add up to 1 and
that as accuracy increases, the approximation using the smaller step size is given
greater weight.

 In general,

    I(j,k) ≅ [4^(k-1) I(j+1,k-1) - I(j,k-1)] / [4^(k-1) - 1]

where I(j+1,k-1) and I(j,k-1) are the more and less accurate integrals, respectively, and
I(j,k) is the new approximation; k is the level of integration and j is used to
determine which approximation is more accurate.
1. Romberg Integration - Algorithm
 The process by which lower level integrations are combined to produce more
accurate estimates:

 In general, Romberg integration is more efficient than the trapezoidal rule and
Simpson’s rules; e.g., Simpson’s 1/3 rule requires 256 segments to yield I = 1.640533.
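The tabular scheme described above lends itself to a compact implementation. The sketch below is illustrative, not from the text; the helper names (`trapezoid`, `romberg`) and the sample integrand are assumptions for demonstration:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n segments of width h = (b - a)/n."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

def romberg(f, a, b, max_levels):
    """Build the Romberg table from trapezoidal runs with 1, 2, 4, ... segments,
    using I(j,k) = [4^(k-1) I(j+1,k-1) - I(j,k-1)] / [4^(k-1) - 1]."""
    table = [[trapezoid(f, a, b, 2 ** j)] for j in range(max_levels)]  # column k = 1
    for k in range(2, max_levels + 1):
        factor = 4 ** (k - 1)
        for j in range(max_levels - k + 1):
            improved = (factor * table[j + 1][k - 2] - table[j][k - 2]) / (factor - 1)
            table[j].append(improved)
    return table[0][-1]   # most refined estimate

# Illustrative check: ∫ x³ dx over [0, 1] = 0.25
estimate = romberg(lambda x: x ** 3, 0.0, 1.0, 3)
```

Each column of the table raises the order of accuracy by two, exactly as in the halving formulas above.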
2. Gauss Quadrature
 Gauss quadrature describes a class of
techniques for evaluating the area under a
straight line joining any two points on a
curve, rather than one simply joining the
endpoints.

 The key is to choose the line that balances


the positive and negative errors.

 The integral estimates are of the form:

    I ≅ c0 f(x0) + c1 f(x1) + … + c(n-1) f(x(n-1))

 where the ci and xi are calculated to ensure
that the method exactly integrates up to
(2n-1)th order polynomials over the interval
from -1 to 1.
2. Gauss Quadrature – Gauss Legendre Formula
Derivation of the Two-Point Gauss-Legendre Formula

 The object of Gauss quadrature is to determine the coefficients of an equation of
the form

    I ≅ c0 f(x0) + c1 f(x1)

 However, in contrast to the trapezoidal rule, which uses the fixed end points a and b,
the function arguments x0 and x1 are not fixed at the end points but are unknowns.

 Thus, four unknowns to be evaluated require four conditions.

 First two conditions are obtained by assuming that the above eqn. for I fits the
integral of a constant and a linear function exactly.

 The other two conditions are obtained by extending this reasoning to a parabolic
and a cubic function.
2. Gauss Quadrature – Gauss Legendre Formula
 Requiring the formula to integrate f(x) = 1, x, x², and x³ exactly over [-1, 1]
gives four conditions:

    c0 f(x0) + c1 f(x1) = ∫ 1 dx  = 2
    c0 f(x0) + c1 f(x1) = ∫ x dx  = 0
    c0 f(x0) + c1 f(x1) = ∫ x² dx = 2/3
    c0 f(x0) + c1 f(x1) = ∫ x³ dx = 0

 Solving these four equations simultaneously yields:

    c0 = c1 = 1
    x0 = -1/√3 = -0.5773503…
    x1 =  1/√3 =  0.5773503…

 so the two-point Gauss-Legendre formula is

    I ≅ f(-1/√3) + f(1/√3)

which yields an integral estimate that is third order accurate.
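A minimal sketch of the two-point formula in code (the function name is illustrative):

```python
import math

def gauss_legendre_2pt(f):
    """Two-point Gauss-Legendre quadrature on [-1, 1]: I ≅ f(-1/√3) + f(1/√3)."""
    x = 1.0 / math.sqrt(3.0)
    return f(-x) + f(x)

# Exact for polynomials up to degree 3, e.g. ∫ x² dx over [-1, 1] = 2/3
estimate = gauss_legendre_2pt(lambda t: t ** 2)
```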
2. Gauss Quadrature – Gauss Legendre Formula
 The integration limits are -1 to 1. This was done to simplify the
mathematics.

 A simple change of variable can be used to translate other limits of integration into
this form. This is accomplished by assuming that a new variable xd is related to the
original variable x in a linear fashion:

    x = a0 + a1 xd

 Requiring x = a at xd = -1 and x = b at xd = 1 gives

    a = a0 + a1(-1)
    b = a0 + a1(1)

 which can be solved for

    a0 = (b + a)/2,    a1 = (b - a)/2

 so that

    x = [(b + a) + (b - a) xd] / 2    and    dx = [(b - a)/2] dxd
Numerical Differentiation
 Why?
A. A car moves at a constant 60 km/h
 rate of change is constant
 Slope = 300 km / 5 h = 60 km/h

B. Projectile motion
 rate of change is not constant
 Slope = ?

 Differentiation:
 Finding rate of change of one
quantity with respect to another
 Needed when the rate is not
constant
Numerical Differentiation
 Differentiation?
 Process of finding derivative (dy/dx) - rate of change of a dependent variable
with respect to an independent variable.

 The mathematical definition of a derivative begins with a difference
approximation:

    Δy/Δx = [f(xi + Δx) - f(xi)] / Δx

 and as Δx is allowed to approach zero, the difference becomes a derivative:

    dy/dx = lim(Δx→0) [f(xi + Δx) - f(xi)] / Δx
Numerical Differentiation
 Refresher … Taylor Series

    f(xi+1) = f(xi) + f'(xi) h + [f''(xi)/2!] h² + [f'''(xi)/3!] h³ + … + [f⁽ⁿ⁾(xi)/n!] hⁿ + Rn

    Rn = [f⁽ⁿ⁺¹⁾(ξ) / (n+1)!] h⁽ⁿ⁺¹⁾,    where h = xi+1 - xi

 In general, the nth order Taylor series expansion will be exact for an nth order
polynomial.

 In other cases, the remainder term Rn is of the order of h⁽ⁿ⁺¹⁾, meaning:
  The more terms are used, the smaller the error, and
  The smaller the spacing, the smaller the error for a given number of terms.
Numerical Differentiation
 Truncating the TSE after the first-derivative term gives

    f(xi+1) = f(xi) + f'(xi) h + O(h²)

 Rearranged, this yields the difference formulas:

  Forward difference:   f'(xi) ≅ [f(xi+1) - f(xi)] / h + O(h)

  Backward difference:  f'(xi) ≅ [f(xi) - f(xi-1)] / h + O(h)

  Centered difference:  f'(xi) ≅ [f(xi+1) - f(xi-1)] / (2h) + O(h²)

 O(h²) = truncation error of the order of h² (i.e. error is proportional to the
square of the step size)
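These three formulas translate directly into code; a minimal sketch (function names are illustrative), using the example polynomial worked through in this guide:

```python
def forward_diff(f, x, h):
    """Forward difference: [f(x+h) - f(x)] / h, O(h)."""
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    """Backward difference: [f(x) - f(x-h)] / h, O(h)."""
    return (f(x) - f(x - h)) / h

def centered_diff(f, x, h):
    """Centered difference: [f(x+h) - f(x-h)] / (2h), O(h²)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Example polynomial from this guide
f = lambda x: -0.1 * x**4 - 0.15 * x**3 - 0.5 * x**2 - 0.25 * x + 1.2

# At x = 0.5 with h = 0.5 (true value f'(0.5) = -0.9125):
# forward → -1.45, backward → -0.55, centered → -1.0
```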
Numerical Differentiation - Example
 Example;
 Given f(x) = -0.1x⁴ - 0.15x³ - 0.5x² - 0.25x + 1.2
 Find f'(x) when x = 0.5 using h = 0.5 and h = 0.25

Analytical solution
    f(x)  = -0.1x⁴ - 0.15x³ - 0.5x² - 0.25x + 1.2
    f'(x) = -0.4x³ - 0.45x² - 1.0x - 0.25

True value
    f'(0.5) = -0.4(0.5)³ - 0.45(0.5)² - 1.0(0.5) - 0.25
            = -0.9125
Numerical Differentiation
 Forward difference: f(x) = -0.1x⁴ - 0.15x³ - 0.5x² - 0.25x + 1.2

    xi = 0.5, h = 0.5  →  xi+1 = 1

    f'(xi) ≅ [f(xi+1) - f(xi)] / h
    f'(0.5) ≅ [f(1.0) - f(0.5)] / 0.5
            = -1.45

    εt = 58.9%
Numerical Differentiation
 Backward difference: f(x) = -0.1x⁴ - 0.15x³ - 0.5x² - 0.25x + 1.2

    xi = 0.5, h = 0.5  →  xi-1 = 0

    f'(xi) ≅ [f(xi) - f(xi-1)] / h
    f'(0.5) ≅ [f(0.5) - f(0)] / 0.5
            = -0.55

    εt = 39.7%
Numerical Differentiation
 Centered difference: f(x) = -0.1x⁴ - 0.15x³ - 0.5x² - 0.25x + 1.2

    xi = 0.5, h = 0.5  →  xi+1 = 1 and xi-1 = 0

    f'(xi) ≅ [f(xi+1) - f(xi-1)] / (2h)
    f'(0.5) ≅ [f(1) - f(0)] / (2 × 0.5)
            = -1.0

    εt = 9.6%
Numerical Differentiation
 Comparison of true errors

               h = 0.5             h = 0.25
               f'(0.5)   εt(%)     f'(0.5)   εt(%)     Δh(%)   Δεt(%)
  Forward      -1.45     58.9      -1.155    26.5      -50     ~50
  Backward     -0.55     39.7      -0.714    21.7      -50     ~50
  Centered     -1.0       9.6      -0.934     2.4      -50     ~75
Higher Accuracy Numerical Differentiation
 More accurate differentiation formulas (i.e. finite divided difference) can be
developed by including more terms from the TSE.

 This is achieved using linear algebra to combine the expansion around several
points.

 Three categories for the formula include forward finite-difference, backward finite-
difference, and centered finite-difference.
Higher Accuracy Numerical Differentiation
 Starting from the TSE:

    f(xi+1) = f(xi) + f'(xi) h + [f''(xi)/2] h² + [f'''(xi)/3!] h³ + …

 Solving for f'(xi) while retaining the second-derivative term:

    f'(xi) = [f(xi+1) - f(xi)] / h - [f''(xi)/2] h + O(h²)

 Substituting the forward-difference approximation of the second derivative,

    f''(xi) ≅ [f(xi+2) - 2f(xi+1) + f(xi)] / h² + O(h)

 gives

    f'(xi) = [f(xi+1) - f(xi)] / h - [f(xi+2) - 2f(xi+1) + f(xi)] / (2h) + O(h²)

 or, collecting terms:

    f'(xi) = [-f(xi+2) + 4f(xi+1) - 3f(xi)] / (2h) + O(h²)        ← needs an extra point!

 Inclusion of the 2nd derivative term has improved the accuracy to O(h²).
Higher Accuracy Numerical Differentiation
 Forward
Higher Accuracy Numerical Differentiation
 Backward
Higher Accuracy Numerical Differentiation
 Centered
Higher Accuracy Numerical Differentiation
 Example; f(x) = -0.1x⁴ - 0.15x³ - 0.5x² - 0.25x + 1.2

 SOLUTION 01: Estimate f'(0.5) using FDD & h = 0.25

                      Forward    Backward    Centered
                      O(h)       O(h)        O(h²)
  Estimate, f'(0.5)   -1.155     -0.714      -0.934
  εt(%)               26.5       21.7        2.4
Higher Accuracy Numerical Differentiation
 SOLUTION 02: Estimate f'(0.5) using the Forward FDD high-accuracy formula, h = 0.25

High-accuracy Forward FDD:

    f'(xi) = [-f(xi+2) + 4f(xi+1) - 3f(xi)] / (2h)

    h = 0.25
    xi   = 0.5   →  f(0.5)  = 0.925
    xi+1 = 0.75  →  f(0.75) = 0.6363281
    xi+2 = 1.0   →  f(1.0)  = 0.2

    f'(0.5) = [-0.2 + 4(0.6363281) - 3(0.925)] / (2 × 0.25)
            = -0.859375

    εt = 5.82%
Higher Accuracy Numerical Differentiation
 SOLUTION 02: Estimate f'(0.5) using the Backward FDD high-accuracy formula, h = 0.25

High-accuracy Backward FDD:

    f'(xi) = [3f(xi) - 4f(xi-1) + f(xi-2)] / (2h)

    h = 0.25
    xi-2 = 0     →  f(0)    = 1.2
    xi-1 = 0.25  →  f(0.25) = 1.103516
    xi   = 0.5   →  f(0.5)  = 0.925

    f'(0.5) = [3(0.925) - 4(1.103516) + 1.2] / (2 × 0.25)
            = -0.878125

    εt = 3.77%
Higher Accuracy Numerical Differentiation
 SOLUTION 02: Estimate f'(0.5) using the Centered FDD high-accuracy formula, h = 0.25

High-accuracy Centered FDD:

    f'(xi) = [-f(xi+2) + 8f(xi+1) - 8f(xi-1) + f(xi-2)] / (12h)

    h = 0.25
    xi-2 = 0     →  f(0)    = 1.2
    xi-1 = 0.25  →  f(0.25) = 1.1035156
    xi   = 0.5   →  f(0.5)  = 0.925
    xi+1 = 0.75  →  f(0.75) = 0.6363281
    xi+2 = 1.0   →  f(1.0)  = 0.2

    f'(0.5) = [-0.2 + 8(0.6363281) - 8(1.1035156) + 1.2] / (12 × 0.25)
            = -0.9125

    εt = 0%
Higher Accuracy Numerical Differentiation
 Comparison of true errors

  Normal              Forward    Backward    Centered
                      O(h)       O(h)        O(h²)
  Estimate, f'(0.5)   -1.155     -0.714      -0.934
  εt(%)               26.5       21.7        2.4

  Higher accuracy     Forward    Backward    Centered
                      O(h²)      O(h²)       O(h⁴)
  Estimate, f'(0.5)   -0.85937   -0.878125   -0.9125
  εt(%)               5.82       3.77        0
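The high-accuracy formulas can be checked numerically; a minimal sketch (function names are illustrative) that reproduces the Solution 02 values:

```python
def forward_diff_o2(f, x, h):
    """High-accuracy forward FDD, O(h²): [-f(x+2h) + 4f(x+h) - 3f(x)] / (2h)."""
    return (-f(x + 2 * h) + 4 * f(x + h) - 3 * f(x)) / (2 * h)

def backward_diff_o2(f, x, h):
    """High-accuracy backward FDD, O(h²): [3f(x) - 4f(x-h) + f(x-2h)] / (2h)."""
    return (3 * f(x) - 4 * f(x - h) + f(x - 2 * h)) / (2 * h)

def centered_diff_o4(f, x, h):
    """High-accuracy centered FDD, O(h⁴):
    [-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)] / (12h)."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

f = lambda x: -0.1 * x**4 - 0.15 * x**3 - 0.5 * x**2 - 0.25 * x + 1.2

# At x = 0.5 with h = 0.25 (true value -0.9125):
# forward → -0.859375, backward → -0.878125, centered → -0.9125
```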
Higher Accuracy Numerical Differentiation
 Three ways to improve derivative estimate using finite divided differences;

1. Decrease the step size (more computation → round-off error?)
2. Use a higher-order formula (needs more points)
3. Use Richardson extrapolation
Higher Accuracy Numerical Differentiation
 Richardson Extrapolation
 Combine two lower-accuracy estimates of the derivative to produce a higher-
accuracy estimate.

 Richardson Extrapolation formulas:

 For two O(h²) estimates with h2 = h1/2, an improved O(h⁴) estimate may be
formed using:

    D ≅ (4/3) D(h2) - (1/3) D(h1)

 For two O(h⁴) estimates with h2 = h1/2, an improved O(h⁶) estimate may be
formed using:

    D ≅ (16/15) D(h2) - (1/15) D(h1)

 For two O(h⁶) estimates with h2 = h1/2, an improved O(h⁸) estimate may be
formed using:

    D ≅ (64/63) D(h2) - (1/63) D(h1)
Higher Accuracy Numerical Differentiation
Example: f(x) = -0.1x⁴ - 0.15x³ - 0.5x² - 0.25x + 1.2
 Determine the estimate of the first derivative at x = 0.5;
1. Using Centered DD with h1 = 0.5 and h2 = 0.25
2. Using Richardson Extrapolation.

 Solution 01: Using the centered FDD, f'(xi) ≅ [f(xi+1) - f(xi-1)] / (2h) + O(h²)

    h1 = 0.5:   D(0.5)  = [f(1.0) - f(0)] / (2 × 0.5)     = (0.2 - 1.2) / 1        = -1.0      (εt = 9.6%)

    h2 = 0.25:  D(0.25) = [f(0.75) - f(0.25)] / (2 × 0.25) = (0.6363 - 1.1035) / 0.5 = -0.9344  (εt = 2.4%)
Higher Accuracy Numerical Differentiation
 Solution 02: Using Richardson extrapolation

 Given h1 = 0.5 & h2 = 0.25
 Since h2 = h1/2
 And there are two O(h²) estimates (i.e. centered DD is O(h²))
 the Richardson Extrapolation formula to be used is:

    D ≅ (4/3) D(h2) - (1/3) D(h1)

 Thus the estimate using Richardson Extrapolation is

    D ≅ (4/3)(-0.934375) - (1/3)(-1) = -0.9125    (εt = 0%)
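Solutions 01 and 02 can be combined into one routine; a minimal sketch (function names are illustrative):

```python
def centered_diff(f, x, h):
    """Centered difference, O(h²)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson_first_deriv(f, x, h1):
    """Richardson extrapolation of two centered-difference estimates with
    h2 = h1/2: D ≅ (4/3) D(h2) - (1/3) D(h1), giving an O(h⁴) estimate."""
    d1 = centered_diff(f, x, h1)
    d2 = centered_diff(f, x, h1 / 2)
    return (4.0 / 3.0) * d2 - (1.0 / 3.0) * d1

f = lambda x: -0.1 * x**4 - 0.15 * x**3 - 0.5 * x**2 - 0.25 * x + 1.2

# h1 = 0.5 gives D(0.5) = -1.0 and D(0.25) = -0.934375, combining to -0.9125
estimate = richardson_first_deriv(f, 0.5, 0.5)
```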
