NM Merged (1-11)

Numerical Methods (Introduction)
Kannan Iyer, kannan.iyer@iitjammu.ac.in
Department of Mechanical Engineering, Indian Institute of Technology, Jammu

General-1

Motivation for study
- To introduce numerical algorithms

Instructor
- Name: Kannan Iyer
- E-mail: kannan.iyer@iitjammu.ac.in
- Office: Dean's Complex

Grading
- CT (2): 20%
- Mid-Sem: 20%
- End-Sem: 40%
- Assignments: 20%

General-2

Books
- Books have been listed in a separate document.
- Fairly comprehensive PowerPoint slides will be uploaded as notes.

General-3

Emphasis
- Engineering rather than mathematical
- Fundamentals to be stressed
- Theorems and lemmas will not be stressed, only mentioned
- General concepts to be stressed rather than very specific ones
- Extensive use of algorithms will be the focus
Material to be covered
- Single non-linear equation
- Systems of linear and non-linear equations
- Interpolation, extrapolation and regression
- Differentiation and integration
- Ordinary Differential Equations (ODEs), including Initial Value Problems (IVPs) and Boundary Value Problems (BVPs)
- Partial Differential Equations (PDEs) involving parabolic, elliptic and hyperbolic systems
- Detailed topics have been outlined in a separate document

Problem Solving in Engineering
- Analytical: needs complex mathematics and is applicable only to ideal cases
- Experimental: limited validity; expensive and time consuming
- Numerical: simple to apply; quick and economical

Advantages of Numerical Methods
- Complex problems can be solved with a modest mathematical background
- Large parametric solutions can be obtained economically to reinforce physical understanding
- Graphical visualization is possible to locate hot spots, high-velocity zones, etc.

Goals and Objectives
- To lay a foundation in Numerical Methods so that more advanced courses can be built on it
- To give exposure to a wide spectrum of methods
- To instill confidence in problem-solving skills
Problem Solving Methodology
(flowchart: Physical Model -> Math Model, via formulation of the governing equations -> Numerical Techniques -> Solution -> if the solution is not satisfactory, revise the model; model validation is done through experiments)

Elements of Numerical Solution
- Algorithm design: the aim of the course
- Program implementation: students to learn independently through weekly homework
- Debugging and testing: benchmark problems to be utilised
- Documentation, storage and retrieval: should have a clear write-up, stored on pen drives and hard discs

General Tips
- Programs should be structured
- Should have several comment statements
- Should be modular, made of several functions and subroutines
- Should use structured blocks such as IF-THEN-ELSE-ENDIF and DO-ENDDO
- Convergence should not be assumed

Significant Figures
- The best estimate for a speed reading is, say, 48.9.
- The same for an odometer is 87324.45.
- The former has an uncertainty in the 3rd digit, while the latter has uncertainty in the 7th digit.
Accuracy and Precision
- Accuracy implies that the measured value agrees with the true value.
- Precision refers to measurements that are close to each other.
- When we make a bunch of measurements, accuracy implies that the average value is close to the true value.
- Precision implies that the standard deviation is small.

Absolute and Relative Error
- True Error = True Value - Approximate Value:
  E_t = TV - AV
- Most of the time we do not know the true value!
- Fractional Relative Error = True Error / True Value:
  e_t = E_t / TV
- % Relative Error = e_t x 100
- Often, when estimating a true value, the procedure may need several computations to increase accuracy.

Absolute and Relative Error (Cont'd)
- For example, e^x = 1 + x + x^2/2! + x^3/3! + ...
- Approximate relative error:
  e_a = (More accurate value - Less accurate value) / More accurate value
- In such cases we take as many terms as are required for a prescribed e_a.
- Usually the computation is taken into a recursive loop, and the program comes out of the loop once the set e_a is satisfied.
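A minimal sketch of this recursive loop, assuming the e^x series above as the quantity being estimated (the value of x and the tolerance are illustrative only):

program series_demo
  implicit none
  real(8) :: x, term, total, ea, tol
  integer :: n

  x = 1.0d0                       ! evaluate e**x at x = 1 (assumed)
  tol = 1.0d-6                    ! prescribed e_a (assumed)
  term = 1.0d0                    ! first term of the series
  total = 1.0d0
  ea = 1.0d0
  n = 0
  do while (ea > tol)
     n = n + 1
     term = term * x / n          ! builds x**n/n! recursively
     total = total + term
     ea = abs(term / total)       ! (more accurate - less accurate)/more accurate
  end do
  print *, 'exp(x) ~', total, ' using', n + 1, ' terms'
end program series_demo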

Number System

Integer Representation
- The first bit stores the sign: 0 is positive, 1 is negative.
- Largest integer in a 16-bit computer = 32,767
- Range: -32,768 to 32,767 (-0 is taken as -32,768)

Real Number Representation
- A real number is stored as m x b^e (mantissa times base raised to an exponent), e.g., -0.1234 x 10^-4.
(figure: storage of a number such as +0.5 x 2^e in a 7-bit word, showing the sign, exponent and mantissa bits)

Round-off Error
- Round-off error is the error introduced when we store a non-terminating number in a digital form.
- For example, 10/11 = 0.90909090909..., and if we have the limitation of storing only five digits, it would be stored as 0.90909.
- Mathematical operations introduce round-off error. Let us assume that we have a four-digit machine:
  0.2546E1 + 0.4346E-2 = 2.546 + 0.004346 = 2.550346, stored as 0.2550E1
- Note the drop in the number of significant digits.
- Subtraction of nearly equal numbers leads to an even more serious issue. Consider the following:
  0.577237 - 0.577128 = 0.000109
- In a four-digit-accuracy machine, it will be
  0.5772E0 - 0.5771E0 = 0.0001E0 = 0.1000E-3
- The answer has only one significant digit. Note the drop in the number of significant digits.
- The relative error = (0.000109 - 0.0001)/0.000109 = 0.000009/0.000109 = 9/109, about 8%.
- This is too much of an error in a four-digit machine!
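A small sketch reproducing the subtraction example above, simulating the four-significant-digit machine by storing pre-rounded values (variable names are illustrative):

program cancellation_demo
  implicit none
  real(8) :: a, b, a4, b4, exact, rounded

  a = 0.577237d0;  b = 0.577128d0    ! exact values from the slide
  a4 = 0.5772d0;   b4 = 0.5771d0     ! four-significant-digit stored forms

  exact   = a - b                    ! 0.000109
  rounded = a4 - b4                  ! 0.0001: one significant digit left
  print *, 'exact difference   :', exact
  print *, 'four-digit machine :', rounded
  print *, 'relative error     :', abs(exact - rounded)/exact   ! ~0.08
end program cancellation_demo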

Truncation Error
- There is another type of error, introduced when working with differential equations.
- When solving differential equations such as dy/dx = f(x), where f(x) is a complicated function, the derivative dy/dx at x = 0 is often approximated as
  dy/dx (at x = 0) = f(0) ~ (y(h) - y(0))/h
- This would imply that
  y(h) = y(0) + f(0) h

Truncation Error-2
- The basis for such a solution comes from the Taylor series:
  y(x + h) = y(x) + h y'(x) + (h²/2!) y''(x) + ...
- Often it is truncated and expressed as:
  y(x + h) = y(x) + h y'(x) + O(h²)
- For x0 = 0, we get y(h) = y(0) + f(0) h
- The error introduced by truncation of the Taylor series is called truncation error.
- We shall discuss these later in the course.

Numerical Methods (Solution of Non-Linear Equations)
Kannan Iyer, kannan.iyer@iitjammu.ac.in
Department of Mechanical Engineering, Indian Institute of Technology, Jammu

General-1
- There are many applications in TFE that require the solution of a non-linear equation.
- Example: a heat-generating sphere at temperature T_B, cooled by convection and radiation to surroundings at T_inf:
  Q - h A (T_B - T_inf) - sigma eps A (T_B^4 - T_inf^4) = 0

General-2
- Van der Waals equation:
  (p + a/v²)(v - b) - RT = 0
- Colebrook friction factor relation:
  1/sqrt(f) = -2.0 log10( (eps/d)/3.7 + 2.51/(Re sqrt(f)) )

Some Observations-I
- While one linear equation will have a unique solution, a non-linear equation may have several solutions, e.g. sin(x) = 0.
- Some equations may have no solution at all, e.g. x - e^x = 0, since e^x > x for every real x.
- Some equations may have no real solutions, e.g. x² + 1 = 0.
- In most physical problems we will be looking for real solutions.
- No method is foolproof in guaranteeing a solution.
- However, it is not very difficult to get a solution. The solutions are often called roots.

Some Observations-II
- A non-linear equation is also called a transcendental equation.
- We shall discuss some of the most popular methods.
- Broadly, the methods can be divided into two groups:
  - Those that need bracketing of roots
  - Those that do not need bracketing

Bracketing
(figure: f(x) crossing zero between x_P, where f is positive, and x_N, where f is negative)

Bracketing of Roots-I
- Experience:
  - The speed of an automobile may be 0-120 km/hr
  - The temperature of a furnace may be 200-1000 oC
- Common sense:
  - In open channel flow, the height of the free liquid surface satisfies 0 < h < D
- Incremental search: discussed on the next slide

Bracketing of Roots-II: Incremental Search
1. Start from x = xmin
2. Check if f(x)*f(x+h) < 0
3. If yes, the root is bracketed
4. If no, increment x by h
5. Repeat until x > xmax

Bisection Method
(figure: the root bracketed between x_P and x_N, with x_new at the midpoint)
- Needs bracketing
- Guaranteed solution unless there is a discontinuity
- Slow convergence rate

Logic
1. Bracket the root and identify xp and xn
2. Do until I < Imax
   1. xnew = 0.5*(xp + xn)
   2. If |f(xnew)| < Tol, then root = xnew and get out
   3. Else if f(xnew) < 0, then xn = xnew, else xp = xnew
3. I = I + 1
4. Enddo

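A minimal sketch of the bisection logic above, assuming the test function f(x) = x**3 - x - 2 (not from the slides; its root is near 1.5214):

program bisection_demo
  implicit none
  real(8) :: xp, xn, xnew, tol
  integer :: i, imax

  xn = 1.0d0          ! f(1) = -2 < 0
  xp = 2.0d0          ! f(2) = +4 > 0
  tol = 1.0d-8
  imax = 100

  do i = 1, imax
     xnew = 0.5d0 * (xp + xn)
     if (abs(f(xnew)) < tol) exit
     if (f(xnew) < 0.0d0) then
        xn = xnew                   ! root now lies between xnew and xp
     else
        xp = xnew
     end if
  end do
  print *, 'root =', xnew, ' after', i, 'bisections'

contains
  real(8) function f(x)
    real(8), intent(in) :: x
    f = x**3 - x - 2.0d0
  end function f
end program bisection_demo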
Method of False Position-I
- Needs bracketing
- Guaranteed solution unless there is a discontinuity
- Slow convergence rate, but faster than the bisection method
(figure: the chord joining (x_n, f(x_n)) and (x_p, f(x_p)) cutting the x-axis at x_new)

Method of False Position-II
- The guess for the new point can be obtained by setting f(x_new) = 0 on the chord:
  (f(x_new) - f(x_n))/(f(x_p) - f(x_n)) = (x_new - x_n)/(x_p - x_n), with f(x_new) = 0
- Therefore
  x_new = x_n - f(x_n) (x_p - x_n)/(f(x_p) - f(x_n)) = x_n - f(x_n)/m
  where m = (f(x_p) - f(x_n))/(x_p - x_n) is the slope of the chord.

Logic
1. Bracket the root and identify xp and xn
2. Do until I < Imax
   1. m = (f(xp) - f(xn))/(xp - xn)
   2. xnew = xn - f(xn)/m
   3. If |f(xnew)| < Tol, then root = xnew and get out
   4. Else if f(xnew) < 0, then xn = xnew, else xp = xnew
3. I = I + 1
4. Enddo

Secant Method-I
- Does not need bracketing
- Modification of the false position method
- The new point replaces the last-but-one point
- Good convergence
- Most preferred method
(figure: successive secants through the last two iterates)

Logic
1. Take any two points x1 and x2
2. Do until I < Imax
   1. m = (f(x2) - f(x1))/(x2 - x1)
   2. xnew = x2 - f(x2)/m   (one can under-relax here)
   3. If |f(xnew)| < Tol, then root = xnew and get out
   4. Else x1 = x2, x2 = xnew
3. I = I + 1
4. Enddo

Newton's Method-I
- Does not need bracketing
- Uses the local tangent in place of the secant's chord
- The new point replaces the previous point
- Good convergence
- Also a preferred method
(figure: successive tangents sliding down to the root)
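A minimal sketch of the secant logic, again assuming f(x) = x**3 - x - 2 as an illustrative function:

program secant_demo
  implicit none
  real(8) :: x1, x2, xnew, m, tol
  integer :: i, imax

  x1 = 1.0d0;  x2 = 2.0d0       ! any two starting points
  tol = 1.0d-12
  imax = 100

  do i = 1, imax
     m = (f(x2) - f(x1)) / (x2 - x1)    ! chord slope
     xnew = x2 - f(x2) / m
     if (abs(f(xnew)) < tol) exit
     x1 = x2                            ! new point replaces the last-but-one
     x2 = xnew
  end do
  print *, 'root =', xnew, ' after', i, 'iterations'

contains
  real(8) function f(x)
    real(8), intent(in) :: x
    f = x**3 - x - 2.0d0
  end function f
end program secant_demo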

Newton's Method-II
- The basis arises from the linearised Taylor series:
  f(x_{n+1}) = f(x_n + dx_n) ~ f(x_n) + f'(x_n) dx_n
- The new value x_{n+1} is arrived at by setting f(x_{n+1}) = 0:
  f(x_{n+1}) = 0 = f(x_n) + f'(x_n)(x_{n+1} - x_n)
  =>  x_{n+1} = x_n - f(x_n)/f'(x_n)

Logic
1. Take any point x
2. Do until I < Imax
   1. m = f'(x)
   2. xnew = x - f(x)/f'(x)   (one can under-relax here)
   3. If |f(xnew)| < Tol, then root = xnew and get out
   4. Else x = xnew
3. I = I + 1
4. Enddo
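A minimal sketch of Newton's logic for the same illustrative function, with its analytic derivative supplied:

program newton_demo
  implicit none
  real(8) :: x, xnew, tol
  integer :: i, imax

  x = 2.0d0           ! initial guess (assumed)
  tol = 1.0d-12
  imax = 50

  do i = 1, imax
     xnew = x - f(x) / fprime(x)
     if (abs(f(xnew)) < tol) exit
     x = xnew
  end do
  print *, 'root =', xnew, ' after', i, 'iterations'

contains
  real(8) function f(x)
    real(8), intent(in) :: x
    f = x**3 - x - 2.0d0
  end function f
  real(8) function fprime(x)
    real(8), intent(in) :: x
    fprime = 3.0d0*x**2 - 1.0d0
  end function fprime
end program newton_demo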

Fixed Point Iteration Method-I
- In this method, the equation f(x) = 0 is first transformed to the form x = g(x).
- This can be done in several ways. For example, x² - 2x + 1 = 0 can be written as
  x = (x² + 1)/2   or   x = sqrt(2x - 1)
- The choice will be determined by the convergence rate of the method, which can be determined only after the problem is solved!
- Usually it is done in an ad-hoc manner, as it takes little time to check whether the method works or not.

Fixed Point Iteration Method-II
- The recursive algorithm used is
  x_{n+1} = g(x_n)
(figure: the iterates converging to the intersection of the line x = g(x) with the curve g(x))

Fixed Point Iteration Method-III
- Simplest to program
- Does not converge for every function; we shall derive the criterion for convergence later
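A minimal sketch of the recursion; the slide's choice g(x) = (x²+1)/2 converges only very slowly near its root (g'(1) = 1), so the standard test problem x = cos(x) is used here instead (not from the slides; root near 0.739085):

program fixed_point_demo
  implicit none
  real(8) :: x, xnew, tol
  integer :: i, imax

  x = 0.5d0                        ! initial guess (assumed)
  tol = 1.0d-10
  imax = 200

  do i = 1, imax
     xnew = cos(x)                 ! x_{n+1} = g(x_n)
     if (abs(xnew - x) < tol) exit
     x = xnew
  end do
  print *, 'root =', xnew, ' after', i, 'iterations'
end program fixed_point_demo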

Numerical Methods (Solution of Non-Linear Equations-2)
Kannan Iyer, kannan.iyer@iitjammu.ac.in
Department of Mechanical Engineering, Indian Institute of Technology, Jammu

Review
- We had seen the following methods:
  - Bisection Method
  - Method of False Position
  - Secant Method
  - Newton's Method
  - Fixed Point Iteration


Agenda for Today
- Today we shall discuss the rate of convergence of the methods.
- First we will digest the meaning of the order of a method.
- Then we shall derive the order of the three popular methods we discussed in the last class.

Order of Convergence
A method is said to converge with order p if the error decays as given by the expression
  e_{N+1} = a e_N^p
(figure: error e_N versus iteration number on a log scale, decaying faster for increasing order)

Fixed Point Iteration-I
- Our recursive relation was
  x_{n+1} = g(x_n)   (1)
- If α is our root, then
  α = g(α)   (2)
- Eqs. (1) and (2) imply
  x_{n+1} - α = g(x_n) - g(α)   (3)
- Defining the error at any level i as
  e_i = x_i - α   (4)

Fixed Point Iteration-II
- Eq. (3) can be written as
  e_{n+1} = g(α + e_n) - g(α)
          = g(α) + e_n g'(α) + (e_n²/2) g''(α) + ... - g(α)
          = e_n g'(ξ)   (using the Mean Value Theorem)
  where ξ is such that it lies between x_n and α.
- Hence e_{n+1}/e_n = g'(ξ)

Fixed Point Iteration-III
- For the method to converge,
  |e_{n+1}/e_n| < 1, i.e. |g'(ξ)| < 1
- In fact this must be true over the entire path from the initial guess to the root, as otherwise the iterate can be thrown out anywhere.
- Since e_{n+1} = c e_n, the method is said to have linear convergence near the root.
- It implies that the error will decrease linearly in the error versus number-of-iterations plot.

Newton's Method-I
- In this case our recursive relation was
  x_{n+1} = x_n - f(x_n)/f'(x_n)   (1)
- Subtracting α from both sides, and noting that f(α) = 0 by definition,
  e_{n+1} = e_n - f(x_n)/f'(x_n) = e_n - f(α + e_n)/f'(x_n)

Newton's Method-II
- Expanding f(α + e_n) in a Taylor series, with f(α) = 0, and simplifying:
  e_{n+1} = e_n - (e_n f'(α) + (e_n²/2) f''(α) + ...)/f'(x_n)
  =>  e_{n+1} = - e_n² f''(ξ)/(2 f'(ξ))
- For the method to converge,
  |e_{n+1}/e_n| < 1  =>  |e_n f''(ξ)/(2 f'(ξ))| < 1
- Since e_{n+1} is proportional to e_n², Newton's method has quadratic convergence.

Continuation Method
- Many times the equation may be difficult to solve, as the root is not known and the function is difficult.
- A method called the continuation method is very useful.
- For an arbitrary x0, we can say that x0 is the root of the function f(x) - f(x0).
- If we now define our function as F(x) = f(x) - β f(x0), and use x0 as the guess for β = 0.9, we can find the root because the guess is good.
- We can proceed in this manner successively; by reducing β to 0, the root of f(x) can be found.

Closure
- We understood the meaning of the order of convergence.
- Fixed point iteration has linear convergence.
- Newton's method has quadratic convergence.
- The secant method has an order of convergence of about 1.6 (see the slides that follow, for those who are curious).
- We also understood the concept of the continuation method.

Secant Method-I
- The error analysis for this method is tedious, but very illustrative of the power-law technique.
- In this case our recursive relation was
  x_{n+1} = x_n - f(x_n) / [ (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1}) ]

Secant Method-II
- Substituting x_n = α + e_n into the recursive relation,
  e_{n+1} = e_n - f(α + e_n)(e_n - e_{n-1}) / (f(α + e_n) - f(α + e_{n-1}))
- Expanding each term in a Taylor series about α, with f(α) = 0:
  f(α + e_n) ~ e_n f'(α) + (e_n²/2) f''(α)
  f(α + e_n) - f(α + e_{n-1}) ~ (e_n - e_{n-1}) f'(α) + ((e_n² - e_{n-1}²)/2) f''(α)

Secant Method-III
- Substituting these expansions and dividing numerator and denominator by (e_n - e_{n-1}) f'(α),
  e_{n+1} = e_n - e_n (1 + (e_n/2) f''(α)/f'(α)) / (1 + ((e_n + e_{n-1})/2) f''(α)/f'(α))
- Expanding the denominator binomially and retaining the leading terms,
  e_{n+1} ~ e_n e_{n-1} f''(α)/(2 f'(α))

Secant Method-IV
- As x approaches the root,
  e_{n+1} = e_n e_{n-1} f''(α)/(2 f'(α))   (1)

Secant Method-V
- If we assume that the method is of order p, we can write
  e_{n+1} = a e_n^p  and  e_n = a e_{n-1}^p  =>  e_{n-1} = (e_n/a)^{1/p}
- Eq. (1) can now be written as
  a e_n^p = e_n (e_n/a)^{1/p} f''(α)/(2 f'(α))

Secant Method-VI
- By reorganising terms, we get
  a^{1 + 1/p} e_n^{p - 1 - 1/p} = f''(α)/(2 f'(α))
- Since the power of e_n has to be homogeneous (the right-hand side is a constant), we can write
  p - 1 - 1/p = 0  =>  p² - p - 1 = 0
  =>  p = (1 + sqrt(5))/2

Secant Method-VII
- If p < 1, the method will diverge. Thus, when the method converges, p > 1, which leads to
  p = (1 + sqrt(5))/2 ~ 1.62
- Thus the method is inferior to Newton's method, but it needs only one function evaluation per step and hence is competitive.

Numerical Methods in Engineering (Solution of Linear Equations-1)
Kannan Iyer, Kannan.iyer@iitb.ac.in
Department of Mechanical Engineering, Indian Institute of Technology Jammu

Review
- Understood that:
  - The fixed point iteration method has linear convergence
  - Newton's method has quadratic convergence
  - The secant method has a rate of convergence of 1.62
  - For difficult equations we can use the continuation method to find the root

General
- The solution of a set of linear equations arises in:
  - Network analysis
  - Curve fitting
  - Solution of ODEs and PDEs
- Its solution forms a basic part of most numerical algorithms.

Matrix Notation
Consider a set of linear equations:
  a11 x1 + a12 x2 + a13 x3 = b1
  a21 x1 + a22 x2 + a23 x3 = b2
  a31 x1 + a32 x2 + a33 x3 = b3
The above set may be represented as [A]{x} = {b}:
  | a11 a12 a13 | | x1 |   | b1 |
  | a21 a22 a23 | | x2 | = | b2 |
  | a31 a32 a33 | | x3 |   | b3 |
where [A] is the coefficient matrix and {b} is the source vector.

Types of Coefficient Matrices-I

Diagonal Matrix:          Identity Matrix:
  | d11  0   0  |           | 1 0 0 |
  |  0  d22  0  |           | 0 1 0 |
  |  0   0  d33 |           | 0 0 1 |

Tri-diagonal Matrix:
  | a11 a12  0   0  |
  | a21 a22 a23  0  |
  |  0  a32 a33 a34 |
  |  0   0  a43 a44 |

Types of Coefficient Matrices-II

Lower Triangular Matrix [L]:     Upper Triangular Matrix [U]:
  | a11  0   0   0  |              | a11 a12 a13 a14 |
  | a21 a22  0   0  |              |  0  a22 a23 a24 |
  | a31 a32 a33  0  |              |  0   0  a33 a34 |
  | a41 a42 a43 a44 |              |  0   0   0  a44 |

Matrix Operations-I

Addition and Subtraction
  [A] + [B] = [C]  means  aij + bij = cij
  | 3 6 2 |   | 2 6 4 |   |  5 12  6 |
  | 4 1 7 | + | 3 7 1 | = |  7  8  8 |
  | 8 5 1 |   | 6 1 5 |   | 14  6  6 |

Matrix Operations-II

Multiplication
  A(PxQ) x B(QxR) = C(PxR),  cij = sum over k = 1..Q of aik bkj
A 3x2 matrix times a 2x4 matrix gives a 3x4 matrix:
  | 1 2 |                   | 19 16 13 10 |
  | 3 4 | x | 9 8 7 6 |  =  | 47 40 33 26 |
  | 5 6 |   | 5 4 3 2 |     | 75 64 53 42 |

Determinant
  | a11 a12 a13 |
  | a21 a22 a23 | = a11 C11 + a12 C12 + a13 C13
  | a31 a32 a33 |

              | a22 a23 |               | a21 a23 |               | a21 a22 |
  = a11 (-1)² | a32 a33 |  + a12 (-1)³  | a31 a33 |  + a13 (-1)⁴  | a31 a32 |

Towards Solution of Linear Equations-I
If
  | d11  0   0  | | x1 |   | b1 |
  |  0  d22  0  | | x2 | = | b2 |
  |  0   0  d33 | | x3 |   | b3 |
then
  x1 = b1/d11,  x2 = b2/d22,  x3 = b3/d33
Logic: For i = 1, N:  xi = bi/dii

Towards Solution of Linear Equations-II
If
  | l11  0   0  | | x1 |   | b1 |
  | l21 l22  0  | | x2 | = | b2 |
  | l31 l32 l33 | | x3 |   | b3 |
then
  x1 = b1/l11
  x2 = (b2 - l21 x1)/l22
  x3 = (b3 - l31 x1 - l32 x2)/l33
Logic (forward sweep): x1 = b1/l11; for i = 2, N:
  xi = (bi - sum over j = 1..i-1 of lij xj)/lii

Towards Solution of Linear Equations-III
If
  | u11 u12 u13 | | x1 |   | b1 |
  |  0  u22 u23 | | x2 | = | b2 |
  |  0   0  u33 | | x3 |   | b3 |
then
  x3 = b3/u33
  x2 = (b2 - u23 x3)/u22
  x1 = (b1 - u12 x2 - u13 x3)/u11
Logic (backward sweep): xN = bN/uNN; for i = N-1, 1:
  xi = (bi - sum over j = i+1..N of uij xj)/uii
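A minimal sketch of both sweeps on small illustrative 3x3 systems (the numerical values are assumptions, chosen so that each solution is 1, 2, 2):

program triangular_sweeps
  implicit none
  integer, parameter :: n = 3
  real(8) :: L(n,n), U(n,n), b(n), x(n)
  integer :: i

  ! forward sweep on a lower triangular system L x = b
  ! (reshape fills column by column)
  L = reshape([2d0, 1d0, 1d0,  0d0, 3d0, 2d0,  0d0, 0d0, 4d0], [n,n])
  b = [2d0, 7d0, 13d0]
  do i = 1, n
     x(i) = (b(i) - dot_product(L(i,1:i-1), x(1:i-1))) / L(i,i)
  end do
  print *, 'forward sweep : ', x          ! expect 1, 2, 2

  ! backward sweep on an upper triangular system U x = b
  U = reshape([1d0, 0d0, 0d0,  2d0, 3d0, 0d0,  1d0, 1d0, 2d0], [n,n])
  b = [7d0, 8d0, 4d0]
  do i = n, 1, -1
     x(i) = (b(i) - dot_product(U(i,i+1:n), x(i+1:n))) / U(i,i)
  end do
  print *, 'backward sweep: ', x          ! expect 1, 2, 2
end program triangular_sweeps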

Numerical Methods in Engineering (Solution of Linear Equations-2)
Kannan Iyer, Kannan.iyer@iitb.ac.in
Department of Mechanical Engineering, Indian Institute of Technology Jammu

Review
- Began the solution of linear equations
- The motivation was from several engineering applications
- Understood the definitions of diagonal, tri-diagonal, upper triangular and lower triangular matrices
- Understood the logic for the solution of equations when the coefficient matrix is either diagonal, upper triangular or lower triangular

Agenda for Today
Understand direct solvers:
- Non-iterative
- Subject to round-off errors
- Used for smaller systems (N < 10), unless the matrix is well conditioned
- Involve reduction of the coefficient matrix to a diagonal matrix, an upper triangular matrix, or a lower triangular matrix
- This is followed by the sweeps discussed earlier

Cramer's Rule-I
  | 3 -1  2 | | x1 |   | 12 |         | 3 |
  | 1  2  3 | | x2 | = | 11 |,  Sol = | 1 |
  | 2 -2 -1 | | x3 |   |  2 |         | 2 |
Solution:
       | 12 -1  2 |               |  3 12  2 |
       | 11  2  3 |               |  1 11  3 |
       |  2 -2 -1 |               |  2  2 -1 |
  x1 = -----------          x2 = -----------
       |  3 -1  2 |               |  3 -1  2 |
       |  1  2  3 |               |  1  2  3 |
       |  2 -2 -1 |               |  2 -2 -1 |

Cramer's Rule-II
       |  3 -1 12 |
       |  1  2 11 |
       |  2 -2  2 |
  x3 = -----------
       |  3 -1  2 |
       |  1  2  3 |
       |  2 -2 -1 |
Number of multiplication operations ~ (n-1)(n+1)!
  For n = 10,  M ~ 3.6x10^8
  For n = 100, M ~ 10^157  ->  severe round-off error

Gauss Elimination-I
  | 3 -1  2 | | x1 |   | 12 |         | 3 |
  | 1  2  3 | | x2 | = | 11 |,  Sol = | 1 |
  | 2 -2 -1 | | x3 |   |  2 |         | 2 |
Augmented matrix:
  | 3 -1  2 | 12 |
  | 1  2  3 | 11 |
  | 2 -2 -1 |  2 |

Gauss Operations
Augmented matrix:
  | 3 -1  2 | 12 |
  | 1  2  3 | 11 |
  | 2 -2 -1 |  2 |
3R2 - R1 and 3R3 - 2R1 give:
  | 3 -1  2 |  12 |
  | 0  7  7 |  21 |
  | 0 -4 -7 | -18 |
7R3 + 4R2 gives:
  | 3 -1   2 |  12 |
  | 0  7   7 |  21 |
  | 0  0 -21 | -42 |
Back substitution then gives Sol = {3, 1, 2}.

Comments on Gauss Elimination
- Involves approximately N³/3 + N² - N/3 multiplicative operations
- For N = 10, this is ~430
- Errors propagate and hence it is not used for N > 10
- One should always check the residues
- Pivoting and scaling reduce error propagation

Pivoting and Scaling
- Diagonal dominance reduces error propagation
- Rearranging the coefficients to get diagonal dominance is called pivoting
- Switching only rows to improve diagonal dominance is called partial pivoting
- Dividing all the coefficients by the maximum value in a row is called scaling
- These can easily be incorporated and are always recommended

Gauss Jordan Elimination
Augmented matrix, with R1* = R1/3:
  | 1 -1/3  2/3 |  4 |
R2 - R1* and R3 - 2R1* give:
  | 1 -1/3  2/3 |  4 |
  | 0  7/3  7/3 |  7 |
  | 0 -4/3 -7/3 | -6 |
With R2** = 3R2/7, the operations R1 - R2**/(-3) and R3 - 4R2**/(-3) give:
  | 1  0  1 |  5 |
  | 0  1  1 |  3 |
  | 0  0 -1 | -2 |
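A minimal sketch of Gauss elimination with partial pivoting and back substitution, applied to the 3x3 system from the slides:

program gauss_demo
  implicit none
  integer, parameter :: n = 3
  real(8) :: a(n,n), b(n), x(n), factor, tmp(n), tb
  integer :: i, k, p

  ! reshape fills column by column: rows are [3,-1,2], [1,2,3], [2,-2,-1]
  a = reshape([3d0,1d0,2d0, -1d0,2d0,-2d0, 2d0,3d0,-1d0], [n,n])
  b = [12d0, 11d0, 2d0]

  do k = 1, n-1
     p = (k-1) + maxloc(abs(a(k:n,k)), dim=1)   ! partial pivoting: largest |a(i,k)|
     if (p /= k) then
        tmp = a(k,:);  a(k,:) = a(p,:);  a(p,:) = tmp
        tb = b(k);     b(k) = b(p);      b(p) = tb
     end if
     do i = k+1, n
        factor = a(i,k) / a(k,k)
        a(i,k:n) = a(i,k:n) - factor * a(k,k:n)
        b(i) = b(i) - factor * b(k)
     end do
  end do

  do i = n, 1, -1                               ! backward sweep
     x(i) = (b(i) - dot_product(a(i,i+1:n), x(i+1:n))) / a(i,i)
  end do
  print *, 'solution:', x                       ! expect 3, 1, 2
end program gauss_demo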

Comments on Gauss Jordan Elimination
Finishing the elimination with R3* = R3/(-1), then R1 - R3* and R2 - R3*:
  | 1 0 0 | 3 |
  | 0 1 0 | 1 |
  | 0 0 1 | 2 |
- M_GJ = 0.5N³ + N² - 0.5N; for N = 10, M_GJ = 595
- Requires 50% more effort than the Gauss procedure
- The procedure is useful for getting the inverse

Towards Solution of Linear Equations-III
Applying the same operations to an identity matrix on the right yields the inverse:
  | 3 -1  2 | 1 0 0 |        | 1 0 0 | -0.5714  0.7143  1 |
  | 1  2  3 | 0 1 0 |   ->   | 0 1 0 | -1       1       1 |
  | 2 -2 -1 | 0 0 1 |        | 0 0 1 |  0.8571 -0.5714 -1 |

L-U Decomposition-I
- If the problem has to be repeated with several source vectors for the same coefficient matrix, L-U decomposition is recommended:
  A = L U
- Such a decomposition speeds up the calculation
- In general two methods are available:
  - Crout's decomposition
  - Doolittle's decomposition

Crout's Decomposition
  | l11  0   0  | | 1 u12 u13 |   | a11 a12 a13 |
  | l21 l22  0  | | 0  1  u23 | = | a21 a22 a23 |
  | l31 l32 l33 | | 0  0   1  |   | a31 a32 a33 |
Working through the product column by column:
  a11 = l11,  a21 = l21,  a31 = l31
  a12 = l11 u12,  a13 = l11 u13
  a22 = l21 u12 + l22,  a32 = l31 u12 + l32
  a23 = l21 u13 + l22 u23
  a33 = l31 u13 + l32 u23 + l33

Logic for Crout's Method
  li1 = ai1, for i = 1, n
  u1j = a1j / l11, for j = 2, n
  for j = 2, n-1:
    lij = aij - sum over k = 1..j-1 of lik ukj   (for i = j, n)
    uji = (aji - sum over k = 1..j-1 of ljk uki) / ljj   (for i = j+1, n)
  lnn = ann - sum over k = 1..n-1 of lnk ukn

Solution for Crout's Method
  A x = b  =>  L U x = b
Introducing U x = d   (1)
this implies L d = b   (2)
- Since [L] and {b} are known, {d} can be found from Eq. (2) by a forward sweep
- Once {d} is found, {x} can be found from Eq. (1) by a backward sweep
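A minimal sketch of Crout's decomposition (unit diagonal in U), followed by the forward sweep L d = b and the backward sweep U x = d, again using the slides' 3x3 system:

program crout_demo
  implicit none
  integer, parameter :: n = 3
  real(8) :: a(n,n), l(n,n), u(n,n), b(n), d(n), x(n)
  integer :: i, j

  a = reshape([3d0,1d0,2d0, -1d0,2d0,-2d0, 2d0,3d0,-1d0], [n,n])
  b = [12d0, 11d0, 2d0]
  l = 0d0;  u = 0d0
  do i = 1, n
     u(i,i) = 1d0
  end do

  do j = 1, n
     do i = j, n        ! column j of L
        l(i,j) = a(i,j) - dot_product(l(i,1:j-1), u(1:j-1,j))
     end do
     do i = j+1, n      ! row j of U
        u(j,i) = (a(j,i) - dot_product(l(j,1:j-1), u(1:j-1,i))) / l(j,j)
     end do
  end do

  do i = 1, n           ! forward sweep: L d = b
     d(i) = (b(i) - dot_product(l(i,1:i-1), d(1:i-1))) / l(i,i)
  end do
  do i = n, 1, -1       ! backward sweep: U x = d (U has unit diagonal)
     x(i) = d(i) - dot_product(u(i,i+1:n), x(i+1:n))
  end do
  print *, 'solution:', x      ! expect 3, 1, 2
end program crout_demo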

Comments on Crout's Method-I
- M_Crout = M_Gauss
- But back substitution costs only M = n² - n
- Therefore, for a large set with repeated right-hand sides, one may save substantial effort (n³ vs n²)
- It is possible to store the coefficients of [L] and [U] in [A] itself, as [A] is no longer required; this saves memory
- Since memory is cheap, this may no longer be required

Comments on Crout's Method-II
- With the logic written accordingly, L and U can be stored in A itself to conserve memory:
  | l11 u12 u13 |
  | l21 l22 u23 |
  | l31 l32 u33 |
- The indices have to be carefully addressed
- Other decompositions are similar

Numerical Methods (Solution of Linear Equations-3)
Kannan Iyer, Kannan.iyer@iitb.ac.in
Department of Mechanical Engineering, Indian Institute of Technology Jammu

Review
- Began with the direct solution of linear equations
- Studied Gauss elimination
- Understood that pivoting improves accuracy; a concrete example will be studied later during the analysis of error propagation
- Studied the Gauss Jordan method and understood that it can be used to compute the inverse
- Studied Crout's decomposition and understood that it is an efficient method for repetitive solution with different RHS vectors

Agenda for Today
- Understand the method to solve a tridiagonal system of equations
- Study iterative methods:
  - Jacobi method
  - Gauss-Seidel method
  - Successive Over-Relaxation (SOR) method
- Discuss possible termination criteria

Tridiagonal Matrix Solution
  | a11 a12  0   0  | | x1 |   | b1 |
  | a21 a22 a23  0  | | x2 | = | b2 |
  |  0  a32 a33 a34 | | x3 |   | b3 |
  |  0   0  a43 a44 | | x4 |   | b4 |
Dividing the first row by a11:
  | 1   a12/a11  0   0  | b1/a11 |
  | a21 a22     a23  0  | b2     |
  | 0   a32     a33 a34 | b3     |
  | 0   0       a43 a44 | b4     |

Tridiagonal Matrix Solution (Cont'd)
Writing a12* = a12/a11 and b1* = b1/a11, the operation R2 - a21 R1 gives:
  | 1   a12*            0    0  | b1*          |
  | 0   a22 - a21 a12*  a23  0  | b2 - a21 b1* |
  | 0   a32             a33 a34 | b3           |
  | 0   0               a43 a44 | b4           |
Dividing the second row by (a22 - a21 a12*):
  | 1  a12*  0                       0  | b1*                             |
  | 0  1     a23/(a22 - a21 a12*)    0  | (b2 - a21 b1*)/(a22 - a21 a12*) |
  | 0  a32   a33                    a34 | b3                              |
  | 0  0     a43                    a44 | b4                              |

Tridiagonal Matrix Solution (Cont'd)
One can proceed this way and show that the final matrix shall be
  | 1  a12*  0     0    | b1* |
  | 0  1     a23*  0    | b2* |
  | 0  0     1     a34* | b3* |
  | 0  0     0     1    | b4* |

Thomas Algorithm (TDMA)-I
  a12* = a12/a11,  b1* = b1/a11
For i = 2 to N:
  a*_{i,i+1} = a_{i,i+1} / (a_{i,i} - a*_{i-1,i} a_{i,i-1})
  b*_i = (b_i - b*_{i-1} a_{i,i-1}) / (a_{i,i} - a*_{i-1,i} a_{i,i-1})
(in the last row only b*_N is needed, since there is no super-diagonal element)

Thomas Algorithm (TDMA)-II
  | 1  a12*  0     0    | | x1 |   | b1* |
  | 0  1     a23*  0    | | x2 | = | b2* |
  | 0  0     1     a34* | | x3 |   | b3* |
  | 0  0     0     1    | | x4 |   | b4* |
  =>  x4 = b4*,  xi = bi* - x_{i+1} a*_{i,i+1}
Back Substitution Logic:
  xN = bN*
  For i = N-1, N-2, ..., 1:  xi = bi* - x_{i+1} a*_{i,i+1}

Thomas Algorithm (TDMA)-III
- It is possible to store A as an (N,3) array to conserve memory, with the logic written accordingly:
  a_{i,i-1} -> a_{i,1},  a_{i,i} -> a_{i,2},  a_{i,i+1} -> a_{i,3}
  | a11 a12  0   0  |        | 0   a11 a12 |
  | a21 a22 a23  0  |   ->   | a21 a22 a23 |
  |  0  a32 a33 a34 |        | a32 a33 a34 |
  |  0   0  a43 a44 |        | a43 a44 0   |
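A minimal sketch of the Thomas algorithm in the banded (N,3) storage just described: column 1 holds the sub-diagonal, column 2 the diagonal, column 3 the super-diagonal. The test system is an assumed diagonally dominant one:

program tdma_demo
  implicit none
  integer, parameter :: n = 4
  real(8) :: a(n,3), b(n), x(n), denom
  integer :: i

  a(:,1) = [ 0d0, -1d0, -1d0, -1d0]   ! sub-diagonal (unused in row 1)
  a(:,2) = [ 4d0,  4d0,  4d0,  4d0]   ! diagonal
  a(:,3) = [-1d0, -1d0, -1d0,  0d0]   ! super-diagonal (unused in row n)
  b      = [ 3d0,  2d0,  2d0,  3d0]

  ! forward elimination: store a*(i,i+1) in a(i,3) and b*(i) in b(i)
  a(1,3) = a(1,3) / a(1,2)
  b(1)   = b(1)   / a(1,2)
  do i = 2, n
     denom  = a(i,2) - a(i,1) * a(i-1,3)
     a(i,3) = a(i,3) / denom
     b(i)   = (b(i) - a(i,1) * b(i-1)) / denom
  end do

  ! back substitution
  x(n) = b(n)
  do i = n-1, 1, -1
     x(i) = b(i) - a(i,3) * x(i+1)
  end do
  print *, 'solution:', x      ! expect 1, 1, 1, 1
end program tdma_demo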

Closing Remarks on Direct Solvers
- These methods are subject to error propagation
- The error propagation can be indicated by a term called the condition number
- Ill-conditioned systems are difficult to solve
- Several specialised methods exist; refer to your book and advanced linear algebra texts
- This exposure is sufficient for most general problems

Iterative Methods-I
- For large systems which are sparse, iterative methods are the most widely used
- These naturally occur during the solution of ODEs and PDEs
- These methods do not suffer from the propagation of round-off errors
- The set of equations has to be diagonally dominant to obtain convergence
- This is generally a limitation, but where they are used it can be achieved by some techniques

Iterative Methods-II
Consider
  a11 x1 + a12 x2 + a13 x3 = b1
  a21 x1 + a22 x2 + a23 x3 = b2
  a31 x1 + a32 x2 + a33 x3 = b3
Rewriting,
  x1 = (b1 - a12 x2 - a13 x3)/a11
  x2 = (b2 - a21 x1 - a23 x3)/a22
  x3 = (b3 - a31 x1 - a32 x2)/a33
One can start with a guess and iterate.

Jacobi Iteration
  x1^N = (b1 - a12 x2^{N-1} - a13 x3^{N-1})/a11
  x2^N = (b2 - a21 x1^{N-1} - a23 x3^{N-1})/a22
  x3^N = (b3 - a31 x1^{N-1} - a32 x2^{N-1})/a33
By adding and subtracting xi^{N-1} on both sides,
  x1^N = x1^{N-1} + (b1 - a11 x1^{N-1} - a12 x2^{N-1} - a13 x3^{N-1})/a11
  x2^N = x2^{N-1} + (b2 - a21 x1^{N-1} - a22 x2^{N-1} - a23 x3^{N-1})/a22
  x3^N = x3^{N-1} + (b3 - a31 x1^{N-1} - a32 x2^{N-1} - a33 x3^{N-1})/a33

Gauss-Seidel Iteration
- Here new values are used as soon as they are available:
  x1^N = x1^{N-1} + (b1 - a11 x1^{N-1} - a12 x2^{N-1} - a13 x3^{N-1})/a11
  x2^N = x2^{N-1} + (b2 - a21 x1^N - a22 x2^{N-1} - a23 x3^{N-1})/a22
  x3^N = x3^{N-1} + (b3 - a31 x1^N - a32 x2^N - a33 x3^{N-1})/a33
- Where Jacobi converges, Gauss-Seidel converges faster

Successive Over-Relaxation (SOR)
- A sufficient condition for convergence of both the Jacobi and Gauss-Seidel iterations is diagonal dominance:
  |aii| >= sum over j = 1..N, j /= i, of |aij|,  for i = 1, N
- Where the above criterion is satisfied, it is possible to accelerate convergence further by introducing an over-relaxation factor
- The value of the over-relaxation factor lies between 1 and 2. Optimum values are available for some specific forms of the coefficient matrix; in general it should be found by trials

Logic for SOR
For i = 1, N:
  xi = xi + w (bi - sum over j = 1..N of aij xj) / aii
where w is the over-relaxation factor.

Norms of Vectors
The p-norm of a vector is defined as
  ||x||_p = ( sum over j = 1..N of |xj|^p )^(1/p)
  ||x||_1   : sum of the absolute values of the components
  ||x||_inf : absolute value of the largest component
  ||x||_2   : Euclidean norm

Termination Criterion
Iterations may be stopped when the change in the solution is small in an absolute sense,
  ||x^{N+1} - x^N|| < eps
or in a relative sense,
  ||x^{N+1} - x^N|| / ||x^N|| < eps
or when the residual norm ||r^N|| is smaller than a prescribed value.
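A minimal sketch of the SOR logic with the relative-change termination criterion, on an assumed diagonally dominant tridiagonal test system (w = 1 recovers Gauss-Seidel; w = 1.2 is an arbitrary choice):

program sor_demo
  implicit none
  integer, parameter :: n = 4
  real(8) :: a(n,n), b(n), x(n), w, sigma, corr, dx, xnorm, eps
  integer :: i, iter

  a = 0d0
  do i = 1, n
     a(i,i) = 4d0
     if (i > 1) a(i,i-1) = -1d0
     if (i < n) a(i,i+1) = -1d0
  end do
  b = [3d0, 2d0, 2d0, 3d0]
  x = 0d0;  w = 1.2d0;  eps = 1.0d-10

  do iter = 1, 1000
     dx = 0d0;  xnorm = 0d0
     do i = 1, n
        sigma = dot_product(a(i,:), x)   ! uses latest values: Gauss-Seidel sweep
        corr  = w * (b(i) - sigma) / a(i,i)
        x(i)  = x(i) + corr
        dx = dx + corr**2
        xnorm = xnorm + x(i)**2
     end do
     if (sqrt(dx) < eps * sqrt(xnorm)) exit   ! ||x_new - x_old|| < eps ||x||
  end do
  print *, 'solution:', x, ' iterations:', iter    ! expect 1, 1, 1, 1
end program sor_demo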

Numerical Methods (Solution of a Set of Non-Linear Equations)
Kannan Iyer, Kannan.iyer@iitb.ac.in
Department of Mechanical Engineering, Indian Institute of Technology Jammu

Review
- Looked at the principle of TDMA
- Got a glimpse of iterative methods
- Understood a few termination criteria
- Today we shall look at methods for the solution of a set of non-linear equations

General
Consider a set of non-linear equations:
  f1(x1, x2, ..., xn) = 0
  f2(x1, x2, ..., xn) = 0
  ............................
  fn(x1, x2, ..., xn) = 0
A simple example in two variables is:
  x² + xy - 10 = 0
  y + 3xy² - 57 = 0
Solution: x = 2, y = 3

Iteration Method-I
- We can rewrite the above equations as:
  x = (10 - x²)/y = g1(x, y)
  y = 57 - 3xy² = g2(x, y)
- The above equations, with an initial guess of x = 1.5 and y = 3.5, diverge.
1
Iteration Method-II
- If we rewrite the equations as:
  x = sqrt(10 - xy) = g1(x, y)
  y = sqrt((57 - y)/(3x)) = g2(x, y)
- The above equations converge to the correct solution with an initial guess of x = 1.5 and y = 3.5.
- The condition for convergence is
  |dg1/dx| + |dg2/dx| < 1  and  |dg1/dy| + |dg2/dy| < 1

Newton-Raphson Method-I
Principle: expanding the functions linearly in the neighbourhood of (x, y),
  f1(x + Dx, y + Dy) = f1(x, y) + (df1/dx) Dx + (df1/dy) Dy
For an arbitrary (x, y) we seek Dx and Dy such that
  f1(x + Dx, y + Dy) = 0

Newton-Raphson Method-II
  (df1/dx) Dx + (df1/dy) Dy = -f1(x, y)
By similar reasoning we can write
  (df2/dx) Dx + (df2/dy) Dy = -f2(x, y)
- The above equations can be solved using a linear solver.
- The derivatives can be found by finite differences.

Newton-Raphson Method-III
The above concepts can be extended to a set of N equations:
  | df1/dx1 df1/dx2 .... df1/dxn | | Dx1 |   | -f1 |
  | df2/dx1 df2/dx2 .... df2/dxn | | Dx2 | = | -f2 |
  | ....    ....    .... ....    | | ... |   | ... |
  | dfn/dx1 dfn/dx2 .... dfn/dxn | | Dxn |   | -fn |
The coefficient matrix is called the Jacobian matrix.
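A minimal sketch of Newton-Raphson for the two-equation example above, with the 2x2 Jacobian written analytically and the linear system solved directly by Cramer's rule (the starting guess is the slides' 1.5, 3.5):

program newton_raphson_demo
  implicit none
  real(8) :: x, y, f1, f2, j11, j12, j21, j22, det, dx, dy
  integer :: iter

  x = 1.5d0;  y = 3.5d0
  do iter = 1, 50
     f1 = x**2 + x*y - 10d0
     f2 = y + 3d0*x*y**2 - 57d0
     j11 = 2d0*x + y;   j12 = x                 ! df1/dx, df1/dy
     j21 = 3d0*y**2;    j22 = 1d0 + 6d0*x*y     ! df2/dx, df2/dy
     det = j11*j22 - j12*j21
     dx = (-f1*j22 + f2*j12) / det              ! solve J {dx,dy} = {-f1,-f2}
     dy = (-f2*j11 + f1*j21) / det
     x = x + dx;  y = y + dy
     if (abs(dx) + abs(dy) < 1.0d-12) exit
  end do
  print *, 'x =', x, ' y =', y                  ! expect 2 and 3
end program newton_raphson_demo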

Newton-Raphson Method-IV
- The method converges with good initial guesses.
- The problem is to give good guesses.
- The continuation method is one powerful method to overcome this problem.

Continuation Method-I
- Here a set whose roots are known is considered.
- The simplest way is to generate it from the original functions with an arbitrary initial guess:
  F1(x, y) = f1(x, y) - theta f1(x0, y0)
  F2(x, y) = f2(x, y) - theta f2(x0, y0)
- For theta = 1, (x0, y0) are the roots of F1 and F2.
- The value of theta is gradually reduced from 1 to 0, and the set is solved every time with the previous roots as the guess.

Continuation Method-II
Consider the example on Slide 3:
  f1(x, y) = x² + xy - 10 = 0
  f2(x, y) = y + 3xy² - 57 = 0
Let the guess be x0 = 0, y0 = 0:
  f1(x0, y0) = -10
  f2(x0, y0) = -57

Continuation Method-III
Define a new set:
  F1(x, y) = x² + xy - 10 - theta(-10) = 0
  F2(x, y) = y + 3xy² - 57 - theta(-57) = 0
Note that for theta = 1, (0, 0) is the root.
For theta = 0.9 we assume (0, 0) is close to the root and we solve for the roots with this initial guess.
We proceed similarly with diminishing theta till theta = 0. These roots are the solution of our original set.

Numerical Methods (Interpolation)
Kannan Iyer, Kannan.iyer@iitb.ac.in
Department of Mechanical Engineering, Indian Institute of Technology Jammu

General-I
- Many a time we have values of a function at some discrete intervals of a variable, and we need to obtain the value of the function at some value of the variable within the range.
- The question is whether one should interpolate, or should one estimate by curve fitting using regression?
- The answer is another question: how accurate is the data?
- If very accurate, interpolate; if not, fit a curve by regression.
11:23 PM 3/19 11:23 PM 4/19


General-II General-III

`
 What is common between both?
 Both involve functional approximation
 What is the difference?
 In interpolation the curve passes through all y y
points, while in regression, it may not pass
through any point!

x x

Interpolation Regression

1
Interpolation
- Interpolation fits a curve passing through the existing data.
- The number of degrees of freedom equals the number of points.
- The functions chosen are at the discretion of the user.
- Polynomials are the most common functions.
- The Nth-order polynomial which passes through N+1 points is unique.
- Many concepts exist. Lagrange interpolation is popular and the easiest to code.

Lagrange Interpolation-I
First-order Lagrange interpolation:
  f1(x) = f(x1) (x - x2)/(x1 - x2) + f(x2) (x - x1)/(x2 - x1)
Second-order Lagrange interpolation:
  f2(x) = f(x1) (x - x2)(x - x3)/((x1 - x2)(x1 - x3))
        + f(x2) (x - x1)(x - x3)/((x2 - x1)(x2 - x3))
        + f(x3) (x - x1)(x - x2)/((x3 - x1)(x3 - x2))

Lagrange Interpolation-II
General form:
  fN(x) = sum over i = 1..N+1 of Li(x) f(xi)
where
  Li(x) = product over j = 1..N+1, j /= i, of (x - xj)/(xi - xj)
- Usually it is better to use piecewise lower-order polynomials than a single higher-order polynomial.

Newton's Forward Difference Polynomial-I
The forward difference operator is illustrated in the following table:
  x    f    Δf              Δ²f                   Δ³f
  x0   f0   Δf0 = f1 - f0   Δ²f0 = f2 - 2f1 + f0  Δ³f0 = f3 - 3f2 + 3f1 - f0
  x1   f1   Δf1 = f2 - f1   Δ²f1 = f3 - 2f2 + f1  --
  x2   f2   Δf2 = f3 - f2   --                    --
  x3   f3   --              --                    --
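A minimal sketch of the general Lagrange formula, evaluating the polynomial through the f = 1/x table used later in these slides at x = 3.44:

program lagrange_demo
  implicit none
  integer, parameter :: np = 4
  real(8) :: xi(np), fi(np), x, p, li
  integer :: i, j

  xi = [3.4d0, 3.5d0, 3.6d0, 3.7d0]
  fi = 1d0 / xi
  x = 3.44d0

  p = 0d0
  do i = 1, np
     li = 1d0                      ! build L_i(x) = prod (x - x_j)/(x_i - x_j)
     do j = 1, np
        if (j /= i) li = li * (x - xi(j)) / (xi(i) - xi(j))
     end do
     p = p + li * fi(i)
  end do
  print *, 'interpolated:', p, '  exact:', 1d0/x   ! ~0.2906977
end program lagrange_demo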

Newton's Forward Difference Polynomial-II
Newton's forward difference polynomials can be expressed as follows.
First order:
  P1(x) = f(x0) + s Δf(x0),  where s = (x - x0)/h
Second order:
  P2(x) = f(x0) + s Δf(x0) + (s(s-1)/2!) Δ²f(x0)
Nth order:
  PN(x) = f(x0) + sum over i = 1..N of C(s, i) Δ^i f(x0)
where C(s, i) = s(s-1)...(s-i+1)/i! is the binomial coefficient.

Newton's Forward Difference Polynomial-III
The concept that is employed is to expand the function at a point x around x0
(figure: a uniform grid x-2, x-1, x0, x1, x2, x3 with spacing h; the point x lies a distance sh from x0):
  f(x0 + sh) = f(x0) + f'(x0) sh + f''(x0) (sh)²/2! + ...
             + f^(n)(x0) (sh)^n/n! + f^(n+1)(ξ) (sh)^(n+1)/(n+1)!
It should also be clear that a polynomial of nth order using n+1 points is unique.

11:23 PM 11/19

Newton’s Forward Difference Polynomial-IV Example-I


 The Newton’s forward interpolating Polynomial f=1/x
formula using certain n+1 points is equivalent
to the Taylor series formula that satisfies the x f f 2 f 3 f
function at the n+1 points 3.4 0.294118 -0.008404 0.000468 0.000040
 To check this out for the first order polynomial 3.5 0.285714 -0.007936 0.000428 --
is straight forward as
3.6 0.277778 -0.007508 -- --
𝑓(𝑥 ) − 𝑓(𝑥 )
⇒ 𝑓(𝑥 + 𝑠ℎ) = 𝑓(𝑥 ) + 𝑠ℎ 3.7 0.270270 -- -- --

𝑃 (𝑥) = 𝑓 + 𝑠Δ𝑓 Find the value of function at 3.44
11:23 PM 12/19

3
Example-II
s = (3.44 - 3.40)/0.1 = 0.4
  f(3.4) = 0.294118                             ->  0.294118
  + (0.4)(-0.008404)                            ->  0.290756
  + [(0.4)(0.4 - 1)/2] (0.000468)               ->  0.290700
  + [(0.4)(0.4 - 1)(0.4 - 2)/6] (-0.000040)     ->  0.290698
Exact value: 0.290697674

Comments
- For a table having four points we could construct up to four terms.
- The (n+1)th term corresponds to the term in s^n if expanded in a Taylor series. Further, s ~ 1.
- Therefore, from the Taylor series expansion we can conclude that the leading error term will be of the order of h^(n+1) d^(n+1)f/dx^(n+1).
- Such an approximation is said to be an (n+1)th-order approximation.
- This implies that if h is reduced by a factor of 2, the error will decrease by a factor of 2^(n+1).

Newton's Backward Difference Polynomial-I
The backward difference operator is illustrated in the following table:
  x    f    ∇f              ∇²f                   ∇³f
  x0   f0   --              --                    --
  x1   f1   ∇f1 = f1 - f0   --                    --
  x2   f2   ∇f2 = f2 - f1   ∇²f2 = f2 - 2f1 + f0  --
  x3   f3   ∇f3 = f3 - f2   ∇²f3 = f3 - 2f2 + f1  ∇³f3 = f3 - 3f2 + 3f1 - f0

Newton's Backward Difference Polynomial-II
Similarly, Newton's backward difference polynomials can be expressed as follows.
First order:
  P1(x) = f(x0) + s ∇f(x0),  where s = (x - x0)/h
Second order:
  P2(x) = f(x0) + s ∇f(x0) + (s(s+1)/2!) ∇²f(x0)
Nth order:
  PN(x) = f(x0) + sum over i = 1..N of C(s+i-1, i) ∇^i f(x0)

Example-I
f = 1/x
  x    f         ∇f         ∇²f       ∇³f
  3.2  0.312500  --         --        --
  3.3  0.303030  -0.009470  --        --
  3.4  0.294118  -0.008912  0.000558  --
  3.5  0.285714  -0.008404  0.000508  -0.000050
Find the value of the function at 3.44.

Example-II
Expanding about x0 = 3.5: s = (3.44 - 3.50)/0.1 = -0.6
  f(3.5) = 0.285714                                 ->  0.285714
  + (-0.6)(-0.008404)                               ->  0.290756
  + [(-0.6 + 1)(-0.6)/2] (0.000508)                 ->  0.290695
  + [(-0.6 + 2)(-0.6 + 1)(-0.6)/6] (-0.000050)      ->  0.290698
Exact value: 0.290697674

Physical Interpretations
(figure: the grid 3.2, 3.3, 3.4, 3.5, 3.6, 3.7 with the target point 3.44 marked; a forward expansion about 3.4 gives s = 0.4, a backward expansion about 3.5 gives s = -0.6, and a backward expansion about 3.7 gives s = -2.6)
Numerical Methods (Linear Regression)
Kannan Iyer, Kannan.iyer@iitb.ac.in
Department of Mechanical Engineering, Indian Institute of Technology Jammu

General
- When imprecise data is handled, regression is recommended.
- Linear regression is the most common.
- Among the functions used, polynomials are the most preferred.
- Piecewise lower order is always better than a single higher-order polynomial.
- In regression, a lower-order polynomial is passed through a large number of points.

Criterion for Regression
- Maximum error minimised?
- Sum of errors minimised?
- Square of the sum of errors minimised?
- With ei = yi - yfit, the least squares criterion implies
  sum of ei² = minimum

Linear Regression-I
- If the function to be fitted is
  f(x) = a1 f1(x) + a2 f2(x)
  then the sum of squares of the error is
  S = sum over i = 1..N of ( yi - (a1 f1(xi) + a2 f2(xi)) )²
- a1 and a2 are determined by using
  dS/da1 = 0  and  dS/da2 = 0

Linear Regression-II
  dS/da1 = -2 sum of ( yi - (a1 f1(xi) + a2 f2(xi)) ) f1(xi) = 0
  =>  sum of yi f1(xi) - a1 sum of (f1(xi))² - a2 sum of f2(xi) f1(xi) = 0
Similarly, dS/da2 = 0 leads to
  sum of yi f2(xi) - a1 sum of f1(xi) f2(xi) - a2 sum of (f2(xi))² = 0

Linear Regression-III
In matrix form:
  | sum (f1(xi))²      sum f1(xi) f2(xi) | | a1 |   | sum y(xi) f1(xi) |
  | sum f1(xi) f2(xi)  sum (f2(xi))²     | | a2 | = | sum y(xi) f2(xi) |
- If f1(x) = 1 and f2(x) = x, then we have a linear fit.
- Note that the coefficient matrix is symmetric.
- a1 and a2 can be obtained by Gauss elimination
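A minimal sketch of a straight-line least-squares fit (f1 = 1, f2 = x), solving the 2x2 normal equations above directly; the data points are assumed, scattered about y = 2 + 0.5 x:

program linfit_demo
  implicit none
  integer, parameter :: n = 5
  real(8) :: x(n), y(n), s1, sx, sxx, sy, sxy, det, a1, a2

  x = [1d0, 2d0, 3d0, 4d0, 5d0]
  y = [2.6d0, 2.9d0, 3.6d0, 4.1d0, 4.4d0]

  s1  = real(n, 8)
  sx  = sum(x);   sxx = sum(x*x)
  sy  = sum(y);   sxy = sum(x*y)

  ! normal equations: | s1 sx ; sx sxx | {a1; a2} = {sy; sxy}
  det = s1*sxx - sx*sx
  a1  = (sy*sxx - sx*sxy) / det       ! intercept
  a2  = (s1*sxy - sx*sy) / det        ! slope
  print *, 'fit: y =', a1, ' +', a2, ' * x'
end program linfit_demo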

Generalized Least Squares-I
- The above procedure can be generalized for any number of functions. This is called generalized least squares.
- Let us denote the F matrix (n x 2 for two functions) as
  F = | f1(x1) f2(x1) |
      | f1(x2) f2(x2) |
      | .....  .....  |
      | f1(xn) f2(xn) |
  F^T = | f1(x1) f1(x2) .... f1(xn) |
        | f2(x1) f2(x2) .... f2(xn) |

Generalized Least Squares-II
  F^T F = | sum (f1(xi))²      sum f1(xi) f2(xi) |
          | sum f1(xi) f2(xi)  sum (f2(xi))²     |
  F^T {y} = | sum y(xi) f1(xi) |
            | sum y(xi) f2(xi) |
Generalized Least Squares-III
- The linear least squares problem can therefore be generalized as
  F^T F {a} = F^T {y}
  where {a} = {a1, a2, ...} and {y} = {y(x1), y(x2), ..., y(xn)}.

Goodness of Fit
- The quality of fit is normally given by the coefficient of regression, which is an obscure quantity.
- In engineering parlance, the more relevant parameter that can be easily connected is the RMS error:
  e_RMS = sqrt( (sum over i = 1..n of ei²) / n )
- Often the error is normalised with the true value to express the RMS error as a fraction or as a percentage.

Power Law Fit-I
- Functions of the form f = c Re^n can be fitted with linear regression.
- Taking the log of both sides, we get
  ln(f) = ln(c) + n ln(Re)
- This is of the form y = a + bx, where y = ln(f) and x = ln(Re).

Power Law Fit-II
- Multiple functions of the form Nu = c Re^n Pr^m can be fitted with linear regression:
  ln(Nu) = ln(c) + n ln(Re) + m ln(Pr)
- This is of the form
  z = a0 f0(x, y) + a1 f1(x, y) + a2 f2(x, y)
  with z = ln(Nu), f0(x, y) = 1, f1(x, y) = ln(Re) and f2(x, y) = ln(Pr).
- Once the solution for a0, a1 and a2 is obtained, note that c = e^a0, n = a1 and m = a2.

Other Functions-I
- Other than power laws, exponential and saturation functions are also popular; these can be transformed into linear regressions.
- Exponential function: y = a e^(bx)
  =>  ln(y) = ln(a) + bx
  (plot of ln(y) vs x: slope = b, intercept = ln(a))

Other Functions-II
- Saturation function: y = ax/(b + x)
  =>  1/y = (1/a) + (b/a)(1/x)
  (plot of 1/y vs 1/x: slope = b/a, intercept = 1/a)

Summary
- Learnt the regression methodology.
- Understood the method of generalized least squares, which is a powerful technique.
Numerical Methods (Spline Interpolation)
Kannan Iyer, Kannan.iyer@iitb.ac.in
Department of Mechanical Engineering, Indian Institute of Technology Jammu

Review
- Interpolation is used to approximate the value of a function from a set of discrete values between two variables.
- Looked at Lagrange interpolation as the easiest polynomial interpolation.
- Looked at Newton's backward and forward interpolating polynomials.
- The values are obtained using difference tables.
- Understood that a polynomial of Nth order has an error of the order of h^(N+1).

Spline Interpolation-I
- It is one of the most powerful interpolation functions.
- Commonly used in graphics software.
- It has all the characteristics of a good interpolation procedure.

Spline Interpolation-II
- If the data is inherently wiggly, it will capture all twists and turns.
(figures: spline curves following wiggly data without the spurious oscillations of a single high-order polynomial)

Principle of Splines
- The concept of a spline is to pass a specified-order curve through each pair of points.
- We know that we can pass a piecewise linear curve between two pairs of points.
- Can we pass a second- or third-order curve between every pair of points?
- The question is how.
- A quadratic spline fits a quadratic between every pair of points, and a cubic spline fits a cubic between every pair.

Cubic Spline
For three points 1, 2, 3 and two cubics Y1 and Y2 (figure):
  Y1 = a1 x³ + b1 x² + c1 x + d1
  Y2 = a2 x³ + b2 x² + c2 x + d2
Conditions:
  Y1(x1) = y1,  Y1(x2) = y2
  Y2(x2) = y2,  Y2(x3) = y3
  Y1'(x2) = Y2'(x2)
  Y1''(x2) = Y2''(x2)
  Y1''(x1) = Y2''(x3) = 0

Natural Cubic Spline-I
(figure: N+1 points labelled 1 ... N+1, with curve Yi spanning points i and i+1)
- In general, if we have N+1 points, we shall have N intermediate curves. This will need 4N conditions.
- For every interior point we shall have four conditions. Thus we shall have 4(N-1) conditions.
- The functional values at the end points give two more conditions.
- Assuming the second derivatives to be zero at the end points closes the relations for a natural spline.

Natural Cubic Spline-II
- The curve Yi(x) joins points i and i+1.
- Since the curve is cubic, the second derivative Y''i(x) will be a straight line.
- Using Lagrange interpolation, we can write
  Y''i(x) = Y''i(xi) (x - xi+1)/(xi - xi+1) + Y''i(xi+1) (x - xi)/(xi+1 - xi)

Natural Cubic Spline-III
- Rewriting the first term,
  Y''i(x) = Y''i(xi) (xi+1 - x)/(xi+1 - xi) + Y''i(xi+1) (x - xi)/(xi+1 - xi)   (1)
  Y''i(x) = Ai Y''i(xi) + Bi Y''i(xi+1)
  where
  Ai = (xi+1 - x)/(xi+1 - xi),  Bi = (x - xi)/(xi+1 - xi)
- Note that
  1 - Ai = 1 - (xi+1 - x)/(xi+1 - xi) = (x - xi)/(xi+1 - xi) = Bi

Natural Cubic Spline-IV
- Integrating Eq. (1),
  Y'i(x) = -Y''i(xi) (xi+1 - x)²/(2(xi+1 - xi)) + Y''i(xi+1) (x - xi)²/(2(xi+1 - xi)) + C1
         = -Y''i(xi) Ai (xi+1 - x)/2 + Y''i(xi+1) Bi (x - xi)/2 + C1   (2)
- Integrating Eq. (2), we get
  Yi(x) = Y''i(xi) (xi+1 - x)³/(6(xi+1 - xi)) + Y''i(xi+1) (x - xi)³/(6(xi+1 - xi)) + C1 x + C2
        = Y''i(xi) Ai (xi+1 - x)²/6 + Y''i(xi+1) Bi (x - xi)²/6 + C1 x + C2   (3)

Natural Cubic Spline-V
- Satisfying the functional values at the terminal points, Yi(xi) = yi and Yi(xi+1) = yi+1, and noting that Ai(xi) = 1, Bi(xi) = 0, Ai(xi+1) = 0, Bi(xi+1) = 1:
  yi = Y''i(xi) (xi+1 - xi)²/6 + C1 xi + C2   (4)
  yi+1 = Y''i(xi+1) (xi+1 - xi)²/6 + C1 xi+1 + C2   (5)
- (Eq. (5) - Eq. (4))/(xi+1 - xi) gives
  C1 = (yi+1 - yi)/(xi+1 - xi) - (Y''i(xi+1) - Y''i(xi))(xi+1 - xi)/6

Natural Cubic Spline-VI
- Similarly, (xi+1 Eq. (4) - xi Eq. (5))/(xi+1 - xi) gives
  C2 = (xi+1 yi - xi yi+1)/(xi+1 - xi) - (xi+1 Y''i(xi) - xi Y''i(xi+1))(xi+1 - xi)/6
- Substitution of C1 and C2 in Eq. (3) and rearranging, we get
  Yi(x) = Ai yi + Bi yi+1 + [ Y''i(xi)(Ai³ - Ai) + Y''i(xi+1)(Bi³ - Bi) ] (xi+1 - xi)²/6   (6)

Natural Cubic Spline-VII
- Now we shall match the first derivative at the interior point xi. From Slide 10 we have
  Ai = (xi+1 - x)/(xi+1 - xi),  Bi = (x - xi)/(xi+1 - xi)   (6+)
  dAi/dx = -1/(xi+1 - xi),  dBi/dx = 1/(xi+1 - xi)
- Further,
  Y'i(x) = (dYi/dAi)(dAi/dx) + (dYi/dBi)(dBi/dx)

Natural Cubic Spline-VIII
- Differentiating Eq. (6) with respect to x, we have
  Y'i(x) = (yi+1 - yi)/(xi+1 - xi)
           - Y''i(xi) (3Ai² - 1)(xi+1 - xi)/6
           + Y''i(xi+1) (3Bi² - 1)(xi+1 - xi)/6

Natural Cubic Spline-IX
- Note that first-derivative continuity requires Y'i(xi) = Y'i-1(xi).
- At x = xi, Ai(xi) = 1 and Bi(xi) = 0, so
  Y'i(xi) = (yi+1 - yi)/(xi+1 - xi) - (2Y''i(xi) + Y''i(xi+1))(xi+1 - xi)/6   (7)
- Changing i to i-1, with Ai-1(xi) = 0 and Bi-1(xi) = 1,
  Y'i-1(xi) = (yi - yi-1)/(xi - xi-1) + (Y''i-1(xi-1) + 2Y''i-1(xi))(xi - xi-1)/6   (8)

Natural Cubic Spline-X
- Eqs. (7) and (8) give the same first derivative at xi. Equating the right-hand sides and rearranging, we get
  Y''(xi-1) (xi - xi-1)/6 + 2Y''(xi) (xi+1 - xi-1)/6 + Y''(xi+1) (xi+1 - xi)/6
    = (yi+1 - yi)/(xi+1 - xi) - (yi - yi-1)/(xi - xi-1)   (11)

Natural Cubic Spline-XI
- The above relation is valid for i = 2, 3, ..., N.
- For a natural spline, Y''(1) = Y''(N+1) = 0.
- Eq. (11) is a tridiagonal set of N-1 equations for Y''(2), Y''(3), ..., Y''(N).
- Once these are computed, the value of y for any x is found using Eq. (6). The Ai and Bi are found using Eq. (6+).

Algorithm
- Input x(i), y(i) for i = 1, 2, 3, ..., N, N+1
- Formulate the TDMA coefficients and RHS (refer to Eq. (11))
- For a natural spline, Y''(1) = Y''(N+1) = 0
- Solve for Y''(2) to Y''(N) using TDMA
- Search for the interval where the interpolation is required
- Once these are computed, the value of y is found using Eq. (6). The Ai and Bi are found using Eq. (6+)
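A minimal sketch of this algorithm, reusing the Thomas-algorithm idea for the tridiagonal system of Eq. (11); the sample data (y = x**3 on 0..4) and the query point are assumptions:

program spline_demo
  implicit none
  integer, parameter :: np = 5            ! N+1 points
  real(8) :: x(np), y(np), ypp(np)        ! ypp holds Y''(i)
  real(8) :: sub(np), dia(np), sup(np), rhs(np), den, a, b, h, xq, yq
  integer :: i

  x = [0d0, 1d0, 2d0, 3d0, 4d0]
  y = x**3
  ypp(1) = 0d0;  ypp(np) = 0d0            ! natural end conditions

  ! interior equations, Eq. (11), for i = 2..N
  do i = 2, np-1
     sub(i) = (x(i) - x(i-1))/6d0
     dia(i) = (x(i+1) - x(i-1))/3d0
     sup(i) = (x(i+1) - x(i))/6d0
     rhs(i) = (y(i+1) - y(i))/(x(i+1) - x(i)) - (y(i) - y(i-1))/(x(i) - x(i-1))
  end do

  ! TDMA forward elimination and back substitution on rows 2..np-1
  do i = 3, np-1
     den    = dia(i) - sub(i)*sup(i-1)/dia(i-1)
     rhs(i) = rhs(i) - sub(i)*rhs(i-1)/dia(i-1)
     dia(i) = den
  end do
  ypp(np-1) = rhs(np-1)/dia(np-1)
  do i = np-2, 2, -1
     ypp(i) = (rhs(i) - sup(i)*ypp(i+1))/dia(i)
  end do

  ! search for the interval, then evaluate via Eq. (6) with Ai, Bi of Eq. (6+)
  xq = 2.5d0
  i = 1
  do while (x(i+1) < xq)
     i = i + 1
  end do
  h = x(i+1) - x(i)
  a = (x(i+1) - xq)/h                     ! Ai
  b = (xq - x(i))/h                       ! Bi
  yq = a*y(i) + b*y(i+1) + (ypp(i)*(a**3 - a) + ypp(i+1)*(b**3 - b))*h*h/6d0
  print *, 'spline value at', xq, ':', yq, ' (exact x**3 =', xq**3, ')'
end program spline_demo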

Numerical Methods in Engineering (Numerical Differentiation and Integration)
Kannan Iyer, kannan.iyer@iitjammu.ac.in
Department of Mechanical Engineering, Indian Institute of Technology, Jammu

Numerical Differentiation
Motivation for study
- Obtaining the derivative at a point from a table of functional data:
  - Obtaining 'cp' from measurements of h vs T
  - Obtaining 'f' from tabulated 'v-y' data
- Generating methods for solving ODEs/PDEs

Derivatives from Polynomials
- Numerical derivatives can be obtained from polynomials and their function values.
First order:
  P1(x) = f(0) + s Δf(0),  where s = (x - x0)/h
  Therefore P1'(x) = (dP1/ds)(ds/dx) = Δf(0) (1/h),  since ds/dx = 1/h

Derivatives from Polynomials (Cont'd)
Second order:
  P2(x) = f(0) + s Δf(0) + (s(s-1)/2!) Δ²f(0)
  P2'(x) = [ Δf(0) + ((2s-1)/2!) Δ²f(0) ] (1/h)
Higher-order approximations can similarly be obtained.
Since each term is divided by h, the accuracy of the derivative would be of order h^n and not h^(n+1).

Example
f = 1/x
  x    f         Δf         Δ²f       Δ³f
  3.4  0.294118  -0.008404  0.000468  -0.000040
  3.5  0.285714  -0.007936  0.000428  --
  3.6  0.277778  -0.007508  --        --
  3.7  0.270270  --         --        --
Find the derivative of the function at 3.44.
s = (3.44 - 3.40)/0.1 = 0.4
  f'(3.44) ~ -0.008404/0.1                          ->  -0.08404
  + [{2(0.4) - 1}/2] (0.000468)/0.1                 ->  -0.084508
  + [{3(0.4)² - 6(0.4) + 2}/6] (-0.000040)/0.1      ->  -0.084513
Exact value: -0.084505

Derivatives from Polynomials (Cont'd)
- Similarly, we can obtain derivatives using the backward interpolating polynomial.
- We have shown that they would be equivalent by choosing the proper value of 's'.
- We shall make use of these to derive the finite difference relations later used in ODEs.
- We can similarly obtain higher derivatives.
- As pointed out earlier, the accuracies will reduce further due to divisions by higher powers of 'h'.

Numerical Integration
- The function f(x) may be a set of discrete values, as in the case of properties.
- It can be a complex function, in which case the function can be evaluated at some discrete values and integrated suitably.
- We shall derive the procedures using Newton's forward interpolating polynomial.
- Unlike differentiation, integration is an accurate process, and the order of accuracy increases.
Trapezoidal Rule (First Order)
  P1(x) = f(0) + s Δf(0),  where s = (x - x0)/h
Since dx = h ds,
  integral from low to high of f(x) dx = integral from 0 to 1 of f(s) h ds
  = integral from 0 to 1 of [f(0) + s Δf(0)] h ds
  = h [ f(0) s + Δf(0) s²/2 ] evaluated from 0 to 1
  = h [ f(0) + Δf(0)/2 ] = h [ f(0) + (f(1) - f(0))/2 ]
  = (h/2) [ f(0) + f(1) ]

Trapezoidal Rule (Cont'd)
As we have used a first-order polynomial, the error term for the polynomial is O(h²).
Since the integral involves a multiplication with h, the order increases to h³ locally:
  Error term = integral from 0 to 1 of (s(s-1)/2) h² f''(ξ) h ds = -(h³/12) f''(ξ)

Trapezoidal Rule (Cont'd)
(figure: piecewise linear segments over panels 0, 1, 2, ..., n-1, n)
Composite rule:
  I = sum over i of Ii = sum of (h/2)(fi + fi+1)
    = (h/2) [ f0 + 2f1 + 2f2 + ... + 2f(n-1) + fn ]
  Error = -sum of (h³/12) f''(ξi) = -n (h³/12) f''(ξ) = -((xn - x0)/12) h² f''(ξ)
Globally O(h²).

Simpson's 1/3 Rule
  integral from 0 to 2 of [ f(0) + s Δf(0) + ((s² - s)/2) Δ²f(0) ] h ds
  = h [ s f(0) + (s²/2) Δf(0) + (s³/6 - s²/4) Δ²f(0) ] evaluated from 0 to 2
  = [ 2f(0) + 2(f(1) - f(0)) + (1/3)(f(2) - 2f(1) + f(0)) ] h
  = (h/3) [ f(0) + 4f(1) + f(2) ]

Simpson's 1/3 Rule (Cont'd)
(figure: piecewise quadratics over pairs of panels)
Composite rule:
  I = sum of Ii = sum of (h/3)(fi + 4fi+1 + fi+2)
    = (h/3) [ f0 + 4f1 + 2f2 + 4f3 + ... + 2f(n-2) + 4f(n-1) + fn ]
The h⁴ error term vanishes:
  integral from 0 to 2 of (s(s-1)(s-2)/6) h³ f'''(ξ) h ds = 0
so the local error comes from the next term:
  integral from 0 to 2 of (s(s-1)(s-2)(s-3)/24) h⁴ f''''(ξ) h ds = -(h⁵/90) f''''(ξ)
  Global error = -sum of (h⁵/90) f''''(ξi) = -((xn - x0)/(2h)) (h⁵/90) f''''(ξ)
Globally O(h⁴).
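A minimal sketch of the composite trapezoidal and Simpson's 1/3 rules, integrating the assumed test function f(x) = 1/x over [3.2, 3.7] (exact value ln(3.7/3.2) ~ 0.14518; n must be even for Simpson):

program quad_demo
  implicit none
  integer, parameter :: n = 10
  real(8) :: a, b, h, trap, simp, x
  integer :: i

  a = 3.2d0;  b = 3.7d0
  h = (b - a) / n

  trap = 0.5d0 * (f(a) + f(b))
  simp = f(a) + f(b)
  do i = 1, n-1
     x = a + i*h
     trap = trap + f(x)
     if (mod(i,2) == 1) then
        simp = simp + 4d0*f(x)      ! odd interior points get weight 4
     else
        simp = simp + 2d0*f(x)      ! even interior points get weight 2
     end if
  end do
  trap = h * trap
  simp = h * simp / 3d0
  print *, 'trapezoid:', trap, ' simpson:', simp, ' exact:', log(3.7d0/3.2d0)

contains
  real(8) function f(x)
    real(8), intent(in) :: x
    f = 1d0 / x
  end function f
end program quad_demo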

Simpson's 3/8 Rule
- Simpson's 1/3 rule can be applied only if an odd number of data points is available.
- To integrate when an even number of points is available, the first 4 points can be integrated by Simpson's 3/8 rule and the remaining by Simpson's 1/3 rule.
  I = (3h/8) [ f0 + 3f1 + 3f2 + f3 ]
  Local error = -(3h⁵/80) f''''(ξ)
Globally O(h⁴).

Deferred Approach to the Limit
- Richardson's extrapolation, also called the 'deferred approach to the limit', is a method to improve the accuracy of lower-order methods.
- In the domain of integration this method is called Romberg integration.
- It can be shown that in using trapezoidal integration, the truncation errors can be written as C1 h² + C2 h⁴ + C3 h⁶ + ...

Romberg Integration-I
If the integration procedure is carried out for intervals h and 2h, we can write
  I = I(h) + C1 h² + C2 h⁴ + ...,   (1)
  I = I(2h) + C1 (2h)² + C2 (2h)⁴ + ...,   (2)
Multiplying Eq. (1) by 4, subtracting Eq. (2) and then dividing by 3, we get
  I = [4 I(h) - I(2h)]/3 + O(h⁴)
This can be rewritten as
  I = I(h) + [I(h) - I(2h)]/3 + O(h⁴)
Thus we have a higher-order solution from lower-order solutions. This is called Richardson extrapolation.

Romberg Integration-II
The above can be generalized by assuming a power-law form:
  I = I(h) + C1 h^n + C2 h^m + ...,   (1)
  I = I(2h) + C1 (2h)^n + C2 (2h)^m + ...,   (2)
Multiplying Eq. (1) by 2^n, subtracting Eq. (2) and then dividing by 2^n - 1, we get
  I = [2^n I(h) - I(2h)]/(2^n - 1) + O(h^m)
This can be rewritten as
  I = I(h) + [I(h) - I(2h)]/(2^n - 1) + O(h^m)

Romberg Integration-III
- Thus, by using the trapezoidal rule repeatedly, the accuracy can be further improved by computing with, say, h/2 and eliminating the next constant in the previous slide.
- In general, the correction formula is
  Improved value = More accurate value
                   + [More accurate value - Less accurate value]/(2^n - 1)
- Such a recursive algorithm is called the Romberg integration procedure.

Romberg Integration-IV
  Level  Steps    Trapez O(h²)  Richard O(h⁴)  Richard O(h⁶)  Richard O(h⁸)
  1      1 (2⁰)   I1,1
  2      2 (2¹)   I2,1          I2,2
  3      4 (2²)   I3,1          I3,2           I3,3
  4      8 (2³)   I4,1          I4,2           I4,3           I4,4

Romberg Integration-V
The recursive procedure in Fortran. Here aint(i,j) holds the level-i, column-j estimate of the table on the previous slide, and trapez(alow, ahigh, nsteps) is assumed to return the composite trapezoidal estimate with nsteps panels:

      i = 1
      aint(i,i) = (ahigh - alow)*(f(alow) + f(ahigh))/2.
      error = 2.*errmax                ! just to make the algorithm proceed
      do while (i .lt. nromax .and. abs(error) .gt. errmax)
         i = i + 1
         nsteps = 2**(i-1)
         aint(i,1) = trapez(alow, ahigh, nsteps)
         do j = 2, i
            aint(i,j) = aint(i,j-1) + (aint(i,j-1) - aint(i-1,j-1))
     1                  /(2**(2*(j-1)) - 1)
         enddo
         best = aint(i,i)
         error = abs((best - aint(i,i-1))/best)
      enddo
      stop
      end
