Books
Books have been listed in a separate document.
Fairly comprehensive PowerPoints will be uploaded as notes.

Emphasis
Engineering rather than mathematical fundamentals to be stressed.
Theorems and lemmas will not be stressed, only mentioned.
General concepts to be stressed rather than very specific ones.
Extensive use of algorithms will be the focus.
Material to be covered
Single non-linear equation
System of linear and non-linear equations
Interpolation, extrapolation and regression
Differentiation and integration
Ordinary Differential Equations (ODEs), including the Initial Value Problem (IVP) and the Boundary Value Problem (BVP)
Partial Differential Equations (PDEs) involving parabolic, elliptic and hyperbolic systems
Detailed topics have been outlined in a separate document.

Problem Solving in Engineering
Analytical: needs complex mathematics and is applicable only for ideal cases.
Experimental: limited validity; expensive and time consuming.
Numerical: simple to apply; quick and economical.
Problem Solving Methodology
Physical problem → mathematical model (model formulation, governing equations) → numerical techniques → is the solution satisfactory? If yes, accept the solution; if no, revise the model. Model validation is carried out through experiments.

Elements of Numerical Solution
Algorithm design
Program implementation
Debugging and testing: benchmark problems to be utilised
Documentation, storage and retrieval: should have a clear write-up, stored on pen drives and hard discs
Model validation: through experiments

Aim of the course: students to learn independently through weekly homework.
Accuracy and Precision
Accuracy implies that the measured value agrees with the true value.
Precision refers to measurements that are close to each other.
When we make a set of measurements, accuracy implies that the average value is close to the true value; precision implies that the standard deviation is small.

Absolute and Relative Error
True error = true value − approximate value: E_t = TV − AV
Most of the time we do not know the true value!
Fractional relative error: e_t = E_t / TV
% relative error = e_t × 100
Often, when estimating a true value, the procedure may need several computations to increase accuracy.
Integer and Real Number Representation
A real number is stored in floating-point form as a signed mantissa m scaled by a base b raised to an exponent, m × b^e, e.g., a normalized mantissa such as −0.1234 with base 10.
Truncation Error
There is another type of error, introduced when working with differential equations.
When solving differential equations such as dy/dx = f(x), where f(x) is a complicated function, the derivative dy/dx is often described as
  dy/dx at x = 0: f(0) ≈ (y(h) − y(0)) / h
This would imply that
  y(h) = y(0) + f(0) × h

Truncation Error-2
The basis for such a solution comes from the Taylor series:
  y(x + h) = y(x) + h y'(x) + (h²/2!) y''(x) + ⋯
Often it is truncated and expressed as:
  y(x + h) = y(x) + h y'(x) + O(h²)
For x0 = 0, we again get y(h) = y(0) + f(0) × h.
The error introduced by truncation of the Taylor series is called truncation error.
We shall discuss these later in the course.
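To see the truncation error concretely, here is a minimal runnable Fortran sketch (mine, not from the slides): it estimates dy/dx for the test function y = exp(x) at x = 0 by the forward difference and prints the error as h is halved. The error should fall roughly in proportion to h, consistent with the O(h²) term of the Taylor series divided by h.

program truncation_demo
  implicit none
  real(8) :: h, approx, err
  integer :: k
  h = 0.1d0
  do k = 1, 5
     approx = (exp(h) - exp(0.0d0)) / h   ! forward difference at x = 0
     err = abs(approx - 1.0d0)            ! exact derivative is exp(0) = 1
     print '(a,es10.3,a,es10.3)', 'h = ', h, '  error = ', err
     h = h / 2.0d0
  end do
end program truncation_demo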
Numerical Methods (Solution of Non-Linear Equations)
Kannan Iyer
kannan.iyer@iitjammu.ac.in

There are many applications in TFE that require the solution of a non-linear equation, e.g., a heat-generating sphere cooled by convection and radiation:
  Q − hA(T_B − T_A) − σA(T_B⁴ − T_A⁴) = 0
Some Observations-II
A non-linear equation is also called a transcendental equation.
We shall discuss some of the most popular methods.
Broadly, the methods can be divided into two groups: those that need bracketing of roots, and those that do not need bracketing.

Bracketing
[Figure: f(x) plotted against x, with x_P and x_N marking points where f(x) is positive and negative; the root lies between them.]
Bisection Method
[Figure: f(x) against x with the root bracketed between x_P and x_N.]
Needs bracketing.
Guaranteed solution unless there is a discontinuity.
Slow convergence rate.

Logic (a runnable sketch follows below)
1. Bracket the root and identify xp and xn
2. Do Until I < Imax
   1. xnew = 0.5*(xp + xn)
   2. If |f(xnew)| < Tol then
   3.   Root = xnew and get out
   4. ElseIf f(xnew) < 0 then
        xn = xnew
      Else
        xp = xnew
      Endif
   3. I = I + 1
4. Enddo
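A minimal runnable Fortran sketch of the bisection logic, assuming the test function f(x) = x² − 2 (mine, not from the slides), with the root bracketed in [1, 2]:

program bisection
  implicit none
  real(8) :: xp, xn, xnew, tol
  integer :: i, imax
  xp = 2.0d0                         ! f(xp) > 0
  xn = 1.0d0                         ! f(xn) < 0
  tol = 1.0d-10; imax = 100
  do i = 1, imax
     xnew = 0.5d0*(xp + xn)          ! midpoint of the bracket
     if (abs(f(xnew)) < tol) exit    ! |f| small enough: root found
     if (f(xnew) < 0.0d0) then
        xn = xnew                    ! root lies in [xnew, xp]
     else
        xp = xnew                    ! root lies in [xn, xnew]
     end if
  end do
  print *, 'root =', xnew
contains
  real(8) function f(x)
    real(8), intent(in) :: x
    f = x*x - 2.0d0
  end function f
end program bisection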
Logic: False Position Method (a runnable sketch follows below)
1. Bracket the root and identify xp and xn
2. Do Until I < Imax
   1. m = (f(xp) − f(xn))/(xp − xn)
   2. xnew = xn − f(xn)/m
   3. If |f(xnew)| < Tol then
   4.   Root = xnew and get out
   5. ElseIf f(xnew) < 0 then
        xn = xnew
      Else
        xp = xnew
      Endif
   3. I = I + 1
4. Enddo

Secant Method-I
Does not need bracketing.
Modification of the false position method: the new point replaces the last-but-one point.
Good convergence; most preferred method.
[Figure: f(x) against x showing successive secants.]
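A minimal runnable Fortran sketch of the false-position logic above, again assuming the test function f(x) = x² − 2 (mine, not from the slides); xp and xn are points where f is positive and negative respectively, as the bracketing requires:

program false_position
  implicit none
  real(8) :: xp, xn, xnew, m, tol
  integer :: i, imax
  xp = 2.0d0; xn = 1.0d0             ! f(2) > 0, f(1) < 0: root bracketed
  tol = 1.0d-10; imax = 100
  do i = 1, imax
     m = (f(xp) - f(xn))/(xp - xn)   ! slope of the chord
     xnew = xn - f(xn)/m             ! chord crosses the x-axis here
     if (abs(f(xnew)) < tol) exit    ! converged
     if (f(xnew) < 0.0d0) then
        xn = xnew                    ! new point keeps the bracket
     else
        xp = xnew
     end if
  end do
  print *, 'root =', xnew
contains
  real(8) function f(x)
    real(8), intent(in) :: x
    f = x*x - 2.0d0
  end function f
end program false_position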
Logic: Secant Method (a runnable sketch follows after this slide)
1. Take any two points x1 and x2
2. Do Until I < Imax
   1. m = (f(x2) − f(x1))/(x2 − x1)
   2. xnew = x2 − f(x2)/m (one can under-relax here)
   3. If |f(xnew)| < Tol then
   4.   Root = xnew and get out
   5. Else
        x1 = x2
        x2 = xnew
      Endif
   3. I = I + 1
4. Enddo

Newton's Method-I
Does not need bracketing.
Good convergence; also a preferred method.
[Figure: f(x) against x showing the iteration.]
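A minimal runnable Fortran sketch of the secant logic, assuming f(x) = x² − 2 as the test function (mine). The two starting points need not bracket the root; the newest point replaces the older one:

program secant
  implicit none
  real(8) :: x1, x2, xnew, m, tol
  integer :: i, imax
  x1 = 3.0d0; x2 = 2.0d0             ! any two starting points
  tol = 1.0d-10; imax = 100
  do i = 1, imax
     m = (f(x2) - f(x1))/(x2 - x1)   ! secant slope
     xnew = x2 - f(x2)/m             ! one could under-relax this update
     if (abs(f(xnew)) < tol) exit
     x1 = x2                         ! discard the last-but-one point
     x2 = xnew
  end do
  print *, 'root =', xnew
contains
  real(8) function f(x)
    real(8), intent(in) :: x
    f = x*x - 2.0d0
  end function f
end program secant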
Newton's Method-II
The basis arises from the linearised Taylor series:
  f(x_{n+1}) = f(x_n + Δx_n) ≈ f(x_n) + f'(x_n) Δx_n
The new value x_{n+1} is arrived at by setting f(x_{n+1}) = 0:
  f(x_{n+1}) = 0 ≈ f(x_n) + f'(x_n)(x_{n+1} − x_n)
  ⇒ x_{n+1} = x_n − f(x_n)/f'(x_n)

Logic (a runnable sketch follows below)
1. Take any point x
2. Do Until I < Imax
   1. m = f'(x)
   2. xnew = x − f(x)/f'(x) (one can under-relax here)
   3. If |f(xnew)| < Tol then
   4.   Root = xnew and get out
   5. Else x = xnew
      Endif
   3. I = I + 1
4. Enddo
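A minimal runnable Fortran sketch of Newton's method as derived above, assuming f(x) = x² − 2 with its analytical derivative fp(x) = 2x (both mine, not from the slides):

program newton
  implicit none
  real(8) :: x, xnew, tol
  integer :: i, imax
  x = 1.0d0; tol = 1.0d-12; imax = 50
  do i = 1, imax
     xnew = x - f(x)/fp(x)           ! x_{n+1} = x_n - f(x_n)/f'(x_n)
     if (abs(f(xnew)) < tol) exit
     x = xnew
  end do
  print *, 'root =', xnew, ' iterations =', i
contains
  real(8) function f(x)
    real(8), intent(in) :: x
    f = x*x - 2.0d0
  end function f
  real(8) function fp(x)
    real(8), intent(in) :: x
    fp = 2.0d0*x
  end function fp
end program newton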
Fixed Point Iteration: x = g(x)
The equation f(x) = 0 is recast as x = g(x). For example, x² − 2x + 1 = 0 may be written as x = (x² + 1)/2 or x = √(2x − 1).
The choice will be determined from the convergence rate of the method, which can be determined only after the problem is solved!
Usually, it is done in an ad-hoc manner, as it takes little time to check if the method works or not.
[Figure: y = g(x) and y = x, whose intersection is the root.]
Fixed Point Iteration Method - I
Simplest to program (a runnable sketch follows below).
Does not converge for every function. We shall derive the criterion for convergence later.
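A minimal runnable Fortran sketch of fixed-point iteration x_{n+1} = g(x_n), assuming the classic test equation x = cos(x) (mine, not from the slides); here |g'| < 1 near the root, so the iteration converges:

program fixed_point
  implicit none
  real(8) :: x, xnew, tol
  integer :: i, imax
  x = 0.5d0; tol = 1.0d-10; imax = 200
  do i = 1, imax
     xnew = cos(x)                   ! g(x) = cos(x)
     if (abs(xnew - x) < tol) exit   ! successive iterates agree
     x = xnew
  end do
  print *, 'root =', xnew, ' iterations =', i
end program fixed_point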
Numerical Methods, Lecture 3: Non-Linear Equations-II
Fixed Point Iteration-I
Our recursive relation was
  x_{n+1} = g(x_n)   (1)
If α is our root, then
  α = g(α)   (2)
Eqs. (1) and (2) imply that
  e_{n+1} = x_{n+1} − α = g(α + e_n) − g(α)   (3)

Fixed Point Iteration-II
Eq. (3) can be written as
  e_{n+1} = g(α + e_n) − g(α) = e_n g'(α) + (e_n²/2) g''(α) + ⋯
Using the Mean Value Theorem,
  e_{n+1} = e_n g'(ξ)
so the iteration converges when |g'(ξ)| < 1.
Bracketing of Roots-II
For Newton's method, substituting x_{n+1} = x_n − f(x_n)/f'(x_n) and expanding about the root α gives
  e_{n+1} = e_n − f(α + e_n)/f'(α + e_n)
  ⇒ e_{n+1} = − e_n² f''(ξ) / (2 f'(ξ))
For the method to converge,
  |e_{n+1}/e_n| < 1 ⇒ |e_n f''(ξ) / (2 f'(ξ))| < 1

Continuation Method
Many times the equation may be difficult to solve, as the root is not known and the function is difficult.
A method called the continuation method is very useful (a runnable sketch follows below).
For an arbitrary x0, we can say that x0 is the root of the function f(x) − f(x0).
If we now define our function as F(x) = f(x) − β f(x0), and use x0 as the guess for β = 0.9, we can find the root because the guess is good.
We can proceed in this manner successively; by reducing β to 0, the root of f(x) can be found.
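A minimal runnable Fortran sketch of the continuation idea above. The target function f(x) = exp(x) − 5x and the use of Newton as the inner solver are my assumptions; F(x) = f(x) − β f(x0) has the known root x0 at β = 1, and β is stepped down to 0, each solve starting from the previous root:

program continuation
  implicit none
  real(8) :: x, x0, beta, fx0
  integer :: k, it
  x0 = 1.0d0                        ! arbitrary starting point
  fx0 = f(x0)
  x = x0
  do k = 9, 0, -1                   ! beta = 0.9, 0.8, ..., 0.0
     beta = 0.1d0*k
     do it = 1, 50                  ! Newton on F(x) = f(x) - beta*f(x0)
        x = x - (f(x) - beta*fx0)/fp(x)
        if (abs(f(x) - beta*fx0) < 1.0d-12) exit
     end do
  end do
  print *, 'root of f =', x
contains
  real(8) function f(x)
    real(8), intent(in) :: x
    f = exp(x) - 5.0d0*x
  end function f
  real(8) function fp(x)
    real(8), intent(in) :: x
    fp = exp(x) - 5.0d0
  end function fp
end program continuation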
Secant Method-II
The secant recursion is
  x_{n+1} = x_n − f(x_n) (x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))
In terms of the errors e_n = x_n − α,
  e_{n+1} = e_n − f(α + e_n)(e_n − e_{n−1}) / (f(α + e_n) − f(α + e_{n−1}))
Expanding about the root α, where f(α) = 0,
  f(α + e_n) = e_n f'(α) + (e_n²/2) f''(α) + ⋯
  f(α + e_n) − f(α + e_{n−1}) = (e_n − e_{n−1}) [f'(α) + ((e_n + e_{n−1})/2) f''(α) + ⋯]

Secant Method-III
Substituting these expansions,
  e_{n+1} = e_n − e_n [f'(α) + (e_n/2) f''(α)] / [f'(α) + ((e_n + e_{n−1})/2) f''(α)]
which simplifies, to leading order, to
  e_{n+1} = e_n e_{n−1} f''(α) / (2 f'(α))
Secant Method-VI
Assume e_{n+1} = a e_n^p, so that e_{n−1} = (e_n/a)^{1/p}. Substituting into e_{n+1} = e_n e_{n−1} f''(α)/(2 f'(α)) and reorganising terms, we get
  a e_n^p = a^{−1/p} [f''(α)/(2 f'(α))] e_n^{1 + 1/p}
Since the power of e_n has to be homogeneous, we can write
  p = 1 + 1/p ⇒ p² − p − 1 = 0 ⇒ p = (1 ± √5)/2

Secant Method-VII
If p < 1, the method will diverge. Thus, when the method converges, p > 1, which leads to
  p = (1 + √5)/2 ≈ 1.62
Thus the method is inferior to Newton's method, but it needs only one function evaluation per step and hence is competitive.
Numerical Methods, Lecture 4: Set of Linear Equations-I
The solution of a set of linear equations forms a basic part of most numerical algorithms:

  [ a11  a12  a13 ] { x1 }   { b1 }
  [ a21  a22  a23 ] { x2 } = { b2 }
  [ a31  a32  a33 ] { x3 }   { b3 }

coefficient matrix × unknown vector = source vector
Types of Coefficient Matrices
Diagonal matrix, identity matrix, tri-diagonal matrix, lower triangular matrix [L], upper triangular matrix [U].

Matrix Operations
Addition and subtraction; multiplication.
Determinant
For a 3×3 matrix, expansion by cofactors gives
  det(A) = a11 C11 + a12 C12 + a13 C13

Towards Solution of Linear Equations-I
If the coefficient matrix is diagonal,
  [ d11  0    0   ] { x1 }   { b1 }
  [ 0    d22  0   ] { x2 } = { b2 }
  [ 0    0    d33 ] { x3 }   { b3 }
the solution is immediate: xi = bi / dii.
Towards Solution of Linear Equations-II
If the matrix is lower triangular,
  [ l11  0    0   ] { x1 }   { b1 }
  [ l21  l22  0   ] { x2 } = { b2 }
  [ l31  l32  l33 ] { x3 }   { b3 }
forward substitution gives
  x1 = b1 / l11
  x2 = (b2 − l21 x1) / l22
  x3 = (b3 − l31 x1 − l32 x2) / l33
Logic: for i = 2, N
  x_i = (b_i − Σ_{j=1}^{i−1} l_ij x_j) / l_ii

Towards Solution of Linear Equations-III
If the matrix is upper triangular,
  [ u11  u12  u13 ] { x1 }   { b1 }
  [ 0    u22  u23 ] { x2 } = { b2 }
  [ 0    0    u33 ] { x3 }   { b3 }
back substitution gives
  x3 = b3 / u33 (in general, x_N = b_N / u_NN)
  x2 = (b2 − u23 x3) / u22
  x1 = (b1 − u12 x2 − u13 x3) / u11
Logic: for i = N−1, …, 1
  x_i = (b_i − Σ_{j=i+1}^{N} u_ij x_j) / u_ii

A runnable sketch of both substitutions follows below.
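A minimal runnable Fortran sketch of forward and back substitution following the logic above; the 3×3 test matrices and right-hand side are my own:

program triangular
  implicit none
  real(8) :: l(3,3), u(3,3), b(3), x(3), s
  integer :: i, j, n
  n = 3
  l = reshape([2d0,1d0,1d0, 0d0,3d0,2d0, 0d0,0d0,4d0], [3,3])  ! lower triangular
  b = [2d0, 4d0, 8d0]
  ! forward substitution: x(i) = (b(i) - sum_{j<i} l(i,j) x(j)) / l(i,i)
  do i = 1, n
     s = b(i)
     do j = 1, i-1
        s = s - l(i,j)*x(j)
     end do
     x(i) = s/l(i,i)
  end do
  print *, 'forward substitution x =', x
  u = transpose(l)                  ! an upper triangular example
  ! back substitution: x(i) = (b(i) - sum_{j>i} u(i,j) x(j)) / u(i,i)
  do i = n, 1, -1
     s = b(i)
     do j = i+1, n
        s = s - u(i,j)*x(j)
     end do
     x(i) = s/u(i,i)
  end do
  print *, 'back substitution x =', x
end program triangular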
Numerical Methods, Lecture 5: Set of Linear Equations-II
Cramer's Rule-II
For the system
  [ 3  −1   2 ] { x1 }   { 12 }
  [ 1   2   3 ] { x2 } = { 11 }    (solution: x = {3, 1, 2})
  [ 2  −2  −1 ] { x3 }   {  2 }
Cramer's rule gives, for example,
  x3 = det[ 3 −1 12 ; 1 2 11 ; 2 −2 2 ] / det[ 3 −1 2 ; 1 2 3 ; 2 −2 −1 ]
Number of multiplication operations ~ (n−1)(n+1)!
For n = 10, M ~ 3.6×10^8; for n = 100, M ~ 10^157. The approach is impractical and suffers severe round-off error.

Gauss Elimination-I
The system is written as an augmented matrix:
  [ 3  −1   2 | 12 ]
  [ 1   2   3 | 11 ]
  [ 2  −2  −1 |  2 ]
Pivoting and Scaling
Diagonal dominance reduces error propagation.
Rearranging the coefficients to get diagonal dominance is called pivoting.
Switching of only rows to improve diagonal dominance is called partial pivoting.
Dividing all the coefficients by the maximum value in a row is called scaling.
These can easily be incorporated and are always recommended.

Gauss Jordan Elimination
Starting from the augmented matrix above:
  R1* = R1/3:        [ 1  −1/3   2/3 |  4 ]
  R2 − R1*:          [ 0   7/3   7/3 |  7 ]
  R3 − 2R1*:         [ 0  −4/3  −7/3 | −6 ]
  R2** = 3R2/7:      [ 0   1     1   |  3 ]
  R1 − R2**/(−3):    [ 1   0     1   |  5 ]
  R3 − 4R2**/(−3):   [ 0   0    −1   | −2 ]
Comments on Gauss Jordan Elimination
Completing the elimination:
  R3* = R3/(−1):  [ 0  0  1 | 2 ]
  R1 − R3*:       [ 1  0  0 | 3 ]
  R2** − R3*:     [ 0  1  0 | 1 ]
The solution {3, 1, 2} is read off directly; no back substitution is needed. A runnable sketch follows below.

Towards Solution of Linear Equations-III
The same elimination applied to [A | I] yields the inverse of A:
  [ 3  −1   2 | 1  0  0 ]
  [ 1   2   3 | 0  1  0 ]
  [ 2  −2  −1 | 0  0  1 ]
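A minimal runnable Fortran sketch of Gauss-Jordan elimination on the augmented matrix of the worked example above (no pivoting, which the slides recommend adding in practice):

program gauss_jordan
  implicit none
  real(8) :: a(3,4), piv, fac
  integer :: i, k, n
  n = 3
  a = reshape([3d0,1d0,2d0, -1d0,2d0,-2d0, 2d0,3d0,-1d0, 12d0,11d0,2d0], [3,4])
  do k = 1, n
     piv = a(k,k)
     a(k,:) = a(k,:)/piv                 ! normalise the pivot row
     do i = 1, n
        if (i /= k) then
           fac = a(i,k)
           a(i,:) = a(i,:) - fac*a(k,:)  ! eliminate column k in row i
        end if
     end do
  end do
  print *, 'solution =', a(:,4)          ! should print 3, 1, 2
end program gauss_jordan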
L-U Decomposition-I
If the problem has to be repeated with several source vectors for the same coefficient matrix, L-U decomposition is recommended:
  A = L U
Such a decomposition speeds up the calculation. In general two methods are available:
  Crout's decomposition
  Doolittle's decomposition

Crout's Decomposition
  [ l11  0    0   ] [ 1  u12  u13 ]   [ a11  a12  a13 ]
  [ l21  l22  0   ] [ 0  1    u23 ] = [ a21  a22  a23 ]
  [ l31  l32  l33 ] [ 0  0    1   ]   [ a31  a32  a33 ]
Equating element by element (the slide numbers the steps 1-5: a column of L, then a row of U, alternately):
  a11 = l11,  a21 = l21,  a31 = l31
  a12 = l11 u12,  a13 = l11 u13
  a22 = l21 u12 + l22,  a32 = l31 u12 + l32
  a23 = l21 u13 + l22 u23
  a33 = l31 u13 + l32 u23 + l33
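A minimal runnable Fortran sketch of Crout's decomposition, computing a column of L and then a row of U in the order implied by the relations above; the test matrix is the one from the Gauss elimination example:

program crout
  implicit none
  real(8) :: a(3,3), l(3,3), u(3,3), s
  integer :: i, j, k, n
  n = 3
  a = reshape([3d0,1d0,2d0, -1d0,2d0,-2d0, 2d0,3d0,-1d0], [3,3])
  l = 0d0; u = 0d0
  do i = 1, n
     u(i,i) = 1d0                        ! Crout: unit diagonal on U
  end do
  do j = 1, n
     do i = j, n                         ! column j of L
        s = 0d0
        do k = 1, j-1
           s = s + l(i,k)*u(k,j)
        end do
        l(i,j) = a(i,j) - s
     end do
     do i = j+1, n                       ! row j of U
        s = 0d0
        do k = 1, j-1
           s = s + l(j,k)*u(k,i)
        end do
        u(j,i) = (a(j,i) - s)/l(j,j)
     end do
  end do
  print *, 'max |A - LU| =', maxval(abs(a - matmul(l,u)))   ! should be ~0
end program crout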
Comments on Crout's Method-I
M_Crout = M_Gauss for the decomposition itself, but the subsequent substitution needs only M = n² − n operations.
Therefore, for a large set with repeated source vectors, one may save substantial effort (n³ vs n²).
It is possible to store the coefficients of [L] and [U] in [A] itself, as [A] is no longer required. This saves memory.
Since memory is cheap, this may no longer be required.

Comments on Crout's Method-II
When L and U are stored in A itself to conserve memory, the array holds
  [ l11  u12  u13 ]
  [ l21  l22  u23 ]
  [ l31  l32  u33 ]
and the logic must be written accordingly; the indices have to be carefully addressed.
Other decompositions are similar.
Numerical Methods, Lecture 6: Set of Linear Equations-III
Tridiagonal Matrix Solution
Normalising the first row (a*12 = a12/a11, b*1 = b1/a11) gives

  [ 1    a*12  0    0   | b*1 ]
  [ a21  a22   a23  0   | b2  ]
  [ 0    a32   a33  a34 | b3  ]
  [ 0    0     a43  a44 | b4  ]

R2 − a21 R1 eliminates the sub-diagonal element of row 2:

  [ 1  a*12            0    0   | b*1          ]
  [ 0  a22 − a21 a*12  a23  0   | b2 − a21 b*1 ]
  [ 0  a32             a33  a34 | b3           ]
  [ 0  0               a43  a44 | b4           ]

Normalising row 2,
  a*23 = a23 / (a22 − a21 a*12),  b*2 = (b2 − a21 b*1) / (a22 − a21 a*12)
and the elimination proceeds row by row down the matrix.
Thomas Algorithm (TDMA)-II
After forward elimination the system becomes

  [ 1  a*12  0     0    ] { x1 }   { b*1 }
  [ 0  1     a*23  0    ] { x2 } = { b*2 }
  [ 0  0     1     a*34 ] { x3 }   { b*3 }
  [ 0  0     0     1    ] { x4 }   { b*4 }

⇒ x_N = b*_N and x_i = b*_i − x_{i+1} a*_{i,i+1}

Back Substitution Logic:
  x_N = b*_N
  For i = N−1, N−2, …, 1: x_i = b*_i − x_{i+1} a*_{i,i+1}

Thomas Algorithm (TDMA)-III
It is possible to store A as an (N,3) array to conserve memory, with the logic written accordingly:
  a_{i,i−1} → a_{i,1},  a_{i,i} → a_{i,2},  a_{i,i+1} → a_{i,3}

  [ a11  a12  0    0   ]             [ 0    a11  a12 ]
  [ a21  a22  a23  0   ]  stored as  [ a21  a22  a23 ]
  [ 0    a32  a33  a34 ]             [ a32  a33  a34 ]
  [ 0    0    a43  a44 ]             [ a43  a44  0   ]

A runnable sketch using this compact storage follows below.
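A minimal runnable Fortran sketch of the Thomas algorithm in the compact (N,3) storage described above: a(i,1), a(i,2), a(i,3) hold the sub-, main and super-diagonal of row i. The diagonally dominant test system is my own:

program tdma
  implicit none
  integer, parameter :: n = 5
  real(8) :: a(n,3), b(n), x(n), denom
  integer :: i
  a(:,1) = -1d0; a(:,2) = 4d0; a(:,3) = -1d0
  a(1,1) = 0d0; a(n,3) = 0d0               ! no sub/super diagonal at the ends
  b = 1d0
  ! forward elimination: normalise each row and eliminate the sub-diagonal
  a(1,3) = a(1,3)/a(1,2)
  b(1)   = b(1)/a(1,2)
  do i = 2, n
     denom  = a(i,2) - a(i,1)*a(i-1,3)
     a(i,3) = a(i,3)/denom
     b(i)   = (b(i) - a(i,1)*b(i-1))/denom
  end do
  ! back substitution: x_N = b*_N, then x_i = b*_i - a*_{i,i+1} x_{i+1}
  x(n) = b(n)
  do i = n-1, 1, -1
     x(i) = b(i) - a(i,3)*x(i+1)
  end do
  print *, 'x =', x
end program tdma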
Iterative Methods-II
Consider
  a11 x1 + a12 x2 + a13 x3 = b1
  a21 x1 + a22 x2 + a23 x3 = b2
  a31 x1 + a32 x2 + a33 x3 = b3
Each equation can be solved for its diagonal unknown, e.g.,
  x1 = (b1 − a12 x2 − a13 x3) / a11

Jacobi Iteration
All unknowns on the right-hand side are taken from the previous iterate (superscript N−1):
  x1^N = (b1 − a12 x2^{N−1} − a13 x3^{N−1}) / a11
  x2^N = (b2 − a21 x1^{N−1} − a23 x3^{N−1}) / a22
  x3^N = (b3 − a31 x1^{N−1} − a32 x2^{N−1}) / a33
By adding and subtracting x_i^{N−1} on both sides, the iteration can be written in correction form.
Logic for SOR
  For i = 1, N:
    x_i = x_i + ω (b_i − Σ_{j=1}^{N} a_ij x_j) / a_ii
where ω is the relaxation factor and the x_j already updated in the current sweep are used. A runnable sketch follows below.

Norms of Vectors
The p-norm of a vector is defined as
  ||x||_p = ( Σ_{j=1}^{N} |x_j|^p )^{1/p}
||x||_1 is the sum of the absolute values of the components.
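A minimal runnable Fortran sketch of the SOR logic above for a 3×3 diagonally dominant system; the matrix and ω are my choices (ω = 1 reduces to Gauss-Seidel):

program sor
  implicit none
  real(8) :: a(3,3), b(3), x(3), omega, res, s
  integer :: i, j, n, iter
  n = 3
  a = reshape([4d0,1d0,2d0, -1d0,5d0,1d0, 2d0,-2d0,6d0], [3,3])
  b = [12d0, 11d0, 2d0]
  x = 0d0; omega = 1.1d0
  do iter = 1, 100
     res = 0d0
     do i = 1, n
        s = b(i)
        do j = 1, n
           s = s - a(i,j)*x(j)        ! residual of equation i at current x
        end do
        x(i) = x(i) + omega*s/a(i,i)  ! over-relaxed correction
        res = max(res, abs(s))
     end do
     if (res < 1.0d-10) exit          ! residual norm small: stop
  end do
  print *, 'x =', x, ' sweeps =', iter
end program sor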
Termination Criterion
The iteration is stopped when successive iterates are close, e.g.,
  ||x^{N+1} − x^N|| < ε  or  ||x^{N+1} − x^N|| / ||x^N|| < ε
or when the residual norm ||r^N|| is sufficiently small.
Numerical Methods, Lecture 7: Set of Non-Linear Equations
Iteration Method-II
If we rewrite the equations as:
  x = √(10 − xy) = g1(x, y)
  y = √((57 − y)/(3x)) = g2(x, y)
the above equations converge to the correct solution with an initial guess of x = 1.5 and y = 3.5.
The condition for convergence is
  |∂g1/∂x| + |∂g2/∂x| < 1  and  |∂g1/∂y| + |∂g2/∂y| < 1

Newton-Raphson Method-I
Principle: expanding the functions linearly in the neighbourhood of (x, y),
  f1(x + Δx, y + Δy) ≈ f1(x, y) + (∂f1/∂x) Δx + (∂f1/∂y) Δy
For an arbitrary (x, y) we seek Δx and Δy such that
  f1(x + Δx, y + Δy) = 0
Newton-Raphson Method-IV
The method converges with good initial guesses; the problem is to give good guesses.
The continuation method is one powerful method to overcome this problem.

Continuation Method-I
Here a set whose roots are known is considered. The simplest way is to generate it from the original functions with an arbitrary initial guess (x0, y0):
  F1(x, y) = f1(x, y) − θ f1(x0, y0)
  F2(x, y) = f2(x, y) − θ f2(x0, y0)
For θ = 1, (x0, y0) are the roots of F1 and F2.
The value of θ is gradually reduced from 1 to 0, and the set is solved every time with the previous roots as the guess. A runnable Newton-Raphson sketch for a two-equation set follows below.
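A minimal runnable Fortran sketch of Newton-Raphson for two equations, assuming the system behind the rearrangements above is f1 = x² + xy − 10 = 0 and f2 = y + 3xy² − 57 = 0; the 2×2 linear correction J {Δx, Δy} = −{f1, f2} is solved by Cramer's rule:

program newton_system
  implicit none
  real(8) :: x, y, f1, f2, j11, j12, j21, j22, det, dx, dy
  integer :: it
  x = 1.5d0; y = 3.5d0
  do it = 1, 50
     f1 = x*x + x*y - 10d0
     f2 = y + 3d0*x*y*y - 57d0
     j11 = 2d0*x + y;  j12 = x              ! Jacobian entries
     j21 = 3d0*y*y;    j22 = 1d0 + 6d0*x*y
     det = j11*j22 - j12*j21
     dx = (-f1*j22 + f2*j12)/det            ! Cramer's rule for the correction
     dy = (-f2*j11 + f1*j21)/det
     x = x + dx; y = y + dy
     if (abs(f1) + abs(f2) < 1.0d-12) exit
  end do
  print *, 'x =', x, ' y =', y              ! converges to x = 2, y = 3
end program newton_system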
Numerical Methods (Interpolation)
Kannan Iyer
Kannan.iyer@iitb.ac.in
Department of Mechanical Engineering, Indian Institute of Technology Jammu

General-I
Many a time we have values of a function at some discrete intervals of a variable, and we need to obtain the value of the function at some value of the variable within the range.
The question is whether one should interpolate or should one estimate by curve fitting using regression.
The answer is another question: how accurate is the data?
If very accurate, interpolate; if not, fit a curve by regression.
What is common between both? Both involve functional approximation.
What is the difference? In interpolation the curve passes through all points, while in regression it may not pass through any point!
[Figure: y-x sketches of interpolation (curve through every point) and regression (curve through the cloud of points).]
Interpolation: Lagrange Interpolation-I and II; Newton's Forward Difference Polynomial-I (derivations on the slides).
Example-II
Evaluating 1/3.44 from the difference table (base point x0 = 3.40, h = 0.1):
  s = (3.44 − 3.40)/0.1 = 0.4
  1/3.44 ≈ 0.294118                            → 0.294118
  + (0.4)(−0.008404)                           → 0.290756
  + [(0.4)(0.4 − 1)/2](0.000468)               → 0.290700
  + [(0.4)(0.4 − 1)(0.4 − 2)/6](0.000040)      → 0.290698
  Exact value: 0.290697674

Comments
For a table having four points we could construct up to four terms.
The (n+1)th term corresponds to the s^n term if expanded in a Taylor series; further, s ~ 1.
Therefore, from the Taylor series expansion we can conclude that the leading error term will be (h^{n+1}/n!) (d^{n+1}f/dx^{n+1}).
Such an approximation is said to be an (n+1)th-order approximation.
This implies that if h is reduced by a factor of 2, the error will decrease by a factor of 2^{n+1}.
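A minimal runnable Fortran sketch reproducing the worked example above: it builds the forward-difference table for f = 1/x on x = 3.4, 3.5, 3.6, 3.7 and evaluates Newton's forward polynomial at x = 3.44 (s = 0.4):

program newton_forward
  implicit none
  real(8) :: xt(4), d(4,4), s, term, p
  integer :: i, j, n
  n = 4
  xt = [3.4d0, 3.5d0, 3.6d0, 3.7d0]
  do i = 1, n
     d(i,1) = 1d0/xt(i)                      ! zeroth differences: f itself
  end do
  do j = 2, n                                ! build the difference table
     do i = 1, n-j+1
        d(i,j) = d(i+1,j-1) - d(i,j-1)
     end do
  end do
  s = (3.44d0 - xt(1))/0.1d0                 ! s = 0.4
  p = d(1,1); term = 1d0
  do j = 2, n
     term = term*(s - dble(j-2))/dble(j-1)   ! s(s-1)...(s-j+2)/(j-1)!
     p = p + term*d(1,j)
  end do
  print *, 'interpolated 1/3.44 =', p, ' exact =', 1d0/3.44d0
end program newton_forward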
The Newton forward polynomial can be written compactly as
  P_N(x) = Σ_{i=0}^{N} [s(s − 1)⋯(s − i + 1)/i!] Δ^i f(x0)
Example-I
f = 1/x, with the difference table built on x = 3.4, 3.5, 3.6, 3.7.

Example-II: Physical Interpretations
The same point x = 3.44 can be reached from different base points:
  from x0 = 3.40, forward, s = (3.44 − 3.40)/0.1 = 0.4
  from x0 = 3.50, backward, s = (3.44 − 3.50)/0.1 = −0.6
  from x0 = 3.70, backward, s = (3.44 − 3.70)/0.1 = −2.6
Numerical Methods (Linear Regression)
Kannan Iyer
Kannan.iyer@iitb.ac.in
Department of Mechanical Engineering, Indian Institute of Technology Jammu

General
When imprecise data is handled, regression is recommended.
Linear regression is most common.
Among the functions used, polynomials are most preferred.
Piecewise lower order is always better than a single higher-order polynomial.
In regression a lower-order polynomial is passed through a large number of points.
The error at each point is e_i = y_i − y_fit,i, and the total squared error over N points is
  E = Σ_{i=1}^{N} [y_i − (a1 f1(x_i) + a2 f2(x_i))]²
Linear Regression-II
Minimising E with respect to a1,
  ∂E/∂a1 = −2 Σ_{i=1}^{N} [y_i − (a1 f1(x_i) + a2 f2(x_i))] f1(x_i) = 0
⇒ Σ y_i f1(x_i) − a1 Σ (f1(x_i))² − a2 Σ f2(x_i) f1(x_i) = 0
Doing the same for a2 gives the pair of normal equations
  [ Σ f1²     Σ f1 f2 ] { a1 }   { Σ y f1 }
  [ Σ f1 f2   Σ f2²   ] { a2 } = { Σ y f2 }

Linear Regression-III
Defining the n×2 matrix F with rows [f1(x_i)  f2(x_i)],
  Fᵀ = [ f1(x1)  f1(x2)  …  f1(xn) ]
       [ f2(x1)  f2(x2)  …  f2(xn) ]
the normal equations become, compactly,
  Fᵀ F {a} = Fᵀ {y},  where {y} = {y(x1), y(x2), …, y(xn)}ᵀ
A runnable sketch of a two-parameter fit follows below.
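A minimal runnable Fortran sketch of the two-parameter normal equations above, assuming the basis f1(x) = 1, f2(x) = x (a straight-line fit) and sample data of my own:

program linear_fit
  implicit none
  real(8) :: x(5), y(5), s11, s12, s22, r1, r2, det, a1, a2
  integer :: i, n
  n = 5
  x = [1d0, 2d0, 3d0, 4d0, 5d0]
  y = [2.1d0, 3.9d0, 6.2d0, 7.8d0, 10.1d0]
  s11 = 0d0; s12 = 0d0; s22 = 0d0; r1 = 0d0; r2 = 0d0
  do i = 1, n
     s11 = s11 + 1d0          ! sum f1*f1
     s12 = s12 + x(i)         ! sum f1*f2
     s22 = s22 + x(i)*x(i)    ! sum f2*f2
     r1  = r1  + y(i)         ! sum y*f1
     r2  = r2  + y(i)*x(i)    ! sum y*f2
  end do
  det = s11*s22 - s12*s12     ! solve the 2x2 normal equations by Cramer's rule
  a1 = (r1*s22 - r2*s12)/det
  a2 = (s11*r2 - s12*r1)/det
  print *, 'fit: y =', a1, '+', a2, '* x'
end program linear_fit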
Generalized Least Squares-III
The linear least squares can therefore be generalized as
  Fᵀ F { a1, a2, … }ᵀ = Fᵀ { y(x1), y(x2), …, y(xn) }ᵀ

Goodness of Fit
The quality of fit is normally given by the coefficient of regression, which is an obscure quantity.
In engineering parlance, the more relevant parameter that can be easily connected is the RMS error:
  e_RMS = √( Σ_{i=1}^{n} e_i² / n )
Often the error is normalised with the true value to express the RMS error as a fraction or as a percentage.
Other Functions-I
Other than power laws, exponential and saturation functions are also popular; they can be transformed into linear regression:
  y = a e^{bx} ⇒ ln(y) = ln(a) + bx
Plotting ln(y) against x gives a straight line of slope b.

Other Functions-II
Saturation function:
  y = a x/(b + x) ⇒ 1/y = 1/a + (b/a)(1/x)
Plotting 1/y against 1/x gives a straight line of slope b/a and intercept 1/a.
Summary
Learnt the regression methodology.
Understood the method of generalized least squares, which is a powerful technique.
Numerical Methods (Spline Interpolation)
Kannan Iyer
Kannan.iyer@iitb.ac.in
Department of Mechanical Engineering, Indian Institute of Technology Jammu

Review
Interpolation is used to approximate the value of a function from a set of discrete values between two variables.
Looked at Lagrange interpolation as the easiest polynomial interpolation.
Looked at Newton's backward and forward interpolating polynomials; the values are obtained using difference tables.
Understood that a polynomial of Nth order has an error of the order of h^{N+1}.
Principle of Splines
The concept of a spline is to pass a specified-order curve through a pair of points.
We know that we can pass a piecewise linear curve between two pairs of points.
Can we pass a second- or third-order curve between every pair of points? The question is how.
A quadratic spline fits a quadratic between every pair of points and a cubic spline fits a cubic between every pair.

Cubic Spline
  Y1 = a1 x³ + b1 x² + c1 x + d1
  Y2 = a2 x³ + b2 x² + c2 x + d2
Conditions:
  Y1(x1) = y1,  Y1(x2) = y2
  Y2(x2) = y2,  Y2(x3) = y3
  Y1'(x2) = Y2'(x2)
  Y1''(x2) = Y2''(x2)
  Y1''(x1) = Y2''(x3) = 0
[Figure: two cubic segments Y1 and Y2 through points 1, 2, 3 at x1, x2, x3.]
Natural Cubic Spline-I
In general, if we have N+1 points, we shall have N intermediate curves Y1 … YN. This will need 4N conditions.
For every interior point we shall have four conditions; thus we shall have 4(N−1) conditions.
The functional values at the end points give two more conditions.
Assuming the second derivatives to be zero at the end points closes the relations for a natural spline.
[Figure: curves Y1 … YN joining points 1 … N+1.]
Natural Cubic Spline-II
The curve Yi(x) joins points i and i+1.
Since the curve is cubic, the second derivative Yi''(x) will be a straight line.
Using Lagrange interpolation, we can write
  Yi''(x) = Yi''(x_i) (x_{i+1} − x)/(x_{i+1} − x_i) + Yi''(x_{i+1}) (x − x_i)/(x_{i+1} − x_i)

Natural Cubic Spline-III
Rewriting,
  Yi''(x) = A_i Yi''(x_i) + B_i Yi''(x_{i+1})
where
  A_i = (x_{i+1} − x)/(x_{i+1} − x_i),  B_i = (x − x_i)/(x_{i+1} − x_i)
Note that
  A_i = 1 − B_i
Natural Cubic Spline-VI
Integrating Yi''(x) twice and evaluating the two constants from Yi(x_i) = y_i and Yi(x_{i+1}) = y_{i+1} (taking [x_{i+1}·Eq. (4) − x_i·Eq. (5)]/(x_{i+1} − x_i)) yields Eq. (6), which expresses Yi(x) in terms of y_i, y_{i+1}, Yi''(x_i) and Yi''(x_{i+1}).

Natural Cubic Spline-VII
Now we shall match the first derivative at the interior point x_i. From Slide 10 we have
  A_i = (x_{i+1} − x)/(x_{i+1} − x_i),  B_i = (x − x_i)/(x_{i+1} − x_i)   (6+)
Natural Cubic Spline-X
Eqs. (7) and (8) are rewritten as
  Yi'(x_i) = (y_{i+1} − y_i)/(x_{i+1} − x_i) − [2 Yi''(x_i) + Yi''(x_{i+1})] (x_{i+1} − x_i)/6   (9)
  Y'_{i−1}(x_i) = (y_i − y_{i−1})/(x_i − x_{i−1}) + [Y''_{i−1}(x_{i−1}) + 2 Y''_{i−1}(x_i)] (x_i − x_{i−1})/6   (10)

Natural Cubic Spline-XI
Since the LHS of Eqs. (9) and (10) are the same, we can equate the RHS and rearrange to get
  Y''(x_{i−1}) (x_i − x_{i−1})/6 + 2 Y''(x_i) (x_{i+1} − x_{i−1})/6 + Y''(x_{i+1}) (x_{i+1} − x_i)/6
    = (y_{i+1} − y_i)/(x_{i+1} − x_i) − (y_i − y_{i−1})/(x_i − x_{i−1})   (11)
The above relation is valid for i = 2, 3, …, N.
For a natural spline, Y''(1) = Y''(N+1) = 0, so Eq. (11) is a tridiagonal set of N−1 equations for Y''(2), Y''(3), …, Y''(N).
Once these are computed, the value of y for any x is found using Eq. (6); the A_i and B_i are found using Eq. (6+).
Algorithm
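A minimal runnable Fortran sketch of the natural-spline algorithm: it assembles the tridiagonal set of Eq. (11) for the interior second derivatives, solves it with the Thomas algorithm from Lecture 6, and evaluates y at one point. The data (y = x³ on 5 points), the evaluation point, and the standard cubic-spline evaluation formula used in place of the slides' Eq. (6) are my assumptions:

program natural_spline
  implicit none
  integer, parameter :: np = 5
  real(8) :: x(np), y(np), ypp(np), sub(np), dia(np), sup(np), rhs(np)
  real(8) :: xe, a, b, h
  integer :: i
  x = [0d0, 1d0, 2d0, 3d0, 4d0]
  y = x**3
  ypp(1) = 0d0; ypp(np) = 0d0            ! natural end conditions
  do i = 2, np-1                         ! assemble Eq. (11), i = 2..N
     sub(i) = (x(i) - x(i-1))/6d0
     dia(i) = (x(i+1) - x(i-1))/3d0
     sup(i) = (x(i+1) - x(i))/6d0
     rhs(i) = (y(i+1) - y(i))/(x(i+1) - x(i)) - (y(i) - y(i-1))/(x(i) - x(i-1))
  end do
  do i = 3, np-1                         ! Thomas algorithm, forward sweep
     dia(i) = dia(i) - sub(i)*sup(i-1)/dia(i-1)
     rhs(i) = rhs(i) - sub(i)*rhs(i-1)/dia(i-1)
  end do
  ypp(np-1) = rhs(np-1)/dia(np-1)        ! back substitution
  do i = np-2, 2, -1
     ypp(i) = (rhs(i) - sup(i)*ypp(i+1))/dia(i)
  end do
  xe = 1.5d0; i = 2; h = x(i+1) - x(i)   ! evaluate in [x(2), x(3)]
  a = (x(i+1) - xe)/h; b = (xe - x(i))/h
  print *, 'spline value =', a*y(i) + b*y(i+1) &
         + ((a**3 - a)*ypp(i) + (b**3 - b)*ypp(i+1))*h*h/6d0
end program natural_spline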
Numerical Methods in Engineering (Numerical Differentiation and Integration)
Kannan Iyer
kannan.iyer@iitjammu.ac.in

Numerical Differentiation
Motivation for study:
  Obtaining the derivative at a point from a table of functional data
  Obtaining cp from measurements of h-T
  Obtaining f from tabulated v-y data
  Generating methods for solving ODEs/PDEs
Example-I: Derivatives from Polynomials
f = 1/x

  x     f         Δf         Δ²f       Δ³f
  3.4   0.294118  −0.008404  0.000468  0.000040
  3.5   0.285714  −0.007936  0.000428  --
  3.6   0.277778  −0.007508  --        --
  3.7   0.270270  --         --        --

Find the derivative of the function at 3.44. With s = (3.44 − 3.40)/0.1 = 0.4:
  f'(3.44) ≈ (−0.008404)/0.1                       → −0.08404
  + [{2(0.4) − 1}/2] (0.000468)/0.1                → −0.084508
  + [{3(0.4)² − 6(0.4) + 2}/6] (0.000040)/0.1      → −0.084503
  Exact value: −0.084505
TRAPEZOIDAL RULE (First Order)
With the first-order polynomial
  P1(x) = f(0) + s Δf(0),  where s = (x − x0)/h and dx = h ds,
  ∫_{low}^{high} f(x) dx = ∫_0^1 [f(0) + s Δf(0)] h ds
  = h [ s f(0) + (s²/2) Δf(0) ]_0^1
  = h [ f(0) + Δf(0)/2 ] = (h/2) [ f(0) + f(1) ]

Trapezoidal Rule (Cont'd)
As we have used a first-order polynomial, the error term for the polynomial is O(h²).
Since the integral involves a multiplication with h, the order increases to h³ locally:
  Error term = ∫_0^1 [s(s − 1)/2] h² f''(ξ) h ds = −(h³/12) f''(ξ)
Simpson's 1/3 Rule
[Figure: piecewise quadratics over the points 0, 1, 2, …, n−1, n.]
The cubic error term vanishes:
  ∫_0^2 [s(s − 1)(s − 2)/6] h³ f'''(ξ) h ds = 0
so the error comes from the next term:
  ∫_0^2 [s(s − 1)(s − 2)(s − 3)/24] h⁴ f''''(ξ) h ds = −(h⁵/90) f''''(ξ)
Each panel gives
  I_i = (h/3)(f_i + 4 f_{i+1} + f_{i+2}) − (h⁵/90) f''''(ξ)
and summing over panels,
  I = (h/3)[f0 + 4f1 + 2f2 + 4f3 + … + 2f_{n−2} + 4f_{n−1} + f_n]
Global error = (h⁵/90) f''''(ξ) × (x_n − x_0)/(2h), i.e., globally O(h⁴). A runnable sketch of both composite rules follows below.
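A minimal runnable Fortran sketch of the composite trapezoidal and Simpson 1/3 rules, assuming the test integral of sin(x) from 0 to π (exact value 2; mine, not from the slides). Note that n must be even for Simpson:

program quadrature
  implicit none
  real(8), parameter :: pi = 3.141592653589793d0
  real(8) :: h, trap, simp
  integer :: i, n
  n = 16
  h = pi/dble(n)
  trap = 0.5d0*(f(0d0) + f(pi))           ! end points weighted 1/2
  do i = 1, n-1
     trap = trap + f(dble(i)*h)
  end do
  trap = trap*h
  simp = f(0d0) + f(pi)
  do i = 1, n-1, 2
     simp = simp + 4d0*f(dble(i)*h)       ! odd points weighted 4
  end do
  do i = 2, n-2, 2
     simp = simp + 2d0*f(dble(i)*h)       ! even interior points weighted 2
  end do
  simp = simp*h/3d0
  print *, 'trapezoid =', trap, ' simpson =', simp
contains
  real(8) function f(x)
    real(8), intent(in) :: x
    f = sin(x)
  end function f
end program quadrature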
Romberg Integration-I
If the integration procedure is carried out for intervals h and 2h, we can write
  I = I(h) + C1 h² + C2 h⁴ + …   (1)
  I = I(2h) + C1 (2h)² + C2 (2h)⁴ + …   (2)
Multiplying Eq. (1) by 4, subtracting Eq. (2), and then dividing by 3, we get
  I = [4 I(h) − I(2h)]/3 + O(h⁴)
This can be rewritten as
  I = I(h) + [I(h) − I(2h)]/3 + O(h⁴)
Thus we have a higher-order solution from lower-order solutions. This is called Richardson extrapolation.

Romberg Integration-II
The above can be generalized by assuming a power-law form:
  I = I(h) + C1 hⁿ + C2 h^m + …   (1)
  I = I(2h) + C1 (2h)ⁿ + C2 (2h)^m + …   (2)
Multiplying Eq. (1) by 2ⁿ and subtracting Eq. (2), then dividing by 2ⁿ − 1, we get
  I = [2ⁿ I(h) − I(2h)]/(2ⁿ − 1) + O(h^m)
This can be rewritten as
  I = I(h) + [I(h) − I(2h)]/(2ⁿ − 1) + O(h^m)
Romberg Integration-V
The first column of the Romberg table (I_{1,1}, I_{2,1}, I_{3,1}, …) holds trapezoidal estimates with 1, 2, 4, … intervals; each further column applies one Richardson extrapolation. Cleaned-up logic (trapez is the composite trapezoidal routine referenced by the slide):

      I = 1
      aint(1,1) = (ahigh - alow)*(f(alow) + f(ahigh))/2.
      error = 2.*errmax                         ! just to make the algorithm proceed
      do while (I .lt. nromax .and. dabs(error) .gt. errmax)
        I = I + 1
        nsteps = 2**(I-1)
        aint(I,1) = trapez(alow, ahigh, nsteps) ! refined trapezoidal estimate
        do j = 2, I
          aint(I,j) = aint(I,j-1) + (aint(I,j-1) - aint(I-1,j-1))
     1                /(2**(2*(j-1)) - 1)       ! Richardson extrapolation
        enddo
        best = aint(I,I)
        error = abs((best - aint(I,I-1))/best)  ! relative change across the row
      enddo
      stop
      end