
ES 361 - Computing

Methods in Engineering
Lecture 6
Approximation of Functions

Why do we need to approximate a function?


• To understand the trend underlying a complex process

• To predict future response from a process (i.e. extrapolate)

• To determine the intermediate value of a process between data points (i.e. interpolate)

Regression: Mainly used with data that contain a significant amount of error, such as experimental data.
A function that best represents the overall trend of the data points is sought.

Interpolation: Used with precisely known data. A function that passes through all data points is sought.
LEAST SQUARES REGRESSION
Linear Regression
The simplest example of a least squares approximation is fitting a straight line to a set of paired
observations: (x1, y1), (x2, y2), . . . , (xn, yn). The mathematical expression for the straight line is:

𝑦 = 𝑎0 + 𝑎1 𝑥 + 𝑒

where a0 and a1 are coefficients representing the intercept and the


slope, respectively, and e is the error, or residual, between the model
and the observations.
𝑒 = 𝑦 − 𝑎0 − 𝑎1 𝑥

The error should be minimized to obtain the best fit. There are several possible ways to bring the error to a
minimum:

• Minimize the sum of error for each data point.

• Minimize the sum of absolute value of error for each data point.

• Minimize the maximum error.

• Minimize the sum of the squares of error for each data point.

The first three approaches above have shortcomings, so we will minimize the sum of the squares of the
residuals:
$$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left(y_{i,\text{measured}} - y_{i,\text{model}}\right)^2 = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i\right)^2$$
To determine values for a0 and a1, Sr is differentiated with respect to each coefficient:
$$\frac{\partial S_r}{\partial a_0} = -2 \sum \left(y_i - a_0 - a_1 x_i\right)$$

$$\frac{\partial S_r}{\partial a_1} = -2 \sum \left(y_i - a_0 - a_1 x_i\right) x_i$$

And set these two equations equal to zero:

$$0 = \sum y_i - \sum a_0 - \sum a_1 x_i$$

$$0 = \sum x_i y_i - \sum a_0 x_i - \sum a_1 x_i^2$$

Knowing that $\sum a_0 = n\,a_0$, these become the normal equations:

$$n\,a_0 + \left(\sum x_i\right) a_1 = \sum y_i \quad\therefore\quad a_0 = \bar{y} - a_1 \bar{x}$$

$$\left(\sum x_i\right) a_0 + \left(\sum x_i^2\right) a_1 = \sum x_i y_i \quad\therefore\quad a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2}$$

where $\bar{y}$ and $\bar{x}$ are the means of y and x, respectively.
Example:
Fit a straight line to the x and y values given in the table below.

xi yi
1 0.5
2 2.5
3 2.0
4 4.0
5 3.5
6 6.0
7 5.5
Example:

$$n = 7 \qquad \sum x_i y_i = 119.5 \qquad \sum x_i^2 = 140$$

$$\sum x_i = 28 \qquad \bar{x} = \frac{28}{7} = 4$$

$$\sum y_i = 24 \qquad \bar{y} = \frac{24}{7} = 3.428571$$

$$a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2} = \frac{7(119.5) - (28)(24)}{7(140) - (28)^2} = 0.8392857$$

$$a_0 = \bar{y} - a_1 \bar{x} = 3.428571 - 0.8392857(4) = 0.07142857$$

Result:

𝑦 = 0.07142857 + 0.8392857𝑥
Example:
Observed data

xi yi (yi-a0-a1xi)2
1 0.5 0.1687
2 2.5 0.5625
3 2 0.3473
4 4 0.3265
5 3.5 0.5896
6 6 0.7972
7 5.5 0.1993
∑ 2.9911 Sr: sum of squares of residuals
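
The closed-form expressions for a0, a1 and the residual sum Sr can be evaluated directly. A minimal sketch (assuming Python with NumPy, which is not part of the original lecture) applied to the data above:

```python
import numpy as np

# Data from the example above
x = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
y = np.array([0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5])

n = len(x)
# Closed-form least-squares coefficients (slope a1, intercept a0)
a1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
a0 = np.mean(y) - a1 * np.mean(x)

# Sum of squared residuals around the fitted line
Sr = np.sum((y - a0 - a1 * x)**2)

print(a0, a1, Sr)   # roughly 0.0714, 0.8393, 2.99
```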
Quantification of Error of Linear Regression

Standard deviation for the regression line can be determined as:

$$S_{y/x} = \sqrt{\frac{S_r}{n-2}}$$

where Sy/x is called the standard error of the estimate. The subscript notation y/x designates that the error is
for a predicted value of y corresponding to a particular value of x. We divide Sr by n − 2 (n being the number
of data points) because two data-derived estimates, a0 and a1, were used to compute Sr; thus two degrees of
freedom were lost.
Quantification of Error of Linear Regression
St is the total sum of the squares of the residuals between the data points and the mean:

$$S_t = \sum_{i=1}^{n} \left(y_i - \bar{y}\right)^2$$

This is the magnitude of the residual error associated with the dependent variable prior to regression. After
performing the regression, we can compute Sr, the sum of the squares of the residuals around the
regression line. This characterizes the residual error that remains after the regression. The difference
between the two, St − Sr, quantifies the improvement or error reduction due to describing the data in terms
of a straight line rather than as an average value. Because the magnitude of this quantity is scale-
dependent, the difference is normalized to St to yield:
$$r^2 = \frac{S_t - S_r}{S_t} \qquad\text{where}\qquad S_r = \sum_{i=1}^{n} \left(y_i - y_{i,\text{model}}\right)^2$$

where r is the correlation coefficient. For a perfect fit, Sr = 0 and r = r2 = 1, signifying that the line explains 100
% of the variability of the data. For r = r2 = 0, Sr = St and the fit represents no improvement. An alternative
statistical formulation for r is:
$$r = \frac{n \sum x_i y_i - \left(\sum x_i\right)\left(\sum y_i\right)}{\sqrt{n \sum x_i^2 - \left(\sum x_i\right)^2}\;\sqrt{n \sum y_i^2 - \left(\sum y_i\right)^2}}$$
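
A minimal sketch of these error measures (again assuming Python with NumPy, reusing the straight-line example data; the printed values follow from that data, not from the slides):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
y = np.array([0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5])
n = len(x)

a1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
a0 = np.mean(y) - a1 * np.mean(x)

St = np.sum((y - np.mean(y))**2)      # spread of the data about its mean
Sr = np.sum((y - a0 - a1 * x)**2)     # spread about the regression line
Syx = np.sqrt(Sr / (n - 2))           # standard error of the estimate
r2 = (St - Sr) / St                   # coefficient of determination

print(Syx, r2)   # about 0.77 and 0.87 for this data set
```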
Polynomial Regression
The least-squares procedure can be readily extended to fit the data to a higher-order polynomial. For
example:
𝑦 = 𝑎0 + 𝑎1 𝑥 + 𝑎2 𝑥 2 + 𝑒

For this case the sum of the squares of the residuals is:
$$S_r = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right)^2$$
Take the derivative with respect to each of the unknown coefficients of the polynomial:

$$\frac{\partial S_r}{\partial a_0} = -2 \sum \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right)$$

$$\frac{\partial S_r}{\partial a_1} = -2 \sum x_i \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right)$$

$$\frac{\partial S_r}{\partial a_2} = -2 \sum x_i^2 \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right)$$
Polynomial Regression
According to the principle of least-squares, the ‘best fit’ polynomial is the one minimizing the sum of
squared errors, Sr. So the coefficients aj must satisfy:

$$\frac{\partial S_r}{\partial a_j} = 0$$

After setting the derivative equations to be equal to zero and rearranging the expressions:

$$n\,a_0 + \left(\sum x_i\right) a_1 + \left(\sum x_i^2\right) a_2 = \sum y_i$$

$$\left(\sum x_i\right) a_0 + \left(\sum x_i^2\right) a_1 + \left(\sum x_i^3\right) a_2 = \sum x_i y_i$$

$$\left(\sum x_i^2\right) a_0 + \left(\sum x_i^3\right) a_1 + \left(\sum x_i^4\right) a_2 = \sum x_i^2 y_i$$

In matrix notation:

$$\begin{bmatrix} n & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{bmatrix}$$

This system can be solved with Gauss elimination.
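
A minimal sketch of this procedure, assuming Python with NumPy (the helper name fit_quadratic is illustrative, and NumPy's linear solver stands in for hand Gauss elimination):

```python
import numpy as np

def fit_quadratic(x, y):
    """Least-squares fit of y = a0 + a1*x + a2*x^2 via the normal equations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    # Coefficient matrix of the normal equations (sums of powers of x)
    A = np.array([
        [n,            np.sum(x),    np.sum(x**2)],
        [np.sum(x),    np.sum(x**2), np.sum(x**3)],
        [np.sum(x**2), np.sum(x**3), np.sum(x**4)],
    ])
    # Right-hand side (sums of x^k * y)
    b = np.array([np.sum(y), np.sum(x * y), np.sum(x**2 * y)])
    return np.linalg.solve(A, b)   # [a0, a1, a2]
```

The same pattern extends to the (m+1)×(m+1) system given later for an mth-order polynomial; np.polyfit(x, y, m) solves the equivalent least-squares problem directly, returning the coefficients from highest power to lowest.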


Polynomial Regression
The quadratic case can be easily extended to an mth order polynomial. Then:

𝑦 = 𝑎0 + 𝑎1 𝑥 + 𝑎2 𝑥 2 + ⋯ + 𝑎𝑚 𝑥 𝑚 + 𝑒

For this case the sum of the squares of the residuals is:
$$S_r = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_m x_i^m\right)^2$$
Take the derivative with respect to each of the unknown coefficients of the polynomial:

$$\frac{\partial S_r}{\partial a_0} = -2 \sum \left(y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_m x_i^m\right)$$

$$\frac{\partial S_r}{\partial a_1} = -2 \sum x_i \left(y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_m x_i^m\right)$$

$$\frac{\partial S_r}{\partial a_2} = -2 \sum x_i^2 \left(y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_m x_i^m\right)$$
Polynomial Regression

$$\vdots$$

$$\frac{\partial S_r}{\partial a_m} = -2 \sum x_i^m \left(y_i - a_0 - a_1 x_i - a_2 x_i^2 - \cdots - a_m x_i^m\right)$$
After setting the derivative equations to be equal to zero and rearranging the expressions:

$$n\,a_0 + \left(\sum x_i\right) a_1 + \left(\sum x_i^2\right) a_2 + \cdots + \left(\sum x_i^m\right) a_m = \sum y_i$$

$$\left(\sum x_i\right) a_0 + \left(\sum x_i^2\right) a_1 + \left(\sum x_i^3\right) a_2 + \cdots + \left(\sum x_i^{m+1}\right) a_m = \sum x_i y_i$$

$$\left(\sum x_i^2\right) a_0 + \left(\sum x_i^3\right) a_1 + \left(\sum x_i^4\right) a_2 + \cdots + \left(\sum x_i^{m+2}\right) a_m = \sum x_i^2 y_i$$

$$\vdots$$

$$\left(\sum x_i^m\right) a_0 + \left(\sum x_i^{m+1}\right) a_1 + \left(\sum x_i^{m+2}\right) a_2 + \cdots + \left(\sum x_i^{2m}\right) a_m = \sum x_i^m y_i$$


Polynomial Regression
Which is equivalent to the following in matrix notation:

$$\begin{bmatrix}
n & \sum x_i & \sum x_i^2 & \cdots & \sum x_i^m \\
\sum x_i & \sum x_i^2 & \sum x_i^3 & \cdots & \sum x_i^{m+1} \\
\sum x_i^2 & \sum x_i^3 & \sum x_i^4 & \cdots & \sum x_i^{m+2} \\
\vdots & \vdots & \vdots & & \vdots \\
\sum x_i^m & \sum x_i^{m+1} & \sum x_i^{m+2} & \cdots & \sum x_i^{2m}
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_m \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \\ \vdots \\ \sum x_i^m y_i \end{bmatrix}$$

Notice that first 3 rows and first 3 columns are equivalent to the system given for a quadratic equation.
The standard error is formulated as:

$$S_{y/x} = \sqrt{\frac{S_r}{n-(m+1)}}$$
Example:
Determine the coefficients of the 2nd order polynomial that best fits the given data. Calculate the
coefficient of correlation.
Model: $y = a_0 + a_1 x + a_2 x^2$

x      f(x)
0      0
0.2    0.198
0.4    0.359
0.6    0.558
0.8    0.653
1      0.821
1.2    0.921
1.4    0.950
Example:
System of equations:

$$\begin{bmatrix} n & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{bmatrix}$$

With $n = 8$, $\sum x_i = 5.600$, $\sum x_i^2 = 5.600$, $\sum x_i^3 = 6.272$, $\sum x_i^4 = 7.482$, $\sum y_i = 4.460$, $\sum x_i y_i = 4.297$, $\sum x_i^2 y_i = 4.693$:

$$\begin{bmatrix} 8 & 5.600 & 5.600 \\ 5.600 & 5.600 & 6.272 \\ 5.600 & 6.272 & 7.482 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 4.460 \\ 4.297 \\ 4.693 \end{bmatrix}$$

Gauss elimination gives:

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} -0.009 \\ 1.08 \\ -0.275 \end{bmatrix}$$

So the model is $y = -0.009 + 1.08\,x - 0.275\,x^2$.


Observed data and regression results:

x      f(x) observed   f̂ (model)   Ei = f − f̂   f − f̄
0      0               −0.009       0.009        −0.558
0.2    0.198           0.196        0.002        −0.360
0.4    0.359           0.379        −0.020       −0.199
0.6    0.558           0.540        0.018        0.001
0.8    0.653           0.679        −0.026       0.096
1      0.821           0.796        0.025        0.264
1.2    0.921           0.891        0.030        0.364
1.4    0.950           0.964        −0.014       0.393

$$S_t = \sum \left(f_i - \bar{f}\right)^2 = 0.844 \qquad S_r = \sum \left(f_i - \hat{f}_i\right)^2 = 0.0032$$

$$r^2 = \frac{S_t - S_r}{S_t} = \frac{0.844 - 0.0032}{0.844} = 0.996$$

r² is close to 1.0, so the polynomial is a very accurate estimator for f(x).


Linearization of Nonlinear Relationships
In the case of a linear model, the coefficients of the best-fit line are obtained by solving:

$$\begin{bmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \end{bmatrix}$$

The coefficients of certain nonlinear relationships can be determined by the method of least squares
after linearization:

Exponential Model
𝑦 = 𝛼 𝑒 𝛽𝑥
𝑙𝑛 𝑦 = 𝑙𝑛 𝛼 + 𝛽𝑥 Take logarithm of both sides for linearization

𝑦 ′ = 𝑎0 + 𝑎1 𝑥′
Linearization of Nonlinear Relationships
Exponential Model

Calculate (x’i , y’i) data points by the equations:

𝑥′𝑖 = 𝑥𝑖 𝑦′𝑖 = 𝑙𝑛 𝑦𝑖

Calculate ao and a1 by following the procedure of least-squares as explained above, then calculate α and
β from:

𝛼 = 𝑒 𝑎0 𝛽 = 𝑎1
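
A minimal sketch of the exponential fit, assuming Python with NumPy (the function name fit_exponential is illustrative; the data must satisfy y > 0 so the logarithm exists):

```python
import numpy as np

def fit_exponential(x, y):
    """Fit y = alpha * exp(beta * x) by linearizing with ln(y) = ln(alpha) + beta*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    yp = np.log(y)                       # y' = ln(y); requires y > 0
    n = len(x)
    # Linear least squares on (x, y')
    a1 = (n * np.sum(x * yp) - np.sum(x) * np.sum(yp)) / (n * np.sum(x**2) - np.sum(x)**2)
    a0 = np.mean(yp) - a1 * np.mean(x)
    alpha, beta = np.exp(a0), a1         # transform back to the original parameters
    return alpha, beta
```

The power model follows the same pattern with x' = ln(x) in place of x.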
Linearization of Nonlinear Relationships
Simple Power Model
𝑦 = 𝛼 𝑥𝛽 Take ln of both sides for linearization
𝑙𝑛 𝑦 = 𝑙𝑛 𝛼 + 𝛽 𝑙𝑛𝑥 Can be rewritten in the following form:

𝑦 ′ = 𝑎0 + 𝑎1 𝑥′

Calculate (x’i , y’i) data points by the equations:

𝑥′𝑖 = ln(𝑥𝑖 ) 𝑦′𝑖 = 𝑙𝑛 (𝑦𝑖 )

Calculate ao and a1 by following the procedure of least-squares as explained above, then calculate α and
β from:

𝛼 = 𝑒 𝑎0 𝛽 = 𝑎1
Linearization of Nonlinear Relationships
Hyperbolic Relationship
$$y = \alpha\,\frac{x}{\beta + x}$$

Take the inverse of both sides for linearization:

$$\frac{1}{y} = \frac{1}{\alpha} + \frac{\beta}{\alpha}\,\frac{1}{x}$$

Can be rewritten in the following form:

𝑦 ′ = 𝑎0 + 𝑎1 𝑥′
Calculate (x’i , y’i) data points by the equations:

$$x'_i = \frac{1}{x_i} \qquad y'_i = \frac{1}{y_i}$$
Calculate ao and a1 by following the procedure of least-squares as explained above, then calculate α and
β from:

$$\alpha = \frac{1}{a_0} \qquad \beta = \frac{a_1}{a_0}$$
Example:
Estimate the coefficients of the model

$$y \cong a\,\frac{x}{1 + bx}$$

that fits the data below. After linearization, $y' = a_0 + a_1 x'$, where

$$x'_i = \frac{1}{x_i} \qquad y'_i = \frac{1}{y_i} \qquad a_0 = \frac{b}{a} \qquad a_1 = \frac{1}{a}$$

x      y = f(x)   x'       y'
0      0          ignore   ignore
0.2    0.198      5.000    5.051
0.4    0.359      2.500    2.786
0.6    0.558      1.667    1.792
0.8    0.653      1.250    1.531
1      0.821      1.000    1.218
1.2    0.921      0.833    1.086
1.4    0.950      0.714    1.053
n = 7, since the first data point was ignored.

$$\sum x'_i = 12.96 \qquad \sum y'_i = 14.52 \qquad \sum x_i'^2 = 37.80 \qquad \sum x'_i y'_i = 39.99$$

The normal equations are:

$$\begin{bmatrix} 7 & 12.96 \\ 12.96 & 37.80 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} 14.52 \\ 39.99 \end{bmatrix}$$

Solving by Gauss elimination:

$$\begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} 0.316 \\ 0.949 \end{bmatrix} \qquad a = \frac{1}{a_1} = 1.05 \qquad b = a\,a_0 = 0.333$$

The approximation is:

$$y \cong 1.05\,\frac{x}{1 + 0.333\,x}$$
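
A minimal sketch that reproduces this example, assuming Python with NumPy (the x = 0 point is dropped before taking reciprocals, as in the table above):

```python
import numpy as np

x = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])
y = np.array([0.198, 0.359, 0.558, 0.653, 0.821, 0.921, 0.950])

xp, yp = 1.0 / x, 1.0 / y          # linearizing transform x' = 1/x, y' = 1/y
n = len(xp)

a1 = (n * np.sum(xp * yp) - np.sum(xp) * np.sum(yp)) / (n * np.sum(xp**2) - np.sum(xp)**2)
a0 = np.mean(yp) - a1 * np.mean(xp)

a = 1.0 / a1                       # since a1 = 1/a
b = a * a0                         # since a0 = b/a
print(a, b)                        # approximately 1.05 and 0.33
```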
Interpolation by Lagrange Interpolating Polynomials
Interpolation is the estimation of the value of a function as a weighted average of neighboring data
points.

For a set of n+1 data points (xi, yi), an nth-order polynomial that passes exactly through every data point
can be fitted using Lagrange polynomials.
$$f_n(x) = \sum_{i=0}^{n} L_i(x)\, f(x_i) \qquad L_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x - x_j}{x_i - x_j}$$

where $\prod$ is the product sign.
To fit a first-order polynomial (a line) in the range x0 ≤ x ≤ x1:

$$f_1(x) = \frac{x - x_1}{x_0 - x_1}\, f(x_0) + \frac{x - x_0}{x_1 - x_0}\, f(x_1)$$

To fit a second order polynomial in the range x0≤x≤ x2 :

$$f_2(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}\, f(x_0) + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}\, f(x_1) + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}\, f(x_2)$$
Interpolation by Lagrange Interpolating Polynomials
Note that:
1. For the 1st order version:
x=x0 f1(x)=f(x0)
x=x1 f1(x)=f(x1)

2. For the 2nd order version:


x=x0 f2(x)=f(x0)
x=x1 f2(x)=f(x1)
x=x2 f2(x)=f(x2)

3. This method should be used for interpolation. The error in extrapolation can be quite large.

4. Choose the set of xi values closest to the x at which f(x) is being estimated.

5. The error for estimation of f(x) can be calculated by:

$$\varepsilon_a = \frac{f_n - f_{n-1}}{f_n} \times 100\%$$
Example:
Considering the data given below, estimate f(13) by using a second order Lagrange Interpolation
Polynomial and estimate the error of this approximation

x      f(x)
0      0
1      1
4      2
9      3
16     4
25     5
36     6

x = 4, 9, and 16 are the three values closest to 13, so they are used in the second-order polynomial:

$$f_2(x) \cong \frac{(x-9)(x-16)}{(4-9)(4-16)}\, f(4) + \frac{(x-4)(x-16)}{(9-4)(9-16)}\, f(9) + \frac{(x-4)(x-9)}{(16-4)(16-9)}\, f(16)$$

Evaluating at x = 13:

$$f_2(13) \cong \frac{(4)(-3)}{(-5)(-12)}\,2 + \frac{(9)(-3)}{(5)(-7)}\,3 + \frac{(9)(4)}{(12)(7)}\,4 = 3.628$$

Needed for the error computation only, a first-order estimate is also evaluated:

$$f_1(x) \cong \frac{x-16}{9-16}\, f(9) + \frac{x-9}{16-9}\, f(16) \qquad f_1(13) \cong \frac{-3}{-7}\,3 + \frac{4}{7}\,4 = 3.571$$

$$\varepsilon_{f_2} = \frac{3.628 - 3.571}{3.628} = 1.6\%$$
Spline Interpolation
In the previous part, nth order polynomials were used to interpolate between (n+1) data points. For
example, for 8 points, we can derive a perfect 7th order polynomial.
This curve would capture all the wanderings suggested by the points. However, there are cases
where these functions can lead to erroneous results.
An alternative approach is to apply lower-order polynomials to subsets of data points. Such
connecting polynomials are called spline functions.

(Figure: a high-order Lagrange interpolating polynomial compared with a linear spline fit to the same data points.)
Spline Interpolation
Notation used to derive quadratic splines. Notice that there are n intervals and n+1 data points.
The example shown is for n =3.
$$f_j(x) = a_j x^2 + b_j x + c_j$$

3 unknowns for each interval; n intervals → 3n unknowns. The conditions are:

1. $f_j(x_i) = f_{j+1}(x_i) = f(x_i)$ at the n − 1 interior points → 2n − 2 equations.
2. $f'_j(x_i) = f'_{j+1}(x_i)$ at the n − 1 interior points → n − 1 equations.
3. $f_1(x_0) = f(x_0)$ and $f_n(x_n) = f(x_n)$ at the 2 end points → 2 equations.
4. $f''_1(x_0) = 0$ at the first point only → 1 equation.

3n equations in total → solve for the unknown coefficients.


Example
Interpolate between the following points by using quadratic splines. Evaluate f(x) for x = 4.5.

x      f(x)
2.5    3
4      1.5
6.5    2.5
8      0.5

Continuity of the function at the interior point x = 4 (intervals 1 and 2):
16a1 + 4b1 + c1 = 1.5
16a2 + 4b2 + c2 = 1.5

Continuity of the function at the interior point x = 6.5 (intervals 2 and 3):
42.25a2 + 6.5b2 + c2 = 2.5
42.25a3 + 6.5b3 + c3 = 2.5

End points:
6.25a1 + 2.5b1 + c1 = 3
64a3 + 8b3 + c3 = 0.5

Continuity of the derivatives at the interior points:
8a1 + b1 = 8a2 + b2
13a2 + b2 = 13a3 + b3

The second derivative at the first point is zero:
a1 = 0
Expressed in matrix form (with a1 = 0, the remaining unknowns are b1, c1, a2, b2, c2, a3, b3, c3):

$$\begin{bmatrix}
4 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 16 & 4 & 1 & 0 & 0 & 0 \\
0 & 0 & 42.25 & 6.5 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 42.25 & 6.5 & 1 \\
2.5 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 64 & 8 & 1 \\
1 & 0 & -8 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 13 & 1 & 0 & -13 & -1 & 0
\end{bmatrix}
\begin{bmatrix} b_1 \\ c_1 \\ a_2 \\ b_2 \\ c_2 \\ a_3 \\ b_3 \\ c_3 \end{bmatrix}
=
\begin{bmatrix} 1.5 \\ 1.5 \\ 2.5 \\ 2.5 \\ 3 \\ 0.5 \\ 0 \\ 0 \end{bmatrix}
\quad\Rightarrow\quad
\begin{bmatrix} b_1 \\ c_1 \\ a_2 \\ b_2 \\ c_2 \\ a_3 \\ b_3 \\ c_3 \end{bmatrix}
=
\begin{bmatrix} -1 \\ 5.5 \\ 0.56 \\ -5.48 \\ 14.46 \\ -2.09 \\ 28.96 \\ -97.46 \end{bmatrix}$$

f1 = -x + 5.5 2.5 ≤ x ≤ 4.0

f2 = 0.56x²-5.48x + 14.46 4.0 ≤ x ≤ 6.5

f3 = -2.09x² + 28.96x - 97.46 6.5 ≤ x ≤ 8.0

f2(4.5) = 0.56*(4.5)²-5.48*(4.5) + 14.46 = 1.14
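
A minimal sketch that assembles and solves this 8×8 system, assuming Python with NumPy (the unknown ordering follows the matrix above, with a1 already set to zero):

```python
import numpy as np

# Unknowns: [b1, c1, a2, b2, c2, a3, b3, c3]  (a1 = 0 from the last condition)
A = np.array([
    [4,   1, 0,     0,   0, 0,     0,   0],   # f1(4)    = 1.5
    [0,   0, 16,    4,   1, 0,     0,   0],   # f2(4)    = 1.5
    [0,   0, 42.25, 6.5, 1, 0,     0,   0],   # f2(6.5)  = 2.5
    [0,   0, 0,     0,   0, 42.25, 6.5, 1],   # f3(6.5)  = 2.5
    [2.5, 1, 0,     0,   0, 0,     0,   0],   # f1(2.5)  = 3
    [0,   0, 0,     0,   0, 64,    8,   1],   # f3(8)    = 0.5
    [1,   0, -8,   -1,   0, 0,     0,   0],   # f1'(4)   = f2'(4)
    [0,   0, 13,    1,   0, -13,  -1,   0],   # f2'(6.5) = f3'(6.5)
])
rhs = np.array([1.5, 1.5, 2.5, 2.5, 3.0, 0.5, 0.0, 0.0])

b1, c1, a2, b2, c2, a3, b3, c3 = np.linalg.solve(A, rhs)
print(a2 * 4.5**2 + b2 * 4.5 + c2)   # f2(4.5), about 1.14
```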


Spline Interpolation
Cubic Spline
The objective in cubic splines is to derive a 3rd-order polynomial for each interval between knots:

𝑓𝑖 𝑥 = 𝑎𝑖 𝑥 3 + 𝑏𝑖 𝑥 2 + 𝑐𝑖 𝑥 + 𝑑𝑖

Thus, for n + 1 data points (i = 0, 1, 2, . . . , n), there are n intervals and 4n unknown constants to
evaluate. 4n conditions are therefore required to evaluate the unknowns that originate from the
following rules that need to be satisfied:

1. $f_j(x_i) = f_{j+1}(x_i) = f(x_i)$ at the n − 1 interior points → 2n − 2 equations.
2. $f'_j(x_i) = f'_{j+1}(x_i)$ at the n − 1 interior points → n − 1 equations.
3. $f''_j(x_i) = f''_{j+1}(x_i)$ at the n − 1 interior points → n − 1 equations.
4. $f_1(x_0) = f(x_0)$ and $f_n(x_n) = f(x_n)$ at the 2 end points → 2 equations.
5. $f''_1(x_0) = 0$ and $f''_n(x_n) = 0$ at the 2 end points → 2 equations.

4n equations in total → solve for the unknown coefficients.


A reduction in the number of equations is possible, however, by using Lagrange interpolation to express each
cubic in terms of the second derivatives at the knots.
Spline Interpolation
Cubic Spline
This cubic equation can be applied to each interval:

$$f_i(x) = \frac{f''_i(x_{i-1})}{6(x_i - x_{i-1})}(x_i - x)^3 + \frac{f''_i(x_i)}{6(x_i - x_{i-1})}(x - x_{i-1})^3 + \left[\frac{f(x_{i-1})}{x_i - x_{i-1}} - \frac{f''(x_{i-1})(x_i - x_{i-1})}{6}\right](x_i - x) + \left[\frac{f(x_i)}{x_i - x_{i-1}} - \frac{f''(x_i)(x_i - x_{i-1})}{6}\right](x - x_{i-1})$$

The second derivatives at the end of each interval (𝑓𝑖′′ 𝑥𝑖−1 and 𝑓𝑖′′ (𝑥𝑖 )) are the only unknowns in
the above equation. The following 2nd equation can be used to compute these second derivatives:

$$(x_i - x_{i-1})\,f''(x_{i-1}) + 2(x_{i+1} - x_{i-1})\,f''(x_i) + (x_{i+1} - x_i)\,f''(x_{i+1}) = \frac{6}{x_{i+1} - x_i}\left[f(x_{i+1}) - f(x_i)\right] + \frac{6}{x_i - x_{i-1}}\left[f(x_{i-1}) - f(x_i)\right]$$

These equations are based on Lagrange interpolating polynomials and the 5 rules listed in the
previous slide. The related proof can be found in Box 18.3 of your textbook.
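
A minimal sketch of this approach, assuming Python with NumPy (the function names are illustrative): the second relation is written once for each interior knot, giving a tridiagonal system for the unknown second derivatives (with f″(x0) = f″(xn) = 0 for the natural spline); the interval formula then evaluates the spline at any query point.

```python
import numpy as np

def natural_cubic_second_derivs(x, f):
    """Solve the tridiagonal system above for f'' at the knots (natural end conditions)."""
    x, f = np.asarray(x, float), np.asarray(f, float)
    n = len(x) - 1                        # number of intervals
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for k, i in enumerate(range(1, n)):   # one equation per interior knot x_i
        h_lo, h_hi = x[i] - x[i - 1], x[i + 1] - x[i]
        A[k, k] = 2 * (x[i + 1] - x[i - 1])
        if k > 0:
            A[k, k - 1] = h_lo            # coefficient of f''(x_{i-1})
        if k < n - 2:
            A[k, k + 1] = h_hi            # coefficient of f''(x_{i+1})
        rhs[k] = 6 / h_hi * (f[i + 1] - f[i]) + 6 / h_lo * (f[i - 1] - f[i])
    d2 = np.zeros(n + 1)                  # f''(x_0) = f''(x_n) = 0
    d2[1:n] = np.linalg.solve(A, rhs)
    return d2

def cubic_spline_eval(x, f, d2, xq):
    """Evaluate the interval cubic given above at a query point xq."""
    x, f = np.asarray(x, float), np.asarray(f, float)
    i = int(np.searchsorted(x, xq))       # index of the interval's right knot
    i = min(max(i, 1), len(x) - 1)
    h = x[i] - x[i - 1]
    return (d2[i - 1] * (x[i] - xq) ** 3 / (6 * h)
            + d2[i] * (xq - x[i - 1]) ** 3 / (6 * h)
            + (f[i - 1] / h - d2[i - 1] * h / 6) * (x[i] - xq)
            + (f[i] / h - d2[i] * h / 6) * (xq - x[i - 1]))
```

For practical work, scipy.interpolate.CubicSpline(x, f, bc_type='natural') constructs the same natural cubic spline.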
