Topic 7: Orthogonality and least squares
C.O.S. Sorzano
Biomedical Engineering
December 3, 2013
D. Lay. Linear algebra and its applications (3rd ed). Pearson (2006). Chapter 6.
Bedford, J. L. Sinogram analysis of aperture optimization by iterative least-squares in volumetric modulated arc therapy. Physics in Medicine and Biology,
Theorem 1.1
If we consider u and v to be column vectors (∈ Mn×1), then
u · v = uT v
Example
Let u = (2, −5, −1) and v = (3, 2, −3).
u · v = 2 · 3 + (−5) · 2 + (−1) · (−3) = 6 − 10 + 3 = −1
Theorem 1.2
For any three vectors u, v, w ∈ Rn and any scalar r ∈ R it is verified that
1. u · v = v · u
2. (u + v) · w = u · w + v · w
3. (r u) · v = r (u · v) = u · (r v)
4. u · u ≥ 0
5. u · u = 0 ⇔ u = 0
Corollary
(r1 u1 + r2 u2 + ... + rp up ) · v = r1 (u1 · v) + r2 (u2 · v) + ... + rp (up · v)
Theorem 1.3
Given any vector v ∈ Rⁿ,
‖v‖ = √(v1² + v2² + ... + vn²)
Example
The length of v = (1, −2, 2, 0) is
‖v‖ = √(1² + (−2)² + 2² + 0²) = 3
Theorem 1.4
For any vector v and any scalar r it is verified that
‖r v‖ = |r| ‖v‖
Proof
It will be given only for v ∈ Rn :
‖r v‖ = √((r v1)² + (r v2)² + ... + (r vn)²) = √(r² (v1² + v2² + ... + vn²))
= |r| √(v1² + v2² + ... + vn²) = |r| ‖v‖ (q.e.d.)
Example (continued)
Find a vector of unit length that has the same direction as v = (1, −2, 2, 0).
Solution
uv = v/‖v‖ = (1/3, −2/3, 2/3, 0) ⇒ ‖uv‖ = √(1/9 + 4/9 + 4/9 + 0) = 1
Example
Calculate the distance between 2 and 8 as well as between -3 and 4.
Example
Calculate the distance between u = (7, 1) and v = (3, 2)
d(u, v) = ‖(7, 1) − (3, 2)‖ = ‖(4, −1)‖ = √(4² + (−1)²) = √17
Example
For any two vectors in R3 , u and v, the distance can be calculated through
d(u, v) = ‖u − v‖ = ‖(u1 − v1, u2 − v2, u3 − v3)‖ = √((u1 − v1)² + (u2 − v2)² + (u3 − v3)²)
Example
Any two vectors in R², u and v, are orthogonal if and only if d(u, v) = d(u, −v)
Corollary
0 is orthogonal to any other vector.
Example
Let W be a plane in R3 passing through the origin and L be a line, passing
through the origin and perpendicular to W . For any vector w ∈ W and any vector
z ∈ L we have
w·z=0
Therefore,
L = W ⊥ ⇔ W = L⊥
Theorem 1.7
If x is orthogonal to all the rows of A, that is, ai · x = 0 for i = 1, 2, ..., m, then Ax = 0.
So, x ∈ Nul{A}; in other words, (Row{A})⊥ = Nul{A}.
Theorem 1.8
For any two vectors u and v in a vector space V , the angle between the two can
be measured through the dot product:
u · v = ‖u‖ ‖v‖ cos θ
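As a quick numerical check of Theorem 1.8, the following MATLAB sketch (variable names are my own) recovers the angle between the two vectors of the earlier dot-product example from their dot product.

% Angle between two vectors from the dot product (Theorem 1.8)
u = [2; -5; -1];
v = [3; 2; -3];
costheta = dot(u, v) / (norm(u) * norm(v));   % cos(theta) = u.v / (||u|| ||v||)
theta = acos(costheta);                       % angle in radians
fprintf('cos(theta) = %.4f, theta = %.2f degrees\n', costheta, rad2deg(theta));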
Exercises
From Lay (3rd ed.), Chapter 6, Section 1:
6.1.15
6.1.22
6.1.24
6.1.26
6.1.28
6.1.30
6.1.32 (computer)
Example
Let u1 = (3, 1, 1), u2 = (−1, 2, 1), u3 = (−1/2, −2, 7/2). Check whether the set
S = {u1 , u2 , u3 } is orthogonal.
Solution
u1 · u2 = 3 · (−1) + 1 · 2 + 1 · 1 = 0
u1 · u3 = 3 · (−1/2) + 1 · (−2) + 1 · (7/2) = 0
u2 · u3 = (−1) · (−1/2) + 2 · (−2) + 1 · (7/2) = 0
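The same check can be done numerically. A minimal MATLAB sketch (matrix and variable names are my own): a set is orthogonal when all pairwise dot products, i.e., the off-diagonal entries of the Gram matrix, vanish.

% Check whether the columns of U form an orthogonal set
U = [3 -1 -1/2; 1 2 -2; 1 1 7/2];     % u1, u2, u3 as columns
G = U' * U;                           % Gram matrix: entry (i,j) = ui . uj
offDiag = G - diag(diag(G));          % keep only the cross products
isOrthogonal = all(abs(offDiag(:)) < 1e-12)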
Theorem 2.1
If S is an orthogonal set of non-null vectors, then S is linearly independent and,
consequently, it is a basis of the subspace spanned by S.
Proof
Let ui (i = 1, 2, ..., p) be the elements of S. Let us assume that S is linearly
dependent. Then, there exist coefficients c1, c2, ..., cp, not all of them null, such
that
0 = c1 u1 + c2 u2 + ... + cp up
Now, we compute the inner product with u1
0 · u1 = (c1 u1 + c2 u2 + ... + cp up) · u1
0 = c1 (u1 · u1) + c2 (u2 · u1) + ... + cp (up · u1) = c1 ‖u1‖² ⇒ c1 = 0
Taking the inner product with ui (i = 2, 3, ..., p) we can show that all ci's are 0, which contradicts the assumption; therefore, the set S is linearly independent.
Theorem 2.2
Let {u1, u2, ..., up} be an orthogonal basis for a vector space V. Then for each x ∈ V
we have
x = (x·u1/‖u1‖²) u1 + (x·u2/‖u2‖²) u2 + ... + (x·up/‖up‖²) up
Proof
If x is in V, then it can be expressed as a linear combination of the vectors in a basis of V:
x = c1 u1 + c2 u2 + ... + cp up
Taking the inner product with uj and using orthogonality, x · uj = cj (uj · uj) = cj ‖uj‖², so cj = (x · uj)/‖uj‖². (q.e.d.)
Example
Let u1 = (3, 1, 1), u2 = (−1, 2, 1), u3 = (−1/2, −2, 7/2), and B = {u1, u2, u3} be an
orthogonal basis of R3 . Let x = (6, 1, −8). The coordinates of x in B are given by
x · u1 = 11    x · u2 = −12    x · u3 = −33
‖u1‖² = 11    ‖u2‖² = 6    ‖u3‖² = 33/2
x = (11/11) u1 + (−12/6) u2 + (−33/(33/2)) u3 = u1 − 2u2 − 2u3
The coordinates of x in the basis B are
[x]B = (1, −2, −2)
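The coordinates can also be obtained numerically. The MATLAB sketch below (my own variable names) simply applies the formula of Theorem 2.2 to this basis.

% Coordinates of x in an orthogonal basis (Theorem 2.2)
u1 = [3; 1; 1]; u2 = [-1; 2; 1]; u3 = [-1/2; -2; 7/2];
x  = [6; 1; -8];
c1 = dot(x, u1) / norm(u1)^2;   %  11/11      =  1
c2 = dot(x, u2) / norm(u2)^2;   % -12/6       = -2
c3 = dot(x, u3) / norm(u3)^2;   % -33/(33/2)  = -2
coords = [c1; c2; c3]           % [x]_B = (1, -2, -2)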
Orthogonal projection of y onto a vector u:
y = ŷ + z = αu + z ⇒ z = y − ŷ
Example
Let y = (7, 6) and u = (4, 2). Then,
ŷ = (y·u/‖u‖²) u = (40/20) u = 2u = (8, 4)
y · u = 40, ‖u‖² = 20
z = y − ŷ = (7, 6) − (8, 4) = (−1, 2)
d(y, ŷ) = ‖y − ŷ‖ = ‖z‖ = √((−1)² + 2²) = √5
Example
Show that the set {u1 , u2 , u3 } is orthonormal, with
u1 = (1/√11)(3, 1, 1)    u2 = (1/√6)(−1, 2, 1)    u3 = (1/√66)(−1, −4, 7)
Solution
Let’s check that they are orthogonal:
u1 · u2 = (1/√11)(1/√6)(3 · (−1) + 1 · 2 + 1 · 1) = 0
u1 · u3 = (1/√11)(1/√66)(3 · (−1) + 1 · (−4) + 1 · 7) = 0
u2 · u3 = (1/√6)(1/√66)((−1) · (−1) + 2 · (−4) + 1 · 7) = 0
Example (continued)
Now, let’s check that they have unit length:
‖u1‖ = √((1/√11)² (3² + 1² + 1²)) = √((9 + 1 + 1)/11) = 1
‖u2‖ = √((1/√6)² ((−1)² + 2² + 1²)) = √((1 + 4 + 1)/6) = 1
‖u3‖ = √((1/√66)² ((−1)² + (−4)² + 7²)) = √((1 + 16 + 49)/66) = 1
Theorem 2.3
If S = {u1 , u2 , ..., un } is an orthonormal set, then it is an orthonormal basis of
Span{S}.
Example
{e1 , e2 , ..., en } is an orthonormal basis of Rn .
Orthonormal basis
Theorem 2.4
Let S = {u1, u2, ..., un} be an orthogonal set of non-null vectors. Then the set
S' = {u'1, u'2, ..., u'n}, where
u'i = ui/‖ui‖,
is an orthonormal basis of Span{S}.
Proof
u'i · u'j = (ui/‖ui‖) · (uj/‖uj‖) = (ui · uj)/(‖ui‖ ‖uj‖) for i ≠ j. But this product is obviously 0 because the ui vectors are orthogonal. Let's check now that the u'i vectors have unit length:
‖u'i‖ = ‖ui/‖ui‖‖ = ‖ui‖/‖ui‖ = 1
Theorem 2.6
Let U ∈ Mm×n be an orthonormal matrix (a matrix with orthonormal columns). Then, for all x, y ∈ Rⁿ:
1. ‖Ux‖ = ‖x‖
2. (Ux) · (Uy) = x · y
3. (Ux) · (Uy) = 0 ⇔ x · y = 0
Example
Let U = [1/√2  2/3; 1/√2  −2/3; 0  1/3] and x = (√2, 3).
U is an orthonormal matrix because
UᵀU = [1/√2  1/√2  0; 2/3  −2/3  1/3] [1/√2  2/3; 1/√2  −2/3; 0  1/3] = [1  0; 0  1]
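The sketch below (my own variable names) verifies numerically for this U and x that UᵀU is the identity and that multiplication by U preserves lengths, as stated in Theorem 2.6.

% Verify U'U = I and ||Ux|| = ||x|| (Theorem 2.6)
U = [1/sqrt(2)  2/3;
     1/sqrt(2) -2/3;
     0          1/3];
x = [sqrt(2); 3];
disp(U' * U)                                                  % 2x2 identity
fprintf('||Ux|| = %.4f, ||x|| = %.4f\n', norm(U*x), norm(x)); % both sqrt(11)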
Theorem 2.7
Let U be an orthonormal and square matrix. Then,
1. U⁻¹ = Uᵀ
2. Uᵀ is also an orthonormal matrix (i.e., the rows of U also form an orthonormal set of vectors).
Exercises
From Lay (3rd ed.), Chapter 6, Section 2:
6.2.1
6.2.10
6.2.15
6.2.25
6.2.26
6.2.29
6.2.35 (computer)
z = y − ŷ
z⊥W
Example
Let {u1 , u2 , ..., u5 } be an orthogonal basis of R5 . Consider the subspace
W = Span{u1 , u2 }. Given any vector y ∈ R5 , we can decompose it as the sum of
a vector in W and a vector perpendicular to W
y = ŷ + z
Solution
If {u1 , u2 , ..., u5 } is a basis of R5 , then any vector y ∈ R5 can be written as
y = c1 u1 + c2 u2 + ... + c5 u5
We may decompose this sum as
ŷ = c1 u1 + c2 u2
z = c3 u3 + c4 u4 + c5 u5
Example (continued)
It is obvious that ŷ ∈ W. Now we need to show that z ∈ W⊥. To do so, we
will show that
z · u1 = 0
z · u2 = 0
To show the first equation we note that
z · u1 = (c3 u3 + c4 u4 + c5 u5 ) · u1
= c3 (u3 · u1 ) + c4 (u4 · u1 ) + c5 (u5 · u1 )
= c3 · 0 + c4 · 0 + c5 · 0
= 0
We would proceed analogously for z · u2 = 0.
Proof
ŷ is obviously in W since it has been written as a linear combination of vectors in
a basis of W . z is perpendicular to W because
z · u1 = (y − (y·u1/‖u1‖²) u1 − (y·u2/‖u2‖²) u2 − ... − (y·up/‖up‖²) up) · u1
= y · u1 − (y·u1/‖u1‖²)(u1 · u1) − (y·u2/‖u2‖²)(u2 · u1) − ... − (y·up/‖up‖²)(up · u1)
[{ui} is an orthogonal set]
= y · u1 − (y·u1/‖u1‖²)(u1 · u1)
= y · u1 − (y·u1/‖u1‖²) ‖u1‖²
= y · u1 − y · u1
= 0
We could proceed analogously for all elements in the basis of W .
Example
Let u1 = (2, 5, −1) and u2 = (−2, 1, 1). Let W be the subspace spanned by u1
and u2 . Let y = (1, 2, 3) ∈ R3 . The orthogonal projection of y onto W is
ŷ = (y·u1/‖u1‖²) u1 + (y·u2/‖u2‖²) u2
= ((1·2 + 2·5 + 3·(−1))/(2² + 5² + (−1)²)) (2, 5, −1) + ((1·(−2) + 2·1 + 3·1)/((−2)² + 1² + 1²)) (−2, 1, 1)
= (9/30)(2, 5, −1) + (15/30)(−2, 1, 1) = (−2/5, 2, 1/5)
z = y − ŷ = (1, 2, 3) − (−2/5, 2, 1/5) = (7/5, 0, 14/5)
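A MATLAB sketch of the same computation (assumed variable names), using the projection formula onto the orthogonal basis {u1, u2} of W.

% Orthogonal projection of y onto W = Span{u1, u2} (orthogonal basis)
u1 = [2; 5; -1]; u2 = [-2; 1; 1];
y  = [1; 2; 3];
yhat = dot(y,u1)/norm(u1)^2 * u1 + dot(y,u2)/norm(u2)^2 * u2   % (-2/5, 2, 1/5)
z    = y - yhat                                                % ( 7/5, 0, 14/5)
dot(z, u1), dot(z, u2)                                         % both 0: z is in W-perp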
Geometrical interpretation
ŷ can be understood as the sum of the orthogonal projection of y onto each one
of the elements of the basis of W .
Theorem 3.2
If y belongs to W , then the orthogonal projection of y onto W is itself:
ŷ = y
Properties of orthogonal projections
Theorem 3.3 (Best approximation theorem)
The orthogonal projection of y onto W is the point in W with minimum distance to y, i.e., for all v ∈ W
‖y − ŷ‖ ≤ ‖y − v‖
Proof idea: write y − v = (y − ŷ) + (ŷ − v); since y − ŷ ⊥ W and ŷ − v ∈ W, the Pythagorean theorem gives ‖y − v‖² = ‖y − ŷ‖² + ‖ŷ − v‖² ≥ ‖y − ŷ‖².
Theorem 3.4
If {u1 , u2 , ..., up } is an orthonormal basis of W , then the orthogonal projection of
y onto W is
ŷ = ⟨y, u1⟩ u1 + ⟨y, u2⟩ u2 + ... + ⟨y, up⟩ up
Proof
Since the basis is in this case orthonormal, ‖ui‖ = 1 and consequently the general formula ŷ = (⟨y, u1⟩/‖u1‖²) u1 + ... + (⟨y, up⟩/‖up‖²) up reduces to the expression above. (q.e.d.)
Corollary
Let U = [u1 u2 ... up] be an n × p matrix with orthonormal columns and W = Col{U} its column space. Then,
∀x ∈ Rᵖ:  UᵀUx = x   (no effect)
∀y ∈ Rⁿ:  UUᵀy = ŷ   (orthogonal projection of y onto W)
If U is n × n, then W = Rⁿ and the projection has no effect:
∀y ∈ Rⁿ:  UUᵀy = ŷ = y   (no effect)
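The corollary can be illustrated numerically. In the sketch below (my own variable names) the columns of U are obtained by normalizing the orthogonal vectors u1 and u2 of the previous example, and U Uᵀ y reproduces the projection ŷ computed there.

% Projection as U*U'*y when the columns of U are orthonormal
u1 = [2; 5; -1]; u2 = [-2; 1; 1];
U  = [u1/norm(u1), u2/norm(u2)];    % orthonormal columns spanning W
y  = [1; 2; 3];
yhat = U * (U' * y)                 % (-2/5, 2, 1/5), as before
disp(U' * U)                        % 2x2 identity: U'*U*x = x has no effect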
Exercises
From Lay (3rd ed.), Chapter 6, Section 3:
6.3.1
6.3.7
6.3.15
6.3.23
6.3.24
6.3.25 (computer)
v1 = x1 = (3, 6, 0)
For the second vector in the basis, we need to keep the component of x2 = (1, 2, 2) that is orthogonal to x1. To do so we calculate the projection of x2 onto x1, p = (x2·v1/‖v1‖²) v1 = (15/45)(3, 6, 0) = (1, 2, 0), and we decompose x2 as
x2 = p + v2 ⇒ v2 = x2 − p = (1, 2, 2) − (1, 2, 0) = (0, 0, 2)
Example (continued)
The set {v1 , v2 } is an orthogonal basis of W .
Example
Let W = Span{x1 , x2 , x3 } with x1 = (1, 1, 1, 1), x2 = (0, 1, 1, 1) and
x3 = (0, 0, 1, 1). Let’s look for an orthogonal basis of W .
Solution
We may keep the first vector for the basis. Then we construct a subspace (W1 )
with a single element in its basis
v1 = x1 = (1, 1, 1, 1) W1 = Span{v1 }
For the second vector in the basis, we need to keep the component of x2 that is
orthogonal to W1 . With the already computed basis vectors, we construct a new
subspace (W2 ) with two elements in its basis
v2 = x2 − ProjW1 (x2) = (−3/4, 1/4, 1/4, 1/4)    W2 = Span{v1, v2}
For the third vector in the basis, we repeat the same procedure
v3 = x3 − ProjW2 (x3) = (0, −2/3, 1/3, 1/3)    W3 = Span{v1, v2, v3}
v1 = x1 W1 = Span{v1 }
v2 = x2 − ProjW1 (x2 ) W2 = Span{v1 , v2 }
...
vp = xp − ProjWp−1 (xp ) Wp = Span{v1 , v2 , ..., vp } = W
Proof
Consider Wk = Span{v1 , v2 , ..., vk } and let us assume that {v1 , v2 , ..., vk } is a
basis of Wk . Now we construct
vk+1 = xk+1 − ProjWk (xk+1)    Wk+1 = Span{v1, v2, ..., vk+1}
By the orthogonal decomposition, vk+1 ⊥ Wk, and it is non-null because xk+1 ∉ Wk (the xi are linearly independent). Hence {v1, v2, ..., vk+1} is an orthogonal basis of Wk+1, and by induction the procedure yields an orthogonal basis of W.
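The whole procedure (the Gram-Schmidt process) fits in a few lines of MATLAB. This is only a sketch with my own function and variable names; it assumes the columns of X are linearly independent.

function V = gram_schmidt(X)
% Classical Gram-Schmidt: the columns of V are an orthogonal basis of Col{X}
    [n, p] = size(X);
    V = zeros(n, p);
    for k = 1:p
        v = X(:, k);
        for j = 1:k-1        % subtract the projection of x_k onto W_{k-1}
            v = v - (dot(X(:,k), V(:,j)) / norm(V(:,j))^2) * V(:,j);
        end
        V(:, k) = v;
    end
end

% Example: gram_schmidt([1 0 0; 1 1 0; 1 1 1; 1 1 1]) returns the columns
% v1 = (1,1,1,1), v2 = (-3/4,1/4,1/4,1/4), v3 = (0,-2/3,1/3,1/3).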
Orthonormal basis
Once we have an orthogonal basis, we simply have to normalize each vector to
have an orthonormal basis.
Example
Let W = Span{x1 , x2 } with x1 = (3, 6, 0) and x2 = (1, 2, 2). Let’s look for an
orthonormal basis of W .
Solution
In Slide 52 we learned that an orthogonal basis was given by
v1 = (3, 6, 0)
v2 = (0, 0, 2)
Normalizing each vector, an orthonormal basis of W is
u1 = v1/‖v1‖ = (1/√45)(3, 6, 0) = (1/√5, 2/√5, 0)
u2 = v2/‖v2‖ = (0, 0, 1)
Example (continued)
To find R we multiply on both sides of the factorization by Qᵀ:
A = QR ⇒ QᵀA = QᵀQR = R

R = QᵀA = [1/2  1/2  1/2  1/2; −3/√12  1/√12  1/√12  1/√12; 0  −2/√6  1/√6  1/√6] [1 0 0; 1 1 0; 1 1 1; 1 1 1]
= [2  3/2  1; 0  3/√12  2/√12; 0  0  2/√6]
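The same factorization can be obtained in MATLAB with the built-in qr in economy-size mode (a sketch with my own variable names; MATLAB may return Q and R with the signs of some columns and rows flipped).

% Economy-size QR decomposition: A = Q*R with orthonormal columns in Q
A = [1 0 0;
     1 1 0;
     1 1 1;
     1 1 1];
[Q, R] = qr(A, 0);        % Q is 4x3, R is 3x3 upper triangular
disp(Q); disp(R);
norm(A - Q*R)             % ~0: the factorization reproduces A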
Exercises
From Lay (3rd ed.), Chapter 6, Section 4:
6.4.7
6.4.13
6.4.19
6.4.22
6.4.24
∀x ∈ Rⁿ: ‖b − Ax̂‖ ≤ ‖b − Ax‖
Theorem 5.1
The set of least-squares solutions of Ax = b is the same as the set of solutions of
the normal equations
AᵀAx = Aᵀb
Example
Find a least-squares solution to Ax = b with A = [4 0; 0 2; 1 1] and b = (2, 0, 11).
Solution
Let's solve the normal equations AᵀAx̂ = Aᵀb:
AᵀA = [17 1; 1 5]    Aᵀb = (19, 11)
[17 1; 1 5] x̂ = (19, 11) ⇒ x̂ = [17 1; 1 5]⁻¹ (19, 11) = (1, 2)
Let's check that x̂ is not a solution of the original equation system but a least-squares solution:
Ax̂ = [4 0; 0 2; 1 1] (1, 2) = (4, 4, 3) = b̂ ≠ b = (2, 0, 11)
Example (continued)
In this case, the least-squares error is
‖b̂ − b‖ = ‖(4, 4, 3) − (2, 0, 11)‖ = ‖(2, 4, −8)‖ = √84 ≈ 9.165
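Numerically, the same least-squares solution can be obtained from the normal equations or, more directly, with MATLAB's backslash operator; a sketch with my own variable names follows.

% Least-squares solution via the normal equations and via backslash
A = [4 0; 0 2; 1 1];
b = [2; 0; 11];
xhat1 = (A' * A) \ (A' * b)      % normal equations: (1, 2)
xhat2 = A \ b                    % backslash solves the same LS problem
err   = norm(A * xhat1 - b)      % least-squares error, sqrt(84) ~ 9.165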
Example
Unfortunately, the least-squares solution may not be unique, as shown in the next example (arising in ANOVA). Find a least-squares solution to Ax = b with
A = [1 1 0 0; 1 1 0 0; 1 0 1 0; 1 0 1 0; 1 0 0 1; 1 0 0 1] and b = (−3, −1, 0, 2, 5, 1).
Solution
AᵀA = [6 2 2 2; 2 2 0 0; 2 0 2 0; 2 0 0 2]    Aᵀb = (4, −4, 2, 6)
Example (continued)
The augmented matrix [AᵀA  Aᵀb] is row reduced:
[6 2 2 2 4; 2 2 0 0 −4; 2 0 2 0 2; 2 0 0 2 6] ∼ [1 0 0 1 3; 0 1 0 −1 −5; 0 0 1 −1 −2; 0 0 0 0 0]
Any point of the form
x̂ = (3, −5, −2, 0) + x4 (−1, 1, 1, 1)    ∀x4 ∈ R
is a least-squares solution of the problem.
Theorem 5.2
The matrix AᵀA is invertible iff the columns of A are linearly independent. In this case, the equation system Ax = b has a unique least-squares solution given by
x̂ = (AᵀA)⁻¹ Aᵀ b = A⁺ b
Ax̂ = A R⁻¹ Qᵀb = Q R R⁻¹ Qᵀb = Q Qᵀb.
But the columns of Q form an orthonormal basis of Col{A} (Theorem 4.2 and the Corollary in Slide 49), and consequently Q Qᵀb is the orthogonal projection of b onto Col{A}, that is, b̂.
So, x̂ = R⁻¹ Qᵀb is a least-squares solution of Ax = b. Additionally, since the columns of A are linearly independent, by Theorem 5.2 this solution is unique.
Least squares and QR decomposition
Let A = [1 3 5; 1 1 0; 1 1 2; 1 3 3] and b = (3, 5, 7, −3). Its QR decomposition is
A = QR = [1/2  1/2  1/2; 1/2  −1/2  −1/2; 1/2  −1/2  1/2; 1/2  1/2  −1/2] [2 4 5; 0 2 3; 0 0 2]
Qᵀb = (6, −6, 4) ⇒ [2 4 5; 0 2 3; 0 0 2] x̂ = (6, −6, 4) ⇒ x̂ = (10, −6, 2)
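The same steps in MATLAB (assumed variable names): factor A, form Qᵀb, and solve the triangular system R x̂ = Qᵀb by back substitution.

% Least squares via the QR decomposition: R*xhat = Q'*b
A = [1 3 5; 1 1 0; 1 1 2; 1 3 3];
b = [3; 5; 7; -3];
[Q, R] = qr(A, 0);        % economy-size QR
xhat = R \ (Q' * b)       % triangular solve; xhat = (10, -6, 2)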
Exercises
From Lay (3rd ed.), Chapter 6, Section 5:
6.5.1
6.5.19
6.5.20
6.5.21
6.5.24
Weight = β0 + β1 Height
A. Schneider, G. Hommel, M. Blettner. Linear Regression Analysis. Dtsch Arztebl Int. 2010 November; 107(44): 776–782.
Example (continued)
For each observation (Heightj, Weightj) we have an equation: Weightj = β0 + β1 Heightj + εj
Least-squares regression
Each one of the observed data points (xj , yj ) gives an equation. All together
provide an equation system
Xβ = y
that is an overdetermined, linear equation system of the form Ax = b. The matrix
X is called the system matrix and it is related to the independent (predictor)
variables (the height in this case). The vector y is called the observation vector
and collects the values of the dependent (predicted) variable (the weight in this
case). The model
y = β0 + β1 x + ε
Example
Suppose we have observed the following values of height and weight: (1.70, 57), (1.53, 43), (1.90, 94). We construct the system matrix X = [1 1.70; 1 1.53; 1 1.90] and the observation vector y = (57, 43, 94). Now we solve the normal equations
Xβ = y ⇒ XᵀXβ = Xᵀy
XᵀX = [3.00 5.13; 5.13 8.84]    Xᵀy = (194.00, 341.29)    β̂ = (XᵀX)⁻¹Xᵀy = (−173.39, 139.21)
Weight = −173.39 + 139.21·Height
Example
MATLAB:
X=[1 1.70; 1 1.53; 1 1.90];
y=[57; 43; 94];
beta=inv(X'*X)*X'*y
x=1.5:0.01:2.00;
yp=beta(1)+beta(2)*x;
plot(x,yp,X(:,2),y,'o')   % data points plotted against the heights, X(:,2)
xlabel('Height (m)')
ylabel('Weight (kg)')
[Figure: fitted regression line and the three (height, weight) observations.]
Fitting a parabola
f0(x) = 1
f1(x) = x      ⇒
f2(x) = x²
y1 = β0 f0(x1) + β1 f1(x1) + β2 f2(x1) + ε1
y2 = β0 f0(x2) + β1 f1(x2) + β2 f2(x2) + ε2
...
yn = β0 f0(xn) + β1 f1(xn) + β2 f2(xn) + εn

[y1; y2; ...; yn] = [1 x1 x1²; 1 x2 x2²; ...; 1 xn xn²] [β0; β1; β2] + [ε1; ε2; ...; εn] ⇒ y = Xβ + ε
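A minimal MATLAB sketch of this polynomial fit; the data vectors x and y below are hypothetical, and the design matrix X has the columns 1, x and x² described above.

% Fit a parabola y = b0 + b1*x + b2*x^2 by least squares
x = [0.0; 0.5; 1.0; 1.5; 2.0];          % hypothetical predictor values
y = [1.1; 1.8; 3.2; 5.1; 7.9];          % hypothetical observations
X = [ones(size(x)), x, x.^2];           % columns f0 = 1, f1 = x, f2 = x^2
beta = X \ y;                           % least-squares estimate of (b0, b1, b2)
yfit = X * beta;                        % fitted parabola at the data points
plot(x, y, 'o', x, yfit, '-')
xlabel('x'), ylabel('y')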
Fitting a parabola
In this example they model the deformation of the wall of the zebra fish embryo as
a function of strain.
Z. Lua, P. C.Y. Chen, H. Luo, J. Nam, R. Ge, W. Lin. Models of maximum stress and strain of zebrafish embryos under indentation. J. Biomechanics 42
http://www.fhp.tu-darmstadt.de/nt/index.php?id=531&L=1 (Signal Processing Group, Technische Universität Darmstadt)
Exercises
From Lay (3rd ed.), Chapter 6, Section 6:
6.6.1
6.6.5
6.6.9
6.6.12 (computer)
Example
For instance, in Weighted Least Squares (WLS) we may use an inner product in R² defined as
⟨u, v⟩ = 4u1v1 + 5u2v2
Let's verify, for instance, properties 3 and 4 of an inner product.
3. ⟨c u, v⟩ = c ⟨u, v⟩:
⟨c u, v⟩ = 4cu1v1 + 5cu2v2 [by definition]
= c 4v1u1 + c 5v2u2 [commutativity of scalar multiplication]
= c(4v1u1 + 5v2u2) [distributivity of scalar multiplication]
= c ⟨u, v⟩ [by definition]
4. ⟨u, u⟩ ≥ 0 and ⟨u, u⟩ = 0 iff u = 0:
1) ⟨u, u⟩ ≥ 0:
⟨u, u⟩ = 4u1² + 5u2² [by definition]
which is obviously non-negative.
2) ⟨u, u⟩ = 0 iff u = 0:
⟨u, u⟩ = 0 ⇔ 4u1² + 5u2² = 0 ⇔ u1 = u2 = 0
Example
Consider two vectors p and q in the vector space of polynomials of degree at most n (Pn). Let t0, t1, ..., tn be n + 1 distinct real numbers and K any scalar. The inner product between p and q is defined as
⟨p, q⟩ = K (p(t0)q(t0) + p(t1)q(t1) + ... + p(tn)q(tn))
Example
Consider two vectors p and q in the vector space of polynomials of degree at most n (Pn). Assume that we regularly space the n + 1 points in the interval [−1, 1] and set K = ΔT; then the inner product between the two polynomials becomes
⟨p, q⟩ = (p(t0)q(t0) + p(t1)q(t1) + ... + p(tn)q(tn)) ΔT = Σ_{i=0}^{n} p(ti) q(ti) ΔT
d(u, v) = ‖u − v‖
Finally, two vectors u and v are said to be orthogonal iff
⟨u, v⟩ = 0
Example
In the vector space of polynomials in the interval [0, 1], P[0, 1], let’s define the
inner product
⟨p, q⟩ = ∫₀¹ p(t) q(t) dt
What is the length of the vector p(t) = 3t²?
Solution
‖p‖ = √⟨p, p⟩ = √(∫₀¹ p²(t) dt) = √(∫₀¹ (3t²)² dt) = √(∫₀¹ 9t⁴ dt)
= √(9 [t⁵/5]₀¹) = √(9 (1/5 − 0)) = 3/√5
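The same norm can be checked numerically with MATLAB's integral function (a sketch; the function handle p is my own notation).

% Length of p(t) = 3t^2 under <p,q> = integral from 0 to 1 of p(t)q(t)dt
p = @(t) 3*t.^2;
normP = sqrt(integral(@(t) p(t).^2, 0, 1))    % 3/sqrt(5) ~ 1.3416
3/sqrt(5)                                     % exact value, for comparison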
{1, t, t²} is a basis of P2[−1, 1]. Let's orthogonalize it with the inner product ⟨p, q⟩ = ∫₋₁¹ p(t) q(t) dt:
p0(t) = 1
p1(t) = t − (⟨t, p0⟩/‖p0‖²) p0(t) = t − (∫₋₁¹ t dt / ∫₋₁¹ 1 dt) · 1 = t − 0/2 = t
p2(t) = t² − (⟨t², p0⟩/‖p0‖²) p0(t) − (⟨t², p1⟩/‖p1‖²) p1(t)
= t² − (∫₋₁¹ t² dt / ∫₋₁¹ 1 dt) − (∫₋₁¹ t² · t dt / ∫₋₁¹ t² dt) · t = t² − (2/3)/2 − 0 · t = t² − 1/3
Example
What is the best approximation in P2 [−1, 1] of p(t) = t 3 ?
Solution
We know the answer is the orthogonal projection of p(t) onto P2 [−1, 1]. An
orthogonal basis of P2[−1, 1] is {1, t, t² − 1/3}. Therefore, this projection can be
calculated as
p̂(t) = ProjP2[−1,1]{p(t)} = (⟨p, p0⟩/‖p0‖²) p0(t) + (⟨p, p1⟩/‖p1‖²) p1(t) + (⟨p, p2⟩/‖p2‖²) p2(t)
= (3/5) t
[Figure: p(t) = t³ and its best approximation (3/5)t on [−1, 1].]
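The projection coefficients can also be computed numerically; the MATLAB sketch below (my own function handles) evaluates the inner products on [−1, 1] with integral and recovers p̂(t) = (3/5) t.

% Best approximation of p(t) = t^3 in P2[-1,1], orthogonal basis {1, t, t^2 - 1/3}
ip = @(f, g) integral(@(t) f(t).*g(t), -1, 1);   % inner product on [-1, 1]
p  = @(t) t.^3;
p0 = @(t) ones(size(t));  p1 = @(t) t;  p2 = @(t) t.^2 - 1/3;
c0 = ip(p, p0) / ip(p0, p0)      % 0
c1 = ip(p, p1) / ip(p1, p1)      % 3/5
c2 = ip(p, p2) / ip(p2, p2)      % 0
% phat(t) = c0*1 + c1*t + c2*(t^2 - 1/3) = (3/5) t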
Example
In this example we exploited the best approximation property of orthogonal wavelets to speed up and make more robust the angular alignment of projections in 3D Electron Microscopy.
C.O.S.Sorzano, S. Jonic, C. El-Bez, J.M. Carazo, S. De Carlo, P. Thévenaz, M. Unser. A multiresolution approach to orientation assignment in 3-D
electron microscopy of single particles. Journal of Structural Biology 146(3): 381-392 (2004, cover article)
But by the Pythagorean Theorem (Theorem 7.1) we have ‖ProjW{v}‖ ≤ ‖v‖. Consequently,
|⟨v, u⟩|/‖u‖ ≤ ‖v‖ ⇒ |⟨v, u⟩| ≤ ‖u‖ ‖v‖ (q.e.d.)
Exercises
From Lay (3rd ed.), Chapter 6, Section 7:
6.7.1
6.7.13
6.7.16
6.7.18
Let us collect all observed values into a vector y and do analogously with the
predictions ŷ. Let us define the diagonal matrix
W = [w1 0 0 ... 0; 0 w2 0 ... 0; 0 0 w3 ... 0; ...; 0 0 0 ... wn]
Then, the previous objective function becomes
Σ_{j=1}^{n} (wj yj − wj ŷj)² = ‖W y − W ŷ‖²
Now, suppose that ŷ is calculated from the columns of a matrix A, that is, ŷ = Ax. The objective function becomes
Σ_{j=1}^{n} (wj yj − wj ŷj)² = ‖W y − W A x‖²
The minimum of this objective function is attained for x̂ that is the least-squares
solution of the equation system
WAx = W y
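A MATLAB sketch of weighted least squares with hypothetical data and weights: build the diagonal matrix W and solve WAx = Wy in the least-squares sense with backslash.

% Weighted least squares: minimize ||W*y - W*A*x||^2
A = [1 1.70; 1 1.53; 1 1.90];           % hypothetical system matrix
y = [57; 43; 94];                       % hypothetical observations
w = [1; 2; 1];                          % hypothetical weights (trust the 2nd point more)
W = diag(w);
xhat = (W * A) \ (W * y)                % least-squares solution of (WA)x = Wy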
Example
In this work they used Weighted Least Squares to calibrate a digital system to
measure maximum respiratory pressures.
J.L. Ferreira, F.H. Vasconcelos, C.J. Tierra-Criollo. A Case Study of Applying Weighted Least Squares to Calibrate a Digital Maximum Respiratory
Theorem 8.1
Consider the vector space of continuous functions in the interval [0, 2π], C [0, 2π].
The set
S = {1, cos(t), sin(t), cos(2t), sin(2t), ..., cos(Nt), sin(Nt)}
is an orthogonal set with respect to the inner product ⟨f, g⟩ = ∫₀^{2π} f(t) g(t) dt.
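The orthogonality of this trigonometric set can be checked numerically. The sketch below (my own variable names) builds the Gram matrix of S for N = 2 with the inner product ⟨f, g⟩ = ∫₀^{2π} f(t) g(t) dt.

% Gram matrix of S = {1, cos t, sin t, cos 2t, sin 2t} on [0, 2*pi]
S = {@(t) ones(size(t)), @(t) cos(t), @(t) sin(t), @(t) cos(2*t), @(t) sin(2*t)};
G = zeros(numel(S));
for i = 1:numel(S)
    for j = 1:numel(S)
        G(i,j) = integral(@(t) S{i}(t).*S{j}(t), 0, 2*pi);
    end
end
disp(G)     % diagonal: 2*pi, pi, pi, pi, pi; off-diagonal entries ~ 0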
Example
In this work we used Fourier space to simulate and to align electron microscopy
images
S. Jonic, C.O.S.Sorzano, P. Thévenaz, C. El-Bez, S. De Carlo, M. Unser. Spline-Based image-to-volume registration for three-dimensional electron
Exercises
From Lay (3rd ed.), Chapter 6, Section 8:
6.8.1
6.8.6
6.8.8
6.8.11