PDE14

The document discusses the concept of orthogonality in the context of real-valued continuous functions and their inner products, particularly focusing on eigenfunctions of differential operators under various boundary conditions. It presents several theorems regarding the orthogonality of eigenfunctions corresponding to distinct eigenvalues, the reality of eigenvalues, and the conditions under which negative eigenvalues do not exist. Additionally, it introduces notions of convergence for infinite series and the least-square approximation for orthogonal sets of functions.

Lecture 14

March 7, 21

1 Orthogonality

If $f(x)$ and $g(x)$ are two real-valued continuous functions defined on an interval $a \le x \le b$, we define their inner product to be the integral of their product:
$$(f, g) \equiv \int_a^b f(x)\,g(x)\,dx.$$

We'll call $f(x)$ and $g(x)$ orthogonal if $(f, g) = 0$. No function is orthogonal to itself except $f(x) \equiv 0$. The key observation in each case discussed in Sec. 5.1 is that every eigenfunction is orthogonal to every other eigenfunction.
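This key observation can be checked numerically. The following is a minimal sketch, assuming the Dirichlet eigenfunctions $X_n(x) = \sin(n\pi x/l)$ on $(0, l)$ with $l = 1$ (the helper `inner` is an illustrative quadrature, not part of the notes):

```python
import numpy as np

# Sketch: the Dirichlet eigenfunctions X_n(x) = sin(n*pi*x/l) on (0, l)
# should be pairwise orthogonal under the inner product (f, g).
l = 1.0

def inner(f, g, a=0.0, b=l, m=200001):
    # (f, g) = integral_a^b f(x) g(x) dx, composite trapezoid rule
    x = np.linspace(a, b, m)
    y = f(x) * g(x)
    return float(np.sum(0.5 * (y[:-1] + y[1:]) * (x[1] - x[0])))

X1 = lambda x: np.sin(1 * np.pi * x / l)
X2 = lambda x: np.sin(2 * np.pi * x / l)

print(inner(X1, X2))  # ~ 0: eigenfunctions with distinct eigenvalues
print(inner(X1, X1))  # ~ l/2 = 0.5: nonzero, since only f = 0 is orthogonal to itself
```
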
We are studying the operator $A = -\frac{d^2}{dx^2}$ with some boundary conditions (either Dirichlet or Neumann or $\cdots$). Let $X_1(x)$ and $X_2(x)$ be two different eigenfunctions. Thus
$$-X_1'' = -\frac{d^2 X_1}{dx^2} = \lambda_1 X_1, \qquad -X_2'' = -\frac{d^2 X_2}{dx^2} = \lambda_2 X_2,$$
where both functions satisfy the boundary conditions. Let's assume that $\lambda_1 \neq \lambda_2$. Integrating by parts twice, we get
$$\int_a^b (-X_1'' X_2 + X_1 X_2'')\,dx = \left.\left(-X_1' X_2 + X_1 X_2'\right)\right|_a^b. \tag{1}$$

This is sometimes called Green's second identity.


Case 1: Dirichlet. Both functions vanish at both ends: $X_1(a) = X_1(b) = X_2(a) = X_2(b) = 0$. So the right side of (1) is zero.

Case 2: Neumann. The first derivatives vanish at both ends: $X_1'(a) = X_1'(b) = X_2'(a) = X_2'(b) = 0$. The right side of (1) is once again zero.

Case 3: Periodic. $X_j(a) = X_j(b)$ and $X_j'(a) = X_j'(b)$ for both $j = 1, 2$. Again the right side of (1) is zero.

Case 4: Robin. $X_j'(a) = cX_j(a)$ and $X_j'(b) = cX_j(b)$ for both $j = 1, 2$. Then the contribution of each endpoint to the right side of (1) vanishes:
$$-X_1'(b)X_2(b) + X_1(b)X_2'(b) = -cX_1(b)X_2(b) + cX_1(b)X_2(b) = 0,$$
$$-X_1'(a)X_2(a) + X_1(a)X_2'(a) = -cX_1(a)X_2(a) + cX_1(a)X_2(a) = 0.$$

On the other hand,
$$\int_a^b (-X_1'' X_2 + X_1 X_2'')\,dx = \int_a^b (\lambda_1 X_1 X_2 - \lambda_2 X_1 X_2)\,dx. \tag{2}$$
Combining Equations (1) and (2), in all four cases above we get
$$(\lambda_1 - \lambda_2) \int_a^b X_1 X_2\,dx = 0.$$

Therefore, $X_1$ and $X_2$ are orthogonal if $\lambda_1 \neq \lambda_2$.


The right side of (1) is not always zero. For example, take the boundary conditions $X(a) = X(b)$, $X'(a) = 2X'(b)$. Then the right side of (1) reduces to $X_1'(b)X_2(b) - X_1(b)X_2'(b)$, which is not zero in general.
Consider any pair of boundary conditions
$$\alpha_1 X(a) + \beta_1 X(b) + \gamma_1 X'(a) + \delta_1 X'(b) = 0,$$
$$\alpha_2 X(a) + \beta_2 X(b) + \gamma_2 X'(a) + \delta_2 X'(b) = 0, \tag{3}$$
involving eight real constants. Such a set of boundary conditions is called symmetric if
$$\left.\left(f'(x)g(x) - f(x)g'(x)\right)\right|_{x=a}^{x=b} = 0$$
for any pair of functions $f(x)$ and $g(x)$ both of which satisfy the pair of boundary conditions (3).
Theorem 1. If you have symmetric boundary conditions, then any two eigenfunctions that correspond to distinct eigenvalues are orthogonal. Therefore, if any function is expanded in a series of these eigenfunctions, the coefficients are uniquely determined.

Proof. The first part is immediate from the argument above. For the second part, let $X_n(x)$ denote the orthogonal eigenfunctions with eigenvalues $\lambda_n$, and suppose that $\varphi$ has the convergent series expansion
$$\varphi(x) = \sum_n A_n X_n(x).$$
Then
$$(\varphi, X_m) = \Big(\sum_n A_n X_n, X_m\Big) = \sum_n A_n (X_n, X_m) = A_m (X_m, X_m).$$
So we have the formula for the coefficients $A_m$:
$$A_m = \frac{(\varphi, X_m)}{(X_m, X_m)}.$$
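The coefficient formula can be tried out numerically. Below is a sketch assuming the Dirichlet eigenfunctions $X_m(x) = \sin(m\pi x)$ on $(0,1)$ and the illustrative test function $\varphi(x) = x(1-x)$ (both choices are examples, not taken from the notes):

```python
import numpy as np

# Sketch of A_m = (phi, X_m) / (X_m, X_m) for the assumed example
# phi(x) = x(1 - x) expanded in X_m(x) = sin(m*pi*x) on (0, 1).
x = np.linspace(0.0, 1.0, 100001)
h = x[1] - x[0]

def inner(u, v):
    # trapezoid-rule inner product of two sampled functions
    y = u * v
    return float(np.sum(0.5 * (y[:-1] + y[1:]) * h))

phi = x * (1.0 - x)
Xs = [np.sin(m * np.pi * x) for m in range(1, 11)]
A = [inner(phi, Xm) / inner(Xm, Xm) for Xm in Xs]

# The partial sum of the eigenfunction series approximates phi
partial = sum(a * Xm for a, Xm in zip(A, Xs))
err = float(np.max(np.abs(phi - partial)))
print(A[0], err)  # A_1 = 8/pi^3 for this phi; err is small
```

For this particular $\varphi$ the exact coefficients are $A_m = 8/(m\pi)^3$ for odd $m$ and $0$ for even $m$, so the computed values can be checked directly.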

Remark 2. We have so far avoided all questions of convergence.


Remark 3. If two eigenfunctions, say $X_1(x)$ and $X_2(x)$, share the same eigenvalue $\lambda_1 = \lambda_2$, they do not have to be orthogonal. For example, in the case of periodic boundary conditions on $(-l, l)$, the functions $\sin(\frac{n\pi x}{l})$ and $\cos(\frac{n\pi x}{l}) + \sin(\frac{n\pi x}{l})$ are eigenfunctions of the operator $A$ with the same eigenvalue $\lambda = (\frac{n\pi}{l})^2$. They are not orthogonal, but they can be made so by the Gram–Schmidt orthogonalization procedure. The two eigenfunctions $\sin(\frac{n\pi x}{l})$ and $\cos(\frac{n\pi x}{l})$ are orthogonal on $(-l, l)$.
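The Gram–Schmidt step for this pair amounts to subtracting from the second function its component along the first. A minimal sketch, assuming $l = 1$ and $n = 3$ (arbitrary illustrative values):

```python
import numpy as np

# Sketch of Gram-Schmidt on the two non-orthogonal eigenfunctions from the
# remark: f1 = sin(n*pi*x/l) and f2 = cos(n*pi*x/l) + sin(n*pi*x/l) on (-l, l).
l, n = 1.0, 3
x = np.linspace(-l, l, 200001)
h = x[1] - x[0]

def inner(u, v):
    y = u * v
    return float(np.sum(0.5 * (y[:-1] + y[1:]) * h))

f1 = np.sin(n * np.pi * x / l)
f2 = np.cos(n * np.pi * x / l) + np.sin(n * np.pi * x / l)

print(inner(f1, f2))  # = (f1, f1) = l: nonzero, so not orthogonal

# Subtract from f2 its projection onto f1
g2 = f2 - (inner(f2, f1) / inner(f1, f1)) * f1
print(inner(f1, g2))  # ~ 0: orthogonal; g2 is just cos(n*pi*x/l)
```
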


If $f(x)$ and $g(x)$ are two complex-valued functions, we define the inner product on $(a, b)$ as
$$(f, g) = \int_a^b f(x)\,\overline{g(x)}\,dx.$$
The bar denotes the complex conjugate. The two functions are called orthogonal if $(f, g) = 0$.
Now suppose that you have the boundary conditions (3) with eight real constants. They are called symmetric (or hermitian) if
$$\left.\left(f'(x)\overline{g(x)} - f(x)\,\overline{g'(x)}\right)\right|_a^b = 0$$
for all $f$, $g$ satisfying the BCs. Note that if a set of boundary conditions is symmetric in the real sense, then it is also symmetric in the complex sense.
Theorem 4. Under the same conditions as Theorem 1, all the eigenvalues are real numbers. Furthermore, all the eigenfunctions can be chosen to be real valued.

Proof. If $-X'' = \lambda X$, then $-\overline{X}'' = \overline{\lambda}\,\overline{X}$, and $\overline{X}$ satisfies the same BCs. Now use Green's second identity with the functions $X$ and $\overline{X}$. Thus
$$(\lambda - \overline{\lambda}) \int_a^b X\overline{X}\,dx = \int_a^b \left(-X''\overline{X} + X\overline{X}''\right)dx = \left.\left(-X'\overline{X} + X\overline{X}'\right)\right|_a^b = 0.$$
But $X\overline{X} = |X|^2 \geq 0$, and $X(x)$ is not allowed to be the zero function, so the integral cannot vanish. Therefore $\lambda - \overline{\lambda} = 0$, which means exactly that $\lambda$ is real.

Now suppose the eigenfunction $X(x)$ is complex; write it as $X(x) = Y(x) + iZ(x)$, where $Y(x)$ and $Z(x)$ are real. Then $-Y'' - iZ'' = \lambda Y + i\lambda Z$.

Since $\lambda$ is real, matching real and imaginary parts gives $-Y'' = \lambda Y$ and $-Z'' = \lambda Z$. The boundary conditions still hold for both $Y$ and $Z$. It is easy to see that $\overline{X}(x)$ is also an eigenfunction. So any linear combination of $X(x)$ and $\overline{X}(x)$ can be replaced by a linear combination of $Y(x)$ and $Z(x)$. Thus we can replace the set of complex eigenfunctions $X(x)$ and $\overline{X}(x)$ by the set of corresponding real eigenfunctions $Y$ and $Z$.
Theorem 5. Assume the same conditions as in Theorem 1. If
$$\left. f(x) f'(x) \right|_a^b \leq 0$$
for all (real-valued) functions $f(x)$ satisfying the BCs, then there is no negative eigenvalue.

Proof. Suppose there were a negative eigenvalue $\gamma < 0$ with an eigenfunction $X(x)$, so that
$$-X''(x) = \gamma X(x).$$
Then, multiplying by $X$ and integrating by parts, we would have
$$0 > \int_a^b \gamma X^2(x)\,dx = -\int_a^b X''(x) X(x)\,dx = \left. -X'(x)X(x) \right|_a^b + \int_a^b (X')^2\,dx \geq 0.$$
This is a contradiction. Thus there is no negative eigenvalue.


From now on, we consider the one-dimensional eigenvalue problem
$$X'' + \lambda X = 0$$
in $(a, b)$ with any symmetric BC.


Theorem 6. There are an infinite number of eigenvalues. They form a sequence $\lambda_n \to +\infty$. Moreover, we may list the eigenvalues as
$$\lambda_1 \leq \lambda_2 \leq \lambda_3 \leq \cdots \to +\infty$$
with the corresponding eigenfunctions
$$X_1, X_2, X_3, \cdots,$$
which are pairwise orthogonal.
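Theorem 6 (and the positivity statement of Theorem 5) can be illustrated numerically. The sketch below, assuming Dirichlet BCs on $(0,1)$, discretizes $A = -\frac{d^2}{dx^2}$ by standard second-order finite differences; the exact eigenvalues in that case are $\lambda_n = (n\pi)^2$. This is an illustration, not the proof of the theorem:

```python
import numpy as np

# Sketch: discretize -d^2/dx^2 on (0, 1) with Dirichlet BCs and check that
# the eigenvalues are positive, increasing, and approach (n*pi)^2.
m = 500                        # number of interior grid points (illustrative)
h = 1.0 / (m + 1)
# Tridiagonal matrix for -X'' with X(0) = X(1) = 0
A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
lam = np.linalg.eigvalsh(A)    # eigenvalues sorted in ascending order

for n in range(1, 6):
    print(lam[n - 1], (n * np.pi) ** 2)   # numerical vs exact
```
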


So for any function $f(x)$ on $(a, b)$, its Fourier coefficients are defined as
$$A_n = \frac{(f, X_n)}{(X_n, X_n)} = \frac{\int_a^b f(x) X_n(x)\,dx}{\int_a^b |X_n(x)|^2\,dx}.$$
Its Fourier series is the series
$$\sum_n A_n X_n(x).$$
2 Three notions of convergence.


Definition 7. We say that an infinite series $\sum_{n=1}^{\infty} f_n(x)$ converges to $f(x)$ pointwise in $(a, b)$ if for each $a < x < b$,
$$\Big| f(x) - \sum_{n=1}^{N} f_n(x) \Big| \to 0 \quad \text{as } N \to \infty.$$

Definition 8. The series converges uniformly to $f(x)$ in $[a, b]$ if
$$\max_{a \le x \le b} \Big| f(x) - \sum_{n=1}^{N} f_n(x) \Big| \to 0 \quad \text{as } N \to \infty.$$

Definition 9. We say the series converges in the mean-square (or $L^2$) sense to $f(x)$ in $(a, b)$ if
$$\int_a^b \Big| f(x) - \sum_{n=1}^{N} f_n(x) \Big|^2 dx \to 0 \quad \text{as } N \to \infty.$$

Example 10. Let $f_n(x) = (1 - x)x^{n-1}$ on the interval $(0, 1)$. Then the partial sums telescope:
$$\sum_{n=1}^{N} f_n(x) = 1 - x^N \to 1 \quad \text{as } N \to \infty,$$
because $x < 1$. So $\sum_{n=1}^{\infty} f_n(x)$ converges pointwise to the function $f \equiv 1$. But the convergence is not uniform, because
$$\max_{0 \le x \le 1} \Big| 1 - \sum_{n=1}^{N} f_n(x) \Big| = \max_{0 \le x \le 1} x^N = 1 \quad \text{for every } N.$$
However, it does converge in the $L^2$ sense:
$$\int_0^1 \Big| 1 - \sum_{n=1}^{N} f_n(x) \Big|^2 dx = \int_0^1 |x^N|^2\,dx = \frac{1}{2N+1} \to 0 \quad \text{as } N \to \infty.$$
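The three quantities in Example 10 can be computed side by side. A minimal sketch (the grid size is an arbitrary illustrative choice):

```python
import numpy as np

# Sketch of Example 10: partial sums 1 - x^N of f_n(x) = (1 - x) x^(n-1).
# The max error stays at 1 (no uniform convergence) while the L2 error
# 1/(2N+1) tends to 0.
x = np.linspace(0.0, 1.0, 100001)
h = x[1] - x[0]

results = {}
for N in (10, 100, 1000):
    partial = 1.0 - x**N              # telescoping partial sum
    err = np.abs(1.0 - partial)       # pointwise error = x^N
    max_err = float(err.max())        # sup over [0, 1] is always 1
    l2_err = float(np.sum(0.5 * (err[:-1]**2 + err[1:]**2) * h))
    results[N] = (max_err, l2_err)    # l2_err ~ 1/(2N+1)
    print(N, max_err, l2_err)
```
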

Exercise 11. Let $f_n(x) = \frac{n}{1+n^2x^2} - \frac{n-1}{1+(n-1)^2x^2}$ in the interval $0 < x < 1$. Prove that

a) $\sum_{n=1}^{\infty} f_n(x)$ converges pointwise to $f \equiv 0$;

b) $\sum_{n=1}^{\infty} f_n(x)$ does not converge in the mean-square sense to $f \equiv 0$;

c) $\sum_{n=1}^{\infty} f_n(x)$ does not converge uniformly to $f \equiv 0$.
Theorem 12 (Least-Square Approximation). Let $\{X_n\}$ be any orthogonal set of functions, and let $\int_a^b |f|^2\,dx < \infty$. Let $N$ be a fixed positive integer. Among all possible choices of $N$ constants $c_1, c_2, \cdots, c_N$, the choice that minimizes
$$\int_a^b \Big| f - \sum_{n=1}^{N} c_n X_n \Big|^2 dx$$
is $c_n = \frac{(f, X_n)}{(X_n, X_n)}$ for $n = 1, 2, \cdots, N$.
Proof. Denote
$$E_N(c_1, \cdots, c_N) = \int_a^b \Big| f - \sum_{n=1}^{N} c_n X_n \Big|^2 dx \geq 0. \tag{4}$$
Expanding the square and using the orthogonality of the $X_n$ (which kills the cross terms $m \neq n$), we have
$$E_N(c_1, \cdots, c_N) = \int_a^b |f(x)|^2 dx - 2\sum_{n \le N} c_n \int_a^b f(x)X_n(x)\,dx + \sum_{n \le N}\sum_{m \le N} c_n c_m \int_a^b X_n(x)X_m(x)\,dx$$
$$= (f, f) - 2\sum_{n \le N} c_n (f, X_n) + \sum_{n \le N} c_n^2 (X_n, X_n)$$
$$= \sum_{n \le N} \|X_n\|^2 \Big[ c_n - \frac{(f, X_n)}{(X_n, X_n)} \Big]^2 - \sum_{n \le N} \frac{(f, X_n)^2}{(X_n, X_n)} + (f, f). \tag{5}$$
So the minimum of $E_N$ is attained at $c_n = \frac{(f, X_n)}{(X_n, X_n)}$ for $n = 1, 2, \cdots, N$.

Denoting $A_n = \frac{(f, X_n)}{(X_n, X_n)}$, we obtain from Equations (4) and (5) the inequality
$$(f, f) \geq \sum_{n \le N} \frac{(f, X_n)^2}{(X_n, X_n)} = \sum_{n \le N} A_n^2 (X_n, X_n).$$
This is known as Bessel's inequality. It is valid as long as the integral of $|f|^2$ is finite.
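Both claims of this section can be spot-checked numerically. A sketch assuming the orthogonal set $X_n = \sin(n\pi x)$ on $(0,1)$ and the illustrative choice $f(x) = x$: the optimal coefficients beat a perturbed choice, and the Bessel sum stays below $(f, f)$:

```python
import numpy as np

# Sketch: least-square property and Bessel's inequality for the assumed
# example f(x) = x with X_n = sin(n*pi*x) on (0, 1), N = 5.
x = np.linspace(0.0, 1.0, 100001)
h = x[1] - x[0]

def inner(u, v):
    y = u * v
    return float(np.sum(0.5 * (y[:-1] + y[1:]) * h))

f = x.copy()
Xs = [np.sin(n * np.pi * x) for n in range(1, 6)]
A = [inner(f, Xn) / inner(Xn, Xn) for Xn in Xs]

def E(coeffs):
    # E_N from the proof: squared L2 distance from f to the combination
    g = sum(c * Xn for c, Xn in zip(coeffs, Xs))
    return inner(f - g, f - g)

best = E(A)
perturbed = E([a + 0.1 for a in A])
print(best, perturbed)          # best < perturbed: A minimizes E_N

bessel_sum = sum(a * a * inner(Xn, Xn) for a, Xn in zip(A, Xs))
print(inner(f, f), bessel_sum)  # (f, f) >= sum A_n^2 (X_n, X_n)
```

By Equation (5), the minimum value itself equals $(f,f) - \sum_{n \le N} A_n^2 (X_n, X_n)$, which the computed numbers confirm.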
