
Formulae in Mathematical Physics

Basant Rang Ranjan

December 2024
Contents

0.1  Complex Analysis
0.2  Differential Equation
0.3  Fourier Series
0.4  Laplace Transform
0.5  Fourier Transform
0.6  Gamma and Beta Functions
0.7  Dirac Delta Function
0.8  Legendre Differential Equation
0.9  Bessel Differential Equation
0.10 Hermite Differential Equation
0.11 Laguerre Differential Equation
0.12 Confluent Hypergeometric Function
0.13 Green's Function
0.14 Numerical Analysis
0.15 Group Theory
0.16 Tensor Analysis
0.17 Probability Theory


0.1 Complex Analysis


Conditions for an Analytic Function
1. Cauchy–Riemann (CR) Conditions (necessary conditions for analyticity):
   ∂u/∂x = ∂v/∂y   and   ∂u/∂y = −∂v/∂x.

2. Sufficient Conditions: f = u + iv is analytic if the CR equations hold and the partial derivatives ∂u/∂x, ∂u/∂y, ∂v/∂x, ∂v/∂y are continuous.

3. Polar Form of the CR Equations:
   ∂u/∂r = (1/r) ∂v/∂θ,   ∂v/∂r = −(1/r) ∂u/∂θ.

4. Derivative of a Complex Function:
   f′(z) = ∂u/∂x + i ∂v/∂x = ∂v/∂y − i ∂u/∂y.

5. Polar Form of the Derivative:
   dw/dz = (cos θ − i sin θ) ∂w/∂r = −(i/r)(cos θ − i sin θ) ∂w/∂θ.

6. Harmonic Functions: u and v satisfy Laplace's equation,
   ∂²u/∂x² + ∂²u/∂y² = 0,   ∂²v/∂x² + ∂²v/∂y² = 0.

Milne-Thomson Method
1. If u is Given:
   ∂u/∂x = ϕ₁(x, y),   ∂u/∂y = ϕ₂(x, y).
Replacing x with z and y with 0, we get
   f(z) = ∫ [ϕ₁(z, 0) − i ϕ₂(z, 0)] dz + C,
where C is the constant of integration.

Example. If u = x² − y², find a corresponding analytic function.
Solution.
   ∂u/∂x = 2x = ϕ₁(x, y),   ∂u/∂y = −2y = ϕ₂(x, y).
On replacing x by z and y by 0, we have
   ϕ₁(z, 0) = 2z   and   ϕ₂(z, 0) = 0,
so
   f(z) = ∫ [ϕ₁(z, 0) − i ϕ₂(z, 0)] dz + C = ∫ [2z − i(0)] dz + C = ∫ 2z dz + C = z² + C.
This is the required analytic function.

2. If v is Given:
   ∂v/∂x = ϕ₂(x, y),   ∂v/∂y = ϕ₁(x, y).
Replacing x with z and y with 0, we get
   f(z) = ∫ [ϕ₁(z, 0) + i ϕ₂(z, 0)] dz + C,
where C is the constant of integration.

Example. Find the analytic function f(z) = u + iv, given that v = eˣ(x sin y + y cos y).
Solution.
   ∂v/∂x = eˣ(x sin y + y cos y) + eˣ sin y = ψ₂(x, y)   ⟹   ψ₂(z, 0) = 0,
   ∂v/∂y = eˣ(x cos y + cos y − y sin y) = ψ₁(x, y)   ⟹   ψ₁(z, 0) = zeᶻ + eᶻ.
Hence
   f(z) = ∫ [ψ₁(z, 0) + i ψ₂(z, 0)] dz + C = ∫ [eᶻ(z + 1) + i(0)] dz + C
        = (z + 1)eᶻ − ∫ eᶻ dz + C = (z + 1)eᶻ − eᶻ + C = zeᶻ + C,
which is the required function.

3. If u ± v is Given:
   f(z) = u + iv   and   i f(z) = iu − v.
Adding these two equations, we obtain
   (1 + i) f(z) = (u − v) + i(u + v).
Let F(z) = (1 + i) f(z), where
   F(z) = U + iV,   U = u − v,   V = u + v,
and apply the Milne–Thomson method to F(z).

Method to Find u or v
1. To find v if u is given:
   dv = (∂v/∂x) dx + (∂v/∂y) dy.
After this, use the Cauchy–Riemann equations to replace ∂v/∂x and ∂v/∂y and integrate.

Example. Let f(z) = u(x, y) + iv(x, y) be an analytic function. If u = 3x − 2xy, find v and express f(z) in terms of z.
Solution. Here
   u = 3x − 2xy,   ∂u/∂x = 3 − 2y,   ∂u/∂y = −2x.
By total differentiation and the CR equations,
   dv = (∂v/∂x) dx + (∂v/∂y) dy = (−∂u/∂y) dx + (∂u/∂x) dy = 2x dx + (3 − 2y) dy,
   v = ∫ 2x dx + ∫ (3 − 2y) dy = x² + 3y − y² + c.
Therefore
   f(z) = u + iv = (3x − 2xy) + i(x² + 3y − y² + c)
        = i(x² − y² + 2ixy) + 3(x + iy) + ic
        = i(x + iy)² + 3(x + iy) + ic = iz² + 3z + ic,
which is the required expression of f(z) in terms of z.

2. To find u if v is given:
   du = (∂u/∂x) dx + (∂u/∂y) dy.
After this, use the Cauchy–Riemann equations and integrate, as above.

Contour Integral Formula


1. ∮_C f(z) dz = ∫_C (u + iv)(dx + i dy) = ∫_C (u dx − v dy) + i ∫_C (v dx + u dy).

2. Cauchy's Integral Theorem: If the function f(z) is analytic and its derivative f′(z) is continuous at all points inside and on a simple closed curve C, then the contour integral of f(z) over C is zero:
   ∮_C f(z) dz = 0.

3. Cauchy's Integral Formula: If f(z) is analytic within and on a closed curve C, and a is any point within C, then
   f(a) = (1/2πi) ∮_C f(z)/(z − a) dz.

4. Cauchy's Integral Formula for the Derivatives of an Analytic Function:
   f⁽ⁿ⁾(a) = (n!/2πi) ∮_C f(z)/(z − a)ⁿ⁺¹ dz.

5. Cauchy's Inequality: If f(z) is analytic within and on a circle C given by |z − a| = R, and |f(z)| ≤ M on C, then
   |f⁽ⁿ⁾(a)| ≤ M n!/Rⁿ.
6. Liouville Theorem If a function f (z) is analytic for all finite values of z and is
bounded, then it is a constant.

Taylor’s and Laurent’s Formula


1. Taylor's Theorem: If a function f(z) is analytic at all points inside a circle C with centre a and radius R, then at each point z inside C,
   f(z) = f(a) + f′(a)(z − a) + f″(a)(z − a)²/2! + ⋯ + f⁽ⁿ⁾(a)(z − a)ⁿ/n! + ⋯

2. Laurent's Theorem: If f(z) is analytic on the two concentric circles C₁ and C₂ of radii r₁ and r₂ (r₁ < r₂) with centre at a, and throughout the annular region R bounded by them, then for all z in R,
   f(z) = a₀ + a₁(z − a) + a₂(z − a)² + ⋯ + b₁/(z − a) + b₂/(z − a)² + ⋯,
where
   aₙ = (1/2πi) ∮_{C₁} f(w)/(w − a)ⁿ⁺¹ dw,   bₙ = (1/2πi) ∮_{C₂} f(w)/(w − a)⁻ⁿ⁺¹ dw.

Residue
1. Residue at a Simple Pole: If f(z) has a simple pole at z = a, then
   Res f(a) = lim_{z→a} (z − a) f(z).

2. If f(z) is of the form f(z) = ϕ(z)/ψ(z), where ψ(a) = 0 but ϕ(a) ≠ 0, then
   Res f(a) = ϕ(a)/ψ′(a).

3. Residue at a Pole of Order n: If f(z) has a pole of order n at z = a, then
   Res(f, a) = (1/(n − 1)!) dⁿ⁻¹/dzⁿ⁻¹ [(z − a)ⁿ f(z)]  evaluated at z = a.

4. Residue at a Pole z = a of Any Order (Simple or of Order n): The residue of f(z) at z = a is the coefficient of 1/t, where t = z − a, i.e. the coefficient of 1/(z − a) in the Laurent series expansion of f(z) around z = a.

5. Residue of f(z) at z = ∞: The residue of f(z) at z = ∞ is given by
   Res(f, ∞) = lim_{z→∞} (−z f(z)),
or equivalently,
   Res(f, ∞) = −(1/2πi) ∮_C f(z) dz,
where C is a large contour enclosing all the finite singularities of f(z).
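As a quick cross-check of these residue formulas (not part of the original notes), the sketch below uses SymPy's residue function; it assumes SymPy is installed, and the example functions 1/(z² + 1) and 1/[(z − 1)²(z + 1)] are chosen purely for illustration.

# A minimal check of the residue formulas using SymPy (assumes sympy is installed).
from sympy import symbols, residue, I, simplify

z = symbols('z')

# f(z) = 1/(z**2 + 1) has simple poles at z = i and z = -i.
f = 1 / (z**2 + 1)
print(residue(f, z, I))      # rule 1/2: expected 1/(2i) = -i/2
print(simplify(1 / (2*I)))   # same value from the phi(a)/psi'(a) rule

# Rule 3: double pole of g(z) = 1/((z - 1)**2 (z + 1)) at z = 1.
g = 1 / ((z - 1)**2 * (z + 1))
print(residue(g, z, 1))      # expected -1/4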

Definitions
1. Zero of Analytic Function A zero of an analytic function f (z) is the value of z
for which f (z) = 0.
Example. Find the zeros and discuss the nature of the singularities of
   f(z) = ((z − 2)/z²) sin(1/(z − 1)).
Solution. Poles of f(z) are given by equating the denominator of f(z) to zero, i.e. z = 0 is a pole of order two. Zeros of f(z) are given by equating the numerator of f(z) to zero, i.e.
   (z − 2) sin(1/(z − 1)) = 0
   ⇒ either z − 2 = 0 or sin(1/(z − 1)) = 0
   ⇒ z = 2 or 1/(z − 1) = nπ
   ⇒ z = 2,   z = 1/(nπ) + 1,   n = ±1, ±2, …
Thus, z = 2 is a simple zero. The limit point of the zeros z = 1/(nπ) + 1 (n = ±1, ±2, …) is z = 1.
Hence, z = 1 is an isolated essential singularity.

2. Singular Point: A point at which a function f(z) is not analytic is known as a singular point or singularity of the function. For example, the function 1/(z − 2) has a singular point at z − 2 = 0, i.e. z = 2.

Isolated Singular Point: If z = a is a singularity of f(z) and there is no other singularity within a small circle surrounding the point z = a, then z = a is said to be an isolated singularity of the function f(z); otherwise it is called non-isolated. For example, the function 1/[(z − 1)(z − 3)] has two isolated singular points, namely z = 1 and z = 3 [(z − 1)(z − 3) = 0 gives z = 1, 3].

Example of a non-isolated singularity: The function 1/sin(π/z) is not analytic at the points where
   sin(π/z) = 0, i.e. at the points π/z = nπ, i.e. the points z = 1/n (n = 1, 2, 3, …).
Thus z = 1, 1/2, 1/3, …, and z = 0 are points of singularity. The point z = 0 is a non-isolated singularity of 1/sin(π/z), because every neighbourhood of z = 0 contains infinitely many other singularities z = 1/n with n large.

Pole of order m: Let a function f(z) have an isolated singular point z = a. Then f(z) can be expanded in a Laurent series around z = a:
   f(z) = a₀ + a₁(z − a) + a₂(z − a)² + ⋯ + b₁/(z − a) + b₂/(z − a)² + ⋯ + bₘ/(z − a)ᵐ + bₘ₊₁/(z − a)ᵐ⁺¹ + bₘ₊₂/(z − a)ᵐ⁺² + ⋯   (1)
In some cases it may happen that the coefficients bₘ₊₁ = bₘ₊₂ = bₘ₊₃ = ⋯ = 0; then (1) reduces to
   f(z) = a₀ + a₁(z − a) + a₂(z − a)² + ⋯ + b₁/(z − a) + b₂/(z − a)² + ⋯ + bₘ/(z − a)ᵐ
        = a₀ + a₁(z − a) + a₂(z − a)² + ⋯ + [b₁(z − a)ᵐ⁻¹ + b₂(z − a)ᵐ⁻² + b₃(z − a)ᵐ⁻³ + ⋯ + bₘ]/(z − a)ᵐ,
and z = a is said to be a pole of order m of the function f(z). When m = 1 the pole is said to be a simple pole; in this case
   f(z) = a₀ + a₁(z − a) + a₂(z − a)² + ⋯ + b₁/(z − a).
If the number of terms with negative powers in expansion (1) is infinite, then z = a is called an essential singular point of f(z).
Example. Find the singularity (or singularities) of the functions:
   (i) f(z) = sin(1/z)      (ii) g(z) = e^{1/z}/z²
Solution. (i) We know that
   sin(1/z) = 1/z − 1/(3! z³) + 1/(5! z⁵) − ⋯ + (−1)ⁿ/((2n + 1)! z²ⁿ⁺¹) + ⋯
The expansion contains infinitely many terms with negative powers of z, and sin(1/z) is not analytic at z = 0 (since 1/z → ∞ as z → 0). Hence sin(1/z) has an (isolated essential) singularity at z = 0.
(ii) Here g(z) = e^{1/z}/z². We know that
   e^{1/z}/z² = (1/z²)[1 + 1/z + 1/(2! z²) + 1/(3! z³) + ⋯ + 1/(n! zⁿ) + ⋯]
              = 1/z² + 1/z³ + 1/(2! z⁴) + 1/(3! z⁵) + ⋯ + 1/(n! zⁿ⁺²) + ⋯
Here g(z) has an infinite number of terms with negative powers of z. Hence g(z) has an essential singularity at z = 0.

Residue Theorem
Cauchy’s Residue Theorem: If f (z) is analytic inside and on a closed curve C, except
for a finite number of poles within C, then the contour integral around C is given by
   ∮_C f(z) dz = 2πi × (sum of the residues at the poles within C).

1. Integration Round the Unit Circle, of the Type
   ∫₀^{2π} f(cos θ, sin θ) dθ,
where f(cos θ, sin θ) is a rational function of cos θ and sin θ.

To convert this integral to a contour integral, we consider the unit circle |z| = 1 in the complex plane. Let
   z = re^{iθ} = 1·e^{iθ} = e^{iθ}.
Then
   cos θ = (e^{iθ} + e^{−iθ})/2 = (z + 1/z)/2,   sin θ = (e^{iθ} − e^{−iθ})/(2i) = (z − 1/z)/(2i).
Also, we know that
   dz = i e^{iθ} dθ = iz dθ,   or equivalently   dθ = dz/(iz).
Thus, the integral becomes
   ∫₀^{2π} f(cos θ, sin θ) dθ = ∮_C f( (z + 1/z)/2 , (z − 1/z)/(2i) ) dz/(iz).

2. Evaluation of ∫_{−∞}^{+∞} f₁(x)/f₂(x) dx, where f₁(x) and f₂(x) are polynomials in x.
Such integrals can be reduced to contour integrals if
   (i) f₂(x) has no real roots, and
   (ii) the degree of f₂(x) is greater than that of f₁(x) by at least two.
Procedure: Let f(x) = f₁(x)/f₂(x) and consider ∮_C f(z) dz, where C is the closed curve consisting of the upper semicircle C_R of the circle |z| = R together with the part of the real axis from −R to R.

[Figure: the closed contour C — the semicircular arc C_R in the upper half-plane, closed by the real-axis segment from −R to R.]

If there are no poles of f(z) on the real axis, the radius R (which is arbitrary) can be taken so large that there is no singularity on the arc C_R, while the poles of f(z) in the upper half-plane lie inside the contour C. Using Cauchy's residue theorem,
   ∮_C f(z) dz = 2πi × (sum of the residues of f(z) at the poles within C),
i.e.
   ∫_{−R}^{R} f(x) dx + ∫_{C_R} f(z) dz = 2πi × (sum of residues within C),
   ∫_{−R}^{R} f(x) dx = −∫_{C_R} f(z) dz + 2πi × (sum of residues within C),
   lim_{R→∞} ∫_{−R}^{R} f(x) dx = −lim_{R→∞} ∫_{C_R} f(z) dz + 2πi × (sum of residues within C).   (1)
Now
   lim_{R→∞} ∫_{C_R} f(z) dz = lim_{R→∞} ∫₀^{π} f(Re^{iθ}) R i e^{iθ} dθ = 0,
so (1) reduces to
   ∫_{−∞}^{∞} f(x) dx = 2πi × (sum of residues within C).
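For example, ∫_{−∞}^{∞} dx/(1 + x²)² satisfies both conditions above and has a single double pole at z = i in the upper half-plane, with residue −i/4, so the integral equals 2πi(−i/4) = π/2. A minimal numerical cross-check (not from the original notes; it assumes SciPy and NumPy are available) is sketched below.

# Numerical cross-check of a residue-theorem result (assumes scipy and numpy are installed).
import numpy as np
from scipy.integrate import quad

# Integral over the real line of 1/(1 + x^2)^2.
value, err = quad(lambda x: 1.0 / (1.0 + x**2) ** 2, -np.inf, np.inf)

# Residue calculation: double pole at z = i with residue -i/4,
# so the integral equals 2*pi*i*(-i/4) = pi/2.
print(value, np.pi / 2)   # both should be about 1.5708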


0.2 Differential Equation


Linear Differential Equations Of Second Order
1. The complete solution of a differential equation is the sum of the complementary
function and the particular integral:

Complete Solution = Complementary Function + Particular Integral

y = C.F. + P.I.

2. Method For Finding The Complementary Function

S.No. | Nature of roots of A.E.    | Roots of A.E.                              | C.F.
1     | Real and distinct roots    | m₁, m₂, m₃                                 | C₁e^{m₁x} + C₂e^{m₂x} + C₃e^{m₃x}
2     | Repeated roots             | m₁ = m₂                                    | (C₁ + C₂x)e^{m₁x}
      |                            | m₁ = m₂ = m₃                               | (C₁ + C₂x + C₃x²)e^{m₁x}
3     | Complex roots              | m₁ = α + iβ, m₂ = α − iβ                   | e^{αx}[C₁ cos βx + C₂ sin βx]
4     | Repeated complex roots     | m₁ = m₂ = α + iβ, m₃ = m₄ = α − iβ         | e^{αx}[(C₁ + C₂x) cos βx + (C₃ + C₄x) sin βx]
5     | Irrational roots           | m₁ = a + √b, m₂ = a − √b                   | e^{ax}[C₁ cosh √b x + C₂ sinh √b x]
6     | Repeated irrational roots  | m₁ = m₂ = a + √b, m₃ = m₄ = a − √b         | e^{ax}[(C₁ + C₂x) cosh √b x + (C₃ + C₄x) sinh √b x]

Table 1: Complementary functions for the different types of roots of the auxiliary equation

3. Particular Integral
   (i) (1/f(D)) e^{ax} = (1/f(a)) e^{ax}.
       If f(a) = 0, then (1/f(D)) e^{ax} = x · (1/f′(a)) e^{ax}.
       If f′(a) = 0, then (1/f(D)) e^{ax} = x² · (1/f″(a)) e^{ax}.
   (ii) (1/f(D)) xⁿ = [f(D)]⁻¹ xⁿ: expand [f(D)]⁻¹ and then operate on xⁿ.
   (iii) (1/f(D²)) sin ax = (1/f(−a²)) sin ax   and   (1/f(D²)) cos ax = (1/f(−a²)) cos ax.
        If f(−a²) = 0, then (1/f(D²)) sin ax = x · (1/f′(−a²)) sin ax.
   (iv) (1/f(D)) e^{ax} ϕ(x) = e^{ax} · (1/f(D + a)) ϕ(x).
   (v) (1/(D + a)) ϕ(x) = e^{−ax} ∫ e^{ax} ϕ(x) dx.
   (vi) (1/f(D)) xⁿ sin ax and (1/f(D)) xⁿ cos ax:
        (1/f(D)) xⁿ(cos ax + i sin ax) = (1/f(D)) xⁿ e^{iax} = e^{iax} (1/f(D + ia)) xⁿ,
        so (1/f(D)) xⁿ sin ax = imaginary part of e^{iax} (1/f(D + ia)) xⁿ,
           (1/f(D)) xⁿ cos ax = real part of e^{iax} (1/f(D + ia)) xⁿ.

4. Cauchy–Euler Homogeneous Linear Equations
   aₙ xⁿ dⁿy/dxⁿ + aₙ₋₁ xⁿ⁻¹ dⁿ⁻¹y/dxⁿ⁻¹ + … + a₀ y = ϕ(x)   (1)
where a₀, a₁, a₂, … are constants, is called a homogeneous (Cauchy–Euler) equation. Put x = e^z, z = logₑ x, and write D ≡ d/dz. Then
   dy/dx = (dy/dz)(dz/dx) = (1/x) dy/dz   ⟹   x dy/dx = dy/dz = Dy.
Again,
   d²y/dx² = d/dx[(1/x) dy/dz] = −(1/x²) dy/dz + (1/x)(d²y/dz²)(dz/dx)
           = −(1/x²) dy/dz + (1/x²) d²y/dz² = (1/x²)(D² − D)y,
so
   x² d²y/dx² = (D² − D)y = D(D − 1)y.
Similarly,
   x³ d³y/dx³ = D(D − 1)(D − 2)y.
The substitution of these values in (1) reduces the given homogeneous equation to a differential equation with constant coefficients.

5. Method of Variation of Parameters
Working Rule:
Step 1. For
   d²y/dx² + b dy/dx + c y = X,   (1)
find the C.F., i.e. Ay₁ + By₂.
Step 2. Particular integral = u y₁ + v y₂.
Step 3. Find u and v by the formulas:
   u = ∫ −y₂ X/(y₁y₂′ − y₁′y₂) dx,   v = ∫ y₁ X/(y₁y₂′ − y₁′y₂) dx.

Example. Solve d²y/dx² + y = csc x.
Solution. (D² + 1)y = csc x. The A.E. is m² + 1 = 0 ⟹ m = ±i, so
   C.F. = A cos x + B sin x,   y₁ = cos x,   y₂ = sin x.
P.I. = u y₁ + v y₂, where
   u = ∫ −y₂ · csc x/(y₁y₂′ − y₁′y₂) dx = ∫ −sin x · csc x/(cos²x + sin²x) dx = −∫ dx = −x,
   v = ∫ y₁ · csc x/(y₁y₂′ − y₁′y₂) dx = ∫ cos x · (1/sin x)/(cos²x + sin²x) dx = ∫ cot x dx = log sin x.
P.I. = u y₁ + v y₂ = −x cos x + sin x (log sin x).
General solution = C.F. + P.I.:
   y = A cos x + B sin x − x cos x + sin x (log sin x)
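A quick symbolic cross-check of this worked example, assuming SymPy is available (the call below is illustrative, not part of the original notes):

# Cross-check of the variation-of-parameters example with SymPy (assumes sympy is installed).
from sympy import Function, dsolve, Eq, csc, symbols

x = symbols('x')
y = Function('y')

# y'' + y = csc(x); dsolve should return the same structure as above:
# C1*cos(x) + C2*sin(x) - x*cos(x) + sin(x)*log(sin(x)).
sol = dsolve(Eq(y(x).diff(x, 2) + y(x), csc(x)), y(x))
print(sol)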


0.3 Fourier Series


The Fourier series of a function f(x) on (−L, L) is given by:
   f(x) = a₀/2 + Σ_{n=1}^{∞} aₙ cos(nπx/L) + Σ_{n=1}^{∞} bₙ sin(nπx/L),
where the Fourier coefficients are defined as:
   a₀ = (1/L) ∫_{−L}^{L} f(x) dx,
   aₙ = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx,   n ≥ 1,
   bₙ = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx,   n ≥ 1.
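As an illustration of these coefficient formulas (not part of the original notes), the sketch below evaluates aₙ and bₙ numerically for a square wave on (−π, π); it assumes SciPy and NumPy are available, and the helper names a and b are ad hoc.

# Sketch: computing Fourier coefficients numerically (assumes scipy/numpy are installed).
import numpy as np
from scipy.integrate import quad

L = np.pi
f = np.sign   # square wave on (-pi, pi): f(x) = -1, 0, +1

def a(n):
    return quad(lambda x: f(x) * np.cos(n * np.pi * x / L), -L, L)[0] / L

def b(n):
    return quad(lambda x: f(x) * np.sin(n * np.pi * x / L), -L, L)[0] / L

# Odd function: all a_n vanish; b_n = 4/(n*pi) for odd n and 0 for even n.
for n in range(1, 6):
    print(n, round(a(n), 6), round(b(n), 6), round(4 / (n * np.pi) if n % 2 else 0.0, 6))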
Dirichlet Conditions:

• f ( x ) is single-valued in the interval (− L, L).

• f ( x ) is bounded in the interval (− L, L).

• f ( x ) has a finite number of discontinuities in (− L, L).

• f ( x ) has at most a finite number of maxima and minima in the interval (− L, L).


0.4 Laplace Transform


Laplace Transform
Let f(t) be a function of t defined for 0 ≤ t < ∞. Then the Laplace transform of f(t), denoted by L[f(t)] or F(s), is defined as
   L[f(t)] = F(s) = ∫₀^{∞} e^{−st} f(t) dt   (s > 0).
The conditions for the existence of the Laplace transform are:
1. f(t) should be piecewise continuous in every finite interval.
2. f(t) should be of exponential order as t → ∞, i.e.
   lim_{t→∞} e^{−st} f(t) = finite quantity.

Important Laplace Transforms

Laplace Transform                  | Inverse Laplace Transform
L[1] = 1/s                         | L⁻¹[1/s] = 1
L[tⁿ] = n!/sⁿ⁺¹                    | L⁻¹[1/sⁿ⁺¹] = tⁿ/n!
L[e^{at}] = 1/(s − a)              | L⁻¹[1/(s − a)] = e^{at}
L[sin(at)] = a/(s² + a²)           | L⁻¹[a/(s² + a²)] = sin(at)
L[cos(at)] = s/(s² + a²)           | L⁻¹[s/(s² + a²)] = cos(at)
L[sinh(at)] = a/(s² − a²)          | L⁻¹[a/(s² − a²)] = sinh(at)
L[cosh(at)] = s/(s² − a²)          | L⁻¹[s/(s² − a²)] = cosh(at)
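A few of these table entries can be checked symbolically. The sketch below is illustrative and assumes SymPy is available; sympy.laplace_transform uses the same defining integral as above.

# Verifying a few table entries with SymPy (assumes sympy is installed).
from sympy import symbols, laplace_transform, sin, exp

t, s, a = symbols('t s a', positive=True)

print(laplace_transform(1, t, s, noconds=True))         # 1/s
print(laplace_transform(t**3, t, s, noconds=True))      # 6/s**4  (n!/s**(n+1) with n = 3)
print(laplace_transform(exp(a*t), t, s, noconds=True))  # 1/(s - a)
print(laplace_transform(sin(a*t), t, s, noconds=True))  # a/(s**2 + a**2)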
Laplace Transform Theorems and Properties

If L[ f (t)] = F (s), then:

1. First Shifting Theorem:


L[e at f (t)] = F (s − a).

2. Second Shifting Theorem:

L[ f (t − a) H (t − a)] = e−as F (s),

where H (t − a) is the unit step function, defined as:


(
1, t ≥ a,
H (t − a) =
0, t < a.

3. Multiplication Property:
   L[tⁿ f(t)] = (−1)ⁿ dⁿ/dsⁿ F(s).


4. Division Property:
   L[f(t)/t] = ∫_s^{∞} F(u) du.

5. Convolution Theorem: If
   L[f₁(t)] = F₁(s)   and   L[f₂(t)] = F₂(s),
then
   L[ ∫₀^{t} f₁(x) f₂(t − x) dx ] = F₁(s) · F₂(s),
or
   L⁻¹(F₁(s) · F₂(s)) = ∫₀^{t} f₁(x) f₂(t − x) dx.


0.5 Fourier Transform


Fourier Transform:
The Fourier transform of f(x), denoted by F(s), is given by:
   F(s) = (1/√(2π)) ∫_{−∞}^{∞} e^{isx} f(x) dx.
The inverse Fourier transform recovers f(x):
   f(x) = (1/√(2π)) ∫_{−∞}^{∞} e^{−isx} F(s) ds.

Fourier Sine Transform:
The Fourier sine transform of f(x), denoted by F_s(s), is:
   F_s(s) = √(2/π) ∫₀^{∞} f(x) sin(sx) dx.
The inverse Fourier sine transform is:
   f(x) = √(2/π) ∫₀^{∞} F_s(s) sin(sx) ds.

Fourier Cosine Transform:
The Fourier cosine transform of f(x), denoted by F_c(s), is:
   F_c(s) = √(2/π) ∫₀^{∞} f(x) cos(sx) dx.
The inverse Fourier cosine transform is:
   f(x) = √(2/π) ∫₀^{∞} F_c(s) cos(sx) ds.
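Under this convention the Gaussian e^{−x²/2} transforms into itself. A small numerical check of that fact, assuming SciPy and NumPy are available (the helper F below is ad hoc):

# Numerical check that the Gaussian e^(-x^2/2) is its own Fourier transform
# under the convention above (assumes scipy/numpy are installed).
import numpy as np
from scipy.integrate import quad

def F(s):
    # real part of (1/sqrt(2*pi)) * integral of e^{isx} e^{-x^2/2} dx
    # (the imaginary part vanishes by symmetry)
    re = quad(lambda x: np.cos(s * x) * np.exp(-x**2 / 2), -np.inf, np.inf)[0]
    return re / np.sqrt(2 * np.pi)

for s in (0.0, 0.5, 1.0, 2.0):
    print(s, round(F(s), 6), round(np.exp(-s**2 / 2), 6))   # the two columns should agree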


0.6 Gamma and Beta Functions


Gamma Function:
The Gamma function, denoted by Γ(n), is defined as:
   Γ(n) = ∫₀^{∞} e^{−x} x^{n−1} dx.
Some important properties of the Gamma function are:
1. Γ(n + 1) = nΓ(n); for a positive integer n, Γ(n + 1) = n!.
2. Γ(1/2) = √π.
3. Γ(x) is not defined for zero and for negative integers.

Beta Function:
The Beta function, denoted by B(l, m), is defined as:
   B(l, m) = ∫₀^{1} x^{l−1} (1 − x)^{m−1} dx.
Some important properties of the Beta function are:
1. B(m, n) = Γ(m)Γ(n)/Γ(m + n).
2. Γ(x)Γ(1 − x) = π/sin(πx).
3. ∫₀^{π/2} sin^p θ cos^q θ dθ = Γ((p + 1)/2) Γ((q + 1)/2) / [2 Γ((p + q + 2)/2)].
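These relations are easy to spot-check numerically. The sketch below assumes SciPy and NumPy are available; the parameter values are arbitrary and chosen only for illustration.

# Checking the Gamma/Beta relations numerically (assumes scipy is installed).
import numpy as np
from scipy.special import gamma, beta
from scipy.integrate import quad

m, n = 3.5, 2.25
print(beta(m, n), gamma(m) * gamma(n) / gamma(m + n))        # Beta property 1
print(gamma(0.3) * gamma(0.7), np.pi / np.sin(np.pi * 0.3))  # reflection formula

# Beta property 3 with p = 2, q = 3 (exact value 2/15)
lhs = quad(lambda t: np.sin(t)**2 * np.cos(t)**3, 0, np.pi / 2)[0]
rhs = gamma(1.5) * gamma(2.0) / (2 * gamma(3.5))
print(lhs, rhs)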


0.7 Dirac Delta Function


Dirac Delta Function:
The Dirac delta function, δ(x − a), is defined as:
   δ(x − a) = 0 for x ≠ a,   δ(x − a) = ∞ for x = a,
such that the integral over all space is equal to 1:
   ∫_{−∞}^{∞} δ(x − a) dx = 1.

This means the area under the curve of the Dirac delta function should be equal to
unity.
Dimensionality:
If x has dimensions of length, then the Dirac delta function will have dimensions
of (length)−1 .
Some Representations of the Dirac Delta Function:

1. Rectangle Function:
   lim_{σ→0} R_σ(x) = δ(x − a),
where
   R_σ(x) = 1/(2σ) for a − σ < x < a + σ, and 0 otherwise.

2. Gaussian Function:
   lim_{σ→0} G_σ(x) = δ(x − µ),
where
   G_σ(x) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)).

3. Lorentzian Function:
   lim_{ϵ→0} L_ϵ(x) = δ(x − x₀),
where
   L_ϵ(x) = (1/π) · ϵ/((x − x₀)² + ϵ²).

4. Integral Representation:
   (1/2π) ∫_{−∞}^{∞} e^{±ik(x − x₀)} dk = δ(x − x₀).

5. Cartesian Coordinates:

δ3 (⃗r −⃗r0 ) = δ( x − x0 )δ(y − y0 )δ(z − z0 ).


6. Spherical Polar Coordinates:
   δ³(r⃗ − r⃗₀) = (1/(r² sin θ)) δ(r − r₀) δ(θ − θ₀) δ(ϕ − ϕ₀).

7. Cylindrical Coordinates:
   δ³(ρ⃗ − ρ⃗₀) = (1/ρ) δ(ρ − ρ₀) δ(θ − θ₀) δ(z − z₀).

Some Important Properties of the Dirac Delta Function:


1. ∫_{−∞}^{∞} f(x) δ(x − a) dx = f(a).
2. ∫_{−∞}^{∞} f′(x) δ(x − a) dx = −f′(a).
3. δ(x − a) = δ(a − x).
4. δ(c(x − a)) = (1/|c|) δ(x − a), where c is a constant.
5. δ(x² − a²) = (1/(2|a|)) [δ(x − a) + δ(x + a)].
6. ∫_{−∞}^{∞} δ(x − a) δ(x − b) dx = δ(a − b).


0.8 Legendre Differential Equation


Legendre Differential Equation:
The Legendre differential equation is given by:
   (1 − x²) d²y/dx² − 2x dy/dx + n(n + 1)y = 0,
where n is a non-negative integer. Note that x = ±1 are regular singular points of the Legendre differential equation.
Solution: Legendre Polynomial of Degree n
The polynomial solution of the Legendre differential equation is the Legendre polynomial of degree n, denoted by Pₙ(x). It can be expressed using Rodrigues' formula as:
   Pₙ(x) = (1/(2ⁿ n!)) dⁿ/dxⁿ (x² − 1)ⁿ.
Generating Function of Legendre Polynomials:
The generating function of the Legendre polynomials is given by:
   1/√(1 − 2xz + z²) = Σ_{n=0}^{∞} zⁿ Pₙ(x),
where the coefficient of zⁿ in the expansion of the generating function is the Legendre polynomial of order n.
Orthogonality Property of Legendre Polynomials:
The Legendre polynomials satisfy the following orthogonality relation:
   ∫_{−1}^{1} Pₘ(x) Pₙ(x) dx = (2/(2n + 1)) δₘₙ,
where δₘₙ is the Kronecker delta: δₘₙ = 1 if m = n, and 0 if m ≠ n.
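The orthogonality relation can be verified numerically. A minimal sketch, assuming NumPy and SciPy are available (the helper P is ad hoc):

# Numerical check of the Legendre orthogonality relation (assumes numpy/scipy are installed).
import numpy as np
from scipy.integrate import quad
from numpy.polynomial.legendre import Legendre

def P(n):
    # Legendre polynomial P_n as a callable object
    return Legendre.basis(n)

for m in range(4):
    for n in range(4):
        val = quad(lambda x: P(m)(x) * P(n)(x), -1, 1)[0]
        expected = 2.0 / (2 * n + 1) if m == n else 0.0
        assert abs(val - expected) < 1e-8
print("orthogonality relation verified for m, n = 0..3")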


0.9 Bessel Differential Equation


Bessel Differential Equation:
The Bessel differential equation of order n is given by:
   x² d²y/dx² + x dy/dx + (x² − n²)y = 0,
where x = 0 is a regular singular point of the Bessel differential equation.
Solution: Bessel Function of Order n
The solution to the Bessel differential equation is the Bessel function of order n, denoted by Jₙ(x), which can be expressed as a series expansion:
   Jₙ(x) = Σ_{r=0}^{∞} [(−1)ʳ/(r! (n + r)!)] (x/2)^{n+2r}.
Generating Function of Bessel Functions:
The generating function of the Bessel functions is given by:
   e^{(x/2)(z − 1/z)} = Σ_{n=−∞}^{∞} zⁿ Jₙ(x),
where the coefficient of zⁿ in the expansion of the generating function is the Bessel function of order n.
Orthogonality Property of Bessel Functions:
If α and β are roots of Jₙ(x) = 0, the Bessel functions satisfy the orthogonality relation
   ∫₀^{1} x Jₙ(αx) Jₙ(βx) dx = (δ_{αβ}/2) [Jₙ₊₁(α)]²,
where δ_{αβ} is the Kronecker delta: δ_{αβ} = 1 if α = β, and 0 if α ≠ β.


0.10 Hermite Differential Equation


Hermite Differential Equation:
The Hermite differential equation is given by:
   d²y/dx² − 2x dy/dx + 2ny = 0,
where x = 0 is an ordinary point of the Hermite differential equation.
Hermite Polynomial of Order n:
The Hermite polynomial of order n, denoted by Hₙ(x), is given by:
   Hₙ(x) = (−1)ⁿ e^{x²} dⁿ/dxⁿ (e^{−x²}).
The polynomial Hₙ(x) is even if n is even and odd if n is odd.
Generating Function for Hermite Polynomials:
The generating function for the Hermite polynomials is given by:
   e^{2xz − z²} = Σ_{n=0}^{∞} Hₙ(x) zⁿ/n!,
where the coefficient of zⁿ in the expansion of e^{2xz − z²} is Hₙ(x)/n!.
Orthogonality Property of Hermite Polynomials:
The Hermite polynomials satisfy the following orthogonality relation with weight e^{−x²}:
   ∫_{−∞}^{∞} e^{−x²} Hₙ(x) Hₘ(x) dx = 2ⁿ n! √π δₘₙ,
where δₘₙ is the Kronecker delta: δₘₙ = 1 if m = n, and 0 if m ≠ n.


0.11 Laguerre Differential Equation


Laguerre Differential Equation:
The Laguerre differential equation is given by:
   x d²y/dx² + (1 − x) dy/dx + ny = 0,
where x = 0 is a regular singular point of the Laguerre differential equation.
Laguerre Polynomial of Order n:
The Laguerre polynomial of order n, denoted by Lₙ(x), is given by:
   Lₙ(x) = (eˣ/n!) dⁿ/dxⁿ (xⁿ e^{−x}).
Laguerre polynomials are neither even nor odd functions of x.
Generating Function for Laguerre Polynomials:
The generating function for the Laguerre polynomials is given by:
   e^{−xz/(1−z)}/(1 − z) = Σ_{n=0}^{∞} zⁿ Lₙ(x),
where the coefficient of zⁿ in the expansion is the Laguerre polynomial Lₙ(x).
Orthogonality Property of Laguerre Polynomials:
The Laguerre polynomials satisfy the following orthogonality relation:
   ∫₀^{∞} e^{−x} Lₙ(x) Lₘ(x) dx = δₘₙ,
where δₘₙ is the Kronecker delta: δₘₙ = 1 if m = n, and 0 if m ≠ n.


0.12 Confluent Hypergeometric Function


Confluent Hypergeometric Function:
The confluent hypergeometric function of the first kind is denoted as ₁F₁(a; b; x) and is defined by the series:
   ₁F₁(a; b; x) = Σ_{n=0}^{∞} [(a)ₙ/(b)ₙ] xⁿ/n!,
where (a)ₙ and (b)ₙ are the Pochhammer symbols (rising factorials), defined as:
   (a)ₙ = a(a + 1)(a + 2)⋯(a + n − 1),   (b)ₙ = b(b + 1)(b + 2)⋯(b + n − 1),
with (a)₀ = 1.
Differential Equation:
The confluent hypergeometric function satisfies Kummer's differential equation:
   x d²y/dx² + (b − x) dy/dx − a y = 0.
Asymptotic Behavior:
As x → ∞,
   ₁F₁(a; b; x) ∼ [Γ(b)/Γ(a)] x^{a−b} eˣ.
Connection with Other Special Functions:
The confluent hypergeometric function is closely related to many other special
functions, such as the exponential integral, Bessel functions, and Legendre functions.


0.13 Green’s Function


Introduction
Green’s function is a powerful mathematical tool used to solve inhomogeneous linear
differential equations of the form:

Ly( x ) = f ( x ),

where L is a linear differential operator. For example, consider the equation:
   (d²/dx² + 2x²) y(x) = f(x),
where L = d²/dx² + 2x². The solution to this equation can be expressed as:
   y(x) = L⁻¹ f(x),
y ( x ) = L −1 f ( x ),

where L−1 is the inverse of the operator L, known as the Green’s function G ( x, y). The
Green’s function satisfies the following property:

LG ( x, y) = δ( x − y),

where δ( x − y) is the Dirac delta function.


If the Green's function is known, the solution to the inhomogeneous equation can be written as:
   y(x) = ∫ G(x, y) f(y) dy.

Construction of Green’s Function


To construct the Green’s function for the differential equation Ly( x ) = f ( x ), follow
these steps:

1. Solve the homogeneous differential equation: Solve Ly( x ) = 0 to obtain the


general solution.
2. Find the roots of the differential equation: Solve the homogeneous equation for
general solutions.
3. Solve the inhomogeneous equation: Consider the equation Ly( x ) = f ( x ), and
write the Green’s function G ( x, y) such that:

LG ( x, y) = δ( x − y).

For x < y, let the solution be:

y 1 ( x ) = c 1 e m1 x + c 2 e m2 x ,

and for x > y, let the solution be:

y 2 ( x ) = d 1 e m1 x + d 2 e m2 x .

4. Determine the constants: Use the boundary conditions to solve for the unknowns c₁, c₂, d₁, d₂.
5. Compute the Wronskian: The Wronskian of y₁(x) and y₂(x) is
   W(y₁, y₂) = y₁(x) y₂′(x) − y₁′(x) y₂(x).
6. Construct the Green's function: The Green's function is given by
   G(x, y) = y₁(x) y₂(y)/W(y₁, y₂)   if x < y,
and
   G(x, y) = y₁(y) y₂(x)/W(y₁, y₂)   if x > y.

Example: Finding the Green's Function for y″(x) = f(x)

Consider the equation
   y″(x) = f(x),
subject to the boundary conditions y(0) = 0 and y(1) = 0.
Step 1: Solve the homogeneous equation. The homogeneous equation is y″(x) = 0, with general solution y(x) = Ax + B.
Step 2: Write the solutions for x < y and x > y. For x < y the solution is y₁(x) = c₁x + c₂, and for x > y the solution is y₂(x) = d₁x + d₂.
Step 3: Apply the boundary conditions. At x = 0 we require y(0) = 0, which gives c₂ = 0, so y₁(x) = c₁x. At x = 1 we require y(1) = 0, which gives d₁(1) + d₂ = 0, so d₂ = −d₁ and y₂(x) = d₁(x − 1).
Step 4: Compute the Wronskian. The Wronskian of y₁(x) and y₂(x) is
   W(y₁, y₂) = y₁y₂′ − y₁′y₂ = c₁x · d₁ − c₁ · d₁(x − 1) = c₁d₁.
Step 5: Construct the Green's function. For x < y,
   G(x, y) = y₁(x) y₂(y)/W(y₁, y₂) = c₁x · d₁(y − 1)/(c₁d₁) = x(y − 1),
and for x > y,
   G(x, y) = y₁(y) y₂(x)/W(y₁, y₂) = c₁y · d₁(x − 1)/(c₁d₁) = y(x − 1).


Conclusion
Thus, the Green's function for the equation y″(x) = f(x) with boundary conditions y(0) = 0 and y(1) = 0 is:
   G(x, y) = x(y − 1) if x < y,   and   G(x, y) = y(x − 1) if x > y.
This Green's function can now be used to solve the inhomogeneous equation y″(x) = f(x) through the relation
   y(x) = ∫₀^{1} G(x, y) f(y) dy.
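As a concrete check (not in the original notes), take f(y) ≡ 1: the exact solution of y″ = 1 with y(0) = y(1) = 0 is y(x) = (x² − x)/2, and the Green's-function integral reproduces it. A sketch assuming NumPy and SciPy are available:

# Verifying y(x) = ∫ G(x, t) f(t) dt for f ≡ 1 (assumes numpy/scipy are installed).
# Exact solution of y'' = 1 with y(0) = y(1) = 0: y(x) = (x**2 - x)/2.
import numpy as np
from scipy.integrate import quad

def G(x, t):
    return x * (t - 1) if x < t else t * (x - 1)

def y(x):
    return quad(lambda t: G(x, t) * 1.0, 0.0, 1.0, points=[x])[0]

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, round(y(x), 6), round((x**2 - x) / 2, 6))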

Properties of the Green’s Function


The Green’s function possesses several important properties that are critical for solv-
ing inhomogeneous differential equations. These properties are outlined below, along
with supporting examples.

1. Green’s Function Satisfies the Boundary Conditions


One of the fundamental properties of the Green’s function is that it satisfies the bound-
ary conditions of the differential equation. This means that if the boundary conditions
for the differential operator L require Ly( x ) = 0 at the boundaries, the Green’s func-
tion must also meet these same boundary conditions.
Example: Consider the equation y′′ ( x ) = f ( x ), with the boundary conditions:

y (0) = 0 and y(1) = 0.

For the Green’s function G ( x, y), it holds that:

G (0, y) = 0 and G (1, y) = 0.

This property ensures that the Green’s function complies with the imposed constraints,
making it a reliable tool in solving the inhomogeneous equation.

2. Continuity of Green’s Function at x = y


The Green’s function is continuous at x = y. This means that while its first derivative
might have a discontinuity, the function itself remains continuous.
Example: For the Green’s function G ( x, y), the following holds:

lim G ( x, y) = lim G ( x, y).


x →y− x →y+

Thus, the Green’s function is smooth across the point x = y, which is critical for its
physical interpretation in solving the equation.


3. Discontinuity in the First Derivative at x = y


While the Green’s function is continuous, its first derivative with respect to x exhibits
a discontinuity at x = y. The magnitude of this discontinuity is governed by the Dirac
delta function, reflecting the singular nature of the source term in the equation.
The discontinuity in the first derivative is given by:
   ∂G(x, y)/∂x |_{x=y⁺} − ∂G(x, y)/∂x |_{x=y⁻} = 1.

This jump in the derivative corresponds to the nature of the Green’s function, which
is designed to satisfy the equation LG ( x, y) = δ( x − y), where the delta function intro-
duces this sharp change.

4. Green’s Function Satisfies the Differential Equation


The Green’s function satisfies the differential equation for the operator L. Specifically,
it satisfies the equation:
LG ( x, y) = δ( x − y).
This key property of the Green’s function ensures that it is the correct solution to the
inhomogeneous equation when applied to the source term f ( x ).

Conclusion
To summarize, the Green’s function has the following essential properties:

• It satisfies the boundary conditions of the problem, ensuring it adheres to the


physical constraints.

• It is continuous at x = y, ensuring smooth transitions.

• Its first derivative is discontinuous at x = y, with the magnitude of the disconti-


nuity given by the Dirac delta function.

• It satisfies the differential equation LG ( x, y) = δ( x − y), confirming its role as a


solution to the inhomogeneous equation.

These properties make the Green’s function an invaluable and versatile tool for
solving inhomogeneous differential equations in a wide range of physical applica-
tions.


0.14 Numerical Analysis


Numerical Techniques for Root Finding
In numerical analysis, root-finding methods are used to find the roots of equations
of the form f ( x ) = 0. These methods are especially important for solving equations
that cannot be solved analytically. The two major types of equations handled by root-
finding techniques are polynomial and transcendental equations.

1. Polynomial Equation
A polynomial equation is an equation where f ( x ) involves only algebraic functions of
x. A general form is:

f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀ = 0

Example: x⁴ − 2x³ + 8x + 3 = 0

2. Transcendental Equation
A transcendental equation involves both algebraic and other types of functions of x,
such as trigonometric, logarithmic, or exponential functions. A common example is:

f ( x ) = x2 − 2 cos( x ) + 6 = 0

Such equations cannot be solved directly and require numerical methods.

Bisection Method
The Bisection method is a simple and reliable approach for finding the root of a contin-
uous function f ( x ) in a given interval [ a, b], where f ( a) and f (b) have opposite signs.
The method proceeds by repeatedly bisecting the interval and selecting the subinterval
that contains the root.
Working Procedure:
1. Choose the initial interval [ a0 , b0 ] such that f ( a0 ) · f (b0 ) < 0.
2. Compute the midpoint m₀ = (a₀ + b₀)/2.

3. If f (m0 ) = 0, then m0 is the root. Otherwise, proceed:


• If f ( a0 ) · f (m0 ) < 0, update the interval to [ a0 , m0 ].
• If f (b0 ) · f (m0 ) < 0, update the interval to [m0 , b0 ].
4. Repeat steps 2 and 3 until the desired accuracy is reached.
Example: Find the third approximation to the root of x3 + x + 1 = 0 on the interval
[−1, 0].

f ( x ) = x3 + x + 1
Solution:


• Initial interval: [ a0 , b0 ] = [−1, 0]

• f (−1) = (−1)3 + (−1) + 1 = −1

• f (0) = 03 + 0 + 1 = 1

• Since f (−1) · f (0) < 0, the root lies within [−1, 0].
Iterations:
1. First iteration: m₁ = (−1 + 0)/2 = −0.5, f(−0.5) = (−0.5)³ + (−0.5) + 1 = 0.375 > 0.
2. Since f(−1) · f(−0.5) < 0, update the interval to [−1, −0.5].
3. Second iteration: m₂ = (−1 + (−0.5))/2 = −0.75, f(−0.75) = (−0.75)³ + (−0.75) + 1 = −0.171875 < 0.
4. Since f(−0.75) · f(−0.5) < 0, update the interval to [−0.75, −0.5].
5. Third iteration: m₃ = (−0.75 + (−0.5))/2 = −0.625, f(−0.625) = (−0.625)³ + (−0.625) + 1 = 0.130859 > 0.
6. Since f(−0.75) · f(−0.625) < 0, update the interval to [−0.75, −0.625].

Thus, the third approximation to the root is x₃ = −0.625.
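A minimal Python sketch of the bisection procedure (illustrative only; the function bisection_midpoints is an ad hoc helper, not from the notes):

def bisection_midpoints(f, a, b, n_iter):
    """Return the successive midpoints m1, m2, ..., m_n of the bisection method."""
    assert f(a) * f(b) < 0, "f must change sign on [a, b]"
    mids = []
    for _ in range(n_iter):
        m = 0.5 * (a + b)
        mids.append(m)
        if f(a) * f(m) < 0:
            b = m          # root lies in [a, m]
        else:
            a = m          # root lies in [m, b]
    return mids

f = lambda x: x**3 + x + 1
print(bisection_midpoints(f, -1.0, 0.0, 3))        # [-0.5, -0.75, -0.625], matching the worked example
print(bisection_midpoints(f, -1.0, 0.0, 40)[-1])   # about -0.6823, the true root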

Regula Falsi Method


The Regula Falsi method, also called the False Position method, is an improvement
on the Bisection method. It uses linear interpolation to estimate the next root based
on two initial guesses a and b, where f ( a) and f (b) have opposite signs. The method
then iteratively refines the estimate of the root.
Formula:
   xₙ = [a f(b) − b f(a)] / [f(b) − f(a)]
Where: - xn is the current approximation of the root. - a and b are the endpoints of
the initial interval, with f ( a) and f (b) having opposite signs. - f ( a) and f (b) are the
values of the function at a and b, respectively.
The method iteratively refines the root estimate by using linear interpolation be-
tween a and b, producing a sequence of approximations for the root.
Example: Solve x3 + x + 1 = 0 on the interval [−1, 0] using the Regula Falsi
method.
Solution:
1. Initial guesses: a = −1, b = 0

2. Compute f ( a) = f (−1) = −1, and f (b) = f (0) = 1. Since f ( a) · f (b) < 0, the
root lies between a and b.

3. Compute the first approximation:
   x₁ = [a f(b) − b f(a)] / [f(b) − f(a)] = [(−1)(1) − (0)(−1)] / [1 − (−1)] = −1/2 = −0.5

4. Next, the root lies between a = −1 and x₁ = −0.5, since f(−1) = −1 < 0 and f(x₁) = f(−0.5) = 0.375 > 0. Apply the formula again with b replaced by x₁:
   x₂ = [a f(x₁) − x₁ f(a)] / [f(x₁) − f(a)] = [(−1)(0.375) − (−0.5)(−1)] / [0.375 − (−1)] = −0.875/1.375 ≈ −0.636
5. Continue iterating until the desired accuracy is achieved.

Newton-Raphson Method
The Newton-Raphson method is a powerful iterative method for finding roots based
on linear approximation using the tangent line.
Formula:
   xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)
Where: - xn is the current approximation of the root at the n-th iteration. - xn+1 is
the next approximation to the root. - f ′ ( xn ) is the derivative of the function evaluated
at xn .
Example: Solve f ( x ) = 3x3 − 4x − 5 using the Newton-Raphson method with the
initial guess x0 = 2.
Solution:

1. f ( x ) = 3x3 − 4x − 5, f ′ ( x ) = 9x2 − 4

2. Start with x0 = 2:

f (2) = 3(2)3 − 4(2) − 5 = 24 − 8 − 5 = 11

f ′ (2) = 9(2)2 − 4 = 36 − 4 = 32
11
x1 = 2 − ≈ 1.65625
32
3. Second iteration with x₁ = 1.65625:
   f(1.65625) ≈ 3(1.65625)³ − 4(1.65625) − 5 ≈ 2.0051
   f′(1.65625) ≈ 9(1.65625)² − 4 ≈ 20.6885
   x₂ ≈ 1.65625 − 2.0051/20.6885 ≈ 1.5593
4. Repeat until the desired accuracy is achieved.
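A short Python sketch of the Newton–Raphson iteration for this same problem (illustrative only; the helper newton is ad hoc):

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)   # x_{n+1} = x_n - f(x_n)/f'(x_n)
    return x

f = lambda x: 3 * x**3 - 4 * x - 5
fprime = lambda x: 9 * x**2 - 4

print(newton(f, fprime, 2.0))   # converges to about 1.5516, starting from x0 = 2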

Conclusion
Root-finding methods such as the Bisection method, Regula Falsi method, and Newton-
Raphson method are essential tools in numerical analysis for solving nonlinear equa-
tions. The Bisection method guarantees convergence, the Regula Falsi method offers
improved efficiency, and the Newton-Raphson method provides rapid convergence
when the initial guess is close to the actual root.


Newton’s Forward and Backward Interpolation


Newton’s Forward and Backward Interpolation are used for estimating values of a
function at equally spaced intervals. These methods rely on forward and backward
difference operators.

1. Forward Interpolation Formula


The working formula for forward interpolation is given by:

   f(a + uh) = f(a) + (u/1!) Δf(a) + (u(u − 1)/2!) Δ²f(a) + (u(u − 1)(u − 2)/3!) Δ³f(a) + …
where ∆ f ( x ) = f ( x + h) − f ( x ) is called the forward difference operator.
Problem: Estimate the population in 1895 from the following statistics.

Year Population (thousands)


1891 46
1901 66
1911 81
1921 93
1931 101
Solution: We begin by constructing the forward difference table.

X f ( x ) ∆ f ( x ) ∆2 f ( x ) ∆3 f ( x ) ∆4 f ( x )
1891 46 20 −5 2 −3
1901 66 15 −3 −1
1911 81 12 −4
1921 93 8
1931 101
   a = 1891,   a + uh = 1895,   u = (1895 − 1891)/10 = 0.4.
Now, applying the formula for forward interpolation:
   f(1895) = 46 + (0.4/1!)(20) + (0.4(0.4 − 1)/2!)(−5) + (0.4(0.4 − 1)(0.4 − 2)/3!)(2) + (0.4(0.4 − 1)(0.4 − 2)(0.4 − 3)/4!)(−3)
           = 46 + 8 + 0.6 + 0.128 + 0.1248 ≈ 54.85.
Thus, the estimated population in 1895 is approximately 54.85 thousand.
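The same forward-interpolation computation can be scripted. A sketch assuming NumPy is available (the helper newton_forward is ad hoc):

# Newton forward interpolation on the population table (illustrative only).
import numpy as np

def newton_forward(x_nodes, y_values, x):
    """Equally spaced Newton forward interpolation from the first node."""
    y = np.asarray(y_values, dtype=float)
    h = x_nodes[1] - x_nodes[0]
    u = (x - x_nodes[0]) / h
    # build the columns of forward differences; d[0] is the leading difference Δ^k f(a)
    diffs = [y.copy()]
    while len(diffs[-1]) > 1:
        diffs.append(np.diff(diffs[-1]))
    result, term = 0.0, 1.0
    for k, d in enumerate(diffs):
        if k > 0:
            term *= (u - (k - 1)) / k     # u(u-1)...(u-k+1)/k!
        result += term * d[0]
    return result

years = [1891, 1901, 1911, 1921, 1931]
pop = [46, 66, 81, 93, 101]
print(round(newton_forward(years, pop, 1895), 4))   # about 54.8528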

2. Backward Interpolation Formula


The working formula for backward interpolation is given by:

   f(a + uh) = f(a) + (u/1!) ∇f(a) + (u(u + 1)/2!) ∇²f(a) + (u(u + 1)(u + 2)/3!) ∇³f(a) + …,
where ∇f(x) = f(x) − f(x − h) is the backward difference operator and a is the last tabulated value of x.


Problem: Estimate the population in 1925 from the following statistics.

Year Population (thousands)


1891 46
1901 66
1911 81
1921 93
1931 101
Solution: We begin by constructing the backward difference table.

X     f(x)   ∇f(x)   ∇²f(x)   ∇³f(x)   ∇⁴f(x)
1891   46
1901   66     20
1911   81     15      −5
1921   93     12      −3        2
1931  101      8      −4       −1       −3
   a = 1931,   a + uh = 1925,   u = (1925 − 1931)/10 = −0.6.
Now, applying the formula for backward interpolation:
   f(1925) = 101 + (−0.6/1!)(8) + ((−0.6)(0.4)/2!)(−4) + ((−0.6)(0.4)(1.4)/3!)(−1) + ((−0.6)(0.4)(1.4)(2.4)/4!)(−3)
           = 101 − 4.8 + 0.48 + 0.056 + 0.1008 = 96.8368.
Thus, the estimated population in 1925 is 96.8368 thousand (about 96.84 thousand).

Lagrange Interpolation
Lagrange Interpolation is used to estimate the value of a function at a given point
using the values of the function at other known points. The Lagrange interpolation
polynomial can be written as:

P ( x ) = f ( x0 ) · ℓ0 ( x ) + f ( x1 ) · ℓ1 ( x ) + f ( x2 ) · ℓ2 ( x ) + · · · + f ( x n ) · ℓ n ( x )

where f ( xi ) are the known function values at xi , and the Lagrange basis polyno-
mials ℓi ( x ) are calculated as:
   ℓᵢ(x) = Π_{j=0, j≠i}^{n} (x − xⱼ)/(xᵢ − xⱼ).
This can be written without the product notation; each basis polynomial omits its own node:
   ℓ₀(x) = (x − x₁)(x − x₂)⋯(x − xₙ) / [(x₀ − x₁)(x₀ − x₂)⋯(x₀ − xₙ)],
   ℓ₁(x) = (x − x₀)(x − x₂)⋯(x − xₙ) / [(x₁ − x₀)(x₁ − x₂)⋯(x₁ − xₙ)],
   ℓ₂(x) = (x − x₀)(x − x₁)(x − x₃)⋯(x − xₙ) / [(x₂ − x₀)(x₂ − x₁)(x₂ − x₃)⋯(x₂ − xₙ)],
   ⋮
   ℓₙ(x) = (x − x₀)(x − x₁)⋯(x − xₙ₋₁) / [(xₙ − x₀)(xₙ − x₁)⋯(xₙ − xₙ₋₁)].
Problem: Estimate the value of the function at x = 2.5 using the following data
points.

x f (x)
1 1
2 4
3 9
4 16
Solution: Using the Lagrange interpolation formula, the interpolating polynomial
for this problem is:

P ( x ) = f (1) · ℓ0 ( x ) + f (2) · ℓ1 ( x ) + f (3) · ℓ2 ( x ) + f (4) · ℓ3 ( x )


Now, calculate each Lagrange basis polynomial ℓᵢ(x):
   ℓ₀(x) = (x − 2)(x − 3)(x − 4) / [(1 − 2)(1 − 3)(1 − 4)]
   ℓ₁(x) = (x − 1)(x − 3)(x − 4) / [(2 − 1)(2 − 3)(2 − 4)]
   ℓ₂(x) = (x − 1)(x − 2)(x − 4) / [(3 − 1)(3 − 2)(3 − 4)]
   ℓ₃(x) = (x − 1)(x − 2)(x − 3) / [(4 − 1)(4 − 2)(4 − 3)]
Finally, substitute x = 2.5 into the Lagrange polynomial:
   P(2.5) = f(1)·ℓ₀(2.5) + f(2)·ℓ₁(2.5) + f(3)·ℓ₂(2.5) + f(4)·ℓ₃(2.5).
Evaluating the basis polynomials at x = 2.5 gives ℓ₀(2.5) = −0.0625, ℓ₁(2.5) = 0.5625, ℓ₂(2.5) = 0.5625, ℓ₃(2.5) = −0.0625, so
   P(2.5) = 1(−0.0625) + 4(0.5625) + 9(0.5625) + 16(−0.0625) = 6.25.
Since the data points lie on f(x) = x², the interpolating polynomial reproduces x² exactly, and the estimated value of the function at x = 2.5 is 6.25.
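A small Python sketch of the same Lagrange evaluation (illustrative only; lagrange_eval is an ad hoc helper):

def lagrange_eval(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # basis polynomial l_i(x)
        total += yi * li
    return total

xs = [1, 2, 3, 4]
ys = [1, 4, 9, 16]
print(lagrange_eval(xs, ys, 2.5))   # 6.25, since the data lie on f(x) = x**2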

Numerical Integration Techniques


In numerical integration, we approximate the value of definite integrals using discrete
methods. Below are three commonly used methods: Simpson’s 1/3 Rule, Simpson’s
3/8 Rule, and the Trapezoidal Rule.


1. Simpson’s 1/3 Rule


The working formula for Simpson’s 1/3 rule is:

   ∫ₐᵇ f(x) dx ≈ (h/3) [y₀ + yₙ + 2(y₂ + y₄ + y₆ + …) + 4(y₁ + y₃ + y₅ + …)]

where:
• h = (b − a)/n is the step size,
• n is the number of intervals and must be even.


Problem: Evaluate ∫₀⁶ 1/(1 + x²) dx using Simpson's 1/3 rule with n = 6.
Solution: Given the function f(x) = 1/(1 + x²), we approximate ∫₀⁶ 1/(1 + x²) dx using Simpson's 1/3 rule with n = 6.
1. First, calculate the step size h:
   h = (b − a)/n = (6 − 0)/6 = 1.
2. The values of the function at the required points are:
   f(0) = 1,  f(1) = 1/2,  f(2) = 1/5,  f(3) = 1/10,  f(4) = 1/17,  f(5) = 1/26,  f(6) = 1/37.
3. Apply Simpson's 1/3 rule:
   ∫₀⁶ 1/(1 + x²) dx ≈ (1/3) [f(0) + f(6) + 2(f(2) + f(4)) + 4(f(1) + f(3) + f(5))]
   = (1/3) [1 + 1/37 + 2(1/5 + 1/17) + 4(1/2 + 1/10 + 1/26)]
   = (1/3) [1 + 0.027 + 2(0.2588) + 4(0.6385)]
   = (1/3) [1 + 0.027 + 0.5176 + 2.554] = (1/3) × 4.0986 = 1.3662.
Thus, the approximate value of the integral is 1.3662.


2. Simpson’s 3/8 Rule


The working formula for Simpson's 3/8 rule is:
   ∫ₐᵇ f(x) dx ≈ (3h/8) [y₀ + yₙ + 2(y₃ + y₆ + y₉ + …) + 3(y₁ + y₂ + y₄ + y₅ + …)]
where:
• h = (b − a)/n is the step size,
• n is the number of intervals, and it must be a multiple of 3,
• ordinates whose index is a multiple of 3 (other than y₀ and yₙ) take the coefficient 2, and all other interior ordinates take the coefficient 3.


Problem: Evaluate ∫₀⁶ 1/(1 + x²) dx using Simpson's 3/8 rule with n = 6.
Solution: Given the function f(x) = 1/(1 + x²), we approximate ∫₀⁶ 1/(1 + x²) dx using Simpson's 3/8 rule with n = 6.
1. First, calculate the step size h:
   h = (6 − 0)/6 = 1.
2. The values of the function at the required points are:
   f(0) = 1,  f(1) = 1/2,  f(2) = 1/5,  f(3) = 1/10,  f(4) = 1/17,  f(5) = 1/26,  f(6) = 1/37.
3. Apply Simpson's 3/8 rule (coefficient 2 for f(3), coefficient 3 for f(1), f(2), f(4), f(5)):
   ∫₀⁶ 1/(1 + x²) dx ≈ (3/8) [f(0) + f(6) + 2 f(3) + 3(f(1) + f(2) + f(4) + f(5))]
   = (3/8) [1 + 1/37 + 2(1/10) + 3(1/2 + 1/5 + 1/17 + 1/26)]
   = (3/8) [1 + 0.027 + 0.2 + 3(0.7973)]
   = (3/8) [1.227 + 2.3919] = (3/8) × 3.6189 ≈ 1.357.
Thus, the approximate value of the integral is approximately 1.357.


3. Trapezoidal Rule
The working formula for the Trapezoidal rule is:
   ∫ₐᵇ f(x) dx ≈ (h/2) [y₀ + yₙ + 2(y₁ + y₂ + y₃ + …)]
where:
• h = (b − a)/n is the step size,
• n is the number of intervals.

Problem: Evaluate ∫₀⁶ 1/(1 + x²) dx using the Trapezoidal rule with n = 6.
Solution: Given the function f(x) = 1/(1 + x²), we approximate ∫₀⁶ 1/(1 + x²) dx using the Trapezoidal rule with n = 6.
1. First, calculate the step size h:
   h = (6 − 0)/6 = 1.
2. The values of the function at the required points are:
   f(0) = 1,  f(1) = 1/2,  f(2) = 1/5,  f(3) = 1/10,  f(4) = 1/17,  f(5) = 1/26,  f(6) = 1/37.
3. Apply the Trapezoidal rule:
   ∫₀⁶ 1/(1 + x²) dx ≈ (1/2) [f(0) + f(6) + 2(f(1) + f(2) + f(3) + f(4) + f(5))]
   = (1/2) [1 + 1/37 + 2(1/2 + 1/5 + 1/10 + 1/17 + 1/26)]
   = (1/2) [1 + 0.027 + 2(0.8973)]
   = (1/2) [1 + 0.027 + 1.7946] = (1/2) × 2.8216 = 1.4108.
Thus, the approximate value of the integral is 1.4108.

Summary of Results
• Using Simpson’s 1/3 rule, the approximate value of the integral is 1.3662.

• Using Simpson's 3/8 rule, the approximate value of the integral is approximately 1.357.

• Using the Trapezoidal rule, the approximate value of the integral is 1.4108.
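The three rules are easy to reproduce in a few lines. The sketch below assumes NumPy is available; note that with such a coarse step (h = 1) all three values differ noticeably from the exact value arctan 6 ≈ 1.4056.

# Comparing the three quadrature rules on the same integral (illustrative only).
import numpy as np

f = lambda x: 1.0 / (1.0 + x**2)
a, b, n = 0.0, 6.0, 6
x = np.linspace(a, b, n + 1)
y = f(x)
h = (b - a) / n

trapezoid = h / 2 * (y[0] + y[-1] + 2 * y[1:-1].sum())
simpson13 = h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())
simpson38 = 3 * h / 8 * (y[0] + y[-1] + 3 * (y[1] + y[2] + y[4] + y[5]) + 2 * y[3])

print(trapezoid, simpson13, simpson38, np.arctan(6.0))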


Numerical Methods for Solving Initial Value Problems


1. Runge-Kutta 2nd Order (RK2) Method
The Runge-Kutta 2nd order method (RK2) approximates the solution of an initial
value problem using the following formula:
   yₙ₊₁ = yₙ + (h/2) [f(xₙ, yₙ) + f(xₙ₊₁, yₙ + h f(xₙ, yₙ))]
Problem: Solve dy/dx = x + y with y(0) = 1, using h = 0.1, and compute the approximate value of y at x = 0.2.
Solution:
Start with x₀ = 0, y₀ = 1, and h = 0.1. We apply the RK2 method:

1. Compute k1 = f ( x0 , y0 ) = 0 + 1 = 1

2. Compute y1∗ = y0 + hk1 = 1 + 0.1 · 1 = 1.1

3. Compute k2 = f ( x1 , y1∗ ) = 0.1 + 1.1 = 1.2

4. Now compute y₁ = y₀ + (h/2)(k₁ + k₂):
   y₁ = 1 + (0.1/2)(1 + 1.2) = 1 + 0.11 = 1.11

Thus, the approximate value of y at x = 0.1 is y1 = 1.11.


Now, for x = 0.2, repeat the above steps with the new initial values x1 = 0.1 and
y1 = 1.11:

1. Compute k1 = f (0.1, 1.11) = 0.1 + 1.11 = 1.21

2. Compute y2∗ = 1.11 + 0.1 · 1.21 = 1.11 + 0.121 = 1.231

3. Compute k2 = f (0.2, 1.231) = 0.2 + 1.231 = 1.431


4. Now compute y₂ = y₁ + (h/2)(k₁ + k₂) = 1.11 + (0.1/2)(1.21 + 1.431) = 1.11 + 0.13205 = 1.24205.

Thus, the approximate value of y at x = 0.2 is y₂ ≈ 1.242.

2. Runge-Kutta 4th Order (RK4) Method


The Runge-Kutta 4th order method (RK4) is a higher-order method for solving ordi-
nary differential equations. The RK4 method is given by the following formula:

h
y n +1 = y n + (k1 + 2k2 + 2k3 + k4 )
6
where:
   
h h h h
k 1 = f ( x n , y n ), k2 = f xn + , yn + k1 , k3 = f xn + , yn + k2 ,
2 2 2 2

k4 = f ( xn + h, yn + hk3 )


Problem: Solve dy/dx = x + y with y(0) = 1, using h = 0.1, and compute the approximate value of y at x = 0.2.
Solution:
We are given the differential equation
   dy/dx = x + y
with the initial condition y(0) = 1, and step size h = 0.1. We need to compute the
approximate value of y at x = 0.1 and x = 0.2 using the RK4 method.
1. Initial Conditions: - x0 = 0, - y0 = 1, - h = 0.1.
2. Apply RK4 Method:
The RK4 method involves calculating the intermediate terms k1 , k2 , k3 , k4 at each
step.
Step 1: Compute k1 :

k1 = f ( x0 , y0 ) = f (0, 1) = 0 + 1 = 1

So, k1 = 1.
Step 2: Compute k2 :
 
h h
k2 = f x0 + , y0 + k1 = f (0.05, 1 + 0.05 × 1) = f (0.05, 1.05)
2 2

k2 = 0.05 + 1.05 = 1.1


So, k2 = 1.1.
Step 3: Compute k3 :
 
h h
k3 = f x0 + , y0 + k2 = f (0.05, 1 + 0.05 × 1.1) = f (0.05, 1.055)
2 2

k3 = 0.05 + 1.055 = 1.105


So, k3 = 1.105.
Step 4: Compute k4 :

k4 = f ( x0 + h, y0 + hk3 ) = f (0.1, 1 + 0.1 × 1.105) = f (0.1, 1.1105)

k4 = 0.1 + 1.1105 = 1.2105


So, k4 = 1.2105.
3. Compute the Next Value y1 :
Now, we can use the RK4 formula to compute y1 at x1 = 0.1:

h
y1 = y0 + (k1 + 2k2 + 2k3 + k4 )
6
Substituting the values of k1 , k2 , k3 , k4 , and h = 0.1:

0.1
y1 = 1 + (1 + 2 · 1.1 + 2 · 1.105 + 1.2105)
6
0.1
y1 = 1 + (1 + 2.2 + 2.21 + 1.2105)
6


0.1
y1 = 1 + · 6.6205 = 1 + 0.11034 = 1.11034
6
So, y1 = 1.11034.
4. Repeat the Process for x = 0.2:
Now we repeat the above steps for x1 = 0.1 and y1 = 1.11034, using the same step
size h = 0.1 to compute y2 .
Step 1: Compute k1 :

k1 = f ( x1 , y1 ) = f (0.1, 1.11034) = 0.1 + 1.11034 = 1.21034

Step 2: Compute k2 :
 
h h
k2 = f x1 + , y1 + k 1 = f (0.15, 1.11034 + 0.05 × 1.21034) = f (0.15, 1.17185)
2 2

k2 = 0.15 + 1.17185 = 1.32185

Step 3: Compute k3 :
 
h h
k3 = f x1 + , y1 + k 2 = f (0.15, 1.11034 + 0.05 × 1.32185) = f (0.15, 1.17642)
2 2

k3 = 0.15 + 1.17642 = 1.32642

Step 4: Compute k4 :

k4 = f ( x1 + h, y1 + hk3 ) = f (0.2, 1.11034 + 0.1 × 1.32642) = f (0.2, 1.24398)

k4 = 0.2 + 1.24398 = 1.44398

5. Compute the Next Value y2 :

h
y2 = y1 + (k1 + 2k2 + 2k3 + k4 )
6

Substituting the values of k1 , k2 , k3 , k4 :

0.1
y2 = 1.11034 + (1.21034 + 2 · 1.32185 + 2 · 1.32642 + 1.44398)
6

0.1
y2 = 1.11034 + (1.21034 + 2.6437 + 2.65284 + 1.44398)
6
0.1
y2 = 1.11034 + · 7.95086 = 1.11034 + 0.13251 = 1.24285
6
Thus, the approximate value of y at x = 0.2 is y2 = 1.24285.
Results:
- At x = 0.1, y1 = 1.11034, - At x = 0.2, y2 = 1.24285.
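A compact RK4 sketch for the same problem (illustrative only; rk4_step is an ad hoc helper). The exact solution y = 2eˣ − x − 1 is printed alongside for comparison.

# RK4 sketch for dy/dx = x + y, y(0) = 1 (exact solution: y = 2*e**x - x - 1).
import math

def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda x, y: x + y
x, y, h = 0.0, 1.0, 0.1
for _ in range(2):                    # two steps: x = 0.1 and x = 0.2
    y = rk4_step(f, x, y, h)
    x += h
    print(round(x, 1), round(y, 5), round(2 * math.exp(x) - x - 1, 5))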


3. Finite Difference Method


The Finite Difference Method (FDM) is a numerical technique for solving ordinary
differential equations (ODEs) by approximating derivatives using finite differences.
The basic idea is to discretize the differential equation by approximating derivatives
at discrete points in the domain.
For the first-order derivative dy/dx, the most common finite difference approximations are:
   dy/dx ≈ (yₙ₊₁ − yₙ)/h          (Forward Difference)
   dy/dx ≈ (yₙ − yₙ₋₁)/h          (Backward Difference)
   dy/dx ≈ (yₙ₊₁ − yₙ₋₁)/(2h)     (Central Difference)
The method proceeds by discretizing the independent variable x with a step size
h, which determines the spacing between the grid points. The unknown solution is
approximated at each grid point, and we solve the system of equations iteratively.
Problem: Solve the differential equation dy/dx = x + y with the initial condition y(0) = 1, using the Finite Difference Method with step size h = 0.1, and find the approximate value of y at x = 0.2.
Solution:
We start by discretizing the interval; the Finite Difference Method replaces the derivative with an approximation. Since we are solving dy/dx = x + y, we use the forward difference approximation for dy/dx.
Step 1: Set Up the Discretized Equation
Using the forward difference formula and substituting it into the differential equation dy/dx = x + y:
   (yₙ₊₁ − yₙ)/h = xₙ + yₙ.
Rearranging the above equation to solve for yₙ₊₁:
   yₙ₊₁ = yₙ + h(xₙ + yₙ).
Step 2: Initial Condition and Parameters
We are given the initial condition y(0) = 1, and we are asked to compute the value
of y at x = 0.1 and x = 0.2 using a step size h = 0.1.
We know: - y0 = 1 - h = 0.1 - x0 = 0 - We need to calculate y1 and y2 .
Step 3: Apply the Finite Difference Formula
Compute y1 at x = 0.1:
Using the equation:

y1 = y0 + h ( x0 + y0 )


Substitute the known values:

y1 = 1 + 0.1 · (0 + 1) = 1 + 0.1 = 1.1


So, y1 = 1.1.
Compute y2 at x = 0.2:
Now, use the same formula to compute y2 using x1 = 0.1 and y1 = 1.1:

y2 = y1 + h ( x1 + y1 )
Substitute the known values:

y2 = 1.1 + 0.1 · (0.1 + 1.1) = 1.1 + 0.1 · 1.2 = 1.1 + 0.12 = 1.22
So, y2 = 1.22.
Step 4: Results
Using the Finite Difference Method with step size h = 0.1, we have computed the
following approximate values:
- At x = 0.1, y1 = 1.1 - At x = 0.2, y2 = 1.22
Thus, the approximate value of y at x = 0.2 is y2 = 1.22.

4. Euler’s Method
Euler’s method is a straightforward, first-order method for numerically solving initial
value problems. It uses the following iterative formula:

y n +1 = y n + h f ( x n , y n )
Where: h is the step size; xₙ and yₙ are the current values of the independent and dependent variables; f(xₙ, yₙ) is the right-hand side of the differential equation dy/dx = f(x, y).
Problem: Solve dy/dx = x + y with y(0) = 1, using Euler's method and step size h = 0.1, to compute the approximate value of y at x = 0.2.
Solution:
1. Given: the differential equation dy/dx = x + y, the initial condition y(0) = 1, and the step size h = 0.1; we need to compute the approximate value of y at x = 0.2.
2. Initial Conditions: - x0 = 0, - y0 = 1.
3. Euler’s Method Formula: - The formula for Euler’s method is:

y n +1 = y n + h f ( x n , y n )
Where f(xₙ, yₙ) = xₙ + yₙ, based on the given differential equation dy/dx = x + y.
4. Step 1: Compute y1 at x1 = 0.1: - Using the Euler’s method formula:

y1 = y0 + h f ( x0 , y0 )

Substituting the known values:

y1 = 1 + 0.1 · (0 + 1) = 1 + 0.1 = 1.1

Therefore, y1 = 1.1.


5. Step 2: Compute y2 at x2 = 0.2: - Using the Euler’s method formula again:

y2 = y1 + h f ( x1 , y1 )

Substituting the known values:

y2 = 1.1 + 0.1 · (0.1 + 1.1) = 1.1 + 0.1 · 1.2 = 1.1 + 0.12 = 1.22

Therefore, y2 = 1.22.
6. Step 3: Compute y3 at x3 = 0.3 (optional for further accuracy): - To compute the
next step, we can continue applying the Euler’s method:

y3 = y2 + h f ( x2 , y2 )

Substituting the known values:

y3 = 1.22 + 0.1 · (0.2 + 1.22) = 1.22 + 0.1 · 1.42 = 1.22 + 0.142 = 1.362

Therefore, y3 = 1.362.
Results:
Using Euler’s method with a step size of h = 0.1, the approximate values of y at
different points are:
- At x = 0.1, y1 = 1.1, - At x = 0.2, y2 = 1.22, - At x = 0.3, y3 = 1.362.
Thus, the approximate value of y at x = 0.2 is y2 = 1.22.


0.15 Group Theory

Groups and Abelian Groups


Definition of a Group
A group G is a set equipped with a binary operation ∗ that satisfies the following four
fundamental properties:

• Closure: For all elements a, b ∈ G, the result of the operation a ∗ b is also an


element of G. In other words, the operation ∗ does not produce any element
outside of G. Formally,
a, b ∈ G =⇒ a ∗ b ∈ G.

• Associativity: For all elements a, b, c ∈ G, the operation ∗ satisfies the associative


property:
( a ∗ b ) ∗ c = a ∗ ( b ∗ c ),
meaning the order in which operations are performed does not affect the result.

• Identity element: There exists an element e ∈ G, called the identity element, such
that for every element a ∈ G, the following holds:

a ∗ e = e ∗ a = a.

The identity element acts as a neutral element for the operation ∗, leaving any
element unchanged when combined with it.

• Inverse element: For every element a ∈ G, there exists an element b ∈ G such


that:
a ∗ b = b ∗ a = e,
where e is the identity element. The element b is called the inverse of a, and the
operation ∗ ”undoes” the effect of a when combined with b.

These four properties guarantee the structure of a group, allowing us to perform


algebraic manipulations within the set G under the operation ∗.

Definition of an Abelian Group


An Abelian group (or commutative group) is a group G in which the binary operation
∗ is commutative. This means that for all elements a, b ∈ G, the order of applying the
operation does not matter:
a ∗ b = b ∗ a.
In other words, the elements of the group commute with each other. For example, in
the group of integers under addition (Z, +), we have 2 + 3 = 3 + 2, which shows that
the operation (addition) is commutative. Therefore, (Z, +) is an Abelian group.


Examples
SU(2)
The special unitary group SU (2) is the group of 2 × 2 unitary matrices with determinant
1. A matrix A ∈ SU (2) satisfies two key conditions:

A† A = I (unitary condition), det( A) = 1 (determinant condition).

The unitary condition implies that the matrix is invertible, with its conjugate transpose as
its inverse, and that it preserves the norm of any vector it acts upon. The determinant
condition singles out the "special" subgroup of the unitary group, fixing the overall phase to unity.
A general element of SU (2) can be written as:

A = \begin{pmatrix} a & b \\ -b^{*} & a^{*} \end{pmatrix}

where a, b ∈ C, and a^{*} and b^{*} denote the complex conjugates of a and b, respectively.
The condition det( A) = |a|^2 + |b|^2 = 1 ensures that the determinant of A is 1.
A simple example of an element in SU (2) is:

A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}

This matrix represents a rotation in the 2-dimensional complex plane by an angle θ,


and it satisfies both the unitarity condition A† A = I and the determinant condition
det( A) = 1.
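A quick numerical check of the two defining conditions can be made with NumPy (a sketch; the values a = 0.6 and b = 0.8i are arbitrary, chosen so that |a|^2 + |b|^2 = 1):

import numpy as np

a, b = 0.6 + 0.0j, 0.8j
A = np.array([[a, b],
              [-np.conj(b), np.conj(a)]])          # general SU(2) form
print(np.allclose(A.conj().T @ A, np.eye(2)))      # True: A†A = I
print(np.isclose(np.linalg.det(A), 1.0))           # True: det(A) = 1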

SU(3)
The special unitary group SU (3) consists of all 3 × 3 unitary matrices with determinant
1. An element A ∈ SU (3) satisfies:

A† A = I (unitary condition), det( A) = 1 (determinant condition).

This group plays a vital role in quantum chromodynamics (QCD), where it describes
the symmetries of the strong interaction.
A simple example of an element in SU (3) is:

A = \begin{pmatrix} e^{i\theta} & 0 & 0 \\ 0 & e^{-i\theta} & 0 \\ 0 & 0 & 1 \end{pmatrix}

This matrix represents a diagonal transformation with phases e^{iθ} and e^{−iθ} on the
diagonal. The determinant of this matrix is:

det( A) = e^{iθ} · e^{−iθ} · 1 = 1,

ensuring that it belongs to SU (3).


O(3)
The orthogonal group O(3) consists of all 3 × 3 orthogonal matrices. A matrix A ∈ O(3)
satisfies:
A T A = I (orthogonality condition).
This means the rows (and columns) of the matrix are orthonormal vectors, and the ma-
trix preserves the Euclidean norm of vectors it acts upon. The group O(3) represents
the symmetries of 3-dimensional Euclidean space, including rotations and reflections.
A typical example of an element in O(3) is the rotation matrix about the x-axis by
an angle θ:

A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}
This matrix corresponds to a counterclockwise rotation by an angle θ about the x-axis.
It is orthogonal because A T A = I, and its determinant is det( A) = 1, which means the
matrix represents a pure rotation (not a reflection).
In contrast, the group O(3) also includes reflections, where det( A) = −1, indicat-
ing that the transformation is not orientation-preserving. Reflections in O(3) can be
represented by matrices that have a determinant of −1, such as the reflection matrix
across the yz-plane:

A = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
This matrix represents a reflection that flips the sign of the x-coordinate, leaving the
other coordinates unchanged.
Thus, the group O(3) consists of both rotations and reflections, whereas the sub-
group SO(3) consists only of rotation matrices with det( A) = 1.
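The rotation/reflection distinction is easy to confirm numerically (a NumPy sketch with an arbitrary angle):

import numpy as np

theta = 0.7
R = np.array([[1, 0, 0],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta),  np.cos(theta)]])   # rotation about the x-axis
M = np.diag([-1.0, 1.0, 1.0])                        # reflection across the yz-plane

for A in (R, M):
    print(np.allclose(A.T @ A, np.eye(3)), round(np.linalg.det(A)))
# Output: True 1    (rotation, in SO(3))
#         True -1   (reflection, in O(3) but not in SO(3))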


0.16 Tensor Analysis


Tensor Analysis
Definition of a Tensor
A tensor is a mathematical object that generalizes the concept of scalars, vectors, and
matrices. It can be represented as a multi-dimensional array of components. The rank
of a tensor refers to the number of indices needed to describe it. The components of
a tensor transform in a particular way under a change of coordinates, allowing it to
be used in many areas of physics and engineering, such as mechanics, relativity, and
fluid dynamics.

Tensor Rank
The rank of a tensor refers to the number of indices required to define it. The following
are examples of tensors of various ranks:

• Rank 0 (Scalar): A tensor of rank 0 has only one component. It is a scalar.

• Rank 1 (Vector): A tensor of rank 1 has 3 components. It can be thought of as a


vector in 3-dimensional space.

• Rank 2 (Matrix): A tensor of rank 2 has 3 × 3 = 9 components. It can be repre-


sented as a matrix.

• Rank N: A tensor of rank N has 3 N components in a 3-dimensional space.

Definition of Cartesian Vectors


In Cartesian coordinates, a vector can be represented as a column matrix:

\begin{pmatrix} v'_x \\ v'_y \\ v'_z \end{pmatrix} = A \cdot \begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix}

where A is the transformation matrix, given by:

A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} .

This equation can be written as:

v'_i = \sum_{j=1}^{3} a_{ij} v_j , \quad i = 1, 2, 3.

Here, the vector components in the transformed coordinate system (v′x , v′y , v′z ) are ob-
tained by multiplying the matrix A with the original components (v x , vy , vz ).
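As an illustration (a sketch; the rotation angle and the vector are arbitrary test values), take A to be a rotation of the coordinate axes about the z-axis:

import numpy as np

theta = np.pi / 6
A = np.array([[ np.cos(theta), np.sin(theta), 0],
              [-np.sin(theta), np.cos(theta), 0],
              [ 0,             0,             1]])   # axes rotated about z
v = np.array([1.0, 2.0, 3.0])

v_prime = A @ v                 # v'_i = sum_j a_ij v_j
print(v_prime)
print(np.isclose(np.linalg.norm(v_prime), np.linalg.norm(v)))   # True: length unchanged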


Transformation of a Rank-2 Tensor


For a rank-2 tensor T, the transformation law in Cartesian coordinates is given by:
T'_{kl} = \sum_{i=1}^{3} \sum_{j=1}^{3} a_{ik} a_{jl} T_{ij} , \quad k, l = 1, 2, 3.

This equation shows how the components of a rank-2 tensor transform under a change
of coordinates.
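The double sum is evaluated compactly with np.einsum (a sketch; a is again an arbitrary axis rotation and T an arbitrary set of components):

import numpy as np

theta = 0.3
a = np.array([[ np.cos(theta), np.sin(theta), 0],
              [-np.sin(theta), np.cos(theta), 0],
              [ 0,             0,             1]])
T = np.arange(9.0).reshape(3, 3)                 # arbitrary rank-2 components T_ij

# T'_kl = sum_{i,j} a_ik a_jl T_ij
T_prime = np.einsum('ik,jl,ij->kl', a, a, T)
print(T_prime)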

Transformation of a Higher Rank Tensor


For a higher-rank tensor T_{αβγδ}, the transformation law is given by:

T'_{αβγδ} = \sum_{i,j,k,l} a_{αi} a_{βj} a_{γk} a_{δl} T_{ijkl} ,

where the summation is over the indices i, j, k, l, and the tensor components transform
accordingly.

Summation Convention
The summation convention, also known as the Einstein summation convention, is a
shorthand notation that implies summation over repeated indices. When an index
appears twice—once as a lower index and once as an upper index—summation over
all its possible values is assumed.
For example, the sum a_1 x_1 + a_2 x_2 + a_3 x_3 + · · · + a_n x_n can be written as:

\sum_{i=1}^{n} a_i x_i = a_i x_i , \quad i = 1, 2, 3, . . . , n.
This notation simplifies expressions involving vectors, matrices, and tensors, allowing
for a more compact and elegant form.

Covariant Tensor
A covariant tensor transforms according to a specific rule under a change of coordinates.
The transformation laws for covariant tensors of different ranks are as follows:
Rank-1 Covariant Tensor:
v'_i = \frac{\partial x^j}{\partial x'^i} v_j ,
where vi′ represents the transformed components of the tensor, and v j are the original
components.
Rank-2 Covariant Tensor:
v'_{ij} = \frac{\partial x^k}{\partial x'^i} \frac{\partial x^l}{\partial x'^j} v_{kl} ,
where vij′ represents the transformed components, and vkl are the original components.
Rank-3 Covariant Tensor:
v'_{ijm} = \frac{\partial x^k}{\partial x'^i} \frac{\partial x^l}{\partial x'^j} \frac{\partial x^p}{\partial x'^m} v_{klp} .


Contravariant Tensor
A contravariant tensor transforms oppositely to a covariant tensor. The transformation
laws for contravariant tensors of different ranks are as follows:
Rank-1 Contravariant Tensor:

v'^i = \frac{\partial x'^i}{\partial x^j} v^j ,

where v'^i are the transformed components, and v^j are the original components.
Rank-2 Contravariant Tensor:

v'^{ij} = \frac{\partial x'^i}{\partial x^k} \frac{\partial x'^j}{\partial x^l} v^{kl} .
Rank-3 Contravariant Tensor:

v'^{ijm} = \frac{\partial x'^i}{\partial x^k} \frac{\partial x'^j}{\partial x^l} \frac{\partial x'^m}{\partial x^p} v^{klp} .

Mixed Tensors
A mixed tensor combines both covariant and contravariant components. The transfor-
mation rule for a mixed tensor A'^r_p is given by:

A'^r_p = \frac{\partial x'^r}{\partial x^q} \frac{\partial x^s}{\partial x'^p} A^q_s .

This expresses the combination of the transformation rules for covariant and con-
travariant tensors.

Symmetric Tensor
A tensor is called symmetric if it remains unchanged when any two indices are ex-
changed. For example:
Amn = Anm ,
where Amn is a symmetric tensor. The number of independent components of a sym-
metric tensor in n-dimensional space is given by:

\frac{n(n+1)}{2} .

Antisymmetric Tensor
A tensor is called antisymmetric if it changes sign when any two indices are swapped.
For example:
Amn = − Anm .


The number of independent components of an antisymmetric tensor in n-dimensional


space is given by:
\frac{n(n-1)}{2} .
An example of an antisymmetric tensor is the electromagnetic field tensor in electrody-
namics.
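For instance, in three dimensions a symmetric rank-2 tensor has 3 · 4/2 = 6 independent components and an antisymmetric one has 3 · 2/2 = 3; in four-dimensional spacetime the antisymmetric electromagnetic field tensor has 4 · 3/2 = 6. A one-line check in Python:

independent_symmetric = lambda n: n * (n + 1) // 2
independent_antisymmetric = lambda n: n * (n - 1) // 2
print(independent_symmetric(3), independent_antisymmetric(3))   # 6 3
print(independent_symmetric(4), independent_antisymmetric(4))   # 10 6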

Example
Example 1: Covariant Vector of Gradient of a Scalar Function
We are asked to show that ∂ϕ/∂x^i is a covariant vector, where ϕ is a scalar function. To do
so, we will use the transformation properties of covariant tensors.
Solution:
Let ϕ be a scalar function. The gradient of ϕ, denoted ∂ϕ/∂x^i, represents the rate of
change of the scalar function in the direction of the coordinate x^i.
According to the transformation law for a rank-1 covariant tensor, we must show that

v'_i = \frac{\partial x^j}{\partial x'^i} v_j ,

where v'_i are the components of the transformed vector, and v_j are the components of
the original vector. In the case of the gradient, v_j = ∂ϕ/∂x^j. Since a scalar function takes
the same value at a given point in either coordinate system, the chain rule gives

\frac{\partial \phi}{\partial x'^i} = \frac{\partial x^j}{\partial x'^i} \frac{\partial \phi}{\partial x^j} .

This is exactly the covariant transformation law, and hence the gradient of a scalar
function is indeed a covariant vector.

Example 2: Contravariant Components of dxi


We are asked to show that the differential components dxi are the components of a con-
travariant vector, given that xi are the coordinates of a point in n-dimensional space.
Solution:
Consider the differential change in the coordinates dxi , where xi are the coordinates
in n-dimensional space. The components dxi are infinitesimal changes in the coordi-
nates xi , and we need to show that these components transform as a contravariant
vector.
The transformation law for a rank-1 contravariant tensor is given by:

v'^i = \frac{\partial x'^i}{\partial x^j} v^j ,

where v'^i are the components of the transformed contravariant vector, and v^j are the
components of the original contravariant vector.


In the case of the differential components dxi , the components v j correspond to dx j ,


and the transformation law becomes:

dx'^i = \frac{\partial x'^i}{\partial x^j} dx^j .

This relation is just the total differential of x'^i regarded as a function of the coordinates
x^j, and it has exactly the form of the contravariant transformation law.
Thus, dx^i are the components of a contravariant vector.

Algebra of Tensors
1. Tensors of the Same Type
Tensors are of the same type if they have an equal number of contravariant and co-
variant indices. For example, A^i_{jk} and B^m_{np} are tensors of the same type.

Addition and Subtraction: Addition or subtraction is only possible for tensors of


the same type and rank:
A^i_{jk} ± B^i_{jk} = C^i_{jk} (Possible),

but:

A^i_{jk} ± B^m_{np} ≠ C^x_{yz} (Not Possible).

2. Outer Product of Tensors


If each component of a tensor of rank m is multiplied with every component of a tensor
of rank n, the resulting tensor will have rank m + n. This is known as the outer product
of two tensors. For example:
A^i_{jk} · B^m_{np} = C^{im}_{jknp} .

3. Contraction of a Tensor
The process of equating one contravariant index and one covariant index of a tensor
is called contraction. A single contraction reduces the rank of the tensor by 2. For
example:
A^{ij}_{mnp} \xrightarrow{\; i = m \;} A^{j}_{np} .

4. Inner Product of Tensors


The inner product of tensors can be viewed as the outer product followed by a single
contraction. Examples:

A^i_{jk} B^j_{np} = C^i_{knp} , \quad A^i_{jk} B^{kn}_{p} = C^{in}_{jp} .


5. Levi-Civita Tensor
The Levi-Civita tensor ϵijk is a covariant tensor of rank 3 and is antisymmetric. Its
components are defined as:

0,
 if any two indices are equal,
ϵijk = 1, if i, j, k is an even permutation,

−1, if i, j, k is an odd permutation.

Properties: There are 27 components of ϵijk , with the following specific values:
ϵ123 = ϵ231 = ϵ312 = 1, ϵ132 = ϵ213 = ϵ321 = −1,
and all other components are zero.

6. Kronecker Delta Tensor


The Kronecker delta δji is defined as:
\delta^i_j = \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{if } i \neq j. \end{cases}
Additionally, it can be expressed as:
\delta^i_j = \frac{\partial x^i}{\partial x^j} .

7. Important Results
1. ϵijk ϵimn = δjm δkn − δjn δkm .

2. ϵijk ϵijn = 2δkn .


3. ϵijk ϵijk = 6.
4. For the cross product of two vectors:
(A × B)i = ϵijk A j Bk .
5. For the triple product:
[A × (B × C)]n = Bn (A · C) − Cn (A · B).
6. For the curl of a vector field:

(∇ × V)_i = \epsilon_{ijk} \frac{\partial V_k}{\partial x^j} .

7. For the divergence of a vector field:


∇ · V = \frac{1}{\sqrt{g}} \frac{\partial}{\partial x^i} \left( \sqrt{g}\, V^i \right),
where V i are the contravariant components of V.
8. For the curl of the curl of a vector field:

∇ × (∇ × V) = ∇(∇ · V) − ∇^2 V.
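Results 1, 3 and 4 above can be confirmed numerically by constructing ϵijk and δij explicitly (a NumPy sketch; the test vectors A and B are arbitrary):

import numpy as np

# Levi-Civita symbol eps[i, j, k] with indices 0, 1, 2
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
delta = np.eye(3)

# Result 1: eps_ijk eps_imn = delta_jm delta_kn - delta_jn delta_km
lhs = np.einsum('ijk,imn->jkmn', eps, eps)
rhs = np.einsum('jm,kn->jkmn', delta, delta) - np.einsum('jn,km->jkmn', delta, delta)
print(np.allclose(lhs, rhs))                              # True

# Result 3: eps_ijk eps_ijk = 6
print(np.einsum('ijk,ijk->', eps, eps))                   # 6.0

# Result 4: (A x B)_i = eps_ijk A_j B_k
A = np.array([1.0, 2.0, 3.0]); B = np.array([-1.0, 0.5, 4.0])
print(np.allclose(np.einsum('ijk,j,k->i', eps, A, B), np.cross(A, B)))   # True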


Metric Tensor or Fundamental Tensor


The metric tensor, also known as the fundamental tensor, defines the infinitesimal
distance between two points in a given coordinate system. It is expressed as:

(ds)2 = gij dxi dx j ,


where gij is the metric tensor, a covariant tensor of rank two. In expanded form, it is
given by:
(ds)2 = g11 (dx1 )2 + g22 (dx2 )2 + g33 (dx3 )2 + g12 dx1 dx2 + g21 dx2 dx1 + g13 dx1 dx3 +
g31 dx3 dx1 + g23 dx2 dx3 + g32 dx3 dx2 .
Properties of the Metric Tensor
• gij is a covariant tensor of rank two.
• gij is a symmetric tensor, i.e., gij = g ji .

• gij is a function of coordinates xi .


Metric Tensor in Cartesian Coordinates In Cartesian coordinates ( x, y, z), the met-
ric tensor is given by:

g_{ij} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} .
Here, the infinitesimal distance between two points is:
(ds)2 = (dx )2 + (dy)2 + (dz)2 .
Metric Tensor in Spherical Polar Coordinates In spherical polar coordinates (r, θ, ϕ),
the metric tensor is expressed as:

g_{ij} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & r^2 & 0 \\ 0 & 0 & r^2 \sin^2\theta \end{pmatrix} .

Here, the infinitesimal distance between two points is:


(ds)2 = (dr )2 + r2 (dθ )2 + r2 sin2 θ (dϕ)2 .
Metric Tensor in Cylindrical Coordinates In cylindrical coordinates (ρ, ϕ, z), the
metric tensor is:

g_{ij} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \rho^2 & 0 \\ 0 & 0 & 1 \end{pmatrix} .
Here, the infinitesimal distance between two points is:
(ds)2 = (dρ)2 + ρ2 (dϕ)2 + (dz)2 .
Summary The metric tensors and their associated distance formulas in Cartesian,
spherical polar, and cylindrical coordinates encapsulate the geometric properties of
the respective coordinate systems. These are essential in tensor calculus, differential
geometry, and various applications in physics.
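As a consistency check (a sketch with an arbitrarily chosen point and a small displacement), the spherical-polar line element can be compared with the Cartesian distance between two nearby points:

import numpy as np

r, theta, phi = 2.0, 0.8, 1.1
dr, dth, dph = 1e-6, 2e-6, -1e-6

# (ds)^2 from g_ij = diag(1, r^2, r^2 sin^2(theta))
ds2_metric = dr**2 + (r * dth)**2 + (r * np.sin(theta) * dph)**2

def cartesian(r, th, ph):
    return np.array([r * np.sin(th) * np.cos(ph),
                     r * np.sin(th) * np.sin(ph),
                     r * np.cos(th)])

ds2_cartesian = np.sum((cartesian(r + dr, theta + dth, phi + dph)
                        - cartesian(r, theta, phi))**2)
print(np.isclose(ds2_metric, ds2_cartesian, rtol=1e-4))    # True, to first order in the displacement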


0.17 Probability Theory


Elementary Probability Theory
Probability theory is a branch of mathematics concerned with analyzing random phe-
nomena. It provides a framework for quantifying uncertainty and modeling stochastic
events.

Basic Definitions
• Experiment: An action or procedure resulting in one or more outcomes. Exam-
ple: Tossing a coin.
• Sample Space (S): The set of all possible outcomes of an experiment. Example:
For a single coin toss, S = {Heads, Tails}.
• Event (E): A subset of the sample space. It is the set of outcomes that satisfy a
specific condition. Example: In a dice roll, the event of getting an even number
is E = {2, 4, 6}.
• Probability (P): A measure of the likelihood of an event occurring, satisfying:
0 ≤ P( E) ≤ 1 and P(S) = 1.

Axioms of Probability
The probability of any event is defined by the following axioms:
• P( E) ≥ 0 for all events E (non-negativity).
• P(S) = 1 (certainty).
• For mutually exclusive events E1 , E2 , . . ., we have:
P( E1 ∪ E2 ∪ . . .) = P( E1 ) + P( E2 ) + . . .
(additivity).

Types of Events
• Independent Events: Two events A and B are independent if:
P ( A ∩ B ) = P ( A ) · P ( B ).
Example: Tossing two coins. The outcome of one coin does not affect the other.
• Mutually Exclusive Events: Two events A and B are mutually exclusive if:
P( A ∩ B) = 0.
Example: Rolling a die and getting either a 3 or a 4. These outcomes cannot occur
simultaneously.
• Complementary Events: The complement of an event A is the set of outcomes
not in A, denoted Ac .
P ( A c ) = 1 − P ( A ).


Conditional Probability
The probability of an event A, given that event B has occurred, is defined as:
P( A | B) = \frac{P( A ∩ B)}{P( B)} , \quad where P( B) ≠ 0.
Example: Drawing two cards from a deck without replacement.

Theorem of Total Probability


If B1 , B2 , . . . , Bn are mutually exclusive and exhaustive events, then for any event A:
P( A) = \sum_{i=1}^{n} P( A ∩ B_i ) = \sum_{i=1}^{n} P( A | B_i ) P( B_i ).

Bayes’ Theorem
Bayes’ theorem relates conditional probabilities:
P( B_i | A) = \frac{P( A | B_i ) P( B_i )}{\sum_{j=1}^{n} P( A | B_j ) P( B_j )} ,

where B1 , B2 , . . . , Bn are mutually exclusive and exhaustive events.

Examples
• Example of Basic Probability: A fair six-sided die is rolled. What is the prob-
ability of rolling a number greater than 4? Solution: The sample space is S =
{1, 2, 3, 4, 5, 6}. The favorable outcomes are {5, 6}. Probability:
P( E) = \frac{\text{Number of favorable outcomes}}{\text{Total outcomes}} = \frac{2}{6} = \frac{1}{3} .
• Example of Conditional Probability: A box contains 3 red balls and 2 blue balls.
A ball is drawn at random. If it is not replaced, what is the probability that the
second ball is red, given that the first ball was blue? Solution: The sample space
reduces after the first draw. Probability:
P(\text{Second red} \mid \text{First blue}) = \frac{3}{4} .
• Example of Bayes’ Theorem: A factory has two machines A and B that produce
60% and 40% of the total output, respectively. The defect rates for A and B are
1% and 2%. If a randomly selected product is defective, what is the probability
it was produced by machine B? Solution: Using Bayes’ theorem:
P( B | D ) = \frac{P( D | B) P( B)}{P( D | A) P( A) + P( D | B) P( B)} .

Substituting the values:

P( B | D ) = \frac{(0.02)(0.4)}{(0.01)(0.6) + (0.02)(0.4)} = \frac{0.008}{0.006 + 0.008} = \frac{4}{7} .

A short numerical check of this calculation is sketched below.
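The arithmetic of the last example is easily re-done in a few lines of Python (a sketch using only the numbers quoted above):

# Bayes' theorem for the two-machine defect problem
P_A, P_B = 0.6, 0.4            # P(A), P(B): share of output from each machine
P_D_A, P_D_B = 0.01, 0.02      # P(D | A), P(D | B): defect rates

P_D = P_D_A * P_A + P_D_B * P_B        # total probability of a defect
print((P_D_B * P_B) / P_D, 4 / 7)      # 0.5714... 0.5714...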


Solving Probability Problems using nCr and nPr


The concepts of combinations (nCr) and permutations (nPr) are essential in solving
probability problems. Below, we present some illustrative examples.
Definitions - Permutation (nPr): The number of ways to arrange r objects out of n,
where order matters:
nPr = \frac{n!}{(n − r)!} .
- Combination (nCr): The number of ways to select r objects from n, where order does
not matter:
nCr = \frac{n!}{r!(n − r)!} .
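Both quantities are available directly in Python's standard library (math.perm and math.comb, present since Python 3.8), which is convenient for checking the worked problems that follow:

from math import comb, perm

print(perm(6, 3))    # 120: ordered arrangements of 3 people chosen from 6
print(comb(6, 3))    # 20: unordered selections of 3 people from 6
print(comb(52, 5))   # 2598960: five-card hands from a 52-card deck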
Problem 1: Arranging People Problem: In a group of 6 people, how many ways
can 3 people be selected and arranged in a line?
Solution: This problem involves both combination and permutation: 1. Select 3
people from the group of 6:

Ways to select: \binom{6}{3} = \frac{6!}{3!(6 − 3)!} = \frac{6 · 5 · 4}{3 · 2 · 1} = 20.

2. Arrange the selected 3 people in a line:

Ways to arrange: 3! = 3 · 2 · 1 = 6.

3. Total ways:

Total ways = \binom{6}{3} · 3! = 20 · 6 = 120.
Problem 2: Drawing Cards Problem: A standard deck of 52 cards is given. What
is the probability of drawing 4 aces in a hand of 5 cards?
Solution: 1. Total ways to choose 5 cards from 52:

\binom{52}{5} = \frac{52!}{5!(52 − 5)!} = 2598960.

2. Ways to choose 4 aces from 4:

\binom{4}{4} = 1.
3. Ways to choose the 5th card from the remaining 48 cards:

\binom{48}{1} = 48.

4. Total favorable outcomes:

\binom{4}{4} · \binom{48}{1} = 1 · 48 = 48.

5. Probability:

P(4 aces) = \frac{\text{Favorable outcomes}}{\text{Total outcomes}} = \frac{48}{2598960} ≈ 0.0000185.


Problem 3: Selecting Teams Problem: A class has 10 boys and 8 girls. A team of 5
students is to be selected, with at least 2 girls. How many such teams can be formed?
Solution: We will break the problem into cases based on the number of girls se-
lected:
1. Case 1: 2 girls, 3 boys

Ways to choose 2 girls: \binom{8}{2} = \frac{8 · 7}{2} = 28.

Ways to choose 3 boys: \binom{10}{3} = \frac{10 · 9 · 8}{3 · 2 · 1} = 120.
Total for this case: 28 · 120 = 3360.
2. Case 2: 3 girls, 2 boys

Ways to choose 3 girls: \binom{8}{3} = \frac{8 · 7 · 6}{3 · 2 · 1} = 56.

Ways to choose 2 boys: \binom{10}{2} = \frac{10 · 9}{2} = 45.
Total for this case: 56 · 45 = 2520.
3. Case 3: 4 girls, 1 boy

Ways to choose 4 girls: \binom{8}{4} = \frac{8 · 7 · 6 · 5}{4 · 3 · 2 · 1} = 70.

Ways to choose 1 boy: \binom{10}{1} = 10.
Total for this case: 70 · 10 = 700.
4. Case 4: 5 girls, 0 boys

Ways to choose 5 girls: \binom{8}{5} = 56, and there is only one way to choose 0 boys.

Total for this case: 56.

5. Total number of teams:

3360 + 2520 + 700 + 56 = 6636.
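The case-by-case total can be cross-checked by complementary counting (a Python sketch): count all 5-member teams from the 18 students and subtract those containing fewer than 2 girls.

from math import comb

direct = sum(comb(8, g) * comb(10, 5 - g) for g in range(2, 6))   # 2, 3, 4 or 5 girls
complement = comb(18, 5) - comb(8, 0) * comb(10, 5) - comb(8, 1) * comb(10, 4)
print(direct, complement)   # 6636 6636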
