
Mathematical Formula Handbook

1. Series

Arithmetic and Geometric progressions


A.P.   $S_n = a + (a + d) + (a + 2d) + \cdots + [a + (n-1)d] = \dfrac{n}{2}\,[2a + (n-1)d]$

G.P.   $S_n = a + ar + ar^2 + \cdots + ar^{n-1} = a\,\dfrac{1 - r^n}{1 - r}, \qquad S_\infty = \dfrac{a}{1 - r} \ \text{for } |r| < 1$
(These results also hold for complex series.)

Convergence of series: the ratio test



$S_n = u_1 + u_2 + u_3 + \cdots + u_n$ converges as $n \to \infty$ if $\displaystyle\lim_{n\to\infty}\left|\frac{u_{n+1}}{u_n}\right| < 1$.
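As a quick numerical illustration (a sketch, not part of the original handbook), the ratio test can be applied to the terms $u_n = x^n/n!$ of the exponential series; the value of $x$ below is arbitrary.

```python
# A minimal sketch of the ratio test applied to u_n = x^n / n! (an illustrative choice).
# The ratio |u_(n+1)/u_n| = |x|/(n+1) tends to 0 < 1, so the series converges for any fixed x.
import math

x = 7.3
ratios = [abs((x**(n + 1) / math.factorial(n + 1)) / (x**n / math.factorial(n)))
          for n in (1, 5, 10, 20, 40)]
print(ratios)   # successive ratios decrease towards zero
```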

Convergence of series: the comparison test


If each term in a series of positive terms is less than the corresponding term in a series known to be convergent,
then the given series is also convergent.

Binomial expansion
$$(1 + x)^n = 1 + nx + \frac{n(n-1)}{2!}x^2 + \frac{n(n-1)(n-2)}{3!}x^3 + \cdots$$

If $n$ is a positive integer the series terminates and is valid for all $x$: the term in $x^r$ is ${}^nC_r\,x^r$, where ${}^nC_r \equiv \dfrac{n!}{r!\,(n-r)!}$ is the number of different ways in which an unordered sample of $r$ objects can be selected from a set of $n$ objects without replacement. When $n$ is not a positive integer, the series does not terminate: the infinite series is convergent for $|x| < 1$.
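A quick numerical spot-check of the terminating case (a sketch, not from the handbook; the values of n and x are arbitrary), using math.comb for ${}^nC_r$:

```python
# A minimal check that the terminating binomial expansion reproduces (1 + x)^n
# for a positive integer n; n and x are arbitrary illustrative values.
import math

n, x = 5, 0.3
series = sum(math.comb(n, r) * x**r for r in range(n + 1))
print(series, (1 + x)**n)   # the two values agree because the series terminates
```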

Taylor and Maclaurin Series


If $y(x)$ is well-behaved in the vicinity of $x = a$ then it has a Taylor series,
$$y(x) = y(a + u) = y(a) + u\,\frac{dy}{dx} + \frac{u^2}{2!}\,\frac{d^2y}{dx^2} + \frac{u^3}{3!}\,\frac{d^3y}{dx^3} + \cdots$$
where $u = x - a$ and the differential coefficients are evaluated at $x = a$. A Maclaurin series is a Taylor series with $a = 0$,
$$y(x) = y(0) + x\,\frac{dy}{dx} + \frac{x^2}{2!}\,\frac{d^2y}{dx^2} + \frac{x^3}{3!}\,\frac{d^3y}{dx^3} + \cdots$$

Power series with real variables


$e^x = 1 + x + \dfrac{x^2}{2!} + \cdots + \dfrac{x^n}{n!} + \cdots$   valid for all $x$

$\ln(1 + x) = x - \dfrac{x^2}{2} + \dfrac{x^3}{3} - \cdots + (-1)^{n+1}\dfrac{x^n}{n} + \cdots$   valid for $-1 < x \le 1$

$\cos x = \dfrac{e^{ix} + e^{-ix}}{2} = 1 - \dfrac{x^2}{2!} + \dfrac{x^4}{4!} - \dfrac{x^6}{6!} + \cdots$   valid for all values of $x$

$\sin x = \dfrac{e^{ix} - e^{-ix}}{2i} = x - \dfrac{x^3}{3!} + \dfrac{x^5}{5!} - \cdots$   valid for all values of $x$

$\tan x = x + \dfrac{x^3}{3} + \dfrac{2x^5}{15} + \cdots$   valid for $-\dfrac{\pi}{2} < x < \dfrac{\pi}{2}$

$\tan^{-1} x = x - \dfrac{x^3}{3} + \dfrac{x^5}{5} - \cdots$   valid for $-1 \le x \le 1$

$\sin^{-1} x = x + \dfrac{1}{2}\,\dfrac{x^3}{3} + \dfrac{1\cdot3}{2\cdot4}\,\dfrac{x^5}{5} + \cdots$   valid for $-1 < x < 1$
Integer series
$$\sum_{n=1}^{N} n = 1 + 2 + 3 + \cdots + N = \frac{N(N+1)}{2}$$

$$\sum_{n=1}^{N} n^2 = 1^2 + 2^2 + 3^2 + \cdots + N^2 = \frac{N(N+1)(2N+1)}{6}$$

$$\sum_{n=1}^{N} n^3 = 1^3 + 2^3 + 3^3 + \cdots + N^3 = [1 + 2 + 3 + \cdots + N]^2 = \frac{N^2(N+1)^2}{4}$$

$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \ln 2 \qquad \text{[see expansion of } \ln(1+x)\text{]}$$

$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{2n-1} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \frac{\pi}{4} \qquad \text{[see expansion of } \tan^{-1} x\text{]}$$

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots = \frac{\pi^2}{6}$$

$$\sum_{n=1}^{N} n(n+1)(n+2) = 1\cdot2\cdot3 + 2\cdot3\cdot4 + \cdots + N(N+1)(N+2) = \frac{N(N+1)(N+2)(N+3)}{4}$$

This last result is a special case of the more general formula,

$$\sum_{n=1}^{N} n(n+1)(n+2)\ldots(n+r) = \frac{N(N+1)(N+2)\ldots(N+r)(N+r+1)}{r+2}.$$
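The finite sums above are easy to spot-check numerically; a minimal sketch with an arbitrary N:

```python
# Verify the closed forms for sum n, sum n^2 and sum n(n+1)(n+2); N is an arbitrary choice.
N = 12
assert sum(n for n in range(1, N + 1)) == N * (N + 1) // 2
assert sum(n**2 for n in range(1, N + 1)) == N * (N + 1) * (2 * N + 1) // 6
assert sum(n * (n + 1) * (n + 2) for n in range(1, N + 1)) == N * (N + 1) * (N + 2) * (N + 3) // 4
print("closed forms verified for N =", N)
```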

Plane wave expansion



$$\exp(ikz) = \exp(ikr\cos\theta) = \sum_{l=0}^{\infty} (2l+1)\,i^l\,j_l(kr)\,P_l(\cos\theta),$$
where $P_l(\cos\theta)$ are Legendre polynomials (see section 11) and $j_l(kr)$ are spherical Bessel functions, defined by
$$j_l(\rho) = \sqrt{\frac{\pi}{2\rho}}\;J_{l+1/2}(\rho),$$
with $J_l(x)$ the Bessel function of order $l$ (see section 11).
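A numerical spot-check of the truncated expansion is straightforward with SciPy's spherical Bessel and Legendre routines (a sketch; k, r, θ and the truncation order l_max are arbitrary choices):

```python
# Truncated plane-wave expansion versus exp(ikz) at a single point; all values are illustrative.
import numpy as np
from scipy.special import spherical_jn, eval_legendre

k, r, theta, l_max = 2.0, 1.5, 0.7, 40
partial_sum = sum((2 * l + 1) * 1j**l * spherical_jn(l, k * r) * eval_legendre(l, np.cos(theta))
                  for l in range(l_max + 1))
print(partial_sum, np.exp(1j * k * r * np.cos(theta)))   # the two should agree to many digits
```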

2. Vector Algebra

If $\mathbf{i}$, $\mathbf{j}$, $\mathbf{k}$ are orthonormal vectors and $\mathbf{A} = A_x\mathbf{i} + A_y\mathbf{j} + A_z\mathbf{k}$ then $|\mathbf{A}|^2 = A_x^2 + A_y^2 + A_z^2$. [Orthonormal vectors ≡ orthogonal unit vectors.]

Scalar product

$\mathbf{A}\cdot\mathbf{B} = |\mathbf{A}|\,|\mathbf{B}|\cos\theta$, where $\theta$ is the angle between the vectors
$$\mathbf{A}\cdot\mathbf{B} = A_x B_x + A_y B_y + A_z B_z = \begin{bmatrix} A_x & A_y & A_z \end{bmatrix}\begin{bmatrix} B_x \\ B_y \\ B_z \end{bmatrix}$$
The scalar product is commutative: $\mathbf{A}\cdot\mathbf{B} = \mathbf{B}\cdot\mathbf{A}$.

Equation of a line
A point $\mathbf{r} \equiv (x, y, z)$ lies on a line passing through a point $\mathbf{a}$ and parallel to vector $\mathbf{b}$ if
$$\mathbf{r} = \mathbf{a} + \lambda\mathbf{b}$$
with $\lambda$ a real number.

Equation of a plane
A point $\mathbf{r} \equiv (x, y, z)$ is on a plane if either
(a) $\mathbf{r}\cdot\hat{\mathbf{d}} = |\mathbf{d}|$, where $\mathbf{d}$ is the normal from the origin to the plane, or
(b) $\dfrac{x}{X} + \dfrac{y}{Y} + \dfrac{z}{Z} = 1$, where $X$, $Y$, $Z$ are the intercepts on the axes.

Vector product
$\mathbf{A}\times\mathbf{B} = \hat{\mathbf{n}}\,|\mathbf{A}|\,|\mathbf{B}|\sin\theta$, where $\theta$ is the angle between the vectors and $\hat{\mathbf{n}}$ is a unit vector normal to the plane containing $\mathbf{A}$ and $\mathbf{B}$ in the direction for which $\mathbf{A}$, $\mathbf{B}$, $\hat{\mathbf{n}}$ form a right-handed set of axes.

In determinant form and in matrix form:
$$\mathbf{A}\times\mathbf{B} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ A_x & A_y & A_z \\ B_x & B_y & B_z \end{vmatrix} = \begin{bmatrix} 0 & -A_z & A_y \\ A_z & 0 & -A_x \\ -A_y & A_x & 0 \end{bmatrix}\begin{bmatrix} B_x \\ B_y \\ B_z \end{bmatrix}$$

The vector product is not commutative: $\mathbf{A}\times\mathbf{B} = -\,\mathbf{B}\times\mathbf{A}$.

Scalar triple product



$$\mathbf{A}\times\mathbf{B}\cdot\mathbf{C} = \mathbf{A}\cdot\mathbf{B}\times\mathbf{C} = \begin{vmatrix} A_x & A_y & A_z \\ B_x & B_y & B_z \\ C_x & C_y & C_z \end{vmatrix} = -\,\mathbf{A}\times\mathbf{C}\cdot\mathbf{B}, \ \text{etc.}$$

Vector triple product

$$\mathbf{A}\times(\mathbf{B}\times\mathbf{C}) = (\mathbf{A}\cdot\mathbf{C})\,\mathbf{B} - (\mathbf{A}\cdot\mathbf{B})\,\mathbf{C}, \qquad (\mathbf{A}\times\mathbf{B})\times\mathbf{C} = (\mathbf{A}\cdot\mathbf{C})\,\mathbf{B} - (\mathbf{B}\cdot\mathbf{C})\,\mathbf{A}$$
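A numerical spot-check of the triple-product identities with NumPy (the vectors below are arbitrary):

```python
# Verify A x B . C = A . B x C and the vector triple-product identity for arbitrary vectors.
import numpy as np

A = np.array([1.0, 2.0, -1.0])
B = np.array([0.5, -1.0, 3.0])
C = np.array([2.0, 0.0, 1.0])

print(np.dot(np.cross(A, B), C), np.dot(A, np.cross(B, C)))               # scalar triple product
print(np.cross(A, np.cross(B, C)), np.dot(A, C) * B - np.dot(A, B) * C)   # A x (B x C)
```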

Non-orthogonal basis

$$\mathbf{A} = A_1\mathbf{e}_1 + A_2\mathbf{e}_2 + A_3\mathbf{e}_3$$
$$A_1 = \boldsymbol{\varepsilon}'\cdot\mathbf{A} \qquad\text{where}\qquad \boldsymbol{\varepsilon}' = \frac{\mathbf{e}_2\times\mathbf{e}_3}{\mathbf{e}_1\cdot(\mathbf{e}_2\times\mathbf{e}_3)}$$
Similarly for $A_2$ and $A_3$.

Summation convention
$\mathbf{a} = a_i\mathbf{e}_i$ implies summation over $i = 1\ldots3$
$\mathbf{a}\cdot\mathbf{b} = a_i b_i$
$(\mathbf{a}\times\mathbf{b})_i = \varepsilon_{ijk}\,a_j b_k$ where $\varepsilon_{123} = 1$; $\ \varepsilon_{ijk} = -\varepsilon_{ikj}$
$\varepsilon_{ijk}\varepsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$

3. Matrix Algebra

Unit matrices
The unit matrix $I$ of order $n$ is a square matrix with all diagonal elements equal to one and all off-diagonal elements zero, i.e., $(I)_{ij} = \delta_{ij}$. If $A$ is a square matrix of order $n$, then $AI = IA = A$. Also $I = I^{-1}$.
$I$ is sometimes written as $I_n$ if the order needs to be stated explicitly.

Products
If $A$ is an $(n \times l)$ matrix and $B$ is $(l \times m)$ then the product $AB$ is defined by
$$(AB)_{ij} = \sum_{k=1}^{l} A_{ik}B_{kj}$$
In general $AB \ne BA$.

Transpose matrices
If $A$ is a matrix, then the transpose matrix $A^T$ is such that $(A^T)_{ij} = (A)_{ji}$.

Inverse matrices
If A is a square matrix with non-zero determinant, then its inverse A −1 is such that AA−1 = A−1 A = I.
$$(A^{-1})_{ij} = \frac{\text{cofactor of } A_{ji}}{|A|}$$
where the cofactor of $A_{ji}$ is $(-1)^{i+j}$ times the determinant of the matrix $A$ with the $j$-th row and $i$-th column deleted.

Determinants
If A is a square matrix then the determinant of A, | A| (≡ det A) is defined by
$$|A| = \sum_{i,j,k,\ldots} \varepsilon_{ijk\ldots}\,A_{1i}A_{2j}A_{3k}\ldots$$

where the number of the suffixes is equal to the order of the matrix.

2×2 matrices
 
If $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ then
$$|A| = ad - bc, \qquad A^T = \begin{pmatrix} a & c \\ b & d \end{pmatrix}, \qquad A^{-1} = \frac{1}{|A|}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$

Product rules
$(AB \ldots N)^T = N^T \ldots B^T A^T$
$(AB \ldots N)^{-1} = N^{-1} \ldots B^{-1} A^{-1}$ (if individual inverses exist)
$|AB \ldots N| = |A|\,|B| \ldots |N|$ (if individual matrices are square)

Orthogonal matrices
An orthogonal matrix $Q$ is a square matrix whose columns $\mathbf{q}_i$ form a set of orthonormal vectors. For any orthogonal matrix $Q$,
$$Q^{-1} = Q^T, \qquad |Q| = \pm1, \qquad Q^T \text{ is also orthogonal.}$$

Solving sets of linear simultaneous equations
If $A$ is square then $A\mathbf{x} = \mathbf{b}$ has a unique solution $\mathbf{x} = A^{-1}\mathbf{b}$ if $A^{-1}$ exists, i.e., if $|A| \ne 0$.
If $A$ is square then $A\mathbf{x} = \mathbf{0}$ has a non-trivial solution if and only if $|A| = 0$.
An over-constrained set of equations $A\mathbf{x} = \mathbf{b}$ is one in which $A$ has $m$ rows and $n$ columns, where $m$ (the number of equations) is greater than $n$ (the number of variables). The best solution $\mathbf{x}$ (in the sense that it minimizes the error $|A\mathbf{x} - \mathbf{b}|$) is the solution of the $n$ equations $A^TA\mathbf{x} = A^T\mathbf{b}$. If the columns of $A$ are orthonormal vectors then $\mathbf{x} = A^T\mathbf{b}$.
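A minimal NumPy sketch of the over-constrained case, solving the normal equations $A^TA\mathbf{x} = A^T\mathbf{b}$ and comparing with the library least-squares routine (the matrix and vector below are arbitrary):

```python
# Least-squares solution of an over-constrained system Ax = b (m = 3 equations, n = 2 unknowns).
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.1, 1.9, 3.2])

x_normal = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations  A^T A x = A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)    # library routine for comparison
print(x_normal, x_lstsq)                           # the two solutions agree
```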

Hermitian matrices
The Hermitian conjugate of $A$ is $A^\dagger = (A^*)^T$, where $A^*$ is a matrix each of whose components is the complex conjugate of the corresponding components of $A$. If $A = A^\dagger$ then $A$ is called a Hermitian matrix.

Eigenvalues and eigenvectors


The $n$ eigenvalues $\lambda_i$ and eigenvectors $\mathbf{u}_i$ of an $n \times n$ matrix $A$ are the solutions of the equation $A\mathbf{u} = \lambda\mathbf{u}$. The eigenvalues are the zeros of the polynomial of degree $n$, $P_n(\lambda) = |A - \lambda I|$. If $A$ is Hermitian then the eigenvalues $\lambda_i$ are real and the eigenvectors $\mathbf{u}_i$ are mutually orthogonal. $|A - \lambda I| = 0$ is called the characteristic equation of the matrix $A$.
$$\operatorname{Tr} A = \sum_i \lambda_i, \qquad\text{also}\qquad |A| = \prod_i \lambda_i.$$
If $S$ is a symmetric matrix, $\Lambda$ is the diagonal matrix whose diagonal elements are the eigenvalues of $S$, and $U$ is the matrix whose columns are the normalized eigenvectors of $S$, then
$$U^T S U = \Lambda \qquad\text{and}\qquad S = U\Lambda U^T.$$
If $\mathbf{x}$ is an approximation to an eigenvector of $A$ then $\mathbf{x}^T A\mathbf{x}/(\mathbf{x}^T\mathbf{x})$ (Rayleigh's quotient) is an approximation to the corresponding eigenvalue.
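A minimal NumPy sketch for a symmetric matrix: diagonalize it and compare Rayleigh's quotient for an approximate eigenvector with the exact lowest eigenvalue (the matrix and trial vector are arbitrary):

```python
# Diagonalization U^T S U = Lambda and Rayleigh's quotient for an illustrative symmetric matrix.
import numpy as np

S = np.array([[2.0, 1.0], [1.0, 3.0]])
eigvals, U = np.linalg.eigh(S)            # columns of U are orthonormal eigenvectors
print(np.round(U.T @ S @ U, 10))          # approximately diag(eigvals)

x = np.array([1.0, -0.4])                 # rough guess at the lowest eigenvector
rayleigh = x @ S @ x / (x @ x)
print(rayleigh, eigvals[0])               # Rayleigh's quotient approximates the lowest eigenvalue
```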

Commutators
[ A, B] ≡ AB − BA
[ A, B] = −[ B, A]
[ A, B]† = [ B† , A† ]
[ A + B, C ] = [ A, C ] + [ B, C ]
[ AB, C ] = A[ B, C ] + [ A, C ] B
[ A, [ B, C ]] + [ B, [C, A]] + [C, [ A, B]] = 0

Hermitian algebra

$\mathbf{b}^\dagger = (b_1^*, b_2^*, \ldots)$

Each relation is given below in matrix form, operator form, and bra-ket form.

Hermiticity:   $\mathbf{b}^*\cdot A\cdot\mathbf{c} = (A\cdot\mathbf{b})^*\cdot\mathbf{c}$, $\qquad \displaystyle\int \psi^* O\phi = \int (O\psi)^*\phi$, $\qquad \langle\psi|O|\phi\rangle$

Eigenvalues ($\lambda$ real):   $A\mathbf{u}_i = \lambda^{(i)}\mathbf{u}_i$, $\qquad O\psi_i = \lambda^{(i)}\psi_i$, $\qquad O|i\rangle = \lambda_i|i\rangle$

Orthogonality:   $\mathbf{u}_i\cdot\mathbf{u}_j = 0$, $\qquad \displaystyle\int \psi_i^*\psi_j = 0$, $\qquad \langle i|j\rangle = 0 \quad (i \ne j)$

Completeness:   $\mathbf{b} = \sum_i \mathbf{u}_i(\mathbf{u}_i\cdot\mathbf{b})$, $\qquad \phi = \sum_i \psi_i \displaystyle\int \psi_i^*\phi$, $\qquad \phi = \sum_i |i\rangle\langle i|\phi\rangle$

Rayleigh–Ritz (lowest eigenvalue $\lambda_0$):   $\lambda_0 \le \dfrac{\mathbf{b}^*\cdot A\cdot\mathbf{b}}{\mathbf{b}^*\cdot\mathbf{b}}$, $\qquad \lambda_0 \le \dfrac{\displaystyle\int \psi^* O\psi}{\displaystyle\int \psi^*\psi}$, $\qquad \dfrac{\langle\psi|O|\psi\rangle}{\langle\psi|\psi\rangle}$

6. Trigonometric Formulae

$$\cos^2 A + \sin^2 A = 1, \qquad \sec^2 A - \tan^2 A = 1, \qquad \operatorname{cosec}^2 A - \cot^2 A = 1$$

$$\sin 2A = 2\sin A\cos A, \qquad \cos 2A = \cos^2 A - \sin^2 A, \qquad \tan 2A = \frac{2\tan A}{1 - \tan^2 A}$$

$$\sin(A \pm B) = \sin A\cos B \pm \cos A\sin B \qquad\qquad \cos A\cos B = \frac{\cos(A+B) + \cos(A-B)}{2}$$

$$\cos(A \pm B) = \cos A\cos B \mp \sin A\sin B \qquad\qquad \sin A\sin B = \frac{\cos(A-B) - \cos(A+B)}{2}$$

$$\tan(A \pm B) = \frac{\tan A \pm \tan B}{1 \mp \tan A\tan B} \qquad\qquad \sin A\cos B = \frac{\sin(A+B) + \sin(A-B)}{2}$$

$$\sin A + \sin B = 2\sin\frac{A+B}{2}\cos\frac{A-B}{2} \qquad\qquad \cos^2 A = \frac{1 + \cos 2A}{2}$$

$$\sin A - \sin B = 2\cos\frac{A+B}{2}\sin\frac{A-B}{2} \qquad\qquad \sin^2 A = \frac{1 - \cos 2A}{2}$$

$$\cos A + \cos B = 2\cos\frac{A+B}{2}\cos\frac{A-B}{2} \qquad\qquad \cos^3 A = \frac{3\cos A + \cos 3A}{4}$$

$$\cos A - \cos B = -2\sin\frac{A+B}{2}\sin\frac{A-B}{2} \qquad\qquad \sin^3 A = \frac{3\sin A - \sin 3A}{4}$$

Relations between sides and angles of any plane triangle


In a plane triangle with angles $A$, $B$, and $C$ and sides opposite $a$, $b$, and $c$ respectively,
$$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = \text{diameter of circumscribed circle}$$
$$a^2 = b^2 + c^2 - 2bc\cos A$$
$$a = b\cos C + c\cos B$$
$$\cos A = \frac{b^2 + c^2 - a^2}{2bc}$$
$$\tan\frac{A-B}{2} = \frac{a-b}{a+b}\cot\frac{C}{2}$$
$$\text{area} = \frac{1}{2}ab\sin C = \frac{1}{2}bc\sin A = \frac{1}{2}ca\sin B = \sqrt{s(s-a)(s-b)(s-c)}, \qquad\text{where } s = \tfrac{1}{2}(a+b+c)$$

Relations between sides and angles of any spherical triangle


In a spherical triangle with angles $A$, $B$, and $C$ and sides opposite $a$, $b$, and $c$ respectively,
$$\frac{\sin a}{\sin A} = \frac{\sin b}{\sin B} = \frac{\sin c}{\sin C}$$
$$\cos a = \cos b\cos c + \sin b\sin c\cos A$$
$$\cos A = -\cos B\cos C + \sin B\sin C\cos a$$

7. Hyperbolic Functions

$$\cosh x = \tfrac{1}{2}(e^x + e^{-x}) = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \cdots \qquad \text{valid for all } x$$
$$\sinh x = \tfrac{1}{2}(e^x - e^{-x}) = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \cdots \qquad \text{valid for all } x$$
$$\cosh ix = \cos x \qquad\qquad \cos ix = \cosh x$$
$$\sinh ix = i\sin x \qquad\qquad \sin ix = i\sinh x$$
$$\tanh x = \frac{\sinh x}{\cosh x} \qquad\qquad \operatorname{sech} x = \frac{1}{\cosh x}$$
$$\coth x = \frac{\cosh x}{\sinh x} \qquad\qquad \operatorname{cosech} x = \frac{1}{\sinh x}$$
$$\cosh^2 x - \sinh^2 x = 1$$

For large positive x:


$$\cosh x \approx \sinh x \to \frac{e^x}{2}, \qquad \tanh x \to 1$$
For large negative $x$:
$$\cosh x \approx -\sinh x \to \frac{e^{-x}}{2}, \qquad \tanh x \to -1$$

Relations of the functions


$$\sinh x = -\sinh(-x) \qquad\qquad \operatorname{sech} x = \operatorname{sech}(-x)$$
$$\cosh x = \cosh(-x) \qquad\qquad \operatorname{cosech} x = -\operatorname{cosech}(-x)$$
$$\tanh x = -\tanh(-x) \qquad\qquad \coth x = -\coth(-x)$$

$$\sinh x = \frac{2\tanh(x/2)}{1 - \tanh^2(x/2)} = \frac{\tanh x}{\sqrt{1 - \tanh^2 x}} \qquad\qquad \cosh x = \frac{1 + \tanh^2(x/2)}{1 - \tanh^2(x/2)} = \frac{1}{\sqrt{1 - \tanh^2 x}}$$

$$\tanh x = \sqrt{1 - \operatorname{sech}^2 x} \qquad\qquad \operatorname{sech} x = \sqrt{1 - \tanh^2 x}$$
$$\coth x = \sqrt{\operatorname{cosech}^2 x + 1} \qquad\qquad \operatorname{cosech} x = \sqrt{\coth^2 x - 1}$$
$$\sinh(x/2) = \sqrt{\frac{\cosh x - 1}{2}} \qquad\qquad \cosh(x/2) = \sqrt{\frac{\cosh x + 1}{2}}$$
$$\tanh(x/2) = \frac{\cosh x - 1}{\sinh x} = \frac{\sinh x}{\cosh x + 1}$$
$$\sinh(2x) = 2\sinh x\cosh x \qquad\qquad \tanh(2x) = \frac{2\tanh x}{1 + \tanh^2 x}$$
$$\cosh(2x) = \cosh^2 x + \sinh^2 x = 2\cosh^2 x - 1 = 1 + 2\sinh^2 x$$
$$\sinh(3x) = 3\sinh x + 4\sinh^3 x \qquad\qquad \cosh 3x = 4\cosh^3 x - 3\cosh x \qquad\qquad \tanh(3x) = \frac{3\tanh x + \tanh^3 x}{1 + 3\tanh^2 x}$$

$$\sinh(x \pm y) = \sinh x\cosh y \pm \cosh x\sinh y$$
$$\cosh(x \pm y) = \cosh x\cosh y \pm \sinh x\sinh y$$
$$\tanh(x \pm y) = \frac{\tanh x \pm \tanh y}{1 \pm \tanh x\tanh y}$$
$$\sinh x + \sinh y = 2\sinh\tfrac{1}{2}(x+y)\cosh\tfrac{1}{2}(x-y) \qquad\qquad \cosh x + \cosh y = 2\cosh\tfrac{1}{2}(x+y)\cosh\tfrac{1}{2}(x-y)$$
$$\sinh x - \sinh y = 2\cosh\tfrac{1}{2}(x+y)\sinh\tfrac{1}{2}(x-y) \qquad\qquad \cosh x - \cosh y = 2\sinh\tfrac{1}{2}(x+y)\sinh\tfrac{1}{2}(x-y)$$
$$\cosh x \pm \sinh x = \frac{1 \pm \tanh(x/2)}{1 \mp \tanh(x/2)} = e^{\pm x}$$
$$\tanh x \pm \tanh y = \frac{\sinh(x \pm y)}{\cosh x\cosh y}$$
$$\coth x \pm \coth y = \pm\frac{\sinh(x \pm y)}{\sinh x\sinh y}$$

Inverse functions
$$\sinh^{-1}\frac{x}{a} = \ln\!\left(\frac{x + \sqrt{x^2 + a^2}}{a}\right) \qquad \text{for } -\infty < x < \infty$$
$$\cosh^{-1}\frac{x}{a} = \ln\!\left(\frac{x + \sqrt{x^2 - a^2}}{a}\right) \qquad \text{for } x \ge a$$
$$\tanh^{-1}\frac{x}{a} = \frac{1}{2}\ln\!\left(\frac{a + x}{a - x}\right) \qquad \text{for } x^2 < a^2$$
$$\coth^{-1}\frac{x}{a} = \frac{1}{2}\ln\!\left(\frac{x + a}{x - a}\right) \qquad \text{for } x^2 > a^2$$
$$\operatorname{sech}^{-1}\frac{x}{a} = \ln\!\left(\frac{a}{x} + \sqrt{\frac{a^2}{x^2} - 1}\right) \qquad \text{for } 0 < x \le a$$
$$\operatorname{cosech}^{-1}\frac{x}{a} = \ln\!\left(\frac{a}{x} + \sqrt{\frac{a^2}{x^2} + 1}\right) \qquad \text{for } x \ne 0$$

8. Limits

$n^c x^n \to 0$ as $n \to \infty$ if $|x| < 1$ (any fixed $c$)

$x^n/n! \to 0$ as $n \to \infty$ (any fixed $x$)

$(1 + x/n)^n \to e^x$ as $n \to \infty$; $\qquad x\ln x \to 0$ as $x \to 0$

If $f(a) = g(a) = 0$ then $\displaystyle\lim_{x\to a}\frac{f(x)}{g(x)} = \frac{f'(a)}{g'(a)}$ (l'Hôpital's rule)

9. Differentiation
$$(uv)' = u'v + uv', \qquad \left(\frac{u}{v}\right)' = \frac{u'v - uv'}{v^2}$$

$$(uv)^{(n)} = u^{(n)}v + nu^{(n-1)}v^{(1)} + \cdots + {}^nC_r\,u^{(n-r)}v^{(r)} + \cdots + uv^{(n)} \qquad \text{(Leibniz' theorem)}$$

where ${}^nC_r \equiv \dbinom{n}{r} = \dfrac{n!}{r!\,(n-r)!}$

$$\frac{d}{dx}(\sin x) = \cos x \qquad\qquad \frac{d}{dx}(\sinh x) = \cosh x$$
$$\frac{d}{dx}(\cos x) = -\sin x \qquad\qquad \frac{d}{dx}(\cosh x) = \sinh x$$
$$\frac{d}{dx}(\tan x) = \sec^2 x \qquad\qquad \frac{d}{dx}(\tanh x) = \operatorname{sech}^2 x$$
$$\frac{d}{dx}(\sec x) = \sec x\tan x \qquad\qquad \frac{d}{dx}(\operatorname{sech} x) = -\operatorname{sech} x\tanh x$$
$$\frac{d}{dx}(\cot x) = -\operatorname{cosec}^2 x \qquad\qquad \frac{d}{dx}(\coth x) = -\operatorname{cosech}^2 x$$
$$\frac{d}{dx}(\operatorname{cosec} x) = -\operatorname{cosec} x\cot x \qquad\qquad \frac{d}{dx}(\operatorname{cosech} x) = -\operatorname{cosech} x\coth x$$

10. Integration

Standard forms
$$\int x^n\,dx = \frac{x^{n+1}}{n+1} + c \qquad \text{for } n \ne -1$$
$$\int \frac{1}{x}\,dx = \ln x + c \qquad\qquad \int \ln x\,dx = x(\ln x - 1) + c$$
$$\int e^{ax}\,dx = \frac{1}{a}e^{ax} + c \qquad\qquad \int x e^{ax}\,dx = e^{ax}\left(\frac{x}{a} - \frac{1}{a^2}\right) + c$$
$$\int x\ln x\,dx = \frac{x^2}{2}\left(\ln x - \frac{1}{2}\right) + c$$
$$\int \frac{1}{a^2 + x^2}\,dx = \frac{1}{a}\tan^{-1}\frac{x}{a} + c$$
$$\int \frac{1}{a^2 - x^2}\,dx = \frac{1}{a}\tanh^{-1}\frac{x}{a} + c = \frac{1}{2a}\ln\!\left(\frac{a + x}{a - x}\right) + c \qquad \text{for } x^2 < a^2$$
$$\int \frac{1}{x^2 - a^2}\,dx = -\frac{1}{a}\coth^{-1}\frac{x}{a} + c = \frac{1}{2a}\ln\!\left(\frac{x - a}{x + a}\right) + c \qquad \text{for } x^2 > a^2$$
$$\int \frac{x}{(x^2 \pm a^2)^n}\,dx = \frac{-1}{2(n-1)}\,\frac{1}{(x^2 \pm a^2)^{n-1}} + c \qquad \text{for } n \ne 1$$
$$\int \frac{x}{x^2 \pm a^2}\,dx = \frac{1}{2}\ln(x^2 \pm a^2) + c$$
$$\int \frac{1}{\sqrt{a^2 - x^2}}\,dx = \sin^{-1}\frac{x}{a} + c$$
$$\int \frac{1}{\sqrt{x^2 \pm a^2}}\,dx = \ln\!\left(x + \sqrt{x^2 \pm a^2}\right) + c$$
$$\int \frac{x}{\sqrt{x^2 \pm a^2}}\,dx = \sqrt{x^2 \pm a^2} + c$$
$$\int \sqrt{a^2 - x^2}\,dx = \frac{1}{2}\left[x\sqrt{a^2 - x^2} + a^2\sin^{-1}\frac{x}{a}\right] + c$$

$$\int_0^\infty \frac{1}{(1 + x)\,x^p}\,dx = \pi\operatorname{cosec} p\pi \qquad \text{for } p < 1$$
$$\int_0^\infty \cos(x^2)\,dx = \int_0^\infty \sin(x^2)\,dx = \frac{1}{2}\sqrt{\frac{\pi}{2}}$$
$$\int_{-\infty}^{\infty} \exp(-x^2/2\sigma^2)\,dx = \sigma\sqrt{2\pi}$$
$$\int_{-\infty}^{\infty} x^n\exp(-x^2/2\sigma^2)\,dx = \begin{cases} 1\times3\times5\times\cdots\times(n-1)\,\sigma^{n+1}\sqrt{2\pi} & \text{for } n \ge 2 \text{ and even} \\[1mm] 0 & \text{for } n \ge 1 \text{ and odd} \end{cases}$$
$$\int \sin x\,dx = -\cos x + c \qquad\qquad \int \sinh x\,dx = \cosh x + c$$
$$\int \cos x\,dx = \sin x + c \qquad\qquad \int \cosh x\,dx = \sinh x + c$$
$$\int \tan x\,dx = -\ln(\cos x) + c \qquad\qquad \int \tanh x\,dx = \ln(\cosh x) + c$$
$$\int \operatorname{cosec} x\,dx = \ln(\operatorname{cosec} x - \cot x) + c \qquad\qquad \int \operatorname{cosech} x\,dx = \ln[\tanh(x/2)] + c$$
$$\int \sec x\,dx = \ln(\sec x + \tan x) + c \qquad\qquad \int \operatorname{sech} x\,dx = 2\tan^{-1}(e^x) + c$$
$$\int \cot x\,dx = \ln(\sin x) + c \qquad\qquad \int \coth x\,dx = \ln(\sinh x) + c$$
$$\int \sin mx\,\sin nx\,dx = \frac{\sin(m - n)x}{2(m - n)} - \frac{\sin(m + n)x}{2(m + n)} + c \qquad \text{if } m^2 \ne n^2$$
$$\int \cos mx\,\cos nx\,dx = \frac{\sin(m - n)x}{2(m - n)} + \frac{\sin(m + n)x}{2(m + n)} + c \qquad \text{if } m^2 \ne n^2$$

Standard substitutions
If the integrand is a function of:
$(a^2 - x^2)$ or $\sqrt{a^2 - x^2}$: substitute $x = a\sin\theta$ or $x = a\cos\theta$
$(x^2 + a^2)$ or $\sqrt{x^2 + a^2}$: substitute $x = a\tan\theta$ or $x = a\sinh\theta$
$(x^2 - a^2)$ or $\sqrt{x^2 - a^2}$: substitute $x = a\sec\theta$ or $x = a\cosh\theta$

If the integrand is a rational function of $\sin x$ or $\cos x$ or both, substitute $t = \tan(x/2)$ and use the results:
$$\sin x = \frac{2t}{1 + t^2}, \qquad \cos x = \frac{1 - t^2}{1 + t^2}, \qquad dx = \frac{2\,dt}{1 + t^2}.$$

If the integrand is of the form:
$\displaystyle\int \frac{dx}{(ax + b)\sqrt{px + q}}$: substitute $px + q = u^2$
$\displaystyle\int \frac{dx}{(ax + b)\sqrt{px^2 + qx + r}}$: substitute $ax + b = \dfrac{1}{u}$.
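The half-angle identities used in the $t = \tan(x/2)$ substitution can be checked numerically at any point (a sketch with an arbitrary x):

```python
# Check the t = tan(x/2) substitution identities at an arbitrary x.
import math

x = 1.1
t = math.tan(x / 2)
print(math.sin(x), 2 * t / (1 + t**2))        # sin x = 2t / (1 + t^2)
print(math.cos(x), (1 - t**2) / (1 + t**2))   # cos x = (1 - t^2) / (1 + t^2)
```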

Integration by parts
$$\int_a^b u\,dv = \Big[\,uv\,\Big]_a^b - \int_a^b v\,du$$

Differentiation of an integral
If f ( x, α ) is a function of x containing a parameter α and the limits of integration a and b are functions of α then

$$\frac{d}{d\alpha}\int_{a(\alpha)}^{b(\alpha)} f(x, \alpha)\,dx = f(b, \alpha)\frac{db}{d\alpha} - f(a, \alpha)\frac{da}{d\alpha} + \int_{a(\alpha)}^{b(\alpha)} \frac{\partial f(x, \alpha)}{\partial\alpha}\,dx.$$
Special case,
$$\frac{d}{dx}\int_a^x f(y)\,dy = f(x).$$
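A numerical sketch of differentiation under the integral sign, using the illustrative choice $f(x, \alpha) = e^{-\alpha x^2}$ with limits $a(\alpha) = 0$ and $b(\alpha) = \alpha$ (these choices are assumptions made for the example, not taken from the handbook):

```python
# Compare a finite-difference derivative of F(alpha) = integral_0^alpha exp(-alpha x^2) dx
# with the boundary-plus-interior formula above.
import numpy as np
from scipy.integrate import quad

def F(alpha):
    return quad(lambda x: np.exp(-alpha * x**2), 0.0, alpha)[0]

alpha, h = 1.3, 1e-5
numeric = (F(alpha + h) - F(alpha - h)) / (2 * h)

boundary = np.exp(-alpha * alpha**2) * 1.0                              # f(b, alpha) * db/dalpha
interior = quad(lambda x: -x**2 * np.exp(-alpha * x**2), 0.0, alpha)[0]  # integral of df/dalpha
print(numeric, boundary + interior)    # a(alpha) = 0 is constant, so it contributes nothing
```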

Dirac δ-‘function’
$$\delta(t - \tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \exp[i\omega(t - \tau)]\,d\omega.$$
If $f(t)$ is an arbitrary function of $t$ then $\displaystyle\int_{-\infty}^{\infty} \delta(t - \tau)f(t)\,dt = f(\tau)$.
$$\delta(t) = 0 \ \text{if } t \ne 0, \qquad\text{also}\qquad \int_{-\infty}^{\infty} \delta(t)\,dt = 1$$

Reduction formulae

Factorials

$n! = n(n-1)(n-2)\ldots1$, $\qquad 0! = 1$.
Stirling's formula for large $n$: $\ln(n!) \approx n\ln n - n$.
For any $p > -1$, $\displaystyle\int_0^\infty x^p e^{-x}\,dx = p\int_0^\infty x^{p-1}e^{-x}\,dx = p!$.   $\left(-\tfrac{1}{2}\right)! = \sqrt{\pi}$, $\left(\tfrac{1}{2}\right)! = \dfrac{\sqrt{\pi}}{2}$, etc.
For any $p, q > -1$, $\displaystyle\int_0^1 x^p(1 - x)^q\,dx = \dfrac{p!\,q!}{(p + q + 1)!}$.

Trigonometrical

If m, n are integers,
$$\int_0^{\pi/2} \sin^m\theta\,\cos^n\theta\,d\theta = \frac{m-1}{m+n}\int_0^{\pi/2} \sin^{m-2}\theta\,\cos^n\theta\,d\theta = \frac{n-1}{m+n}\int_0^{\pi/2} \sin^m\theta\,\cos^{n-2}\theta\,d\theta$$
and can therefore be reduced eventually to one of the following integrals
$$\int_0^{\pi/2} \sin\theta\cos\theta\,d\theta = \frac{1}{2}, \qquad \int_0^{\pi/2} \sin\theta\,d\theta = 1, \qquad \int_0^{\pi/2} \cos\theta\,d\theta = 1, \qquad \int_0^{\pi/2} d\theta = \frac{\pi}{2}.$$

Other

If $I_n = \displaystyle\int_0^\infty x^n\exp(-\alpha x^2)\,dx$ then $I_n = \dfrac{(n-1)}{2\alpha}I_{n-2}$, $\qquad I_0 = \dfrac{1}{2}\sqrt{\dfrac{\pi}{\alpha}}$, $\qquad I_1 = \dfrac{1}{2\alpha}$.

11. Differential Equations

Diffusion (conduction) equation


$$\frac{\partial\psi}{\partial t} = \kappa\nabla^2\psi$$

Wave equation
$$\nabla^2\psi = \frac{1}{c^2}\frac{\partial^2\psi}{\partial t^2}$$

Legendre’s equation
$$(1 - x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + l(l+1)y = 0,$$
solutions of which are Legendre polynomials $P_l(x)$, where $P_l(x) = \dfrac{1}{2^l\,l!}\left(\dfrac{d}{dx}\right)^{\!l}\left(x^2 - 1\right)^l$ (Rodrigues' formula), so that
$P_0(x) = 1$, $\ P_1(x) = x$, $\ P_2(x) = \frac{1}{2}(3x^2 - 1)$, etc.
2

Recursion relation

$$P_l(x) = \frac{1}{l}\left[(2l - 1)\,x P_{l-1}(x) - (l - 1)\,P_{l-2}(x)\right]$$

Orthogonality

$$\int_{-1}^{1} P_l(x)P_{l'}(x)\,dx = \frac{2}{2l+1}\,\delta_{ll'}$$
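A minimal sketch building $P_0, \ldots, P_4$ on a grid with the recursion relation above and checking the orthogonality integral by a simple numerical sum (grid size and maximum order are arbitrary choices):

```python
# Legendre polynomials from the recursion relation, with a crude check of orthogonality.
import numpy as np

x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]
P = [np.ones_like(x), x.copy()]                                  # P_0 and P_1
for l in range(2, 5):
    P.append(((2 * l - 1) * x * P[l - 1] - (l - 1) * P[l - 2]) / l)

for l in range(5):
    for lp in range(5):
        integral = np.sum(P[l] * P[lp]) * dx                     # simple Riemann sum
        expected = 2.0 / (2 * l + 1) if l == lp else 0.0
        assert abs(integral - expected) < 1e-3
print("recursion and orthogonality verified for l, l' <= 4")
```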

Bessel’s equation
$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2 - m^2)y = 0,$$
solutions of which are Bessel functions $J_m(x)$ of order $m$.

Series form of Bessel functions of the first kind


$$J_m(x) = \sum_{k=0}^{\infty} \frac{(-1)^k\,(x/2)^{m+2k}}{k!\,(m+k)!} \qquad (\text{integer } m).$$
The same general form holds for non-integer $m > 0$.

13. Functions of Several Variables

If $\phi = f(x, y, z, \ldots)$ then $\dfrac{\partial\phi}{\partial x}$ implies differentiation with respect to $x$ keeping $y, z, \ldots$ constant.
$$d\phi = \frac{\partial\phi}{\partial x}dx + \frac{\partial\phi}{\partial y}dy + \frac{\partial\phi}{\partial z}dz + \cdots \qquad\text{and}\qquad \delta\phi \approx \frac{\partial\phi}{\partial x}\delta x + \frac{\partial\phi}{\partial y}\delta y + \frac{\partial\phi}{\partial z}\delta z + \cdots$$
where $x, y, z, \ldots$ are independent variables. $\dfrac{\partial\phi}{\partial x}$ is also written as $\left(\dfrac{\partial\phi}{\partial x}\right)_{y,\ldots}$ when the variables kept constant need to be stated explicitly.
If $\phi$ is a well-behaved function then $\dfrac{\partial^2\phi}{\partial x\,\partial y} = \dfrac{\partial^2\phi}{\partial y\,\partial x}$ etc.
If $\phi = f(x, y)$,
$$\left(\frac{\partial\phi}{\partial x}\right)_y = \frac{1}{\left(\dfrac{\partial x}{\partial\phi}\right)_y}, \qquad \left(\frac{\partial x}{\partial y}\right)_\phi\left(\frac{\partial y}{\partial\phi}\right)_x\left(\frac{\partial\phi}{\partial x}\right)_y = -1.$$

Taylor series for two variables


If $\phi(x, y)$ is well-behaved in the vicinity of $x = a$, $y = b$ then it has a Taylor series
$$\phi(x, y) = \phi(a + u, b + v) = \phi(a, b) + u\frac{\partial\phi}{\partial x} + v\frac{\partial\phi}{\partial y} + \frac{1}{2!}\left(u^2\frac{\partial^2\phi}{\partial x^2} + 2uv\frac{\partial^2\phi}{\partial x\,\partial y} + v^2\frac{\partial^2\phi}{\partial y^2}\right) + \cdots$$
where $x = a + u$, $y = b + v$ and the differential coefficients are evaluated at $x = a$, $y = b$.

Stationary points
A function $\phi = f(x, y)$ has a stationary point when $\dfrac{\partial\phi}{\partial x} = \dfrac{\partial\phi}{\partial y} = 0$. Unless $\dfrac{\partial^2\phi}{\partial x^2} = \dfrac{\partial^2\phi}{\partial y^2} = \dfrac{\partial^2\phi}{\partial x\,\partial y} = 0$, the following conditions determine whether it is a minimum, a maximum or a saddle point.

Minimum: $\dfrac{\partial^2\phi}{\partial x^2} > 0$ or $\dfrac{\partial^2\phi}{\partial y^2} > 0$, and $\dfrac{\partial^2\phi}{\partial x^2}\,\dfrac{\partial^2\phi}{\partial y^2} > \left(\dfrac{\partial^2\phi}{\partial x\,\partial y}\right)^2$

Maximum: $\dfrac{\partial^2\phi}{\partial x^2} < 0$ or $\dfrac{\partial^2\phi}{\partial y^2} < 0$, and $\dfrac{\partial^2\phi}{\partial x^2}\,\dfrac{\partial^2\phi}{\partial y^2} > \left(\dfrac{\partial^2\phi}{\partial x\,\partial y}\right)^2$

Saddle point: $\dfrac{\partial^2\phi}{\partial x^2}\,\dfrac{\partial^2\phi}{\partial y^2} < \left(\dfrac{\partial^2\phi}{\partial x\,\partial y}\right)^2$

If $\dfrac{\partial^2\phi}{\partial x^2} = \dfrac{\partial^2\phi}{\partial y^2} = \dfrac{\partial^2\phi}{\partial x\,\partial y} = 0$ the character of the turning point is determined by the next higher derivative.
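A minimal sketch of these classification rules, applied to the illustrative function $\phi(x, y) = x^2 + 3y^2$ at its stationary point $(0, 0)$ (the function is an assumption made for the example):

```python
# Classify a stationary point from the second derivatives of phi(x, y) = x^2 + 3y^2 at (0, 0).
phi_xx, phi_yy, phi_xy = 2.0, 6.0, 0.0        # analytic second derivatives at the stationary point

if phi_xx * phi_yy > phi_xy**2:
    kind = "minimum" if phi_xx > 0 else "maximum"
elif phi_xx * phi_yy < phi_xy**2:
    kind = "saddle point"
else:
    kind = "undetermined (higher derivatives needed)"
print(kind)                                    # expect "minimum"
```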

Changing variables: the chain rule


If φ = f ( x, y, . . .) and the variables x, y, . . . are functions of independent variables u, v, . . . then

$$\frac{\partial\phi}{\partial u} = \frac{\partial\phi}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial\phi}{\partial y}\frac{\partial y}{\partial u} + \cdots$$
$$\frac{\partial\phi}{\partial v} = \frac{\partial\phi}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial\phi}{\partial y}\frac{\partial y}{\partial v} + \cdots$$
etc.

Changing variables in surface and volume integrals – Jacobians
If an area $A$ in the $x, y$ plane maps into an area $A'$ in the $u, v$ plane then
$$\int_A f(x, y)\,dx\,dy = \int_{A'} f(u, v)\,J\,du\,dv \qquad\text{where}\qquad J = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[2mm] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix}$$
The Jacobian $J$ is also written as $\dfrac{\partial(x, y)}{\partial(u, v)}$. The corresponding formula for volume integrals is
$$\int_V f(x, y, z)\,dx\,dy\,dz = \int_{V'} f(u, v, w)\,J\,du\,dv\,dw \qquad\text{where now}\qquad J = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} & \dfrac{\partial x}{\partial w} \\[2mm] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} & \dfrac{\partial y}{\partial w} \\[2mm] \dfrac{\partial z}{\partial u} & \dfrac{\partial z}{\partial v} & \dfrac{\partial z}{\partial w} \end{vmatrix}$$

14. Fourier Series and Transforms

Fourier series
If y( x) is a function defined in the range −π ≤ x ≤ π then
$$y(x) \approx c_0 + \sum_{m=1}^{M} c_m\cos mx + \sum_{m=1}^{M'} s_m\sin mx$$
where the coefficients are
$$c_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} y(x)\,dx$$
$$c_m = \frac{1}{\pi}\int_{-\pi}^{\pi} y(x)\cos mx\,dx \qquad (m = 1, \ldots, M)$$
$$s_m = \frac{1}{\pi}\int_{-\pi}^{\pi} y(x)\sin mx\,dx \qquad (m = 1, \ldots, M')$$
with convergence to $y(x)$ as $M, M' \to \infty$ for all points where $y(x)$ is continuous.
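A minimal sketch computing the first few coefficients by numerical quadrature for the illustrative choice $y(x) = x$ on $(-\pi, \pi)$, for which $c_0 = c_m = 0$ and $s_m = 2(-1)^{m+1}/m$:

```python
# Fourier coefficients of y(x) = x on (-pi, pi) by numerical integration.
import numpy as np
from scipy.integrate import quad

y = lambda x: x
c0 = quad(y, -np.pi, np.pi)[0] / (2 * np.pi)
cm = [quad(lambda x, m=m: y(x) * np.cos(m * x), -np.pi, np.pi)[0] / np.pi for m in range(1, 4)]
sm = [quad(lambda x, m=m: y(x) * np.sin(m * x), -np.pi, np.pi)[0] / np.pi for m in range(1, 4)]
print(c0, cm, sm)   # expect c0 = 0, cm = 0 and sm = [2, -1, 2/3]
```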

Fourier series for other ranges


Variable $t$, range $0 \le t \le T$ (i.e., a periodic function of time with period $T$, frequency $\omega = 2\pi/T$):
$$y(t) \approx c_0 + \sum c_m\cos m\omega t + \sum s_m\sin m\omega t$$
where
$$c_0 = \frac{\omega}{2\pi}\int_0^T y(t)\,dt, \qquad c_m = \frac{\omega}{\pi}\int_0^T y(t)\cos m\omega t\,dt, \qquad s_m = \frac{\omega}{\pi}\int_0^T y(t)\sin m\omega t\,dt.$$
Variable $x$, range $0 \le x \le L$:
$$y(x) \approx c_0 + \sum c_m\cos\frac{2m\pi x}{L} + \sum s_m\sin\frac{2m\pi x}{L}$$
where
$$c_0 = \frac{1}{L}\int_0^L y(x)\,dx, \qquad c_m = \frac{2}{L}\int_0^L y(x)\cos\frac{2m\pi x}{L}\,dx, \qquad s_m = \frac{2}{L}\int_0^L y(x)\sin\frac{2m\pi x}{L}\,dx.$$

18. Statistics

Mean and Variance


A random variable X has a distribution over some subset x of the real numbers. When the distribution of X is
discrete, the probability that X = x i is Pi . When the distribution is continuous, the probability that X lies in an
interval δx is f ( x)δx, where f ( x) is the probability density function.
Mean $\mu = E(X) = \sum_i P_i x_i$ or $\displaystyle\int x f(x)\,dx$.
Variance $\sigma^2 = V(X) = E[(X - \mu)^2] = \sum_i P_i (x_i - \mu)^2$ or $\displaystyle\int (x - \mu)^2 f(x)\,dx$.

Probability distributions
Error function: $\operatorname{erf}(x) = \dfrac{2}{\sqrt{\pi}}\displaystyle\int_0^x e^{-y^2}\,dy$

Binomial: $f(x) = \dbinom{n}{x} p^x q^{n-x}$ where $q = 1 - p$, $\ \mu = np$, $\ \sigma^2 = npq$, $\ p < 1$.

Poisson: $f(x) = \dfrac{\mu^x}{x!}e^{-\mu}$, and $\sigma^2 = \mu$

Normal: $f(x) = \dfrac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\dfrac{(x - \mu)^2}{2\sigma^2}\right)$

Weighted sums of random variables


If $W = aX + bY$ then $E(W) = aE(X) + bE(Y)$. If $X$ and $Y$ are independent then $V(W) = a^2V(X) + b^2V(Y)$.

Statistics of a data sample $x_1, \ldots, x_n$


Sample mean $\bar{x} = \dfrac{1}{n}\sum x_i$

Sample variance $s^2 = \dfrac{1}{n}\sum (x_i - \bar{x})^2 = \dfrac{1}{n}\sum x_i^2 - \bar{x}^2 = E(x^2) - [E(x)]^2$

Regression (least squares fitting)


To fit a straight line by least squares to $n$ pairs of points $(x_i, y_i)$, model the observations by $y_i = \alpha + \beta(x_i - \bar{x}) + \epsilon_i$, where the $\epsilon_i$ are independent samples of a random variable with zero mean and variance $\sigma^2$.

Sample statistics: $s_x^2 = \dfrac{1}{n}\sum (x_i - \bar{x})^2$, $\qquad s_y^2 = \dfrac{1}{n}\sum (y_i - \bar{y})^2$, $\qquad s_{xy}^2 = \dfrac{1}{n}\sum (x_i - \bar{x})(y_i - \bar{y})$.

Estimators: $\hat{\alpha} = \bar{y}$, $\quad \hat{\beta} = \dfrac{s_{xy}^2}{s_x^2}$; $\quad E(Y \text{ at } x) = \hat{\alpha} + \hat{\beta}(x - \bar{x})$; $\quad \hat{\sigma}^2 = \dfrac{n}{n - 2}\times(\text{residual variance})$,

where residual variance $= \dfrac{1}{n}\sum\left\{y_i - \hat{\alpha} - \hat{\beta}(x_i - \bar{x})\right\}^2 = s_y^2 - \dfrac{s_{xy}^4}{s_x^2}$.

Estimates for the variances of $\hat{\alpha}$ and $\hat{\beta}$ are $\dfrac{\hat{\sigma}^2}{n}$ and $\dfrac{\hat{\sigma}^2}{n s_x^2}$.

Correlation coefficient: $\hat{\rho} = r = \dfrac{s_{xy}^2}{s_x s_y}$.
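A minimal sketch of these estimators applied to a small illustrative data set (the numbers are arbitrary):

```python
# Least-squares fit y_i = alpha + beta (x_i - xbar) using the sample-statistic formulas above.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = len(x)

xbar, ybar = x.mean(), y.mean()
sx2 = ((x - xbar)**2).mean()
sxy = ((x - xbar) * (y - ybar)).mean()

alpha_hat, beta_hat = ybar, sxy / sx2
residual_var = ((y - alpha_hat - beta_hat * (x - xbar))**2).mean()
sigma2_hat = n / (n - 2) * residual_var
r = sxy / np.sqrt(sx2 * ((y - ybar)**2).mean())
print(alpha_hat, beta_hat, sigma2_hat, r)
```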
