Clifford Algebra: Essential Definitions
Stephen Crowley
Email: stephen.crowley@hushmail.com
March 4, 2012
Abstract. This article distills many of the essential definitions from the very thorough book, Clifford
Algebras: An Introduction, by Dr D.J.H. Garling, with some minor additions.
Table of contents
1. Definitions
  1.1. Notation and Symbols
  1.2. Groups
    1.2.1. Subgroups
    1.2.2. Quotient Groups
    1.2.3. Exact Sequences, Centers, and Centralizers
    1.2.4. Specific Instances
    1.2.5. Mappings
    1.2.6. Dihedral Groups
    1.2.7. Quaternions
  1.3. Vector Spaces
    1.3.1. Linear Subspaces
    1.3.2. Endomorphisms
    1.3.3. Duality of Vector Spaces
  1.4. Algebras, Representations, and Modules
    1.4.1. Super-algebras
    1.4.2. Ideals
    1.4.3. Representations
    1.4.4. The Exponential Function
    1.4.5. Group Representations
    1.4.6. Modules, Submodules, and Direct Sums
    1.4.7. Simple Modules
    1.4.8. Semi-simple modules
  1.5. Multilinear Algebra
    1.5.1. Multilinear Mappings
    1.5.2. Tensor Products
    1.5.3. The Trace
    1.5.4. Alternating Mappings and the Exterior Algebra: Fermionic Fock Spaces
    1.5.5. The Symmetric Tensor Algebra: Bosonic Fock Spaces
    1.5.6. Creation and Annihilation Operators
    1.5.7. Tensor Products of Algebras
    1.5.8. Tensor Products of Super-Algebras
  1.6. Quadratic Forms
    1.6.1. Real Quadratic Forms
    1.6.2. Orthogonality
    1.6.3. Diagonalization
    1.6.4. Adjoint Mappings
    1.6.5. Complex Inner-Product Spaces
  1.7. Clifford Algebras
    1.7.1. Universality
    1.7.2. Representation of A0,3
    1.7.3. Spin(8)
2. Modular Forms
  2.1. Modular Transformations
    2.1.1. Definitions
3. Physics
  3.1. Particles with Spin 1/2
  3.2. The Dirac Operator
    3.2.1. The Laplacian
  3.3. Maxwell's Equations for an Electromagnetic Field
Bibliography
1. Definitions
The contents of this paper are distilled and in many instances copied verbatim from the very thorough [1].
1.1. Notation and Symbols.
The symbol Z denotes the set of integers. The symbol “∀” represents the expressions “for all” or “for
each”. Likewise, the symbol “∈” means “is an element of” or “in”. Additional symbols and notations will be
introduced as needed.
1.2. Groups.
A group is a non-empty set G together with a composition law, a mapping (g, h) → gh from G × G to G, which satisfies
1. (gh)j = g(hj) ∀g, h, j ∈ G (associativity)
2. there exists e ∈ G such that eg = ge = g ∀g ∈ G and
3. ∀g ∈ G there exists g⁻¹ ∈ G such that gg⁻¹ = g⁻¹g = e
It follows that the identity element e is unique and that ∀g ∈ G the inverse g⁻¹ is also unique. A group
G is said to be abelian (named after the Norwegian mathematician Niels Henrik Abel), or commutative, if
gh = hg ∀g, h ∈ G. If G is commutative then the law of composition is often written as addition: (g, h) → g + h,
in which case the identity is denoted by 0 and the inverse of g by −g.
1.2.1. Subgroups.
A non-empty subset H of a group G is a subgroup of G if h1h2 ∈ H and h−1 ∈ H whenever h ∈ H then
H becomes a group under the law of composition inherited from G and this situation is denoted by H ⊆ G
if H is possibly equal to G and H ⊂ G if H is strictly contained in and definately not equal to G.
If A ⊆ G then there is a smallest subgroup Gp(A) of G which contains A called the subgroup generated
by A. If A = {g} is a singleton then we write Gp(g) for Gp(A). Then Gp(g) = {g n: n ∈ Z} where g 0 = e, gn
is the product of n copies of g when n > 0, and g n is the product of |n| copies of g −1 when n < 0. A group
G is cyclic if G = Gp(g) for some g ∈ G. If G has a finite number of elements then the order o(G) of G is
the number of elements of G. If g ∈ G then the order o(g) of g is the order of the group o(Gp(g)).
A mapping θ: G → H from a group G to a group H is called a homomorphism, or simply a morphism, if
θ(g1 g2) = θ(g1) θ(g2) ∀g1, g2 ∈ G
It follows that θ maps the identity in G to the identity in H and that θ(g⁻¹) = θ(g)⁻¹. A bijective morphism
is called an isomorphism, and an isomorphism G → G is called an automorphism of G. The set Aut(G) of
automorphisms of G forms a group when the law of composition is taken as the composition of mappings.
It should be pointed out that a homomorphism is not the same as a homeomorphism. The Greek meaning
of homomorphism is “to form alike” whereas a homeomorphism is a “continuous transformation”.
1.2.2. Quotient Groups.
Suppose that K is a subgroup of a group G. K is a normal subgroup of G if gKg⁻¹ = K ∀g ∈ G. In this case
the set G/K of cosets gK = {gk: k ∈ K} is a group, the quotient group, under the composition law
(g1K)(g2K) = g1g2K, and the quotient mapping q: G → G/K defined by q(g) = gK is a morphism whose kernel is K.
1.2.3. Exact Sequences, Centers, and Centralizers.
Suppose that G1, …, Gk are groups and that there are morphisms θ0, θ1, …, θk, where θj maps Gj into Gj+1
and G0 = Gk+1 = 1 is the trivial group. Then the diagram
1 → G1 → G2 → ⋯ → Gk → 1
is an exact sequence if θj−1(Gj−1) is the kernel of θj ∀1 ≤ j ≤ k. When k = 3 the sequence is a short exact
sequence. For example, if K is a normal subgroup of G and q: G → G/K is the quotient mapping then
1 → K → G → G/K → 1
where the first mapping is the inclusion and the second is q, is a short exact sequence.
is a short exact sequence. If A is a subset of a group G then the centralizer CG(A) of A in G is defined as
CG(A) = {g ∈ G: ga = ag∀a ∈ A}
Similarly, the center Z(G) = CG(G) is defined as
Z(G) = {g ∈ G: gh = hg∀h ∈ G}
and is a normal subgroup of G. The product of two groups G1 × G2 is a group when the composition law is
defined by
(g1, g2)(h1, h2) = (g1h1, g2h2)
The subgroup G1 × {e2} is identified with G1 and the subgroup {e1} × G2 with G2.
1.2.4. Specific Instances.
The set of real numbers R forms a commutative group when addition is the composition law. The set of
non-zero real numbers R∗ is a group under multiplication. The set of integers Z is a subgroup of R. Any two
groups of order 2 are isomorphic. Denote the multiplicative subgroup {+1, −1} of R∗ by D2; the additive
group {0, 1}, which is isomorphic to the quotient group Z/2Z, is also denoted by D2. Though they are small, these groups
play fundamental roles in the theory of Clifford algebras and other branches of mathematics and physics.
Suppose there is a short exact sequence
1 → D2 → G1 → G2 → 1
where the first mapping is j and the second is θ. Then j(D2) is a normal subgroup of G1 of order 2, from which it
follows that j(D2) is contained in the center of G1. If g ∈ G1 then we write −g for j(−1)g. Then θ(g) = θ(−g),
and if h ∈ G2 then θ⁻¹{h} = {−g, g} for some g ∈ G1, in which case we say that G1 is a double cover of G2.
Double covers play fundamental roles in the theory
of spin groups. The complex numbers C form a commutative group under addition and R is a subgroup of
C. The set C∗ of non-zero complex numbers is a group under multiplication. The set T = {z ∈ C:|z |=1} is
a subgroup of C∗. There is also the short exact sequence
0 → Z → R → T → 1
where the first mapping is the inclusion and the second is q(θ) = e^{2πiθ}. The subset
Tn = {e^{2πij/n}: 0 ≤ j < n} = {z ∈ C: zⁿ = 1} is a cyclic subgroup of T of order n. Conversely, if G = Gp(g) is a
cyclic group of order n then the mapping g^k → e^{2πik/n} is an isomorphism of G onto Tn.
1.2.5. Mappings.
A bijective mapping of a set X onto itself is called a permutation. The set ΣX of permutations of X
is a group under the composition of mappings. The permutation set ΣX is noncommutative if X contains
at least 3 elements. The group of permutations of the set {1, …, n} is denoted by Σn. A transposition is a
permutation which fixes all but 2 of the elements. The permutation group Σn has a normal subgroup An
of order n!/2 composed of all the permutations that can be expressed as the product of an even number of
transpositions. Thus, we have the short exact sequence
1 → An → Σn → D2 → 1
where the first mapping is the inclusion and the second is the signature ε, defined by
ε(σ) = +1 if σ ∈ An, and ε(σ) = −1 otherwise
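As a concrete check (a short sketch in plain Python; the helper functions are ours, not the text's), the signature can be computed by counting inversions, and one can verify on random elements of Σ5 that ε(στ) = ε(σ)ε(τ), i.e. that ε is a morphism onto D2 = {+1, −1}:

import itertools, random

def signature(p):
    # p is a tuple giving a permutation of 0..n-1; count inversions
    inversions = sum(1 for i, j in itertools.combinations(range(len(p)), 2) if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

n = 5
perms = [tuple(random.sample(range(n), n)) for _ in range(20)]
for s in perms:
    for t in perms:
        assert signature(compose(s, t)) == signature(s) * signature(t)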
1.2.6. Dihedral Groups.
The full dihedral group D is the group of isometries of the complex plane C which fix the origin.
An element of D is either a rotation Rθ: z → e^{iθ}z or a reflection Sθ: z → e^{iθ}z̄. The set Rot of rotations is a subgroup
of D and the mapping R: e^{iθ} → Rθ is an isomorphism of T onto Rot; in particular, Rπ(z) = −z. There is a short exact sequence
1 → T → D → D2 → 1
where the first mapping is R and the second is δ,
with δ(Rθ) = 1 and δ(Sθ) = −1 ∀θ ∈ [0, 2π). If we let Rn = R(Tn) then D2n = Rn ∪ RnS0 is a subgroup of
D called the dihedral group of order 2n (some authors use the notation Dn). The symbol "∪" means "union".
There is an isomorphism D4 ≅ D2 × D2, so we see that D4 is commutative. If n ≥ 3 then D2n is a
noncommutative subgroup of D of order 2n; it is the symmetry group of a regular polygon with n vertices
centered at the origin. If n = 2k is even then Z(D2n) = {1, Rπ} and we have the short
exact sequence
1 → D2 → D2n → D2k → 1
so the group D2n is a double cover of D2k. If n = 2k + 1 is odd then Z(D2n) = {1}. In particular, the dihedral
group D8 is the noncommutative group of symmetries of the square with center at the origin. Let us set α = Rπ/2,
β = S0, γ = Sπ/2; then D8 = {±1, ±α, ±β, ±γ}, where −x denotes Rπx = xRπ, and
αβ = γ    γβ = α    γα = β
βα = −γ   βγ = −α   αγ = −β
α² = −1   β² = 1    γ² = 1
There is the short exact sequence
1 → D2 → D8 → D4 → 1
1.2.7. Quaternions.
The quaternionic group Q is a group of order 8 with elements {±1, ±i, ±j, ±k}, having identity element
1 and the composition law defined by
ij = k    jk = i    ki = j
ji = −k   kj = −i   ik = −j
i² = −1   j² = −1   k² = −1
where (−1)x = x(−1) = −x and (−1)(−x) = (−x)(−1) = x ∀x ∈ {1, i, j, k}. The center of Q is Z(Q) = {+1, −1}
and there is the short exact sequence
1 → D2 → Q → D4 → 1
The group of quaternions Q is a double cover of the commutative dihedral group D4. The groups D8 and Q
are particularly important in the study of Clifford algebras; both provide double covers of D4, but they
are not isomorphic to each other: Q has 6 elements of order 4, 1 of order 2, and 1 of order 1, whereas D8
has 2 elements of order 4, 5 of order 2, and 1 of order 1.
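A quick way to see that Q and D8 are not isomorphic is to compare the element orders; the sketch below does this with one convenient matrix realization of each group (our choice of matrices, not the text's), using numpy.

import numpy as np
from collections import Counter

I2 = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])            # quaternion unit i
qj = np.array([[0, 1], [-1, 0]], dtype=complex)  # quaternion unit j
qk = qi @ qj                                   # quaternion unit k
Q = [s * m for s in (1, -1) for m in (I2, qi, qj, qk)]

J = np.array([[0.0, -1.0], [1.0, 0.0]])       # rotation by pi/2
U = np.array([[1.0, 0.0], [0.0, -1.0]])       # a reflection
D8 = [s * m for s in (1, -1) for m in (np.eye(2), J, U, J @ U)]

def order(m):
    p, n = m.copy(), 1
    while not np.allclose(p, np.eye(2)):
        p, n = p @ m, n + 1
    return n

print(Counter(order(m) for m in Q))    # Counter({4: 6, 2: 1, 1: 1})
print(Counter(order(m) for m in D8))   # Counter({2: 5, 4: 2, 1: 1})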
Both R and C are real finite-dimensional algebras, with real dimension 1 and 2 respectively, and both
are also fields, since every non-zero element within them has a multiplicative inverse. A division algebra is
a field in which multiplication need not be commutative. The algebra H of quaternions provides an example of a
noncommutative finite-dimensional real division algebra. The space of 2 × 2 complex matrices M2(C) can be considered
as an 8-dimensional real algebra. The associate Pauli matrices τ0, τx, τy, τz, to be introduced in a later section, form a
linearly independent subset of M2(C), and so their real linear span H is a 4-dimensional subspace of M2(C). If we
let h = aτ0 + bτx + cτy + dτz ∈ H then
h = ( a + id, ib + c; ib − c, a − id )
(rows separated by semicolons), and consequently
H = { ( z, w; −w̄, z̄ ): z, w ∈ C }
H is a 4-dimensional unital subalgebra of the 8-dimensional real algebra M2(C), since composition of the
associate Pauli matrices generates the group π(Q). An element x ∈ H is a real quaternion iff x ∈ span(1),
which happens iff x is in the center Z(H) of H, and x is a pure quaternion iff x ∈ span(i, j, k), which happens
iff x² is real and non-positive; the space of pure quaternions is denoted by Pu(H).
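The matrix form of h can be checked directly; the sketch below assumes the associate Pauli matrices are τ0 = I, τx = iσx, τy = iσy, τz = iσz (an assumption consistent with the displayed matrix for h, though their formal definition appears only later in the source) and verifies both the formula for h and the closure of H under multiplication.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
t0, tx, ty, tz = np.eye(2, dtype=complex), 1j * sx, 1j * sy, 1j * sz

a, b, c, d = np.random.randn(4)
h = a * t0 + b * tx + c * ty + d * tz
expected = np.array([[a + 1j * d, 1j * b + c], [1j * b - c, a - 1j * d]])
assert np.allclose(h, expected)

# closure: a product of elements of H again has the form [[z, w], [-conj(w), conj(z)]]
h2 = sum(x * t for x, t in zip(np.random.randn(4), (t0, tx, ty, tz)))
p = h @ h2
assert np.isclose(p[1, 1], np.conj(p[0, 0])) and np.isclose(p[1, 0], -np.conj(p[0, 1]))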
1.3. Vector Spaces.
1.3.1. Linear Subspaces.
Let K denote the field of either the real R or complex C numbers. A vector space E over K is a
commutative additive group (E , +), together with a mapping, scalar multiplication, (λ, x) → λx of K ×E
into E which satisfies
• 1x = x
• (λ + µ)(x) = λx + µx
• λ( µx) = (λµ)x
• λ(x + y) = λx + λy
∀λ, µ ∈ K and x, y ∈ E. The elements of E are called vectors and the elements of K are called scalars. It
follows that 0x = 0 and λ0 = 0∀x ∈ E , λ ∈ K. Note that the same symbol 0 is used for the additive identity
element in E and the zero element in K. A non-empty subset F of a vector space E is a linear subspace of
E if it is a subgroup of E and λx ∈ F ∀λ ∈ K, x ∈ F. A linear subspace is then a vector space with operations
inherited from E. If A is a subset of E then the intersection of all the linear subspaces containing A is a
linear subspace known as the subspace span(A) spanned by A. If E is spanned by a finite set then E is
said to be finite-dimensional. By convention, all considered vector spaces will be finite-dimensional unless
otherwise stated.
A subset B ⊆ E is linearly independent if whenever λ1, …, λk are scalars and b1, …, bk are distinct elements
of B for which λ1b1 + ⋯ + λkbk = 0 then λ1 = ⋯ = λk = 0. A linearly independent finite subset B ⊆ E which
spans E is called a basis and is denoted by (b1, …, bd). If (b1, …, bd) is a basis for E then every element x ∈ E
can be written uniquely as x = x1b1 + ⋯ + xdbd where x1, …, xd are scalars. All finite-dimensional vector
spaces have a basis. If A is a linearly independent subset of E contained in a subset C ⊆ E which spans E
then there is a basis B for E such that A ⊆ B ⊆ C. Any pair of bases have the same number of elements,
which is the dimension of E, denoted dim(E). For example, let E = K^d be the product of d copies of K
with coordinatewise addition and with scalar multiplication
λ(x1, …, xd) = (λx1, …, λxd)
If ej = (0, …, 0, 1, 0, …, 0) is the vector with 1 in the jth position then K^d is a vector space having basis (e1, …,
ed) called the standard basis. More generally, let Mm,n = Mm,n(K) denote the set of all K-valued functions
on {1, …, m} × {1, …, n}; then Mm,n becomes a vector space over K when addition and scalar multiplication are
defined coordinatewise. The elements of Mm,n are the familiar matrices. The matrix taking the value 1 at
(i, j) and 0 elsewhere is denoted Eij. The set of matrices {Eij: 1 ≤ i ≤ m, 1 ≤ j ≤ n} forms a basis for Mm,n
so that dim(Mm,n) = mn.
If E1 and E2 are vector spaces then the product E1 × E2 is also a vector space with addition and scalar
multiplication defined coordinatewise,
(x1, x2) + (y1, y2) = (x1 + y1, x2 + y2) and λ(x1, x2) = (λx1, λx2)
and dim(E1 × E2) = dim(E1) + dim(E2). A mapping T: E → F, where E and F are vector spaces over the
same field K, is linear if
T(x + y) = T(x) + T(y) and T(λx) = λT(x) ∀λ ∈ K, x, y ∈ E
The image T(E) of E is a linear subspace of F and the null-space defined by N(T) = {x ∈ E: T(x) = 0} is
a linear subspace of E. The rank of T is rank(T) = dim(T(E)) and the nullity of T is n(T) = dim(N(T)).
The fundamental rank-nullity formula is then rank(T) + n(T) = dim(E).
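As a numerical illustration (a sketch using numpy and scipy with an arbitrary matrix), the rank-nullity formula can be checked by computing the rank and a basis of the null-space of a matrix representing T:

import numpy as np
from scipy.linalg import null_space

T = np.random.randn(3, 5)                 # a linear map from R^5 to R^3
rank = np.linalg.matrix_rank(T)
nullity = null_space(T).shape[1]          # dimension of N(T)
assert rank + nullity == T.shape[1]       # rank(T) + n(T) = dim(E) = 5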
A bijective linear mapping J: E → F is called an isomorphism; a linear mapping J is an isomorphism iff (if and only if)
N(J) = {0} and J(E) = F, in which case dim(E) = dim(F). The topology of a vector space can be considered by noting
that K^d is a complete metric space with the usual Euclidean distance metric
d(x, y) = ( Σ_{j=1}^{d} |xj − yj|² )^{1/2}
Choosing a basis of a d-dimensional vector space F gives an isomorphism J: K^d → F, and we can then define a norm
‖·‖ on F by setting ‖J(x)‖ = ‖x‖; this depends on the choice of the basis, however
any pair of norms are equivalent and define the same topology. If F1 and F2 are linear subspaces of E for
which the linear mapping (x1, x2) → x1 + x2: F1 × F2 → E is an isomorphism then E is the direct sum of F1
and F2, denoted by F1 ⊕ F2. This happens iff F1 ∩ F2 = {0}, E = span(F1 ∪ F2), and every element x ∈ E can
be written uniquely as x = x1 + x2 with x1 ∈ F1 and x2 ∈ F2. Suppose that (e1, …, ed) is a basis for E and
that y1, …, yd are elements of a vector space F. If x = λ1e1 + ⋯ + λded ∈ E and T(x) = λ1y1 + ⋯ + λdyd
then T is the unique linear mapping of E into F for which T(ej) = yj ∀1 ≤ j ≤ d. This process of constructing
T is called extension by linearity.
1.3.2. Endomorphisms.
The set L(E, F) of linear mappings from E to F is a vector space when addition and scalar multiplication are defined pointwise,
(S + T)(x) = S(x) + T(x) and (λT)(x) = λT(x)
and dim(L(E, F)) = dim(E) dim(F). The elements of L(E) = L(E, E) are the endomorphisms of E. The
Greek meaning of the word endomorphism is "to form inside". The composition of T ∈ L(E, F) and S ∈ L(F, G)
is ST = S ∘ T ∈ L(E, G). Suppose that (e1, …, ed) is a basis for E, (f1, …, fc) is a basis for F and
that T ∈ L(E, F). If we let T(ej) = Σ_{i=1}^{c} tij fi and x = Σ_{j=1}^{d} xj ej then T(x) = Σ_{i=1}^{c} (Σ_{j=1}^{d} tij xj) fi. The mapping
T → (tij) is an isomorphism of L(E, F) onto Mc,d, so that dim(L(E, F)) = cd = dim(E) dim(F); T is said
to be represented by the c × d matrix (tij). If (g1, …, gd) is a basis for G and S ∈ L(F, G) is represented by
the matrix (shi) then the product R = ST ∈ L(E, G), which defines matrix multiplication, is represented by
the matrix (rhj) where rhj = Σ_{i=1}^{c} shi tij. If (e1, …, ed) is the standard basis for E = K^d then an endomorphism T ∈ L(E)
can be represented by a matrix (tij), and the mapping T → (tij) is an algebra
isomorphism of L(E) onto the algebra Md(K) of d × d matrices, where composition corresponds to matrix
multiplication.
1.3.3. Duality of Vector Spaces.
R is a 1-dimensional vector space over R whereas C is a 2-dimensional vector space over R with basis
{1, i}. The space L(E, K) is called the dual or dual space of E and is denoted by E′, the elements of which
are known as linear functionals on E. Suppose that (e1, …, ed) is a basis for E. If we let x = Σ_{i=1}^{d} xi ei and
φi(x) = xi ∀1 ≤ i ≤ d then φi ∈ E′ and (φ1, …, φd) is a basis for E′ known as the dual basis of (e1, …, ed). Thus
dim(E) = dim(E′). If x ∈ E, φ ∈ E′ and j(x)(φ) = φ(x) then j: E → E″ is an isomorphism of E onto E″,
the dual of E′, known as the bidual of E.
Suppose that we let T ∈ L(E, F), ψ ∈ F′, x ∈ E and (T′(ψ))(x) = ψ(T(x)); then T′(ψ) ∈ E′ and T′ is a linear
mapping of F′ into E′ known as the transposed mapping of T. If A is a subset of E then the annihilator
A⊥ in E′ of A is the set
A⊥ = {φ ∈ E′: φ(a) = 0 ∀a ∈ A}
which is a linear subspace of E′. Similarly, if B is a subset of E′ then the annihilator B⊥ in E of B is the set
B⊥ = {x ∈ E: φ(x) = 0 ∀φ ∈ B}
It then follows that A⊥⊥ = span(A) and B⊥⊥ = span(B). If F is a linear subspace of E then dim(F) +
dim(F⊥) = dim(E). If T ∈ L(E, F) then T(E)⊥ = N(T′) and N(T)⊥ = T′(F′).
1.4. Algebras, Representations, and Modules.
A finite-dimensional associative algebra A over K is a finite-dimensional vector space over K equipped
with a composition law (a, b) → ab from A × A → A which satisfies
• (ab)c = a(bc) (associativity)
• a(b + c) = ab + ac
• (a+b)c = ac + bc
• λ(ab) = (λa)b = a(λb)
∀λ ∈ K , a, b, c ∈ A where, as usual, multiplication takes precedence over addition. An algebra A is said to be
unital if there exists 1 ∈ A, the identity element, such that 1a = a1 = a∀a ∈ A. Most algebras to be considered
will be unital. An algebra is said to be commutative if ab = ba∀a, b ∈ A. A mapping φ from an algebra A over
K to an algebra B over K is said to be an algebra morphism if it is linear and φ(ab) = φ(a) φ(b)∀a, b ∈ A. If A
and B are unital and φ(1A) = 1B where 1A is the identity element of A and 1B is the identity element of B then
φ is said to be a unital morphism. An algebra morphism of an algebra into itself is called an endomorphism.
Suppose that G is a finite group with identity element e. Then K^G, the space of K-valued functions on G, is a
finite-dimensional vector space with basis {δg: g ∈ G}, known as the group algebra, which is commutative iff G is
commutative. Multiplication on K^G is defined by letting a = Σ_{g∈G} ag δg and b = Σ_{g∈G} bg δg and setting
ab = Σ_{g∈G} cg δg where
cg = Σ_{hj=g} ah bj = Σ_{h∈G} ah b_{h⁻¹g} = Σ_{j∈G} a_{gj⁻¹} bj
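The multiplication in K^G is a convolution over the group; the sketch below implements it for the cyclic group Z/nZ (a choice made only for concreteness) and checks associativity and the identity δe on random elements.

import numpy as np

n = 6  # the group Z/nZ, written additively

def convolve(a, b):
    # (ab)_g = sum over h + j = g (mod n) of a_h * b_j
    c = np.zeros(n)
    for h in range(n):
        for j in range(n):
            c[(h + j) % n] += a[h] * b[j]
    return c

a, b, c = np.random.randn(n), np.random.randn(n), np.random.randn(n)
assert np.allclose(convolve(convolve(a, b), c), convolve(a, convolve(b, c)))
delta_e = np.eye(n)[0]                      # δ_e, the identity of the group algebra
assert np.allclose(convolve(delta_e, a), a)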
If A is an algebra then the opposite algebra Aopp is obtained by keeping addition and scalar multiplication the
same and defining a new composition law by reversing the original so that a∗b = ba. A linear subspace B of an
algebra A is a subalgebra of A if b1 b2 ∈ B∀b1, b2 ∈ B. If A is unital then a subalgebra B is a unital subalgebra if
the identity element of A belongs to B. For example, if A is a unital algebra then the set End(A) of unital
endomorphisms of A is a unital subalgebra of L(A). The centralizer CA(B) of a subset B of an algebra A is
CA(B) = {a ∈ A: ab = ba∀b ∈ B }
and the center Z(A) of A is the centralizer CA(A) which is a commutative subalgebra if A is a unital algebra
Z(A) = {a ∈ A: ab = ba∀b ∈ A}
A unital algebra is said to be central if Z(A) is the 1-dimensional subspace span(1). An element p of an
algebra A is an idempotent if p² = p. If p is an idempotent of a unital algebra A then 1 − p is also an
idempotent and A = pA ⊕ (1 − p)A is the direct sum of linear subspaces of A. If additionally p ∈ Z(A) then
pA and (1 − p)A are subalgebras of A with identity elements p and 1 − p respectively. The subalgebras pA
and (1 − p)A are unital subalgebras of A if p ∈ {0, 1}, in which case the mapping a → pa is a unital algebra morphism of A
onto pA. An idempotent in L(E) or Md(K) is called a projection.
1.4.1. Super-algebras.
An element j of a unital algebra A is an involution if j² = 1, from which it follows that (1 + j)/2 is an
idempotent. In a similar way, an endomorphism θ of an algebra A is an involution if θ² = I. If θ is
an involution of a unital algebra A and p = (I + θ)/2 then we can write A = A+ ⊕ A−, where A+ = p(A) and
A− = (I − p)(A); p(A) is a subalgebra of A, and
A+ = {a ∈ A: θ(a) = a}
A− = {a ∈ A: θ(a) = −a}
with
A+ · A+ ⊆ A+
A− · A− ⊆ A+
A− · A+ ⊆ A−    (1)
A+ · A− ⊆ A−
Conversely, if A = A+ ⊕ A− is a direct sum decomposition for which (1) holds, then the mapping which sends
a+ + a− to a+ − a− is an involution of A, and A is known as a Z2-graded algebra or super-algebra. The elements of A+
are called the even elements and the non-zero elements of A− are called the odd elements. Any element a of
A can be decomposed as a = a+ + a−, the sum of the even and odd parts of a. If a ∈ A+ ∪ A− then a is said to
be homogeneous. The real algebra C is a super-algebra when the involution j is defined as j(x + iy) = x − iy.
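A concrete instance (a sketch; the choice of algebra and involution is ours, not the text's): on the real algebra M2(R), conjugation by diag(1, −1) is an involution whose even part consists of the diagonal matrices and whose odd part of the antidiagonal ones. The snippet below checks the projection p = (I + θ)/2 and the grading rules (1).

import numpy as np

g = np.diag([1.0, -1.0])
theta = lambda a: g @ a @ g        # an involution of M2(R): theta(theta(a)) = a
p = lambda a: (a + theta(a)) / 2   # the idempotent (I + theta)/2, projecting onto A+

def parity(a):
    if np.allclose(theta(a), a):
        return 1                   # even
    if np.allclose(theta(a), -a):
        return -1                  # odd
    raise ValueError("not homogeneous")

even = np.diag(np.random.randn(2))           # A+ : diagonal matrices
odd = np.array([[0.0, 1.0], [2.0, 0.0]])     # A- : antidiagonal matrices
assert np.allclose(p(even), even) and np.allclose(p(odd), 0)
for a, b, expected in [(even, even, 1), (odd, odd, 1), (even, odd, -1), (odd, even, -1)]:
    assert parity(a @ b) == expected         # the grading rules (1)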
1.4.2. Ideals.
If a is an element of a unital algebra A then b is a left inverse of a if ba = 1; similarly, b is said to be a
right inverse of a if ab = 1. a is invertible if it has both a left and a right inverse, in which case
they are unique and equal. This unique element is called the inverse of a and is denoted by a⁻¹. The set of
invertible elements is denoted by G(A); it is a group under multiplication. G(A) is called the general
linear group of A when A is the algebra L(E) of endomorphisms of a vector space E. Similarly, we denote G(Md(K))
by GLd(K).
There is a unique mapping, the determinant det: Md(K) → K, which is an alternating d-linear function of the
columns of a matrix and satisfies det(I) = 1; it is multiplicative, det(ST) = det(S) det(T), and T ∈ Md(K) is
invertible iff det(T) ≠ 0.
1.4.4. The Exponential Function.
On a finite-dimensional real unital algebra A the exponential function is defined by the convergent power series
e^a = Σ_{n=0}^{∞} aⁿ/n!
Proposition 1. Suppose that A is a finite-dimensional real unital algebra and that a, b ∈ A. Then
i. The exponential function is continuous
ii. If ab = ba then e^{a+b} = e^a e^b
iii. e^a is invertible with inverse e^{−a}
iv. The mapping t → e^{ta} is a continuous morphism of the group (R, +) into G(A)
v. If ab = −ba then e^a b = b e^{−a}
vi. If a² = −1 then e^{ta} = cos(t) + sin(t) a, the mapping e^{it} → e^{ta} is a homeomorphism of T onto a compact
subgroup of G(A), and the mapping t → e^{ta} from [0, π/2] into G(A) is a continuous path from 1 to a
vii. If a ≠ 1 and a² = 1, then e^{ta} = cosh(t) + sinh(t) a, and the mapping t → e^{ta} is a
homeomorphism of R onto an unbounded subgroup of G(A).
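Items v, vi and vii can be checked numerically; the sketch below uses scipy's matrix exponential with the specific matrices a = [[0, −1], [1, 0]] (so a² = −1) and b = [[0, 1], [1, 0]] (so b² = 1, b ≠ 1), both our choices for illustration.

import numpy as np
from scipy.linalg import expm

t = 0.7
a = np.array([[0.0, -1.0], [1.0, 0.0]])      # a^2 = -1
assert np.allclose(expm(t * a), np.cos(t) * np.eye(2) + np.sin(t) * a)

b = np.array([[0.0, 1.0], [1.0, 0.0]])       # b^2 = 1, b != 1
assert np.allclose(expm(t * b), np.cosh(t) * np.eye(2) + np.sinh(t) * b)

# item v: ab = -ba implies e^a b = b e^{-a}
assert np.allclose(a @ b, -(b @ a))
assert np.allclose(expm(a) @ b, b @ expm(-a))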
1.4.5. Group Representations.
The mapping π which sends Rθ to a rotation matrix and Sθ to a reflection matrix,
π(Rθ) = ( cos θ, −sin θ; sin θ, cos θ )   and   π(Sθ) = ( cos θ, sin θ; sin θ, −cos θ )
is a faithful representation of the full dihedral group as a group of reflections and rotations of R². If this
mapping is restricted to D8 we have a faithful representation
π(1) = I     π(Rπ/2) = J     π(S0) = U     π(Sπ/2) = Q
π(Rπ) = −I   π(R3π/2) = −J   π(Sπ) = −U    π(S3π/2) = −Q
where the matrices are
I = ( 1, 0; 0, 1 )   J = ( 0, −1; 1, 0 )   U = ( 1, 0; 0, −1 )   Q = ( 0, 1; 1, 0 )
1.4.6. Modules, Submodules, and Direct Sums.
If A is a unital algebra over K, a left A-module is a vector space M over K together with a bilinear mapping
(a, m) → am from A × M to M such that a(bm) = (ab)m and 1m = m ∀a, b ∈ A, m ∈ M.
The direct sum of left A-modules can be formed; if M1 and M2 are left A-modules then the vector space
direct sum M1 ⊕ M2 becomes a left A-module when a(m1, m2) = (am1, am2). A linear subspace N of M is a left
A-submodule if an ∈ N ∀a ∈ A, n ∈ N; then N is a left A-module in a natural way. If N1 and N2 are left A-
submodules then so is N1 + N2. The intersection of left A-submodules is again a left A-submodule. If C is
a non-empty subset of a left A-module M then there is a smallest left A-submodule containing C, namely
[C]A = {a1c1 + ⋯ + akck: k ≥ 0, ai ∈ A, ci ∈ C}
If C = {c1, …, cn} we write [c1, …, cn]A, so that
[c1, …, cn]A = {a1c1 + ⋯ + ancn: ai ∈ A}
If M = [c1, …, cn]A is a left A-module then we say that M is a finitely generated left A-module. If
M = [c]A = {ac: a ∈ A} then M is a cyclic left A-module and c is called a cyclic vector for M. Suppose that M1
and M2 are both left A-modules; then a linear mapping T from M1 to M2 which satisfies T(am) = aT(m)
∀a ∈ A, m ∈ M1 is called a module homomorphism, or A-homomorphism, or simply a module morphism or A-morphism.
Suppose that N = N1 ⊕ ⋯ ⊕ Nn and M = M1 ⊕ ⋯ ⊕ Mm are direct sums of left A-modules. A linear mapping
θ: N → M is an A-morphism iff there is a set of A-morphisms θij: Nj → Mi such that
θ(x1, …, xn) = ( Σ_{j=1}^{n} θij xj )_{i=1}^{m}  ∀(x1, …, xn) ∈ N
1.4.7. Simple Modules.
A left A-module M is simple if M ≠ {0} and its only left A-submodules are {0} and M. Suppose that
T ∈ HomA(M1, M2) where M1 and M2 are simple left A-modules. If T ≠ 0 then T is an A-isomorphism
of M1 onto M2 and T⁻¹ is also an A-isomorphism.
1.4.8. Semi-simple modules.
A left A-module is semi-simple if it can be written as a direct sum of simple left A-modules,
M = M1 ⊕ ⋯ ⊕ Md, which is the case only when M is the span of its simple left A-submodules. A semi-simple left
A-module is finitely generated, since each Mi is cyclic. If N is a non-zero simple submodule of M then N is
isomorphic as a left A-module to Mi for some 1 ≤ i ≤ d. A finite-dimensional simple unital algebra A is semi-
simple.
Any irreducible unital representation of Mk(D) is essentially the same as the natural representation of
Mk(D) as HomD(D^k), and any finite-dimensional representation is a direct sum of irreducible representations.
Theorem 4. Let D = R, C, or H and suppose that W is a finite-dimensional real vector space and that π: Mk(D) → L(W)
is a real unital representation. Then the mapping (λ, w) → π(λI)(w) from D × W to W makes W a left
D-module, dimD(W) = rk for some r, and there is a π-invariant decomposition
W = W1 ⊕ ⋯ ⊕ Wr
where dimD(Ws) = k ∀1 ≤ s ≤ r. For each 1 ≤ s ≤ r there is a basis (e1, …, ek) of Ws and an isomorphism
πs: Mk(D) → Mk(D) such that
π(A)( Σ_{j=1}^{k} xj ej ) = Σ_{i=1}^{k} ( Σ_{j=1}^{k} aij xj ) ei   where (aij) = πs(A)
1.5.4. Alternating Mappings and the Exterior Algebra: Fermionic Fock Spaces.
A k-linear mapping m ∈ M^k(E, F), the space of k-linear mappings from E^k to F, is alternating if
m(x1, …, xk) = 0 whenever xi = xj for some i ≠ j.
Proposition 5. Suppose that m ∈ M^k(E, F). Then the following statements are equivalent:
i. m is alternating
ii. m(x1, …, xk) = ε(σ) m(xσ(1), …, xσ(k)) ∀σ ∈ Σk (where ε(σ) is the signature of the permutation σ)
iii. If i, j are distinct elements of {1, …, k} and if xi′ = xj, xj′ = xi and xl′ = xl for all other indices l, then
m(x1, …, xk) = −m(x1′, …, xk′)
The set of alternating k-linear mappings of E^k into F is a linear subspace of M^k(E, F) denoted by A^k(E, F).
A^k(E) will be written for A^k(E, K), and the dual of A^k(E) is denoted by Λ^k(E). Suppose that (x1, …, xk) ∈ E^k. The
evaluation mapping a → a(x1, …, xk) from A^k(E) to K is called the wedge product or alternating product of
x1, …, xk and is denoted by x1 ∧ ⋯ ∧ xk. The wedge product is a linear functional on A^k(E) and so an element
of Λ^k(E). The alternating k-linear mapping (x1, …, xk) → x1 ∧ ⋯ ∧ xk from E^k to Λ^k(E) is denoted by ∧k. It is the case that
Λ^k(E) = span{x1 ∧ ⋯ ∧ xk: (x1, …, xk) ∈ E^k}
If T ∈ L(E) and d = dim(E), then T induces a linear mapping ∧d(T) of the 1-dimensional space Λ^d(E) into itself, with
∧d(T)(x1 ∧ ⋯ ∧ xd) = T(x1) ∧ ⋯ ∧ T(xd), and so there exists an element
det(T) ∈ K such that ∧d(T)(x1 ∧ ⋯ ∧ xd) = det(T)(x1 ∧ ⋯ ∧ xd). In the field of quantum physics, exterior algebras
are known as fermionic Fock spaces. The structures of Hilbert state spaces associated with free fields are
called Fock spaces, in honor of Vladimir Fock (1898-1974). [3, p.112][2, 2.2.2][1, 3.5]
1.5.5. The Symmetric Tensor Algebra: Bosonic Fock Spaces.
Suppose that E is a d-dimensional vector space over K, with basis (e1, …, ed). A k-linear mapping s:
E^k → F is symmetric if
s(x1, …, xk) = s(xσ(1), …, xσ(k)) ∀σ ∈ Σk
The set S^k(E, F) of symmetric k-linear mappings s: E^k → F is a linear subspace of M^k(E, F). We will denote
S^k(E, K) by S^k(E), and the evaluation mapping s → s(x1, …, xk) from S^k(E) to K is the symmetric tensor
product
x1 ⊗s ⋯ ⊗s xk ∈ S^k(E)′
The span of these symmetric tensor products is the kth symmetric power of E; in quantum physics the symmetric
tensor algebra plays the role of a bosonic Fock space.
1.5.6. Creation and Annihilation Operators.
Let x1 ∧ ⋯ ∧ x̂j ∧ ⋯ ∧ xk denote the wedge product with k − 1 terms obtained by deleting the jth term. The
following pair of operators feature prominently in the field of quantum physics. If a, b ∈ Λ*(E), let la(b) = a ∧ b; then the
mapping l: a → la is the left regular representation of the algebra Λ*(E) in L(Λ*(E)); it is a unital
isomorphism of Λ*(E) onto a subalgebra of L(Λ*(E)). In particular, if x ∈ E then we rename the operator
lx to mx and call it the creation operator
mx: Λ^k(E) → Λ^{k+1}(E)
For φ ∈ E′ there is a corresponding mapping Pφ with Pφ(a) ∈ A^{k+1}(E) for a ∈ A^k(E), so that
Pφ ∈ L(A^k(E), A^{k+1}(E)); the annihilation operator is defined by the transpose mapping
δφ: Λ^{k+1}(E) → Λ^k(E)
and we have δφ(λ) = 0 ∀λ ∈ K. If we let k vary and consider δφ as an element of L(Λ*(E)), it can be shown that
δφ(x1 ∧ ⋯ ∧ xk) = (1/k!) Σ_{σ∈Σk} ε(σ) φ(xσ(1)) (xσ(2) ∧ ⋯ ∧ xσ(k))
               = (1/k) Σ_{j=1}^{k} (−1)^{j−1} φ(xj) (x1 ∧ ⋯ ∧ x̂j ∧ ⋯ ∧ xk)
and that
mx² = δφ² = 0
and
mx δφ + δφ mx = φ(x)
The creation and annihilation operators are also known as raising and lowering operators.
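A small computational check (a sketch; it uses the common normalization δφ(x1 ∧ ⋯ ∧ xk) = Σ_j (−1)^{j−1} φ(xj) x1 ∧ ⋯ ∧ x̂j ∧ ⋯ ∧ xk, i.e. without the 1/k factor, under which the anticommutation relation takes the form mx δφ + δφ mx = φ(x) id): represent Λ*(R^n) on basis vectors indexed by subsets of {0, …, n−1} and verify m² = δ² = 0 together with the anticommutator.

import itertools
import numpy as np

n = 4
subsets = [frozenset(s) for k in range(n + 1) for s in itertools.combinations(range(n), k)]
index = {s: i for i, s in enumerate(subsets)}
dim = len(subsets)   # 2^n

def creation(i):
    # m_i : e_S -> (-1)^{#{j in S: j < i}} e_{S ∪ {i}}, and 0 if i in S
    M = np.zeros((dim, dim))
    for s in subsets:
        if i not in s:
            sign = (-1) ** sum(1 for j in s if j < i)
            M[index[s | {i}], index[s]] = sign
    return M

def annihilation(i):
    # δ_i : e_S -> (-1)^{#{j in S: j < i}} e_{S \ {i}}, and 0 if i not in S
    M = np.zeros((dim, dim))
    for s in subsets:
        if i in s:
            sign = (-1) ** sum(1 for j in s if j < i)
            M[index[s - {i}], index[s]] = sign
    return M

identity = np.eye(dim)
for i in range(n):
    for j in range(n):
        m_i, d_j = creation(i), annihilation(j)
        assert np.allclose(m_i @ m_i, 0) and np.allclose(d_j @ d_j, 0)
        assert np.allclose(m_i @ d_j + d_j @ m_i, identity if i == j else 0)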
1.5.7. Tensor Products of Algebras.
If A and B are algebras over K then the tensor product A ⊗ B becomes an algebra when multiplication is
defined on elementary tensors by (a1 ⊗ b1)(a2 ⊗ b2) = a1a2 ⊗ b1b2 and extended by linearity.
1.5.8. Tensor Products of Super-Algebras.
If A and B are super-algebras then the super tensor product is defined on homogeneous elements by the graded
sign rule (a1 ⊗ b1)(a2 ⊗ b2) = (−1)^{∂b1 ∂a2} a1a2 ⊗ b1b2, where ∂x denotes the degree (0 for even, 1 for odd) of a
homogeneous element x.
1.6. Quadratic Forms.
1.6.1. Real Quadratic Forms.
Suppose that E is a real vector space. A real-valued function q on E is called a quadratic form on E if
there exists a symmetric bilinear form b on E such that q(x) = b(x, x) ∀x ∈ E, and a vector space E equipped
with a quadratic form q is called a quadratic space (E, q). Thus each symmetric bilinear form on E defines
a quadratic form on E. The set Q(E) of quadratic forms on E is a linear subspace of the vector space of all
real-valued functions on E. Distinct symmetric bilinear forms define distinct quadratic forms. A linear
subspace F of E is regular (non-degenerate) if the only x ∈ F with b(x, y) = 0 ∀y ∈ F is x = 0; every linear
subspace of E is regular iff q is positive definite or negative definite.
1.6.2. Orthogonality.
As in the previous section, let (E , q) be a quadratic space with associated bilinear form b. We say that
x and y are orthogonal and write x ⊥ y or equivalently, due to symmetry, y ⊥ x, if b(x, y) = 0 for x and y
in E. If x ⊥ y then q(x + y) = q(x) + q(y). If A is a subset of E, we define the orthogonal set A⊥ by
A⊥ = {x: x ⊥ a∀a ∈ A}
Proposition 6. Suppose A is a subset of a regular quadratic space E, and that F is a linear subspace of E.
i. A⊥ is a linear subspace of E
ii. If A ⊆ B then A⊥ ⊇ B ⊥
iii. A⊥⊥ ⊇ A and A⊥⊥⊥ = A⊥
iv. A⊥⊥ = span(A)
v. F = F⊥⊥
vi. dim (F ) + dim (F ⊥) = dim (E)
Proposition 7. Suppose F is a linear subspace of a regular quadratic space (E , q), then the following are
equivalent.
i. (F , q) is regular
ii. F ∩ F ⊥ = {0}
iii. E = F ⊕ F⊥
iv. (F ⊥, q) is regular
1.6.3. Diagonalization.
Theorem 8. Suppose that b is a symmetric bilinear form on a vector space E. Then there exists a basis
(e1, …, ed) and a pair of non-negative integers (p, m) with p + m = r, the rank of b, such that if b is represented by
the matrix B = (bij) then
bij = 1 if j = i and 1 ≤ i ≤ p,  bij = −1 if j = i and p + 1 ≤ i ≤ p + m,  bij = 0 otherwise
A basis which satisfies the conclusions of Theorem 8 is called a standard orthogonal basis. If (E, q) is
a Euclidean space and b is the associated bilinear form then bii = 1 ∀1 ≤ i ≤ d and the basis is called an
orthonormal basis. If (ei) is a standard orthogonal basis and if x = Σ_{i=1}^{d} xi ei and y = Σ_{i=1}^{d} yi ei then
b(x, y) = Σ_{i=1}^{p} xi yi − Σ_{i=p+1}^{p+m} xi yi
and
q(x) = Σ_{i=1}^{p} xi² − Σ_{i=p+1}^{p+m} xi²
For more general fields there exists a basis (e1, …, ed) and a set of scalars (λ1, …, λd) for which
q(x) = Σ_{j=1}^{d} λj xj²  ∀x = Σ_{j=1}^{d} xj ej ∈ E
If (E1, q1) and (E2, q2) are quadratic spaces with signatures (p1, m1) and (p2, m2) then the direct sum E1 ⊕ E2
also becomes a quadratic space when we define q(x1 ⊕ x2) = q1(x1) + q2(x2). If (E, q) = (E1, q1) ⊕ (E2, q2)
then the signature of (E, q) is (p1 + p2, m1 + m2). The Witt index w of (E, q) is defined to be w = min(p, m).
If w > 0 then (E, q) is a Minkowski space. A regular quadratic space with m = 1 is called a Lorentz space.
Another special case occurs when p = m and p + m = 2p = 2m, in which case (E, q) is called a hyperbolic space.
Proposition 10. Suppose that (E, q) is a hyperbolic space of dimension p + m = 2p. Then there exists a basis
(f1, …, fp+m) such that
b(f2i, f2i−1) = b(f2i−1, f2i) = 1 ∀1 ≤ i ≤ p
and b(fi, fj) = 0 otherwise.
If (e1, …, ed) is a standard orthogonal basis for E then the basis defined by
f2i−1 = (ei + ep+i)/√2
f2i = (ei − ep+i)/√2
is such a basis, and if x = Σ_{i=1}^{d} xi fi and y = Σ_{i=1}^{d} yi fi then
b(x, y) = (x1y2 + x2y1) + ⋯ + (xd−1 yd + xd yd−1)
and
q(x) = 2(x1x2 + x3x4 + ⋯ + xd−1 xd)
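Numerically, the integers (p, m) of Theorem 8 can be read off from the signs of the eigenvalues of the symmetric matrix representing b (Sylvester's law of inertia); the sketch below computes the signature and Witt index of a small example of our choosing, the 2-dimensional hyperbolic form.

import numpy as np

B = np.array([[0.0, 1.0], [1.0, 0.0]])   # bilinear form b(x, y) = x1 y2 + x2 y1
eigenvalues = np.linalg.eigvalsh(B)
p = int(np.sum(eigenvalues > 1e-12))     # number of positive squares
m = int(np.sum(eigenvalues < -1e-12))    # number of negative squares
witt_index = min(p, m)
print(p, m, witt_index)                  # 1 1 1: signature (1, 1), Witt index 1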
1.6.4. Adjoint Mappings.
Suppose (E, q) is a regular quadratic space with bilinear form b. Then the mapping lb: E → E′ defined by
lb(x)(y) = b(x, y) is an injective linear mapping of E into E′, which enables us to define the adjoint of a linear
mapping from E into a quadratic space.
Theorem 11. Suppose that T is a linear mapping from a regular quadratic space (E1, q1) into a quadratic
space (E2, q2). Then there exists a unique linear mapping T a called the adjoint from E2 to E1 such that
b2(T (x), y) = b1(x, T a(y))∀x ∈ E1, y ∈ E2
where b1 and b2 are the associated bilinear forms.
Proposition 12. Suppose that (E1, q1) and (E2, q2) are regular quadratic spaces with standard orthogonal
bases (e1, …, ed) and (f1, …, fd) respectively, that T ∈ L(E1, E2), and that T is represented by the matrix (tij) with
respect to these bases. Then T^a is represented by the adjoint matrix (t^a_ji) where
t^a_ji = q1(ej) q2(fi) tij
As a corollary, if T ∈ L(E) then det (T a) = det (T ).
1.6.5. Complex Inner-Product Spaces.
An inner product on a complex vector space E is a mapping (x, y) → ⟨x, y⟩ from E × E into C which satisfies
i. ⟨α1x1 + α2x2, y⟩ = α1⟨x1, y⟩ + α2⟨x2, y⟩ and ⟨x, β1y1 + β2y2⟩ = β̄1⟨x, y1⟩ + β̄2⟨x, y2⟩
∀x, x1, x2, y, y1, y2 ∈ E, α1, α2, β1, β2 ∈ C
ii. ⟨y, x⟩ is the complex conjugate of ⟨x, y⟩ ∀x, y ∈ E
iii. ⟨x, x⟩ > 0 ∀x ∈ E with x ≠ 0
For example, if E = C^d then the usual inner product is given by
⟨z, w⟩ = Σ_{j=1}^{d} zj w̄j
A complex vector space E equipped with an inner product is called an inner-product space. An inner product
is sesquilinear: it is linear in the first variable and conjugate-linear in the second variable. This is the
convention that mathematicians use; physicists use the reverse convention. The quantity
‖x‖ = √⟨x, x⟩
is the norm of x.
1.7. Clifford Algebras.
Suppose that (E, q) is a quadratic space and that A is a unital algebra. A linear mapping j: E → A is a Clifford
mapping if j(x)² = −q(x)1 ∀x ∈ E. A Clifford algebra for (E, q) is a unital algebra A together with a Clifford
mapping j: (E, q) → A such that A is generated as an algebra by j(E) and 1.
Theorem 13. Suppose that a1, …, ad are elements of a unital algebra A. Then there exists a unique Clifford
mapping j: (E, q) → A satisfying j(ei) = ai ∀1 ≤ i ≤ d iff
ai² = −q(ei) ∀1 ≤ i ≤ d
ai aj + aj ai = 0 ∀1 ≤ i < j ≤ d
in which case j(E) = span(a1, …, ad).
1.7.1. Universality.
A Clifford algebra A(E, q) is universal if for every Clifford mapping T from (E, q) into a unital algebra B there
is a unique unital algebra morphism T̃: A(E, q) → B such that T̃(j(x)) = T(x) ∀x ∈ E.
Let Ω = Ωd = {1, …, d} and, for C = {i1, …, ik} with 1 ≤ i1 < ⋯ < ik ≤ d, define the element eC of A to be
the product ei1 ⋯ eik, with e∅ = 1. If |C| > 1 then eC depends on the ordering of the set {1, …, d}. The element
eΩ = e1 ⋯ ed will be particularly important. Suppose that A = span(P) is a Clifford algebra for (E, q), where
P = {eC: C ⊆ Ω}. If P is a linearly independent basis for A then A is universal. If (E, q) is a quadratic space
then there always exists a universal Clifford algebra A(E, q).
1.7.2. Representation of A0,3.
The Clifford algebra A0,3 is isomorphic to M2(C). The Pauli spin matrices (1.4.5) are used to obtain
an explicit representation. Let us define j, a Clifford mapping of R0,3 into M2(C) which extends to an
isomorphism of A0,3.
j(x, y, z) = xσx + yσy + zσz = xQ + iyJ + zU = ( z, x − iy; x + iy, −z )
The reason for σy being complex is that the center of A0,3 is Z(A0,3) = span(1, eΩ) ≅ C, where
eΩ = Q(iJ)U = i. Thus A0,3 can be considered as a 4-dimensional complex algebra where multiplication by i
becomes multiplication by eΩ. We note that j(R0,3) is the 3-dimensional real subspace of M2(C) consisting
of all Hermitian matrices with zero trace:
j(R0,3) = {T ∈ M2(C): T = T ∗, τ (T ) = 0}
The even subalgebra A⁺0,3 is generated by the elements
fx = e3 e2
fy = e1 e3
fz = e2 e1
and we have j(fx) = τx, j(fy) = τy, and j(fz) = τz, where τx, τy, and τz are the associate Pauli matrices. Thus
j(A⁺0,3) = H
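The defining property of the Clifford mapping, j(v)² = −q(v)1 with q(v) = −‖v‖² on R0,3, can be checked directly for the matrix representation above; a short numpy sketch:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def j(x, y, z):
    return x * sx + y * sy + z * sz   # = [[z, x - iy], [x + iy, -z]]

v = np.random.randn(3)
# q(v) = -||v||^2, so j(v)^2 should equal -q(v) I = ||v||^2 I
assert np.allclose(j(*v) @ j(*v), np.dot(v, v) * np.eye(2))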
1.7.3. Spin(8).
This section is copied nearly verbatim from [2, Appendix 5.B]. The description of Dirac matrices for
Spin(8) requires a Clifford algebra with 8 anticommuting matrices. Because of their importance, and the
fact that they are useful building blocks for 10-dimensional Dirac matrices, they will be described explicitly.
The Dirac algebra of Spin(8) requires 16-dimensional matrices corresponding to the reducible 8s + 8c repre-
sentation of Spin(8). (This notation can be confused with the operation of taking powers, but it is how it
appears in the literature.) These matrices can be written in block form
γ^i = ( 0, γ^i_{aḃ}; γ^i_{ȧb}, 0 )
where γ^i_{ȧb} is the transpose of γ^i_{aḃ}. We also see that the equations {γ^i, γ^j} = 2δ^{ij} are satisfied if
γ^i_{aȧ} γ^j_{ȧb} + γ^j_{aȧ} γ^i_{ȧb} = 2δ^{ij} δab
γ^i_{ȧa} γ^j_{aḃ} + γ^j_{ȧa} γ^i_{aḃ} = 2δ^{ij} δȧḃ
∀i, j = 1, …, 8
A specific set of matrices γ^i_{aȧ} that satisfy these equations, expressed as direct products of 2 × 2 blocks, is
γ1 = iτy × iτy × iτy
γ2 = 1 × τx × iτy
γ3 = 1 × τz × iτy
γ4 = τx × iτy × 1
γ5 = τz × iτy × 1
γ6 = iτy × 1 × τx
γ7 = iτy × 1 × τz
γ8 = 1 × 1 × 1
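These 8 × 8 blocks can be checked with numpy (a sketch; here τx, τy, τz are taken to be the ordinary Pauli matrices and "×" the Kronecker product, which is how the GSW construction is usually read): the condition to verify is γ^i (γ^j)^T + γ^j (γ^i)^T = 2δ^{ij} I.

import numpy as np

tx = np.array([[0, 1], [1, 0]])
tz = np.array([[1, 0], [0, -1]])
ity = np.array([[0, 1], [-1, 0]])    # i * tau_y is real
one = np.eye(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

gamma = [
    kron3(ity, ity, ity),
    kron3(one, tx, ity),
    kron3(one, tz, ity),
    kron3(tx, ity, one),
    kron3(tz, ity, one),
    kron3(ity, one, tx),
    kron3(ity, one, tz),
    kron3(one, one, one),
]

I8 = np.eye(8)
for i in range(8):
    for j in range(8):
        anti = gamma[i] @ gamma[j].T + gamma[j] @ gamma[i].T
        assert np.allclose(anti, 2 * I8 if i == j else 0)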
2. Modular Forms
2.1. Modular Transformations.
2.1.1. Definitions.
A modular transformation is a mapping of the upper half-plane of the form τ → (aτ + b)/(cτ + d), where
a, b, c, d ∈ Z and ad − bc = 1. For τ in the upper half-plane the Eisenstein series of weight 2k is defined by the lattice sum
G2k(τ) = Σ_{(m,n)≠(0,0)} 1/(m + nτ)^{2k}
which converges for 1 < k ∈ Z, and it has the Fourier expansion
G2k(τ) = 2ζ(2k) + (2(2πi)^{2k}/(2k − 1)!) Σ_{n=1}^{∞} σ2k−1(n) e^{2πinτ}
where ζ(s) is the Riemann zeta function
ζ(s) = Σ_{n=1}^{∞} n^{−s}
and
σα(k) = Σ_{d|k} d^α
where the sum runs over all positive integer divisors of k. The idea is that the sum goes over all the lattice points
in the complex plane whose structure is determined in terms of the complex number τ. Modular transformations
map this lattice into itself, so G2k(τ) transforms simply under modular transformations. A basic theorem
of modular forms states that an arbitrary holomorphic modular form of weight 2k can be expressed as a
polynomial in G4 and G6. The only modular form of weight 8 is G4², since the weights of modular forms are
additive under multiplication. The smallest weight for which there is more than one independent modular
form is 12, for which there are two independent modular forms, G4³ and G6². [2, Appendix 6.B]
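The weight-8 statement can be illustrated with q-expansions (a sketch; it uses the classical normalized series E4 = 1 + 240 Σ σ3(n) qⁿ and E8 = 1 + 480 Σ σ7(n) qⁿ, which are proportional to G4 and to the weight-8 Eisenstein series): since the space of weight-8 forms is one-dimensional, E4² and E8 must agree coefficient by coefficient.

N = 12  # number of q-expansion coefficients to check

def sigma(a, n):
    return sum(d ** a for d in range(1, n + 1) if n % d == 0)

E4 = [1] + [240 * sigma(3, n) for n in range(1, N)]
E8 = [1] + [480 * sigma(7, n) for n in range(1, N)]

# multiply the truncated power series E4 * E4
E4sq = [sum(E4[i] * E4[n - i] for i in range(n + 1)) for n in range(N)]
assert E4sq == E8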
3. Physics
3.1. Particles with Spin 1/2.
The Pauli spin matrices (1.4.5) were introduced by Wolfgang Pauli to represent the internal angular
momentum of particles which have spin 1/2. In quantum mechanics, an observable corresponds to a Hermitian
linear operator T on a Hilbert space H and when the possible values of the observable are discrete, these
are the possible eigenvalues of T . The Stern-Gerlach experiment showed that elementary particles have an
intrinsic angular momentum, or spin. If x is a unit vector in R3 then the component Jx of the spin in the
direction x is an observable. In a non-relativistic setting, this leads to the consideration of a linear mapping
x → Jx from the Euclidean space V = R3 into the space Lh(H) of Hermitian operators on an appropriate
state space H. Particles are either bosons, in which case the eigenvalues of Jx are integers, or fermions, in
which case the eigenvalues of Jx are of the form (2k + 1)/2. In the case where the particle has spin 1/2, each of the
operators Jx has just two eigenvalues, namely 1/2 (spin up) and −1/2 (spin down); consequently, Jx² = (1/4)I. Now,
take the negative-definite quadratic form q(x) = −‖x‖² on R³ and consider R0,3 as a subspace of A0,3. Let
j: A0,3 → M2(C) be the isomorphism defined in (1.7.2) and let Ji = (1/2) j(ei) = (1/2)σi for i = 1, 2, 3. Thus
J1 J2 = (i/2) J3
J2 J3 = (i/2) J1
J3 J1 = (i/2) J2
The Pauli spin matrices are Hermitian, and identifying A0,3 with M2(C) we see that we can take the state
space H to be the spinor space C². If v = (x, y, z) ∈ R0,3 and q(v) = −1 then
Jv = (1/2) ( z, x − iy; x + iy, −z )
is a Hermitian matrix with eigenvalues 1/2 and −1/2 and corresponding eigenvectors
( x − iy, 1 − z ) and ( x − iy, −1 − z )
respectively.
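A quick numerical confirmation of these eigenvalues and eigenvectors (a sketch using numpy; the unit vector is arbitrary):

import numpy as np

v = np.random.randn(3)
x, y, z = v / np.linalg.norm(v)          # unit vector, so q(v) = -1
Jv = 0.5 * np.array([[z, x - 1j * y], [x + 1j * y, -z]])

w, _ = np.linalg.eigh(Jv)
assert np.allclose(sorted(w), [-0.5, 0.5])

up = np.array([x - 1j * y, 1 - z])       # eigenvector for +1/2
down = np.array([x - 1j * y, -1 - z])    # eigenvector for -1/2
assert np.allclose(Jv @ up, 0.5 * up)
assert np.allclose(Jv @ down, -0.5 * down)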
3.2. The Dirac Operator.
3.2.1. The Laplacian.
Let U be an open subset of a finite-dimensional real vector space and let F be a finite-dimensional vector
space. Define C(U, F) to be the vector space of all continuous F-valued functions defined on U and, for
k > 0, define C^k(U, F) to be the vector space of all k-times continuously differentiable F-valued functions
defined on U. Suppose that U is an open subset of Euclidean space R^d, that F is a finite-dimensional
vector space, and that f ∈ C²(U, F). Then the Laplacian is the second-order linear differential operator Δ:
C²(U, F) → C(U, F) defined by
Δf = Σ_{j=1}^{d} ∂²f/∂xj²
A harmonic function is one which satisfies Δf = 0. Suppose that Rp,m is the standard regular quadratic
space with signature (p, m) and dimension d = p + m. Then the corresponding Laplacian Δq is defined as
Δq f = Σ_{j=1}^{d} q(ej) ∂²f/∂xj² = Σ_{j=1}^{p} ∂²f/∂xj² − Σ_{j=p+1}^{p+m} ∂²f/∂xj²    (3)
A function f is q-harmonic if Δq f = 0. Paul Dirac noticed that the second-order linear differential
operator −Δq can be written as the square of a first-order linear differential operator in a noncommutative
setting.
Suppose that U is an open subset of Rp,m, that F is a finite-dimensional left Ap,m-module, and that
f ∈ C¹(U, F). Then we define the (standard) Dirac operator as
Dq f = Σ_{j=1}^{d} q(ej) ej ∂f/∂xj
and say that f is Clifford analytic if Dq f = 0.
Theorem 14. Suppose that (εi) is any orthogonal basis for Rp,m and that (yi) denotes the corresponding
coordinates. Then
Dq = Σ_{i=1}^{d} q(εi) εi ∂/∂yi
To see this, let aij = b(εi, ej) = b(ej, εi). Then
Σ_{i=1}^{d} q(εi) aij aik = b(ej, ek)
and since
∂f/∂yi (x) = lim_{t→0} [ f(x + t Σ_{j=1}^{d} aij ej) − f(x) ] / t
it follows that
∂/∂yi = Σ_{j=1}^{d} aij ∂/∂xj
The Dirac operator acts in the sequence
C²(U, F) → C¹(U, F) → C(U, F)
with both mappings given by Dq, and
ker(Dq) = {f ∈ C¹(U, F): f is Clifford analytic}
ker(Dq²) = {f ∈ C²(U, F): f is q-harmonic}
Furthermore, if f is q-harmonic then Dq f is Clifford analytic. Also see [2, 4.1].
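The relation Dq² = −Δq can be checked symbolically in the representation of 1.7.2, where R0,3 has q(ej) = −1 and ej acts as the Pauli matrix σj on C²-valued functions; the sketch below (sympy, with an arbitrary test function of our choosing) verifies it.

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
coords = [x1, x2, x3]
s = [sp.Matrix([[0, 1], [1, 0]]),
     sp.Matrix([[0, -sp.I], [sp.I, 0]]),
     sp.Matrix([[1, 0], [0, -1]])]

# an arbitrary C^2-valued test function on R^{0,3}
f = sp.Matrix([sp.exp(x1) * sp.sin(x2 * x3), x1 * x2 ** 2 + sp.cos(x3)])

def D(g):
    # Dq g = sum_j q(e_j) e_j dg/dx_j, with q(e_j) = -1 and e_j represented by sigma_j
    out = sp.zeros(2, 1)
    for j in range(3):
        out += (-1) * s[j] * g.diff(coords[j])
    return out

laplacian_q = sp.zeros(2, 1)
for c in coords:
    laplacian_q += (-1) * f.diff(c, 2)   # Δq f = Σ q(e_j) ∂²f/∂x_j²

# Dq² f = -Δq f, so the sum below should vanish identically
assert (D(D(f)) + laplacian_q).applyfunc(sp.simplify) == sp.zeros(2, 1)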
3.3. Maxwell’s Equations for an Electromagnetic Field.
Maxwell’s equations for an electromagnetic field can be expressed as a single equation involving the
standard Dirac operator. The simplest case of electric and magnetic fields in a vacuum varying in space
and time will be considered. Take an open subset U of R × R³ = R⁴. The points of U will be denoted by
(t, x, y, z), where t is a time variable and x, y, and z are space variables. We begin by considering the electric
field E = (E1, E2, E3) and the magnetic field B = (B1, B2, B3) as continuously differentiable vector-valued
functions defined on U and taking values in R³. Given these, there exists a continuous vector-valued function
J defined on U called the current density, and a continuous scalar-valued function ρ defined on U called the
charge density. Then, with a suitable choice of units, Maxwell’s equations are
div E = ρ
div B = 0
curl E + ∂B/∂t = 0
curl B − ∂E/∂t = J
Bibliography
[1] D. J. H. Garling. Clifford Algebras: An Introduction. Number 78 in London Mathematical Society Student Texts. Cam-
bridge University Press, 2011.
[2] Michael B. Green, John H. Schwarz, and Edward Witten. Superstring Theory, Volume 1: Introduction. Cambridge
Monographs on Mathematical Physics. Cambridge University Press, first edition, 1987.
[3] Claude Itzykson and Jean-Bernard Zuber. Quantum Field Theory. McGraw-Hill, 1980.