Dark Matter PDF
1 Institute for Particle Physics Phenomenology, Durham University
2 Institut für Theoretische Physik, Universität Heidelberg
Abstract
Dark matter is, arguably, the most widely discussed topic in contemporary particle physics. Written in the
language of particle physics and quantum field theory, these notes focus on a set of standard calculations needed
to understand different dark matter candidates. After introducing some general features of such dark matter
agents, we introduce a set of established models which guide us through four experimental aspects: the dark
matter relic density extracted from the cosmic microwave background, indirect detection including the Fermi
galactic center excess, direct detection, and collider searches.¹
¹ continuously updated under www.thphys.uni-heidelberg.de/~plehn. The original publication will be available at www.springer.com
2 Relics 29
2.1 Relic neutrinos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2 Cold light dark matter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.3 Axions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.4 Matter vs anti-matter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.5 Asymmetric dark matter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4 WIMP models 58
4.1 Higgs portal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2 Vector portal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.3 Supersymmetric neutralinos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.4 Effective field theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5 Indirect searches 74
5.1 Higgs portal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.2 Supersymmetric neutralinos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.3 Next-to-minimal neutralino sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.4 Simplified models and vector mediator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6 Direct searches 86
6.1 Higgs portal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.2 Supersymmetric neutralinos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
7 Collider searches 95
7.1 Lepton colliders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
7.2 Hadron colliders and mono-X . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
7.3 Higgs portal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.4 Supersymmetric neutralinos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.5 Effective field theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Foreword
As expected, this set of lecture notes is based on a course on dark matter at Heidelberg University. The course is
co-taught by a theorist and an experimentalist, and these notes cover the theory half. Because there exists a large
number of textbooks and lecture notes on the general topic of dark matter, the obvious question is why we
bothered collecting these notes. The first answer is: because this was the best way for us to learn the topic.
Collecting and reproducing a basic set of interesting calculations and arguments is the way to learn physics. The
only difference between student and faculty is that the latter get to perform their learning curve in front of an
audience. The second answer is that we wanted to specifically organize material on weakly interacting dark matter
candidates with the focus on four key measurements:
1. current relic density;
2. indirect searches;
3. direct searches;
4. LHC searches.
All of those aspects can be understood using the language of theoretical particle physics. This implies that we will
mostly talk about particle relics in terms of quantum field theory and not about astrophysics, nuclear physics, or
general relativity. Similarly, we try to avoid arguments based on thermodynamics, with the exception of some
quantum statistics and the Boltzmann equation. With this in mind, these notes include material for at least 20 times
90 minutes of lectures for master-level students, preparing them for using the many excellent black-box tools
which are available in the field. As indicated by the coffee stains, these notes only make sense if people print them
out and go through the formulas one by one. This way any reader is bound to find a lot of typos, and we would be
grateful if they could send us an email with them.
– the Hubble constant H0 which describes the expansion of the Universe. Two objects anywhere in the
Universe move away from each other with a velocity proportional to their current distance r. The
proportionality constant is defined through Hubble’s law
$$H_0 := \frac{\dot{r}}{r} \approx 70\,\frac{\text{km}}{\text{s Mpc}} = 70\,\frac{10^5\,\text{cm}}{3.1\cdot 10^{24}\,\text{cm}\cdot\text{s}} = 2.3\cdot 10^{-18}\,\frac{1}{\text{s}} = 2.3\cdot 10^{-18}\cdot 6.6\cdot 10^{-16}\,\text{eV} = 1.5\cdot 10^{-33}\,\text{eV} \; . \qquad (1.1)$$
Throughout these lecture notes we will use these high-energy units with ~ = c = 1, eventually adding
kB = 1. Because H0 is not at all a number of order one we can replace H0 with the dimensionless ratio
$$h := \frac{H_0}{100\,\frac{\text{km}}{\text{s Mpc}}} \approx 0.7 \; . \qquad (1.2)$$
The Hubble ‘constant’ H0 is defined at the current point in time, unless explicitly stated otherwise.
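Since we use H_0 in natural units throughout, it is worth checking the conversion of Eq.(1.1) once explicitly. A minimal Python sketch, with standard values for the Mpc in cm and for ħ in eV s (conversion factors not quoted in the text beyond Eq.(1.1)):

```python
# Reproduce the unit conversion of Eq.(1.1): H0 = 70 km/(s Mpc) in natural units.
H0_km_s_Mpc = 70.0
cm_per_Mpc = 3.086e24        # 1 Mpc in cm (standard value, an input assumption here)
hbar_eV_s = 6.582e-16        # hbar in eV s, used to trade 1/s for eV

H0_per_s = H0_km_s_Mpc * 1e5 / cm_per_Mpc   # km -> cm, then divide by Mpc in cm
H0_eV = H0_per_s * hbar_eV_s

print(f"H0 = {H0_per_s:.2e} 1/s = {H0_eV:.2e} eV")
```

The result reproduces the 1.5·10⁻³³ eV quoted in Eq.(1.1).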
– the cosmological constant Λ , which describes most of the energy content of the Universe and which is
defined through the gravitational Einstein-Hilbert action
$$S_{EH} \equiv \frac{M_{\text{Pl}}^2}{2} \int d^4x\, \sqrt{-g}\, \left( R - 2\Lambda \right) \; . \qquad (1.3)$$
The reduced Planck mass is defined as
$$M_{\text{Pl}} = \frac{1}{\sqrt{8\pi G}} = 2.4\cdot 10^{27}\,\text{eV} \; . \qquad (1.4)$$
It is most convenient to also combine the Hubble constant and the cosmological constant to a dimensionless
parameter
$$\Omega_\Lambda := \frac{\Lambda}{3H_0^2} \; . \qquad (1.5)$$
– the matter content of the Universe which changes with time. As a mass density we can define it as ρm , but as
for our other two key parameters we switch to the dimensionless parameters
$$\Omega_m := \frac{\rho_m}{\rho_c} \qquad \text{and} \qquad \Omega_r := \frac{\rho_r}{\rho_c} \; . \qquad (1.6)$$
The denominator ρc is defined as the critical density separating an expanding from a collapsing Universe
with Λ = 0. If we study the early Universe, we need to consider a sum of the relativistic matter or radiation
content Ωr and non-relativistic matter Ωm alone. Today, we can also separate the non-relativistic baryonic
matter content of the Universe. This is the matter content present in terms of atoms and molecules building
stars, planets, and other astrophysical objects. The remaining matter content is dark matter, which we
indicate with an index χ
$$\Omega_b := \frac{\rho_b}{\rho_c} \qquad \Rightarrow \qquad \Omega_\chi := \Omega_m - \Omega_b \; . \qquad (1.7)$$
1 HISTORY OF THE UNIVERSE
If the critical density ρ_c separates an expanding Universe (described by H_0) and a collapsing Universe
(driven by the gravitational interaction G), we can guess that it should be given by something like a ratio of
H_0 and G. Because the unit of the critical density has to be eV⁴, we can already guess that ρ_c ∼ M_Pl² H_0².
In classical gravity we can estimate ρ_c by computing the escape velocity of a massive particle outside a
spherical piece of the Universe expanding according to Hubble's law. We start by computing the velocity
v_esc a massive particle has to have to escape a gravitational field. Classically, it is defined by equal kinetic
energy and gravitational binding energy for a test mass m at radius r,
$$\frac{m v_{\text{esc}}^2}{2} \stackrel{!}{=} \frac{G m M}{r} = \frac{Gm}{r}\,\frac{4\pi r^3}{3}\,\rho_c = \frac{m r^2}{6 M_{\text{Pl}}^2}\,\rho_c \qquad \text{with} \quad G = \frac{1}{8\pi M_{\text{Pl}}^2}$$
$$\stackrel{\text{Eq.(1.1)}}{\Leftrightarrow} \quad \frac{H_0^2 r^2}{2} = \frac{r^2}{6 M_{\text{Pl}}^2}\,\rho_c \qquad \Leftrightarrow \qquad \rho_c = 3 M_{\text{Pl}}^2 H_0^2 = \left( 2.5\cdot 10^{-3}\,\text{eV} \right)^4 \; . \qquad (1.8)$$
We give the numerical value based on the current Hubble expansion rate. For a more detailed account for the
history of the Universe and a more solid derivation of ρc we will resort to the theory of general relativity in
the next section.
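The guess ρ_c ∼ M_Pl² H_0² and the final number of Eq.(1.8) can be checked numerically; a short sketch, using the values from Eqs.(1.1) and (1.4):

```python
# Numerical check of Eq.(1.8): rho_c = 3 MPl^2 H0^2, quoted as (2.5e-3 eV)^4.
M_Pl_eV = 2.4e27      # reduced Planck mass, Eq.(1.4)
H0_eV = 1.5e-33       # Hubble constant in natural units, Eq.(1.1)

rho_c = 3.0 * M_Pl_eV**2 * H0_eV**2   # critical density in eV^4
fourth_root = rho_c**0.25

print(f"rho_c^(1/4) = {fourth_root:.2e} eV")
```

The fourth root indeed comes out at 2.5·10⁻³ eV, a curiously small energy scale compared to the particle physics scales entering it.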
At least for constant a this looks like a metric with a modified distance r(t) → r(t)a. It is also clear that the choice
k = 0 switches off the effect of 1/a2 , because we can combine a and r to arrive at the original Minkowski metric.
Finally, there is really no reason to assume that the scale factor is constant with time. In general, the history of the
Universe has to allow for a time-dependent scale factor a(t), defining the line element or metric as
$$ds^2 = dt^2 - a(t)^2 \left[ \frac{dr^2}{1-kr^2} + r^2\, d\theta^2 + r^2 \sin^2\theta\, d\phi^2 \right] \; . \qquad (1.12)$$
From Eq.(1.9) we can read off the corresponding metric including the scale factor,
$$g_{\mu\nu} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -\dfrac{a^2}{1-kr^2} & 0 & 0 \\ 0 & 0 & -a^2 r^2 & 0 \\ 0 & 0 & 0 & -a^2 r^2 \sin^2\theta \end{pmatrix} \; . \qquad (1.13)$$
Now, the time-dependent scale factor a(t) indicates a motion of objects in the Universe, r(t) → a(t)r(t). If we
look at objects with no relative motion except for the expanding Universe, we can express Hubble’s law given in
Eq.(1.1) in terms of
$$r'(t) \stackrel{!}{=} a(t)\, r \qquad \Leftrightarrow \qquad \dot{r}'(t) = \dot{a}(t)\, r \stackrel{!}{=} H(t)\, r'(t) = H(t)\, a(t)\, r \qquad \Leftrightarrow \qquad H(t) = \frac{\dot{a}(t)}{a(t)} \; . \qquad (1.14)$$
This relation reads like a linearized treatment of a(t), because it depends only on the first derivative ȧ(t). However,
higher derivatives of a(t) appear through a possible time dependence of the Hubble constant H(t). From the above
relation we can learn another, basic aspect of cosmology: we can describe the evolution of the Universe in terms of
the time t, the scale factor a(t), or the Hubble constant H(t).
Which of these concepts we prefer depends on the kind of observations we want to link. Clearly, all of them
should be interchangeable. For now we will continue with time.
Assuming the general metric of Eq.(1.12) we can solve Einstein’s equation including the coupling to matter
$$R_{\mu\nu}(t) - \frac{1}{2}\, g_{\mu\nu}(t)\, R(t) + \Lambda(t)\, g_{\mu\nu}(t) = \frac{T_{\mu\nu}(t)}{M_{\text{Pl}}^2} \; . \qquad (1.15)$$
The energy-momentum tensor includes the energy density ρt = T00 and the corresponding pressure p. The latter is
defined as the direction-independent contribution to the diagonal entries Tjj = pj of the energy-momentum tensor.
The Ricci tensor Rµν and Ricci scalar R = g µν Rµν are defined in terms of the metric; their explicit forms are one
of the main topics of a lecture on general relativity. In terms of the scale factor the Ricci tensor reads
$$R_{00}(t) = -\frac{3\ddot{a}(t)}{a(t)} \qquad \text{and} \qquad R_{ij}(t) = \delta_{ij} \left[ 2\dot{a}(t)^2 + a(t)\,\ddot{a}(t) \right] \; . \qquad (1.16)$$
If we use the 00 component of Einstein’s equation to determine the variable scale factor a(t) , we arrive at the
Friedmann equation
$$\frac{\dot{a}(t)^2}{a(t)^2} + \frac{k}{a(t)^2} = \frac{\rho_t(t)}{3M_{\text{Pl}}^2} := \frac{\rho_m(t) + \rho_r(t) + \rho_\Lambda(t)}{3M_{\text{Pl}}^2} \qquad \text{with} \quad \rho_\Lambda(t) := \Lambda(t)\, M_{\text{Pl}}^2 = 3 H_0^2 M_{\text{Pl}}^2\, \Omega_\Lambda(t) \; , \qquad (1.17)$$
with k defined in Eq.(1.11). A similar, second condition from the symmetry of the energy-momentum tensor and
its derivatives reads
$$\frac{2\ddot{a}(t)}{a(t)} + \frac{\dot{a}(t)^2}{a(t)^2} + \frac{k}{a(t)^2} = -\frac{p(t)}{M_{\text{Pl}}^2} \; . \qquad (1.18)$$
If we use the quasi-linear relation Eq.(1.14) and define the time-dependent critical total density of the Universe
following Eq.(1.8), we can write the Friedmann equation as
$$H(t)^2 + \frac{k}{a(t)^2} = \frac{\rho_t(t)}{3M_{\text{Pl}}^2} \qquad \Leftrightarrow \qquad 1 + \frac{k}{H(t)^2\, a(t)^2} = \frac{\rho_t(t)}{\rho_c(t)} =: \Omega_t(t) \qquad \text{with} \quad \rho_c(t) := 3 H(t)^2 M_{\text{Pl}}^2 \; . \qquad (1.19)$$
This is the actual definition of the critical density ρ_c(t). It means that k is determined by the time-dependent total
energy density of the Universe,

$$k = H(t)^2\, a(t)^2 \left( \Omega_t(t) - 1 \right) \; . \qquad (1.20)$$

This expression holds at all times t, including today, t_0. For Ω_t > 1 the curvature is positive, k > 0, which means
that the boundaries of the Universe are well defined. Below the critical density the curvature is negative. In passing
we note that we can identify
The two separate equations Eq.(1.17) and Eq.(1.18) include not only the energy and matter densities, but also the
pressure. Combining them we find

$$\frac{\ddot{a}(t)}{a(t)} = -\frac{\rho_t(t) + 3p(t)}{6 M_{\text{Pl}}^2} \; . \qquad (1.22)$$

The pressure of each component is linked to its energy density through the equation of state

$$p_j = w_j\, \rho_j \qquad \text{with} \quad w_j = \begin{cases} 0 & \text{non-relativistic matter} \\ 1/3 & \text{relativistic radiation} \\ -1 & \text{vacuum energy.} \end{cases} \qquad (1.23)$$

It is crucial for our understanding of the matter content of the Universe. If we can measure w, it will tell us what
the energy or matter density of the Universe consists of.
Following the logic of describing the Universe in terms of the variable scale factor a(t), we can replace the
quasi-linear description in Eq.(1.14) with a full Taylor series for a(t) around the current value a0 and in terms of
H0 . This will allow us to see the drastic effects of the different equations of state in Eq.(1.23),
$$a(t) - a_0 = \dot{a}(t_0)\, (t-t_0) + \frac{1}{2}\,\ddot{a}(t_0)\, (t-t_0)^2 + \mathcal{O}\left( (t-t_0)^3 \right) \equiv a_0 H_0\, (t-t_0) - \frac{1}{2}\, a_0\, q_0 H_0^2\, (t-t_0)^2 + \mathcal{O}\left( (t-t_0)^3 \right) \; , \qquad (1.24)$$
1.1 Expanding Universe
implicitly defining q0 . The units are correct, because the Hubble constant defined in Eq.(1.1) is measured in
energy. The pre-factors in the quadratic term are historic, as is the name deceleration parameter for q0 . Combined
with our former results we find for the quadratic term
$$q_0 = -\frac{\ddot{a}(t_0)}{a_0 H_0^2} \stackrel{\text{Eq.(1.22)}}{=} \frac{\rho_t(t_0) + 3p(t_0)}{6 H_0^2 M_{\text{Pl}}^2} = \frac{1}{6 H_0^2 M_{\text{Pl}}^2} \left[ \rho_t(t_0) + 3\sum_j p_j(t_0) \right] \stackrel{\text{Eq.(1.23)}}{=} \frac{1}{2} \left[ \Omega_t(t_0) + 3\sum_j \Omega_j(t_0)\, w_j \right] \; . \qquad (1.25)$$
The sum includes the three components contributing to the total energy density of the Universe, as listed in
Eq.(1.31). Negative values of w, corresponding to a Universe dominated by its vacuum energy, can lead to negative
values of q_0 and in turn to an accelerated expansion beyond the linear Hubble law. This is the basis for a
fundamental feature in the evolution of the Universe, called inflation.
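To see the sign flip of q_0 at work, we can evaluate Eq.(1.25) for an illustrative composition of the Universe; the density values below are round numbers of our choosing, not measurements quoted in the text:

```python
# Deceleration parameter from Eq.(1.25): q0 = [Omega_t + 3 sum_j Omega_j w_j] / 2.
def q0(omegas_w):
    """omegas_w: list of (Omega_j, w_j) pairs for all components of the Universe."""
    omega_t = sum(om for om, _ in omegas_w)
    return 0.5 * (omega_t + 3.0 * sum(om * w for om, w in omegas_w))

# Illustrative flat Universe: matter (w=0) and vacuum energy (w=-1).
components = [(0.3, 0.0), (0.7, -1.0)]
print(q0(components))   # negative, i.e. accelerated expansion
```

A matter-only Universe gives q_0 = 1/2 and decelerates; once the vacuum energy dominates, q_0 turns negative.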
To be able to track the evolution of the Universe in terms of the scale factor a(t) rather than time, we next compute
the time dependence of a(t). As a starting point, the Friedmann equation gives us a relation between a(t) and ρ(t).
What we need is a relation of ρ and t, or alternatively a second relation between a(t) and ρ(t). Because we skip as
much of general relativity as possible we leave it as an exercise to show that from the vanishing covariant
derivative of the energy-momentum tensor, which gives rise to Eq.(1.18), we can also extract the time dependence
of the energy and matter densities,
$$\frac{d}{dt} \left( \rho_j\, a^3 \right) = -p_j\, \frac{d}{dt}\, a^3 \; . \qquad (1.26)$$
It relates the energy inside the volume a³ to the work done through the pressure p_j. From this conservation law we can
extract the a-dependence of the energy and matter densities,

$$\rho_j(a) \propto a^{-3(1+w_j)} \; . \qquad (1.27)$$
This functional dependence is not yet what we want. To compute the time dependence of the scale factor a(t) we
use a power-law ansatz for a(t) to find
$$\frac{\ddot{a}(t)}{a(t)} \stackrel{\text{Eq.(1.22)}}{=} -\frac{1+3w_j}{6 M_{\text{Pl}}^2}\, \rho_j(t) \stackrel{\text{Eq.(1.27)}}{=} -\frac{1+3w_j}{6 M_{\text{Pl}}^2}\, C\, a(t)^{-3(1+w_j)}$$
$$\Leftrightarrow \quad \ddot{a}(t)\, a(t)^{2+3w_j} = \text{const} \quad \Leftrightarrow \quad t^{\beta-2}\, t^{\beta(2+3w_j)} = \text{const} \qquad \text{assuming} \; a \propto t^\beta$$
$$\Leftrightarrow \quad t^{3\beta + 3w_j\beta - 2} = \text{const} \equiv t^0 \quad \Leftrightarrow \quad \beta = \frac{2}{3+3w_j} \; . \qquad (1.28)$$
We can translate the result for a(t) ∝ tβ into the time-dependent Hubble constant
$$H(t) = \frac{\dot{a}(t)}{a(t)} \sim \frac{\beta\, t^{\beta-1}}{t^\beta} = \frac{\beta}{t} = \frac{2}{3+3w_j}\,\frac{1}{t} \; . \qquad (1.29)$$
The problem with these formulas is that the power-law ansatz and the form of H(t) obviously fails for the vacuum
energy with w = −1. For an energy density only based on vacuum energy and neglecting any curvature, k ≡ 0, in
the absence of matter, Eq.(1.14) together with the Friedmann equation becomes
$$H(t)^2 = \frac{\dot{a}(t)^2}{a(t)^2} \stackrel{\text{Eq.(1.17)}}{=} \frac{\rho_\Lambda(t)}{3M_{\text{Pl}}^2} = \frac{\Lambda(t)}{3} \qquad \Leftrightarrow \qquad a(t) = e^{H(t)\, t} = e^{\sqrt{\Lambda(t)/3}\; t} \; . \qquad (1.30)$$
Combining this result and Eq.(1.28), the functional dependence of a(t) reads
$$a(t) \sim \begin{cases} t^{2/(3+3w_j)} = t^{2/3} & \text{non-relativistic matter} \\ t^{1/2} & \text{relativistic radiation} \\ e^{\sqrt{\Lambda(t)/3}\; t} & \text{vacuum energy.} \end{cases} \qquad (1.31)$$

$$H(t) = \frac{2}{3+3w}\,\frac{1}{t} \sim \begin{cases} \dfrac{2}{3t} & \text{non-relativistic matter} \\[1ex] \dfrac{1}{2t} & \text{relativistic radiation} \\[1ex] \sqrt{\dfrac{\Lambda(t)}{3}} & \text{vacuum energy.} \end{cases} \qquad (1.32)$$
From the above list we have now understood the relation between the time t, the scale factor a(t), and the Hubble
constant H(t). An interesting aspect is that for the vacuum energy case w = −1 the change in the scale factor and
with it the expansion of the Universe does not follow a power law, but an exponential law, defining an inflationary
expansion. What is missing from our list at the beginning of this section is the temperature as the parameter
describing the evolution of the Universe. Here we need to quote a thermodynamic result, namely that for constant
entropy²

$$a(T) \propto \frac{1}{T} \; . \qquad (1.33)$$

This relation is correct as long as the degrees of freedom describing the energy density of the Universe do not change.
The easy reference point is a_0 = 1 today. We will use an improved scaling relation in Section 3.
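The solutions collected in Eq.(1.31) and Eq.(1.32) can be summarized in two one-line functions; a sketch, valid for the power-law cases w > −1:

```python
# Power-law solutions of Eq.(1.28): a ~ t^beta with beta = 2/(3+3w), and the
# corresponding Hubble rate H = beta/t from Eq.(1.29).
def beta(w):
    """Expansion exponent a ~ t^beta for equation-of-state parameter w > -1."""
    return 2.0 / (3.0 + 3.0 * w)

def hubble(w, t):
    """H(t) = beta/t for a power-law expansion; fails for vacuum energy, w = -1."""
    return beta(w) / t

print(beta(0.0))        # non-relativistic matter: 2/3
print(beta(1.0 / 3.0))  # relativistic radiation: 1/2
```

The w = −1 case has to be treated separately, as in Eq.(1.30), because the denominator 3 + 3w vanishes there.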
Finally, we can combine several aspects described in these notes and talk about distance measures and their link to
(i) the curved space-time metric, (ii) the expansion of the Universe, and (iii) the energy and matter densities. We
will need it to discuss the cosmic microwave background in Section 1.4. As a first step, we compute the apparent
distance along a line of sight, defined by dφ = 0 = dθ. This is the path of a traveling photon. Based on the
time-dependent curved space-time metric of Eq.(1.12) we find
$$0 \stackrel{!}{=} ds^2 = dt^2 - a(t)^2\, \frac{dr^2}{1-kr^2} \qquad \Leftrightarrow \qquad dt = a(t)\, \frac{dr}{\sqrt{1-kr^2}} \; . \qquad (1.34)$$
For the definition of the co-moving distance we integrate along this path,
$$\frac{d_c}{a_0} := \int \frac{dr}{\sqrt{1-kr^2}} = \int \frac{dt}{a(t)} = \int \frac{1}{\dot{a}(t)\, a(t)}\, da \; . \qquad (1.35)$$
² This is the only thermodynamic result which we will (repeatedly) use in these notes.
1.2 Radiation and matter
The distance measure we obtain from integrating dr in the presence of the curvature k is called the co-moving
distance. It is the distance a photon traveling at the speed of light can reach in a given time. We can evaluate the
integrand using the Friedmann equation, Eq.(1.17), and the relation ρ a^{3(1+w)} = const,
$$\dot{a}(t)^2 = a(t)^2\, \frac{\rho_t(t)}{3M_{\text{Pl}}^2} - k \stackrel{\text{Eq.(1.27)}}{=} \frac{\rho_m(t_0)\, a_0^3}{3M_{\text{Pl}}^2\, a(t)} + \frac{\rho_r(t_0)\, a_0^4}{3M_{\text{Pl}}^2\, a(t)^2} + \frac{\rho_\Lambda\, a(t)^2}{3M_{\text{Pl}}^2} - k$$
$$\stackrel{\text{Eq.(1.20)}}{=} H_0^2 \left[ \Omega_m(t_0)\, \frac{a_0^3}{a(t)} + \Omega_r(t_0)\, \frac{a_0^4}{a(t)^2} + \Omega_\Lambda\, a(t)^2 - \left( \Omega_t(t_0) - 1 \right) a_0^2 \right]$$
$$\Rightarrow \quad \frac{1}{\dot{a}(t)\, a(t)} = \frac{1}{H_0 \left[ \Omega_m(t_0)\, a_0^3\, a(t) + \Omega_r(t_0)\, a_0^4 + \Omega_\Lambda\, a(t)^4 - \left( \Omega_t(t_0) - 1 \right) a_0^2\, a(t)^2 \right]^{1/2}} \; . \qquad (1.36)$$

$$\frac{d_c}{a_0} = \frac{1}{H_0} \int \frac{da}{\left[ \Omega_t(t_0) \left( a_0^3\, a(t) - a_0^2\, a(t)^2 \right) - \Omega_\Lambda \left( a_0^3\, a(t) - a(t)^4 \right) + a_0^2\, a(t)^2 \right]^{1/2}} \; . \qquad (1.37)$$
Here we assume (and confirm later) that today Ω_r(t_0) can be neglected and hence Ω_t(t_0) = Ω_m(t_0) + Ω_Λ. What is
important to remember is that, looking back, the variable scale factor is always a(t) < a_0. The integrand only depends
on the mass and energy densities describing today's Universe, as well as today's Hubble constant. Note that the
co-moving distance integrates the effect of time passing while we move along the light cone in Minkowski space.
It would therefore be well suited, for example, to see which regions of the Universe can be causally connected.
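As a sketch of how one would evaluate the co-moving distance numerically, we can integrate Eq.(1.37) for a flat Universe with a_0 = 1, Ω_r = 0 and illustrative densities Ω_m = 0.3, Ω_Λ = 0.7 (our choice, not values quoted in the text); distances come out in units of the Hubble length 1/H_0:

```python
import math

# Co-moving distance of Eq.(1.37) for a flat Universe, a0 = 1, Omega_r = 0.
# The bracket then collapses to Omega_m * a + Omega_L * a^4.
def comoving_distance(a_emit, omega_m=0.3, omega_l=0.7, steps=10000):
    """Trapezoid integration of da / sqrt(Omega_m a + Omega_L a^4) from a_emit to 1."""
    h = (1.0 - a_emit) / steps
    total = 0.0
    for i in range(steps + 1):
        a = a_emit + i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight / math.sqrt(omega_m * a + omega_l * a**4)
    return total * h   # in units of the Hubble length 1/H0

print(comoving_distance(0.5))   # photon emitted when the Universe was half its size
```

Photons emitted earlier (smaller a) accumulate a larger co-moving distance, as expected from the integrand.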
Another distance measure based on Eq.(1.11) assumes the same line of sight dφ = 0 = dθ, but also a synchronized
time at both ends of the measurement, dt = 0. This defines a purely geometric, instantaneous distance of two points
in space,
$$d\theta = d\phi = dt = 0 \qquad \Rightarrow \qquad ds(t) = -a(t)\, \frac{dr}{\sqrt{1-kr^2}} \qquad \text{with} \quad k \stackrel{\text{Eq.(1.20)}}{=} H_0^2\, a_0^2 \left( \Omega_t(t_0) - 1 \right)$$
$$\Rightarrow \quad d_A^c(t) := \int_d^0 ds = -a(t) \int_d^0 \frac{dr}{\sqrt{1-kr^2}} = \begin{cases} \dfrac{a(t)}{\sqrt{k}}\, \arcsin\!\left( \sqrt{k}\, d \right) & k > 0 \\[1ex] a(t)\, d & k = 0 \\[1ex] \dfrac{a(t)}{\sqrt{|k|}}\, \text{arcsinh}\!\left( \sqrt{|k|}\, d \right) & k < 0 \; . \end{cases} \qquad (1.38)$$
This angular diameter distance is time dependent, but because it fixes the time at both ends we can use it for
geometrical analyses. It depends on the assumed constant distance d, which can for example be identified with the
co-moving distance d ≡ dc . The curvature is again expressed in terms of today’s energy density and Hubble
constant.
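The three curvature cases of Eq.(1.38) translate into a short function; a sketch, where for k > 0 the arcsine restricts us to √k·d ≤ 1:

```python
import math

# Angular diameter distance of Eq.(1.38). k carries units of 1/distance^2 and
# d is the constant distance appearing in the formula; for k > 0 the formula
# only makes sense for sqrt(k)*d <= 1.
def d_angular(a, d, k):
    """d_A^c(t) for scale factor a, distance d, curvature k."""
    if k > 0:
        return a / math.sqrt(k) * math.asin(math.sqrt(k) * d)
    if k < 0:
        return a / math.sqrt(-k) * math.asinh(math.sqrt(-k) * d)
    return a * d
```

For small curvature both curved cases smoothly approach the flat result a·d, with positive curvature enhancing and negative curvature reducing the distance.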
To understand the implications of the evolution of the Universe following Eq.(1.27), we can look at the
composition of the Universe in terms of relativistic states (radiation), non-relativistic states (matter including dark
matter), and a cosmological constant Λ. Figure 1 shows that at very large temperatures the Universe is dominated
by relativistic states. When the variable scale factor a increases, the relativistic energy density drops like 1/a⁴. At
the same time, the non-relativistic energy density drops like 1/a³. This means that as long as the relativistic energy
density dominates, the relative fraction of matter increases linearly in a. Radiation and matter contribute the same
amount to the entire energy density around a_eq = 3·10⁻⁴, a point known as matter–radiation equality. The
cosmological constant does not change, which means eventually it will dominate. This starts happening around
now.
We know experimentally that most of the matter content in the Universe is not baryonic, but dark matter. To
describe its production in our expanding Universe we need to apply some basic statistical physics and
thermodynamics. We start with the observation that according to Figure 1 in the early Universe neither the
curvature k nor the vacuum energy ρ_Λ play a role. This means that the relevant terms in the Friedmann equation
Eq.(1.17) read

$$H(t)^2 = \frac{\dot{a}(t)^2}{a(t)^2} = \frac{\rho_m(t) + \rho_r(t)}{3 M_{\text{Pl}}^2} \; . \qquad (1.39)$$

This form will be the basis of our calculation in this section. The main change with respect to our above discussion
will be a shift to temperature rather than time as an evolution variable.
For relativistic and non-relativistic particles or radiation we can use a unified picture in terms of their quantum
fields. What we have to distinguish are fermion and boson fields and the temperature T relative to their respective
masses m. The number of degrees of freedom are counted by a factor g, for example accounting for the
anti-particle, the spin, or the color states. For example for the photon we have gγ = 2, for the electron and positron
ge = 2 each, and for the left-handed neutrino gν = 1. If we neglect the chemical potential because we assume to
Figure 1: Composition of our Universe as a function of the scale factor. Figure from Daniel Baumann’s lecture
notes [1].
be either clearly non-relativistic or clearly relativistic, and we set k_B = 1, we (or better Mathematica) find
$$n^{\text{eq}}(T) = g \int \frac{d^3p}{(2\pi)^3}\, \frac{1}{e^{E/T} \pm 1} \qquad \text{for fermions/bosons} \qquad (1.40)$$
$$= g \int_m^\infty \frac{4\pi\, E\, dE\, \sqrt{E^2 - m^2}}{(2\pi)^3 \left( e^{E/T} \pm 1 \right)} \qquad \text{using} \;\; E^2 = p^2 + m^2 \;\; \text{and} \;\; p\, dp = E\, dE$$
$$= \begin{cases} g \left( \dfrac{mT}{2\pi} \right)^{3/2} e^{-m/T} & \text{non-relativistic states} \;\; T \ll m \\[1ex] \dfrac{\zeta_3}{\pi^2}\, g\, T^3 & \text{relativistic bosons} \;\; T \gg m \\[1ex] \dfrac{3}{4}\, \dfrac{\zeta_3}{\pi^2}\, g\, T^3 & \text{relativistic fermions} \;\; T \gg m \; . \end{cases}$$
The Riemann zeta function has the value ζ3 = 1.2. As expected, the quantum-statistical nature only matters once
the states become relativistic and probe the relevant energy ranges. Similarly, we can compute the energy density
in these different cases.
$$\rho^{\text{eq}}(T) = g \int \frac{d^3p}{(2\pi)^3}\, \frac{E}{e^{E/T} \pm 1} = g \int_m^\infty \frac{4\pi\, E\, dE\; E\, \sqrt{E^2 - m^2}}{(2\pi)^3 \left( e^{E/T} \pm 1 \right)} \qquad (1.41)$$
$$= \begin{cases} m\, g \left( \dfrac{mT}{2\pi} \right)^{3/2} e^{-m/T} & \text{non-relativistic states} \;\; T \ll m \\[1ex] \dfrac{\pi^2}{30}\, g\, T^4 & \text{relativistic bosons} \;\; T \gg m \\[1ex] \dfrac{7}{8}\, \dfrac{\pi^2}{30}\, g\, T^4 & \text{relativistic fermions} \;\; T \gg m \; . \end{cases}$$
In the non-relativistic case the scaling of ρ relative to the number density is given by an additional factor
m ≫ T. In the relativistic case the additional factor is the temperature T, resulting in a Stefan–Boltzmann scaling
of the energy density, ρ ∝ T⁴. To compute the pressure we can simply use the equation of state, Eq.(1.23), with
w = 1/3.
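The limiting forms of Eq.(1.40) can be verified by integrating the phase-space distributions numerically; a self-contained sketch for g = 1, written in units of the temperature:

```python
import math

# Numerical check of the limits of Eq.(1.40). With x = E/T the number density is
# n/T^3 = 1/(2 pi^2) Int_{m/T}^inf dx x sqrt(x^2 - (m/T)^2) / (exp(x) -+ 1),
# which should approach zeta_3 T^3/pi^2 for relativistic bosons and 3/4 of that
# for relativistic fermions.
def n_over_T3(m_over_T, boson=True, steps=20000, cutoff=40.0):
    """Trapezoid integration; the integrand dies off exponentially above the cutoff."""
    h = (cutoff - m_over_T) / steps
    total = 0.0
    for i in range(steps + 1):
        x = m_over_T + i * h
        p = math.sqrt(max(x * x - m_over_T * m_over_T, 0.0))
        den = math.exp(x) + (-1.0 if boson else 1.0)
        f = 0.0 if den == 0.0 else x * p / den   # integrand vanishes at x = 0
        total += (0.5 if i in (0, steps) else 1.0) * f
    return total * h / (2.0 * math.pi**2)

print(n_over_T3(0.0, boson=True) * math.pi**2)       # -> zeta_3 = 1.202
print(n_over_T3(0.0, boson=False) / n_over_T3(0.0))  # -> 3/4
```

The relativistic limits come out to the quoted values; the non-relativistic limit approaches the Maxwell–Boltzmann form of Eq.(1.40) only up to corrections of order T/m.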
The number of active degrees of freedom in our system depends on the temperature. As an example, above the
electroweak scale v = 246 GeV the effective number of degrees of freedom includes all particles of the Standard
Model,

$$g_b = 28 \;\; \text{(gluons, photon, } W^\pm, Z, \text{ Higgs)} \qquad \text{and} \qquad g_f = 90 \;\; \text{(quarks, charged leptons, neutrinos)} \; . \qquad (1.42)$$
Often, the additional factor 7/8 for the fermions in Eq.(1.41) is absorbed in an effective number of degrees of
freedom, implicitly defined through the unified relation
$$\rho_r = \frac{\pi^2}{30}\, g_{\text{eff}}(T)\, T^4 \; , \qquad (1.43)$$
with the relativistic contribution to the matter density defined in Eq.(1.17). Strictly speaking, this relation between
the relativistic energy density and the temperature only holds if all states contributing to ρr have the same
temperature, i.e. are in thermal equilibrium with each other. This does not have to be the case. To include different
states with different temperatures we define geff as a weighted sum with the specific temperatures of each
component, namely
$$g_{\text{eff}}(T) = \sum_{\text{bosons}} g_b\, \frac{T_b^4}{T^4} + \frac{7}{8} \sum_{\text{fermions}} g_f\, \frac{T_f^4}{T^4} \; . \qquad (1.44)$$
For the entire Standard Model particle content at equal temperatures this gives
$$g_{\text{eff}}(T > 175~\text{GeV}) \stackrel{\text{Eq.(1.42)}}{=} 28 + \frac{7}{8} \cdot 90 = 106.75 \; . \qquad (1.45)$$
When we reduce the temperature, this number of active degrees of freedom changes whenever a particle species
vanishes at the respective threshold T = m. This curve is illustrated in Figure 2. For today’s value we will use the
value
Finally, we can insert the relativistic matter density given in Eq.(1.43) into the Friedmann equation Eq.(1.39) and
find for the relativistic, radiation-dominated case
$$H(t)^2 \stackrel{\text{Eq.(1.32)}}{=} \left( \frac{1}{2t} \right)^2 \stackrel{\text{Eq.(1.39)}}{=} \frac{\rho_r}{3M_{\text{Pl}}^2} \stackrel{\text{Eq.(1.43)}}{=} \frac{1}{3M_{\text{Pl}}^2}\, \frac{\pi^2}{30}\, g_{\text{eff}}(T)\, T^4 = \left( \frac{\pi \sqrt{g_{\text{eff}}}\; T^2}{\sqrt{90}\, M_{\text{Pl}}} \right)^2 \; . \qquad (1.47)$$
This relation is important, because it links time, temperature, and Hubble constant as three possible scales in the
evolution of our Universe in the relativistic regime. The one thing we need to check is if all relativistic relics have
the same temperature.
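Equation (1.47) lets us attach a clock to a temperature. As an illustration, we can evaluate it at T = 1 MeV; the choice of temperature and the corresponding g_eff = 10.75 (photons, electrons and positrons, three neutrino species) are standard values and not quoted in the text above:

```python
import math

# Eq.(1.47): H = pi sqrt(g_eff) T^2 / (sqrt(90) MPl), and t = 1/(2H) in the
# radiation era, Eq.(1.32). Illustrative evaluation at T = 1 MeV.
M_Pl_eV = 2.4e27       # reduced Planck mass, Eq.(1.4)
hbar_eV_s = 6.582e-16  # hbar in eV s, to convert 1/eV to seconds

def hubble_rad(T_eV, g_eff):
    """Hubble rate in eV during radiation domination."""
    return math.pi * math.sqrt(g_eff) * T_eV**2 / (math.sqrt(90.0) * M_Pl_eV)

H = hubble_rad(1.0e6, 10.75)        # T = 1 MeV, g_eff = 10.75 (assumed value)
t_seconds = hbar_eV_s / (2.0 * H)   # t = 1/(2H), converted from 1/eV to s

print(f"H = {H:.2e} eV, t = {t_seconds:.2f} s")
```

The Universe turns out to be slightly less than one second old at MeV temperatures, right before nucleosynthesis.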
Figure 2: Number of effective degrees of freedom geff as a function of the temperature, assuming the Standard
Model particle content. Figure from Daniel Baumann’s lecture notes [1].
1.3 Relic photons
Neutrinos, photons, and electrons maintain thermal equilibrium through the scattering processes
ν̄e e− → W ∗ → ν̄e e− and e− γ → e∗ → e− γ . (1.48)
For low temperatures or energies, m_ν ≪ T, E ≪ m_W, the two cross sections are approximately

$$\sigma_{\nu e}(T) = \frac{\pi \alpha^2\, T^2}{s_w^4\, m_W^4} \qquad \qquad \sigma_{\gamma e}(T) = \frac{\pi \alpha^2}{m_e^2} \; . \qquad (1.49)$$
The coupling strength g ≡ e/sin θ_w ≡ e/s_w with s_w² ≈ 1/4 appears in the weak cross section, while α = e²/(4π) ≈ 1/137.
The geometric factor π comes from the angular integration and helps us get to the correct approximate
numbers. The photons are more strongly coupled to the electron bath, which means they will decouple last, and in
their decoupling we do not have to consider the neutrinos anymore. The interaction rate
$$\Gamma := \sigma\, v\, n \qquad (1.50)$$
describes the probability for example of the neutrino or photon scattering process in Eq.(1.48) to happen. It is a
combination of the cross section, the relevant number density and the velocity, measured in powers of temperature
or energy, or inverse time. In our case, the relativistic relics move at the speed of light. Because the Universe
expands, the density of neutrinos, photons, and charged leptons will at some point drop to a point where the
processes in Eq.(1.48) hardly occur. They will stop maintaining the equilibrium between photons, neutrinos, and
charged leptons roughly when the respective interaction rate drops below the Hubble expansion. This gives us the
condition
$$\frac{\Gamma(T_{\text{dec}})}{H(T_{\text{dec}})} \stackrel{!}{=} 1 \; . \qquad (1.51)$$
We should be able to compute the photon decoupling from the electrons based on the above definition of T_dec and
the photon–electron or Thomson scattering rate in Eq.(1.49). The problem is that, as it will turn out, at the time of
photon decoupling the electrons are no longer the relevant states. Between temperatures of 1 MeV and the relevant
eV scale for photon decoupling, nucleosynthesis will have happened, and the early Universe will be made up of
atoms and photons, with a small number of free electrons. Based on this, we can very roughly guess the
temperature at which the Universe becomes transparent to photons from the fact that most of the electrons are
bound in hydrogen atoms. The ionization energy of hydrogen is 13.6 eV, which is our first guess for T_dec. On the
other hand, the photon energies will follow a Boltzmann distribution. This means that for a given temperature
T_dec there will be a high-energy tail of photons with much larger energies. To avoid having too many photons still
ionizing the hydrogen atoms, the photon temperature should therefore come out as T_dec ≲ 13.6 eV.
Going back to the defining relation in Eq.(1.51), we can circumvent the problem of the unknown electron density
by expressing the density of free electrons first relative to the density of electrons bound in mostly hydrogen, with
a measured suppression factor ne /nB ≈ 10−2 . Moreover, we can relate the full electron density or the baryon
density nB to the photon density nγ through the measured baryon–to–photon ratio. In combination, this gives us
for the time of photon decoupling
$$n_e(T_{\text{dec}}) = \frac{n_e}{n_B}(T_{\text{dec}})\; n_B(T_{\text{dec}}) = \frac{n_e}{n_B}(T_{\text{dec}})\; \frac{n_B}{n_\gamma}(T_{\text{dec}})\; n_\gamma(T_{\text{dec}}) = 10^{-2} \cdot 10^{-10}\; \frac{2\zeta_3\, T_{\text{dec}}^3}{\pi^2} \; . \qquad (1.53)$$
At this point we only consider the ratio n_B/n_γ ≈ 10⁻¹⁰ a measurable quantity; its meaning will be the topic of
Section 2.4. With this estimate of the relevant electron density we can compute the temperature at the point of
photon decoupling. For the Hubble constant we need the number of active degrees of freedom in the absence of
neutrinos, just including electrons, positrons, and photons,
$$g_{\text{eff}}(T_{\text{dec}}) = \frac{7}{8}\, (2 + 2) + 2 = 5.5 \; . \qquad (1.54)$$
Inserting the Hubble constant from Eq.(1.47) and the cross section from Eq.(1.49) gives us the condition
$$\frac{\Gamma_\gamma}{H} = \frac{\pi \alpha^2}{m_e^2}\; 10^{-12}\; \frac{2\zeta_3\, T^3}{\pi^2}\; \frac{\sqrt{90}\, M_{\text{Pl}}}{\pi \sqrt{g_{\text{eff}}(T)}\, T^2} = \frac{6\sqrt{10}\, \zeta_3}{\pi^2}\; 10^{-12}\, \alpha^2\, \frac{1}{\sqrt{g_{\text{eff}}(T)}}\, \frac{M_{\text{Pl}}\, T}{m_e^2} \stackrel{!}{=} 1$$
$$\Leftrightarrow \quad T_{\text{dec}} = 10^{12}\; \frac{\pi^2}{6\sqrt{10}\, \zeta_3}\; \frac{\sqrt{g_{\text{eff}}(T_{\text{dec}})}\; m_e^2}{M_{\text{Pl}}\, \alpha^2} \approx (0.1 \,...\, 1)~\text{eV} \; . \qquad (1.55)$$
As discussed above, to avoid having too many photons still ionizing the hydrogen atoms, the photon temperature
indeed is Tdec ≈ 0.26 eV < 13.6 eV.
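The estimate of Eq.(1.55) is easy to evaluate; given the rough inputs 10⁻² and 10⁻¹⁰ it can only be trusted to land in the right ballpark, the eV range, not on the precise value:

```python
import math

# Order-of-magnitude evaluation of Eq.(1.55):
# T_dec = 10^12 * pi^2 * sqrt(g_eff) * m_e^2 / (6 sqrt(10) zeta_3 MPl alpha^2).
m_e_eV = 0.511e6                 # electron mass
M_Pl_eV = 2.4e27                 # reduced Planck mass, Eq.(1.4)
alpha = 1.0 / 137.0
zeta3 = 1.20206
g_eff = 5.5                      # Eq.(1.54): photons, electrons, positrons

T_dec = (1e12 * math.pi**2 * math.sqrt(g_eff) * m_e_eV**2
         / (6.0 * math.sqrt(10.0) * zeta3 * M_Pl_eV * alpha**2))
print(f"T_dec ~ {T_dec:.1f} eV")
```

The estimate lands at O(1) eV, comfortably below the 13.6 eV ionization energy, in line with the argument about the high-energy tail of the photon distribution.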
These decoupled photons form the cosmic microwave background (CMB), which will be the main topic of
Section 1.4. The main property of this photon background, which we will need throughout these notes, is its current
temperature. We can compute T_{0,γ} from the temperature at the point of decoupling, when we account for the
expansion of the Universe between T_dec and now. We can for example use the time evolution of the Hubble
constant H ∝ T² from Eq.(1.47) to compute the photon temperature today. We find the experimentally measured
value of
$$T_{0,\gamma} = 2.4\cdot 10^{-4}~\text{eV} = 2.73~\text{K} \approx \frac{T_{\text{dec}}}{1000} \; . \qquad (1.56)$$
This energy corresponds to a photon frequency around 60 GHz, which is in the microwave range and inspires the
name CMB. We can translate the temperature at the time of photon decoupling into the corresponding scale factor,
In Section 1.3 we have learned that at temperatures around 0.1 eV the thermal photons decoupled from the matter
in the Universe and have since then been streaming through the expanding Universe. This is why their temperature
has dropped to T0 = 2.4 · 10−4 eV now. We can think of the cosmic microwave background or CMB photons as
coming from a sphere of last scattering with the observer in the center. The photons stream freely through the
Universe, which means they come from this sphere straight to us.
The largest effect leading to a temperature fluctuation in the CMB photons is that the Earth moves through the
photon background, or any other background, at constant speed. We can subtract the corresponding dipole
contribution, because it does not tell us anything about fundamental cosmological parameters. The most important,
fundamental result is that after subtracting this dipole contribution the temperature on the surface of last scattering
only shows tiny variations around δT/T ≲ 10⁻⁵. The entire surface, rapidly moving away from us, should not be
causally connected, so what generated such a constant temperature? Our favorite explanation for this is a phase of
1.4 Cosmic microwave background
very rapid, inflationary expansion. This means that we postulate a fast enough expansion of the Universe,
such that the sphere of last scattering becomes causally connected. From Eq.(1.31) we know that such an
expansion will be driven not by matter but by a cosmological constant. The detailed structure of the CMB
background should therefore be a direct and powerful probe for essentially all parameters defined and discussed in
Section 1.
The main observable which the photon background offers is its temperature or energy — additional information,
for example about the polarization of the photons, is very interesting in general, but less important for dark matter
studies. Any effect which modifies this picture of an entirely homogeneous Universe made out of a thermal bath of
electrons, photons, neutrinos, and possibly dark matter particles should be visible as a modification of the constant
temperature over the sphere of last scattering. This means we are interested in analyzing temperature fluctuations
between points on this surface.
The appropriate observables describing a sphere are the angles θ and φ. Moreover, we know that spherical
harmonics are a convenient set of orthogonal basis functions which describe for example temperature variations on
a sphere,
$$\frac{\delta T(\theta,\phi)}{T_0} := \frac{T(\theta,\phi) - T_0}{T_0} = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m}(\theta,\phi) \; . \qquad (1.59)$$
The spherical harmonics are orthonormal, which means in terms of the integral over the full angle dΩ = dφd cos θ
∫ dΩ Y_{ℓm}(θ,φ) Y*_{ℓ'm'}(θ,φ) = δ_{ℓℓ'} δ_{mm'}
⇒ ∫ dΩ (δT(θ,φ)/T0) Y*_{ℓ'm'}(θ,φ) = Σ_{ℓm} a_{ℓm} ∫ dΩ Y_{ℓm}(θ,φ) Y*_{ℓ'm'}(θ,φ)   (using Eq.(1.59))
  = Σ_{ℓm} a_{ℓm} δ_{ℓℓ'} δ_{mm'} = a_{ℓ'm'} .   (1.60)
This is the inverse relation to Eq.(1.59), which allows us to compute the set of numbers a`m from a known
temperature map δT (θ, φ)/T0 .
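The projection of Eq.(1.60) can be checked numerically. Below is a minimal sketch, assuming a toy map that consists of a single Y_20 mode with amplitude 3·10⁻⁵; the grid resolution and the use of scipy's `sph_harm` (which takes the azimuthal angle before the polar one) are implementation choices, not part of the text.

```python
import numpy as np
from scipy.special import sph_harm

# Toy temperature map dT/T0: a single Y_20 mode (Y_20 is real) with amplitude 3e-5
theta = np.linspace(0.0, np.pi, 400)                  # polar angle
phi = np.linspace(0.0, 2*np.pi, 400, endpoint=False)  # azimuth (periodic)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
a20_in = 3e-5
dT = a20_in * sph_harm(0, 2, PH, TH).real             # scipy order: (m, l, azimuth, polar)

def alm(l, m):
    """Eq.(1.60): a_lm = integral dOmega (dT/T0) Y*_lm, by simple grid quadrature."""
    integrand = dT * np.conj(sph_harm(m, l, PH, TH)) * np.sin(TH)
    return np.sum(integrand) * (theta[1] - theta[0]) * (phi[1] - phi[0])

a20 = alm(2, 0)   # recovers the input amplitude ~3e-5
a31 = alm(3, 1)   # orthogonality: ~0
```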
For the function T(θ,φ) measured over the sphere of last scattering, we can ask the three questions which we
usually ask for distributions which we know are peaked: where does the peak sit, how wide is the distribution, and what is its shape?
For the CMB we assume that we already know the peak value T0 and that there is no valuable information in the
statistical distribution. This means that we can focus on the width or the variance of the temperature distribution.
Its square root defines the standard deviation. In terms of the spherical harmonics the variance reads
(1/4π) ∫ dΩ |δT(θ,φ)/T0|² = (1/4π) ∫ dΩ [ Σ_{ℓm} a_{ℓm} Y_{ℓm}(θ,φ) ] [ Σ_{ℓ'm'} a*_{ℓ'm'} Y*_{ℓ'm'}(θ,φ) ]
  = (1/4π) Σ_{ℓm,ℓ'm'} a_{ℓm} a*_{ℓ'm'} δ_{ℓℓ'} δ_{mm'} = (1/4π) Σ_{ℓm} |a_{ℓm}|² ,   (1.61)   (using Eq.(1.60))
We can further simplify this relation by our expectation for the distribution of the temperature deviations. We
remember for example from quantum mechanics that for the angular momentum the index m describes the angular
momentum in one specific direction. Our analysis of the surface of last scattering, just like the hydrogen atom
without an external magnetic field, does not have any special direction. This implies that the values of a`m do not
depend on the value of the index m; the sum over m should just become a sum over 2ℓ + 1 identical terms. We
therefore define the observed power spectrum as the average of the |a_{ℓm}|² over m,
C_ℓ := (1/(2ℓ+1)) Σ_{m=−ℓ}^{ℓ} |a_{ℓm}|²  ⇔  (1/4π) ∫ dΩ |δT(θ,φ)/T0|² = Σ_{ℓ=0}^{∞} ((2ℓ+1)/(4π)) C_ℓ .   (1.62)
The great simplification of this last assumption is that we now just analyze the discrete values C_ℓ as a function of
ℓ ≥ 0.
Note that we analyze the fluctuations averaged over the surface of last scattering, which gives us one curve C_ℓ for
discrete values ℓ ≥ 0. This curve is one measurement, which means none of its points have to perfectly agree with
the theoretical expectations. However, because of the averaging over m possible statistical fluctuations will cancel,
in particular for larger values of ℓ, where we average over more independent orientations.
We can compare the series in spherical harmonics Eq.(1.59) to a Fourier series. The latter will, for example,
analyze the frequencies contributing to a sound from a musical instrument. The discrete series of Fourier
coefficients tells us which frequency modes contribute how strongly to the sound or noise. The spherical harmonics
do something similar, which we can illustrate using the properties of the Y_{ℓ0}(θ,φ). Their explicit form in terms of
the associated Legendre polynomials P`m and the Legendre polynomials P` is
Y_{ℓm}(θ,φ) = (−1)^m e^{imφ} √[ (2ℓ+1)/(4π) · (ℓ−m)!/(ℓ+m)! ] P_{ℓm}(cos θ)
           = (−1)^m e^{imφ} √[ (2ℓ+1)/(4π) · (ℓ−m)!/(ℓ+m)! ] (−1)^m (1 − cos²θ)^{m/2} (d^m/d(cos θ)^m) P_ℓ(cos θ)
⇒ Y_{ℓ0}(θ,φ) = √[ (2ℓ+1)/(4π) ] P_ℓ(cos θ) .   (1.63)
The Legendre polynomials are for example defined through the Rodrigues formula,
P_ℓ(cos θ) = (1/(2^ℓ ℓ!)) (d^ℓ/d(cos θ)^ℓ) (cos²θ − 1)^ℓ = C cos^ℓ θ + ··· ,   (1.64)
with the normalization P` (±1) = 1 and ` zeros in between. Approximately, these zeros occur at
P_ℓ(cos θ) = 0  ⇔  cos θ = cos( (4k − 1)π/(4ℓ + 2) ) ,  k = 1, ..., ℓ .   (1.65)
The first zero of each mode defines an angular resolution θ_ℓ of the ℓ-th term in the hypergeometric series,
cos( 3π/(4ℓ + 2) ) ≡ cos θ_ℓ  ⇔  θ_ℓ ≈ 3π/(4ℓ) .   (1.66)
This separation in angle can obviously be translated into a spatial distance on the sphere of last scattering, if we
know the distance of the sphere of last scattering to us. This means that the series of a`m or the power spectrum C`
gives us information about the angular distances (encoded in `) which contribute to the temperature fluctuations
δT /T0 .
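The approximate zeros of Eq.(1.65) can be compared against the exact zeros of P_ℓ, which numpy provides as the nodes of Gauss–Legendre quadrature; the choice ℓ = 20 below is an arbitrary illustration.

```python
import numpy as np

l = 20
# Exact zeros of P_l(x): the nodes of Gauss-Legendre quadrature, sorted as decreasing cos(theta)
exact = np.sort(np.polynomial.legendre.leggauss(l)[0])[::-1]
# Approximation Eq.(1.65): cos(theta_k) = cos((4k - 1) pi / (4l + 2)), k = 1, ..., l
k = np.arange(1, l + 1)
approx = np.cos((4*k - 1) * np.pi / (4*l + 2))
max_dev = np.max(np.abs(exact - approx))   # the approximation is good already at moderate l
```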
Next, we need to think about how a distribution of the C` will typically look. In Figure 3 we see that the measured
power spectrum essentially consists of a set of peaks. Each peak gives us an angular scale with a particularly large
contribution to the temperature fluctuations. The leading physics effects generating such temperature fluctuations
are:
– acoustic oscillations which occur in the baryon–photon fluid at the time of photon decoupling. As discussed
in Section 1 the photons are initially strongly coupled to the still separate electrons and baryons, because the
two components interact electromagnetically through Thomson scattering. Following Eq.(1.49) the weak
interaction can be neglected in comparison to Thomson scattering for ordinary matter. On the other hand, we
can see what happens when a sizeable fraction of the matter in the Universe is not baryonic and only
interacts gravitationally and possibly through the weak interaction. Such new, dark matter generates
gravitational wells around regions of large matter accumulation.
The baryon–photon fluid gets pulled into these gravitational wells. For the relativistic photon gas we can
relate the pressure to the volume and the temperature through the thermodynamic equation of state P V ∝ T .
If the temperature cannot adjust rapidly enough, for example in an adiabatic transition, a reduced volume
will induce an increased pressure. This photon pressure acts against the gravitational well. The photons
moving with and against a slope in the gravitational potential induces a temperature fluctuation located
around regions of dark matter concentration. Such an oscillation will give rise to a tower of modes with
definite wave lengths. For a classical box-shaped potential they will be equi-distant, while for a smoother
potential the higher modes will be pulled apart. Strictly speaking, we can separate the acoustic oscillations
into a temperature effect and a Doppler shift, which have separate effects on the CMB power spectrum.
– the effect of general relativity on the CMB photons, not only related to the decoupling, but also related to the
propagation of the streaming photons to us. In general, the so-called Sachs–Wolfe effect describes this
impact of gravity on the CMB photons. Such an effect occurs if large accumulations of mass or energy
generate a distinctive gravitational potential which changes during the time the photons travel through it.
This effect will happen before and while the photons are decoupling, but also during the time they are
traveling towards us. From the discussion above it is clear that it is hard to separate the Sachs–Wolfe effect
during photon decoupling from the other effects generating the acoustic oscillations. For the streaming
photons we need to integrate the effect over the line of sight. The later the photons see such a gravitational
potential, the more likely they are to probe the cosmological constant or the geometrical shape of the
Universe close to today.
Figure 3 confirms that the power spectrum essentially consists of a set of peaks, i.e. a set of angular scales at
which we observe a particularly strong correlation in temperatures. They are generated through the acoustic
Figure 3: Power spectrum as measured by PLANCK in 2015. Figure from the PLANCK collaboration [2].
oscillations. Before we discuss the properties of the peaks we notice two general features: first, small values of ℓ
lead to large error bars. This is because for large angular separations there are not many independent
measurements we can do over the sphere, i.e. we lose the statistical advantage from combining measurements
over the whole sphere in one C_ℓ curve. Second, the peaks are washed out for large ℓ. This happens because our
approximation that the sphere of last scattering has negligible thickness catches up with us. If we take into account
that the sphere of last scattering has a finite thickness, the strongly peaked structure of the power spectrum gets
washed out. Towards large values of ℓ, or small distances, the thickness effects become comparable to the spatial
resolution at the time of last scattering. This leads to an additional damping term
C_ℓ ∝ e^{−ℓ²/1500²} ,   (1.67)
which washes out the peaks above ℓ = 1500 and erases all relevant information.
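A quick numerical look at this damping factor, evaluated for a few multipoles:

```python
import numpy as np

# Eq.(1.67): finite thickness of the last-scattering sphere damps the power spectrum
damping = {l: float(np.exp(-l**2 / 1500**2)) for l in (200, 1000, 1500, 2500)}
# the first acoustic peaks around a few hundred multipoles survive almost untouched,
# while by l ~ 2500 the damping has erased most of the structure
```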
Next, we can derive the position of the acoustic peaks. Because of the rapid expansion of the Universe, a critical
angle θ_ℓ in Eq.(1.66) defines the size of patches of the sky which were not in causal contact during and since the
time of the last scattering. Below the corresponding ℓ-value there will be no correlation. It is given by two
distances: the first of them is the distance on the sphere of last scattering, which we can compute in analogy to the
co-moving distance defined in Eq.(1.37). Because the co-moving distance is best described by an integral over the
scale factor a, we use the value adec ≈ 1/1100 from Eq.(1.57) to integrate the ratio of the distance to the sound
velocity in the baryon–photon fluid cs to
r_s/c_s = ∫_0^{a_dec} da/( a(t) ȧ(t) ) .   (1.68)   (using Eq.(1.35))
For a perfect relativistic fluid the speed of sound is given by c_s = 1/√3. This distance is called the sound horizon
and depends mostly on the matter density around the oscillating baryon–photon fluid. The second relevant distance
is the distance between us and the sphere of last scattering. Again, we start from the co-moving distance dc
introduced in Eq.(1.35). Following Eq.(1.37) it will depend on the current energy and matter content of the
universe. The angular separation is
sin θ_ℓ = r_s(Ω_m, Ω_b)/d_c .   (1.69)
Both rs (Ωm , Ωb ) and dc are described by the same integrand in Eq.(1.36). It can be simplified for a
matter-dominated (Ωr Ωm ) and almost flat (Ωt ≈ Ωm ) Universe to
[ Ω_m(t0) a(t) + Ω_r(t0) + Ω_Λ a(t)⁴ − (Ω_t(t0) − 1) a(t)² ]^{−1/2} ≈ 1/√( Ω_m(t0) a(t) ) ,   (1.70)
where we also replaced a0 = 1. The ratio of the two integrals then gives
sin θ_ℓ = c_s [ ∫_0^{a_dec} da/√a ] / [ ∫_{a_dec}^{1} da/√a ] = (1/√3) · √a_dec/(1 − √a_dec) ≈ 1/55  ⇒  θ_ℓ ≈ 1° .   (1.71)
A more careful calculation taking into account the reduced speed of sound and the effects from Ωr , ΩΛ gives a
critical angle
θ_ℓ ≈ 0.6°  ⇒  ℓ_first peak = 3π/(4 θ_ℓ) = 225 .   (1.72)   (using Eq.(1.66))
The first peak in Figure 3 corresponds to the fundamental tone, a sound wave with a wavelength twice the size of
the horizon at decoupling. By the time of the last scattering this wave had just compressed once. Note that a closed
or open universe predicts different results for θ_ℓ following Eq.(1.38). The measurement of the position of the first
peak is therefore considered a measurement of the geometry of the universe and a confirmation of its flatness.
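The estimate of Eq.(1.71) and the resulting first-peak multipole can be reproduced in a few lines; this is the crude matter-dominated estimate, not the careful calculation quoted in Eq.(1.72).

```python
import numpy as np

a_dec = 1.0 / 1100.0
cs = 1.0 / np.sqrt(3.0)          # relativistic speed of sound
# Eq.(1.71): sound horizon over the distance to the sphere of last scattering
sin_theta = cs * np.sqrt(a_dec) / (1.0 - np.sqrt(a_dec))
theta = np.arcsin(sin_theta)     # radians
theta_deg = np.degrees(theta)    # close to one degree
# Eq.(1.66): translate the critical angle into a multipole
l_first = 3.0 * np.pi / (4.0 * theta)
```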
The second peak corresponds to the sound wave which underwent one compression and one rarefaction at the time
of the last scattering and so forth for the higher peaks. Even-numbered peaks are associated with how far the
baryon–photon fluid compresses due to the gravitational potential, odd-numbered peaks indicate the rarefaction
counter effect of radiative pressure. If the relative baryon content in the baryon–photon fluid is higher, the radiation
pressure decreases and the compression peaks become higher. The relative amplitude between odd and even peaks
can therefore be used as a measure of Ωb .
Dark matter does not respond to radiation pressure, but contributes to the gravitational wells and therefore further
enhances the compression peaks with respect to the rarefaction peaks. This makes a large third peak a sign of a
sizable dark matter component at the time of the last scattering.
From Figure 1 we know that today we can neglect Ω_r(t0) ≪ Ω_m(t0) ∼ Ω_Λ. Moreover, the relativistic matter
content is known from the accurate measurement of the photon temperature T0 , giving Ωr h2 through Eq.(2.9).
This means that the peaks in the CMB power spectrum will be described by: the cosmological constant defined in
Eq.(1.5), the entire matter density defined in Eq.(1.6), which is dominated by the dark matter contribution, as well
as by the baryonic matter density defined in Eq.(1.7), and the Hubble parameter defined in Eq.(1.1). People usually
choose the four parameters
Ω_t(t0) ,  Ω_Λ ,  Ω_m h² ,  Ω_b h² .   (1.73)
Including h2 in the matter densities means that we define the total energy density Ωt (t0 ) as an independent
parameter, but at the expense of h or H0 now being a derived quantity,
H0²/( 100 km/(s Mpc) )² = h² = Ω_m(t0)h²/Ω_m(t0) = Ω_m(t0)h²/( Ω_t(t0) − Ω_Λ − Ω_r(t0) ) ≈ Ω_m(t0)h²/( Ω_t(t0) − Ω_Λ ) .   (1.74)
There are other cosmological parameters which we need, for example, to determine the distance of the sphere of
last scattering, but we will not discuss them in detail. Obviously, the choice of parameter basis is not unique, but a
matter of convenience. There exist plenty of additional parameters which affect the CMB power spectrum, but
they are not as interesting for non-relativistic dark matter studies.
We go through the impact of the parameter basis defined in Eq.(1.73) one by one:
– Ω_t affects the co-moving distance, Eq.(1.37), such that an increase in Ω_t(t0) decreases d_c. The same link
to the curvature, k ∝ (Ω_t(t0) − 1) as given in Eq.(1.20), also decreases ds, following Eq.(1.38); this way the
angular diameter distance d_A^c is reduced. In addition, there is an indirect effect through H0; following
Eq.(1.74) an increased total energy density decreases H0 and in turn increases d_c.
Combining all of these effects, it turns out that increasing Ω_t(t0) decreases d_c. According to Eq.(1.69) a
smaller predicted value of d_c effectively increases the corresponding θ_ℓ scale. This means that the acoustic
peak positions consistently appear at smaller ℓ values.
– Ω_Λ has two effects on the peak positions: first, Ω_Λ enters the formula for d_c with a different sign, which
means an increase in Ω_Λ also increases d_c and with it the angular diameter distance. At the same time, an
increased Ω_Λ also increases H0 and this way decreases d_c. The combined effect is that an increase in Ω_Λ
moves the acoustic peaks to smaller ℓ. Because in our parameter basis both Ω_t(t0) and Ω_Λ have to be
determined by the peak positions, we will need to find a way to break this degeneracy.
– Ωm h2 is dominated by dark matter and provides the gravitational potential for the acoustic oscillations.
Increasing the amount of dark matter stabilizes the gravitational background for the baryon–photon fluid,
reducing the height of all peaks, most visibly the first two. In addition, an increased dark matter density
makes the gravitational potential more similar to a box shape, bringing the higher modes closer together.
– Ωb h2 essentially only affects the height of the peaks. The baryons provide most of the mass of the
baryon–photon fluid, which until now we assumed to be infinitely strongly coupled. Effects of a changed
Ωb h2 on the CMB power spectrum arise when we go beyond this infinitely strong coupling. Moreover, an
increased amount of baryonic matter increases the height of the odd peaks and reduces the height of the even
peaks.
Separating these four effects from each other and from other astrophysical and cosmological parameters obviously
becomes easier when we can include more and higher peaks. Historically, the WMAP experiment lost sensitivity
around the third peak. This means that its results were typically combined with other experiments. The PLANCK
satellite clearly identified seven peaks and measures, in a slight modification to our basis in Eq.(1.73) [2],
Ω_χ h² = 0.1198 ± 0.0015
Ω_b h² = 0.02225 ± 0.00016
Ω_Λ = 0.6844 ± 0.0091
H0 = (67.27 ± 0.66) km/(s Mpc) .   (1.75)
The dark matter relic density is defined in Eq.(1.7). This is the best measurement of Ωχ we currently have.
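A simple consistency check of the PLANCK values in Eq.(1.75), assuming a flat universe:

```python
# Planck values from Eq.(1.75)
omega_chi_h2 = 0.1198      # dark matter
omega_b_h2 = 0.02225       # baryons
omega_lambda = 0.6844      # cosmological constant
h = 0.6727                 # H0 / (100 km/(s Mpc))

omega_m = (omega_chi_h2 + omega_b_h2) / h**2   # total matter density, ~0.31
omega_t = omega_m + omega_lambda               # total, up to the tiny Omega_r: ~1, i.e. flat
```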
The first Friedmann equation in this approximation also follows when we use Eq.(1.19) for k → 0 and ρt = ρm ,
H² = ρ_m/(3 M_Pl²) .   (1.81)
We will now allow for small perturbations around the background given in Eq.(1.79),
ρ(t, ~r) = ρ̄(t) + δ_ρ(t, ~r)      ~u(t, ~r) = H(t) ~r + ~δ_u(t, ~r)
φ(t, ~r) = ρ̄ r²/(12 M_Pl²) + δ_φ(t, ~r)      p(t, ~r) = p̄(t) + δ_p(t, ~r) .   (1.82)
The pressure and density fluctuations are linked by the speed of sound δp = c2s δρ . Inserting Eq.(1.82), the
continuity equation becomes
0 = ρ̇ + ∇ · (ρ ~u)
  = ρ̄̇ + δ̇_ρ + ρ̄ ∇ · (H~r + ~δ_u) + ∇δ_ρ · (H~r + ~δ_u)
  = ρ̄̇ + δ̇_ρ + ρ̄ ∇ · (H~r) + ρ̄ ∇ · ~δ_u + ∇δ_ρ · (H~r) + O(δ²)
  = δ̇_ρ + ρ̄ ∇ · ~δ_u + ∇δ_ρ · (H~r) + O(δ²) ,   (1.83)   (using Eq.(1.76))
where we only keep terms linear in the perturbations. In the last line we use that the background fields solve the continuity
equation Eq.(1.76). The Euler equation for the perturbations results in
0 = ( ∂/∂t + ~u · ∇ ) ~u + ∇p/ρ_m + ∇φ
  = Ḣ~r + ~δ̇_u + ( H~r + ~δ_u ) · ∇ ( H~r + ~δ_u ) + ∇(p̄ + δ_p)/(ρ̄ + δ_ρ) + ∇( ρ̄ r²/(12 M_Pl²) + δ_φ )
  = Ḣ~r + ~δ̇_u + H~r · ∇(H~r) + ~δ_u · ∇(H~r) + H~r · ∇~δ_u + ∇p̄/ρ̄ + ∇δ_p/ρ̄ + ∇( ρ̄ r²/(12 M_Pl²) ) + ∇δ_φ + O(δ²)
  = ~δ̇_u + H ~δ_u + H~r · ∇~δ_u + ∇δ_p/ρ̄ + ∇δ_φ + O(δ²) .   (1.84)   (using Eq.(1.77))
ρ̄
Finally, the Poisson equation for the fluctuations becomes
0 = ∇²φ − ρ_m/(2 M_Pl²)
  = ∇²( ρ̄ r²/(12 M_Pl²) + δ_φ ) − (ρ̄ + δ_ρ)/(2 M_Pl²) = ∇² δ_φ − δ_ρ/(2 M_Pl²) .   (1.85)   (using Eq.(1.78))
In analogy with Eq.(1.59) we define dimensionless fluctuations in the density field at a given place x and time t as
δ(t, ~x) := ( ρ(t, ~x) − ρ̄(t) )/ρ̄(t) = δ_ρ(t, ~x)/ρ̄(t) ,   (1.86)
and further introduce co-moving coordinates
~x := (a0/a) ~r      ~v := (a0/a) ~u      ∇_r = (a0/a) ∇_x      ∂/∂t + H~r · ∇_r → ∂/∂t .   (1.87)
The co-moving continuity, Euler and Poisson equations then read
δ̇ + ∇_x · ~δ_v = 0
~δ̇_v + 2H ~δ_v = −(a0/a)² ∇_x ( c_s² δ + δ_φ )
∇_x² δ_φ = (1/(2 M_Pl²)) (a/a0)² ρ̄ δ .   (1.88)
These three equations can be combined into a second order differential equation for the density fluctuations δ,
0 = δ̈ + ∇_x · ~δ̇_v = δ̈ − ∇_x · [ 2H ~δ_v + (a0/a)² ∇_x ( c_s² δ + δ_φ ) ]
  = δ̈ + 2H δ̇ − (a0/a)² c_s² ∇_x² δ − ρ̄ δ/(2 M_Pl²) .   (1.89)
To solve this equation, we Fourier-transform the density fluctuation and find the so-called Jeans equation
δ(~x, t) = ∫ d³k/(2π)³ δ̂(~k, t) e^{−i~k·~x}  ⇒  δ̂̈ + 2H δ̂̇ = δ̂ [ ρ̄/(2 M_Pl²) − ( c_s k a0/a )² ] .   (1.90)
The two competing terms in the bracket correspond to a gravitational compression of the density fluctuation and a
pressure resisting this compression. The wave number of the homogeneous equation, where these terms exactly
cancel, defines the Jeans wave length
λ_J = (2π/k)|_homogeneous = 2π (a0/a) c_s √( 2 M_Pl²/ρ̄ ) .   (1.91)
Perturbations of this size neither grow nor get washed out by pressure. To get an idea what the Jeans length for
baryons means we can compare it to the co-moving Hubble scale,
λ_J / ( (a0/a) H^{−1} ) = 2π c_s √( 2 M_Pl²/ρ̄ ) H = 2π √(2/3) c_s .   (1.92)   (using Eq.(1.81))
For a speed of sound close to that of relativistic matter, c_s ≈ 1/√3, this ratio is of order one. Especially for
non-relativistic matter, c_s ≪ 1, the Jeans length is much smaller than the Hubble length and our
Newtonian approach is justified.
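The ratio in Eq.(1.92) makes the Newtonian argument quantitative; the non-relativistic value of c_s below is an arbitrary illustration, not a value from the text.

```python
import numpy as np

prefactor = 2.0 * np.pi * np.sqrt(2.0 / 3.0)   # Eq.(1.92), independent of the density

ratio_rel = prefactor / np.sqrt(3.0)   # relativistic fluid, cs = 1/sqrt(3): Jeans ~ Hubble length
ratio_nr = prefactor * 1e-4            # hypothetical non-relativistic cs << 1: far sub-Hubble
```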
The Jeans equation for the evolution of a non-relativistic mass or energy density can be solved in special regimes.
First, for length scales much smaller than the Jeans length, λ ≪ λ_J, the Jeans equation of Eq.(1.90) becomes the
equation of a damped harmonic oscillator,
δ̂̈ + 2H δ̂̇ + ( c_s k a0/a )² δ̂ = 0  ⇒  δ̂(t) ∝ e^{±iωt}  (non-relativistic, small structures) ,   (1.93)
with ω = c_s k a0/a. The solutions oscillate with decreasing amplitude due to the Hubble friction term 2H δ̂̇.
Structures with sub-Jeans lengths, λ ≪ λ_J, therefore do not grow, but the resulting acoustic oscillations can be
observed in the matter power spectrum today.
In the opposite regime, for structures larger than the Jeans length, λ ≫ λ_J, the pressure term in the Jeans equation
can be neglected. The gravitational compression term can be simplified for a matter-dominated universe with
a ∝ t2/3 , Eq.(1.31). This gives H = ȧ/a = 2/(3t) and it follows for the second Friedmann equation that
Ḣ + H² = −2/(9t²) = −ρ̄/(6 M_Pl²)  ⇒  ρ̄ = (4/3) M_Pl²/t² .   (1.94)   (using Eq.(1.80))
We can use this form to simplify the Jeans equation and solve it
δ̂̈ + (4/(3t)) δ̂̇ − (2/(3t²)) δ̂ = 0  ⇒  δ̂ = A t^{2/3} + B/t
  → δ̂ ∝ t^{2/3}  (growing mode)
  =: δ̂_0 (a/a0)  using a ∝ t^{2/3}  (non-relativistic, large structures) .   (1.95)
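The growing mode of Eq.(1.95) can be checked with a direct numerical integration; the time units and initial conditions below are arbitrary, chosen to start on the pure growing mode.

```python
import numpy as np
from scipy.integrate import solve_ivp

def jeans(t, y, k_term):
    """Jeans equation with H = 2/(3t) and rho/(2 MPl^2) = 2/(3 t^2), cf. Eq.(1.94);
    k_term stands for the pressure term (cs k a0/a)^2, frozen to a constant here."""
    delta, delta_dot = y
    return [delta_dot, -4.0/(3.0*t)*delta_dot + (2.0/(3.0*t**2) - k_term)*delta]

t_eval = np.linspace(1.0, 100.0, 500)
# Large structure (k_term = 0): start on the growing mode, delta = t^(2/3), delta' = (2/3) t^(-1/3)
sol = solve_ivp(jeans, (1.0, 100.0), [1.0, 2.0/3.0], args=(0.0,),
                t_eval=t_eval, rtol=1e-9, atol=1e-12)
growth = sol.y[0][-1]   # matches the analytic growing mode 100**(2/3) ~ 21.5
```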
We can use this formula for the growth as a function of the scale factor to link the density perturbations at the time
of photon decoupling to today. For this we quote that at photon decoupling we expect δ̂dec ≈ 10−5 , which gives us
for today
δ̂_0 = δ̂_dec/a_dec = 1100 δ̂_dec ≈ 1% .   (1.96)   (using Eq.(1.57))
We can compare this value with the results from numerical N-body simulations and find that those simulations
prefer much larger values δ̂0 ≈ 1. In other words, the smoothness of the CMB shows that perturbations in the
photon-baryon fluid alone cannot account for the cosmic structures observed today. One way to improve the
situation is to introduce a dominant non-relativistic matter component with a negligible pressure term, defining the
main properties of cold dark matter.
Until now our solutions of the Jeans equation rely on the assumption of non-relativistic matter domination. For
relativistic matter with a ∝ t^{1/2} the growth of density perturbations follows a different scaling. Following
Eq.(1.32) we use H = 1/(2t) and assume H² ≫ 4πGρ̄, such that the Jeans equation becomes
δ̂̈ + δ̂̇/t = 0  ⇒  δ̂ = A + B log t  (relativistic, small structures) .   (1.97)
This growth of density perturbations is much weaker than for non-relativistic matter.
Finally, we have to consider relativistic density perturbations larger than the Hubble scale, λ ≫ (a0/a) H^{−1}. In this
case a Newtonian treatment is no longer justified and we only quote the result of the full calculation from general
relativity, which gives a scaling
δ̂ = δ̂_0 (a/a0)²  (relativistic, large structures) .   (1.98)
Together with Eqs.(1.93), (1.95), and (1.97) this gives us the growth of structures as a function of the scale
parameter for non-relativistic and relativistic matter and for small as well as large structures. Radiation pressure in
the photon–baryon fluid prevents the growth of small baryonic structures, but the baryon-acoustic oscillations on
smaller scales predicted by Eq.(1.93) can be observed. Large structures in a relativistic, radiation-dominated
universe indeed grow rapidly. Later in the evolution of the Universe, non-relativistic structures evolved from the
matter density at the time of the CMB come close to the structures we see today in numerical simulations, but they
require a dominant additional matter component.
Similar to the variations of the cosmic microwave photon temperature we can expand our analysis of the matter
density from the central value to its distribution with different sizes or wave numbers. To this end we define the
matter power spectrum P(k) in momentum space as
P(k) := ⟨ |δ̂(~k)|² ⟩ .   (1.99)
As before, we can link k to a wave length λ = 2π/k. For the scaling of the initial power spectrum the proposed
relation by Harrison and Zel’dovich is
P(k) ∝ k^n = (2π/λ)^n .   (1.100)
From observations we know that n > 1 leads to an increase in small-scale structures and as a consequence to too
many black holes. We also know that for n < 1, large structures like super-clusters dominate over smaller
structures like galaxies, again contradicting observations. Based on this, the exponent was originally predicted to
be n = 1, in agreement with standard inflationary cosmology. However, the global CMB analysis by PLANCK
quoted in Eq.(1.75) gives
n = 0.9645 ± 0.0049 .   (1.101)
We can resolve this slight disagreement by considering perturbations of different size separately. First, there are
small perturbations (large k), which enter the horizon of our expanding Universe during the radiation-dominated
era and hardly grow until matter–radiation equality. Second, there are large perturbations (small k), which
only enter the horizon during matter domination and never stop growing. This freezing of the growth before matter
domination is called the Meszaros effect. Following Eq.(1.98) the relative suppression in their respective growth
between entering the horizon and matter–radiation equality is given by a correction factor relative
to Eq.(1.100) with n = 1,
P(k) ∝ k ( a_enter/a_eq )² .   (1.102)
We are interested in the wavelength of a mode that enters right at matter–radiation equality and hence is the first
mode that never stops growing. Assuming the usual scaling Ω_m/Ω_r ∝ a, we first find
a_eq/a0 = [ Ω_m(a_eq)/Ω_r(a_eq) ] / [ Ω_m(a0)/Ω_r(a0) ] = Ω_r(a0)/Ω_m(a0) ≈ 3 · 10^{−4} ,   (1.103)
again from PLANCK measurements. This allows us to integrate the co-moving distance of Eq.(1.36). The lower
and upper limit of integration is a = 0 and a = aeq = 3 · 10−4 , respectively. For these values of a 1 the
relativistic matter dominates the universe, as can be seen in Figure 1. In this range the integrand of Eq.(1.36) is
approximately
[ Ω_m(t0) a(t) + Ω_r(t0) + Ω_Λ a(t)⁴ − (Ω_t(t0) − 1) a(t)² ]^{−1/2} ≈ 1/√( Ω_r(t0) ) .   (1.104)
This is true even for ΩΛ (t0 ) > Ωm (t0 ) > Ωr (t0 ) today. We can use Eq.(1.104) and write
d^c_eq ≈ (1/H0) ∫_0^{a_eq} da/√( Ω_r(t0) ) = a_eq/( H0 √( Ω_r(t0) ) ) = 3 · 10^{−4}/( 70 km/(s Mpc) √( 0.28 × 3 · 10^{−4} ) ) = 4.7 · 10^{−4} Mpc s/km   (using Eq.(1.103))
⇒ λ_eq = c d^c_eq ≈ 3 · 10⁵ km/s × 4.7 · 10^{−4} Mpc s/km = 140 Mpc .   (1.105)
Figure 4: Best fit of today’s matter power spectrum (a0 = 1) from Max Tegmark’s lecture notes [3].
This means that the growth of structures with a size of at least 140 Mpc never stalls, while for smaller structures
the Meszaros effect leads to a suppressed growth. The scaling of λ_eq in the radiation-dominated era as a function
of the scale factor is given by λ_eq ∝ a_eq c/H0. The co-moving wavenumber is defined as k = 2π/λ and therefore
k_eq ≈ 0.05/Mpc. Using this scaling, a ∝ 1/k, the power spectrum scales as
P(k) ∝ k ( a_enter/a_eq )² = { k     for k < k_eq or λ > 120 Mpc
                               1/k³  for k > k_eq or λ < 120 Mpc .   (1.106)
The measurement of the power spectrum shown in Figure 4 confirms these two regimes.
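The numbers leading to these two regimes can be cross-checked quickly; H0 = 70 km/(s Mpc) and Ω_m = 0.28 are the values used in Eq.(1.105).

```python
import numpy as np

a_eq = 3e-4                     # matter-radiation equality, Eq.(1.103)
H0 = 70.0                       # km/(s Mpc)
omega_r = 0.28 * a_eq           # Omega_r = Omega_m * a_eq from the scaling Omega_m/Omega_r ~ a
c = 3e5                         # km/s

d_eq = a_eq / (H0 * np.sqrt(omega_r))   # co-moving distance in Mpc s/km, Eq.(1.105)
lam_eq = c * d_eq                       # ~140 Mpc
k_eq = 2.0 * np.pi / lam_eq             # ~0.045/Mpc, close to the quoted 0.05/Mpc
```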
Even if pressure can be neglected for cold, collision-less dark matter, its perturbations cannot collapse towards
arbitrarily small scales because of the non-zero velocity dispersion. Once the velocity of dark matter particles
exceeds the escape velocity of a density perturbation, they will stream away before they can be gravitationally
bound. This phenomenon is called free streaming and allows us to derive more properties of the dark matter
particles from the matter power spectrum. To this end we generalize the Jeans equation of Eq.(1.90) to
δ̂̈ + 2H δ̂̇ = δ̂ [ ρ̄/(2 M_Pl²) − ( c_s^eff k a0/a )² ] ,   (1.107)
where in the term counteracting the gravitational attraction the speed of sound is replaced by an effective speed of
sound c_s^eff, whose precise form depends on the properties of the dark matter. We show predictions for different
dark matter scenarios in Figure 5:
– for cold collision-less dark matter with
( c_s^eff )² = (1/m²) ∫dp p² f(p) / ∫dp f(p)   (1.108)
the speed of sound is replaced by the non-relativistic velocity distribution. This results in c_s^eff ≪ c_s and the
cold dark matter Jeans length allows for halo structures as small as stars or planets. The dominant dark
matter component in Figure 4 is cold collision-less dark matter, and all lines in Figure 5 are normalized to this
power spectrum;
Figure 5: Sketch of the matter power spectrum for different dark matter scenarios normalized to the ΛCDM power
spectrum. Figure from Ref. [4].
– for warm dark matter with
c_s^eff = T/m   (1.109)
the effective speed of sound is a function of temperature and mass. Warm dark matter is faster than cold dark
matter and the effective speed of sound is larger. As a result, small structures are washed out as indicated by
the blue line, because the free streaming length for warm dark matter is larger than for cold dark matter;
– sterile neutrinos come with the same form,
( c_s^eff )² = (1/m²) ∫dp p² f(p) / ∫dp f(p) .   (1.110)
They are a special case of warm dark matter, but the result of the integral depends on the velocity
distribution, which is model-dependent. In general a suppression of small scale structures is expected and
the resulting normalized power spectrum should end up between the two cyan lines;
– light, non-relativistic dark matter or fuzzy dark matter, which we will discuss in Section 2.2, gives
c_s^eff = k/m .   (1.111)
The effective speed of sound depends on k, leading to an even stronger suppression of small scale structures.
The normalized power spectrum is shown in turquoise;
– for mixed warm and cold dark matter with
( c_s^eff )² = T²/m² − ( ρ̄/(2 M_Pl²) ) ( a/(a0 k) )² δ̂_C/δ̂   (1.112)
the power spectrum is suppressed. Besides a temperature-dependent speed of sound for the warm dark
matter component, a separate gravitational term for the cold dark matter needs to be added in the Jeans
equation. Massive neutrinos are a special case of this scenario and in turn the power spectrum can be used to
constrain SM neutrino masses;
– finally, self-interacting dark matter with the distinctive new term
( c_s^eff )² = ( c_s^dark )² + ( a/(a0 k) ) R ( δ̂̇ − δ̂̇_χ )/δ̂   (1.113)
covers models from a dark force (dark radiation) to multi-component dark matter that could form dark
atoms. Besides a potential dark sound speed, the Jeans equation needs to be modified by an interaction term.
The effects on the power spectrum range from dark acoustic oscillations to a suppression of structures at
multiple scales.
2 Relics
After we understand the relic photons in the Universe, we can focus on a set of other relics, including the
first dark matter candidates. For those the main question is how to explain the observed value of Ω_χ h² ≈ 0.12. Before
we will eventually turn to thermal production of massive dark matter particles, we can use a similar approach as
for the photons for relic neutrinos. Furthermore, we will look at ways to produce dark matter during the thermal
history of the Universe, without relying on the thermal bath.
In analogy to photon decoupling, just replacing the photon–electron scattering rate given in Eq.(1.49) by the much
smaller neutrino–electron scattering rate, we can also compute today's neutrino background in the Universe. At the
point of decoupling the neutrinos decouple from the electrons and photons, but they will also lose the ability to
annihilate among themselves through the weak interaction. A well-defined density of neutrinos will therefore
freeze out of thermal equilibrium. The two relevant processes are the elastic scattering νe → νe and the annihilation νν̄ → e⁺e⁻.
With only one generation of neutrinos in the initial state and a purely left-handed coupling the number of
relativistic degrees of freedom relevant for this scattering process is g = 1.
Just as for the photons, we first compute the decoupling temperature. To link the interaction rate to the Hubble
constant, as given by Eq.(1.47), we need the effective number of degrees of freedom in the thermal bath. It now
includes electrons, positrons, three generations of neutrinos, and photons
g_eff(T_dec) = (7/8)(2 + 2 + 3 × 2) + 2 = 10.75 .   (2.3)
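The counting of Eq.(2.3), and later of Eq.(2.5), is easy to script: bosonic degrees of freedom count fully, fermionic ones with a factor 7/8.

```python
def g_eff(bosons, fermions):
    """Effective relativistic degrees of freedom: bosons + 7/8 fermions."""
    return bosons + 7.0/8.0 * fermions

# before e+e- annihilation: photon (2) + e+/e- (2 + 2) + three nu generations (3 x 2)
g_before = g_eff(2, 2 + 2 + 3*2)   # 10.75, Eq.(2.3)
# after e+e- annihilation: photon (2) + neutrinos (6)
g_after = g_eff(2, 6)              # 7.25, Eq.(2.5)
```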
With Eq.(1.47) and in analogy to Eq.(1.55) we find
Γ_ν/H = ( 3 ζ₃ g α² T⁵ )/( 4π² s_w⁴ m_W⁴ ) · ( √90 M_Pl )/( π √( g_eff(T) ) T² )
      = ( 9√10 ζ₃ )/( 4π³ ) · ( α² g M_Pl T³ )/( s_w⁴ √( g_eff(T) ) m_W⁴ ) =! 1
⇔ T_dec = [ ( 4π³ s_w⁴ m_W⁴ √( g_eff(T_dec) ) )/( 9√10 ζ₃ α² g M_Pl ) ]^{1/3} ≈ (1 ... 10) MeV .   (2.4)
The relativistic neutrinos decouple at a temperature of a few MeV, before nucleosynthesis. From the full
Boltzmann equation we would get Tdec ≈ 1 MeV, consistent with our approximate computation.
Now that we know how the neutrinos and photons decouple from the thermal bath, we follow the
electron-neutrino-photon system from the decoupling phase to today, dealing with one more relevant effect. First,
right after the neutrinos decouple around Tdec ≈ 1 MeV, the electron with a mass of me = 0.5 MeV will drop from
the relevant relativistic degrees of freedom; in the following phase the electrons will only serve as a background
for the photon. For the evolution to today we only have
g_eff(T_dec ... T0) = (7/8) × 6 + 2 = 7.25   (2.5)
relativistic degrees of freedom. The decoupling of the massive electron adds one complication: in the full
thermodynamic calculation we need to assume that their entropy is transferred to the photons, the only other
particles still in equilibrium. We only quote the corresponding result from the complete calculation: because the
entropy in the system should not change in this electron decoupling process, the temperature of the photons jumps
from
\[
T_\gamma = T_\nu \quad\to\quad T_\gamma = \left( \frac{11}{4} \right)^{1/3} T_\nu \; . \qquad (2.6)
\]
If the neutrino and photon do not have the same temperature we can use Eqs.(1.43) and (1.44) to obtain the
combined relativistic matter density at the time of neutrino decoupling,
\[
\rho_r(T) = \frac{\pi^2}{30}\, g_\text{eff}(T)\, T^4
= \frac{\pi^2}{30} \left( 2\, \frac{T_\gamma^4}{T^4} + \frac{7}{8}\, 6\, \frac{T_\nu^4}{T^4} \right) T^4
\quad\Rightarrow\quad
\rho_r(T_\gamma) = \frac{\pi^2}{30} \left( 2 + \frac{21}{4} \left( \frac{4}{11} \right)^{4/3} \right) T_\gamma^4
= 3.4\, \frac{\pi^2}{30}\, T_\gamma^4 \; , \qquad (2.7)
\]
or geff (T ) = 3.4. This assumes that we measure the current temperature of the Universe through the photons.
Assuming a constant suppression of the neutrino background, its temperature and the total relativistic energy
density today are
\[
T_{0,\nu} = 1.7 \cdot 10^{-4}~\text{eV}
\qquad\text{and}\qquad
\rho_r(T_0) = 3.4\, \frac{\pi^2}{30}\, T_{0,\gamma}^4 = 1.1\, T_{0,\gamma}^4 \; . \qquad (2.8)
\]
From the composition in Eq.(2.7) we see that the current relativistic matter density of the Universe is split roughly
60 − 40 between the photons at T0,γ = 2.4 · 10−4 eV and the neutrinos at T0,ν = 1.7 · 10−4 eV. The normalized
relativistic relic density today becomes
\[
\Omega_r(t_0) h^2 = \frac{\rho_r(T_0)\, h^2}{3 M_\text{Pl}^2 H_0^2}
= 0.54 \left( \frac{2.4 \cdot 10^{-4}~\text{eV}}{2.5 \cdot 10^{-3}~\text{eV}} \right)^4
= 4.6 \cdot 10^{-5} \; . \qquad (2.9)
\]
Note that for this result we assume that the neutrino mass never plays a role in our calculation, which is not at all a
good approximation.
We are now in a position to answer the question whether a massive, stable fourth neutrino could explain the
observed dark matter relic density. With a moderate mass, this fourth neutrino decouples in a relativistic state. In
that case we can relate its number density to the photon temperature through Eq.(2.7),
\[
\rho_\nu(T_0) = m_\nu\, n_\nu(T_0) = \frac{6\zeta_3}{11\pi^2}\, m_\nu\, T_{0,\gamma}^3
\quad\Rightarrow\quad
\Omega_\nu h^2 = \frac{6\zeta_3}{11\pi^2}\, m_\nu\, T_{0,\gamma}^3\, \frac{h^2}{3 M_\text{Pl}^2 H_0^2}
= \frac{1}{85}\, \frac{m_\nu}{\text{eV}} \; . \qquad (2.11)
\]
For an additional, heavy neutrino to account for the observed dark matter we need to require
\[
\Omega_\nu h^2 = \Omega_\chi h^2 \approx 0.12
\quad\Leftrightarrow\quad
m_\nu \approx 10~\text{eV} \; . \qquad (2.12)
\]
This number for hot neutrino dark matter is not unreasonable, as long as we only consider the dark matter relic
density today. The problem appears when we study the formation of galaxies, where it turns out that dark matter
relativistic at the point of decoupling will move too fast to stabilize the accumulation of matter. We can look at
Eq.(2.12) another way: if all neutrinos in the Universe add up to more than this mass value, they predict hot dark
matter with a relic density larger than the entire dark matter density in the Universe. This gives a stringent upper
bound on the neutrino mass scale.
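Inverting Eq.(2.11) for the observed relic density reproduces the mass in Eq.(2.12):

```python
# Invert Omega_nu h^2 = m_nu / (85 eV), Eq.(2.11), for the observed
# dark matter density of Eq.(2.12).
Omega_chi_h2 = 0.12
m_nu = 85.0 * Omega_chi_h2       # in eV
print(f"m_nu ~ {m_nu:.1f} eV")
```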
2.2 Cold light dark matter

We consider a toy model for light cold dark matter with a spatially homogeneous but time-dependent complex
scalar field φ(t) with a potential V . For the latter, the Taylor expansion is dominated by a quadratic mass term mφ .
Based on the invariant action with the additional determinant of the metric g, describing the expanding Universe,
the Lagrangian for a single complex scalar field reads
\[
\frac{1}{\sqrt{|g|}}\, \mathcal{L}
= (\partial^\mu \phi^*)(\partial_\mu \phi) - V(\phi)
= (\partial^\mu \phi^*)(\partial_\mu \phi) - m_\phi^2\, \phi^* \phi \; . \qquad (2.13)
\]
Just as a side remark, the difference between the Lagrangians for real and complex scalar fields is a set of factors
1/2 in front of each term. In our case the equation of motion for a spatially homogeneous field φ(t) is
\[
0 = \partial_t\, \frac{\partial \mathcal{L}}{\partial(\partial_t \phi^*)} - \frac{\partial \mathcal{L}}{\partial \phi^*}
= \partial_t \left( \sqrt{|g|}\; \partial_t \phi \right) + \sqrt{|g|}\; m_\phi^2\, \phi
= \left( \partial_t \sqrt{|g|} \right) (\partial_t \phi) + \sqrt{|g|}\; \partial_t^2 \phi + \sqrt{|g|}\; m_\phi^2\, \phi
= \sqrt{|g|} \left[ \frac{\partial_t \sqrt{|g|}}{\sqrt{|g|}}\, (\partial_t \phi) + \partial_t^2 \phi + m_\phi^2\, \phi \right] . \qquad (2.14)
\]
For example, from Eq.(1.13) we know that in flat space (k = 0) the determinant of the metric is |g| = a⁶, giving us
\[
0 = \frac{\partial_t a^3}{a^3}\, (\partial_t \phi) + \partial_t^2 \phi + m_\phi^2\, \phi
= \frac{3\dot a}{a}\, \dot\phi + \ddot\phi + m_\phi^2\, \phi \; . \qquad (2.15)
\]
Using the definition of the Hubble constant in Eq.(1.14) we find that the expansion of the Universe is responsible
for the friction term in
φ̈(t) + 3H φ̇(t) + m2φ φ(t) = 0 . (2.16)
We can solve this equation for the evolving Universe, described by a decreasing Hubble constant with increasing
time or decreasing temperature, Eq.(1.47). If for each regime we assume a constant value of H (an approximation
we need to check later), we can use the ansatz
\[
\phi(t) = e^{i\omega t}
\;\Rightarrow\; \dot\phi(t) = i\omega\, \phi(t)
\;\Rightarrow\; \ddot\phi(t) = -\omega^2\, \phi(t)
\;\Rightarrow\; -\omega^2 + 3iH\omega + m_\phi^2 = 0
\;\Rightarrow\; \omega = \frac{3i}{2}\, H \pm \sqrt{-\frac{9}{4}\, H^2 + m_\phi^2} \; . \qquad (2.17)
\]
This functional form defines three distinct regimes in the evolution of the Universe:
– In the early Universe, H ≫ mφ, the two solutions are ω = 0 and ω = 3iH. The scalar field value is a
combination of a constant mode and an exponentially decaying mode,
\[
\phi(t) = \phi_1 + \phi_2\, e^{-3Ht}
\;\stackrel{\text{time evolution}}{\longrightarrow}\; \phi_1 \; . \qquad (2.18)
\]
The scalar field very rapidly settles in a constant field value and stays there. There is no good reason to
assume that this constant value corresponds to a minimum of the potential. Due to the Hubble friction term
in Eq.(2.16), there is simply no time for the field to evolve towards another, minimal value. This behavior
gives the process its name, misalignment mechanism. For our dark matter considerations we are interested
in the energy density. Following the virial theorem we assume that the total energy density stored in our
spatially constant field is twice the average potential energy V = m_φ²|φ|²/2. After the rapid decay of the
exponential contribution this means
\[
\rho_\phi = 2\, \overline{V} = m_\phi^2\, |\phi_1|^2 \; . \qquad (2.19)
\]
– A transition point in the evolution of the universe occurs when the evolution of the field φ switches from the
exponential decay towards a constant value φ1 to an oscillation mode. If we identify the oscillation modes of
the field φ with a dark matter degree of freedom, this point in the thermal history defines the production of
cold, light dark matter,
\[
H_\text{prod} \approx m_\phi
\quad\Leftrightarrow\quad
\omega \approx \frac{3i}{2}\, H_\text{prod} \; . \qquad (2.20)
\]
– For the late Universe, H ≪ mφ, we expand the complex eigen-frequency one step further,
\[
\omega = \frac{3i}{2}\, H \pm m_\phi \sqrt{1 - \frac{9H^2}{4 m_\phi^2}}
\approx \frac{3i}{2}\, H \pm m_\phi \left( 1 - \frac{9H^2}{8 m_\phi^2} \right)
\approx \pm m_\phi + \frac{3i}{2}\, H \; . \qquad (2.21)
\]
The leading time dependence of the scalar field is an oscillation. The subleading term, suppressed by
H/mφ, describes an exponentially decreasing field amplitude,
\[
\phi(t) = \phi_1\, e^{\pm i m_\phi t}\, e^{-3Ht/2} \; . \qquad (2.22)
\]
A modification of the assumed constant H value changes the rapid decay of the amplitude, but should not
affect these main features. We can understand the physics of this late behavior when we compare it to the
variation of the scale factor for constant H given in Eq.(1.14), a(t) ∝ e^{Ht}, which gives
\[
m_\phi^2\, |\phi(t)|^2 \propto e^{-3Ht} \propto \frac{1}{a^3(t)} \; . \qquad (2.23)
\]
The energy density of the scalar field in this late regime is inversely proportional to the space volume
element in the expanding Universe. This relation is exactly what we expect from a non-relativistic relic
without any interaction or quantum effects.
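The three regimes can be illustrated by integrating Eq.(2.16) numerically. The sketch below assumes a radiation-dominated background, H(t) = 1/(2t), instead of the piecewise-constant H used in the analytic discussion, and works in units with m_φ = 1; the initial time and field value are arbitrary choices:

```python
# Integrate phi'' + 3 H phi' + m_phi^2 phi = 0 with H(t) = 1/(2t)
# (radiation domination) and m_phi = 1, using a simple RK4 stepper.
# While H >> m_phi the field is frozen; once H << m_phi it oscillates
# with a decaying envelope.
def rhs(t, phi, dphi):
    H = 0.5 / t
    return dphi, -3.0 * H * dphi - phi

t, phi, dphi, dt = 0.01, 1.0, 0.0, 1e-3
history = []
while t < 100.0:
    k1p, k1v = rhs(t, phi, dphi)
    k2p, k2v = rhs(t + dt/2, phi + dt/2*k1p, dphi + dt/2*k1v)
    k3p, k3v = rhs(t + dt/2, phi + dt/2*k2p, dphi + dt/2*k2v)
    k4p, k4v = rhs(t + dt, phi + dt*k3p, dphi + dt*k3v)
    phi += dt/6 * (k1p + 2*k2p + 2*k3p + k4p)
    dphi += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
    t += dt
    history.append((t, phi))

frozen = max(abs(p) for tt, p in history if tt < 0.3)   # misalignment phase
late = max(abs(p) for tt, p in history if tt > 80.0)    # damped oscillation
print(frozen, late)
```

The field stays near its initial, misaligned value until H drops to the order of m_φ, after which the oscillation amplitude is strongly damped, mirroring the ρ ∝ 1/a³ dilution of Eq.(2.23).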
Next, we can use Eq.(2.23) combined with the assumption of constant H to approximately relate the dark matter
relic densities at the time of production with today, a0 = 1,
\[
\rho_\phi(T_0) = m_\phi^2\, |\phi(t_\text{prod})|^2 \left( \frac{a(t_\text{prod})}{a_0} \right)^3 . \qquad (2.24)
\]
Using our thermodynamic result a(T) ∝ 1/T from Eq.(1.33) and the approximate relation between the Hubble
parameter and the temperature at the time of production we find
\[
\sqrt{0.12}
= \frac{m_\phi\, \phi(t_\text{prod})\, T_0^{3/2}}{(2.5 \cdot 10^{-3}~\text{eV})^2\, T_\text{prod}^{3/2}}\, h
\;\stackrel{\text{Eq.(1.47)}}{\approx}\;
\frac{m_\phi\, \phi(t_\text{prod})\, T_0^{3/2}}{(2.5 \cdot 10^{-3}~\text{eV})^2\, (H_\text{prod} M_\text{Pl})^{3/4}}\, h \; . \qquad (2.25)
\]
Moreover, from Eq.(2.20) we know that the Hubble constant at the time of dark matter production is Hprod ∼ mφ .
This leads us to the relic density condition for dark matter produced by the misalignment mechanism,
\[
0.35 = \frac{m_\phi\, \phi(t_\text{prod})\, T_0^{3/2}}{(2.5 \cdot 10^{-3}~\text{eV})^2\, (m_\phi M_\text{Pl})^{3/4}}\, h
\quad\Leftrightarrow\quad
m_\phi\, \phi(t_\text{prod})
= \frac{1}{2}\, \frac{(2.5 \cdot 10^{-3}~\text{eV})^2}{(2.4 \cdot 10^{-4}~\text{eV})^{3/2}}\, (m_\phi M_\text{Pl})^{3/4}
\approx (m_\phi M_\text{Pl})^{3/4}~\text{eV}^{1/2} \; . \qquad (2.26)
\]
This is the general relation between the mass of a cold dark matter particle and its field value, based on the
observed relic density. If the misalignment mechanism is to be responsible for today's dark matter, inflation
occurring after the field φ has picked its non-trivial starting value will have to give us the required spatial
homogeneity. This is exactly the same argument we used for the relic photons in Section 1.4. We can then link
today’s density to the density at an early starting point through the evolution sketched above.
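For a feeling of the scales involved, Eq.(2.26) can be evaluated for an illustrative mass; the value m_φ = 2·10⁻⁶ eV and the reduced Planck mass in eV are assumed inputs:

```python
# Eq.(2.26): field value at production required by the observed relic
# density, for an illustrative mass m_phi = 2e-6 eV.
M_Pl = 2.4e27        # reduced Planck mass in eV (assumed input)
m_phi = 2e-6         # eV
phi_prod = (m_phi * M_Pl) ** 0.75 / m_phi    # in eV, up to the eV^(1/2) unit
print(f"phi(t_prod) ~ {phi_prod / 1e9:.1e} GeV")
```

For this mass the required field value comes out near 10¹³ GeV, anticipating the axion decay constant discussed in the next section.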
Before we illustrate this behavior with a specific model we can briefly check when and why this dark matter
candidate is non-relativistic. If through some unspecified quantization we identify the field oscillations of φ with
dark matter particles, their non-relativistic velocity is linked to the field value φ through the quantum mechanical
definition of the momentum operator,
\[
v = \frac{\hat p}{m} \propto \frac{\partial \phi}{\partial x} \; , \qquad (2.27)
\]
assuming an appropriate normalization by the field value φ. It can be small, provided we find a mechanism to keep
the field φ spatially constant. What is nice about this model for cold, light dark matter is that it requires absolutely
no particle physics calculations, no relativistic field theory, and can always be tuned to work.
2.3 Axions
The best way to guarantee that a particle is massless or light is through a symmetry in the Lagrangian of the
quantum field theory. For example, if the Lagrangian for a real spin-0 field φ(x) ≡ a(x) is invariant under a
constant shift a(x) → a(x) + c, a mass term m_a² a² breaks this symmetry. Such particles, called Nambu-Goldstone
bosons, appear in theories with broken global symmetries. Because most global symmetry groups are compact,
with hermitian generators and unitary representations, the Nambu-Goldstone bosons are usually CP-odd.
We illustrate their structure using a complex scalar field transforming under a U (1) rotation, φ → φ eia/fa . A
vacuum expectation value hφi = fa leads to spontaneous breaking of the U (1) symmetry, and the
Nambu-Goldstone boson a will be identified with the broken generator of the phase. If the complex scalar has
couplings to chiral fermions ψL and ψR charged under this U (1) group, the Lagrangian includes the terms
\[
\mathcal{L} \supset i \bar\psi_L \gamma^\mu \partial_\mu \psi_L + i \bar\psi_R \gamma^\mu \partial_\mu \psi_R
- \left( y\, \phi\, \bar\psi_R \psi_L + \text{h.c.} \right) . \qquad (2.28)
\]
We can rewrite the Yukawa coupling such that after the rotation the phase is absorbed in the definition of the
fermion fields,
\[
y\, \phi\, \bar\psi_R \psi_L
\;\to\; y\, f_a\, \bar\psi_R\, e^{ia/f_a}\, \psi_L
\equiv y\, f_a\, \bar\psi'_R \psi'_L
\qquad\text{with}\qquad
\psi'_{R,L} = e^{\mp ia/(2f_a)}\, \psi_{R,L} \; . \qquad (2.29)
\]
This gives us a fermion mass mψ = yfa . In the new basis the kinetic terms read
\[
\begin{aligned}
i \bar\psi_L \gamma^\mu \partial_\mu \psi_L + i \bar\psi_R \gamma^\mu \partial_\mu \psi_R
&= i \bar\psi'_L\, e^{-ia/(2f_a)}\, \gamma^\mu \partial_\mu\, e^{ia/(2f_a)}\, \psi'_L
 + i \bar\psi'_R\, e^{ia/(2f_a)}\, \gamma^\mu \partial_\mu\, e^{-ia/(2f_a)}\, \psi'_R \\
&= i \bar\psi'_L \gamma^\mu \left( \partial_\mu + i\, \frac{\partial_\mu a}{2f_a} \right) \psi'_L
 + i \bar\psi'_R \gamma^\mu \left( \partial_\mu - i\, \frac{\partial_\mu a}{2f_a} \right) \psi'_R + \mathcal{O}(f_a^{-2}) \\
&= i \bar\psi \gamma^\mu \partial_\mu \psi + \frac{(\partial_\mu a)}{2f_a}\, \bar\psi \gamma^\mu \gamma_5 \psi + \mathcal{O}(f_a^{-2}) \; ,
\end{aligned} \qquad (2.30)
\]
where in the last line we define the four-component spinor ψ ≡ (ψ'_L , ψ'_R ). The derivative coupling and the axial
structure of the new particle a are evident. Other structures arise if the underlying symmetry is not unitary, as is the
case for space-time symmetries for which the group elements can be written as eα/f and a calculation analogous
to Eq.(2.30) leads to scalar couplings. The Nambu-Goldstone boson of the scale symmetry, the dilaton, is an
example of such a case.
Following Eq.(2.30), the general shift-symmetric Lagrangian for such a CP-odd pseudo-scalar reads
\[
\mathcal{L} = \frac{1}{2}\, (\partial_\mu a)(\partial^\mu a)
+ \frac{a}{f_a}\, \frac{\alpha_s}{8\pi}\, G^a_{\mu\nu} \widetilde G^{a\,\mu\nu}
+ c_\gamma\, \frac{a}{f_a}\, \frac{\alpha}{8\pi}\, F_{\mu\nu} \widetilde F^{\mu\nu}
+ \frac{(\partial_\mu a)}{2f_a} \sum_\psi c_\psi\, \bar\psi \gamma^\mu \gamma_5 \psi \; . \qquad (2.31)
\]
The coupling to the Standard Model is mediated by derivative interactions to the (axial) currents of all SM
fermions. Here F̃^{µν} = ε^{µνρτ} F_{ρτ}/2 and correspondingly G̃^{µν} are the dual field-strength tensors. This setup has
come to fame as a possible solution to the so-called strong CP-problem. In QCD, the dimension-4 operator
\[
\theta_\text{QCD}\, \frac{\alpha_s}{8\pi}\, G^a_{\mu\nu} \widetilde G^{a\,\mu\nu} \qquad (2.32)
\]
respects the SU (3) gauge symmetry, but would induce observable CP-violation, for example a dipole moment for
the neutron. It is non-trivial that this operator cannot be ignored: it looks like a total derivative, but due to the
topological structure of SU(3) it does not vanish. The non-observation of a neutron dipole moment sets the very
strong constraint θ_QCD < 10⁻¹⁰. It almost looks like this operator should not be there, and yet there is no
symmetry in the Standard Model that forbids it.
Combining the gluonic operators in Eq.(2.31) and Eq.(2.32) allows us to solve this problem,
\[
\mathcal{L} = \frac{1}{2}\, (\partial_\mu a)(\partial^\mu a)
+ \left( \frac{a}{f_a} - \theta_\text{QCD} \right) \frac{\alpha_s}{8\pi}\, G^a_{\mu\nu} \widetilde G^{a\,\mu\nu}
+ c_\gamma\, \frac{a}{f_a}\, \frac{\alpha}{8\pi}\, F_{\mu\nu} \widetilde F^{\mu\nu}
+ \frac{(\partial_\mu a)}{2f_a} \sum_\psi c_\psi\, \bar\psi \gamma^\mu \gamma_5 \psi \; . \qquad (2.33)
\]
With this ansatz we can combine the θ-parameter and the scalar field, such that after quarks and gluons have
formed hadrons, we can rewrite the corresponding effective Lagrangian including the terms
\[
\mathcal{L}_\text{eff} \supset \frac{1}{2}\, (\partial_\mu a)(\partial^\mu a)
- \frac{1}{2}\, \kappa^2 \left( \theta_\text{QCD} - \frac{a}{f_a} \right)^2
- \lambda_a \left( \theta_\text{QCD} - \frac{a}{f_a} \right)^4
+ \mathcal{O}(f_a^{-6}) \; . \qquad (2.34)
\]
The parameters κ and λa depend on the QCD dynamics. This contribution provides a potential for a with a
minimum at hai/fa = θQCD . In other words, the shift symmetry has eliminated the CP-violating gluonic term from
the theory. Because of its axial couplings to matter fields, the field a is called axion.
The axion would be a bad dark matter candidate if it was truly massless. However, the same effects that induce a
potential for the axion also induce an axion mass. Indeed, from Eq.(2.34) we immediately see that
\[
m_a^2 \equiv \left. \frac{\partial^2 V}{\partial a^2} \right|_{a = f_a \theta_\text{QCD}}
= \frac{\kappa^2}{f_a^2} \; . \qquad (2.35)
\]
This seems like a contradiction, because a mass term breaks the shift symmetry and for a true Nambu-Goldstone
boson, we expect this term to vanish. However, in the presence of quark masses, the transformations in Eq.(2.30)
do not leave the Lagrangian invariant under the shift symmetry
\[
m_q\, \bar\psi'_R \psi'_L \;\to\; m_q\, \bar\psi'_R\, e^{2ia/f_a}\, \psi'_L \; . \qquad (2.36)
\]
Fermion masses lead to an explicit breaking of the shift symmetry and turn the axion into a pseudo Nambu-Goldstone
boson, similar to the pions in QCD. For more than one quark flavor it suffices to have a single massless quark to
recover the shift symmetry. We can determine this mass term from a chiral Lagrangian in which the fundamental
fields are hadrons instead of quarks and find
\[
m_a = \frac{\sqrt{m_u m_d}}{m_u + m_d}\; \frac{f_\pi\, m_\pi}{f_a} \; , \qquad (2.37)
\]
where fπ ≈ mπ ≈ 140 MeV are the pion decay constant and mass, respectively. This term vanishes in the limit
mu → 0 or md → 0 as we expect from the discussion above. In the original axion proposal, fa ∼ v and therefore
ma ∼ 10 keV. Since the couplings of the axion are also fixed by the value of fa , such a particle was excluded very
fast by searches for rare kaon decays, for instance K⁺ → π⁺ a. In general, fa is a free parameter, and the axion
can then be lighter and more weakly coupled.
This leaves the question for which model parameters the axion makes a good dark matter candidate. Since the
value of the axion field is not necessarily at the minimum of the potential at the time of the QCD phase transition,
the axion begins to oscillate around the minimum, and the oscillation energy density contributes to the dark matter
relic density. This is a special version of the more general misalignment mechanism described in the previous
section. We can then employ Eq.(2.26) and find the relation for the observed relic density
\[
m_a\, a(t_\text{prod}) \approx (m_a M_\text{Pl})^{3/4}~\text{eV}^{1/2} \; . \qquad (2.38)
\]
The maximum field value of the oscillation mode is given by a(t_prod) ≈ fa and therefore
\[
m_a\, f_a \approx (m_a M_\text{Pl})^{3/4}~\text{eV}^{1/2} \; . \qquad (2.39)
\]
This relation holds for ma ≈ 2 · 10⁻⁶ eV, which corresponds to fa ≈ 10¹³ GeV. For heavier axions and smaller
values of fa, the axion can still constitute a part of the relic density. For example, with a mass of ma = 6 · 10⁻⁵ eV
and fa ≈ 3 · 10¹¹ GeV, axions make up one per-cent of the observed relic density.
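The quoted pair of values can be checked against the relic condition Eq.(2.39); the reduced Planck mass in eV is an assumed input:

```python
# Consistency check of Eq.(2.39) for the quoted values m_a ~ 2e-6 eV
# and f_a ~ 1e13 GeV.
M_Pl = 2.4e27                    # reduced Planck mass in eV (assumed input)
m_a = 2e-6                       # eV
f_a = 1e13 * 1e9                 # 1e13 GeV expressed in eV
lhs = m_a * f_a                  # eV^2
rhs = (m_a * M_Pl) ** 0.75       # eV^(3/2), times the eV^(1/2) unit
print(lhs / rhs)                 # close to one
```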
Dark matter candidates with such low masses are hard to detect, and one usually takes advantage of their couplings
to photons. In Eq.(2.31), there is no reason why the coupling cγ needs to be there: it is neither relevant for the
strong CP problem nor for the axion to be dark matter. However, from the perspective of the effective theory we
expect all couplings allowed by the assumed symmetry structure to appear. This includes the axion coupling to
photons. In the complete theory, the axion coupling to gluons needs to be induced by some physics at the mass
scale fa. This can be achieved by axion couplings to SM quarks, or by axion couplings to non-SM fields that are
color-charged but electrically neutral. Even in the latter case there is a non-zero coupling to photons, induced by
the axion mixing with the SM pion after the QCD phase transition. Apart from very fine-tuned models the axion
therefore couples to photons with an order-one coupling constant cγ.
In Figure 6 the yellow band shows the range of axion couplings to photons for which the models solve Eq.(2.37).
The regime where the axion is a viable dark matter candidate is dashed. It is notoriously hard to probe axions in
the parameter space in which they can constitute dark matter. Helioscopes try to convert axions produced in the
sun into observable photons through a strong magnetic field. Haloscopes like ADMX use the same strategy to
search for axions in the dark matter halo.
The same axion coupling to photons that we rely on for axion detection also allows for the decay a → γγ. This
looks dangerous for a dark matter candidate, so we can estimate the corresponding decay width,
\[
\Gamma(a \to \gamma\gamma) = |c_\gamma|^2\, \frac{\alpha^3\, m_a^3}{256\, \pi^3\, f_a^2}
= |c_\gamma|^2\, \frac{1}{137^3}\, \frac{1}{256\, \pi^3}\, \frac{(6 \cdot 10^{-6}~\text{eV})^3}{(10^{22}~\text{eV})^2}
\approx 1 \cdot 10^{-70}~\text{eV}\; |c_\gamma|^2 \; . \qquad (2.40)
\]
Assuming cγ = 1 this corresponds to a lifetime of τ = 1/Γ ≈ 2 · 1047 years, many orders of magnitude larger than
the age of the universe.
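The lifetime follows from Eq.(2.40) as given, using ħ in eV·s to convert the width:

```python
# Lifetime estimate from Eq.(2.40) as given, for c_gamma = 1,
# m_a = 6e-6 eV and f_a = 1e13 GeV.
import math

alpha = 1.0 / 137.0
m_a = 6e-6                        # eV
f_a = 1e22                        # 1e13 GeV in eV
hbar = 6.582e-16                  # eV * s
Gamma = alpha**3 * m_a**3 / (256 * math.pi**3 * f_a**2)   # width in eV
tau_years = hbar / Gamma / 3.156e7
print(f"tau ~ {tau_years:.1e} years")
```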
While the axion is particularly interesting because it addresses the strong CP problem and dark matter at the same
time, we can drop the relation to the CP problem and study axion-like particles (ALPs) as dark matter. For such a
light pseudoscalar particle, Eqs.(2.37) and (2.39) are replaced by the more general relations
\[
m_a = \frac{\mu^2}{f_a}
\qquad\Rightarrow\qquad
m_a = \frac{\mu^{8/3}}{M_\text{Pl}}~\text{eV}^{-2/3} \; , \qquad (2.41)
\]
where µ is a mass scale not related to QCD. In such models, the axion-like pseudoscalar can be very light. For
example, for µ ≈ 100 eV the axion mass is ma ≈ 10⁻²² eV. For such a low mass and a typical velocity of
v ≈ 100 km/s, the de-Broglie wavelength is around 1 kpc, the size of a galaxy. This type of dark matter is called
fuzzy dark matter and can inherit the interesting properties of Bose-Einstein condensates or super-fluids.
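These fuzzy dark matter numbers are easy to reproduce; the reduced Planck mass in eV and the unit conversions are assumed inputs:

```python
# ALP mass from Eq.(2.41) for mu = 100 eV, and the corresponding
# de Broglie wavelength at v = 100 km/s.
import math

M_Pl = 2.4e27                         # reduced Planck mass in eV (assumed)
mu = 100.0                            # eV
m_a = mu ** (8 / 3) / M_Pl            # eV, Eq.(2.41)
hbar_c = 1.973e-7                     # eV * m
v = 1e5 / 3e8                         # 100 km/s in units of c
lam_kpc = 2 * math.pi * hbar_c / (m_a * v) / 3.086e19   # de Broglie, in kpc
print(m_a, lam_kpc)
```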
2.4 Matter vs anti-matter

Before we look at a set of relics linked to dark matter, let us follow a famous argument fixing the conditions which
allow us to live in a Universe dominated by matter rather than anti-matter. In this section we will largely follow
Kolb & Turner [6]. The observational reasoning why matter dominates the Universe goes in two steps: first, matter
and anti-matter cannot be mixed, because we do not observe constant macroscopic annihilation; second, if we
separate matter and anti-matter we should see a boundary with constant annihilation processes, which we do not
see, either. So there cannot be too much anti-matter in the Universe.
The corresponding measurement is usually formulated in terms of the observed baryons, protons and neutrons,
relative to the number of photons,
\[
\frac{n_B}{n_\gamma} \approx 6 \cdot 10^{-10} \; . \qquad (2.42)
\]
Figure 6: Range of masses and couplings ("axion parameter space") for which the axion can be a viable cold dark
matter candidate. Figure from Ref. [5].
The normalization to the photon density is motivated by the fact that this ratio should be of order unity in the very
early Universe. Effects of the Universe's expansion and cooling cancel to first approximation. The choice of
normalization is only indirectly related to the observed number of photons and instead rests on the photon density
tracking the entropy density in thermal equilibrium. As a matter of fact, we have already used this number in Eq.(1.53).
To understand Eq.(2.42) we start by remembering that in the hot universe anti-quarks and quarks or anti-baryons
and baryons are pair-produced out of a thermal bath and annihilate with each other in thermal equilibrium.
Following the same argument as for the photons, the baryons and anti-baryons decouple from each other when the
temperature drops enough. In this scenario we can estimate the ratio of baryon and photon densities from
Eq.(1.40), assuming for example T = 20 MeV ≪ mB = 1 GeV,
\[
\frac{n_B(T)}{n_\gamma(T)} = \frac{n_{\bar B}(T)}{n_\gamma(T)}
= \frac{g_B \left( \dfrac{m_B T}{2\pi} \right)^{3/2} e^{-m_B/T}}{\dfrac{\zeta_3}{\pi^2}\, g_\gamma\, T^3}
= \frac{g_B}{g_\gamma}\, \frac{\sqrt{\pi}}{2\sqrt{2}\, \zeta_3} \left( \frac{m_B}{T} \right)^{3/2} e^{-m_B/T}
= 3.5 \cdot 10^{-20} \; . \qquad (2.43)
\]
One way of looking at the baryon asymmetry is that, independent of the actual anti-baryon density, the density of
baryons observed today is much larger than what we would expect from thermal production. While we will see
that for dark matter the problem is to get their interactions just right to produce the correct freeze-out density, for
baryons the problem is to avoid their annihilation as much as possible.
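The thermal suppression in Eq.(2.43) is a one-line evaluation:

```python
# Thermal equilibrium estimate of Eq.(2.43) at T = 20 MeV for
# m_B = 1 GeV, with g_B = g_gamma.
import math

zeta3 = 1.2020569
m_B, T = 1000.0, 20.0                 # both in MeV
ratio = math.sqrt(math.pi) / (2 * math.sqrt(2) * zeta3) \
        * (m_B / T) ** 1.5 * math.exp(-m_B / T)
print(f"n_B/n_gamma ~ {ratio:.1e}")
```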
We can think of two ways to avoid such an over-annihilation in our thermal history. First, there could be some kind
of mechanism stopping the annihilation of baryons and anti-baryons when nB /nγ reaches the observed value. The
problem with this solution is that we would still have to do something with the anti-baryons, as discussed above.
The second solution is to assume that through the baryon annihilation phase there exists an initially small
asymmetry, such that almost all anti-baryons annihilate while the observed baryons remain. As a rough estimate,
neglecting all degrees of freedom and differences between fermions and bosons, we assume that in the hot thermal
bath we start with roughly as many baryons as photons. After cooling we assume that the anti-baryons reach their
thermal density given in Eq.(2.43), while the baryons through some mechanism arrive at today’s density given in
Eq.(2.42). The baryon vs anti-baryon asymmetry starting at an early time then becomes
\[
\frac{n_B - n_{\bar B}}{n_B}
\approx \frac{n_B}{n_\gamma} - \frac{n_{\bar B}}{n_\gamma}
\;\stackrel{\text{cooling}}{\longrightarrow}\;
6 \cdot 10^{-10} - 3.5 \cdot 10^{-20} \approx 6 \cdot 10^{-10} \; . \qquad (2.44)
\]
If we do the proper calculation, the correct number for a net quark excess in the early Universe comes out around
\[
\frac{n_B - n_{\bar B}}{n_B} \approx 3 \cdot 10^{-8} \; . \qquad (2.45)
\]
In the early Universe we start with this very small net asymmetry between the very large individual densities of
baryons and anti-baryons. Rather than through the freeze-out mechanism introduced for neutrinos in Section 2.1,
the baryons decouple when all anti-baryons are annihilated away. This mechanism can explain the very large
baryon density measured today. The question is now how this asymmetry occurs at high temperatures.
Unlike the rest of the lecture notes, the discussion of the matter anti-matter asymmetry is not aimed at showing
how the relic densities of the two species are computed. Instead, we will get to the general Sakharov conditions
which tell us what ingredients our theory has to have to generate a net baryon excess in the early Universe, where
we naively would expect the number of baryons and anti-baryons (or quarks and anti-quarks) to be exactly the
same and in thermal equilibrium. Let us go through these conditions one by one:
Baryon number violation — to understand this condition we just need to remember that we want to generate a
different density of baryons (baryon number B = +1) and anti-baryons (baryon number B = −1) dynamically
during the evolution of the Universe. We assume that our theory is described by a Lagrangian including finite
temperature effects. If our Lagrangian is fully symmetric with respect to exchanging baryons with anti-baryons
there will be no effect, no interaction, no scattering rate, no decay rate, nothing that can ever distinguish between
baryons and anti-baryons and hence generate an asymmetry from a symmetric starting point. Let us assume that we
want to generate a baryon asymmetry from an interaction of quarks and leptons with a heavy state X of the kind
\[
X \to d\, d \; , \qquad X \to \bar d\, \ell^+ \; , \qquad (2.46)
\]
where the d quark carries baryon number 1/3. A scattering process induced by these two interactions,
\[
d\, d \to X^* \to \bar d\, \ell^+ \; , \qquad (2.47)
\]
links an initial state with B = 2/3 to a final state with B = −1/3. The combination B − L is instead conserved.
Such heavy bosons can appear in grand unified theories.
In the Standard Model the situation is a little more complicated: instead of the lepton number L and the baryon
number B individually, the combination B − L is indeed an (accidental) global symmetry of the electroweak
interaction to all orders. In contrast, the orthogonal B + L is anomalous, i.e. there are quantum contributions to
scattering processes which respect B − L but violate B + L. One can show that non-perturbative
finite-temperature sphaleron processes can generate the combined state
\[
\epsilon_{ijk}\; q_L^i\, q_L^j\, q_L^k\; \ell_L \qquad (2.48)
\]
for one generation of fermions with SU(2)_L indices i, j, k out of the vacuum. It violates lepton and baryon
number,
\[
\Delta L = 1 \; , \qquad \Delta B = 1 \; , \qquad \Delta(B - L) = 0 \; . \qquad (2.49)
\]
The probability for these sphaleron transitions to happen at zero temperature (where they are called instanton
transitions) scales like e^{−8π²/g²} with the weak SU(2)_L coupling g ≈ 0.7. At high temperatures their rate
increases significantly. The main effect of such interactions is that we can replace the condition of baryon number
violation with lepton number violation when we ensure that sphaleron-induced processes transform a lepton
asymmetry into a baryon asymmetry and neither of them gets washed out. This process is called leptogenesis
rather than baryogenesis.
Departure from thermal equilibrium — in our above setup we can see what assumptions we need to be able to
generate a net baryon asymmetry from the interactions given in Eq.(2.46) and the scattering given in Eq.(2.47). If
we follow the reasoning for the relic photons, we reduce the temperature until the two sides of the 2 → 2 scattering
process in Eq.(2.47) drop out of thermal equilibrium. Our Universe could then settle on one of the two sides of the
scattering process, i.e. either with a net excess of d over d̄ particles or vice versa. The problem is that the process
d̄ d̄ → X̄* → d ℓ⁻ with m_X = m_X̄ is protected by CPT invariance and will compensate everything exactly.
The more promising approach is out-of-equilibrium decays of the heavy X boson. This means that a population
of X and X̄ bosons decouples from the thermal bath early and induces the baryon asymmetry through late decays,
preferentially into quarks or anti-quarks. In both cases we see that baryon number violating interactions require a
departure from thermal equilibrium to generate a net baryon asymmetry in the evolution of the Universe.
In the absence of late-decaying particles, for example in the Standard Model, we need to rely on another
mechanism to deviate from thermal equilibrium. The electroweak phase transition, like any phase transition, can
proceed in two ways: if the phase transition is of first order the Higgs potential develops a non-trivial minimum
while we are sitting at the unbroken field value φ = 0. At the critical temperature the broken minimum becomes
the global minimum of the potential and we have to tunnel there. The second order phase transition instead
develops the broken minimum smoothly such that there is never a potential barrier between the two and we can
smoothly transition into the broken minimum around the critical temperature. For a first-order phase transition
different regions of the Universe will switch to the broken phase at different times, starting with expanding bubbles
of broken phase regions. At the bubble surface the thermal equilibrium will be broken, allowing for a generation of
the baryon asymmetry through the electroweak phase transition. Unfortunately, the Standard Model Higgs mass
would have had to be below 60 GeV to allow for this scenario.

C and CP violation — this condition appears more indirectly. First, even if we assume that a transition of the
kind shown in Eq.(2.46) exists, we need to generate a baryon asymmetry from these decays, where the heavy state
and its anti-particle are produced from the vacuum. Charge conjugation links particles and anti-particles, which
means that C conservation implies
\[
\Gamma(X \to d\, d) = \Gamma(\bar X \to \bar d\, \bar d) \; . \qquad (2.50)
\]
In that case there will always be the same numbers of baryons d and anti-baryons d¯ on average in the system. We
only quote the statement that statistical fluctuations of the baryon and anti-baryon numbers are not large enough to
explain the global asymmetry observed.
Next we assume a theory where C is violated, but CP is intact. This could for example be the electroweak Standard
Model with no CP-violating phases in the quark and lepton mixing matrices. For our toy model we introduce a
quark chirality q_{L,R} which violates parity P but restores CP as a symmetry. For our decay widths transforming
under C and CP this means
\[
\Gamma(X \to d_L d_L) = \Gamma(\bar X \to \bar d_R \bar d_R) \; , \qquad
\Gamma(X \to d_R d_R) = \Gamma(\bar X \to \bar d_L \bar d_L) \; , \qquad (2.51)
\]
so the widths summed over chiralities are still equal. This means that unless C and CP are both violated, there
will be no baryon asymmetry from X decays to d quarks.
In the above argument there is, strictly speaking, one piece missing: if we assume that we start with the same
number of X and X̄ bosons out of thermal equilibrium, once all of them have decayed to dd and d¯d¯ pairs
irrespective of their chirality there is again no asymmetry between d and d¯ quarks in the Universe. An asymmetry
only occurs if a competing X decay channel produces a different number of baryons and allows the different
partial widths to generate a net asymmetry. This is why we include the second term in Eq.(2.46). Assuming C and
CP violation it implies
\[
\Gamma(X \to d\, d) \ne \Gamma(\bar X \to \bar d\, \bar d)
\qquad\text{and}\qquad
\Gamma(X \to \bar d\, \ell^+) \ne \Gamma(\bar X \to d\, \ell^-) \; , \qquad (2.52)
\]
allowing the combination of the two decay channels to generate a net baryon number.
2.5 Asymmetric dark matter

Starting from the similarity of the measured baryon and dark matter densities in Eq.(1.75),
\[
\frac{\Omega_\chi}{\Omega_b} = \frac{0.12}{0.022} = 5.5 \; , \qquad (2.53)
\]
an obvious question is if we can link these two matter densities. We know that the observed baryon density in the
Universe today is not determined by a thermal freeze-out, but by an initial small asymmetry between the baryon
and anti-baryon densities. If we assume that dark matter is very roughly as heavy as baryons, that dark matter
states carry some kind of charge which defines dark matter anti-particles, and that the baryon and dark matter
asymmetries are linked, we can hope to explain the observed dark matter relic density. Following the leptogenesis
example we could assume that the sphaleron transition not only breaks B + L, but also some kind of dark matter
number D. Dark matter is then generated thermally, but the value of the relic density is not determined by thermal
freeze-out. Still, from the structure formation constraints discussed in Section 1.5 we know that the dark matter
agent should not be too light.
First, we can roughly estimate the dark matter masses this scenario predicts. From Section 2.4 we know how little
we understand about the mechanism of generating the baryon asymmetry in models structurally similar to the
Standard Model. For that reason, we start by just assuming that the particle densities of the baryons and of dark
matter trace each other through some kind of mechanism,
nχ (T ) ≈ nB (T ) . (2.54)
This will start in the relativistic regime and remain true after the two sectors decouple from each other and both
densities get diluted through the expansion of the Universe. For the observed densities by PLANCK we use the
non-relativistic relation between number and energy densities in Eq.(1.40) and Eq.(1.41),
\[
\frac{\Omega_\chi}{\Omega_b} = \frac{\rho_\chi}{\rho_B} = \frac{m_\chi\, n_\chi}{m_B\, n_B}
\approx \frac{m_\chi}{m_B}
\quad\Leftrightarrow\quad
m_\chi \approx 5.5\, m_B \approx 5~\text{GeV} \; . \qquad (2.55)
\]
Corrections to this relation can arise from the mechanism linking the two asymmetries.
Alternatively, we can assume that at the temperature Tdec at which the link between the baryons and the dark matter
decouples, the baryons are relativistic and dark matter is non-relativistic. For the two energy densities this means
\[
\rho_\chi(T_\text{dec}) = m_\chi\, n_\chi(T_\text{dec})
\approx m_\chi\, n_B(T_\text{dec})
= m_\chi\, \frac{30\zeta_3}{\pi^4}\, \frac{\rho_B(T_\text{dec})}{T_\text{dec}}
\quad\Rightarrow\quad
\frac{m_\chi}{T_\text{dec}} = \frac{\pi^4}{30\zeta_3}\, \frac{\rho_\chi(T_\text{dec})}{\rho_B(T_\text{dec})}
\approx 15 \; . \qquad (2.56)
\]
The relevant temperature is determined by the interactions between the baryonic and the dark matter sectors.
However, in general this scenario allows for heavy dark matter, mχ ≫ mB.
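Both mass estimates are quick to reproduce; the baryon mass m_B ≈ 0.94 GeV is an assumed input:

```python
# Asymmetric dark matter mass estimates, Eqs.(2.55) and (2.56).
import math

zeta3 = 1.2020569
density_ratio = 0.12 / 0.022          # Omega_chi / Omega_b, Eq.(2.53)
m_chi = density_ratio * 0.94          # GeV, Eq.(2.55) with m_B ~ 0.94 GeV
m_over_T = math.pi**4 / (30 * zeta3) * density_ratio   # Eq.(2.56)
print(m_chi, m_over_T)
```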
In a second step we can analyze what kind of dark matter annihilation rates are required in the asymmetric dark
matter scenario. Very generally, we know the decoupling condition of a dark matter particle from the thermal bath
of Standard Model states from the relativistic case. The mediating process can include a Standard Model fermion,
χf → χf. The corresponding annihilation process for dark matter which is not its own anti-particle is
χχ̄ → f f¯ . (2.57)
As long as these scattering processes are active, the dark matter agent follows the decreasing temperature of the
light Standard Model states in an equilibrium between production out of the thermal bath and annihilation. At
some point, dark matter freezes out of the thermal bath, and its density is only reduced by the expansion of the
Universe. This point of decoupling is roughly given by Eq.(2.4), or
\[
n_\chi(T_\text{dec}) = \frac{H}{\sigma_{\chi\chi} v}
= \frac{\pi \sqrt{g_\text{eff}(T_\text{dec})}\; T_\text{dec}^2}{\sqrt{90}\, M_\text{Pl}\; \sigma_{\chi\chi} v} \qquad (2.58)
\]
in terms of the dark matter annihilation cross section σχχ .
The special feature of asymmetric dark matter is that this relation does not predict the dark matter density nχ (Tdec )
leading to the observed relic density. Instead, this annihilation has to remove all dark matter anti-particles and
almost all dark matter particles, while the observed relic density is generated by a very small dark matter
asymmetry. If we follow the numerical example of the baryon asymmetry given in Eq.(2.45) this means we need a
dark matter annihilation rate which is 10⁸ times the rate necessary to predict the observed relic density for pure
freeze-out dark matter. From the typical expressions for cross sections in Eq.(1.49) we see that a massless
mediator or a t-channel diagram in combination with light dark matter leads to large cross sections,
σχχ ≈ π αχ²/mχ² , (2.59)
with the generic dark matter coupling αχ to a dark gauge boson or another light mediator. For heavier dark matter
we will see in Section 4.1 how we can achieve large annihilation rates through a 2 → 1 annihilation topology.
3 Thermal relic density
After introducing the observed relic density of photons in Section 1.3 and the observed relic density of neutrinos in
Section 2.1 we will now compute the relic density of a hypothetical massive, weakly interacting dark matter agent.
As for the photons and neutrinos we assume dark matter to be created thermally, and the observed relic density to
be determined by the freeze-out combined with the following expansion of the Universe. We will focus on masses
of at least a few GeV, which guarantees that dark matter will be non-relativistic when it decouples from thermal
equilibrium. At this point we do not have specific particles in mind, but in Section 4 we will illustrate this scenario
with a set of particle physics models.
The general theme of this section and the following Sections 5-7 is the typical four-point interaction of the dark
matter agent with the Standard Model. For illustration purposes we assume the dark matter agent to be a fermion χ
and the Standard Model interaction partner a fermion f :
[Feynman diagram: a four-point χ–χ–f–f interaction, with two dark matter legs χ on the left and two Standard Model fermion legs f on the right.]
Unlike for asymmetric dark matter, in this process it does not matter whether the dark matter agent has an anti-particle χ̄ or whether it is its own anti-particle, χ = χ̄. This Feynman diagram, or more precisely this amplitude, mediates three different scattering processes:
– left-to-right we can compute dark matter annihilation, χχ̄ → f f¯, see Sections 3-5;
– top-to-bottom we can compute dark matter scattering off Standard Model fermions, χf → χf, the key process for direct detection, see Section 6;
– right-to-left we can compute dark matter pair production at colliders, f f¯ → χχ̄, see Section 7.
This strong link between very different observables is what makes dark matter so interesting for particle physicists,
including the possibility of global analyses for any model which can predict this amplitude. Note also that we will
see how different the kinematics of the different scattering processes actually are.
As for the relativistic neutrinos, we will first avoid solving the full Boltzmann equation for the number density as a
function of time. Instead, we assume that some kind of interaction keeps the dark matter particle χ in thermal
equilibrium with the Standard Model particles and at the same time able to annihilate. At the point of thermal
decoupling the dark matter freezes out with a specific density. As for the neutrinos, the underlying process is
described by the matrix element for dark matter annihilation
χχ → f f¯ . (3.1)
As in Eq.(1.51) the interaction rate Γ corresponding to this scattering process just compensates the increasing scale
factor at the point of decoupling,
Γ(Tdec) =! H(Tdec) . (3.2)
Assuming this interaction rate is set by electroweak interactions, for non-relativistic dark matter agents, the
temperature dependence in Eq.(1.49) vanishes and gets replaced by the dark matter mass. To allow for an
s-channel process in Eq.(3.1) we use the Z-mass and Z-coupling in the corresponding annihilation cross section
σχχ(T ≪ mχ) = π α² mχ² / (cw⁴ mZ⁴) . (3.3)
This formula combines the dark matter mass mχ with a weak interaction represented by a 1/mZ suppression, implicitly assuming mχ ≪ mZ. We will check this assumption later. Following Eq.(2.2) we can use the non-relativistic number density. For the non-relativistic decoupling we should not assume v = 1, as we did before. Given the limited number of energy scales in our description we instead estimate very roughly
(mχ/2) v² = T  ⇔  v = √(2T/mχ) , (3.4)
remembering that we need to check this later. Moreover, we set the number of relevant degrees of freedom of the
dark matter agent to g = 2, corresponding for example to a complex scalar or a Majorana fermion. In that case the
condition of dark matter freeze-out is
Γ := σχχ v nχ  =[Eq.(1.40)]=  σχχ √(2Tdec/mχ) g (mχ Tdec/(2π))^{3/2} e^{−mχ/Tdec}  =!  H  =[Eq.(1.47)]=  (π/(3√10)) √geff(Tdec) Tdec²/MPl
⇔ σχχ (mχ Tdec²/π^{3/2}) e^{−xdec} = (π/(3√10)) √geff(Tdec) Tdec²/MPl   with x := mχ/T
⇔ e^{−xdec} = (π^{5/2}/(3√10)) √geff(Tdec)/(mχ MPl σχχ) = 1.8 √geff(Tdec)/(mχ MPl σχχ) . (3.5)
Note how in this calculation the explicit temperature dependence drops out. This means the result can be
considered an equation for the ratio xdec . If we want to include the temperature dependence of geff we cannot solve
this equation in a closed form, but we can estimate the value of xdec . First, we can use the generic electroweak
annihilation cross section from Eq.(3.3) to find
e^{−xdec} = (π √π/(3√10)) (cw⁴ mZ⁴/(α² mχ³ MPl)) √geff(Tdec) . (3.6)
Next, we assume that most of the Standard Model particles contribute to the active degrees of freedom. From
Eq.(1.45) we know that the full number gives us geff = 106.75. In the slightly lower range Tdec = 5 ... 80 GeV the
weak bosons and the top quark decouple, and Eq.(1.44) gives the slightly reduced value
geff(Tdec) = (8·2 + 2) + (7/8)(5·3·2·2 + 3·2·2 + 3·2) = 18 + (7/8)·78 = 86.25 . (3.7)
Combining all prefactors we find the range

e^{−xdec} ≈ 6·10⁴ mZ⁴/(mχ³ MPl) = 2·10⁻⁹  ⇔ xdec ≈ 20 (mχ = 10 GeV)
                                 = 6·10⁻¹¹ ⇔ xdec ≈ 23 (mχ = 30 GeV)   (3.8)
                                 = 8·10⁻¹² ⇔ xdec ≈ 26 (mχ = 60 GeV) .
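The three values of xdec can be reproduced numerically from Eq.(3.6); a minimal sketch, where α ≈ 1/128, cw² ≈ 0.77, geff = 86.25, mZ = 91.2 GeV, and the reduced Planck mass MPl = 2.4·10¹⁸ GeV are input assumptions:

```python
import math

# Solve Eq.(3.6) for x_dec = m_chi/T_dec, i.e. exp(-x_dec) = C(m_chi).
# Assumed inputs: alpha ~ 1/128, cw^2 ~ 0.77, g_eff = 86.25,
# m_Z = 91.2 GeV and the reduced Planck mass M_Pl = 2.4e18 GeV.
alpha, cw2, geff = 1/128, 0.77, 86.25
MPl, mZ = 2.4e18, 91.2

def x_dec(m_chi):
    # C = pi^(3/2)/(3 sqrt(10)) * sqrt(g_eff) * cw^4 m_Z^4/(alpha^2 m_chi^3 M_Pl)
    C = (math.pi**1.5 / (3 * math.sqrt(10)) * math.sqrt(geff)
         * cw2**2 * mZ**4 / (alpha**2 * m_chi**3 * MPl))
    return -math.log(C)

for m in (10, 30, 60):
    print(f"m_chi = {m:2d} GeV -> x_dec = {x_dec(m):.1f}")
```

The leading exponential makes xdec depend only logarithmically on the inputs, which is why the result is so stable against our rough assumptions.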
As a benchmark we will use mχ = 30 GeV with xdec ≈ 23 from now on. We need to eventually check these
assumptions, but because of the leading exponential dependence we expect this result for xdec to be insensitive to
our detailed assumptions. Following Eq.(1.40) and Eq.(3.6) the temperature at the point of decoupling gives us the
non-relativistic number density at the point of decoupling,
nχ(Tdec) = g (mχ Tdec/(2π))^{3/2} e^{−xdec} = (π/(3√20)) √geff(Tdec) (Tdec²/MPl) (cw⁴ mZ⁴/(π α² mχ²)) √(mχ/Tdec)
         ≈ 10³ mZ⁴ Tdec^{3/2}/(MPl mχ^{3/2}) = 10³ mZ⁴/(xdec^{3/2} MPl) . (3.9)
From the time of non-relativistic decoupling we have to evolve the energy density to the current time or
temperature T0 . We start with the fact that once a particle has decoupled, its number density drops like 1/a3 , as we
can read off Eq.(1.27) in the non-relativistic case,
ρχ(T0) = mχ nχ(T0) = mχ nχ(Tdec) (a(Tdec)/a(T0))³ . (3.10)
To translate this dependence on the scale factor a into a temperature dependence we need to quote the same, single
thermodynamic result as in Section 2.2, namely that according to Eq.(1.33) the combination a(T ) T is almost
constant. When we take into account the active degrees of freedom and their individual temperature dependence
the relation is more precisely
(a(Tdec) Tdec/(a(T0) T0))³ = geff(T0)/geff(Tdec) ≈ 3.6/100 = 1/28 , (3.11)
again for Tdec > 5 GeV and depending slightly on the number of neutrinos we take into account. We can use this
result to compute the non-relativistic energy density now
ρχ(T0) = mχ (a(Tdec) Tdec/(a(T0) T0))³ (T0/Tdec)³ nχ(Tdec) = mχ (T0 xdec/mχ)³ nχ(Tdec)/28 = T0³ xdec³ nχ(Tdec)/(28 mχ²)
  =[Eq.(3.9)]≈  3·10³ (mZ⁴/(mχ² MPl)) T0³ . (3.12)
Using this result we can compute the dimensionless dark matter density in close analogy to the neutrino case of
Eq.(2.11),
Ωχ h² = ρχ(T0) h²/(3 MPl² H0²)
      ≈ 3·10³ (mZ⁴/(mχ² MPl)) (2.4·10⁻⁴)³/((2.5·10⁻³)⁴ eV)   (3.13)
      ≈ 3·10³ (7·10⁷ GeV³/(2·10¹⁸ mχ²)) (10⁹/(5 GeV)) ≈ 20 GeV²/mχ² = 0.12 (13 GeV/mχ)²  ⇔  Ωχ h² ≈ 0.12 .
This outcome is usually referred to as the WIMP miracle: if we assume a dark matter agent with an electroweak-scale mass and an annihilation process mediated by the weak interaction, the predicted relic density comes out exactly as measured.
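The scaling of Eq.(3.13) is easy to tabulate; a minimal sketch using the rough 20 GeV² prefactor derived above:

```python
# Scan the rough estimate Omega_chi h^2 ~ 20 GeV^2 / m_chi^2 from Eq.(3.13);
# the prefactor 20 GeV^2 is the order-of-magnitude number from the text.
for m_chi in (5, 13, 30, 100):                 # GeV
    omega_h2 = 20.0 / m_chi**2
    print(f"m_chi = {m_chi:3d} GeV  ->  Omega h^2 ~ {omega_h2:.3f}")
```

At mχ ≈ 13 GeV the estimate reproduces the observed 0.12, and it drops quadratically for heavier WIMPs.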
Let us recapitulate where the WIMP mass dependence of Eq.(3.13) comes from: first, the annihilation cross section in Eq.(3.3) is assumed to be mediated by electroweak interactions and includes a dependence on mχ. Our original assumption mχ ≪ mW is not perfectly fulfilled, but also not completely wrong. Second, the WIMP mass enters the relation between the number and energy density, but some of this dependence is absorbed into the value xdec = 23, which means that the decoupling of the non-relativistic WIMPs is supposed to happen at a very low temperature of Tdec ≈ mχ/23. Making things worse, some of the assumptions we made in this non-relativistic and hence multi-scale calculation are not as convincing as they were for the simpler relativistic neutrino counterpart, so let us check Eq.(3.13) with an alternative estimate. One of the key questions we will try to answer in our alternative approach is how the mχ-dependence of Eq.(3.13) arises.
At some point, the underlying assumption of thermal equilibrium breaks down. For the number of WIMPs the
relevant process is not a process which guarantees thermal equilibrium with other states, but the explicit pair
production or pair annihilation via a weakly interacting process
χχ ↔ f f¯ , (3.15)
with any available pair of light fermions in the final state. The depletion rate from the WIMP pair annihilation
process in Eq.(3.15) is given by the corresponding σχχ v n2χ . This rate describes the probability of the WIMP
annihilation process in Eq.(3.15) to happen, given the WIMP density and their velocity. For the relativistic relic
neutrinos we could safely assume v = 1, while for the WIMP case we did not even make this assumption for our
previous order-of-magnitude estimate.
When we derive the Boltzmann equation from first principles it turns out that we need to thermally average. This
reflects the fact that the WIMP number density is a global observable, integrated over the velocity spectrum. In the
non-relativistic limit the velocity of a particle with momentum ~k and energy k0 is
|~k| |~k|
vk := ≈ 1. (3.16)
k0 mχ
The external momenta of the two incoming dark matter fermions then have the form

k² = k0² − ~k² =! mχ²  with |~k| = mχ vk  ⇔  k0 = √(mχ² + mχ² vk²) ≈ mχ (1 + vk²/2) . (3.17)
For a 2 → 2 scattering process we have to distinguish the velocities of the individual states and the relative
velocity. The energy of the initial state is given by the Mandelstam variable s = (k1 + k2)², in terms of the incoming
momenta k1 and k2 . These momenta are linked to the masses of the incoming dark matter state via k12 = k22 = m2χ .
For two incoming states with the same mass this gives us the velocity of each of the two particles as
s = (k1 + k2)² = 2mχ² + 2k10 k20 − 2~k1·~k2  =(cms)=  2mχ² + 2(k10)² + 2|~k1|²
  = 4mχ² + 4|~k1|² = 4mχ² (1 + v1²)  ⇔  v1² = (s − 4mχ²)/(4mχ²) = s/(4mχ²) − 1 . (3.18)
The relative velocity of the two incoming particles in the non-relativistic limit is instead defined as
v = |~k1/k10 − ~k2/k20|  =(cms)=  |~k1/k10 + ~k1/k10| = 2|~k1|/k10 ≈ 2v1  ⇔  mχ² v² = 4mχ² v1² = s − 4mχ² . (3.19)
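The relation mχ²v² = s − 4mχ² of Eq.(3.19) can be checked with explicit center-of-mass momenta; mχ = 30 GeV and v1 = 0.01 are illustrative choices, not numbers from the text:

```python
import math

# Check m_chi^2 v^2 = s - 4 m_chi^2, Eq.(3.19), with exact relativistic
# kinematics for two equal-mass particles colliding head-on in the cms.
m_chi, v1 = 30.0, 0.01                  # GeV, small single-particle velocity

gamma = 1.0 / math.sqrt(1.0 - v1**2)
k0 = gamma * m_chi                      # energy of each incoming particle
kvec = gamma * m_chi * v1               # magnitude of its 3-momentum
s = (2.0 * k0)**2                       # cms: total momentum vanishes

v_rel = 2.0 * kvec / k0                 # relative velocity, = 2 v1
lhs = m_chi**2 * v_rel**2
rhs = s - 4.0 * m_chi**2
print(lhs, rhs)                         # agree up to O(v^2) corrections
```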
Using the relative velocity, the thermal average of σχχ v as it appears for example in Eq.(3.6) is defined as

⟨σχχ v⟩ = (1/(8 mχ⁴ Tdec K2²(mχ/Tdec))) ∫_{4mχ²}^∞ ds σχχ(s) (s − 4mχ²) √s K1(√s/Tdec) , (3.20)

in terms of the modified Bessel functions of the second kind K1,2. Unfortunately, this form is numerically not very helpful in the general case. The thermal averaging replaces the global value of σχχ v in the annihilation term added to the equilibrium Boltzmann equation Eq.(3.14) on the right-hand side,

ṅ(t) + 3H(t) n(t) = −⟨σχχ v⟩ (n(t)² − neq(t)²) . (3.21)

The time dependence of n induced by the annihilation process is proportional to n², because of the two WIMPs in the initial state of the annihilation process. The form of the equation guarantees that for n = neq the only change in density occurs from the expanding Universe.
We can analytically solve this Boltzmann equation using a set of approximations. We start with a redefinition: introducing the yield Y we get rid of the dilution term,

(1/a(t)³) d/dt [n(t) a(t)³] = −⟨σχχ v⟩ (n(t)² − neq(t)²)
⇔ T(t)³ d/dt [n(t)/T(t)³] = −⟨σχχ v⟩ (n(t)² − neq(t)²)
⇔ dY(t)/dt = −⟨σχχ v⟩ T(t)³ (Y(t)² − Yeq(t)²)   with Y(t) := n(t)/T(t)³ . (3.22)
Throughout these lecture notes we have always replaced the time by some other variable describing the history of
the Universe. We again switch variables to x = mχ /T . For the Jacobian we assume that most of the dark matter
decoupling happens with ρr ρm ; in the early, radiation-dominated Universe we can link the time and x through
the Hubble constant,
1/(2t)  =[Eq.(1.31)]=  H  =[Eq.(1.47)]=  H(x=1)/x²  ⇔  x = √(2t H(x=1))
⇒ dx/dt = 2H(x=1)/(2√(2t H(x=1))) = H(x=1)/x , (3.23)
The value λ̄ depends on x only through geff, so we can assume it to be constant as long as geff does not change much. Under this assumption we can then solve the Boltzmann equation, neglecting Yeq well after decoupling, with the simple substitution Ȳ := 1/Y.
From Eq.(3.8) we know that thermal WIMPs have masses well above 10 GeV, which corresponds to geff ≈ 100.
This value only changes once the temperature reaches the bottom mass and then drops to geff ≈ 3.6 today. This
allows us to separate the leading effects driving the dark matter density into the decoupling phase described by the
Boltzmann equation and an expansion phase with its drop in geff . For the first phase we can just integrate the
Boltzmann equation for constant geff, starting just before decoupling (xdec) and integrating to a point x′dec ≫ xdec after decoupling but above the bottom mass,
1/Y(x′dec) − 1/Y(xdec) = −λ̄/x′dec^{3/2} + λ̄/xdec^{3/2} . (3.28)
From the form of the Boltzmann equation in Eq.(3.25) we see that Y(x) drops rapidly with increasing x. If we choose x′dec ≫ xdec = 23 it follows that Y(x′dec) ≪ Y(xdec) and hence
1/Y(x′dec) = λ̄/xdec^{3/2}  ⇔  Y(x′dec) = xdec^{3/2}/λ̄ = xdec/λ(xdec)  =[Eq.(3.24)]=  xdec (π √geff/√90) (1/(MPl mχ ⟨σχχ v⟩)) . (3.29)
In this expression geff is evaluated around the point of decoupling. For the second, expansion phase we can just
follow Eq.(3.10) and compute
ρχ(T0) = mχ nχ(T0) = mχ Y(x′dec) T′dec³ (a(T′dec)/a(T0))³  =[Eq.(3.11)]=  mχ Y(x′dec) T0³ geff(T0)/geff(T′dec) = mχ Y(x′dec) T0³/28 . (3.30)

⇒ Ωχ h² = mχ Y(x′dec) T0³ h²/(28 · 3 MPl² H0²)
        = (h²/28) (π √geff xdec/(√90 MPl ⟨σχχ v⟩)) (T0³/(3 MPl² H0²))   (3.31)
        = (h²/28) (π √geff xdec/(√90 MPl ⟨σχχ v⟩)) ((2.4·10⁻⁴)³/((2.5·10⁻³)⁴ eV))  ⇒  Ωχ h² ≈ 0.12 (xdec/23) (√geff/10) (1.7·10⁻⁹ GeV⁻²/⟨σχχ v⟩) .
We can translate this result into different units. In the cosmology literature people often use eV−1 = 2 · 10−5 cm.
In particle physics we measure cross sections in barn, where 1 fb = 10−39 cm2 . Our above result is a very good
approximation to the correct value for the relic density in terms of the annihilation cross section
Ωχ h² ≈ 0.12 (xdec/23) (√geff/10) (1.7·10⁻⁹ GeV⁻²/⟨σχχ v⟩) ≈ 0.12 (xdec/23) (√geff/10) (2.04·10⁻²⁶ cm³/s / ⟨σχχ v⟩) . (3.32)
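The unit conversion in Eq.(3.32) follows from the standard constants ħc = 1.973·10⁻¹⁴ GeV·cm and c = 2.998·10¹⁰ cm/s; a minimal sketch:

```python
# Convert the thermal annihilation cross section from natural units (GeV^-2)
# to cm^3/s, using the standard constants hbar*c and c.
hbar_c = 1.973e-14      # GeV cm
c = 2.998e10            # cm/s

sigma_v = 1.7e-9        # GeV^-2, the value appearing in Eq.(3.31)
sigma_v_cgs = sigma_v * hbar_c**2 * c   # (GeV^-2 -> cm^2) then times velocity

print(f"{sigma_v_cgs:.2e} cm^3/s")      # close to the 2.04e-26 cm^3/s of Eq.(3.32)
```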
With this result we can now insert the WIMP annihilation rate given by Eq.(3.3) and Eq.(3.4),
⟨σχχ v⟩ = σχχ v + O(v²) ≈ √(2/x) π α² mχ²/(cw⁴ mZ⁴)
⇒ Ωχ h² = 0.12 (xdec/23) (√geff/10) √(xdec/2) (cw⁴ mZ⁴/(π α² mχ²)) (1.7·10⁻⁹/GeV²) = 0.12 (xdec/23)^{3/2} (√geff/10) (35 GeV/mχ)² . (3.33)
We can compare this result to our earlier estimate in Eq.(3.13) and confirm that these numbers make sense for a
weakly interacting particle with a weak-scale mass.
Alternatively, we can replace the scaling of the annihilation cross section given in Eq.(3.3) by a simpler form, only
including the WIMP mass and certainly valid for heavy dark matter, mχ > mZ . We find
⟨σχχ v⟩ ≈ g⁴/(16π mχ²)  =!  1.7·10⁻⁹/GeV²  ⇔  g² ≈ mχ/(3400 GeV) = mχ/(3.4 TeV) . (3.34)
This form of the cross section does not assume a weakly interacting origin; it simply follows from the scaling with the coupling and from dimensional analysis. Depending on the coupling, its prediction for the dark matter mass can be significantly higher. Based on this relation we can estimate an upper limit on mχ from the unitarity condition for the annihilation cross section.
A lower limit does not exist, because we can make a lighter particle more and more weakly coupled. Eventually, it
will be light enough to be relativistic at the point of decoupling, bringing us back to the relic neutrinos discussed in
Section 2.1.
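The scaling of Eq.(3.34) can be turned into numbers; a sketch, where the strong-coupling cutoff g² ≲ 4π is an illustrative assumption replacing the unitarity condition left implicit above:

```python
import math

# Eq.(3.34): the observed relic density requires g^2 ~ m_chi / 3.4 TeV.
def coupling_squared(m_chi_TeV):
    return m_chi_TeV / 3.4

for m in (0.1, 1.0, 3.4):
    print(f"m_chi = {m} TeV -> g^2 = {coupling_squared(m):.3f}")

# A rough strong-coupling cutoff g^2 < 4 pi (an assumption) then caps the mass.
m_max = 4.0 * math.pi * 3.4
print(f"m_chi below roughly {m_max:.0f} TeV for g^2 < 4 pi")
```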
Let us briefly recapitulate our argument which through the Boltzmann equation leads us to the WIMP miracle: we
start with a 2 → 2 scattering process linking dark matter to Standard Model particles through a so-called mediator,
which can for example be a weak boson. This allows us to compute the dark matter relic density as a function of
the mediating coupling and the dark matter mass, and it turns out that a weak coupling combined with a dark
matter mass below the TeV scale fits perfectly. There are two ways in which we can modify the assumed dark
matter annihilation process given in Eq.(3.15): first, in the next Section 3.3 we will introduce additional
annihilation channels for an extended dark matter sector. Second, in Section 4.1 we will show what happens if the
annihilation process proceeds through an s-channel Higgs resonance.
3.3 Co-annihilation
In many models the dark matter sector consists of more than one particle, separated from the Standard Model
particles for example through a specific quantum number. A typical structure is two dark matter particles χ1 and
χ2 with mχ1 < mχ2 . In analogy to Eq.(3.15) they can annihilate into a pair of Standard Model particles through
the set of processes
χ1 χ1 → f f¯ χ1 χ2 → f f¯ χ2 χ2 → f f¯ . (3.36)
This set of processes can mediate a much more efficient annihilation of the dark matter state χ1 together with the
second state χ2 , even in the limit where the actual dark matter process χ1 χ1 → f f¯ is not allowed. Two
non-relativistic states will have number densities both given by Eq.(1.40). We know from Eq.(3.8) that decoupling
of a WIMP happens at typical values xdec = mχ /Tdec ≈ 28, so if we for example assume
∆mχ = mχ2 − mχ1 = 0.2 mχ1 and g1 = g2 we find
n2(Tdec)/n1(Tdec)  =[Eq.(1.40)]=  (g2/g1) (1 + ∆mχ/mχ1)^{3/2} e^{−∆mχ/Tdec} = 1.31 e^{−0.2 xdec} ≈ 1/206 . (3.37)
Just from statistics the heavier state will already be rare by the time the lighter, actual dark matter agent annihilates. For a mass difference around 10% the suppression is reduced to a factor 1/15. This gives us an estimate that efficient co-annihilation prefers two states with mass differences in the 10% range or below.
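The two suppression factors quoted above follow directly from Eq.(3.37); a minimal sketch with g2 = g1 and xdec = 28 as in the text:

```python
import math

# Boltzmann suppression of the heavier co-annihilation partner, Eq.(3.37).
def n2_over_n1(delta, x_dec=28.0):
    """delta = (m_chi2 - m_chi1)/m_chi1, assuming g2 = g1."""
    return (1.0 + delta)**1.5 * math.exp(-delta * x_dec)

for delta in (0.2, 0.1):
    r = n2_over_n1(delta)
    print(f"Delta m/m = {delta}: n2/n1 = 1/{1/r:.0f}")
```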
Let us assume that there are two particles present at the time of decoupling. In addition, we assume that the first
two processes shown in Eq.(3.36) contribute to the annihilation of the dark matter state χ1. In this case the Boltzmann equation becomes
ṅ1(t) + 3H(t) n1(t) = −⟨σχ1χ1 v⟩ (n1(t)² − n1,eq(t)²) − ⟨σχ1χ2 v⟩ (n1(t) n2(t) − n1,eq(t) n2,eq(t))
                    ≈ −⟨σχ1χ1 v⟩ (n1(t)² − n1,eq(t)²) − ⟨σχ1χ2 v⟩ (n2/n1) (n1(t)² − n1,eq(t)²)
  =[Eq.(3.37)]=  −[⟨σχ1χ1 v⟩ + ⟨σχ1χ2 v⟩ (g2/g1) (1 + ∆mχ/mχ1)^{3/2} e^{−∆mχ/T}] (n1(t)² − n1,eq(t)²) . (3.38)
In the second step we assume that the two particles decouple simultaneously, such that their number densities track
each other through the entire process, including the assumed equilibrium values. This means that we can
throughout our single-species calculations just replace
⟨σχχ v⟩ → ⟨σχ1χ1 v⟩ + ⟨σχ1χ2 v⟩ (g2/g1) (1 + ∆mχ/mχ1)^{3/2} e^{−∆mχ/T} . (3.39)
In the co-annihilation setup it is not required that the direct annihilation process dominates. The annihilation of
more than one particle contributing to a dark matter sector can include many other aspects, for example when the
dark matter state only interacts gravitationally and the annihilation proceeds mostly through a next-to-lightest,
weakly interacting state. The Boltzmann equation will in this case split into one equation for each state and
include decays of the heavier state into the dark matter state. Such a system of Boltzmann equations cannot be
solved analytically in general.
What we can assume is that the two co-annihilation partners have very similar masses, ∆mχ mχ1 , similar
couplings, g1 = g2 , and that the two annihilation processes in Eq.(3.36) are of similar size, hσχ1 χ1 vi ≈ hσχ1 χ2 vi.
In that limit we simply find hσχχ vi → 2hσχ1 χ1 vi in the Boltzmann equation. We know from Eq.(3.32) how the
correct relic density depends on the annihilation cross section. Keeping the relic density constant we absorb the
rate increase through co-annihilation into a shift in the typical WIMP masses of the two dark matter states.
According to Eq.(3.34) the WIMP masses should now be
⟨σχχ v⟩ ≈ g⁴/(16π mχ²) ≡ 2 g⁴/(32π mχ1²)  or  mχ1 ≈ mχ2 ≈ √2 mχ . (3.40)
A simple question, which we will come back to when we talk about collider signatures, is how easy it would be to discover a single WIMP compared to a pair of co-annihilating, slightly heavier WIMPs.
An interesting question is how co-annihilation channels modify the WIMP mass scale which is required by the
observed relic density. From Eq.(3.40) we see that an increase in the total annihilation rate leads to a larger mass
scale of the dark matter particles, as expected from our usual scaling. On the other hand, the annihilation cross
√
section really enters for example Eq.(3.32) in the combination hσχ1 χ1 vi/ geff . If we increase the number of
effective degrees of freedom significantly, while the co-annihilation channels really have a small effect on the total
annihilation rate, the dark matter mass might also decrease.
3.4 Velocity dependence

In the non-relativistic limit we can expand the annihilation cross section in powers of the small relative velocity,

σχχ v = s0 + s1 v² + O(v⁴) . (3.41)

This pattern follows from the partial wave analysis of relativistic scattering. The first term s0 is
velocity-independent and arises from S-wave scattering. An example is the scattering of two scalar dark matter
particles with an s-channel scalar mediator or two Dirac fermions with an s-channel vector mediator. The second
term s1 with a vanishing rate at threshold is generated by S-wave and P -wave scattering. It occurs for example for
Dirac fermion scattering through an s-channel vector mediator. All t-channel processes have an S-wave
component and are not suppressed at threshold.
For dark matter phenomenology, the dependence on a potentially small velocity shown in Eq.(3.41) is the important aspect. Different dark matter agents with different interaction patterns lead to distinct threshold dependences, both for s-channel and t-channel mediators and for the different kinds of couplings to Standard Model fermions f in the final state.
Particles which are their own anti-particles, like Majorana fermions and real scalars, do not annihilate through
s-channel vector mediators. The same happens for complex scalars and axial-vector mediators. In general,
t-channel annihilation to two Standard Model fermions is not possible for scalar dark matter.
To allow for an efficient dark matter annihilation to today's relic density, we tend to prefer an unsuppressed contribution s0 to increase the thermal freeze-out cross section. The problem with such large annihilation rates is that they are strongly constrained by early-universe physics. For example, the PLANCK measurements of the matter power spectrum discussed in Section 1.5 constrain light dark matter very generally, just based on the fact that such light dark matter can affect the photon background at the time of decoupling. The problem arises if dark matter candidates annihilate into Standard Model particles through non-gravitational interactions,
χχ → SM SM . (3.42)
As we know from Eq.(3.1) this process is the key ingredient to thermal freeze-out dark matter. If it happens at the
time of last scattering it injects heat into the intergalactic medium. This ionizes the hydrogen and helium atoms
formed during recombination. While the ionization energy does not modify the time of the last scattering, it
prolongs the period of recombination or, alternatively, leads to a broadening of the surface of last scattering. This
leads to a suppression of the temperature fluctuations and enhances the polarization power spectrum. The temperature and polarization data from PLANCK put an upper limit on the dark matter annihilation cross section.
The factor feff < 1 denotes the fraction of the dark matter rest mass energy injected into the intergalactic medium.
It is a function of the dark matter mass, the dominant annihilation channel, and the fragmentation patterns of the
SM particles the dark matter agents annihilate into. For example, a 200 GeV dark matter particle annihilating to
photons or electrons reaches feff = 0.66 ... 0.71, while an annihilation to muon pairs only gives feff = 0.28. As we
know from Eq.(3.31) for freeze-out dark matter an annihilation cross section of the order
⟨σχχ v⟩ ≈ 1.7·10⁻⁹/GeV² is needed. This means that the PLANCK constraint of Eq.(3.43) requires a minimum dark matter mass.
In contrast to limits from searches for dark matter annihilation in the center of the galaxy or in dwarf galaxies, as
we will discuss in Section 5, this constraint does not suffer from astrophysical uncertainties, such as the density
profile of the dark matter halo in galaxies.
3.5 Sommerfeld enhancement

Radiative corrections can drastically change the threshold behavior shown in Eq.(3.41). As an example, we study
the annihilation of two dark matter fermions through an s-channel scalar in the limit of small relative velocity of
the two fermions. The starting point of our discussion is the loop diagram which describes the exchange of a gauge
boson between two incoming (or outgoing) massive fermions χ:
[Ladder diagram: incoming χ(k1) and χ(k2) exchange a gauge boson Z(q) before annihilating through the scalar S; the intermediate fermion momenta are q + k1 and q + k2.]

∫ d⁴q  γµ (q̸ + k̸1 + mχ)/((q + k1)² − mχ²) (1/(q² − mZ²)) (q̸ + k̸2 + mχ)/((q + k2)² − mχ²) γµ . (3.45)
The question is where this integral receives large contributions. Using k 2 = m2χ the denominators of the fermion
propagators read
1/((q + k)² − mχ²) = 1/(q0² − |~q|² + 2q0 k0 − 2~q·~k)
   =[Eq.(3.17)]=  1/(q0² − |~q|² + (2 + v²) mχ q0 − 2mχ v |~q| cos θ + O(q0 v²))
   =(|~q| = mχ v)=  1/(q0² − mχ² v² (1 + 2 cos θ) + (2 + v²) mχ q0 + O(q0 v²)) . (3.46)
The particles in the loop are not on their respective mass shells. Instead, we can identify a particularly dangerous
region for v → 0, namely q0 = mχ v 2 , where
1/((q + k)² − mχ²) = 1/(mχ² v² (1 − 2 cos θ) + O(v⁴)) . (3.47)
Unless we make an assumption about the angle θ we cannot make a stronger statement about the contributions of
the fermion propagators. If we just set cos θ = 0 we find
1/((q + k)² − mχ²) = 1/(mχ² v² + O(v⁴)) . (3.48)
In the same phase space region the Z boson propagator in the integral scales like
1/(q² − mZ²) = 1/(mχ² v⁴ − mχ² v² − mZ²) = −1/(mχ² v² + mZ² + O(v⁴)) . (3.49)
In the absence of the gauge boson mass the gauge boson propagator would diverge for v → 0, just like the fermion
propagators. This means that we can approximate the loop integral by focussing on the phase space regime q0 ≈ mχ v² and |~q| ≈ mχ v.
The complete infrared contribution to the one-loop matrix element of Eq.(3.45) with a massive gauge boson
exchange and neglecting the Dirac matrix structure is
mχ² ∫ d⁴q (1/((q + k1)² − mχ²)) (1/(q² − mZ²)) (1/((q − k2)² − mχ²)) ≈ ∆q0 (∆|~q|)³ mχ²/(mχ v² (mχ² v² + mZ²) mχ v²)
   ≈ mχ v² (mχ v)³ mχ²/(mχ v² (mχ² v² + mZ²) mχ v²)
   ∝ v/(v² + mZ²/mχ²)  →(mZ/mχ → 0)→  1/v . (3.51)
This means that part of the one-loop correction to the dark matter annihilation process at threshold scales like 1/v
in the limit of massless gauge boson exchange. For massive gauge bosons the divergent behavior is cut off with a
lower limit v & mZ /mχ . If we attach an additional gauge boson exchange to form a two-loop integral, the above
considerations apply again, but only to the last, triangular diagram. The divergence still has the form 1/v.
Eventually, it will be cut off by the widths of the particles, which is a phrase often used in the literature and not at
all easy to show in detail.
More important is the question what this result means for our calculations. It will turn out that while the loop corrections for slowly moving particles with a massless gauge boson exchange are divergent, they typically correct a cross section which vanishes at threshold and only lead to a finite rate at the production threshold.
As long as we limit ourselves to v ≪ 1 we do not need to use relativistic quantum field theory for this calculation.
We can compute the same v-dependent correction to particle scattering using non-relativistic quantum mechanics.
We assume two electrically and weakly charged particles χ± , so their attractive potential has spherically
symmetric Coulomb and Yukawa parts,
V(r) = −e²/r − (gZ²/r) e^{−mZ r}  with r = |~r| . (3.52)
The coupling gZ describes an unknown χ-χ-Z interaction. With such a potential we can compute a two-body
scattering process. The wave function ψk (~r) will in general be a superposition of an incoming plane wave in the
z-direction and a set of spherical waves with a modulation in terms of the scattering angle θ. As in Eq.(1.59) we
can expand the wave function in spherical harmonics, combined with an energy-dependent radial function
R(r; E). We again exploit the symmetry with respect to the azimuthal angle φ and obtain
ψk(~r) = Σ_{ℓ=0}^∞ Σ_{m=−ℓ}^{+ℓ} aℓm Yℓm(θ, φ) Rℓ(r; E)
       = Σ_{ℓ=0}^∞ (2ℓ + 1) aℓ0 Yℓ0(θ, φ) Rℓ(r; E)
  =[Eq.(1.63)]=  Σ_{ℓ=0}^∞ (2ℓ + 1) aℓ0 √((2ℓ + 1)/2) Pℓ(cos θ) Rℓ(r; E) =: Σ_{ℓ=0}^∞ Aℓ Pℓ(cos θ) Rℓ(r; E) . (3.53)
From the calculation of the hydrogen atom we know that the radial, time-independent Schrödinger equation in
terms of the reduced mass m reads
[−(1/(2m r²)) (d/dr) r² (d/dr) + ℓ(ℓ + 1)/(2m r²) + V(r) − E] Rℓ(r; E) = 0 . (3.54)
The reduced mass for a system with two identical masses is given by
m = m1 m2/(m1 + m2) = mχ/2 . (3.55)
As a first step we solve the Schrödinger equation at large distances, where we can neglect V (r). We know that the
solution will be plane waves, but to establish our procedure we go through the steps starting from Eq.(3.54) one by one,
[−(1/r²) (d/dr) r² (d/dr) + ℓ(ℓ + 1)/r² − k²] Rkℓ(r) = 0   with k² := 2mE = m² v²
⇔ (1/ρ²) (d/dρ) (ρ² dRkℓ/dρ) − (ℓ(ℓ + 1)/ρ²) Rkℓ + Rkℓ = 0   with ρ := kr
⇔ (1/ρ²) (2ρ dRkℓ/dρ + ρ² d²Rkℓ/dρ²) − (ℓ(ℓ + 1)/ρ²) Rkℓ + Rkℓ = 0
⇔ ρ² d²Rkℓ/dρ² + 2ρ dRkℓ/dρ − ℓ(ℓ + 1) Rkℓ + ρ² Rkℓ = 0 . (3.56)
This differential equation turns out to be identical to the implicit definition of the spherical Bessel functions j` (ρ),
so we can identify Rk` (r) = j` (ρ). The radial wave function can then be expressed in Legendre polynomials,
Rkℓ(r) = jℓ(ρ) = ((−1)^ℓ/(2 ℓ!)) (ρ/2)^ℓ ∫₋₁¹ dt e^{iρt} (t² − 1)^ℓ
       = ((−1)^ℓ/(2 ℓ!)) (ρ/2)^ℓ { [e^{iρt} (t² − 1)^ℓ/(iρ)]₋₁¹ − (1/(iρ)) ∫₋₁¹ dt e^{iρt} (d/dt)(t² − 1)^ℓ }
       = ((−1)^ℓ/(2 ℓ!)) (ρ/2)^ℓ ((−1)/(iρ)) ∫₋₁¹ dt e^{iρt} (d/dt)(t² − 1)^ℓ = ···
       = ((−1)^ℓ/(2 ℓ!)) (ρ/2)^ℓ (1/(iρ))^ℓ (−1)^ℓ ∫₋₁¹ dt e^{iρt} (d^ℓ/dt^ℓ)(t² − 1)^ℓ  =[Eq.(1.64)]=  ((−i)^ℓ/2) ∫₋₁¹ dt e^{iρt} Pℓ(t) . (3.57)
The integration variable t corresponds to cos θ in our physics problem. As mentioned above, these solutions to the
free Schrödinger equation have to be plane waves. We use the relation
Σ_{ℓ=0}^∞ ((2ℓ + 1)/2) Pℓ(t) Pℓ(t′) = δ(t − t′) (3.58)
to link the plane wave to this expression in terms of the spherical Bessel functions and the Legendre polynomials.
With the correct ansatz we find
Σ_{ℓ=0}^∞ i^ℓ (2ℓ + 1) Pℓ(t) jℓ(ρ)  =[Eq.(3.57)]=  Σ_{ℓ=0}^∞ i^ℓ (2ℓ + 1) Pℓ(t) ((−i)^ℓ/2) ∫₋₁¹ dt′ e^{iρt′} Pℓ(t′)
   =[Eq.(3.58)]=  e^{iρt} = e^{ikr cos θ} . (3.59)
2
If we know that the series in Eq.(3.53) describes such plane waves, we can determine A` Rk` by comparing the two
sums and find
Aℓ Rkℓ(r) = i^ℓ (2ℓ + 1) jℓ(kr) ≈ { i^ℓ (2ℓ + 1) sin(kr − ℓπ/2)/(kr)    for kr ≫ ℓ²
                                  { i^ℓ (2ℓ + 1) (kr)^ℓ/(2ℓ + 1)!!      for kr ≪ √(2ℓ) . (3.60)
We include two limits which can be derived for the spherical Bessel functions. To describe the interaction with and
without a potential V (r) we are always interested in the wave function at the origin. The lower of the above two
limits indicates that for small r and hence small ρ values only the first term ` = 0 will contribute. We can evaluate
j0 (kr) for kr = 0 in both forms and find the same value,
|ψk(~0)|²  =(ℓ=0)=  |A0 P0(cos θ) Rk0(0)|² = |A0 Rk0(0)|² = lim_{r→0} |j0(kr)|² = 1 . (3.61)
The argument that only ` = 0 contributes to the wave function at the origin is not at all trivial to make, and it holds
as long as the potential does not diverge faster than 1/r towards the origin.
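These limiting forms are easy to verify numerically. The following snippet is our own cross-check, not part of the original derivation: it compares the exact ℓ = 0, 1 spherical Bessel functions with the two limits of Eq.(3.60).

```python
import math

def j0(x):
    # exact l=0 spherical Bessel function: j0(x) = sin(x)/x
    return math.sin(x) / x if x != 0.0 else 1.0

def j1(x):
    # exact l=1 spherical Bessel function: j1(x) = sin(x)/x^2 - cos(x)/x
    return math.sin(x) / x**2 - math.cos(x) / x

def small_x_limit(l, x):
    # (kr)^l / (2l+1)!!   valid for small kr
    double_fact = 1
    for k in range(2 * l + 1, 0, -2):
        double_fact *= k
    return x**l / double_fact

def large_x_limit(l, x):
    # sin(kr - l*pi/2) / (kr)   valid for large kr
    return math.sin(x - l * math.pi / 2) / x

# near the origin only l = 0 survives: j0 -> 1 while j1 -> 0
print(j0(1e-4), j1(1e-4))
# the two limits of Eq.(3.60) against the exact functions
print(j1(0.01), small_x_limit(1, 0.01))    # both ~0.00333
print(j1(200.0), large_x_limit(1, 200.0))  # both ~-0.0024
```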
Next, we add an attractive Coulomb potential to Eq.(3.56), giving us the radial Schrödinger equation in a slightly re-written form in the first term

\[
\begin{aligned}
\left[-\frac{1}{r}\frac{d^2}{dr^2}\,r+\frac{\ell(\ell+1)}{r^2}-\frac{2me^2}{r}-k^2\right]\frac{u_{k\ell}}{r}&=0
\qquad\text{with}\quad u_{k\ell}(r):=rR_{k\ell}(r)\\
\Leftrightarrow\qquad \frac{d^2u_{k\ell}}{dr^2}-\frac{\ell(\ell+1)}{r^2}\,u_{k\ell}+\frac{2me^2}{r}\,u_{k\ell}+k^2u_{k\ell}&=0\\
\Leftrightarrow\qquad \frac{d^2u_{k\ell}}{d\rho^2}-\frac{\ell(\ell+1)}{\rho^2}\,u_{k\ell}+\frac{2me^2}{\rho k}\,u_{k\ell}+u_{k\ell}&=0\,.
\end{aligned}
\qquad(3.62)
\]
The solution of this equation will lead us to the well-known hydrogen atom and its energy levels. However, we are
not interested in the energy levels but in the continuum scattering process. Following the discussion around
Eq.(3.60) and assuming that the Coulomb potential will not change the fundamental structure of the solution
around the origin we can evaluate the radial wave function for ` = 0,
\[
\frac{d^2u_{k0}}{d\rho^2}+\frac{2me^2}{\rho k}\,u_{k0}+u_{k0}=0\,. \qquad(3.63)
\]
This is the equation we need to solve and then evaluate at the origin, r⃗ = 0⃗. We only quote the result,

\[
|\psi_{\vec k}(0)|^2=\frac{2\pi e^2}{v}\,\frac{1}{1-e^{-2\pi e^2/v}}
\approx
\begin{cases}
\dfrac{2\pi e^2}{v} & \text{for } v\to0\\[1.5ex]
1 & \text{for } v\to\infty\,.
\end{cases}
\qquad(3.64)
\]
Compared to Eq.(3.61) this increased probability measure is called the Sommerfeld enhancement. It is divergent at
small velocities, just as in the Feynman-diagrammatic discussion before. For very small velocities, it can lead to an
enhancement of the threshold cross section by several orders of magnitude.
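A minimal numerical sketch of Eq.(3.64) illustrates both limits; the coupling value standing in for e² is an arbitrary illustration, not fixed by the text.

```python
import math

def sommerfeld_coulomb(v, alpha):
    # |psi(0)|^2 of Eq.(3.64); alpha stands for the coupling e^2
    x = 2 * math.pi * alpha / v
    return x / (1 - math.exp(-x))

alpha = 0.01  # illustrative coupling strength
for v in (1.0, 0.1, 0.01, 0.001):
    print(v, sommerfeld_coulomb(v, alpha))
# the factor approaches 1 for v >> alpha and grows like 2*pi*alpha/v for v -> 0
```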
It can be shown that the calculation based on ladder diagrams in momentum space and based on the Schrödinger
equation in position space are equivalent for simple scattering processes. The resummation of the ladder diagrams
is equivalent to the computation of the wave function at the origin in the Fourier-transformed position space.
The case of the Yukawa potential shows a similar behavior. It involves an amusing trick in the computation of the potential, so we discuss it in some detail. When we include the Yukawa potential in the Schrödinger equation we cannot solve the equation analytically; however, the Hulthén potential is an approximation to the Yukawa potential which does allow us to solve the Schrödinger equation. It is defined as

\[
V(r)=\frac{g_Z^2\,\delta\,e^{-\delta r}}{1-e^{-\delta r}}\,. \qquad(3.65)
\]
Optimizing the numerical agreement of the Hulthén potential’s radial wave functions with those of the Yukawa potential suggests for the relevant mass ratio in our calculation

\[
\delta\approx\frac{\pi^2}{6}\,m_Z\,, \qquad(3.66)
\]
which we will use later. Unlike for the Coulomb potential we can now keep the full `-dependence of the
Schrödinger equation. The only additional approximation we use is for the angular momentum term
\[
\frac{\delta^2e^{-\delta r}}{\left(1-e^{-\delta r}\right)^2}
=\delta^2\,\frac{1-\delta r+\mathcal O(\delta^2r^2)}{\left(\delta r-\dfrac{1}{2}\delta^2r^2+\mathcal O(\delta^3r^3)\right)^2}
=\frac{1}{r^2}\,\frac{1-\delta r+\mathcal O(\delta^2r^2)}{\left(1-\dfrac{1}{2}\delta r+\mathcal O(\delta^2r^2)\right)^2}
=\frac{1}{r^2}\left[1+\mathcal O(\delta^2r^2)\right]\,. \qquad(3.67)
\]
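A one-line numerical check (ours, with arbitrary values for δ and r) confirms that the exact Hulthén-induced term reduces to 1/r² for δr ≪ 1:

```python
import math

def angular_term(delta, r):
    # exact delta^2 e^{-delta r} / (1 - e^{-delta r})^2 from Eq.(3.67)
    return delta**2 * math.exp(-delta * r) / (1 - math.exp(-delta * r))**2

# for delta*r << 1 the factor reduces to the Coulomb-like 1/r^2
print(angular_term(0.01, 1.0) * 1.0**2)  # ~1 up to O(delta^2 r^2)
print(angular_term(1.0, 1.0) * 1.0**2)   # visibly below 1 once delta*r = O(1)
```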
The radial Schrödinger equation of Eq.(3.62) with the Hulthén potential and the above approximation for the angular-momentum-induced potential term now reads

\[
\begin{aligned}
\left[-\frac{1}{r}\frac{d^2}{dr^2}\,r+\ell(\ell+1)\,\frac{\delta^2e^{-\delta r}}{\left(1-e^{-\delta r}\right)^2}-\frac{g_Z^2\,\delta e^{-\delta r}}{1-e^{-\delta r}}-k^2\right]\frac{u_{k\ell}}{r}&=0\\
\Leftrightarrow\qquad
\frac{d^2u_{k\ell}}{dr^2}-\ell(\ell+1)\,\frac{\delta^2e^{-\delta r}}{\left(1-e^{-\delta r}\right)^2}\,u_{k\ell}+\frac{g_Z^2\,\delta e^{-\delta r}}{1-e^{-\delta r}}\,u_{k\ell}+k^2u_{k\ell}&=0\,.
\end{aligned}
\qquad(3.68)
\]
Again, we only quote the result: the leading term for the corresponding Sommerfeld enhancement factor in the limit v ≪ 1 arises from

\[
|\psi_{\vec k}(0)|^2=\frac{\pi g_Z^2}{v}\;
\frac{\sinh\left(\dfrac{2vm_\chi\pi}{\delta}\right)}
{\cosh\left(\dfrac{2vm_\chi\pi}{\delta}\right)-\cos\left(2\pi\sqrt{\dfrac{g_Z^2m_\chi}{\delta}-\dfrac{v^2m_\chi^2}{\delta^2}}\right)}\,.
\qquad(3.69)
\]
This Sommerfeld enhancement factor will be a combination of a slowly varying underlying function with a peak
structure defined by the denominator.
We are interested in the position and the height of the first and the following peaks. We need the two Taylor series

\[
\sinh x=x+\mathcal O(x^3)
\qquad\text{and}\qquad
\cosh x=1+\frac{x^2}{2}+\mathcal O(x^4)\,. \qquad(3.70)
\]
Figure 7: Sommerfeld enhancement for a Yukawa potential as a function of the dark matter mass (M ≡ mχ), shown for the velocities v = 10⁻⁵, 10⁻³, 10⁻², 10⁻¹. It assumes the correct Z-mass and a coupling strength of gZ² = 1/30. Figure from Ref. [7], found for example in Mariangela Lisanti’s lecture notes [8].
The cosh function is always larger than one and grows rapidly with increasing argument. This means that in the limit v ≪ 1 the two terms in the denominator can cancel almost entirely,
\[
|\psi_{\vec k}(0)|^2=\frac{\pi g_Z^2}{v}\,
\frac{\dfrac{2\pi vm_\chi}{\delta}+\mathcal O(v^3)}
{1+\mathcal O(v^2)-\cos\left(2\pi\sqrt{\dfrac{g_Z^2m_\chi}{\delta}}+\mathcal O(v^2)\right)}
\;\overset{v\to0}{\longrightarrow}\;
\frac{\dfrac{2\pi^2g_Z^2m_\chi}{\delta}}{1-\cos\sqrt{\dfrac{4\pi^2g_Z^2m_\chi}{\delta}}}\,. \qquad(3.71)
\]
The finite limit for v → 0 is well defined except for mass ratios mχ/δ or mχ/mZ right on the pole. The positions of the peaks in this oscillating function of the mass ratio mχ/mZ are independent of the velocity in the limit v ≪ 1.
The peak positions are
\[
\frac{4\pi^2g_Z^2m_\chi}{\delta}=(2n\pi)^2
\quad\Leftrightarrow\quad
\frac{m_\chi}{\delta}=\frac{n^2}{g_Z^2}
\quad\stackrel{\text{Eq.(3.66)}}{\Leftrightarrow}\quad
\frac{m_\chi}{m_Z}=\frac{\pi^2}{6g_Z^2}\,n^2
\qquad\text{with } n=1,2,\dots \qquad(3.72)
\]
For example, assuming gZ² ≈ 1/20, we expect the first peak at dark matter masses below roughly 3 TeV.
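This estimate can be sketched in a few lines of Python; the value gZ² = 1/20 is the assumption quoted above:

```python
import math

mZ = 91.19      # GeV
gZ2 = 1 / 20.0  # assumed coupling g_Z^2, the value used in the estimate above

def peak_mass(n):
    # resonance positions of Eq.(3.72): m_chi = pi^2 n^2 / (6 g_Z^2) * m_Z
    return math.pi**2 * n**2 / (6 * gZ2) * mZ

for n in (1, 2, 3):
    print(n, peak_mass(n) / 1000.0, "TeV")  # first peak at ~3 TeV, growing like n^2
```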
Sommerfeld enhancement factor on the first peak we have to include the second term in the Taylor series in
Eq.(3.69) and find
\[
|\psi_{\vec k}(0)|^2=
\frac{\dfrac{2\pi^2g_Z^2m_\chi}{\delta}}{\dfrac{1}{2}\left(\dfrac{2vm_\chi\pi}{\delta}\right)^{\!2}+\mathcal O(v^4)}
=\frac{g_Z^2\,\delta}{m_\chi v^2}
\quad\stackrel{\text{Eq.(3.72)}}{\Longrightarrow}\quad
|\psi_{\vec k}(0)|^2=\frac{g_Z^4}{v^2}\,. \qquad(3.73)
\]
From our calculation and this final result it is clear that a large ratio of the dark matter mass to the electroweak
masses modifies the pure v-dependence of the Coulomb-like Sommerfeld enhancement, but is not its source. Just
like for the Coulomb potential the driving force behind the Sommerfeld enhancement is the vanishing velocity,
leading to long-lived bound states. The ratio mχ /mZ entering the Sommerfeld enhancement is simply the effect
of the Z-mass acting as a regulator towards small velocities.
3.6 Freeze-in production
In the previous discussion we have seen that thermal freeze-out offers an elegant explanation of the observed relic density, requiring only minimal modifications to the thermal history of the Universe. On the other hand, for cold dark matter and asymmetric dark matter we have seen that an alternative production mechanism has a huge effect on dark matter physics. A crucial assumption behind freeze-out dark matter is that the coupling between the Standard Model and dark matter cannot be too small; otherwise we never reach thermal equilibrium and cannot apply Eq.(3.2). For the Higgs portal model discussed in Section 4.1 this is the case for a portal coupling of λ3 ≲ 10⁻⁷. For such small interaction rates the (almost) model-independent lower bound on the dark matter mass from measurements of the CMB temperature variation and polarization, discussed in Section 1.4 and giving mχ ≳ 10 GeV, does not apply. This allows for new kinds of light dark matter.
For such very weakly interacting particles, called feebly interacting massive particles or FIMPs, we can invoke the non-thermal, so-called freeze-in mechanism. The idea is that the dark matter sector gets populated through decays or annihilation of SM particles until the number density of the corresponding SM particle species becomes Boltzmann-suppressed. For an example SM particle B with an interaction as in Eq.(3.74) and mB > 2mχ, the decay B → χχ̄ allows us to populate the dark sector. The Boltzmann equation in Eq.(3.21) then acquires a source term.
The condition that the dark matter sector is initially not in thermal equilibrium translates into a lower bound on the dark matter mass. Its precise value depends on the model, but for a mediator with mB ≈ 100 GeV one can estimate mχ ≳ 0.1 ... 1 keV from the fundamental assumptions of the model.
A decay-based source term in terms of the internal number of degrees of freedom g*_B, the partial width B → χχ̄, and the equilibrium distribution exp(−EB/T) can be written as

\[
\begin{aligned}
S(B\to\chi\bar\chi)&=g_B^*\int\frac{d^3p_B}{(2\pi)^3}\,e^{-E_B/T}\,\frac{m_B}{E_B}\,\Gamma(B\to\chi\bar\chi)\\
&=g_B^*\,\Gamma(B\to\chi\bar\chi)\int\frac{|\vec p_B|^2\,d|\vec p_B|}{2\pi^2}\,e^{-E_B/T}\,\frac{m_B}{E_B}\\
&=\frac{g_B^*\,m_B}{2\pi^2}\,\Gamma(B\to\chi\bar\chi)\int_{m_B}^{\infty}dE_B\,\sqrt{E_B^2-m_B^2}\;e^{-E_B/T}
\qquad\text{using}\quad \frac{d|\vec p_B|}{dE_B}=\frac{E_B}{\sqrt{E_B^2-m_B^2}}\\
&=\frac{g_B^*\,m_B^2}{2\pi^2}\,\Gamma(B\to\chi\bar\chi)\,T\,K_1(m_B/T)\,,
\end{aligned}
\qquad(3.76)
\]
where K₁(z) is the modified Bessel function of the second kind. For small z it is approximately given by K₁(z) ≈ 1/z, while for large z it reproduces the Boltzmann factor, K₁(z) ∝ e⁻ᶻ/√z. This form suggests that the dark matter density will increase until T becomes small compared to mB and the source term becomes suppressed by e^(−mB/T). The source term is independent of nχ and proportional to the partial decay width. We also expect it to be proportional to the equilibrium number density of B, defined as
\[
n_B^{\text{eq}}=g_B^*\int\frac{d^3p}{(2\pi)^3}\,e^{-E_B/T}
=\frac{g_B^*}{2\pi^2}\int_{m_B}^{\infty}dE_B\,E_B\sqrt{E_B^2-m_B^2}\;e^{-E_B/T}
=\frac{g_B^*}{2\pi^2}\,m_B^2\,T\,K_2(m_B/T)\,, \qquad(3.77)
\]
in analogy to Eq.(3.76), but with the second-order Bessel function K₂. We can use this relation to eliminate the explicit equilibrium distribution and write the source term as
Figure 8: Scaling of Y(x) = nχ/T³ for the freeze-out (left) and freeze-in (right) mechanisms for three different interaction rates between the visible sector and the dark matter particles (larger to smaller cross sections along the arrow). In the left panel x = mχ/T, in the right panel x = mB/T. The dashed contours correspond to the equilibrium densities. Figure from Ref. [9].
\[
S(B\to\chi\bar\chi)=\Gamma(B\to\chi\bar\chi)\,\frac{K_1(m_B/T)}{K_2(m_B/T)}\,n_B^{\text{eq}}\,. \qquad(3.78)
\]
To compute the relic density, we introduce the notation of Eq.(3.24), namely x = mB /T and Y = nχ /T 3 . The
Boltzmann equation from Eq.(3.75) now reads
\[
\frac{dY(x)}{dx}=\frac{g_B^*}{2\pi^2}\,\frac{\Gamma(B\to\chi\bar\chi)}{H(x_B=1)}\;x^3K_1(x)
\qquad\text{with}\qquad
x^3K_1(x)\approx
\begin{cases}
x^3e^{-x} & x\gg1 \;\text{or}\; T\ll m_B\\[1ex]
x^2 & x\ll1 \;\text{or}\; T\gg m_B\,.
\end{cases}
\qquad(3.79)
\]
Because the function x³K₁(x) has a distinct maximum at x = O(1), dark matter production is dominated by temperatures T ≈ mB. We can integrate the dark matter production over the entire thermal history and find the final yield Y(x⁰dec) with the help of the appropriate integral table.
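Both the peak structure and the quoted integral can be checked numerically. The sketch below (ours) implements K₁ through its integral representation with a simple trapezoid rule; all grid choices are arbitrary.

```python
import math

def K1(z, tmax=14.0, steps=2000):
    # modified Bessel function K_1 from its integral representation
    # K_1(z) = int_0^inf exp(-z cosh t) cosh t dt, simple trapezoid rule
    h = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-z * math.cosh(t)) * math.cosh(t)
    return total * h

# limits quoted in the text: K1(z) ~ 1/z for small z, Boltzmann-suppressed for large z
print(K1(0.1), 1 / 0.1)
print(K1(5.0), math.sqrt(math.pi / 10) * math.exp(-5.0))

# the freeze-in source of Eq.(3.79) involves x^3 K1(x): it peaks at x = O(1),
# and the integral-table result is int_0^inf x^3 K1(x) dx = 3 pi / 2
xs = [0.1 * i for i in range(1, 300)]
f = [x**3 * K1(x) for x in xs]
print(xs[f.index(max(f))])            # peak at x of order one
print(sum(f) * 0.1, 3 * math.pi / 2)  # both ~4.71
```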
We can now follow the steps from Eq.(3.13) and Eq.(3.31) and compute the relic density today,

\[
\begin{aligned}
\Omega_\chi h^2&=\frac{h^2\,m_\chi}{3M_{\text{Pl}}^2H_0^2}\,\frac{T_0^3}{28}\,Y(x_{\text{dec}}^0)\\
&=\frac{h^2\,m_\chi T_0^3\,g_B^*}{112\pi\,M_{\text{Pl}}^2H_0^2}\,\frac{\Gamma(B\to\chi\bar\chi)}{H(x_B=1)}\\
&\stackrel{\text{Eq.(1.47)}}{=}\frac{\sqrt{90}\,h^2\,g_B^*\,m_\chi T_0^3}{112\pi^2\sqrt{g_{\text{eff}}}\;m_B^2\,H_0^2\,M_{\text{Pl}}}\,\Gamma(B\to\chi\bar\chi)\\
&=\frac{\sqrt{90}\,h^2\,g_B^*\,m_\chi}{112\pi^2\sqrt{g_{\text{eff}}}\;m_B^2}\,\frac{(2.4\cdot10^{-4})^3}{(2.5\cdot10^{-3})^4}\,M_{\text{Pl}}\,\Gamma(B\to\chi\bar\chi)
=3.6\cdot10^{23}\;\frac{g_B^*\,m_\chi}{m_B^2}\,\Gamma(B\to\chi\bar\chi)\,.
\end{aligned}
\qquad(3.81)
\]
The calculation up to this point is independent of the details of the interaction between the decaying particle B and the DM candidate χ. For the example interaction of Eq.(3.74), the partial decay width is given by Γ(B → χχ̄) = y²mB/(8π), and assuming g*B = 2 we find

\[
\Omega_\chi h^2=0.12\,\left(\frac{y}{2\cdot10^{-12}}\right)^{\!2}\,\frac{m_\chi}{m_B}\,. \qquad(3.82)
\]
The correct relic density from B-decays requires small couplings y and/or small dark matter masses mχ, compatible with the initial assumption that dark matter was never in thermal equilibrium with the Standard Model for T ≳ mB. Following Eq.(3.82), larger interaction rates lead to larger final dark matter abundances. This is the opposite of the scaling for the freeze-out mechanism of Eq.(3.32). In the right panel of Figure 8 we show the scaling of Y(x) with x = mB/T, compared with the scaling of Y(x) with x = mχ/T for freeze-out. The two mechanisms can be understood as the limits of increasing the interaction strength between the visible and the dark matter sector (freeze-out) and decreasing this interaction strength (freeze-in) in a given model.
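Inverting Eq.(3.82) for the coupling is a one-liner; the mass ratios below are illustrative:

```python
import math

def freeze_in_coupling(mchi_over_mB, omega_h2=0.12):
    # invert Eq.(3.82): Omega h^2 = 0.12 (y / 2e-12)^2 (m_chi / m_B)
    return 2e-12 * math.sqrt(omega_h2 / 0.12 / mchi_over_mB)

print(freeze_in_coupling(1.0))   # y ~ 2e-12 for m_chi ~ m_B
print(freeze_in_coupling(0.01))  # y ~ 2e-11 for m_chi = m_B / 100
```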
Even though we illustrate the freeze-in mechanism with the example of the decay of the SM particle B into dark
matter, the dark matter sector could also be populated by an annihilation process B B̄ → χχ̄, decays of SM
particles into a visible particle and dark matter B → B2 χ, or scenarios where B is not a SM particle. If the decay
B → B2 χ is responsible for the observed relic density, it can account for asymmetric dark matter if
Γ(B → B₂χ) ≠ Γ(B̄ → B̄₂χ̄), as discussed in Section 2.5.
4 WIMP models
If we want to approach the problem of dark matter from a particle physics perspective, we need to make
assumptions about the quantum numbers of the weakly interacting state which forms dark matter. During most of
these lecture notes we assume that this new particle has a mass in the GeV to TeV range, and that its density is
thermally produced during the cooling of the Universe. Moreover, we assume that the entire dark matter density of
the Universe is due to one stable particle.
The first assumption fixes the spin of this particle. From the Standard Model we know that there exist fundamental
scalars, like the Higgs, fundamental fermions, like quarks and leptons, and fundamental gauge bosons, like the
gluon or the weak gauge bosons. Scalars have spin zero, fermions have spin 1/2, and gauge bosons have spin 1.
Because calculations with gauge bosons are significantly harder, in particular when they are massive, we limit
ourselves to scalars and fermions.
When we construct particle models of dark matter we are faced with this wide choice of new, stable particles and their quantum numbers. Moreover, dark matter has to couple to the Standard Model, because it has to annihilate to produce the observed relic density Ωχh² = 0.12. This means that strictly speaking we do not only need to postulate a dark matter particle, but also a way for this state to communicate with the Standard Model, along the lines of the table of states in Section 3.5. The second state is usually called a mediator.
4.1 Higgs portal
A possible linear term in the new, real field is removed by a shift in the fields. In the above form the new scalar S
can couple to two SM Higgs bosons, which induces a decay either on-shell S → HH or off-shell
S → H ∗ H ∗ → 4b. To forbid this, we apply the usual trick, which is behind essentially all WIMP dark matter
models; we require the Lagrangian to obey a global Z2 symmetry
S → −S, H → +H, ··· (4.4)
This defines an ad-hoc Z2 parity +1 for all SM particles and −1 for the dark matter candidate. The combined
potential now reads
\[
\begin{aligned}
V&\supset-\frac{m_H^2}{2}H^2+\frac{m_H^2}{2v_H}H^3+\frac{m_H^2}{8v_H^2}H^4-\mu_S^2S^2+\lambda_SS^4+\frac{\lambda_3}{2}\left(H+v_H\right)^2S^2\\
&=-\frac{m_H^2}{2}H^2+\frac{m_H^2}{2v_H}H^3+\frac{m_H^2}{8v_H^2}H^4-\left(\mu_S^2-\lambda_3\frac{v_H^2}{2}\right)S^2+\lambda_SS^4+\lambda_3v_H\,HS^2+\frac{\lambda_3}{2}H^2S^2\,.
\end{aligned}
\qquad(4.5)
\]
The mass of the dark matter scalar and its phenomenologically relevant SSH and SSHH couplings read

\[
m_S=\sqrt{2\mu_S^2-\lambda_3v_H^2}
\qquad g_{SSH}=-2\lambda_3v_H
\qquad g_{SSHH}=-2\lambda_3\,. \qquad(4.6)
\]
The sign of λ3 is a free parameter. Unlike for singlet models with a second VEV, the dark singlet does not affect
the SM Higgs relations in Eq.(4.2). However, the SSH coupling mediates SS interactions with pairs of SM
particles through the light Higgs pole, as well as Higgs decays H → SS, provided the new scalar is light enough.
The SSHH coupling can mediate heavy dark matter annihilation into Higgs pairs. We will discuss more details
on invisible Higgs decays in Section 7.
For dark matter annihilation, the SSf f̄ transition matrix element based on the Higgs portal is described by the s-channel Feynman diagram SS → H* → b(k₂) b̄(k₁).
All momenta are defined incoming, giving us for an outgoing fermion and an outgoing anti-fermion

\[
\mathcal M=\bar u(k_2)\,\frac{-im_f}{v_H}\,v(k_1)\;\frac{-i}{(k_1+k_2)^2-m_H^2+im_H\Gamma_H}\;\left(-2i\lambda_3v_H\right)\,. \qquad(4.7)
\]
In this expression we see that vH cancels, but the fermion mass mf will appear in the expression for the
annihilation rate. We have to square this matrix element, paying attention to the spinors v and u, and then sum
over the spins of the external fermions,
\[
\begin{aligned}
\sum_{\text{spin}}|\mathcal M|^2
&=4\lambda_3^2m_f^2\,\sum_{\text{spin}}v(k_1)\bar v(k_1)\,\sum_{\text{spin}}u(k_2)\bar u(k_2)\,
\frac{1}{\left|(k_1+k_2)^2-m_H^2+im_H\Gamma_H\right|^2}\\
&=4\lambda_3^2m_f^2\,\text{Tr}\left[(\slashed k_1-m_f\,\mathbb 1)(\slashed k_2+m_f\,\mathbb 1)\right]
\frac{1}{\left[(k_1+k_2)^2-m_H^2\right]^2+m_H^2\Gamma_H^2}\\
&=4\lambda_3^2m_f^2\;4\left(k_1\cdot k_2-m_f^2\right)
\frac{1}{\left[(k_1+k_2)^2-m_H^2\right]^2+m_H^2\Gamma_H^2}\\
&=8\lambda_3^2m_f^2\,\frac{(k_1+k_2)^2-4m_f^2}{\left[(k_1+k_2)^2-m_H^2\right]^2+m_H^2\Gamma_H^2}\,.
\end{aligned}
\qquad(4.8)
\]
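The Dirac trace in this spin sum can be verified numerically with explicit gamma matrices; the momenta and fermion mass below are arbitrary on-shell test values, and numpy is assumed to be available.

```python
import numpy as np

# Dirac representation of the gamma matrices
I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
gamma = [np.block([[I2, 0 * I2], [0 * I2, -I2]]).astype(complex)]
gamma += [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in sig]

def slash(k):
    # gamma_mu k^mu with metric (+,-,-,-)
    return k[0] * gamma[0] - k[1] * gamma[1] - k[2] * gamma[2] - k[3] * gamma[3]

def dot(k, p):
    return k[0] * p[0] - k[1] * p[1] - k[2] * p[2] - k[3] * p[3]

mf = 0.5  # arbitrary fermion mass for the check
# two arbitrary on-shell momenta with k^2 = mf^2
k1 = np.array([np.sqrt(mf**2 + 1.0), 1.0, 0.0, 0.0])
k2 = np.array([np.sqrt(mf**2 + 0.25 + 0.09), 0.5, 0.3, 0.0])

lhs = np.trace((slash(k1) - mf * np.eye(4)) @ (slash(k2) + mf * np.eye(4))).real
rhs = 4 * (dot(k1, k2) - mf**2)
print(lhs, rhs)  # identical: Tr[(k1slash - m)(k2slash + m)] = 4 (k1.k2 - m^2)
```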
In the sum over spin and color of the external fermions the averaging is not yet included, because we need to
specify which of the external particles are incoming or outgoing. As an example, we compute the cross section for
the dark matter annihilation process to a pair of bottom quarks
SS → H ∗ → bb̄ . (4.9)
This s-channel annihilation corresponds to the leading on-shell Higgs decay H → bb̄ with a branching ratio
around 60%. In terms of the Mandelstam variable s = (k1 + k2 )2 it gives us
\[
\begin{aligned}
\sum_{\text{spin,color}}|\mathcal M|^2&=N_c\,8\lambda_3^2m_b^2\,\frac{s-4m_b^2}{\left(s-m_H^2\right)^2+m_H^2\Gamma_H^2}\\
\Rightarrow\qquad
\sigma(SS\to b\bar b)&=\frac{1}{16\pi s}\,\sqrt{\frac{1-4m_b^2/s}{1-4m_S^2/s}}\,\sum_{\text{spin,color}}|\mathcal M|^2\\
&=\frac{N_c\,\lambda_3^2m_b^2}{2\pi\sqrt s}\,\frac{\sqrt{1-4m_b^2/s}}{\sqrt{s-4m_S^2}}\;\frac{s-4m_b^2}{\left(s-m_H^2\right)^2+m_H^2\Gamma_H^2}\,.
\end{aligned}
\qquad(4.10)
\]
To compute the relic density we need the velocity-averaged cross section. For the contribution of the bb̄ final state to the dark matter annihilation rate we find the leading term in the non-relativistic limit, s → 4m_S²,
\[
\begin{aligned}
\langle\sigma v\rangle\Big|_{SS\to b\bar b}\equiv\sigma v\Big|_{SS\to b\bar b}
&\stackrel{\text{Eq.(3.19)}}{=}
v\;\frac{N_c\lambda_3^2m_b^2}{2\pi\sqrt s}\,\frac{\sqrt{1-4m_b^2/s}}{m_Sv}\;\frac{s-4m_b^2}{\left(s-m_H^2\right)^2+m_H^2\Gamma_H^2}\\
&\stackrel{\text{threshold}}{=}
\frac{N_c\lambda_3^2m_b^2}{4\pi m_S^2}\,\sqrt{1-\frac{m_b^2}{m_S^2}}\;\frac{4m_S^2-4m_b^2}{\left(4m_S^2-m_H^2\right)^2+m_H^2\Gamma_H^2}\\
&\stackrel{m_S\gg m_b}{=}
\frac{N_c\lambda_3^2m_b^2}{\pi}\,\frac{1}{\left(4m_S^2-m_H^2\right)^2+m_H^2\Gamma_H^2}\,.
\end{aligned}
\qquad(4.11)
\]
This expression holds for all scalar masses mS . In our estimate we identify the v-independent expression with the
thermal average. Obviously, this will become more complicated once we include the next term in the expansion
around v ≈ 0. The Breit–Wigner propagator guarantees that the rate never diverges, even in the case when the
annihilating dark matter hits the Higgs pole in the s-channel.
The simplest parameter point to evaluate this annihilation cross section is on the Higgs pole. This gives us
\[
\begin{aligned}
\langle\sigma v\rangle\Big|_{SS\to b\bar b}
&=\frac{N_c\lambda_3^2m_b^2}{\pi}\,\frac{1}{\left(4m_S^2-m_H^2\right)^2+m_H^2\Gamma_H^2}
\stackrel{m_H=2m_S}{=}\frac{N_c\lambda_3^2m_b^2}{\pi m_H^2\Gamma_H^2}
\approx\frac{15\lambda_3^2}{\text{GeV}^2}\\
\Rightarrow\qquad
\langle\sigma_{\chi\chi}v\rangle&=\frac{1}{\text{BR}(H\to b\bar b)}\,\langle\sigma v\rangle\Big|_{SS\to b\bar b}
\approx\frac{25\lambda_3^2}{\text{GeV}^2}
\stackrel{!}{=}\frac{1.7\cdot10^{-9}}{\text{GeV}^2}
\quad\Leftrightarrow\quad\lambda_3\approx8\cdot10^{-6}\,,
\end{aligned}
\qquad(4.12)
\]
with ΓH ≈ 4 · 10−5 mH . While it is correct that the self coupling required on the Higgs pole is very small, the full
calculation leads to a slightly larger value λ3 ≈ 10−3 , as shown in Figure 9.
Lighter dark matter scalars also probe the Higgs mediator on-shell. In the Breit-Wigner propagator of the annihilation cross section, Eq.(4.11), we have to compare the two terms

\[
m_H^2-4m_S^2=m_H^2\left(1-\frac{4m_S^2}{m_H^2}\right)
\qquad\text{vs}\qquad
m_H\Gamma_H\approx4\cdot10^{-5}\,m_H^2\,. \qquad(4.13)
\]
The two states would have to fulfill exactly the on-shell condition m_H = 2m_S for the second term to dominate. We can therefore stick to the first term for m_H > 2m_S and find for the dominant decay to bb̄ pairs in the limit m_H² ≫ m_S² ≫ m_b²

\[
\langle\sigma v\rangle\Big|_{SS\to b\bar b}
=\frac{N_c\lambda_3^2m_b^2}{\pi m_H^4}
\approx\frac{\lambda_3^2}{125^2\cdot50^2\;\text{GeV}^2}
\stackrel{!}{=}\frac{1.7\cdot10^{-9}}{\text{GeV}^2}
\quad\Leftrightarrow\quad\lambda_3=0.26\,. \qquad(4.14)
\]
Heavier dark matter scalars well above the Higgs pole also include the annihilation channels into top quarks, weak bosons, and Higgs pairs.
Unlike for on-shell Higgs decays, the bb̄ final state is not dominant for dark matter annihilation when it proceeds
through a 2 → 2 process. Heavier particles couple to the Higgs more strongly, so above the Higgs pole they will
give larger contributions to the dark matter annihilation rate. For top quarks in the final state this simply means
replacing the Yukawa coupling m2b by the much larger m2t . In addition, the Breit-Wigner propagator will no longer
scale like 1/m2H , but proportional to 1/m2S . Altogether, this gives us a contribution to the annihilation rate of the
kind
\[
\langle\sigma v\rangle\Big|_{SS\to t\bar t}
=\frac{N_c\lambda_3^2m_t^2}{\pi\left(4m_S^2-m_H^2\right)^2}
\stackrel{2m_S\gg m_H}{=}\frac{N_c\lambda_3^2m_t^2}{16\pi m_S^4}\,. \qquad(4.16)
\]
The real problem is the annihilation to the weak bosons W, Z, because it leads to a different scaling of the
annihilation cross section. In the limit of large energies we can describe for example the process SS → W + W −
using spin-0 Nambu-Goldstone bosons in the final state. These Nambu-Goldstone modes in the Higgs doublet φ
appear as the longitudinal degrees of freedom, which means that dark matter annihilation to weak bosons at large
Figure 9: Higgs portal parameter space in terms of the self coupling λhSS ∼ λ3 and the dark matter mass MDM = mS. The red lines indicate the correct relic density Ωχh² (WMAP); overlaid are direct detection constraints (XENON100, XENON1T, XENONUP) and the contour of a 10% invisible Higgs branching ratio. Figure from Ref. [11].
energies follows the same pattern as dark matter annihilation to Higgs pairs. Because we are more used to the
Higgs degree of freedom we calculate the annihilation to Higgs pairs,
SS → HH . (4.17)
The two Feynman diagrams with the direct four-point interaction and the Higgs propagator at the threshold s = 4m_S² scale like

\[
\begin{aligned}
\mathcal M_4&=g_{SSHH}=-2\lambda_3\\
\mathcal M_H&=\frac{g_{SSH}}{s-m_H^2}\,\frac{3m_H^2}{v_H}
\stackrel{\text{threshold}}{=}-\frac{2\lambda_3v_H}{4m_S^2-m_H^2}\,\frac{3m_H^2}{v_H}
\stackrel{m_S\gg m_H}{=}-\frac{6\lambda_3m_H^2}{4m_S^2}\;\ll\;\mathcal M_4\,.
\end{aligned}
\qquad(4.18)
\]
This means for heavy dark matter we can neglect the s-channel Higgs propagator contribution and focus on the
four-scalar interaction. In analogy to Eq.(4.11) we then compute the velocity-weighted cross section at threshold,
\[
\begin{aligned}
\sigma(SS\to HH)&=\frac{1}{16\pi s}\,\sqrt{\frac{s-4m_H^2}{s-4m_S^2}}\;4\lambda_3^2
\stackrel{\text{Eq.(3.19)}}{=}\frac{\lambda_3^2}{4\pi s}\,\sqrt{1-\frac{4m_H^2}{s}}\;\frac{\sqrt s}{v\,m_S}\\
\Rightarrow\qquad
\sigma v\Big|_{SS\to HH}&=\frac{\lambda_3^2}{4\pi m_S\sqrt s}\,\sqrt{1-\frac{4m_H^2}{s}}
\stackrel{\text{threshold}}{=}\frac{\lambda_3^2}{8\pi m_S^2}\,\sqrt{1-\frac{m_H^2}{m_S^2}}
\stackrel{m_S\gg m_H}{=}\frac{\lambda_3^2}{8\pi m_S^2}\,.
\end{aligned}
\qquad(4.19)
\]
For mS = 200 GeV we can derive the coupling λ3 which we need to reproduce the observed relic density,
\[
\frac{1.7\cdot10^{-9}}{\text{GeV}^2}\stackrel{!}{=}\frac{\lambda_3^2}{8\pi m_S^2}\approx\frac{\lambda_3^2}{10^6\;\text{GeV}^2}
\quad\Leftrightarrow\quad \lambda_3\approx0.04\,. \qquad(4.20)
\]
The curve in Figure 9 shows two thresholds related to four-point annihilation channels, one at mS = mZ and one
at mS = mH . Starting with mS = 200 GeV and corresponding values for λ3 the annihilation to Higgs and
Goldstone boson pairs dominates the annihilation rate.
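The three benchmark couplings of Eqs.(4.12), (4.14), and (4.20) follow from the same target annihilation rate; a short numerical summary (our own sketch):

```python
import math

target = 1.7e-9  # required <sigma v> in 1/GeV^2, as used in the text

# on the Higgs pole, Eq.(4.12): <sigma_chichi v> ~ 25 lambda_3^2 / GeV^2
lam_pole = math.sqrt(target / 25)
# below the pole, Eq.(4.14): <sigma v> ~ lambda_3^2 / (125^2 50^2 GeV^2)
lam_light = math.sqrt(target * 125**2 * 50**2)
# heavy dark matter, Eq.(4.20) with mS = 200 GeV: <sigma v> = lambda_3^2 / (8 pi mS^2)
lam_heavy = math.sqrt(target * 8 * math.pi * 200**2)

print(lam_pole, lam_light, lam_heavy)  # ~8e-6, ~0.26, ~0.04
```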
One lesson to learn from our Higgs portal considerations is the scaling of the dark matter annihilation cross section
with the WIMP mass mS . It does not follow Eq.(3.3) at all and only follows Eq.(3.34) for very heavy dark matter.
For our model, where the annihilation is largely mediated by a Yukawa coupling mb , we find
\[
\sigma_{\chi\chi}\propto
\begin{cases}
\dfrac{\lambda_3^2m_b^2}{m_H^4} & m_S\ll m_H\\[2.5ex]
\dfrac{\lambda_3^2m_b^2}{m_H^2\Gamma_H^2} & m_S=\dfrac{m_H}{2}\\[2.5ex]
\dfrac{\lambda_3^2}{m_S^2} & m_S>m_Z,m_H\,.
\end{cases}
\qquad(4.21)
\]
It will turn out that the most interesting scaling is on the Higgs peak, because the Higgs width is not at all related
to the weak scale.
4.2 Vector Portal
Inspired by the WIMP assumption in Eq.(3.3) we can use a new massive gauge boson to mediate thermal freeze-out production. The combination of a free vector mediator mass and a free dark matter mass will allow us to study a similar range of scenarios as for the Higgs portal, Eq.(4.21). A physics argument is given by the fact that the Standard Model has a few global symmetries which can be extended to anomaly-free gauge symmetries.
The extension of the Standard Model with its hypercharge symmetry U (1)Y by an additional U (1) gauge group
defines another renormalizable portal to dark matter. Since U (1)-field strength tensors are gauge singlets, the
kinetic part of the Lagrangian allows for kinetic mixing,
\[
\mathcal L_{\text{gauge}}=-\frac{1}{4}\hat B^{\mu\nu}\hat B_{\mu\nu}-\frac{s_\chi}{2}\hat V^{\mu\nu}\hat B_{\mu\nu}-\frac{1}{4}\hat V^{\mu\nu}\hat V_{\mu\nu}
=-\frac{1}{4}\begin{pmatrix}\hat B_{\mu\nu}&\hat V_{\mu\nu}\end{pmatrix}
\begin{pmatrix}1&s_\chi\\ s_\chi&1\end{pmatrix}
\begin{pmatrix}\hat B^{\mu\nu}\\ \hat V^{\mu\nu}\end{pmatrix}\,, \qquad(4.22)
\]
where sχ ≡ sin χ is assumed to be a small mixing parameter. In principle it does not have to be an angle, but for the purpose of these lecture notes we assume that it is small, sχ ≪ 1, so we can treat it as a trigonometric function and write cχ ≡ √(1−sχ²) and tχ ≡ sχ/cχ. Even if the parameter sχ is chosen to be zero at tree-level, loops of particles charged under both U(1)X and U(1)Y introduce a non-zero value for it. Similar to the Higgs portal, there is no symmetry that forbids it, so we do not want to assume that all quantum corrections cancel to a net value sχ = 0.
The notation B̂μν indicates that the gauge fields are not yet canonically normalized, which means that the residue of the propagator is not one. In addition, the gauge boson propagators derived from Eq.(4.22) are not diagonal. We can diagonalize the matrix in Eq.(4.22) and keep the hypercharge unchanged with a non-orthogonal rotation of the gauge fields

\[
\begin{pmatrix}\hat B_\mu\\ \hat W_\mu^3\\ \hat V_\mu\end{pmatrix}
=G(\theta_V)\begin{pmatrix}B_\mu\\ W_\mu^3\\ V_\mu\end{pmatrix}
=\begin{pmatrix}1&0&-s_\chi/c_\chi\\ 0&1&0\\ 0&0&1/c_\chi\end{pmatrix}
\begin{pmatrix}B_\mu\\ W_\mu^3\\ V_\mu\end{pmatrix}\,. \qquad(4.23)
\]
We now include the third component of the SU(2)L gauge field triplet Wμ = (Wμ¹, Wμ², Wμ³), which mixes with the hypercharge gauge boson through electroweak symmetry breaking to produce the massive Z boson and the massless photon. Kinetic mixing between the SU(2)L field strength tensor and the U(1)X field strength tensor is forbidden because V̂^μν Ŵ^a_μν is not a gauge singlet. Assuming a mass m̂V for the V-boson we write the combined mass matrix as

\[
\mathcal M^2\stackrel{\text{Eq.(4.23)}}{=}\frac{v^2}{4}
\begin{pmatrix}
g'^2 & -gg' & -g'^2s_\chi\\
-gg' & g^2 & gg's_\chi\\
-g'^2s_\chi & gg's_\chi & \dfrac{4\hat m_V^2}{v^2}\left(1+s_\chi^2\right)+g'^2s_\chi^2
\end{pmatrix}
+\mathcal O(s_\chi^3)\,. \qquad(4.24)
\]
This mass matrix can be diagonalized with a combination of two block-diagonal rotations with the weak mixing
matrix and an additional angle ξ,
\[
R_1(\xi)R_2(\theta_w)=
\begin{pmatrix}1&0&0\\ 0&c_\xi&s_\xi\\ 0&-s_\xi&c_\xi\end{pmatrix}
\begin{pmatrix}c_w&s_w&0\\ -s_w&c_w&0\\ 0&0&1\end{pmatrix}\,, \qquad(4.25)
\]

giving

\[
R_1(\xi)R_2(\theta_w)\,\mathcal M^2\,R_2(\theta_w)^TR_1(\xi)^T=
\begin{pmatrix}m_\gamma^2&0&0\\ 0&m_Z^2&0\\ 0&0&m_V^2\end{pmatrix}\,, \qquad(4.26)
\]
provided
\[
\tan2\xi=\frac{2s_\chi s_w}{1-\dfrac{\hat m_V^2}{\hat m_Z^2}}+\mathcal O(s_\chi^2)\,. \qquad(4.27)
\]
For this brief discussion we assume for the mass ratio
\[
\frac{\hat m_V^2}{\hat m_Z^2}=\frac{4\hat m_V^2}{\left(g^2+g'^2\right)v^2}\ll1\,, \qquad(4.28)
\]
and find the mass eigenvalues

\[
m_\gamma^2=0\,,\qquad
m_Z^2=\hat m_Z^2\left[1+s_\chi^2s_w^2\left(1+\frac{\hat m_V^2}{\hat m_Z^2}\right)\right]\,,\qquad
m_V^2=\hat m_V^2\left(1+s_\chi^2c_w^2\right)\,. \qquad(4.29)
\]
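As a cross-check of this diagonalization we can verify numerically that the truncated mass matrix of Eq.(4.24) has one exactly massless eigenvalue, and that its trace reproduces the sum of the shifted masses in Eq.(4.29); all numerical inputs below are illustrative, not fixed by the text.

```python
# illustrative numerical inputs, not fixed by the text
g, gp, v, schi = 0.65, 0.36, 246.0, 0.05
mZhat2 = (g**2 + gp**2) * v**2 / 4
mVhat2 = 0.04 * mZhat2           # hat m_V << hat m_Z, as in Eq.(4.28)
sw2 = gp**2 / (g**2 + gp**2)

# mass matrix of Eq.(4.24), keeping terms through O(s_chi^2)
p = v**2 / 4
M = [[ p * gp**2,        -p * g * gp,       -p * gp**2 * schi],
     [-p * g * gp,        p * g**2,          p * g * gp * schi],
     [-p * gp**2 * schi,  p * g * gp * schi, mVhat2 * (1 + schi**2) + p * gp**2 * schi**2]]

det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
     - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
     + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
trace = M[0][0] + M[1][1] + M[2][2]

# Eq.(4.29): one exactly massless photon ...
print(det / mZhat2**3)   # ~0
# ... and the trace reproduces m_Z^2 + m_V^2 of the shifted masses
mZ2 = mZhat2 * (1 + schi**2 * sw2 * (1 + mVhat2 / mZhat2))
mV2 = mVhat2 * (1 + schi**2 * (1 - sw2))
print(trace, mZ2 + mV2)  # equal
```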
In addition to the dark matter mediator mass we also need the coupling of the new gauge boson V to SM matter.
Again, we start with the neutral currents for the not canonically normalized gauge fields and rotate them to the
physical gauge bosons defined in Eq.(4.26),
\[
\begin{pmatrix}e\,j_{\text{EM}}&\dfrac{e}{\sin\theta_w\cos\theta_w}\,j_Z&g_D\,j_D\end{pmatrix}
\begin{pmatrix}\hat A\\ \hat Z\\ \hat A'\end{pmatrix}
=\begin{pmatrix}e\,j_{\text{EM}}&\dfrac{e}{\sin\theta_w\cos\theta_w}\,j_Z&g_D\,j_D\end{pmatrix}
K
\begin{pmatrix}A\\ Z\\ V\end{pmatrix}\,, \qquad(4.30)
\]

with

\[
K=R_1(\xi)R_2(\theta_w)\,G^{-1}(\theta_\chi)\,R_2(\theta_w)^{-1}R_1(\xi)^{-1}
\approx\begin{pmatrix}1&0&-s_\chi c_w\\ 0&1&0\\ 0&s_\chi s_w&1\end{pmatrix}\,. \qquad(4.31)
\]
The new gauge boson couples to the electromagnetic current with a coupling strength of −sχcw e, while to leading order in sχ and m̂V/m̂Z its coupling to the Z-current vanishes. It is therefore referred to as a hidden photon. This behavior changes for larger masses, m̂V/m̂Z ≳ 1, for which the coupling to the Z-current can be the dominant coupling to SM fields. In this case the new gauge boson is called a Z′-boson. For the purpose of these lecture notes we will concentrate on the light V-boson, because it will allow for a light dark matter particle.
There are two ways in which the hidden photon could be relevant from a dark matter perspective. The new gauge
boson could be the dark matter itself, or it could provide a portal to a dark matter sector if the dark matter
candidate is charged under U (1)X . The former case is problematic, because the hidden photon is not stable and
can decay through the kinetic mixing term. Even if it is too light to decay into the lightest charged particles,
electrons, it can decay into neutrinos V → ν ν̄ through the suppressed mixing with the Z-boson and into three
photons V → 3γ through loops of charged particles. For mixing angles small enough to guarantee stability on
time scales of the order of the age of the universe, the hidden photon can therefore not be thermal dark matter.
If the hidden photon couples to a new particle charged under a new U(1)X gauge group, this particle could be a dark matter candidate. For a new Dirac fermion with U(1)X charge QX, we add a kinetic term with a covariant derivative to the Lagrangian. Through the U(1)X-mediator this dark fermion is in thermal contact with the Standard Model through the usual annihilation shown in Eq.(3.1). If the dark matter is lighter than the hidden photon and heavier than the electron,
Figure 10: Feynman diagrams contributing to the annihilation of dark matter coupled to the visible sector through
a hidden photon.
mV > mχ > me , the dominant s-channel Feynman diagram contributing to the annihilation cross section is
shown on the left of Figure 10. This diagram resembles the one shown above Eq.(4.7) for the case of a Higgs
portal and the cross section can be computed in analogy to Eq.(4.10),
s
m2e
1 − 4
2m2e 2m2χ m2V
1 s
σ(χχ̄ → e+ e− ) = (sχ cw egD Qχ )2 1 + 1+ , (4.33)
(s − m2V )2 + m2V Γ2V
s
12π s s m2χ
1−4 2
mV
with ΓV the total width of the hidden photon V. For the annihilation of two dark matter particles s = 4mχ², and assuming mV ≫ ΓV, the thermally averaged annihilation cross section is given by

\[
\langle\sigma v\rangle=\frac{\left(s_\chi c_w\,e\,g_DQ_\chi\right)^2}{4\pi}\,
\sqrt{1-\frac{m_e^2}{m_\chi^2}}\left(1+\frac{m_e^2}{2m_\chi^2}\right)
\frac{4m_\chi^2}{\left(4m_\chi^2-m_V^2\right)^2}
\;\stackrel{m_V\gg m_\chi\gg m_e}{\approx}\;
\left(s_\chi c_w\,e\,g_DQ_\chi\right)^2\frac{m_\chi^2}{\pi m_V^4}\,. \qquad(4.34)
\]
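The quality of the mV ≫ mχ approximation in Eq.(4.34) is easy to check numerically; the benchmark below is hypothetical and the overall coupling combination is set to one.

```python
import math

def sigma_v(mchi, mV, me=0.000511, coupling=1.0):
    # thermally averaged cross section of Eq.(4.34) in 1/GeV^2;
    # 'coupling' stands for the full combination s_chi c_w e g_D Q_chi
    pref = coupling**2 / (4 * math.pi)
    kin = math.sqrt(1 - me**2 / mchi**2) * (1 + me**2 / (2 * mchi**2))
    return pref * kin * 4 * mchi**2 / (4 * mchi**2 - mV**2)**2

mchi, mV = 0.01, 0.2  # GeV, hypothetical point with mV >> 2 mchi > 2 me
full = sigma_v(mchi, mV)
approx = mchi**2 / (math.pi * mV**4)
print(full, approx)   # agree at the few-percent level
```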
It exhibits the same scaling as in the generic WIMP case of Eq.(3.3). In contrast to the WIMP, however, the gauge
coupling is rescaled by the mixing angle sχ and for very small mixing angles the hidden photon can in principle be
very light. In Eq.(4.34) we assume that the dark photon decays into electrons. Since the hidden photon branching
ratios into SM final states are induced by mixing with the photon, for masses mV > 2mµ the hidden photon also
decays into muons and for hidden photon masses above a few 100 MeV and below 2mτ , the hidden photon decays
mainly into hadronic states. For mV > mχ , the PLANCK bound on the DM mass in Eq.(3.43) implies mV > 10
GeV and Eq.(4.34) would need to be modified by the branching ratios into the different kinematically accessible
final states. Instead, we illustrate the scaling with the scenario Qχ = 1, mχ = 10 MeV and mV = 100 MeV that is
formally excluded by the PLANCK bound, but only allows for hidden photon decays into electrons. In this case,
we find the observed relic density given in Eq.(3.31) for a coupling strength
In the opposite case of mχ > mV > me , the annihilation cross section is dominated by the diagram on the right of
Figure 10, with subsequent decays of the hidden photon. The thermally averaged annihilation cross section then
reads
\[
\langle\sigma v\rangle=\frac{g_D^4Q_\chi^4}{8\pi m_\chi^2}\,
\frac{\left(1-\dfrac{m_V^2}{m_\chi^2}\right)^{3/2}}{\left(1-\dfrac{m_V^2}{2m_\chi^2}\right)^{2}}
\;\stackrel{m_\chi\gg m_V}{\approx}\;\frac{g_D^4Q_\chi^4}{8\pi m_\chi^2}\,. \qquad(4.36)
\]
The scaling with the dark matter mass is the same as for a WIMP with mχ > mZ , as shown in Eq.(3.34). The
annihilation cross section is in principle independent of the mixing angle sχ , motivating the name secluded dark
matter for such models, but the hidden photon needs to eventually decay into SM particles. Again assuming
Qχ = 1 and mχ = 10 GeV, we find
4.3 Supersymmetric neutralinos
Supersymmetry is a (relatively) fashionable model for physics beyond the Standard Model which provides us with
a very general set of dark matter candidates. Unlike the portal model described in Section 4.1 the lightest
supersymmetric partner (LSP) is typically a fermion, more specifically a Majorana fermion. Majorana fermions
are their own anti-particles. An on-shell Dirac fermion, like an electron, has four degrees of freedom; for the
particle e− we have two spin directions, and for the anti-particle e+ we have another two. The Majorana fermion
only has two degrees of freedom. The reason why the minimal supersymmetric extension of the Standard Model,
the MSSM, limits us to Majorana fermions is that the photon as a massless gauge boson only has two degrees of
freedom. This holds for both the bino partner of the hypercharge gauge boson B and the wino partner of the still massless SU(2)L gauge boson W³. Just like the gauge bosons in the Standard Model mix to the photon and the Z,
the bino and wino mix to form so-called neutralinos. The masses of the physical state can be computed from the
bino mass parameter M1 and the wino mass parameter M2 .
For reasons which we do not have to discuss in these lecture notes, the MSSM includes a non-minimal Higgs
sector: the masses of up-type and down-type fermions are not generated from one Higgs field. Instead, we have
two Higgs doublets with two vacuum expectation values vu and vd . Because both contribute to the weak gauge
boson masses, their squares have to add to
\[
v_u^2+v_d^2=v_H^2=(246~\text{GeV})^2
\qquad\Leftrightarrow\qquad
v_u=v_H\sin\beta\,,\quad v_d=v_H\cos\beta
\qquad\Leftrightarrow\qquad
\tan\beta=\frac{v_u}{v_d}\,. \qquad(4.38)
\]
Two Higgs doublets include eight degrees of freedom, out of which three Nambu-Goldstone modes are needed to
make the weak bosons massive. The five remaining degrees of freedom form a light scalar h0 , a heavy scalar H 0 , a
pseudo-scalar A0 , and a charged Higgs H ± . Altogether this gives four neutral and four charged degrees of
freedom. In the Standard Model we know that the neutral (pseudo-scalar) Nambu-Goldstone mode is absorbed by the
massive Z boson, the mixture of the B and W 3 gauge bosons. We can therefore expect the supersymmetric higgsinos to mix with the bino
and wino as well. Because the neutralinos still are Majorana fermions, the eight degrees of freedom form four
neutralino states χ̃0i . Their mass matrix has the form
$$M = \begin{pmatrix} M_1 & 0 & -m_Z s_w \cos\beta & m_Z s_w \sin\beta \\ 0 & M_2 & m_Z c_w \cos\beta & -m_Z c_w \sin\beta \\ -m_Z s_w \cos\beta & m_Z c_w \cos\beta & 0 & -\mu \\ m_Z s_w \sin\beta & -m_Z c_w \sin\beta & -\mu & 0 \end{pmatrix} \,. \tag{4.39}$$
The mass matrix is real and symmetric. In the upper left corner the bino and wino mass parameters
appear, without any mixing terms between them. In the lower right corner we see the two higgsino states. Their
mass parameter is µ, the minus sign is conventional; by definition of the Higgs potential it links the up-type and
down-type Higgs or higgsino fields, so it has to appear in the off-diagonal entries. The off-diagonal sub-matrices
are proportional to mZ . In the limit sw → 0 and sin β = cos β = 1/√2 a universal mixing mass term mZ/√2
between the wino and each of the two higgsinos appears. It is the supersymmetric counterpart of the combined
Goldstone–W 3 mass mZ .
Like any real symmetric matrix, the neutralino mass matrix can be diagonalized through a real orthogonal rotation,

$$N M N^{-1} = \mathrm{diag}\left(m_{\tilde\chi_j^0}\right) \qquad j = 1, \ldots, 4 \,. \tag{4.40}$$
It is possible to extend the MSSM such that the dark matter candidates become Dirac fermions, but we will not
explore this avenue in these lecture notes.
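Numerically, the diagonalization of Eq.(4.40) is a one-liner once the matrix of Eq.(4.39) is assembled. The following sketch uses illustrative parameter values of ours and takes the physical masses as the absolute values of the eigenvalues; sign and phase conventions vary between references:

```python
import numpy as np

def neutralino_masses(M1, M2, mu, tan_beta, mZ=91.19, sw2=0.23):
    """Tree-level neutralino masses from the mass matrix of Eq.(4.39),
    in the (bino, wino, higgsino_d, higgsino_u) basis; masses in GeV."""
    sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
    beta = np.arctan(tan_beta)
    cb, sb = np.cos(beta), np.sin(beta)
    M = np.array([
        [M1,             0.0,           -mZ * sw * cb,  mZ * sw * sb],
        [0.0,            M2,             mZ * cw * cb, -mZ * cw * sb],
        [-mZ * sw * cb,  mZ * cw * cb,   0.0,          -mu],
        [ mZ * sw * sb, -mZ * cw * sb,  -mu,            0.0],
    ])
    # real symmetric matrix: orthogonal diagonalization as in Eq.(4.40);
    # the physical masses are the absolute values of the (possibly
    # negative) eigenvalues
    eigvals, N = np.linalg.eigh(M)
    return np.sort(np.abs(eigvals))

# mostly-bino LSP: the lightest mass stays close to M1
print(neutralino_masses(M1=100.0, M2=200.0, mu=500.0, tan_beta=10.0))
```

The gaugino–higgsino mixing shifts the eigenvalues only by O(m²Z/µ) relative to the diagonal entries.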
Because the SU (2)L gauge bosons as well as the Higgs doublet include charged states, the neutralinos are
accompanied by chargino states. They cannot be Majorana particles, because they carry electric charge. However,
as a remnant of the neutralino Majorana property they do not have a well-defined fermion number, like electrons
or positrons have. The corresponding chargino mass matrix will not include a bino-like state, so it reads
$$M = \begin{pmatrix} M_2 & \sqrt{2}\, m_W \sin\beta \\ \sqrt{2}\, m_W \cos\beta & \mu \end{pmatrix} \,. \tag{4.41}$$
It includes the remaining four degrees of freedom from the wino sector and four degrees of freedom from the
higgsino sector. As for the neutralinos, the wino and higgsino components mix via a weak mass term. Because the
chargino mass matrix is real and not symmetric, it can only be diagonalized using two unitary matrices,
$$U^* M V^{-1} = \mathrm{diag}\left(m_{\tilde\chi_j^\pm}\right) \qquad j = 1, 2 \,. \tag{4.42}$$
For the dark matter phenomenology of the neutralino–chargino sector it will turn out that the mass difference
between the lightest neutralino(s) and the lightest chargino is the relevant parameter. The reason is a possible
co-annihilation process as described in Section 3.3.
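The bi-unitary diagonalization of Eq.(4.42) is nothing but a singular value decomposition of the matrix in Eq.(4.41), so a standard SVD routine returns the two chargino masses directly (parameter values again illustrative):

```python
import numpy as np

def chargino_masses(M2, mu, tan_beta, mW=80.4):
    """Singular values of the chargino mass matrix of Eq.(4.41), in GeV."""
    beta = np.arctan(tan_beta)
    M = np.array([
        [M2,                               np.sqrt(2.0) * mW * np.sin(beta)],
        [np.sqrt(2.0) * mW * np.cos(beta), mu],
    ])
    # np.linalg.svd factorizes M = U diag(s) V^dagger with s >= 0,
    # which is exactly the bi-unitary diagonalization of Eq.(4.42)
    _, s, _ = np.linalg.svd(M)
    return np.sort(s)

print(chargino_masses(M2=200.0, mu=500.0, tan_beta=10.0))
```

For M2 ≪ |µ| the lighter chargino is wino-like with a mass close to M2, shifted by the weak off-diagonal entries.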
We can best understand the MSSM dark matter sector in terms of the different SU (2)L representations. The bino
state as the partner of the hypercharge gauge boson is a singlet under SU (2)L . The wino fields with the mass
parameter M2 consist of two neutral degrees of freedom as well as four charged degrees of freedom, one for each
polarization of W ± . Together, the supersymmetric partners of the W boson vector field also form a triplet under
SU (2)L . Finally, each of the two higgsinos arises as the supersymmetric partner of an SU (2)L Higgs doublet. The
neutralino mass matrix in Eq.(4.39) therefore interpolates between singlet, doublet, and triplet states under
SU (2)L .
The most relevant couplings of the neutralinos and charginos we need to consider for our dark matter calculations
are
$$\begin{aligned}
g_{Z\tilde\chi_1^0\tilde\chi_1^0} &= \frac{g}{2c_w}\left(|N_{13}|^2 - |N_{14}|^2\right) \\
g_{h\tilde\chi_1^0\tilde\chi_1^0} &= \left(g' N_{11} - g N_{12}\right)\left(\sin\alpha\, N_{13} + \cos\alpha\, N_{14}\right) \\
g_{A\tilde\chi_1^0\tilde\chi_1^0} &= \left(g' N_{11} - g N_{12}\right)\left(\sin\beta\, N_{13} - \cos\beta\, N_{14}\right) \\
g_{\gamma\tilde\chi_1^+\tilde\chi_1^-} &= e \\
g_{W\tilde\chi_1^0\tilde\chi_1^+} &= g\left(\frac{1}{\sqrt{2}}\, N_{14}^* V_{12} - N_{12}^* V_{11}\right) ,
\end{aligned} \tag{4.43}$$
with e = gsw , s2w ≈ 1/4 and hence c2w ≈ 3/4. The mixing angle α describes the rotation from the up-type and
down-type supersymmetric Higgs bosons into mass eigenstates. In the limit of only one light Higgs boson with a
mass of 126 GeV it is given by the decoupling condition cos(β − α) → 0. For those couplings which contribute
to the (co-)annihilation of neutralino dark matter, the above form means:
– charginos couple to the photon diagonally, like any other charged particle
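The first line of Eq.(4.43) already encodes the cancellation of the Z coupling in the pure higgsino limit which becomes relevant for the annihilation discussion later; a minimal sketch with illustrative mixing-matrix entries:

```python
def g_Z_chi1_chi1(N13, N14, g=0.65, cw2=0.77):
    """Z-neutralino coupling of Eq.(4.43): g/(2 c_w) (|N13|^2 - |N14|^2).
    The numerical values of g and c_w^2 are rough illustrative inputs."""
    return g / (2.0 * cw2 ** 0.5) * (abs(N13) ** 2 - abs(N14) ** 2)

# pure higgsino limit N13 = N14: the two contributions cancel exactly
print(g_Z_chi1_chi1(0.7, 0.7))
# a mixed state keeps a finite Z coupling
print(g_Z_chi1_chi1(0.9, 0.1))
```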
Finally, supersymmetry predicts scalar partners of the quarks and leptons, so-called squarks and sleptons. For the
partners of massless fermions, for example squarks q̃, there exists a q q̃ χ̃0j coupling induced through the gaugino
content of the neutralinos. If this kind of coupling is to contribute to neutralino dark matter annihilation, the
lightest supersymmetric scalar has to be almost mass degenerate with the lightest neutralino. Because squarks are
strongly constrained by LHC searches and because of the pattern of renormalization group running, we usually
assume one of the sleptons to be this lightest state. In addition, because the mixing of the scalar partners of the
left-handed and right-handed fermions into mass eigenstates is driven by the corresponding fermion mass, the most
attractive co-annihilation scenario in the scalar sector is stau–neutralino co-annihilation. However, in these lecture
notes we
will focus on a pure neutralino–chargino dark matter sector and leave the discussion of the
squark–quark–neutralino coupling to Section 7 on LHC searches.
Similar to the previous section we now compute the neutralino annihilation rate, assuming that in the
10 ... 1000 GeV mass range they are thermally produced. For mostly bino dark matter with

$$M_1 \ll M_2, |\mu| \tag{4.44}$$

the annihilation to the observed relic density is a problem. There simply is no relevant 2 → 2 Feynman diagram,
unless there is help from supersymmetric scalars f̃, as shown in Figure 11. If for example light staus appear in the
t-channel of the annihilation rate we find

$$\sigma(\tilde B \tilde B \to f\bar f) \approx \frac{g^4\, m_{\tilde\chi_1^0}^2}{16\pi\, m_{\tilde f}^4} \qquad \text{with} \quad m_{\tilde\chi_1^0} \approx M_1 \ll m_{\tilde f} \,. \tag{4.45}$$
The problem with pure neutralino annihilation is that in the limit of relatively heavy sfermions the annihilation
cross section drops rapidly, leading to a too large predicted bino relic density. Usually, this leads us to rely on stau
co-annihilation for a light bino LSP. Along these lines it is useful to mention that with gravity-mediated
supersymmetry breaking we assume M1 and M2 to be identical at the Planck scale, which given the beta functions
of the hypercharge and the weak interaction turns into the condition M1 ≈ M2 /2 at the weak scale, i.e. light bino
dark matter would be a typical feature in these models.
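The suppression expressed by Eq.(4.45) can be made explicit with a rough numerical sketch; the hypercharge-sized coupling and the mass values below are placeholders of ours, and the thermal target value follows Eq.(3.34):

```python
import math

def sigma_bino(m_chi, m_sf, g=0.35):
    """Order-of-magnitude estimate of Eq.(4.45):
    sigma ~ g^4 m_chi^2 / (16 pi m_sf^4), masses in GeV, result in 1/GeV^2."""
    return g ** 4 * m_chi ** 2 / (16.0 * math.pi * m_sf ** 4)

TARGET = 1.7e-9  # thermal-relic cross section in 1/GeV^2

# TeV-scale sfermions: annihilation undershoots, the relic density overshoots
print(sigma_bino(100.0, 1000.0) / TARGET)
# nearly degenerate light sfermions can bring the rate back into range
print(sigma_bino(100.0, 200.0) / TARGET)
```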
$$\sigma_{\chi\chi} \approx \frac{1}{16\pi}\,\frac{1}{m_{\tilde\chi_1^0}^2\, v}\,\frac{g^4 s_w^4}{c_w^4} \qquad\Rightarrow\qquad \sigma_{\chi\chi} \propto \frac{1}{m_{\tilde\chi_1^0}^2} \tag{4.48}$$
The scaling with the mass of the dark matter agent does not follow our original postulate for the WIMP miracle in
Eq.(3.3), which was σχχ ∝ m²χ̃01 /m⁴W . If we only rely on the direct dark matter annihilation, the observed relic
density translates into a comparably light neutralino mass,
$$\langle \sigma v \rangle = \frac{g^4 s_w^4}{16\pi c_w^4\, m_{\tilde\chi_1^0}^2} \approx \frac{0.74}{450\, m_{\tilde\chi_1^0}^2} \overset{\text{Eq.(3.34)}}{=} 1.7 \cdot 10^{-9}~\frac{1}{\text{GeV}^2} \qquad\Leftrightarrow\qquad m_{\tilde\chi_1^0} \approx 560~\text{GeV} \,. \tag{4.49}$$
Figure 11: Sample Feynman diagrams for the annihilation of supersymmetric binos (left), winos (center), and
higgsinos (right).
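As a rough cross-check of the scaling in Eq.(4.49), we can invert the relation for the neutralino mass; with the textbook inputs g ≈ 0.65 and s²w ≈ 0.23 assumed here, the result lands in the same few-hundred-GeV ballpark as the quoted 560 GeV, which uses slightly different numerical inputs:

```python
import math

def mass_from_relic(g=0.65, sw2=0.23, target=1.7e-9):
    """Invert <sigma v> = g^4 s_w^4 / (16 pi c_w^4 m^2) = target for m in GeV."""
    cw2 = 1.0 - sw2
    prefactor = g ** 4 * sw2 ** 2 / (16.0 * math.pi * cw2 ** 2)
    return math.sqrt(prefactor / target)

print(mass_from_relic())  # a few hundred GeV
```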
However, this estimate is numerically poor. The reason is that in contrast to this lightest neutralino, the
co-annihilating chargino can annihilate through a photon s-channel diagram into charged Standard Model
fermions,
$$\sigma(\tilde\chi_1^+ \tilde\chi_1^- \to \gamma^* \to f\bar f) \approx \sum_f \frac{N_c\, e^4}{16\pi\, m_{\tilde\chi_1^\pm}^2} = \sum_f \frac{N_c\, g^4 s_w^4}{16\pi\, m_{\tilde\chi_1^\pm}^2} \,. \tag{4.50}$$
For light quarks alone the color factor combined with the sum over flavors adds a factor 5 × 3 = 15 to the
annihilation rate. In addition, for χ̃+1 χ̃−1 annihilation we need to take into account the Sommerfeld enhancement
through photon exchange between the slowly moving incoming charginos, as derived in Section 3.5. This gives us
the correct values
$$\Omega_{\tilde W} h^2 \approx 0.12 \left(\frac{m_{\tilde\chi_1^0}}{2.1~\text{TeV}}\right)^2 \;\overset{\text{Sommerfeld}}{\longrightarrow}\; 0.12 \left(\frac{m_{\tilde\chi_1^0}}{2.6~\text{TeV}}\right)^2 \,. \tag{4.51}$$
In Figure 12 and in the left panel of Figure 13 this wino LSP mass range appears as a horizontal plateau in M2 ,
with and without the Sommerfeld enhancement. In the right panel of Figure 13 we show the mass difference
between the lightest neutralino and the lighter chargino. Typical values for a wino-LSP mass splitting are around
∆m = 150 MeV, sensitive to loop corrections to the mass matrices shown in Eq.(4.39) and Eq.(4.41).
Again from Eq.(4.43) we see that in addition to the t-channel chargino exchange, annihilation through s-channel
Higgs states is possible. Again, the corresponding Feynman diagrams are shown in Figure 11. At least in the pure
higgsino limit with Ni3 = Ni4 the two contributions to the H̃ H̃Z 0 coupling cancel, limiting the impact of
s-channel Z-mediated annihilation. Still, these channels make the direct annihilation of higgsino dark matter
significantly more efficient than for wino dark matter. The Sommerfeld enhancement plays a sub-leading role,
because it mostly affects the less relevant chargino co-annihilation,
$$\Omega_{\tilde H} h^2 \approx 0.12 \left(\frac{m_{\tilde\chi_1^0}}{1.13~\text{TeV}}\right)^2 \;\overset{\text{Sommerfeld}}{\longrightarrow}\; 0.12 \left(\frac{m_{\tilde\chi_1^0}}{1.14~\text{TeV}}\right)^2 \,. \tag{4.53}$$
[Figure 12: three-dimensional relic surface in M1 , M2 , µ (all in TeV, tan β = 10), color-coded by the LSP mass mχ̃01 = 0.1 ... 2.5 TeV; gray: without Sommerfeld enhancement.]
Figure 12: Combinations of neutralino mass parameters M1 , M2 , µ that produce the correct relic abundance, ac-
counting for Sommerfeld-enhancement, along with the LSP mass. The relic surface without Sommerfeld enhance-
ment is shown in gray. Figure from Ref. [12].
The higgsino LSP appears in Figure 12 as a vertical plateau in µ. The corresponding mass difference between the
lightest neutralino and chargino is much larger than for the wino LSP; it now ranges around a GeV.
Also in Figure 12 we see that a dark matter neutralino in the MSSM can be much lighter than the pure wino and
higgsino results in Eqs.(4.51) and (4.53) suggest. For a strongly mixed neutralino the scaling of the annihilation
cross section with the neutralino mass changes, and poles in the s-channels appear. In the left panel of Figure 13
we add the leading Standard Model final state of the dark matter annihilation process, corresponding to the distinct
parameter regions
– the light Higgs funnel region with 2mχ̃01 = mh . The leading contribution to dark matter annihilation is the
decay to b quarks. As a consequence of the tiny Higgs width the neutralino mass has to be finely adjusted.
According to Eq.(4.43) the neutralinos couple to the Higgs through gaugino-higgsino mixing. A small,
O(10%) higgsino component can then give the correct relic density. This very narrow channel with a very
light neutralino is not represented in Figure 13. Decays of the Higgs mediator to lighter fermions, like tau
leptons, are suppressed by their smaller Yukawa coupling and a color factor;
– the Z-mediated annihilation with 2mχ̃01 ≈ mZ , with a final state mostly consisting of light-flavor jets.
The corresponding neutralino coupling requires a sizeable higgsino content. Again, this finely tuned
low-mass channel is not shown in Figure 13;
– s-channel annihilation through the higgsino content with some bino admixture also occurs via the heavy
Higgs bosons A0 , H 0 , and H ± with their large widths. This region extends to large neutralino masses,
provided the Higgs masses follow the neutralino mass. The main decay channels are bb̄, tt̄, and tb̄. The
massive gauge bosons typically decouple from the heavy Higgs sector;
– with a small enough mass splitting between the lightest neutralino and lightest chargino, co-annihilation in
the neutralino–chargino sector becomes important. For a higgsino-bino state there appears a large
annihilation rate for χ̃01 χ̃01 → W + W − with a t-channel chargino exchange. The wino-bino state will mostly
co-annihilate through χ̃01 χ̃±1 → W ± → q q̄ ′ , but also contribute to the W + W − final state. Finally, as shown in
Figure 13 the co-annihilation of two charginos can be efficient to reach the observed relic density, leading to
a W + W + final state;
– one channel which is absent from our discussion of purely neutralino and chargino dark matter appears for a
mass splitting between the scalar partner of the tau lepton, the stau, and the lightest neutralino of a few
[Figure 13: two panels in the M1 , M2 , µ space (all in TeV, tan β = 10). Left legend: leading annihilation channels tt̄, bb̄ | ud̄, cs̄ | tb̄ | W + W − | W + W + . Right legend: mass splitting mχ̃±1 − mχ̃01 from below 0.15 GeV to above 40 GeV.]
Figure 13: Left: combinations of neutralino mass parameters M1 , M2 , µ that produce the correct relic abundance,
not accounting for Sommerfeld-enhancement, along with the leading annihilation product. Parameters excluded
by LEP are occluded with a white or black box. Right: mass splitting between the lightest chargino and lightest
neutralino. Parameters excluded by LEP are occluded with a white or black box. Figures from Ref. [13].
per cent or less the two states can efficiently co-annihilate. In the scalar quark sector the same mechanism
exists for the lightest top squark, but it leads to issues with the predicted light Higgs mass of 126 GeV.
In the right panel of Figure 13 we show the mass difference between the lightest chargino and the lightest
neutralino. In all regions where chargino co-annihilation is required, this mass splitting is small. From the form of
the mass matrices shown in Eq.(4.39) and Eq.(4.41) this will be the case when either M2 or µ are the lightest mass
parameters. Because of the light higgsino mass parameter, the two higgsino states in the neutralino sector lead to
an additional level separation between the two lightest neutralinos, so the degeneracy of the lightest chargino and
the lightest neutralino masses will be less precise here. For pure winos the mass difference between the lightest chargino and
the lightest neutralino can be small enough that loop corrections matter and the chargino becomes long-lived.
Note that all the above listed channels correspond to ways of enhancing the dark matter annihilation cross section,
to allow for light dark matter closer to the Standard Model masses. In that sense they indicate a fine tuning around
the generic scaling σχχ ∝ 1/m²χ̃01 , which in the MSSM predicts TeV-scale higgsinos and even heavier winos.
4.4 Effective field theory

Let us start with dark matter annihilation mediated by a heavy pseudoscalar A in the MSSM, as illustrated in the
right panel of Figure 11. The Aχ̃01 χ̃01 coupling is defined in Eq.(4.43). If we assume the heavy Higgs to decay to
two bottom quarks, the 2 → 2 annihilation channel is

$$\tilde\chi_1^0 \tilde\chi_1^0 \to A \to b\bar b \,. \tag{4.54}$$
This description of dark matter annihilation includes two different mass scales, the dark matter mass mχ̃01 and a
decoupled mediator mass mA ≫ mχ̃01 . The matrix element for the dark matter annihilation process includes the
A-propagator. From Section 3.2 we know that for WIMP annihilation the velocity of the incoming particles is
small, v ≪ 1. If the energy of the scattering process, which determines the momentum flowing through the
A-propagator, is much smaller than the A-mass, we can approximate the intermediate propagator as
$$\frac{1}{q^2 - m_A^2} \to -\frac{1}{m_A^2} \qquad\Leftrightarrow\qquad \sigma(\tilde\chi_1^0 \tilde\chi_1^0 \to b\bar b) \propto g_{A\tilde\chi_1^0\tilde\chi_1^0}^2\, g_{Abb}^2\, \frac{m_b^2}{m_A^4} \,. \tag{4.55}$$
The fact that the propagator of the heavy scalar A does not include a momentum dependence is equivalent to
removing the kinetic term of the A-field from the Lagrangian. We remove the heavy scalar field from the
propagating degrees of freedom of our theory. The only actual particles we can use in our description of the
annihilation process of Eq.(4.54) are the dark matter fermions χ̃01 and the bottom quarks. Between them we
observe a four-fermion interaction.
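The quality of the propagator approximation behind Eq.(4.55) is easy to quantify: far below the pole the leading term of the geometric series is already accurate to a few per cent, while close to the pole no finite order helps. A sketch with illustrative masses:

```python
def propagator(q2, mA2):
    """Full s-channel propagator factor 1/(q^2 - mA^2)."""
    return 1.0 / (q2 - mA2)

def eft_propagator(q2, mA2, order=0):
    """Truncated expansion -1/mA^2 sum_n (q^2/mA^2)^n, valid for q^2 << mA^2."""
    return -sum((q2 / mA2) ** n for n in range(order + 1)) / mA2

mA2 = 1000.0 ** 2            # heavy pseudoscalar, mass in GeV
q2_soft = (2 * 100.0) ** 2   # two 100 GeV WIMPs annihilating near threshold
err_far = abs(eft_propagator(q2_soft, mA2) / propagator(q2_soft, mA2) - 1.0)
q2_pole = 0.98 * mA2         # just below the resonance
err_pole = abs(eft_propagator(q2_pole, mA2, order=20) / propagator(q2_pole, mA2) - 1.0)
print(err_far, err_pole)
```

This is the numerical version of the statement below that an EFT cannot describe an on-shell resonance at any finite operator dimension.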
On the Lagrangian level, such a four-fermion interaction mediated by a non-propagating state is given by an
operator of the type

$$\mathcal{O} = \left(\bar\psi_{\tilde\chi_1^0} \Gamma^\mu \psi_{\tilde\chi_1^0}\right) \left(\bar\psi_b \Gamma_\mu \psi_b\right) \,, \tag{4.56}$$
where Γµ = {1, γ5 , γµ , γµ γ5 , [γµ , γν ]} represents some kind of Lorentz structure. We know that a Lagrangian has
mass dimension four, and a fermion spinor has mass dimension 3/2. The four-fermion interaction then has mass
dimension six, and has to be accompanied by a mass-dependent prefactor,
$$\mathcal{L} \supset \frac{g_{\text{ann}}}{\Lambda^2}\; \bar\psi_{\tilde\chi_1^0} \Gamma^\mu \psi_{\tilde\chi_1^0}\; \bar\psi_b \Gamma_\mu \psi_b \,. \tag{4.57}$$
Given this Lagrangian, the question arises if we want to use this interaction as a simplified description of the
MSSM annihilation process or view it as a more general structure without a known ultraviolet completion. For
example for the muon decay we nowadays know that the suppression is given by the W -mass of the weak
interaction. Using our derivation of Eq.(4.57) we are inspired by the MSSM annihilation channel through a heavy
pseudoscalar. In that case the scale Λ should be given by the mass of the lightest particle we integrate out. This
defines, modulo order-one factors, the matching condition

$$\frac{g_{\text{ann}}}{\Lambda^2} = \frac{g_{A\tilde\chi_1^0\tilde\chi_1^0}\; g_{Abb}}{m_A^2} \,. \tag{4.58}$$
From Eq.(4.58) we see that all predictions of the effective Lagrangian are invariant under a simultaneous scaling
of the new physics scale Λ and the underlying coupling gann . Moreover, we know that the annihilation process
χ̃01 χ̃01 → f f¯ can be mediated by a scalar in the t-channel. In the limit mf ≪ mχ̃01 ≪ mf̃ this defines essentially
the same four-fermion interaction as given in Eq.(4.57).
Indeed, the effective Lagrangian is more general than its interpretation in terms of one half-decoupled model. This
suggests regarding the Lagrangian term of Eq.(4.57) as the fundamental description of dark matter, not as an
approximation to a full model. For excellent reasons we usually prefer renormalizable Lagrangians, only including
operators with mass dimension four or less. Nevertheless, we can extend this approach to examples including all
operators up to mass dimension six. This allows us to describe all kinds of four-fermion interactions. From
constructing the Standard Model Lagrangian we know that given a set of particles we need selection rules to
choose which of the possible operators make it into our Lagrangian. Those rules are given by the symmetries of
the Lagrangian, local symmetries as well as global symmetries, gauge symmetries as well as accidental
symmetries. This way we define a general Lagrangian of the kind
$$\mathcal{L} = \mathcal{L}_{\text{SM}} + \sum_j \frac{c_j}{\Lambda^{n-4}}\, \mathcal{O}_j \,, \tag{4.59}$$
where the operators Oj with mass dimension n are organized by their dimensionality. The cj are couplings of the
kind shown in Eq.(4.57), called Wilson coefficients, and Λ is the new physics scale.
The one aspect which is crucial for any effective field theory or EFT analysis is the choice of operators
contributing to a Lagrangian. Like for any respectable theory we have to assume that any interaction or operator
which is not forbidden by a symmetry will be generated, either at tree level or at the quantum level. In practice,
this means that any analysis in the EFT framework will have to include a large number of operators. Limits on
individual Wilson coefficients have to be derived by marginalizing over all other Wilson coefficients using
Bayesian integration (or a frequentist profile likelihood).
From the structure of the Lagrangian we know that there are several ways to generate a higher dimensionality for
additional operators,
– external particles with field dimensions adding to more than four. The four-fermion interaction in Eq.(4.57)
is one example;
– an energy scale of the Lagrangian normalized to the suppression scale, leading to corrections to
lower-dimensional operators of the kind v 2 /Λ2 ;
– a derivative in the Lagrangian, which after Fourier transformation becomes a four-momentum in the
Feynman rule. This gives corrections to lower-dimensional operators of the kind p2 /Λ2 .
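The power counting behind Eq.(4.59) reduces to adding up field dimensions; the small dictionary below reproduces the operator dimensions quoted in the text (four-fermion: six; χ̃01 χ̃01 W W via the gauge boson fields: five; via the field strengths: seven):

```python
# mass dimensions of fields and derivatives in four space-time dimensions
DIM = {"fermion": 1.5, "scalar": 1.0, "vector": 1.0,
       "field_strength": 2.0, "derivative": 1.0}

def operator_dimension(*content):
    """Mass dimension of an operator from its field content; the Wilson
    coefficient of a dimension-n operator scales as 1/Lambda^(n-4)."""
    return sum(DIM[c] for c in content)

print(operator_dimension("fermion", "fermion", "fermion", "fermion"))
print(operator_dimension("fermion", "fermion", "vector", "vector"))
print(operator_dimension("fermion", "fermion", "field_strength", "field_strength"))
```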
For dark matter annihilation we usually rely on dimension-6 operators of the first kind. Another example would be
a χ̃01 χ̃01 W W interaction, which requires a dimension-5 operator if we couple to the gauge boson fields and a
dimension-7 operator if we couple to the gauge field strengths. The limitations of an EFT treatment are obvious
when we experimentally observe poles, for example the A-resonance in the annihilation process of Eq.(4.54). In
the presence of such a resonance it does not help to add higher and higher dimensions — this is similar to
Taylor-expanding a pole at a finite energy around zero. Whenever there is a new particle which can be produced
on-shell we have to add it to the effective Lagrangian as a new, propagating degree of freedom. Another limiting
aspect is most obvious from the third kind of operators: if the correction has the form p²/Λ² , and the available
energy for the process allows for p² ≳ Λ² , higher-dimensional operators are no longer suppressed. However, this
kind of argument has to be worked out for specific observables and models to decide whether an EFT
approximation is justified.
Finally, we can estimate what kind of effective theory of dark matter can describe the observed relic density,
Ωχ h2 ≈ 0.12. As usual, we assume that there is one thermally produced dark matter candidate χ. Two mass scales
given by the propagating dark matter agent and by some non-propagating mediator govern our dark matter model.
If a dark matter EFT should ever work we need to require that the dark matter mass is significantly smaller than the
mediator mass,

$$m_\chi \ll m_{\text{med}} \,. \tag{4.60}$$
In terms of one coupling constant g governing the annihilation process we can use the usual estimate of the WIMP
annihilation rate, similar to Eq.(3.3),
Going back to our two models, the Higgs portal and the MSSM neutralino, it is less clear if an EFT description of
dark matter annihilation works well. In part of the allowed parameter space, dark matter annihilation proceeds
through a light Higgs in the s-channel on the pole. Here the mediator is definitely a propagating degree of
freedom. For neutralino dark matter we discuss t-channel chargino-mediated annihilation, where mχ̃±1 ≈ mχ̃01 .
Again, the chargino is clearly propagating at the relevant energies.
Finally, to fully rely on a dark matter EFT we need to make sure that all relevant processes are correctly described.
For our WIMP models this includes the annihilation predicting the correct relic density, indirect detection and
possibly the Fermi galactic center excess introduced in Section 5, the limits from direct detection discussed in
Section 6, and the collider searches of Section 7. We will comment on the related challenges in the corresponding
sections.
5 Indirect searches
There exist several ways of searching for dark matter in earth-bound or satellite experiments. All of them rely on
the interaction of the dark matter particle with matter, which means they only work if dark matter interacts more
than just gravitationally. This is the main assumption of these lecture notes, and it is motivated by
the fact that the weak gauge coupling and the weak mass scale happen to predict roughly the correct relic density,
as described in Section 3.1.
The idea behind indirect searches for WIMPs is that the generally small current dark matter density is significantly
enhanced wherever there is a clump of gravitational matter, as for example in the sun or in the center of the galaxy.
In these regions dark matter should efficiently annihilate even today, giving us either photons or pairs of particles
and anti-particles coming from there. Particles like electrons or protons are not rare, but anti-particles in the
appropriate energy range should be detectable. The key ingredient to the calculation of these spectra is the fact that
dark matter particles move only very slowly relative to galactic objects. This means we need to compute all
processes with incoming dark matter particles essentially at rest. This approximation is even better than at the time
of the dark matter freeze-out discussed in Section 3.2.
Indirect detection experiments search for many different particles which are produced in dark matter annihilation.
First, this might be the particles that dark matter directly annihilated into, for example in a 2 → 2 scattering
process. This includes protons and anti-protons if dark matter annihilates into quarks. Second, we might see decay
products of these particles. An example for such signatures are neutrinos. Examples for dark matter annihilation
processes are
χ̃01 χ̃01 → `+ `−
χ̃01 χ̃01 → q q̄ → pp̄ + X
χ̃01 χ̃01 → τ + τ − , W + W − , bb̄ + X → `+ `− , pp̄ + X ... (5.1)
The final state particles are stable leptons or protons propagating large distances in the Universe. While the leptons
or protons can come from many sources, the anti-particles appear much less frequently. One key experimental task
in many indirect dark matter searches is therefore the ability to measure the charge of a lepton, typically with the
help of a magnetic field. For example, we can study the energy dependence of the antiproton–proton ratio or the
[Figure 14: ρDM in GeV/cm3 as a function of r = 0.01 ... 10 kpc for the Einasto, NFW, and Burkert profiles, with relative J factors 2, 1, and 1/70, respectively.]
Figure 14: Dark matter galactic halo profiles, including standard Einasto and NFW profiles along with a Burkert
profile with a 3 kpc core. J factors are obtained assuming a spherical dark matter distribution and integrating over
the radius from the galactic center from r ≃ 0.05 to 0.15 kpc. J factors are normalized so that J(ρNFW ) = 1. Figure
from Ref. [12].
positron–electron ratio as a function of the energy. The dark matter signature is either a line or a shoulder in the
spectrum, with a cutoff at the dark matter mass.
The main astrophysical background comes from pulsars, which produce for example electron–positron pairs of a
given energy. There exists a standard tool to simulate the propagation of all kinds of particles through the Universe,
called GALPROP. For example PAMELA has seen such a shoulder with a positron flux pointing to a very
large WIMP annihilation rate. An interpretation in terms of dark matter is inconclusive, because pulsars could
provide an alternative explanation and the excess is in tension with PLANCK results from CMB measurements, as
discussed in Section 3.4.
In these lecture notes we will focus on photons from dark matter annihilation, which we can search for in gamma
ray surveys over a wide range of energies. They also will follow one of two kinematic patterns: if they occur in the
direct annihilation process, they will appear as a mono-energetic line in the spectrum
χχ → γγ with Eγ ≈ mχ , (5.3)
for any weakly interacting dark matter particle χ. This is because the massive dark matter particles are essentially
at rest when colliding. If the photons are radiated off charged particles or appear in pion decays π 0 → γγ
χχ → τ + τ − , bb̄, W + W − → γ + · · · , (5.4)
they will follow a fragmentation pattern. We can either compute this photon spectrum or rely on precise
measurements from the LEP experiments at CERN (see Section 7.1 for a more detailed discussion of the LEP
experiments). This photon spectrum will constrain the kind of dark matter annihilation products we should
consider, as well as the mass of the dark matter particle.
The energy dependence of the photon flux inside a solid angle ∆Ω is given by

$$\frac{d\Phi_\gamma}{dE_\gamma} = \frac{\langle\sigma v\rangle}{8\pi\, m_\chi^2}\,\frac{dN_\gamma}{dE_\gamma} \int_{\Delta\Omega} d\Omega \int_{\text{line of sight}} dl\; \rho_\chi^2(l) \,, \tag{5.5}$$
where Eγ is the photon energy, hσvi is the usual velocity-averaged annihilation cross-section, Nγ is the number of
photons produced per annihilation, and l is the distance from the observer to the actual annihilation event (line of
sight). The photon flux depends on the dark matter density squared because it arises from the annihilation of two
dark matter particles. A steeper dark matter halo profile, i.e. the dark matter density increasing more rapidly
towards the center of the galaxy, results in a more stringent bound on dark matter annihilation. The key problem in
the interpretation of indirect search results in terms of dark matter is that we cannot measure the dark matter
distributions ρχ (l) for example in our galaxy directly. Instead, we have to rely on numerical simulations of the
dark matter profile, which introduce a sizeable parametric or theory uncertainty in any dark-matter related result.
Note that the dark matter profile is not some kind of multi-parameter input which we have the freedom to assume
freely. It is a prediction of numerical dark matter simulations with associated error bars. Not all papers account for
this uncertainty properly. In contrast, the constraints derived from CMB anisotropies discussed in Section 1.4 are
largely free of astrophysical uncertainties.
There exist three standard density profiles; the steep Navarro-Frenk-White (NFW) profile is given by
$$\rho_{\text{NFW}}(r) = \frac{\rho}{\left(\dfrac{r}{R}\right)^{\gamma} \left(1+\dfrac{r}{R}\right)^{3-\gamma}} \;\overset{\gamma=1}{=}\; \frac{\rho}{\dfrac{r}{R} \left(1+\dfrac{r}{R}\right)^{2}} \,, \tag{5.6}$$
where r is the distance from the galactic center. Typical parameters are a characteristic scale R = 20 kpc and a
solar position dark matter density ρ = 0.4 GeV/cm3 at r = 8.5 kpc. In this form we can easily read off the
scaling of the dark matter density in the center of the galaxy, i.e. r ≪ R; there we find ρNFW ∝ r−γ . The second
steepest is the exponential Einasto profile,
$$\rho_{\text{Einasto}}(r) = \rho\, \exp\left[-\frac{2}{\alpha}\left( \left(\frac{r}{R}\right)^{\alpha} - 1 \right)\right] \,, \tag{5.7}$$
with α = 0.17 and R = 20 kpc. It fits micro-lensing and star velocity data best. Third is the Burkert profile with a
constant density inside a radius R,
$$\rho_{\text{Burkert}}(r) = \frac{\rho}{\left(1+\dfrac{r}{R}\right)\left(1+\dfrac{r^2}{R^2}\right)} \,, \tag{5.8}$$
where we assume R = 3 kpc. Assuming a large core results in very diffuse dark matter at the galactic center, and
therefore yields the weakest bound on neutralino self annihilation. Instead assuming R = 0.1 kpc only alters the
dark matter annihilation constraints by an order-one factor. We show the three profiles in Figure 14 and observe
that the difference between the Einasto and the NFW parametrizations is marginal, while the Burkert profile has a
very strongly reduced dark matter density in the center of the galaxy. One sobering result of this comparison is that
whatever theoretical considerations lie behind the NFW and Einasto profiles, once their parameters are fit to data
the possibly different underlying arguments play hardly any role. The impact of different dark matter halo profiles
on the gamma ray flux is conveniently parameterized by the factor
$$J \propto \int_{\Delta\Omega} d\Omega \int_{\text{line of sight}} dz\; \rho_\chi^2(z) \qquad\text{with}\qquad J(\rho_{\text{NFW}}) \equiv 1 \,. \tag{5.9}$$
Also in Figure 14 we quote the J factors integrated over the approximate HESS galactic center gamma ray search
range, r = 0.05 ... 0.15 kpc. As expected, the Burkert profile predicts a photon flux lower by almost two orders of
magnitude. In a quantitative analysis of dark matter signals this difference should be included as a theory error or a
parametric error, similar to for example parton densities or the strong coupling in LHC searches.
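The hierarchy of J factors in Figure 14 can be reproduced qualitatively with a strongly simplified radial integral over the HESS search region, standing in for the full line-of-sight integral of Eq.(5.9); the profile parameters follow Eqs.(5.6)-(5.8), while the integration shortcut is our own simplification:

```python
import numpy as np

R_NFW = R_EIN = 20.0   # scale radii in kpc
R_BUR = 3.0
ALPHA = 0.17
RHO_SUN, R_SUN = 0.4, 8.5   # GeV/cm^3 at the solar radius in kpc

def nfw(r):
    return 1.0 / ((r / R_NFW) * (1.0 + r / R_NFW) ** 2)

def einasto(r):
    return np.exp(-2.0 / ALPHA * ((r / R_EIN) ** ALPHA - 1.0))

def burkert(r):
    return 1.0 / ((1.0 + r / R_BUR) * (1.0 + (r / R_BUR) ** 2))

def rho(profile, r):
    """Normalize each profile to rho(8.5 kpc) = 0.4 GeV/cm^3."""
    return RHO_SUN * profile(r) / profile(R_SUN)

def j_factor(profile, r_min=0.05, r_max=0.15, n=2000):
    """Crude J factor: trapezoidal radial integral of rho^2 over the
    HESS search region near the galactic center."""
    r = np.linspace(r_min, r_max, n)
    f = rho(profile, r) ** 2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

J = {name: j_factor(p) for name, p in
     [("NFW", nfw), ("Einasto", einasto), ("Burkert", burkert)]}
for name, value in J.items():
    print(name, value / J["NFW"])   # normalized so that J(NFW) = 1
```

Even this crude estimate reproduces the ordering Einasto > NFW ≫ Burkert of Figure 14, with the cored Burkert profile suppressed by more than an order of magnitude.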
While at any given time there is usually a sizeable set of experimental anomalies discussed in the literature, we
will focus on one of them: the photon excess in the center of our galaxy, observed by Fermi, but discovered in
their data by several non-Fermi groups. The excess is shown in Figure 15; it covers a wide photon energy range
and clearly does not form a line. The error bars refer to the interstellar emission model, statistics, photon
fragmentation, and instrumental systematics. Note that the statistical uncertainties are dominated not by the
[Figure 15: photon spectrum dN/dE ∆E as a function of Eγ = 1 ... 100 GeV, with error bars from modelling, statistics, fragmentation, and systematics.]
Figure 15: Excess photon spectrum of the Fermi galactic center excess. Figure from Ref. [15], including the original
data and error estimates from Ref. [16].
number of signal events, but by the statistical uncertainty of the subtracted background events. The fact that
uncertainties on photon fragmentation, meaning photon radiation off other Standard Model particles, are included
in the analysis indicates that for an explanation we resort to photon radiation off dark matter annihilation products,
Eq.(5.4). This allows us to link the observed photon spectrum to dark matter annihilation, where the photon
radiation off the final state particles is known very well from many collider studies. Two aspects of Figure 15 have
to be matched by any explanation. First, the total photon rate has to correspond to the dark matter annihilation rate.
It turns out that the velocity-averaged annihilation rate has to be in the same range as the rate required for the
observed relic density, as quoted in Eq.(5.11).
For each of these annihilation channels the question arises if we can also generate a sizeable dark matter
annihilation rate at the center of the galaxy today, while also predicting the correct relic density Ωχ h2 .
Similar to our calculation of the relic density, we will first show what range of annihilation cross sections from the
galactic center can be explained by Higgs portal dark matter. Because the Fermi data prefers a light dark matter
[Figure 16 axes: ⟨σv⟩/A [cm³ s⁻¹] versus mχ [GeV], for the channels τ+τ−, q̄q, c̄c, b̄b, gg, W+W−, ZZ, hh, tt̄]
Figure 16: Preferred dark matter masses and cross sections for different annihilation channels [18]. Figure from
Ref.[17].
particle we will focus on the two velocity-weighted cross sections accounting for the observed relic density and for
the galactic center excess around the Higgs pole mS = mH/2. First, we determine how large an annihilation cross
section in the galactic center we can achieve. The typical cross sections given in Eq.(5.11) can be explained by
mS = 220 GeV and λ3 = 1/10 as well as a more finely tuned mS = mH /2 = 63 GeV with λ3 ≈ 10−3 , as shown
in Figure 9.
We can for example assume that the Fermi excess is due to on-shell Higgs-mediated annihilation, while the
observed relic density does not probe the Higgs pole. The reason we can separate these two annihilation signals
based on the same Feynman diagram this way is that the Higgs width is smaller than the typical velocities,
ΓH/mH ≪ v. We start with the general annihilation rate of a dark matter scalar, Eq.(4.10), and express it
including the leading relative velocity dependence from Eq.(3.19),
s = 4mS² + mS² v² = 4mS² ( 1 + v²/4 ) . (5.12)
For the WIMP velocity at the point of dark matter decoupling in the early universe we find roughly

xdec := mS/Tdec = 28 (Eq.(3.8)) ⇔ Tdec ≈ mS/28 = (mS/2) vann² ⇔ vann² = 1/14 . (5.13)
Today the Universe is colder, and the WIMP velocity is strongly red-shifted. Typical galactic velocities today are

v0 ≈ 2.3 · 10⁵ m/s · (1/c) ≈ 1/1300 ≪ vann . (5.14)
This hierarchy in typical velocities between thermal dark matter production in the early universe and dark
matter annihilation today is what will drive our arguments below.
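As a quick numeric cross-check of Eqs.(5.13) and (5.14), in natural units with c = 1:

```python
import math

# Eq.(5.13): WIMP velocity at decoupling, from T_dec = m_S/28 = (m_S/2) v_ann^2
v_ann = math.sqrt(1 / 14)            # ~ 0.27

# Eq.(5.14): typical galactic velocity today, 230 km/s in units of c
v_0 = 2.3e5 / 2.998e8                # ~ 1/1300

print(f"v_ann = {v_ann:.3f}")
print(f"v_0 = 1/{1 / v_0:.0f}")
print(f"v_ann/v_0 = {v_ann / v_0:.0f}")   # the hierarchy driving the argument
```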
Assuming only mb² ≪ s, the general form of the scalar dark matter annihilation rate is

σv |SS→bb̄ = (Nc/2π) λ3² mb² · 1/(mS √s) · s / [ (s − mH²)² + mH² ΓH² ]

= ( 1 + v²/8 + O(v⁴) ) (Nc/2π) λ3² mb² · 1/(2mS²) · 4mS² / [ (4mS² − mH² + mS² v²)² + mH² ΓH² ]

= ( 1 + v²/8 ) (Nc/2π) λ3² mb² · 1/(2mS²) · 4mS² / [ (4mS² − mH²)² + 2(4mS² − mH²) mS² v² + mH² ΓH² ] + O(v⁴) . (5.15)
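The velocity expansion in Eq.(5.15) can be checked numerically; with an illustrative parameter point away from the pole (an assumption, any such point works), the truncation error of the expanded rate scales like v⁴:

```python
import math

# illustrative parameter point away from the pole (an assumption)
m_S, m_H, Gamma_H = 100.0, 125.0, 4.1e-3   # GeV

def sigv_exact(v):
    """First line of Eq.(5.15), without the constant prefactor."""
    s = 4 * m_S**2 * (1 + v**2 / 4)
    return s / (m_S * math.sqrt(s) * ((s - m_H**2)**2 + m_H**2 * Gamma_H**2))

def sigv_expanded(v):
    """Last line of Eq.(5.15): the expansion to O(v^2)."""
    D = ((4 * m_S**2 - m_H**2)**2
         + 2 * (4 * m_S**2 - m_H**2) * m_S**2 * v**2
         + m_H**2 * Gamma_H**2)
    return (1 + v**2 / 8) * 2 / D

def err(v):
    return abs(sigv_exact(v) - sigv_expanded(v)) / sigv_exact(v)

print(err(0.02) / err(0.01))   # ~ 16, i.e. the truncation error scales like v^4
```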
The typical velocity of the dark matter states only gives a small correction for scalar, s-wave annihilation. It
includes two aspects: first, an over-all reduction of the annihilation cross section for finite velocity v > 0, and
second a combined cutoff of the Breit-Wigner propagator,
max [ 2(4mS² − mH²) mS² v² , mH² ΓH² ] = mS⁴ max [ 8v² ( 1 − mH²/(4mS²) ) , 16 · 10⁻¹⁰ ] . (5.16)
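To see which term in Eq.(5.16) provides the cutoff, we can evaluate both for the two typical velocities; the mass values below are illustrative:

```python
# Eq.(5.16): the Breit-Wigner propagator is cut off by the larger of a
# velocity term 8 v^2 |1 - m_H^2/(4 m_S^2)| and the width term 16e-10
m_H = 125.0
width_term = 16e-10

def velocity_term(v, m_S):
    return 8 * v**2 * abs(1 - m_H**2 / (4 * m_S**2))

# at decoupling the velocity term wins by orders of magnitude ...
print(velocity_term((1 / 14)**0.5, 62.6) / width_term)
# ... while today the width term competes already ~0.03 GeV off the pole
print(velocity_term(1 / 1300, 62.53) / width_term)
```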
Close to but not on the on-shell pole mS = mH/2 the modification of the Breit-Wigner propagator can be large
even for small velocities, while the rate reduction can clearly not account for a large boost factor describing the
galactic center excess. We therefore ignore the correction factor (1 + v 2 /8) when averaging the velocity-weighted
cross section over the velocity spectrum. If, for no good reason, we assume a narrow Gaussian velocity
distribution centered around v̄ we can approximate Eq.(5.15) as [19]
⟨σv⟩ |SS→bb̄ ≈ (Nc/2π) λ3² mb² · 1/(2mS²) · 4mS² / [ (4mS² − mH² + ξ mS² v̄²)² + 4mS² ΓH² ] with a fitted ξ ≈ 2√2 . (5.17)
This modified on-shell pole condition shifts the required dark matter mass slightly below half the Higgs mass,
2mS ≲ mH. The size of this shift depends on the slowly dropping velocity, first at the time of dark matter
decoupling, v̄ ≡ vann, and then today, v̄ ≡ v0 ≪ vann. This means that during the evolution of the Universe the
Breit-Wigner propagator in Eq.(5.17) is always probed above its pole, probing the actual pole only today.
We first compute the Breit-Wigner suppression of ⟨σv⟩ in the early universe, starting with today's on-shell
condition responsible for the galactic center excess,

mS = mH / ( 2 √(1 + v0²/√2) ) ⇒ 4mS² − mH² + ξ mS² vann² = 4mS² ( 1 + vann²/√2 ) − mH²

= 4mS² (vann² − v0²)/√2 ≈ mS²/5 . (5.18)
This means that the dark matter particle has a mass just slightly below the Higgs pole. Using Eq.(5.17) the ratio of
the two annihilation rates, for all other parameters constant, then becomes
This is the maximum enhancement we can generate to explain Fermi’s galactic center excess. The corresponding
Higgs coupling λ3 is given in Figure 9.
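A numeric sketch of this maximum enhancement, using Eq.(5.17) with today's pole condition Eq.(5.18) and assuming the Standard Model Higgs width ΓH ≈ 4.1 MeV:

```python
import math

# illustrative inputs: SM-like Higgs width (an assumption) and xi from Eq.(5.17)
m_H, Gamma_H, xi = 125.0, 4.1e-3, 2 * math.sqrt(2)
v_ann, v_0 = math.sqrt(1 / 14), 1 / 1300

# today's pole condition, Eq.(5.18): 4 m_S^2 (1 + v_0^2/sqrt(2)) = m_H^2
m_S = m_H / (2 * math.sqrt(1 + v_0**2 / math.sqrt(2)))

def sigv(vbar):
    """Eq.(5.17) without the constant prefactor."""
    return 1 / ((4 * m_S**2 - m_H**2 + xi * m_S**2 * vbar**2)**2
                + 4 * m_S**2 * Gamma_H**2)

boost = sigv(v_0) / sigv(v_ann)
print(f"m_S = {m_S:.2f} GeV, boost = {boost:.1e}")   # m_S just below m_H/2
```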
We can turn the question around and compute the smallest annihilation cross section in the galactic center
consistent with the observed relic abundance in the Higgs portal model. For this purpose we assume that unlike in
Eq.(5.18) the pole condition is fulfilled in the early universe, leading to a Breit-Wigner suppression today of
mS = mH / ( 2 √(1 + vann²/√2) ) ⇒ 4mS² − mH² + ξ mS² v0² = 4mS² ( 1 + v0²/√2 ) − mH² ≈ − mS²/5 . (5.20)
5.2 Supersymmetric neutralinos

An explanation of the galactic center excess has to be based on the neutralino mass matrix given in Eq.(4.39),
defining a dark matter Majorana fermion as a mixture of the bino singlet, the wino triplet, and two higgsino
doublets. Some of its relevant couplings are given in Eq.(4.43). Correspondingly, some annihilation processes
leading to the observed relic density and underlying our interpretation of the Fermi galactic center excess are
illustrated in Figure 11. One practical advantage of the MSSM is that it offers many neutralino parameter regions
to play with. We know that pure wino or higgsino dark matter particles reproducing the observed relic density are
much heavier than the Fermi data suggests. Instead of these pure states we will rely on mixed states. A major
obstacle for all MSSM interpretations is the mass range shown in Figure 16, indicating a clear preference of the
galactic center excess for neutralino masses mχ̃01 ≲ 60 GeV. This does not correspond to the typical MSSM
parameter ranges giving us the correct relic density. This means that in an MSSM analysis of the galactic center
excess the proper error estimate for the photon spectrum is essential.
We start our discussion with the finely tuned annihilation through a SM-like light Higgs or through a Z-boson, i.e.
χ̃01 χ̃01 → h∗ , Z ∗ → bb̄. The properties of this channel are very similar to those of the Higgs portal. On the left
y-axes of Figure 17 we show the (inverse) relic density for a bino-higgsino LSP, both for a wide range of
neutralino masses and zoomed into the Higgs pole region. We decouple the wino to M2 = 700 GeV and vary M1
to give the correct relic density for three fixed, small higgsino mass values. We see that the bb̄ annihilation channel
only predicts the correct relic density in the two pole regions of the MSSM parameter space, with mχ̃01 = 46 GeV
and mχ̃01 = 63 GeV. The width of both peaks is given by the momentum smearing through the velocity spectrum
rather than by the physical Higgs and Z widths. The enhancement of the two peaks over the continuum is
comparable, with the Z-funnel coupled to the velocity-suppressed axial-vector current and the Higgs funnel
suppressed by the small bottom Yukawa coupling.
On the right y-axis of Figure 17, accompanied by dashed curves, we show the annihilation rate in the galactic
center. The rough range needed to explain the Fermi excess is indicated by the horizontal line. As discussed for the
Higgs portal, the difference to the relic density is that the velocities are much smaller, so the widths of the peaks
are now given by the physical widths of the two mediators. The scalar Higgs resonance now leads to a much
higher peak than the velocity-suppressed axial-vector coupling to the Z-mediator. This implies that continuum
annihilation as well as Z-pole annihilation would not explain the galactic center excess, while the Higgs pole
region could.
This is why in the right panel of Figure 17 we zoom into the Higgs peak regime. A valid explanation of the
galactic center excess requires the solid relic density curves to cross the solid horizontal line and at the same time
the dashed galactic center excess lines to cross the dashed horizontal line. We see that there exist finely tuned
regions around the Higgs pole which allow for an explanation of the galactic center excess via a thermal relic
through the process χ̃01 χ̃01 → bb̄. The physics of this channel is very similar to scalar Higgs portal dark matter.
For slightly larger neutralino masses, the dominant annihilation becomes χ̃01 χ̃01 → W W , mediated by a light
t-channel chargino combined with chargino-neutralino co-annihilation for the relic density. Equation (4.43)
indicates that in this parameter region the lightest neutralino requires either a wino content or a higgsino content.
[Figure 17 axes: (Ω h²)⁻¹ (left axis) and ⟨σv⟩GCE (right axis) versus mχ [GeV], for µ = 103, 125, 150 GeV]
Figure 17: Inverse relic density (solid, left axis) and annihilation rate in the galactic center (dashed, right axis) for
an MSSM parameter point where the annihilation is dominated by χ̃01 χ̃01 → bb̄. Figure from Ref. [15].
In the left panel of Figure 18 we show the bino–higgsino mass plane indicating the preferred regions from the
galactic center excess. The lightest neutralino mass varies from mχ̃01 ≈ 50 GeV to more than 250 GeV. Again, we
decouple the wino to M2 = 700 GeV, so the LSP is a mixture of higgsino, coupling to electroweak bosons, and
bino. For this slice in parameter space an increase in |µ| compensates any increase in M1 , balancing the bino and
higgsino contents. The MSSM parameter regions which allow for efficient dark matter annihilation into gauge
bosons are strongly correlated in M1 and µ, but not as tuned as the light Higgs funnel region with its underlying
pole condition. Around M1 = |µ| = 200 GeV a change in shape occurs. It is caused by the onset of neutralino
annihilation to top pairs, in spite of a heavy Higgs mass scale of 1 TeV.
To trigger a large annihilation rate for χ̃01 χ̃01 → tt̄ we lower the heavy pseudoscalar Higgs mass to mA = 500 GeV.
In the right panel of Figure 18 we show the preferred parameter range again in the bino-higgsino mass plane and
for heavy winos, M2 = 700 GeV. As expected, for mχ̃01 > 175 GeV the annihilation into top pairs follows the
W W annihilation region in the mass plane. The main difference between the W W and tt̄ channels is the smaller
M1 values around |µ| = 200 GeV. The reason is that an increased bino fraction compensates for the much larger
top Yukawa coupling. The allowed LSP mass range extends to mχ̃01 ≳ 200 GeV.
The only distinctive feature for mA = 500 GeV in the M1 vs µ plane is the set of peaks around M1 ≈ 300 GeV.
Here the lightest neutralino mass is around 250 GeV, just missing the A-pole condition. Because on the pole dark
matter annihilation through a 2 → 1 process becomes too efficient, the underlying coupling is reduced by a smaller
higgsino fraction of the LSP. The large-|M1 | regime does not appear in the upper left corner of Figure 18 because
at tree level this parameter region features mχ̃+1 < mχ̃01 and we have to include loop corrections to revive it.
In principle, for mχ̃01 > 126 GeV we should also observe neutralino annihilation into a pair of SM-like Higgs
bosons. However, the t-channel neutralino diagram which describes this process will typically be overwhelmed by
the annihilation to weak bosons with the same t-channel mediator, shown in Figure 11. From the annihilation into
top pairs we know that s-channel mediators with mA,H ≈ 2mh are in principle available, and depending on the
MSSM parameter point the heavy scalar Higgs can have a sizeable branching ratio into two SM-like Higgses. For
comparably large velocities in the early universe both s-channel mediators indeed work fine to predict the
observed relic density. For the smaller velocities associated with the galactic center excess the CP-odd mediator A
completely dominates, while the CP-even H is strongly velocity-suppressed. On the other hand, only the latter
couples to two light Higgs bosons, so an annihilation into Higgs pairs responsible for the galactic center excess is
difficult to realize in the MSSM.

Figure 18: Left: lightest neutralino mass based on the Fermi photon spectrum where χ̃01 χ̃01 → W W is a dominant
annihilation channel. Right: lightest neutralino mass based on the Fermi photon spectrum for mA = 500 GeV, where
we also observe χ̃01 χ̃01 → tt̄. The five symbols indicate local best-fitting parameter points. The black shaded regions
are excluded by the Fermi limits from dwarf spheroidal galaxies.
In summary, the channels

χ̃01 χ̃01 → bb̄, W W, tt̄ with mχ̃01 = 63 ... 250 GeV (5.22)
can explain the Fermi galactic center excess and the observed relic density in the MSSM. Because none of them
correspond to the central values of a combined fit to the galactic center excess, it is crucial that we take into
account all sources of (sizeable) uncertainties. An additional issue, which we will only come to in Section 6, is that
direct detection constraints, in addition to the requirements of the correct relic density and the correct galactic
center annihilation rate, pose a serious challenge to the MSSM explanations.
We can search for these additional singlet and singlino states at colliders. One interesting aspect is the link
between the neutralino and the Higgs sector, which can be probed by looking for anomalous Higgs decays, for
example into a pair of dark matter particles. Because an explanation of the galactic center excess requires the
singlet and the singlino to be light and to mix with their MSSM counterparts, the resulting invisible branching ratio
of the Standard-Model-like Higgs boson can be large.
Section 4.4. To achieve the currently observed density with light WIMPs we have to rely on an efficient
annihilation mechanism, which can be most clearly seen in the MSSM. For example, we invoke s-channel
annihilation or co-annihilation, both of which are not well captured by an effective theory description with a light
dark matter state and a heavy, non-propagating mediator. In the effective theory language of Section 4.4 this means
the mediators are not heavy compared to the dark matter agent,

mχ ≲ mmed . (5.25)
In addition, the MSSM and the NMSSM calculations illustrate how one full model extending the Standard Model
towards large energy scales can offer several distinct explanations, only loosely linked to each other. In this
situation we can collect all the necessary degrees of freedom in our model, but ignore additional states for example
predicted by an underlying supersymmetry of the Lagrangian. This approach is called simplified models. It
typically describes the dark matter sector, including co-annihilating particles, and a mediator coupling the dark
matter sector to the Standard Model. In that language we have come across a sizeable set of simplified models in
our explanation of the Fermi galactic center excess:
– dark fermion with SM Z mediator (MSSM, χ̃01 χ̃01 → f f¯, not good for galactic center excess);
– dark fermion with heavy s-channel pseudo-scalar mediator (MSSM, χ̃01 χ̃01 → tt̄);
– dark fermion with light s-channel pseudo-scalar mediator (NMSSM, χ̃01 χ̃01 → bb̄).
In addition, we encountered a set of models in our discussion of the relic density in the MSSM in Section 4.3:
– dark fermion with fermionic co-annihilation partner and charged s-channel mediator (MSSM, χ̃01 χ̃−1 → t̄b);
– dark fermion with fermionic co-annihilation partner and SM W -mediator (MSSM, χ̃01 χ̃−1 → ūd);
Strictly speaking, all the MSSM scenarios require a Majorana fermion as the dark matter candidate, but we can
replace it with a Dirac neutralino in an extended supersymmetric setup.
One mediator which is obviously missing in the above list is a new, heavy vector V or axial-vector. Heavy gauge
bosons are ubiquitous in models for physics beyond the Standard Model, and the only question is how we would
link or couple them to a dark matter candidate. In principle, there exist different mass regimes in the mχ − mV
mass plane, Eq.(5.26).
To allow for a global analysis including direct detection as well as LHC searches, we couple the vector mediator to
a dark matter fermion χ and the light up-quarks,
L ⊃ gu ū γ µ Vµ u + gχ χ̄ γ µ Vµ χ . (5.27)
84 5 INDIRECT SEARCHES
The induced width of the mediator then spans

ΓV/mV ≈ 0.4 ... 10% for gu = gχ = 0.2 ... 1 . (5.28)

From the s-channel annihilation process

χχ → V ∗ → uū (5.29)

we can compute the predicted relic density or the indirect detection prospects. While the χ − χ − V interaction
also induces a t-channel process χχ → V ∗ V ∗ , its contribution to the total dark matter annihilation rate is always
strongly suppressed by its 4-body phase space. The on-shell annihilation channel
χχ → V V (5.30)
becomes important for mV < mχ , with a subsequent decay of the mediator for example to two Standard Model
fermions. In that case the dark matter annihilation rate becomes independent of the mediator coupling to the
Standard Model, giving much more freedom to avoid experimental constraints.
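The width range of Eq.(5.28) follows from the standard partial width Γ(V → f f̄) = Nc g² mV/(12π) for a massless Dirac fermion with vector coupling g, summing the u-quark (Nc = 3) and, assuming it is kinematically open, the χχ̄ channel (Nc = 1):

```python
import math

# standard partial width for V -> f fbar with vector coupling g and m_f << m_V:
#   Gamma(V -> f fbar) = N_c g^2 m_V / (12 pi)
# summed over the u-quark (N_c = 3) and, assuming m_V > 2 m_chi, chi (N_c = 1)
def width_over_mass(g):
    return (3 + 1) * g**2 / (12 * math.pi)

for g in (0.2, 1.0):
    print(f"g_u = g_chi = {g}: Gamma_V/m_V = {width_over_mass(g):.1%}")
# reproduces the 0.4% ... 10% range of Eq.(5.28)
```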
In Figure 19 we observe that for a light mediator the predicted relic density is smaller than the observed values,
implying that the annihilation rate is large. In the left panel we see the three kinematic regimes defined in
Eq.(5.26). First, for small mediator masses the 2 → 2 annihilation process is χχ → uū. The dependence on the
light mediator mass is small because the mediator is always off-shell and the position of its pole is far away from
the available energy of the incoming dark matter particles. Around the pole condition 2mχ ≈ mV ± ΓV the model
predicts the correct relic density with very small couplings. For heavy mediators the 2 → 2 annihilation process
rapidly decouples with large mediator masses, as follows for example from Eq.(3.3). In the right panel of
Figure 19 we assume a constant mass ratio mV /mχ ≳ 1, finding that our simplified vector model has no problems
predicting the correct relic density over a wide range of model parameters.
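The three kinematic regimes in the left panel of Figure 19 can be caricatured by the s-channel propagator alone; this is a schematic shape with assumed parameter values, not the full annihilation rate:

```python
import math

# schematic s-channel shape for chi chi -> V* -> u ubar at fixed couplings:
#   sigma v ~ s / ((s - m_V^2)^2 + m_V^2 Gamma_V^2)  at  s ~ 4 m_chi^2
m_chi, g = 100.0, 0.5
s = 4 * m_chi**2

def rate(m_V):
    Gamma_V = (3 + 1) * g**2 * m_V / (12 * math.pi)   # width as in Eq.(5.28)
    return s / ((s - m_V**2)**2 + m_V**2 * Gamma_V**2)

light, pole, heavy = rate(10.0), rate(2 * m_chi), rate(2000.0)
print(light, pole, heavy)
# flat and unsuppressed for m_V << 2 m_chi, enhanced on the pole,
# decoupling like 1/m_V^4 for m_V >> 2 m_chi
```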
One issue we can illustrate with this non-MSSM simplified model is a strong dependence of our predictions on the
assumed model features. The Lagrangian of Eq.(5.27) postulates a coupling to up-quarks, entirely driven by our
goal to link dark matter annihilation with direct detection and LHC observables. From a pure annihilation
perspective we can also define the mediator coupling to the Standard Model through muons, without changing any
[Figure 19 axes: Ω h² versus mV [GeV] for mχ = 10, 50, 100 GeV (left) and versus mχ [GeV] for mV /mχ = 1.5, 3, 10 (right)]
Figure 19: Relic density for the simplified vector mediator model of Eq.(5.27) as a function of the mediator mass
for constant dark matter mass (left) and as a function of the dark matter mass for a constant ratio of mediator to
dark matter mass (right). Over the shaded bands we vary the couplings gu = gχ = 0.2 ... 1. Figure from Ref. [21].
of the results shown in Figure 19. Coupling to many SM fermions simultaneously, as we expect from an extra
gauge group, will increase the predicted annihilation rate easily by an order of magnitude. Moreover, it is not clear
how the new gauge group is related to the U (1)Y × SU (2)L structure of the electroweak Standard Model. All this
reflects the fact that unlike the Higgs portal model or supersymmetric extensions a simplified model is hardly more
than a single tree-level or loop-level Feynman diagram describing dark matter annihilation. It describes the leading
effects for example in dark matter annihilation based on 2 → 2 or 2 → 1 kinematics or the velocity dependence at
threshold. However, because simplified models are usually not defined on the full quantum level, they leave a long
list of open questions. For new gauge bosons, also discussed in Section 4.2, they include fundamental properties
like gauge invariance, unitarity, or freedom from anomalies.
6 Direct searches
The experimental strategy for direct dark matter detection is based on measuring a recoil of a nucleus after
scattering with WIMP dark matter. For this process we can choose the optimal nuclear target based on the largest
possible recoil energy. We start with the non-relativistic relation between the momenta in relative coordinates
between the nucleus and the WIMP, assuming a nucleus composed of A nucleons and with charge Z. The relative
WIMP velocity v0 /2 is defined in Eq.(3.19), so in terms of the reduced mass mA mχ /(mA + mχ ) we find
2mA EA = |p⃗A|² ≈ ( mA mχ/(mA + mχ) )² v0²/4 ⇔ EA = mA mχ²/(mA + mχ)² · v0²/8

⇒ dEA/dmA = [ 1/(mA + mχ)² − 2mA/(mA + mχ)³ ] mχ² v0²/8 = 0 ⇔ mA = mχ

⇒ EA = (mχ/32) v0² ≈ 10⁴ eV , (6.1)
with v0 ≈ 1/1300 and for a dark matter mass around 1 TeV. Because of the above relation, an experimental threshold
from the lowest observable recoil can be directly translated into a lower limit on dark matter masses we can probe
with such experiments. This also tells us that for direct detection all momentum transfers are very small compared
to the electroweak or WIMP mass scale. Similar masses of WIMP and nuclear targets produce the largest recoil in
the 10 keV range. Remembering that the Higgs mass in the Standard Model is roughly the same as the mass of the
gold atom we know that it should be possible to find appropriate nuclei, for example Xenon with a nucleus
including A = 131 nucleons, of which Z = 54 are protons.
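Eq.(6.1) can be checked with a short scan over target masses; all numbers are in eV, with the assumed 1 TeV dark matter mass:

```python
# Eq.(6.1): nuclear recoil energy as a function of the target mass m_A
v_0 = 1 / 1300
m_chi = 1e12                     # 1 TeV dark matter mass, in eV

def E_A(m_A):
    return m_A * m_chi**2 / (m_A + m_chi)**2 * v_0**2 / 8

# scan target masses around m_chi: the recoil is maximal for m_A = m_chi
masses = [m_chi * x for x in (0.1, 0.5, 1.0, 2.0, 10.0)]
best = max(masses, key=E_A)
print(best == m_chi, f"E_A = {E_A(best) / 1e3:.0f} keV")   # ~ 10^4 eV
```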
Strictly speaking, the dark matter velocity relevant for direct detection is a combination of the thermal,
un-directional velocity v0 ≈ 1/1300 and the earth’s movement around the sun,
vearth-sun = 15000 m/s · cos( 2π (t − 152.5 d)/365.25 d )

⇔ vearth-sun/c = 5 · 10⁻⁵ cos( 2π (t − 152.5 d)/365.25 d ) ≈ (v0/15) cos( 2π (t − 152.5 d)/365.25 d ) . (6.2)
If we had full control over all annual modulations in a direct detection scattering experiment we could use this
modulation to confirm that events are indeed due to dark matter scattering.
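A sketch of the annual modulation of Eq.(6.2), with t in days and velocities in units of c:

```python
import math

v_0 = 1 / 1300

# Eq.(6.2): modulation of the earth's velocity, t in days, velocity in units of c
def v_earth_sun(t):
    return 5e-5 * math.cos(2 * math.pi * (t - 152.5) / 365.25)

print(v_earth_sun(152.5) / v_0)          # maximal in early June, about v_0/15
print(v_earth_sun(152.5 + 365.25 / 2))   # sign flips half a year later
```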
Given that a dark matter particle will (typically) not be charged under SU (3)c , the interaction of the WIMP with
the partons inside the nucleons bound in the nucleus will have to be mediated by electroweak bosons or the Higgs.
We expect a WIMP charged under SU (2)L to couple to a nucleus by directly coupling to the partons in the
nucleons through Z-exchange. This means with increased resolution we have to compute the scattering processes
for the nucleus, the nucleons, and the partons:
[Feynman diagrams: χ scattering off the nucleus (A, Z), off the nucleons N = p, n, and off the quarks q = u, d, each through t-channel Z exchange]
This gauge boson exchange will be dominated by the valence quarks in the combinations p ≈ (uud) and
n ≈ (udd). Based on the interaction of individual nucleons, which we will calculate below, we can express the
for mp ≈ 1 GeV. This is clearly true for the typical recoils given in Eq.(6.1). In the next step, we need to compute
the interaction to the individual A nucleons in terms of their partons. Because there are very different types of
partons, valence quarks, sea quarks, and gluons, with different quantum numbers, this calculation is best described
in a specific model.
One of the most interesting theoretical questions in direct detection is how different dark matter candidates couple
to the non-relativistic nuclei. The general trick is to link the nucleon mass (operator) to the nucleon-WIMP
interaction (operator). We know that three quarks can form a color singlet state; in addition, there will be a gluon
and a sea quark content in the nucleons, but in a first attempt we assume that those will play a sub-leading role for
the nucleon mass or its interaction to dark matter, as long as the mediator couples to the leading valence quarks.
We start with the nucleon mass operator evaluated between two nucleon states and write it in terms of the partonic
quark constituents,
⟨N | mN 1 |N ⟩ = mN ⟨N |N ⟩ = Σq ⟨N | mq q̄q |N ⟩ = Σq mq ⟨N |q̄q|N ⟩ , (6.5)
assuming an appropriate definition of the constituent masses. Based on the same formalism we can write the
nucleon–WIMP interaction operator in terms of the quark parton content,
⟨N | Σq χχ q̄q |N ⟩ = χχ Σq ⟨N |q̄q|N ⟩ . (6.6)
These two estimates suggest that we can link the nucleon interaction operator to the nucleon mass operator in the
naive quark parton model. Based on the nucleon mass we define a non-relativistic quark density inside the nucleon
as
fN := ⟨N |N ⟩ = Σq (mq/mN) ⟨N |q̄q|N ⟩ = Σq fq ⇔ fq := (mq/mN) ⟨N |q̄q|N ⟩

⇒ ⟨N | Σq χχ q̄q |N ⟩ = χχ Σq fq (mN/mq) . (6.7)
The form factors fq describe the probability of finding a (valence) quark inside the proton or neutron at a
momentum transfer well below the nucleon mass. They can for example be computed using lattice gauge theory.
The issue with Eq.(6.7) is that it neither includes gluons nor any quantum effects. Things become more interesting
with a Higgs-mediated WIMP-nucleon interaction, as we encounter it in our Higgs portal models. To cover this
88 6 DIRECT SEARCHES
case we need to compute both the nucleon mass and the WIMP–nucleon interaction operators beyond the quark
parton level. From the LHC we know that at least for relativistic protons the dominant Higgs coupling is through the
gluon content. In the Standard Model the Higgs coupling to gluons is mediated by a top loop, which does not
decouple for large top masses. The fact that, in contrast, the top quark does decouple from the nucleon mass will
give us a non-trivial form factor for gluons.
Defining our quantum field theory framework, in proper QCD two terms contribute to the nucleon mass: the
valence quark masses accounted for in Eq.(6.5) and the strong interaction, or gluons, leading to a binding energy.
This view is supported by the fact that pions, consisting of two quarks, are almost an order of magnitude lighter
than protons and neutrons, with three quarks. We can describe both sources of the nucleon mass using the
energy–momentum tensor T µν as it appears for example in the Einstein–Hilbert action in Eq.(1.15),
mN ⟨N |N ⟩ = ⟨N | Tµµ |N ⟩ . (6.8)
Scale invariance, or the lack of fundamental mass scales in our theory, implies that the energy–momentum tensor is
traceless. A non-zero trace of the energy–momentum tensor indicates a change in the Lagrangian with respect to a
scale variation, where in our units a variation of the length scale and a variation of the energy scale are equivalent.
Lagrangians which are symmetric under such a scale variation cannot include explicit mass terms, because those
correspond to a fixed energy scale.
In addition to the quark masses, for the general form of the nucleon mass given in Eq.(6.8) we need to consider
contributions from the running strong coupling to the trace of the energy–momentum tensor. At one-loop order the
running of αs with the underlying energy scale p2 is given by
αs(p²) = 1 / ( b0 log(p²/ΛQCD²) ) with b0 = − (1/4π) ( 2nq/3 − 11Nc/3 ) , (6.9)
and an appropriate reference value of ΛQCD ≈ 200 MeV. Mathematically, such a reference mass scale has to
appear in any problem which involves a logarithmic running, i.e. which would otherwise force us to take the
logarithm of a dimensionful scale p2 . Physically, this scale is defined by the point at which the strong coupling
explodes and we need to switch degrees of freedom. That occurs at positive energy scales as long as b0 > 0, or as
long as the gluons dominate the running of αs . Because the running of the strong coupling turns the dimensionless
parameter αs into the dimensionful parameter ΛQCD , this mechanism is called dimensional transmutation.
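The one-loop running of Eq.(6.9) in a few lines, with ΛQCD = 200 MeV; the rise of αs toward small scales is what defines the reference scale:

```python
import math

# Eq.(6.9): one-loop running coupling with reference scale Lambda_QCD
Lambda_QCD, N_c = 0.2, 3        # GeV

def alpha_s(p, n_q):
    b0 = -(2 * n_q / 3 - 11 * N_c / 3) / (4 * math.pi)
    return 1 / (b0 * math.log(p**2 / Lambda_QCD**2))

print(f"alpha_s(91 GeV, n_q=5) = {alpha_s(91.2, 5):.3f}")   # perturbative
print(f"alpha_s(1 GeV,  n_q=3) = {alpha_s(1.0, 3):.3f}")    # growing toward Lambda_QCD
```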
The contribution of the running strong coupling to the nucleon mass is given through the kinetic gluon term in the
Lagrangian, combined with the momentum variation of the strong coupling. Altogether we find
mN ⟨N |N ⟩ = Σq mq ⟨N |q̄q|N ⟩ + (1/(2αs)) (dαs/d log p²) ⟨N | Gaµν Gaµν |N ⟩

= Σq mq ⟨N |q̄q|N ⟩ − (αs b0/2) ⟨N | Gaµν Gaµν |N ⟩

= Σq mq ⟨N |q̄q|N ⟩ + (αs/8π) ( 2nq/3 − 11Nc/3 ) ⟨N | Gaµν Gaµν |N ⟩ , (6.10)
again written at one loop and neglecting the anomalous dimension of the quark fields. One complication in this
formula is the appearance of all six quark fields in the sum, suggesting that all quarks contribute to the nucleon
mass. While this is true for the up and down valence masses, and possibly for the strange mass, the three heavier
quarks hardly appear in the nucleon. Instead, they contribute to the nucleon mass through gluon splitting or self
energy diagrams in the gluon propagator. We can compute this contribution in terms of a heavy quark effective
theory, giving us the leading contribution per heavy quark
⟨N |q̄q|N ⟩ |c,b,t = − (αs/(12π mq)) ⟨N | Gaµν Gaµν |N ⟩ + O(1/mq³) . (6.11)
6.1 Higgs portal 89
We can insert this result in the above expression and find the complete expression for the nucleon mass operator
mN ⟨N |N ⟩ = Σu,d,s mq ⟨N |q̄q|N ⟩ − Σc,b,t (αs/12π) ⟨N | Gaµν Gaµν |N ⟩ + (αs/8π) ( 2·6/3 − 11Nc/3 ) ⟨N | Gaµν Gaµν |N ⟩

= Σu,d,s mq ⟨N |q̄q|N ⟩ + (αs/8π) ( 2·3/3 − 11Nc/3 ) ⟨N | Gaµν Gaµν |N ⟩ . (6.12)
Starting from the full beta function of the strong coupling this result implies that we only need to consider the
running due to the three light-flavor quarks and the gluon itself for the nucleon mass prediction,
mN ⟨N |N ⟩ = Σu,d,s mq ⟨N |q̄q|N ⟩ − (αs b0(u,d,s)/2) ⟨N | Gaµν Gaµν |N ⟩ with b0(u,d,s) = − (1/4π) ( 2nlight/3 − 11Nc/3 ) . (6.13)
This reflects a full decoupling of the heavy quarks in their contribution to the nucleon mass. From the derivation it
is clear that the same structure appears for any number of light quarks defining our theory.
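The cancellation behind Eqs.(6.11)–(6.13) is simple enough to verify with exact fractions, writing all gluonic coefficients in units of αs/(8π):

```python
from fractions import Fraction as F

# decoupling check for Eq.(6.12), gluonic coefficients in units of alpha_s/(8 pi):
# each heavy-quark loop of Eq.(6.11) contributes -2/3 and cancels its own
# entry in the beta function, leaving only the light-flavor running
N_c, n_light, n_heavy = 3, 3, 3
heavy_loops = -n_heavy * F(2, 3)
full_beta = F(2, 3) * (n_light + n_heavy) - F(11, 3) * N_c
light_beta = F(2, 3) * n_light - F(11, 3) * N_c
print(heavy_loops + full_beta == light_beta)   # True
```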
Exactly in the same way we now describe the WIMP–nucleon interaction in terms of six quark flavors. The light
quarks, including the strange quark, form the actual quark content of the nucleon. Virtual heavy quarks occur
through gluon splitting at the one-loop level. In addition to the small Yukawa couplings of the light quarks we
know from LHC physics that we can translate the Higgs-top interaction into an effective Higgs–gluon interaction.
In the limit of large quark masses the loop-induced coupling defined by the Feynman diagram [Higgs coupling to
two gluons through a top triangle] is given by

LggH ⊃ gggH (H/vH) Gµν Gµν with gggH = αs/(12π) . (6.14)
In terms of an effective field theory the dimension-5 operator scales like 1/v and not 1/mt . The reason is that the
dependence on the top mass in the loop and on the Yukawa coupling in the numerator cancel exactly in the limit of
small momentum transfer through the Higgs propagator. Unlike for the nucleon mass operators this means that in
the Higgs interaction the Yukawa coupling induces a non-decoupling feature in our theory. At the effective
field theory level we can successively compute the ggH^{n+1} coupling from the ggH^n coupling via
g_ggH^{n+1} = mq^{n+1} (∂/∂mq) ( g_ggH^n / mq^n ) . (6.15)
This relation also holds for n = 0, which means it formally links the Higgs–nucleon coupling operator to the
nucleon mass operator in Eq.(6.13). The only difference between the effective Higgs-gluon interaction at LHC
energies and at direct detection energies is that in direct detection all three quarks c, b, t contribute to the effective
interaction defined in Eq.(6.14).
Keeping this link in mind we see that the Higgs-mediated WIMP interaction operator again consists of two terms
⟨N | Σu,d,s mq H q̄q |N ⟩ − ⟨N | Σc,b,t (αs/12π) H Gaµν Gaµν |N ⟩ . (6.16)
90 6 DIRECT SEARCHES
[Feynman diagrams: Higgs coupling directly to the light quarks and, through heavy-quark loops, to gluons]
The Yukawa interaction, described by the first term in Eq.(6.16), has a form similar to the nucleon mass in
Eq.(6.13). Comparing the two formulas for light quarks only we indeed find
⟨N | Σu,d,s mq H q̄q |N ⟩ = H Σu,d,s mq ⟨N |q̄q|N ⟩ = H mN ⟨N |N ⟩ |u,d,s (Eq.(6.13)) . (6.17)
This reproduces the simple recipe for computing the light-quark-induced WIMP-nucleon interaction as
proportional to the nucleon mass. The remaining, numerically dominant gluonic term is defined in the so-called
chiral limit mu,d,s = 0. Because of the non-decoupling behavior this contribution is independent of the heavy
quark mass, so we find for nheavy heavy quarks
$$\begin{aligned}
\sum_{q=c,b,t} \langle N|\, m_q H \bar{q}q\, |N\rangle
&\overset{\text{Eq.(6.16)}}{=} - \frac{2 n_\text{heavy}}{3}\, \frac{\alpha_s}{8\pi}\, H\, \langle N| G^a_{\mu\nu} G^{a\,\mu\nu} |N\rangle \\
&\neq - \frac{\alpha_s}{8\pi} \left( \frac{11}{3} N_c - \frac{2 n_\text{light}}{3} \right) H\, \langle N| G^a_{\mu\nu} G^{a\,\mu\nu} |N\rangle
\overset{\text{Eq.(6.13)}}{=} H\, m_N\, \langle N|N\rangle \Big|_{c,b,t,g} \;.
\end{aligned} \tag{6.18}$$
The contribution to the nucleon mass comes from the gluon and nlight light quark loops, while the gluonic
contribution to the nucleon–Higgs coupling is driven by the nheavy heavy quark loops. The boundary condition is
n_light + n_heavy = 6. At the energy scale of direct detection we can compensate for this mismatch between the
Higgs–nucleon coupling and the naive scaling of the nucleon mass and nucleon Yukawa interaction shown in
Eq.(6.7). We simply include an additional factor
$$\sum_{q=c,b,t} \frac{m_q}{m_N}\, \langle N|\, H \bar{q}q\, |N\rangle
= \frac{\dfrac{2 n_\text{heavy}}{3}}{\dfrac{11}{3} N_c - \dfrac{2 n_\text{light}}{3}}\; H\, \langle N|N\rangle \Big|_{c,b,t,g} \;, \tag{6.19}$$
which we can estimate at leading order and at energy scales relevant for direct dark matter searches to be
$$\frac{\dfrac{2 n_\text{heavy}}{3}}{\dfrac{11}{3} N_c - \dfrac{2 n_\text{light}}{3}}\; \Bigg|_{n_\text{light}=3}
= \frac{\dfrac{2\times 3}{3}}{11 - \dfrac{2\times 3}{3}} = \frac{2}{9} \;. \tag{6.20}$$
This effect leads to a suppression of the already small Higgs–nucleon interaction at low momentum transfer. The
exact size of the suppression depends on the number of active light quarks in our effective theory, which in turn
depends on the momentum transfer.
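We can tabulate this suppression factor from Eq.(6.19) as a function of the number of active light flavors in a few lines (a simple numerical sketch; which quarks count as light is the physics input):

```python
# Suppression factor from Eq.(6.19),
#   (2 n_heavy / 3) / ( (11/3) N_c - 2 n_light / 3 ),
# with n_light + n_heavy = 6 active quark flavors and N_c = 3.
def suppression_factor(n_light, n_colors=3):
    n_heavy = 6 - n_light
    return (2.0 * n_heavy / 3.0) / (11.0 * n_colors / 3.0 - 2.0 * n_light / 3.0)

for n_light in (3, 4, 5):
    print(n_light, suppression_factor(n_light))
# n_light = 3 reproduces the 2/9 of Eq.(6.20)
```

With n_light = 3 we recover the 2/9 of Eq.(6.20); counting the charm or also the bottom quark as light reduces the factor further.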
At the parton level, the weakly interacting part of the calculation of the nucleon–WIMP scattering rate closely
follows the calculation of WIMP annihilation in Eq.(4.7). In the case of direct detection the valence quarks in the
nucleons couple through a t-channel Higgs to the dark matter scalar S. We account for the parton nature of the
three relevant heavy quarks by writing the nucleon Yukawa coupling as fN mN × 2/9,
$$\mathcal{M} = \bar u(k_2)\, \frac{-2i f_N m_N}{9\, v_H}\, u(k_1)\; \frac{-i}{(k_1-k_2)^2 - m_H^2}\; \left( -2i \lambda_3 v_H \right) \;. \tag{6.21}$$
6.1 Higgs portal 91
For an incoming and outgoing fermion the two spinors are ū and u. As long as the Yukawa coupling is dominated
by the heavy quarks, it will be the same for neutrons and protons, i.e. Mp = Mn. We have to square this matrix
element, paying attention to the spinors v and u, and then sum over the spins of the external fermions. In this case
we already know that we are only interested in scattering in the low-energy limit, i.e. |(k1 − k2)²| ≪ mN² ≪ mH²,
$$\begin{aligned}
\sum_\text{spin} |\mathcal{M}|^2
&= \frac{16}{81}\, \lambda_3^2 f_N^2 m_N^2\; \sum_\text{spin} u(k_2)\bar u(k_2)\; \sum_\text{spin} u(k_1)\bar u(k_1)\; \frac{1}{\left[ (k_1-k_2)^2 - m_H^2 \right]^2} \\
&= \frac{16}{81}\, \lambda_3^2 f_N^2 m_N^2\; \text{Tr}\left[ (\slashed{k}_2 + m_N 1\!\!1)\, (\slashed{k}_1 + m_N 1\!\!1) \right] \frac{1}{\left[ (k_1-k_2)^2 - m_H^2 \right]^2} \\
&= \frac{32}{81}\, \lambda_3^2 f_N^2 m_N^2 \left( 2\, k_1 \cdot k_2 + 2 m_N^2 \right) \frac{1}{\left[ (k_1-k_2)^2 - m_H^2 \right]^2} \\
&= \frac{32}{81}\, \lambda_3^2 f_N^2 m_N^2 \left( -(k_1-k_2)^2 + 4 m_N^2 \right) \frac{1}{\left[ (k_1-k_2)^2 - m_H^2 \right]^2} \\
&\approx \frac{128}{81}\, \lambda_3^2 f_N^2\, \frac{m_N^4}{m_H^4}
\qquad \Rightarrow \qquad
\overline{\sum_{\text{spin,color}}} |\mathcal{M}|^2 = \frac{64}{81}\, \lambda_3^2 f_N^2\, \frac{m_N^4}{m_H^4} \;. 
\end{aligned} \tag{6.22}$$
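The Dirac trace entering Eq.(6.22) is the standard result Tr[(k̸₂ + mN)(k̸₁ + mN)] = 4(k₁·k₂ + mN²), which we can verify numerically with explicit Dirac matrices (a sketch using numpy and one arbitrary on-shell kinematic point):

```python
import numpy as np

# Dirac matrices in the Dirac representation, metric (+,-,-,-)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def slashed(k):
    # k-slash = k^mu gamma_mu = k^0 gamma^0 - vec(k).vec(gamma)
    return sum(metric[m, m] * k[m] * gamma[m] for m in range(4))

mN = 0.939                       # nucleon mass in GeV
p = np.sqrt(1.0 - mN**2)         # |3-momentum| for E = 1 GeV, on-shell
theta = 0.7                      # arbitrary scattering angle
k1 = np.array([1.0, 0.0, 0.0, p])
k2 = np.array([1.0, p * np.sin(theta), 0.0, p * np.cos(theta)])

trace = np.trace((slashed(k2) + mN * np.eye(4)) @ (slashed(k1) + mN * np.eye(4)))
k1k2 = 1.0 - p**2 * np.cos(theta)    # Minkowski product k1.k2
print(trace.real, 4.0 * (k1k2 + mN**2))
```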
The cross section in the low-energy limit is by definition spin-independent and becomes
$$\sigma^\text{SI}(SN \to SN) = \frac{1}{16\pi s}\, \overline{\sum_{\text{spin,color}}} |\mathcal{M}|^2
= \frac{1}{16\pi (m_S + m_N)^2}\, \frac{64}{81}\, \lambda_3^2 f_N^2\, \frac{m_N^4}{m_H^4}
\approx \frac{4 \lambda_3^2 f_N^2}{81\pi}\, \frac{m_N^4}{m_H^4\, m_S^2} \;, \tag{6.23}$$
where in the last step we assume mS ≫ mN. For WIMP–Xenon scattering this gives us
$$\sigma^\text{SI}(SA \to SA) = \frac{4 \lambda_3^2 f_N^2 A^2}{81\pi}\, \frac{m_N^4}{m_H^4\, m_S^2}
= 6 \cdot 10^{-7}\; \frac{\lambda_3^2}{m_S^2} \;. \tag{6.24}$$
The two key ingredients to this expression can be easily understood: the suppression 1/mH⁴ appears after we
effectively integrate out the Higgs in the t-channel, and the high power mN⁴ occurs because in the low-energy
limit the Higgs coupling to fermions involves a chirality flip and hence one power of mN for each coupling. The
angle-independent matrix element in the low-energy limit can easily be translated into a spectrum of the scattering
angle, which will then give us the recoil spectrum, if desired. We limit ourselves to the total rate, assuming that the
appropriate WIMP mass range ensures that the total cross section gets converted into measurable recoil. This
approach reflects the fact that we consider the kinematics of scattering processes and hence the existence of phase
space a topic for experimental lectures.
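The parametric dependence of the total rate in Eq.(6.23) can be checked in a few lines of Python. The value fN = 0.3 below is only a placeholder for the effective Higgs–nucleon coupling; we only test the mass scalings, not the absolute normalization:

```python
import math

def sigma_SI(lam3, mS, mH=125.0, mN=0.939, fN=0.3):
    # spin-independent cross section of Eq.(6.23) in GeV^-2, valid
    # for mS >> mN; fN = 0.3 is an assumed effective coupling value
    return 4.0 * lam3**2 * fN**2 * mN**4 / (81.0 * math.pi * mH**4 * mS**2)

# mass scalings: doubling mS costs a factor 4, doubling mH a factor 16
r_mS = sigma_SI(1.0, 200.0) / sigma_SI(1.0, 400.0)
r_mH = sigma_SI(1.0, 200.0, mH=125.0) / sigma_SI(1.0, 200.0, mH=250.0)
print(r_mS, r_mH)   # -> 4 and 16
```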
Next, we can ask which range of Higgs portal parameters with the correct relic density, as shown in Figure 9, is
accessible to direct detection experiments. According to Eq.(6.24) the corresponding cross section first becomes
small when λ3 ≪ 1, which means mS ≲ mH/2 with the possibility of explaining the Fermi galactic center excess.
Second, the direct detection cross section is suppressed for heavy dark matter and leads to a scaling λ3 ∝ mS.
From Eq.(4.21) we know that a constant annihilation rate leading to the correct relic density also corresponds to
λ3 ∝ mS. However, while the direct detection rate features an additional suppression through the nucleon mass
mN, the annihilation rate benefits from several subleading annihilation channels, like for example the annihilation
to two gauge bosons or two top quarks. This suggests that for large mS the two lines of constant cross sections in
the λ3-mS plane run almost in parallel, with a slightly smaller slope for the annihilation rate. This is exactly what
we observe in Figure 9, leaving heavy Higgs portal dark matter with mS ≳ 300 GeV a viable model for all
observations related to cold dark matter. This minimal dark matter mass constraint rapidly increases with new
direct detection experiments coming online. On the other hand, from our discussion of the threshold behavior in
Section 3.5 it should be clear that we can effectively switch off all direct detection constraints by making the scalar
Higgs mediator a pseudo-scalar.
92 6 DIRECT SEARCHES
Finally, we can modify our model and the quantitative link between the relic density and direct detection, as
illustrated in Figure 9. The typical renormalizable Higgs portal includes a scalar dark matter candidate. However,
if we are willing to include higher-dimensional terms in the Lagrangian we can combine the Higgs portal with
fermionic and vector dark matter. This is interesting in view of the velocity dependence discussed in Section 3.4.
The annihilation of dark matter fermions is velocity-suppressed at threshold, so larger dark matter couplings
predict the observed relic density. Because direct detection is not sensitive to the annihilation threshold, it will be
able to rule out even the mass peak region for fermionic dark matter.
6.2 Supersymmetric neutralinos
In supersymmetry with its fermionic dark matter candidate, nucleon–neutralino scattering is described by
four-fermion operators, just like in Fermi’s theory. The reason is that all intermediate particles coupling the two
neutralinos to two quarks are far below their mass shell. Accounting for the mass dimension through a scalar
mediator mass scale Λ ≈ mh0 , the matrix element reads
$$\begin{aligned}
\mathcal{M} &= \frac{g_{NN\tilde\chi^0_1\tilde\chi^0_1}}{\Lambda^2}\; \bar v_{\tilde\chi^0_1} v_{\tilde\chi^0_1}\; \bar u_N u_N \\
\sum_\text{spins} |\mathcal{M}|^2 &= \frac{g^2_{NN\tilde\chi^0_1\tilde\chi^0_1}}{\Lambda^4}\;
\text{Tr}\left[ (\slashed{p}_2 - m_{\tilde\chi^0_1} 1\!\!1)\, (\slashed{p}_1 - m_{\tilde\chi^0_1} 1\!\!1) \right]
\text{Tr}\left[ (\slashed{k}_2 + m_N 1\!\!1)\, (\slashed{k}_1 + m_N 1\!\!1) \right] \;.
\end{aligned} \tag{6.25}$$
The corresponding spin-independent cross section mediated by the Standard Model Higgs in the low-energy limit
is then
$$\sigma^\text{SI}(\tilde\chi^0_1 N \to \tilde\chi^0_1 N) \approx \frac{g^2_{NN\tilde\chi^0_1\tilde\chi^0_1}}{\pi}\, \frac{m_N^2}{m_{h^0}^4} \;. \tag{6.26}$$
As for the Higgs portal case in Eq.(6.24) the rate is suppressed by the mediator mass to the fourth power. The
lower power mN² appears only because we absorb the Yukawa coupling in g_{NNχ̃⁰₁χ̃⁰₁} = 2 mN fN/9,
Figure 20: Relic density (labelled PLANCK) vs direct dark matter detection constraints. The dark matter agent is
switched from a real scalar (left) to a fermion (right). Figure from Ref. [22].
following Eq.(4.43). We see that this scaling is identical to the Higgs portal case in Eq.(6.24), but with an
additional suppression through the difference in mixing angles in the neutralino and Higgs sectors.
However, in supersymmetric models the dark matter mediator will often be the Z-boson, because the interaction
gN N χ̃01 χ̃01 is not suppressed by a factor of the kind mN /v. In this case we need to describe a (transverse) vector
coupling between the WIMP and the nucleon in our four-fermion interaction. Following exactly the same argument
as for the scalar exchange we can look at a Z-mediated interaction between a (Dirac) fermion χ and the nucleons,
$$\begin{aligned}
\mathcal{M} &= \frac{g_{NN\chi\chi}}{\Lambda^2}\; \bar v_\chi \gamma_\mu v_\chi\; \bar u_N \gamma^\mu u_N \\
\sum_\text{spins} |\mathcal{M}|^2 &= \frac{g^2_{NN\chi\chi}}{\Lambda^4}\;
\text{Tr}\left[ (\slashed{p}_2 - m_\chi 1\!\!1)\, \gamma^\mu\, (\slashed{p}_1 - m_\chi 1\!\!1)\, \gamma^\nu \right]
\text{Tr}\left[ (\slashed{k}_2 + m_N 1\!\!1)\, \gamma_\mu\, (\slashed{k}_1 + m_N 1\!\!1)\, \gamma_\nu \right] \\
&\approx \frac{g^2_{NN\chi\chi}}{\Lambda^4}\, (8 m_\chi^2)(8 m_N^2)
\approx 64\, g^2_{NN\chi\chi}\, \frac{m_\chi^2 m_N^2}{\Lambda^4}
\qquad \Rightarrow \qquad
\sigma^\text{SI}(\chi N \to \chi N) \approx \frac{4\, g^2_{NN\chi\chi}}{\pi}\, \frac{m_N^2}{\Lambda^4} \;. 
\end{aligned} \tag{6.28}$$
The spin-independent cross section mediated by a gauge boson is typically several orders of magnitude larger than
the cross section mediated by Higgs exchange. This means that models with dark matter fermions coupling to the
Z-boson will be in conflict with direct detection constraints. For the entire relic neutralino surface with pure and
mixed states the spin-independent cross sections are shown in Figure 21. The corresponding current and future
exclusion limits are indicated in Figure 22. The so-called neutrino floor, which can be reached within the next
decade, is the parameter region where the expected neutrino background will make standard direct detection
searches more challenging.
The problem with this result in Eq.(6.28) is that it does not hold for Majorana fermions, like the neutralino χ̃⁰₁.
From the discussion in Section 3.5 we know that a vector mediator cannot couple Majorana fermions to the
nucleons through a vector current; only the axial vector coupling survives.
Figure 21: Left: spin-independent nucleon-scattering cross-section for relic neutralinos. Right: relic neutralino
exclusions from XENON100 and LUX and prospects from XENON1T and LZ. The boxed out area denotes the
LEP exclusion. Figure from Ref. [12].
For axial vector couplings the current is defined by γµ γ5 . This means it depends on the chirality or the helicity of
the fermions. The spin operator is defined in terms of the Dirac matrices as ~s = γ5 γ 0~γ . This indicates that the
axial vector coupling actually is a coupling to the spin of the nucleon. This is why the result is called a
spin-dependent cross section, which for each nucleon reads
$$\sum_\text{spins} |\mathcal{M}|^2 \approx 16 \times 4\, g^2_{NN\tilde\chi^0_1\tilde\chi^0_1}\, \frac{m^2_{\tilde\chi^0_1} m_N^2}{\Lambda^4}
\qquad \Rightarrow \qquad
\sigma^\text{SD}(\tilde\chi^0_1 N \to \tilde\chi^0_1 N) \approx \frac{4\, g^2_{NN\tilde\chi^0_1\tilde\chi^0_1}}{\pi}\, \frac{m_N^2}{\Lambda^4} \;. \tag{6.30}$$
Again, we can read off Eq.(4.43) the form of the effective coupling for the light quarks q = u, d, s.
The main difference between the spin-independent and spin-dependent scattering is that for the coupling to the
nucleon spin we cannot assume that the couplings to all nucleons inside a nucleus add coherently. Instead, we need
to link the spin representations of the nucleus to spin representations of each nucleon. Instead of finding a coherent
enhancement with Z 2 or A2 the result scales like A, weighted by Clebsch-Gordan coefficients which appear from
reducing out the combination of the spin-1/2 nucleons. Nevertheless, direct detection strongly constrains the
higgsino content of the relic neutralino, with the exception of a pure higgsino, where the two terms in g_{Zχ̃⁰₁χ̃⁰₁}
cancel each other.
Figure 22: Spin-independent WIMP–nucleon cross section limits and projections (solid, dotted, dashed curves)
and hints for WIMP signals (shaded contours) and projections (dot and dot-dashed curves) for direct detection
experiments. The yellow region indicates dangerous backgrounds from solar, atmospheric, and diffuse supernovae
neutrinos. Figure from Ref. [23].
7 Collider searches
Collider searches for dark matter rely on two properties of the dark matter particle: first, the new particles have to
couple to the Standard Model. This can be either a direct coupling, for example to the colliding leptons and quarks,
or an indirect coupling through a mediator. Second, we need to measure traces of particles which interact with
the detectors as weakly as for example neutrinos do. And unlike dedicated neutrino detectors, their collider
counterparts do not include hundreds of cubic meters of interaction material. Under those boundary conditions
collider searches for dark matter particles benefit from several advantages:
1. we know the kinematic configuration of the dark matter production process. This is linked to the fact that
most collider detectors are so-called multi-purpose detectors which can measure a great number of
observables;
2. the large number of collisions (parametrized by the luminosity L) can give us a large number of dark matter
particles to analyze. This allows us to for example measure kinematic distributions which reflect the
properties of the dark matter particle;
3. all background processes and all systematic uncertainties can be studied, understood, and simulated in
detail. Once an observation of a dark matter particle passes all conditions the collider experiments require
for a discovery, we will know that we discovered such a new particle. Otherwise, if an anomaly turns out to
not pass these conditions we have, at least in my lifetime, always been able to identify what the problem was.
One weakness we should always keep in mind is that a particle which does not decay while crossing the detector
and which interacts weakly enough to not leave a trace does not have to be stable on cosmological time scales. To
make this statement we need to measure enough properties of the dark matter particle to for example predict its
relic density the way we discuss it in Section 3.
The key observable we can compute and analyze at colliders is the number of events expected for a certain
production and decay process in a given time interval. The number of events is the product of the luminosity L,
measured for example in inverse femtobarns, the total production cross section, measured in femtobarns, and the
detection efficiency, measured in per-cent,3
$$N_\text{events} = L \cdot \sigma_\text{tot} \cdot \epsilon \;. \tag{7.1}$$
This way the event rate is split into a collider–specific number describing the initial state, a process–specific
number describing the physical process, and a detector–specific efficiency for each final state particle. The
efficiency includes for example phase-space dependent cuts defining the regions of sensitivity of a given
experiment, as well as the so-called trigger requirements defining which events are saved and looked at. This
structure holds for every collider.
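A minimal numerical sketch of this event-rate formula, with purely illustrative numbers that do not correspond to any specific analysis:

```python
# number of events = luminosity x cross section x efficiency;
# illustrative numbers, not from any specific analysis
luminosity = 140.0      # fb^-1, roughly an LHC Run 2 dataset
cross_section = 50.0    # fb, hypothetical signal process
efficiency = 0.10       # combined acceptance times efficiency
n_events = luminosity * cross_section * efficiency
print(n_events)         # -> 700 expected events
```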
When it comes to particles with electroweak interactions the most influential experiments were ALEPH, OPAL,
DELPHI, and L3 at the Large Electron-Positron Collider (LEP) at CERN. It ran from 1989 until 2000, first with a
e+ e− energy right on the Z pole, and then with energies up to 209 GeV. Its life-time integrated luminosity is
1 fb⁻¹. The results from running on the Z pole are easily summarized: the SU(2)L gauge sector shows no hints
for deviations from the Standard Model predictions. Most of these results are based on an analysis of the
Breit–Wigner propagator of the Z boson which we introduce in Eq.(4.11),
$$\sigma(e^+e^- \to Z) \propto \frac{E^2_{e^+e^-}}{\left( E^2_{e^+e^-} - m_Z^2 \right)^2 + m_Z^2 \Gamma_Z^2} \;. \tag{7.2}$$
3 Cross sections and luminosities are two of the few observables which we do not measure in eV.
If we know the energy of the incoming e⁺e⁻ system, we can plot the cross section as a function of Ee⁺e⁻
and measure the Z mass and the Z width.
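We can mimic this line-shape measurement numerically: scanning the Breit–Wigner shape of Eq.(7.2) in the collision energy, the peak position tracks mZ and the full width at half maximum tracks ΓZ (a sketch; the input values are assumptions standing in for the parameters we want to extract):

```python
# Scan the line shape of Eq.(7.2) and read off the Z mass (peak
# position) and width (FWHM); input values are stand-in assumptions.
mZ, GZ = 91.19, 2.50    # GeV

def sigma(E):
    return E**2 / ((E**2 - mZ**2)**2 + mZ**2 * GZ**2)

energies = [85.0 + 0.001 * i for i in range(12001)]   # 85 ... 97 GeV
values = [sigma(E) for E in energies]
peak = energies[values.index(max(values))]
half = max(values) / 2.0
above = [E for E, v in zip(energies, values) if v > half]
fwhm = above[-1] - above[0]
print(peak, fwhm)   # peak near mZ, width near Gamma_Z
```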
From this Z mass measurement in relation to the W mass and the vacuum expectation value vH = 246 GeV we
can extract the top quark and Higgs masses, because these particles contribute to quantum corrections of the Z
properties. The total Z width includes a partial width from the decay Z → ν ν̄, with a branching ratio around 20%.
It comes from three generations of light neutrinos and is much larger than for example the 3.4% branching ratio of
the decay Z → e+ e− . Under the assumption that only neutrinos contribute to the invisible Z decays we can
translate the measurement of the partial width into a measurement of the number of light neutrinos, giving
2.984 ± 0.008. Alternatively, we can assume that there are three light neutrinos and use this measurement to
constrain light dark matter with couplings to the Z that would lead to an on-shell decay, for example Z → χ̃01 χ̃01 in
our supersymmetric model. If a dark matter candidate relies on its electroweak couplings to annihilate to the
observed relic density, this limit means that any WIMP has to be heavier than
$$m_{\tilde\chi^0_1,\,S} > \frac{m_Z}{2} = 45~\text{GeV} \;. \tag{7.4}$$
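Returning to the neutrino counting above: it is a one-line division of the measured invisible width by the Standard Model partial width for a single neutrino species (the input numbers below are representative values, quoted here as assumptions):

```python
# number of light neutrino species from the invisible Z width,
# N_nu = Gamma_inv / Gamma(Z -> nu nubar)_SM; representative inputs
gamma_inv = 499.0     # MeV, measured invisible width (assumed value)
gamma_nu_sm = 167.2   # MeV, SM partial width per neutrino species
n_nu = gamma_inv / gamma_nu_sm
print(round(n_nu, 2))  # close to 3
```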
The results from the higher-energy runs are equally simple: there is no sign of new particles which could be singly
or pair-produced in e+ e− collisions. The Feynman diagram for the production of a pair of new particles, which
could be dark matter particles, is
[Feynman diagram: e⁺e⁻ annihilation into a pair of dark matter particles χχ through an s-channel mediator]
The experimental results mean that it is very hard to postulate new particles which couple to the Z boson or to the
photon. The Feynman rules for the corresponding f f¯Z and f f¯γ couplings are
$$-i\gamma^\mu \left( \ell P_L + r P_R \right) \qquad \text{with} \qquad
\ell = \frac{e}{s_w c_w} \left( T_3 - 2 Q s_w^2 \right),\quad r = \ell \Big|_{T_3 = 0} \qquad (Zf\bar f)$$
$$\ell = r = Q e \qquad (\gamma f \bar f) \;, \tag{7.5}$$
with the isospin quantum number T3 = ±1/2 and s2w ≈ 1/4. Obviously, a pair of charged fermions will always be
produced through an s-channel photon. If a particle has SU (2)L quantum numbers, the Z-coupling can be
cancelled with the help of the electric charge, which leads to photon-induced pair production. Dark matter
particles cannot be charged electrically, so for WIMPs there will exist a production process with a Z-boson in the
s-channel. This result is important for co-annihilation in a more complex dark matter sector. For example in our
supersymmetric model the charginos couple to photons, which means that they have to be heavier than
$$m_{\tilde\chi^\pm_1} > \frac{E^\text{max}_{e^+e^-}}{2} = 104.5~\text{GeV} \;, \tag{7.6}$$
in order to escape LEP constraints. The problem of producing and detecting a pair of dark matter particles at any
collider is that if we do not produce anything else those events with ‘nothing visible happening’ are hard to
identify. Lepton colliders have one big advantage over hadron colliders, as we will see later: we know the
kinematics of the initial state. This means that if, for example, we produce one invisibly decaying particle we can
reconstruct its four-momentum from the initial state momenta and the final-state recoil momenta. We can then
check whether for the majority of events the on-shell condition p2 = m2 with a certain mass is fulfilled. This is
how OPAL managed to extract limits on Higgs production in the process e+ e− → ZH without making any
assumptions about the Higgs decay, notably including a decay to two invisible states. Unfortunately, because this
analysis did not reach the observed Higgs mass of 126 GeV, it does not constrain our dark matter candidates in
Higgs decays.
The pair production process
$$e^+e^- \to \gamma^* Z^* \to \chi\chi \tag{7.7}$$
is hard to extract experimentally, because we cannot distinguish it from an electron and positron just missing each
other. The way out is to produce another particle in association with the dark matter particles, for example a
photon with sufficiently large transverse momentum pT.
Experimentally, this photon recoils against the two dark matter candidates, defining the signature as a photon plus
missing momentum. A Feynman diagram for the production of a pair of dark matter particles and a photon
through a Z-mediator is
[Feynman diagram: e⁺e⁻ → χχ through an s-channel Z, with a photon radiated off the incoming electron]
Because the photon can only be radiated off the incoming electrons, this process is often referred to as initial state
radiation (ISR). Reconstructing the four-momentum of the photon allows us to also reconstruct the
four-momentum of the pair of dark matter particles. The disadvantage is that a hard photon is only present in a
small fraction of all e+ e− collisions for example at LEP. This is one of the few instances where the luminosity or
the size of the cross section makes a difference at LEP. Normally, the relatively clean e+ e− environment allows us
to build very efficient and very precise detectors, which altogether allows us to separate a signal from a usually
small background cleanly. For example, the chargino mass limit in Eq.(7.6) applies to a wide set of new particles
which decay into leptons and missing energy and is hard to avoid.
We should mention that for a long time people have discussed building another e+ e− collider. Searching for new
particles with electroweak interactions is one of the main motivations. Proposals range from a circular Higgs
factory with limited energy due to energy loss through synchrotron radiation (FCC-ee/CERN or CEPC/China),
to a linear collider with an energy up to 1 TeV (ILC/Japan), to a multi-TeV linear collider with a driving beam
technology (CLIC/CERN).
7.2 Hadron colliders and mono-X
Historically, hadron colliders have had great success in discovering new, massive particles. This included UA1 and
UA2 at SPS/CERN discovering the W and Z bosons, CDF and D0 at the Tevatron/Fermilab discovering the top
quark, and most recently ATLAS and CMS at the LHC with their Higgs discovery. The simple reason is that
protons are much heavier than electrons, which makes it easier to store large amounts of kinetic energy and release
them in a collision. On the other hand, hadron collider physics is much harder than lepton collider physics,
because the experimental environment is more complicated, there is hardly any process with negligible
backgrounds, and calculations are generically less precise.
This means that at the LHC we need to consider two kinds of processes. The first involves all known particles, like
electrons or W and Z bosons, or the top quark, or even the Higgs boson. These processes we call backgrounds,
and they are described by QCD. The Higgs boson is in the middle of a transition to a background, only a few years
ago is was the most famous example for a signal. By definition, signals are very rare compared to backgrounds. As
an example, Figure 23 shows that at the LHC the production cross section for a pair of bottom quarks is larger than
105 nb or 1011 fb, the typical production rate for W or Z bosons ranges around 200 nb or 2 · 108 fb, the rate for a
pair of 500 GeV supersymmetric gluinos would have been 4 · 104 fb.
One LHC aspect we have to mention in the context of dark matter searches is the trigger. At the LHC we can only
save and study a small number of all events. This means that we have to decide very fast if an event has the
potential of being interesting in the light of the physics questions we are asking at the LHC; only these events we
keep. For now we can safely assume that above an energy threshold we will keep all events with leptons or
photons, plus, if at all possible, events with missing energy, generated by neutrinos in the Standard Model or by
dark matter particles in new physics models, and events with high-energy jets coming from resonance decays.
When we search for dark matter particles at hadron colliders like the LHC, these analyses cannot rely on our
knowledge of the initial state kinematics. What we know is that in the transverse plane the incoming partons add to
zero three-momentum. In contrast, we are missing the necessary kinematic information in the beam direction. This
means that dark matter searches always rely on production with another particle, leading to un-balanced
three-momenta in the plane transverse to the beam direction. This defines an observable missing transverse
momentum vector with two relevant dimensions. The missing transverse energy is the absolute value of this
two-dimensional vector. The big problem with missing transverse momentum is that it relies on
reconstructing the entire recoil. This causes several experimental problems:
1. there will always be particles in events which are not observed in the calorimeters. For example, a particle
can hit a support structure of the detector, generating fake missing energy;
2. in particular hadronic jets might not be fully reconstructed, leading to fake missing energy in the direction of
this jet. This is the reason why we usually require the missing momentum vector to not be aligned with any
hard object in an event;
3. slight mis-measurements of the momenta of each of the particles in an event add, approximately, in
quadrature to a mis-measurement of the missing energy vector;
Figure 23: Production rates for signal and background processes at hadron colliders. The discontinuity is due
to the Tevatron being a proton–antiproton collider while the LHC is a proton–proton collider. The two colliders
correspond to the x–axis values of 2 TeV and something between 7 TeV and 14 TeV. Figure from Ref. [24].
4. QCD activity from the underlying event or from pile-up gets subtracted before we analyze anything. This
subtraction adds to the error on the measured missing momentum;
5. non-functional parts of the detector automatically lead to a systematic bias in the missing momentum
distribution.
Altogether these effects imply that missing transverse energy below 30 ... 50 GeV at the LHC could as well be
zero. Only cuts on ET,miss ≳ 100 GeV can guarantee a significant background rejection. Next, we want to
compute the production rates for dark matter particles at the LHC. To do that we need to follow the same path as
for direct detection in Section 6, namely link the calculable partonic cross section to the observable hadronic cross
section. We cannot compute the energy distributions of the incoming partons inside the colliding protons from first
principles, but we can start with the assumption that all partons move collinearly with the surrounding proton. In
that case the parton kinematics is described by a one-dimensional probability distribution for finding a parton just
depending on the respective fraction of the proton’s momentum, the parton density function (pdf) fi (x) with
x = 0 ... 1 and i = u, d, c, s, g. This parton density itself is not an observable; it is a distribution in the
mathematical sense, which means it is only defined when we integrate it together with a partonic cross section.
Different parton densities have very different behavior — for the valence quarks (uud) they peak somewhere
around x ≲ 1/3, while the gluon pdf is negligible at x ∼ 1 and grows very rapidly towards small x, fg(x) ∝ x⁻².
Towards x < 10⁻³ it becomes even steeper.
In addition, we can make some arguments based on symmetries and properties of the hadrons. For example the
parton distributions inside an anti-proton are linked to those inside a proton through the CP symmetry, which is an
exact symmetry of QCD,
$$f_q^{\bar p}(x) = f_{\bar q}(x) \;, \qquad f_{\bar q}^{\bar p}(x) = f_q(x) \;, \qquad f_g^{\bar p}(x) = f_g(x) \;, \tag{7.9}$$
for all x. The proton consists of uud quarks, plus quantum fluctuations either involving gluons or quark–antiquark
pairs. The expectation values for up- and down-quarks have to fulfill
$$\int_0^1 dx\, \left( f_u(x) - f_{\bar u}(x) \right) = 2 \;, \qquad \int_0^1 dx\, \left( f_d(x) - f_{\bar d}(x) \right) = 1 \;. \tag{7.10}$$
Finally, the proton momentum has to be the sum of all parton momenta, defining the QCD sum rule
$$\sum_i \langle x_i \rangle = \int_0^1 dx\; x \left( \sum_q f_q(x) + \sum_{\bar q} f_{\bar q}(x) + f_g(x) \right) = 1 \;. \tag{7.11}$$
Computing the quark and antiquark contributions to this sum, we find that they only add up to about 1/2, which
means that half of the proton momentum is carried by gluons.
Using the parton densities we can compute the hadronic cross section,
$$\sigma_\text{tot} = \int_0^1 dx_1 \int_0^1 dx_2\; \sum_{ij} f_i(x_1)\, f_j(x_2)\; \hat\sigma_{ij}(x_1 x_2 S) \;. \tag{7.12}$$
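As a toy version of this convolution we can integrate an invented gluon-only density against an invented partonic cross section with a production threshold (the shapes and numbers are for illustration only; real analyses use fitted pdf sets, for example through LHAPDF):

```python
import math

# Toy version of the convolution in Eq.(7.12): an invented gluon-only
# density folded with an invented partonic cross section.
def f_g(x):
    # toy gluon density: steep rise towards small x, vanishing at x -> 1
    return 0.2 * x**-1.5 * (1.0 - x)**5

def sigma_hat(s_hat, s_min=500.0**2):
    # toy partonic cross section with a production threshold at s_min
    return 1.0 / s_hat if s_hat > s_min else 0.0

def sigma_tot(S, n=400):
    # midpoint rule on a logarithmic grid in x1, x2 (1e-4 ... 1)
    lo = math.log(1e-4)
    h = -lo / n
    xs = [math.exp(lo + (i + 0.5) * h) for i in range(n)]
    total = 0.0
    for x1 in xs:
        for x2 in xs:
            # jacobian dx = x dlog(x) for each parton
            total += f_g(x1) * f_g(x2) * sigma_hat(x1 * x2 * S) * x1 * x2 * h * h
    return total

s13, s8 = sigma_tot(13000.0**2), sigma_tot(8000.0**2)
print(s13 > s8)   # more collision energy, more rate above threshold
```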
On the parton level, the analogy to photon radiation in e+ e− production will be dark matter production together
with a quark or a gluon. Two Feynman diagrams for this mono-jet signature with an unspecified mediator are
[Feynman diagrams: quark–antiquark annihilation to a dark matter pair with a quark or gluon radiated off the initial state]
In addition to this experimental argument there is a theoretical, QCD argument which suggests to look for initial
state radiation of a quark or a gluon. Both of the above diagrams include an intermediate quark or gluon
propagator with the denominator
$$\frac{1}{(k_1 - k_2)^2}
= \frac{1}{k_1^2 - 2 k_1^0 k_2^0 + 2\, \vec k_1 \cdot \vec k_2 + k_2^2}
= \frac{1}{2 \left( |\vec k_1| |\vec k_2| \cos\theta_{12} - k_1^0 k_2^0 \right)}
= \frac{1}{2 k_1^0 k_2^0}\, \frac{1}{\cos\theta_{12} - 1} \;, \tag{7.13}$$
using kᵢ² = 0 and |k⃗ᵢ| = kᵢ⁰ for the massless partons.
This propagator diverges when the radiated parton is soft (k20 → 0) or collinear with the incoming parton
(θ12 → 0). Phenomenologically, the soft divergence is less dangerous, because the LHC experiments can only
detect any kind of particle above a certain momentum or transverse momentum threshold. The actual pole in the
collinear divergence gets absorbed into a re-definition of the parton densities fq,g (x), as they appear for example
in the hadronic cross section of Eq.(7.12). This so-called mass factorization is technically similar to a
renormalization procedure for example of the strong coupling, except that renormalization absorbs ultraviolet
divergences and works on the fundamental Lagrangian level [10]. One effect of this re-definition of the parton
densities is that relative to the original definition the quark and gluon densities mix, which means that the two
Feynman diagrams shown above cannot actually be separated on a consistent quantum level.
Experimentally, the scattering or polar angle θ12 is not the variable we actually measure. The reason is that it is not
boost invariant and that we do not know the partonic rest frame in the beam direction. Instead, we can use two
standard kinematic variables,
$$t = -s \left( 1 - \frac{m_{\chi\chi}^2}{s} \right) \frac{1 - \cos\theta_{12}}{2} \qquad \text{(Mandelstam variable)}$$
$$p_T^2 = s \left( 1 - \frac{m_{\chi\chi}^2}{s} \right)^2 \frac{1 - \cos\theta_{12}}{2}\; \frac{1 + \cos\theta_{12}}{2} \qquad \text{(transverse momentum)} \;. \tag{7.14}$$
Comparing the two forms we see that the transverse momentum is symmetric under the switch
cos θ12 ↔ − cos θ12 , which in terms of the Mandelstam variables corresponds to t ↔ u. From Eq.(7.14) we see
that the collinear divergence appears as a divergence of the partonic transverse momentum distribution,
$$\frac{d\sigma_{\chi\chi j}}{dp_{T,j}} \propto |\mathcal{M}_{\chi\chi j}|^2 \propto \frac{1}{t} \propto \frac{1}{p_{T,j}^2} \;. \tag{7.15}$$
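The symmetry of Eq.(7.14) under cos θ₁₂ ↔ −cos θ₁₂, i.e. t ↔ u, and the relation pT² = tu/s are easy to verify numerically (a sketch for one arbitrary phase space point; the mass values are assumptions):

```python
s = 1000.0**2        # partonic energy squared in GeV^2 (assumed point)
m_chichi = 600.0     # invariant mass of the dark matter pair in GeV

def t_mandelstam(cos_theta):
    # first line of Eq.(7.14)
    return -s * (1.0 - m_chichi**2 / s) * (1.0 - cos_theta) / 2.0

def pT2(cos_theta):
    # second line of Eq.(7.14)
    return (s * (1.0 - m_chichi**2 / s)**2
            * (1.0 - cos_theta) / 2.0 * (1.0 + cos_theta) / 2.0)

c = 0.37                                   # arbitrary scattering angle
t, u = t_mandelstam(c), t_mandelstam(-c)   # u is t with the angle flipped
print(abs(pT2(c) - pT2(-c)) < 1e-6)        # symmetric under cos -> -cos
print(abs(pT2(c) - t * u / s) < 1e-6)      # pT^2 = t u / s
```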
An obvious question is whether this divergence is integrable, i.e. if it leads to a finite cross section σχχj . We can
approximate the phase space integration in the collinear regime using an appropriate constant C to write
$$\sigma_{\chi\chi j} \approx \int_{p^\text{min}_{T,j}}^{p^\text{max}_{T,j}} dp_{T,j}^2\; \frac{C}{p_{T,j}^2}
= 2 \int_{p^\text{min}_{T,j}}^{p^\text{max}_{T,j}} dp_{T,j}\; \frac{C}{p_{T,j}}
= 2 C\, \log \frac{p^\text{max}_{T,j}}{p^\text{min}_{T,j}} \;. \tag{7.16}$$
For an integration of the full phase space down to a lower limit p^min_{T,j} = 0, this logarithm is divergent. When
we apply an experimental cut, for example p^min_{T,j} = 10 GeV, the logarithm becomes large, because
p^max_{T,j} ≳ 2mχ is given by the typical energy scales of the scattering process. When we absorb the collinear
divergence into re-defined parton densities and use the parton shower to enforce and simulate the correct behavior
$$\frac{d\sigma_{\chi\chi j}}{dp_{T,j}} \;\overset{p_{T,j} \to 0}{\longrightarrow}\; 0 \;, \tag{7.17}$$
the large collinear logarithm in Eq.(7.16) gets re-summed to all orders in perturbation theory. However, over a
wide range of values the transverse momentum distribution inherits the collinearly divergent behavior. This means
that most jets radiated from incoming partons appear at small transverse momenta, and even after including the
parton shower regulator the collinear logarithm significantly enhances the probability to radiate such collinear jets.
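The collinear logarithm of Eq.(7.16) can be checked against a brute-force integration of the 1/pT² spectrum, setting C = 1 and choosing illustrative cutoff values:

```python
import math

# brute-force integral of dsigma = dpT 2C/pT between the cutoffs,
# compared with the closed form 2 C log(ptmax/ptmin); C = 1 here
ptmin, ptmax, C = 10.0, 500.0, 1.0
n = 200000
h = (ptmax - ptmin) / n
numeric = sum(2.0 * C / (ptmin + (i + 0.5) * h) * h for i in range(n))
closed = 2.0 * C * math.log(ptmax / ptmin)
print(numeric, closed)
```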
The same is (obviously) true for the initial state radiation of photons. The main difference is that for the photon
process we can neglect the amplitude with an initial state photon due to the small photon parton density.
Once we know that at the LHC we can generally look for the production of dark matter particles with an initial
state radiation object, we can study different mono-X channels. Some example Feynman diagrams for mono-jet,
mono-photon, and mono-Z production are
[Feynman diagrams: q\bar q \to \chi\chi + g (mono-jet), q\bar q \to \chi\chi + \gamma (mono-photon), and q\bar q \to \chi\chi + Z(\to f\bar f) (mono-Z)]
For the radiated Z-boson we need to specify a decay. While hadronic decays Z → q q̄ come with a large branching
ratio, we need to ask what they add to the universal mono-jet signature. Leptonic decays like Z → µµ can help in
difficult experimental environments, but are suppressed by a branching ratio of 3.4% per lepton generation.
Mono-W events can occur through initial state radiation when we use a q\bar q' initial state to generate a hard q\bar q
scattering. Finally, mono-Higgs signatures obviously make no sense for initial state radiation. From the similarity
of the above Feynman diagrams we can first assume that at least in the limit mZ → 0 the total rates for the
different mono-X processes relative to the mono-jet rate scale like
\frac{\sigma_{\chi\chi\gamma}}{\sigma_{\chi\chi j}} \approx \frac{\alpha \, Q_q^2}{\alpha_s \, C_F} \approx \frac{1}{40}

\frac{\sigma_{\chi\chi\mu\mu}}{\sigma_{\chi\chi j}} \approx \frac{\alpha \, Q_q^2 \, s_w^2}{\alpha_s \, C_F} \, \text{BR}(Z \to \mu\mu) \approx \frac{1}{4000} .   (7.18)
The actual suppression of the mono-Z channel is closer to 10−4 , once we include the Z-mass suppression through
the available phase space. In addition, the similar Feynman diagrams also suggest that any kinematic
x-distribution scales like
\frac{1}{\sigma_{\chi\chi j}} \frac{d\sigma_{\chi\chi j}}{dx} \approx \frac{1}{\sigma_{\chi\chi\gamma}} \frac{d\sigma_{\chi\chi\gamma}}{dx} \approx \frac{1}{\sigma_{\chi\chi f\bar f}} \frac{d\sigma_{\chi\chi f\bar f}}{dx} .   (7.19)
Here, the suppression of the mono-photon is stronger, because the rapidity coverage of the detector for jets extends
to |η| < 4.5, while photons rely on an efficient electromagnetic calorimeter with |η| < 2.5. On the other hand,
photons can be detected to significantly smaller transverse momenta than jets.
Note that the same scaling as in Eq.(7.18) applies to the leading mono-X backgrounds, namely Z \to \nu\bar\nu production in association with a jet, a photon, or a leptonically decaying Z, possibly with the exception of mono-Z production, where the hard process and the collinear radiation are now both described by Z-production. This means that the signal scaling of Eq.(7.18) also applies to the backgrounds,

\frac{\sigma_{\nu\nu\gamma}}{\sigma_{\nu\nu j}} \approx \frac{\alpha \, Q_q^2}{\alpha_s \, C_F} \approx \frac{1}{40}

\frac{\sigma_{\nu\nu\mu\mu}}{\sigma_{\nu\nu j}} \approx \frac{\alpha \, Q_q^2 \, s_w^2}{\alpha_s \, C_F} \, \text{BR}(Z \to \mu\mu) \approx \frac{1}{4000} .   (7.21)
If our discovery channel is statistics limited, the significances nσ for the different channels are given in terms of
the luminosity, efficiencies, and the cross sections
n_{\sigma,j} = \epsilon_j \sqrt{L} \, \frac{\sigma_{\chi\chi j}}{\sqrt{\sigma_{\nu\nu j}}} \quad\Rightarrow\quad n_{\sigma,\gamma} = \epsilon_\gamma \sqrt{L} \, \frac{\sigma_{\chi\chi\gamma}}{\sqrt{\sigma_{\nu\nu\gamma}}} \approx \epsilon_\gamma \sqrt{L} \, \frac{1}{\sqrt{40}} \, \frac{\sigma_{\chi\chi j}}{\sqrt{\sigma_{\nu\nu j}}} = \frac{1}{6.3} \, \frac{\epsilon_\gamma}{\epsilon_j} \, n_{\sigma,j} .   (7.22)
Unless the efficiency correction factors, including acceptance cuts and cuts rejecting other backgrounds, point towards a very significant advantage of the mono-photon channel, the mono-jet channel will be the most promising search strategy. Using the same argument, the suppression factor between the expected mono-jet and mono-Z significances will be around \sqrt{6000} \approx 77.
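The scaling of Eq.(7.22) rests on the common suppression factor 1/40 in signal and background; a minimal numerical sketch, with luminosity, cross sections, and efficiencies chosen purely for illustration:

```python
import numpy as np

# Statistics-limited significance n_sigma = eps * sqrt(L) * sigma_S / sqrt(sigma_B),
# following Eq.(7.22). All input numbers are invented illustration values.
L = 300.0e3                  # integrated luminosity in pb^-1 (300 fb^-1)
sig_j, bkg_j = 0.1, 1.0e3    # mono-jet signal and Z(->nunu)+jet background in pb
eps_j = eps_gamma = 1.0      # efficiencies set equal for the comparison

# mono-photon: signal and background suppressed by the same factor 40
sig_gamma, bkg_gamma = sig_j / 40.0, bkg_j / 40.0

n_j = eps_j * np.sqrt(L) * sig_j / np.sqrt(bkg_j)
n_gamma = eps_gamma * np.sqrt(L) * sig_gamma / np.sqrt(bkg_gamma)

print(n_gamma / n_j)   # 1/sqrt(40) ≈ 0.158, i.e. the factor 1/6.3 of Eq.(7.22)
assert np.isclose(n_gamma / n_j, 1.0 / np.sqrt(40.0))
```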
This estimate might change if the uncertainties are dominated by systematics or a theory uncertainty. These errors scale proportionally to the number of background events in the signal region, again with a signature-dependent proportionality factor \kappa_X describing how well we know the background distributions. This means for the significances

n_{\sigma,\gamma} = \frac{1}{\kappa_\gamma} \, \frac{\sigma_{\chi\chi\gamma}}{\sigma_{\nu\nu\gamma}} = \frac{1}{\kappa_\gamma} \, \frac{\sigma_{\chi\chi j}}{\sigma_{\nu\nu j}} = \frac{\kappa_j}{\kappa_\gamma} \, n_{\sigma,j} .   (7.23)
Typically, we understand photons better than jets, both experimentally and theoretically. On the other hand,
systematic and theory uncertainties at the LHC are usually limited by the availability and the statistics in control
regions, regions which we can safely assume to be described by the Standard Model.
We can simulate mono-X signatures for vector mediators, described in Section 5.4. In that case the three mono-X
signatures are indeed induced by initial state radiation. The backgrounds are dominated by Z-decays to neutrinos.
The corresponding LHC searches are based on the missing transverse momentum distribution and the transverse
momentum pT,X of the mono-X object. There are (at least) two strategies to control for example the mono-jet
background: first, we can measure it for example using Z → µ+ µ− decays or hard photons produced in
association with a hard jet. Second, if the dark matter signal is governed by a harder energy scale, like the mass of
a heavy mediator, we can use the low-pT region as a control region and only extrapolate the pT distributions.
Figure 24 gives an impression of the transverse momentum spectra in the mono-jet, mono-photon, and mono-Z channels.

[Figure 24: Transverse momentum spectrum for signals and backgrounds in the different mono-X channels for a heavy vector mediator with m_{Z'} = 1 TeV. Figure from Ref. [25].]

Comparing the mono-jet and mono-photon rates we see that the shapes of the transverse momentum spectra of the jet or photon, recoiling against the dark matter states, are essentially the same in both cases, for the
respective signals as well as for the backgrounds. The signal and background rates follow the hierarchy derived
above. Indeed, the mono-photon hardly adds anything to the much larger mono-jet channel, except for cases where
in spite of advanced experimental strategies the mono-jet channel is limited by systematics. The mono-Z channel
with a leptonic Z-decay is kinematically almost identical to the other two channels, but with a strongly reduced
rate. This means that for mono-X signatures induced by initial state radiation the leading mono-jet channel can be
expected to be the most useful, while other mono-X analyses will only become interesting when the production
mechanism is not initial state radiation.
Finally, one of the main challenges of mono-X signatures is that by definition the mediator has to couple to the Standard Model and to dark matter. For example, in the case of the simple model of Eq.(5.27) the mediator decays to quarks as well as to dark matter, and the relative size of the branching ratios is given by the ratio of couplings g_\chi^2/g_u^2. Instead of the mono-X signature
we can constrain part of the model parameter space through resonance searches with the same topology as the
mono-X search and without requiring a hard jet,
[Feynman diagrams: dijet resonance production q\bar q \to V \to q\bar q, with and without an additional radiated gluon]
On the other hand, for the parameter space g_u \ll g_\chi at constant product g_u g_\chi and constant mediator mass, the impact of resonance searches is reduced, whereas mono-X searches remain relevant.
In addition to the very general mono-jet searches for dark matter, we will again look at our two specific models. The Higgs portal model only introduces one more particle, a scalar S whose only coupling to the Standard Model proceeds through the Higgs. This means that the Higgs has to act as an s-channel mediator not only for dark matter annihilation, but also for LHC production,
pp → H ∗ → SS + jets . (7.25)
The Higgs couples to gluons in the incoming protons through a top loop, which implies that its production rate is
very small. The Standard Model predicts an on-shell Higgs rate of 50 pb for gluon fusion production at a 14 TeV
LHC. Alternatively, we can look for weak-boson-fusion off-shell Higgs production, i.e. production in association
with two forward jets. The corresponding Feynman diagram is
[Feynman diagram: weak boson fusion, qq \to qq \, SS through t-channel W exchange and an off-shell Higgs, H^* \to SS]
These so-called tagging jets will allow us to trigger the events. For an on-shell Higgs boson the weak boson fusion
cross section at the LHC is roughly a factor 1/10 below gluon fusion, and its advantages are discussed in detail in
Ref. [10].
In particular in this weak-boson-fusion channel ATLAS and CMS are conducting searches for invisibly decaying Higgs bosons. The main backgrounds are invisible Z-decays into a pair of neutrinos, and W-decays where we miss the lepton and are only left with one neutrino. For high luminosities around 3000 fb^{-1} and assuming an essentially unchanged Standard Model Higgs production rate, the LHC will be sensitive to invisible branching ratios at the level of a few per-cent. The key to this analysis is to understand not only the tagging jet kinematics, but also the central jet radiation between the two forward tagging jets.
Following the discussion in Section 4.1 the partial width for the SM Higgs boson decays into light dark matter is
\Gamma(H \to SS) = \frac{\lambda_3^2 \, v_H^2}{32\pi \, m_H} \sqrt{1 - \frac{4 m_S^2}{m_H^2}} \quad\Leftrightarrow\quad \frac{\Gamma(H \to SS)}{m_H} \approx \frac{\lambda_3^2}{8\pi} \sqrt{1 - \frac{4 m_S^2}{m_H^2}} < \frac{\lambda_3^2}{8\pi} .   (7.27)
This value has to be compared to the Standard Model prediction \Gamma_H/m_H = 4 \cdot 10^{-5}. For example, a 10% invisible branching ratio BR(H \to SS) into very light scalars with m_S \ll m_H/2 corresponds to a portal coupling

\frac{\lambda_3^2}{8\pi} = 4 \cdot 10^{-6} \quad\Leftrightarrow\quad \lambda_3 = \sqrt{32\pi} \cdot 10^{-3} \approx 10^{-2} .   (7.28)
The light scalar reference point in agreement with the observed relic density, Eq.(4.14), has \lambda_3 = 0.3 for roughly m_S \lesssim 50 GeV. This is well above the approximate final reach for the invisible Higgs branching ratio at the high-luminosity LHC.
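The translation between an invisible branching ratio and the portal coupling in Eq.(7.28) can be checked in a few lines; a sketch assuming m_S \ll m_H/2, so that the phase space factor of Eq.(7.27) is close to one:

```python
import math

# SM prediction for the total Higgs width, Gamma_H / m_H = 4e-5
gamma_sm_over_mh = 4e-5
br_inv = 0.10  # target invisible branching ratio

# BR = Gamma_inv / (Gamma_SM + Gamma_inv)  =>  Gamma_inv = Gamma_SM * BR/(1-BR)
gamma_inv_over_mh = gamma_sm_over_mh * br_inv / (1.0 - br_inv)

# Eq.(7.27) in the limit m_S << m_H/2: Gamma_inv/m_H ≈ lambda_3^2/(8 pi)
lam3 = math.sqrt(8.0 * math.pi * gamma_inv_over_mh)

print(lam3)  # ≈ 0.01, matching Eq.(7.28)
assert 0.009 < lam3 < 0.012
```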
For larger dark matter masses above mS = 200 GeV the LHC cross section for pair production in weak boson
fusion is tiny, namely
\sigma(SSjj) \approx \frac{\lambda_3^2}{10} \, \text{fb} \;\overset{\lambda_3 = 0.1}{=}\; 10^{-3} \, \text{fb} .   (7.29)
Without going into much detail this means that heavy scalar dark matter is unlikely to be discovered at the LHC
any time soon, because the final state is heavy and the coupling to the Standard Model is strongly constrained
through the observed relic density.
The main feature of supersymmetry is that it is not just a theory predicting a dark matter particle, it is a complete,
renormalizable ultraviolet completion of the Standard Model valid to the Planck scale. From Section 4.3 we know
that the MSSM and the NMSSM offer a wide variety of particles, including messengers linking the visible matter
and dark matter sectors. Obviously, the usual mono-X signatures from Section 7.2 or the invisible Higgs decays
from Section 7.3 will also appear in supersymmetric models. For example SM-like Higgs decays into a pair of
light neutralinos can occur for a mixed gaugino-higgsino LSP with M_1 \lesssim |\mu| \lesssim 100 GeV. An efficient annihilation
towards the observed relic density goes through an s-channel Z-mediator coupling to the higgsino fraction. Here
we can find
\text{BR}(h \to \tilde\chi_1^0 \tilde\chi_1^0) = (10 \dots 50)\% \quad\text{for}\quad m_{\tilde\chi_1^0} = (35 \dots 40) \,\text{GeV and } (50 \dots 55) \,\text{GeV} ,   (7.30)
mostly constrained by direct detection. On the other hand, supersymmetric models offer many more dark matter
signatures and provide a UV completion to a number of different simplified models. They are often linked to
generic features of heavy new particle production, which is what we will focus on below.
If our signature consists of a flexible number of visible and invisible particles we rely on global observables. The
visible mass is based on the assumption that we are looking for the decay of two heavy new states, where the
parton densities will ensure that these two particles are produced close to threshold. We can then approximate the
partonic energy \sqrt{\hat s} \sim m_1 + m_2 by some kind of visible energy. Without taking into account missing energy and just adding leptons \ell and jets j the visible mass looks like

m_\text{visible}^2 = \Big( \sum_{\ell,j} E \Big)^2 - \Big( \sum_{\ell,j} \vec p \Big)^2 .   (7.31)
Similarly, Tevatron and LHC experiments have for a long time used an effective transverse mass scale which is
usually evaluated for jets only, but can trivially be extended to leptons,
H_T = \sum_{\ell,j} E_T = \sum_{\ell,j} p_T ,   (7.32)
`,j `,j
assuming massless final state particles. In an alternative definition of HT we sum over a number of jets plus the
missing energy and skip the hardest jet in this sum. Obviously, we can add the missing transverse momentum to
this sum, giving us
m_\text{eff} = \sum_{\ell,j,\text{miss}} E_T = \sum_{\ell,j,\text{miss}} p_T .   (7.33)
This effective mass is known to trace the mass of the heavy new particles decaying for example to jets and missing
energy. This interpretation relies on the non–relativistic nature of the production process and our confidence that
all jets included are really decay jets.
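As a concrete illustration of Eqs.(7.31)-(7.33), a sketch evaluating the three global observables for a toy event; the three-momenta are invented illustration values:

```python
import numpy as np

# Toy final state: three-momenta (px, py, pz) in GeV for two jets and a
# lepton; invented illustration values, all objects taken massless.
p3 = np.array([[ 80.0,  60.0,  30.0],
               [-50.0,  20.0, -10.0],
               [ 10.0, -40.0,   5.0]])
E = np.linalg.norm(p3, axis=1)          # massless objects: E = |p|

# visible mass, Eq.(7.31)
m_vis = np.sqrt(E.sum()**2 - np.linalg.norm(p3.sum(axis=0))**2)

# HT, Eq.(7.32): scalar sum of transverse momenta
pT = np.linalg.norm(p3[:, :2], axis=1)
HT = pT.sum()

# effective mass, Eq.(7.33): add the missing transverse momentum,
# here fixed by transverse momentum balance of the toy event
pT_miss = np.linalg.norm(p3[:, :2].sum(axis=0))
m_eff = HT + pT_miss

print(m_vis, HT, m_eff)
assert 0.0 < m_vis <= E.sum()
assert m_eff >= HT
```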
In the Standard Model the neutrino produces such missing transverse energy, typically through the decays
W → `+ ν and Z → ν ν̄. In W + jets events we can learn how to reconstruct the W mass from one observed and
one missing particle. We construct a transverse mass in analogy to an invariant mass, but neglecting the
longitudinal momenta of the decay products
m_T^2 = (E_{T,\text{miss}} + E_{T,\ell})^2 - (\vec p_{T,\text{miss}} + \vec p_{T,\ell})^2
      = m_\ell^2 + m_\text{miss}^2 + 2 \left( E_{T,\ell} E_{T,\text{miss}} - \vec p_{T,\ell} \cdot \vec p_{T,\text{miss}} \right) ,   (7.34)

in terms of the transverse energy E_T^2 = \vec p_T^{\,2} + m^2. By definition, it is invariant under, or better independent of,
longitudinal boosts. Moreover, as the projection of the invariant mass onto the transverse plane it is also invariant
under transverse boosts. The transverse mass is always smaller than the actual mass and reaches this limit for a
purely transverse momentum direction, which means that we can extract mW from the upper endpoint in the
mT,W distribution. To reject Standard Model backgrounds we can simply require mT > mW .
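The endpoint at m_W in the transverse mass of Eq.(7.34) is easy to see in a toy Monte Carlo; a sketch with massless decay products and isotropic W \to \ell\nu decays in the W rest frame (longitudinal boosts drop out of m_T by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
mW = 80.4  # GeV

# Isotropic two-body decay W -> l nu in the W rest frame; each massless
# decay product carries energy mW/2, and the transverse projections are
# back-to-back with |pT| = (mW/2) sin(theta).
cos_theta = rng.uniform(-1.0, 1.0, 100000)
sin_theta = np.sqrt(1.0 - cos_theta**2)
pT = 0.5 * mW * sin_theta

# massless limit of Eq.(7.34) with pT_nu = -pT_l:
# mT^2 = 2 (ET_l ET_miss - pT_l . pT_miss) = 4 pT^2
mT = 2.0 * pT

print(mT.max())   # approaches mW from below
assert mT.max() <= mW + 1e-9
assert mT.max() > 0.999 * mW
```

The distribution populates the region below m_W and its upper endpoint reproduces the W mass, exactly the feature used to reject backgrounds with m_T > m_W.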
The first supersymmetric signature we discuss makes use of the fact that already the neutralino-chargino sector
involves six particles, four neutral and two charged. Two LHC processes reflecting this structure are
pp \to \tilde\chi_2^0 \, \tilde\chi_1^0 \to (\ell^+ \ell^- \tilde\chi_1^0) \, \tilde\chi_1^0

pp \to \tilde\chi_1^+ \, \tilde\chi_1^- \to (\ell^+ \nu_\ell \tilde\chi_1^0) \, (\ell^- \bar\nu_\ell \tilde\chi_1^0) .   (7.35)
The leptons in the decay of the heavier neutralinos and charginos can be replaced by other fermions.
Kinematically, the main question is if the fermions arise from on-shell gauge bosons or from intermediate
supersymmetric scalar partners of the leptons. The corresponding Feynman diagrams for the first of the two above
processes are
[Feynman diagrams: q\bar q \to \tilde\chi_2^0 \tilde\chi_1^0 with the decay \tilde\chi_2^0 \to Z(\to \ell^+\ell^-) \, \tilde\chi_1^0, and with the decay \tilde\chi_2^0 \to \tilde\ell \, \ell \to \ell^+ \ell^- \, \tilde\chi_1^0]
Which decay topology of the heavier neutralino dominates depends on the point in parameter space. The first of the two diagrams predicts dark matter production in association with a Z-boson. This is the same signature we found to be essentially irrelevant for initial state radiation in Section 7.2, namely mono-Z production.
The second topology raises the question of how many masses we can extract from two observed external momenta. Endpoint methods rely on lower (threshold) and upper (edge) kinematic endpoints of observed invariant
mass distributions. The art is to identify distributions where the endpoint is probed by realistic phase space
configurations. The most prominent example is m`` in the heavy neutralino decay in Eq.(7.35), proceeding
through an on-shell slepton. In the rest frame of the intermediate slepton the 2 → 2 process corresponding to the
decay of the heavy neutralino,
χ̃02 `− → `˜ → χ̃01 `− (7.36)
resembles the Drell–Yan process. Because of the scalar in the s-channel, angular correlations do not influence the
m`` distribution, so it will have a triangular shape. Its upper limit or edge can be computed in the slepton rest
frame. The incoming and outgoing three-momenta have the absolute values
|\vec p\,| = \frac{|m_{\tilde\chi_{1,2}^0}^2 - m_{\tilde\ell}^2|}{2 m_{\tilde\ell}} ,   (7.37)
assuming m_\ell = 0. The invariant mass of the two leptons reaches its maximum if the two leptons are back-to-back and the scattering angle is \cos\theta = -1:

m_{\ell\ell}^2 = (p_{\ell^+} + p_{\ell^-})^2
             = 2 \left( E_{\ell^+} E_{\ell^-} - |\vec p_{\ell^+}| |\vec p_{\ell^-}| \cos\theta \right)
             < 2 \left( E_{\ell^+} E_{\ell^-} + |\vec p_{\ell^+}| |\vec p_{\ell^-}| \right)
             = 4 \, \frac{m_{\tilde\chi_2^0}^2 - m_{\tilde\ell}^2}{2 m_{\tilde\ell}} \, \frac{m_{\tilde\ell}^2 - m_{\tilde\chi_1^0}^2}{2 m_{\tilde\ell}} \qquad \text{using } E_{\ell^\pm}^2 = \vec p_{\ell^\pm}^{\,2} .   (7.38)
The kinematic endpoint is then given by

0 < m_{\ell\ell}^2 < \frac{(m_{\tilde\chi_2^0}^2 - m_{\tilde\ell}^2)(m_{\tilde\ell}^2 - m_{\tilde\chi_1^0}^2)}{m_{\tilde\ell}^2} .   (7.39)
A generic feature of all methods relying on decay kinematics is that it is easier to constrain differences of squared masses than the absolute mass scale. This is because of the form of the endpoint formulas, which involve the difference of squared masses m_1^2 - m_2^2 = (m_1 + m_2)(m_1 - m_2). This combination is much more sensitive to (m_1 - m_2) than to (m_1 + m_2). The common lore that kinematics only constrain mass differences is not true for two-body decays, but mass differences are indeed easier.
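The edge formula of Eq.(7.39) can be validated with a short toy Monte Carlo of the decay chain through an on-shell slepton; the three masses are invented illustration values:

```python
import numpy as np

rng = np.random.default_rng(7)
# chi2, slepton, chi1 masses in GeV; invented illustration values
m2, msl, m1 = 180.0, 140.0, 100.0

# chi2 at rest: chi2 -> l+ (massless, along -z) + slepton (along +z)
p_l1 = (m2**2 - msl**2) / (2.0 * m2)   # first lepton: energy = momentum
E_sl = (m2**2 + msl**2) / (2.0 * m2)   # slepton energy, |p_sl| = p_l1

# slepton -> l- + chi1, isotropic in the slepton rest frame
p_star = (msl**2 - m1**2) / (2.0 * msl)
cos_t = rng.uniform(-1.0, 1.0, 200000)

# boost the second lepton into the chi2 frame (boost along +z)
gamma, beta = E_sl / msl, p_l1 / E_sl
E_l2 = gamma * p_star * (1.0 + beta * cos_t)
pz_l2 = gamma * p_star * (cos_t + beta)

# m_ll^2 = 2 (E1 E2 - p1.p2) with the first lepton along -z
mll = np.sqrt(2.0 * p_l1 * (E_l2 + pz_l2))

edge = np.sqrt((m2**2 - msl**2) * (msl**2 - m1**2)) / msl
print(mll.max(), edge)   # the sampled maximum approaches the edge of Eq.(7.39)
assert mll.max() <= edge + 1e-6
assert mll.max() > 0.999 * edge
```

A histogram of mll shows the triangular shape expected from the scalar in the s-channel, with a sharp cutoff at the analytic edge.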
The second set of supersymmetric dark matter signatures involves the same extended dark matter sector with its
neutralino and chargino spectra or a slepton. Because the slepton and the chargino are electrically charged, they
can be produced through a photon mediator,
pp \to \tilde\ell \, \tilde\ell^* \to (\ell^- \tilde\chi_1^0) \, (\ell^+ \tilde\chi_1^0)

pp \to \tilde\chi_1^+ \, \tilde\chi_1^- \to (\pi^+ \tilde\chi_1^0) \, (\pi^- \tilde\chi_1^0) .   (7.40)
For the slepton case one of the corresponding Feynman diagrams is
[Feynman diagram: q\bar q \to \gamma^* \to \tilde\ell \, \tilde\ell^* with the decays \tilde\ell \to \ell^- \tilde\chi_1^0 and \tilde\ell^* \to \ell^+ \tilde\chi_1^0]
Again the question arises how many masses we can extract from the measured external momenta. For this topology the variable m_{T2} generalizes the transverse mass known from W decays to the case of two massive invisible particles, one from each leg of the event. First, we divide the observed missing transverse momentum in the event into two fractions, \vec p_{T,\text{miss}} = \vec q_1 + \vec q_2. Then, we construct the transverse mass for each side of the event, assuming that we know the invisible particle's mass or scanning over hypothetical values \hat m_\text{miss}.
Inspired by the transverse mass in Eq.(7.34) we are interested in a mass variable with a well–defined upper
endpoint. For this purpose we construct some kind of minimum of mT,j as a function of the fractions qj . We know
that maximizing the transverse mass on one side of the event will minimize it on the other side, so we define
m_{T2}(\hat m_\text{miss}) = \min_{\vec p_{T,\text{miss}} = \vec q_1 + \vec q_2} \; \max_j \; m_{T,j}(\vec q_j; \hat m_\text{miss}) .   (7.41)
For the correct value of mmiss the mT 2 distribution has a sharp edge at the mass of the decaying particle. In
favorable cases mT 2 allows the measurement of both, the decaying particle and the invisible particle masses.
These two aspects we can see for the correct value \hat m_\text{miss} = m_\text{miss} in Figure 25: the lower threshold is indeed given by m_{T2} - m_{\tilde\chi_1^0} = m_\pi, while the upper edge of m_{T2} - m_{\tilde\chi_1^0} coincides with the dashed line for m_{\tilde\chi_1^+} - m_{\tilde\chi_1^0}.
An interesting aspect of m_{T2} is that it is boost invariant if and only if \hat m_\text{miss} = m_\text{miss}. For a wrong assignment of m_\text{miss} the value of m_{T2} has nothing to do with the actual kinematics and hence with any kind of invariant (and house numbers are not boost invariant). We can exploit this aspect by scanning over \hat m_\text{miss} and looking for so-called kinks, defined as points where different event kinematics all return the same value for m_{T2}.
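Eq.(7.41) can be implemented directly as a numerical minimax; a sketch using an iterative grid scan over the splitting of the missing transverse momentum, with invented masses and momenta, for a toy event in which both parents decay purely transversely so the result sits at the parent mass:

```python
import numpy as np

def mT(pT_vis, m_vis, qT, m_inv):
    """Transverse mass of Eq.(7.34) for one decay leg; qT may be a grid."""
    ET_vis = np.sqrt(m_vis**2 + pT_vis @ pT_vis)
    ET_inv = np.sqrt(m_inv**2 + qT[..., 0]**2 + qT[..., 1]**2)
    mt2 = (m_vis**2 + m_inv**2
           + 2.0 * (ET_vis * ET_inv - qT[..., 0] * pT_vis[0] - qT[..., 1] * pT_vis[1]))
    return np.sqrt(np.maximum(mt2, 0.0))

def mT2(p1, p2, m_vis, ptmiss, m_inv, half=150.0, n=41, levels=4):
    """Minimax of Eq.(7.41) by iterative grid refinement over the split q1."""
    center = 0.5 * ptmiss
    for _ in range(levels):
        gx = np.linspace(center[0] - half, center[0] + half, n)
        gy = np.linspace(center[1] - half, center[1] + half, n)
        QX, QY = np.meshgrid(gx, gy)
        q1 = np.stack([QX, QY], axis=-1)
        val = np.maximum(mT(p1, m_vis, q1, m_inv),
                         mT(p2, m_vis, ptmiss - q1, m_inv))
        k = np.unravel_index(np.argmin(val), val.shape)
        center, best, half = q1[k], val[k], half / 10.0
    return best

# Toy event, invented numbers: two parents of mass M produced at rest, each
# decaying purely transversely to a pion-like visible particle plus an
# invisible state of mass m_inv; both visible momenta taken parallel.
M, m_inv, m_vis = 200.0, 100.0, 0.14
p = np.sqrt((M**2 - (m_vis + m_inv)**2) * (M**2 - (m_vis - m_inv)**2)) / (2.0 * M)
p1 = p2 = np.array([p, 0.0])
ptmiss = -(p1 + p2)          # invisible momenta balance the toy event

val = mT2(p1, p2, m_vis, ptmiss, m_inv)
print(val)   # for the correct m_inv this configuration sits at the edge, M
assert abs(val - M) < 1.0
```

A realistic analysis would run this minimax over many events and read off the endpoint of the resulting m_{T2} distribution.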
Finally, we can account for the fact that supersymmetry predicts new strongly interacting particles. These are the
scalar partners of the quarks and the fermionic partner of the gluon. For dark matter physics the squarks are more
interesting, because their quark-squark-neutralino coupling makes them dark matter mediators to the strongly
interacting visible matter sector. The same coupling also allows the squarks to decay into dark matter and a jet, leading to a dark matter signature of jets plus missing transverse momentum.
Example Feynman diagrams showing the role of squarks as t-channel colored mediators and as heavy particles
decaying to dark matter are
[Figure 25: m_{T2}-type distributions for chargino pair events in the \pi\pi, e\pi, and ee channels; the dashed line marks m_{\tilde\chi_1^+} - m_{\tilde\chi_1^0}.]
[Feynman diagrams: neutralino pair production q\bar q \to \tilde\chi_1^0 \tilde\chi_1^0 through a t-channel squark, and squark pair production with subsequent decays \tilde q \to q \, \tilde\chi_1^0]
Note that these two squark-induced signatures cannot be separated, because they rely on the same two couplings,
the quark-squark-neutralino coupling and the QCD-induced squark coupling to a gluon. Kinematically, they add
nothing new to the above arguments: the first diagram will contribute to the mono-jet signature, with the additional
possibility to radiate a gluon off the t-channel mediator, and to pair-production of neutralinos and charginos; the
second diagram asks for a classic mT 2 analysis. Moreover, the production process of Eq.(7.43) is QCD-mediated
and the 100% branching fraction gives us no information about the mediator interaction to dark matter. In other
words, for this pair-production process there exists no link between LHC observables and dark matter properties.
The non-negligible effect of the t-channel squark mediator adding to the s-channel Z-mediator for processes of the
kind
pp → χ̃0i χ̃0j (7.44)
has to do with the couplings. From Eq.(4.43) we know that for neutralinos the higgsino content couples to the
Z-mediator while the gaugino content couples to light-flavor squarks. In addition, the s-channel and t-channel
diagrams typically interfere destructively, so we can tune the squark mass to significantly reduce the neutralino
pair production cross section. The largest cross section for direct neutralino-chargino production is usually
pp \to \tilde\chi_1^+ \, \tilde\chi_2^0 \to (\ell^+ \nu \, \tilde\chi_1^0) \, (\ell^+ \ell^- \, \tilde\chi_1^0) \qquad\text{with}\qquad \sigma(\tilde\chi_1^\pm \tilde\chi_2^0) \lesssim 1 \,\text{pb} ,   (7.45)
for mχ > 200 GeV. This decay leads to a tri-lepton signature with off-shell gauge bosons in the decay. The
backgrounds are pair production of weak bosons and hence small. Just as a comparison, squark pair production,
pp \to \tilde q \tilde q^*, can reach cross sections in the pico-barn range even for squark masses above 1 TeV.
Before the LHC started running, studies of decay chains with dark matter states at their end were in fashion. Here, squark decays had a large impact through the stereotypical cascade decay \tilde q \to q \, \tilde\chi_2^0 \to q \, \ell \, \tilde\ell \to q \, \ell^+ \ell^- \, \tilde\chi_1^0. One thing we know for example from the di-lepton edge is that invariant masses can just be an invariant way of
writing angular correlations between outgoing particles. Those depend on the spin and quantum numbers of all
particles involved. While measuring for example the spin of new particles is hard in the absence of fully
reconstructed events, we can try to probe it in the kinematics of cascade decays. The squark decay chain was the
first case where such a strategy was worked out [27]:
1. Instead of measuring individual spins in a cascade decay we assume that cascade decays radiate particles
with known spins. For radiated quarks and leptons the spins inside the decay chain alternate between
fermions and bosons. Therefore, we contrast supersymmetry with another hypothesis, where the spins in the
decay chain follow the Standard Model assignments. An example of such a model is Universal Extra Dimensions, where each Standard Model particle acquires a Kaluza-Klein partner from the propagation in
the bulk of the additional dimensions;
2. The kinematical endpoints are completely determined by the masses and cannot be used to distinguish
between the spin assignments. In contrast, the distributions between endpoints reflect angular correlations.
For example, the mj` distribution in principle allows us to analyze spin correlations in squark decays in a
Lorentz-invariant way. The only problem is the link between `± and their ordering in the decay chain;
3. As a proton–proton collider the LHC produces considerably more squarks than anti-squarks in the
squark–gluino production process. A decaying squark radiates a quark while an antisquark radiates an
antiquark, which means that we can define a non-zero production-side asymmetry between mj`+ and mj`− .
Such an asymmetry we show in Figure 26, for the SUSY and for the UED hypotheses. Provided the masses
in the decay chain are not too degenerate we can indeed distinguish the two hypotheses.
In Section 4.4 we introduced an effective field theory of dark matter to describe dark matter annihilation in the
early universe. If the annihilation process is the usual 2 → 2 WIMP scattering process it is formulated in terms of
a dark matter mass mχ and a mediator mass mmed , where the latter does not correspond to a propagating degree of
freedom. It can hence be identified with a general suppression scale Λ in an effective Lagrangian, like the one
illustrated in Eq.(4.59). All experimental environments discussed in the previous sections, including the relic
density, indirect detection, and direct detection, rely on non-relativistic dark matter scattering. This means they can
be described by a dark matter EFT if the mediator is much heavier than the dark matter agent,
m_\chi \ll m_\text{med} .   (7.47)
In contrast, LHC physics is entirely relativistic and neither the incoming partons nor the outgoing dark matter
particles in the schematic diagram shown in Section 1.3 are at low velocities. This means we have to add the
partonic energy of the scattering process to the relevant energy scales,
\{ \, m_\chi, \; m_\text{med}, \; \sqrt{s} \, \} .   (7.48)
In the case of mono-jet production, described in Section 7.2, the key observables here are the E/ T and pT,j
distributions. For simple hard processes the two transverse momentum distributions are rapidly dropping and
strongly correlated. This defines the relevant energy scales as

\{ \, m_\chi, \; m_\text{med}, \; \slashed{E}_T^\text{min} \, \} .   (7.49)
The experimentally relevant \slashed{E}_T or p_{T,j} regime is given by a combination of the signal mass scale, the kinematics of the dominant Z(\to \nu\bar\nu)+jets background, and triggering. Our effective theory then has to reproduce two key observables,

\sigma_\text{tot}(m_\chi, m_\text{med}) \Big|_\text{acceptance} \qquad\text{and}\qquad \frac{d\sigma(m_\chi, m_\text{med})}{d\slashed{E}_T} \sim \frac{d\sigma(m_\chi, m_\text{med})}{dp_{T,j}} .   (7.50)

[Figure 26: Asymmetry in m_{j\ell}/m_{j\ell}^\text{max} for supersymmetry (dashed) and universal extra dimensions (solid). The spectrum is assumed to be hierarchical, which is typical for supersymmetric theories. Figure from Ref. [27].]
For the total rate, different phase space regions which individually agree poorly between the effective theory and some underlying model might still combine to a decent rate. For the main distributions this is no longer possible.
Finally, the hadronic LHC energy of 13 TeV, combined with reasonable parton momentum fractions, defines an absolute upper limit, above which for example a particle in the s-channel cannot be produced as a propagating state,

\{ \, m_\chi, \; m_\text{med}, \; \slashed{E}_T^\text{min}, \; \sqrt{s_\text{max}} \, \} .   (7.51)

This fourth scale is not the hadronic collision energy of 13 TeV. From the typical LHC reach for heavy resonances in the s-channel we expect it to be in the range \sqrt{s_\text{max}} = 5 \dots 8 TeV, depending on the details of the mediator.
From what we know from these lecture notes, establishing a completely general dark matter EFT approach at the
LHC is not going to work. The Higgs portal results of Section 7.3 indicate that the only way to systematically
search for its dark matter scalar is through invisible Higgs decays. By definition, those will be entirely dominated
by on-shell Higgs production, not described by an effective field theory with a non-propagating mediator.
Similarly, in the MSSM a sizeable fraction of the mediators are either light SM particles or s-channel particles
within the reach of the LHC. Moreover, we need to add propagating degrees of freedom for co-annihilation partners, more or less close to the dark matter sector.
On the other hand, the fact that some of our favorite dark matter models are not described well by an effective
Lagrangian does not mean that we cannot use such an effective Lagrangian for other classes of dark matter models.
One appropriate way to test the EFT approach at the LHC is to rely on specific simplified models, as introduced in Section 5.4. Three simplified models come to mind for a fermion dark matter agent [21]:

1. tree-level s-channel vector mediator, as discussed in Section 5.4;
2. tree-level t-channel scalar mediator, realized as light-flavor scalar quarks in the MSSM, Section 7.4;
3. loop-mediated s-channel scalar mediator, realized as heavy Higgses in the MSSM, Section 7.4.
For the tree-level vector the situation at the LHC already becomes obvious in Section 5.4. The EFT approach is
only applicable when also at the LHC the vector mediator is produced away from its mass shell, requiring roughly
mV > 5 TeV. The problem in this parameter range is that the dark matter annihilation cross section will be
typically too small to provide the observed relic density. This makes the parameter region where this mediator can
be described by global EFT analyses very small.
We start our more quantitative discussion with a tree-level t-channel scalar ũ. Unlike for the vector mediator, the
t-channel mediator model only makes sense in the half plane with mχ < mũ ; otherwise the dark matter agent
would decay. At the LHC we have to consider different production processes. Beyond the unobservable process
u\bar u \to \chi\bar\chi the two relevant topologies leading to mono-jet production are

u\bar u \to \chi\bar\chi \, g \qquad\text{and}\qquad u g \to \chi\bar\chi \, u .   (7.52)
They are of the same order in perturbation theory and experimentally indistinguishable. The second process can
be dominated by on-shell mediator production, ug → χũ → χ (χ̄u). We can cross its amplitude to describe the
co-annihilation process \chi\tilde u \to u g. The difference between the (co-)annihilation and LHC interpretations of the same amplitude is that it only contributes to the relic density for m_{\tilde u} within roughly 10% of m_\chi, while it dominates mono-jet production for a wide range of mediator masses.
Following Eq.(7.43) we can also pair-produce the necessarily strongly interacting mediators with a subsequent decay to two jets plus missing energy,

pp \to \tilde u \, \tilde u^* \to (u \, \chi) \, (\bar u \, \bar\chi) .   (7.53)

The partonic initial state of this process can be quarks or gluons. For a wide range of dark matter and mediator masses this process completely dominates the \chi\chi+jets process.
When the t-channel mediator becomes heavy, for example mono-jet production with the partonic processes given
in Eq.(7.52) can be described by an effective four-fermion operator,
\mathcal{L} \supset \frac{c}{\Lambda^2} \, (\bar u_R \chi) \, (\bar\chi u_R) .   (7.54)
The natural matching scale will be around Λ = mũ . Note that this operator mediates the t-channel as well as the
single-resonant mediator production topologies and the pair production process induced by quarks. In contrast,
pair production from two gluons requires a higher-dimensional operator involving the gluon field strength, like for
example
\mathcal{L} \supset \frac{c}{\Lambda^3} \, (\bar\chi\chi) \, G_{\mu\nu} G^{\mu\nu} .   (7.55)
This leads to a much faster decoupling pattern of the pair production process for a heavy mediator.
Because the t-channel mediator carries color charge, LHC constraints typically force us into the regime m_{\tilde u} \gtrsim 1 TeV, where an EFT approach can be viable. In addition, we again need to generate a large dark matter annihilation rate, which based on the usual scaling can be achieved by requiring m_{\tilde u} \gtrsim m_\chi. For heavy mediators, pair production decouples rapidly and leads to a parameter region where single-resonant production plays an important role. It is described by the same effective Lagrangian as the generic t-channel process, and decouples more rapidly than the t-channel diagram for m_{\tilde u} \gtrsim 5 TeV. These actual mass values unfortunately imply that the
remaining parameter regions suitable for an EFT description typically predict very small LHC rates.

The third
simplified model we discuss is a scalar s-channel mediator. To generate a sizeable LHC rate we do not rely on its
Yukawa couplings to light quarks, but on a loop-induced coupling to gluons, in complete analogy to SM-like light
Higgs production at the LHC. The situation is slightly different for most of the supersymmetric parameter space
for heavy Higgses, which have reduced top Yukawa couplings and are therefore much harder to produce at the
LHC. Two relevant Feynman diagrams for mono-jet production are
[Feynman diagrams: gg \to S(\to \chi\bar\chi) + g through a top-quark loop, with the extra gluon radiated off the initial state or off the loop]
Coupling the scalar only to the top quark, we define the Lagrangian for the simplified scalar mediator model as
\mathcal{L} \supset - \frac{y_t \, m_t}{v} \, S \, \bar t t + g_\chi \, S \, \bar\chi\chi .   (7.56)
The factor mt /v in the top Yukawa coupling is conventional, to allow for an easier comparison to the Higgs case.
The scalar coupling to the dark matter fermions can be linked to mχ , but does not have to. We know that the SM
Higgs is a very narrow resonance, while in this case the total width is bounded by the partial width from scalar
decays to the top quark,
\frac{\Gamma_S}{m_S} > \frac{3 G_F \, m_t^2 \, y_t^2}{4\sqrt{2}\,\pi} \left( 1 - \frac{4 m_t^2}{m_S^2} \right)^{3/2} \;\overset{m_S \gg m_t}{=}\; \frac{3 G_F \, m_t^2 \, y_t^2}{4\sqrt{2}\,\pi} \approx 5\% ,   (7.57)
assuming yt ≈ 1. Again, this is different from the case of supersymmetric, heavy Higgses, which can be broad.
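The numerical size of the width bound in Eq.(7.57) is a one-line check; a sketch using y_t = 1, where the precise result shifts by roughly a per-cent depending on which top mass (pole or MS-bar) one inserts:

```python
import math

GF = 1.1663787e-5   # Fermi constant in GeV^-2
mt = 165.0          # top mass in GeV; scheme choice shifts the result slightly
yt = 1.0

# asymptotic limit m_S >> m_t of Eq.(7.57)
ratio = 3.0 * GF * mt**2 * yt**2 / (4.0 * math.sqrt(2.0) * math.pi)

print(ratio)  # a width-to-mass ratio of roughly 5%
assert 0.04 < ratio < 0.07
```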
To get a rough idea what kind of parameter space might be interesting, we can look at the relic density. The
problem in this prediction is that for mχ < mt the annihilation channel χχ → tt̄ is kinematically closed. Going
through the same amplitude as the one for LHC production, very light dark matter will annihilate to two gluons
through a top loop. If we allow for that coupling, the tree-level process χχ → cc̄ will dominate for slightly heavier
dark matter. If there also exists a Yukawa coupling of the mediator to bottom quarks, the annihilation channel
\chi\chi \to b\bar b will then take over for slightly heavier dark matter. Even heavier dark matter will annihilate into
off-shell top quarks, χχ → (W + b)(W − b̄), and for mχ > mt the tree-level 2 → 2 annihilation process χχ → tt̄
will provide very efficient annihilation. None of the aspects determining the correct annihilation channels are
well-defined within the simplified model. Moreover, in the Lagrangian of Eq.(7.56) we can easily replace the
scalar S with a pseudo-scalar, which will affect all non-relativistic processes.
For our global EFT picture this means that if a scalar s-channel mediator is predominantly coupled to up-type quarks,
the link between the LHC production rate and the predicted relic density essentially vanishes. The two observables
are only related if the mediator is very light and decays through the one-loop diagram to a pair of gluons. This is
exactly where the usual dark matter EFT will not be applicable.
If we only look at the LHC, the situation becomes much simpler. The dominant production process
gg → S + jets → χχ + jets (7.58)
defines the mono-jet signature through initial-state radiation and through gluon radiation off the top loop. The
mono-jet rate will factorize into σS+j × BRχχ . The production process is well known from Higgs physics,
including the phase space region with a large jet and the logarithmic top mass dependence of the transverse
momentum distribution,
dσSj/dpT,j = dσSj/dpT,S ∝ log⁴(pT,j²/mt²) . (7.59)
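The factorization σS+j × BRχχ quoted above can be sketched using the standard tree-level widths of a scalar with the couplings of Eq.(7.56) (the width formulas are consistent with Eq.(7.57); all example masses and couplings below are assumptions, not values from the notes):

```python
import math

# Sketch of BR(S -> chi chibar) from tree-level widths of the scalar
# mediator of Eq.(7.56); example masses/couplings are assumptions.
v, m_t = 246.0, 173.0   # Higgs vev and top mass [GeV]

def beta3(m, M):
    """Cube of the decay velocity factor, zero below threshold."""
    return max(0.0, 1 - 4 * m**2 / M**2) ** 1.5

def gamma_tt(m_S, y_t=1.0):
    # Gamma(S -> t tbar) for the coupling y_t m_t / v, color factor 3
    return 3 * y_t**2 * m_t**2 * m_S / (8 * math.pi * v**2) * beta3(m_t, m_S)

def gamma_chichi(m_S, m_chi, g_chi=1.0):
    # Gamma(S -> chi chibar) for the Yukawa-type coupling g_chi
    return g_chi**2 * m_S / (8 * math.pi) * beta3(m_chi, m_S)

def br_chichi(m_S, m_chi):
    inv = gamma_chichi(m_S, m_chi)
    return inv / (inv + gamma_tt(m_S))

# below the t tbar threshold the mediator only decays invisibly
print(br_chichi(300.0, 10.0))  # = 1.0
print(br_chichi(500.0, 10.0))  # ≈ 0.64 above threshold
```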
Based on the Lagrangian given in Eq.(7.56) and the transverse momentum dependence given in Eq.(7.59), the
mono-jet signal at the LHC depends on the four energy scales,
{ mχ, mS, mt, E̸T = pT,j } , (7.60)
which have to be organized in an effective field theory. If we focus on total rates, we are still left with three mass
scales with different possible hierarchies:
1. The dark matter agent obviously has to remain a propagating degree of freedom, so in analogy to the SM
Higgs case we can first assume a non-propagating top quark
mt > mS > 2mχ . (7.61)
This defines the effective Lagrangian
L⁽¹⁾ ⊃ (c/Λ) S Gµν Gµν − gχ S χ̄χ . (7.62)
It is similar to the effective Higgs–gluon coupling in direct detection, defined in Eq.(6.14). The Wilson
coefficient can be determined at the matching scale Λ = mt and assumes the simple form
c/Λ = αs yt/(12π v) . (7.63)
In this effective theory the transverse momentum spectra will fail to reproduce large logarithms of the type
log(pT /mt ), limiting the agreement between the simplified model and its EFT approximation.
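To get a feeling for how fast these logarithms grow (an illustration, with example pT values assumed):

```python
import math

# Illustration of the log^4(p_T^2 / m_t^2) behavior of Eq.(7.59), which
# the top-decoupled EFT fails to reproduce; p_T values are example choices.
m_t = 173.0

def log4(p_T):
    return math.log(p_T**2 / m_t**2) ** 4

for p_T in (200.0, 400.0, 800.0):
    print(p_T, log4(p_T))
```

Between pT = 200 GeV and pT = 400 GeV the log⁴ factor grows by roughly three orders of magnitude, which is why the transverse momentum spectra of the simplified model and its top-decoupled EFT diverge.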
2. Next, we can instead decouple the mediator, assuming mS to be the largest mass scale, leading to the usual dimension-6 four-fermion operators coupling dark matter to the resolved top loop,
L⁽²⁾ ⊃ (c/Λ²) (t̄t) (χ̄χ) . (7.65)
The Wilson coefficient we obtain from matching at Λ = mS is
c/Λ² = yt gχ mt/(mS² v) . (7.66)
This effective theory will retain all top mass effects in the distributions.
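For orientation, the two matched Wilson coefficients can be evaluated numerically (a sketch; αs and the masses are assumed input values, and the scalar-gluon coefficient uses the Higgs-like matching quoted for Eq.(7.63)):

```python
import math

# Numerical size of the matched Wilson coefficients: the scalar-gluon
# EFT of Eq.(7.63) and the four-fermion EFT of Eq.(7.66).
alpha_s = 0.118   # strong coupling (assumed value at the matching scale)
v, m_t = 246.0, 173.0

def c_over_Lambda(y_t=1.0):
    """EFT 1, Eq.(7.63): scalar-gluon coupling in GeV^-1."""
    return alpha_s * y_t / (12 * math.pi * v)

def c_over_Lambda2(m_S, y_t=1.0, g_chi=1.0):
    """EFT 2, Eq.(7.66): four-fermion coefficient in GeV^-2."""
    return y_t * g_chi * m_t / (m_S**2 * v)

print(c_over_Lambda())         # ≈ 1.3e-5 GeV^-1
print(c_over_Lambda2(1000.0))  # ≈ 7e-7 GeV^-2 for a 1 TeV mediator
```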
3. Finally, we can decouple the top as well as the mediator, leading to the dimension-7 operator
L⁽³⁾ ⊃ (c/Λ³) (χ̄χ) Gµν Gµν with c/Λ³ = αs yt gχ/(12π v mS²) . (7.68)
We show the predictions for the total LHC rate based on these three effective theories and the simplified model in
the left panel of Figure 27. The decoupled top ansatz L (1) of Eq.(7.62) indeed reproduces the correct total rate for
mS < 2mt . Above that threshold it systematically overestimates the cross section. The effective Lagrangian L (2)
with a decoupled mediator, Eq.(7.65), reproduces the simplified model for mS ≳ 5 TeV. Beyond this value the
LHC energy is not sufficient to produce the mediator on-shell. Finally, the effective Lagrangian L (3) with a
Figure 27: Total mono-jet rate in the loop-mediated s-channel scalar model as a function of the mediator mass, for
mχ = 10 GeV (left) and mχ = 100 GeV (right). We show all three effective theories and the simplified model. For
the shaded regions the annihilation cross section reproduces the observed relic density within Ωχobs/3 and
Ωχobs + 10%, for a mediator coupling only to up-type quarks (red) or to both types of quarks (green). Figure from
Ref. [21].
simultaneously decoupled top quark and mediator, Eq.(7.68), does not reproduce the total production rate
anywhere.
In the right panel of Figure 27 we show the mono-jet rate for heavier dark matter and the parameter regions where
the simplified model predicts a roughly correct relic density. In this range only the EFT with the decoupled
mediator, defined in Eq.(7.65), makes sense. Because the model gives us this freedom, we also test what happens
to the combination with the relic density when we couple the mediator to all quarks, rather than up-quarks only.
Altogether, we find that in the region of heavy mediators the EFT is valid for LHC observables if the mediator is too heavy to be produced on-shell, mS ≳ 5 TeV.
This is similar to the range of EFT validity for the s-channel vector model.
8 Further reading
First, we would like to emphasize that our list of references is limited to the legally required sources of figures
and to slightly more advanced material providing more details about the topics discussed in these lecture notes.
Our discussion on the general relativity background and cosmology is a very brief summary. Dedicated textbooks
include the classics by Kolb & Turner [6], Bergström & Goobar [28], Weinberg [29], as well as the more modern
books by Scott Dodelson [30] and Max Tegmark [3]. More details on the role of dark matter in the history of the
universe are given in the book by Gianfranco Bertone and Dan Hooper [31] and in the notes by Flip Tanedo [32] and
Yann Mambrini [33]. Jim Cline’s TASI lectures [34] serve as an up-to-date discussion on the role of dark matter in
the history of the Universe. Further details on the cosmic microwave background and structure formation are also
in the lecture notes on cosmological perturbation theory by Hannu Kurki-Suonio that are available online [35], as
well as in the lecture notes on Cosmology by Joao Rosa [36] and Daniel Baumann [1].
For models of particle dark matter, Ref. [37] provides a list of consistency tests. For further reading on WIMP dark
matter we recommend the didactic review article Ref. [22]. Reference [38] addresses details on WIMP
annihilation and the resulting constraints from the cosmic microwave background radiation. A more detailed
treatment of the calculation of the relic density for a WIMP is given in Ref. [39]. Felix Kahlhöfer has written a
nice review article on LHC searches for WIMPs [40]. For further reading on the effect of the Sommerfeld
enhancement, we recommend Ref. [41].
Extensions of the WIMP paradigm can result in a modified freeze-out mechanism, as is the case of the
co-annihilation scenario. These exceptions to the most straightforward dark matter freeze-out have originally been
discussed by Griest and Seckel in Ref. [42]. A nice systematic discussion of recent research along these lines can be found
in Ref. [43].
For models of non-WIMP dark matter, the review article Ref. [44] provides many details. A very good review of
axions is given in Roberto Peccei's notes [45], while axions as dark matter candidates are discussed in Ref. [46].
Mariangela Lisanti's TASI lectures [8] provide a pedagogical overview of these different dark matter candidates. Details
on light dark matter, in particular hidden photons, can be found in Tongyan Lin’s notes for her 2018 TASI
lecture [47].
Details on calculations for the direct search for dark matter can be found in the review by Lewin and Smith [48].
Gondolo and Silk provide details for dark matter annihilation in the galactic center [49], as do the TASI lecture
notes of Dan Hooper [50]. For many more details on indirect detection of dark matter we refer to Tracy Slatyer’s
TASI lectures [51].
Note that the one aspect these lecture notes are still missing is the chapter on the discovery of WIMPs. We plan to add
an in-depth discussion of the WIMP discovery to an updated version of these notes.
Acknowledgments
TP would like to thank many friends who have taught him dark matter, starting with the always-inspiring Dan
Hooper. Dan also introduced him to deep-fried cheese curds and to the best ribs in Chicago. Tim Tait was of great
help in at least two ways: for years he showed us that it is fun to work on dark matter even as a trained collider
physicist; and then he answered every single email during the preparation of these notes. Our experimental
co-lecturer Teresa Marrodan Undagoitia showed not only our students, but also us how inspiring dark matter
physics can be. As co-authors of Refs [12, 13] Joe Bramante, Adam Martin, and Paddy Fox gave us a great course
on dark matter physics while we were writing these papers. Pedro Ruiz-Femenia was extremely helpful explaining
the field theory behind the Sommerfeld enhancement to us. Jörg Jäckel for a long time and over many coffees tried
to convince everybody that the axion is a great dark matter candidate. Teaching and discussing with Björn-Malte
Schäfer was an excellent course on how thermodynamics is actually useful. Finally, there are many people who
helped us with valuable advice while we prepared this course, like John Beacom, Martin Schmaltz and Felix
Kahlhöfer, and people who commented on the notes, like Elias Bernreuther, Johann Brehmer, Michael Baker, Anja
Butter, Björn Eichmann, Ayres Freitas, Jan Horak, Michael Ratz, and Michael Schmidt.
Index
acoustic oscillations, 18, 25
angular diameter distance, 11
anti-matter, 36
axion, 34
axion-like particle, 36
baryon number violation, 37
Boltzmann equation, 44, 45, 56
cascade decay, 108
co-annihilation, 47
co-moving distance, 11
collider, 95
collinear divergence, 100
continuity equation, 22
cosmic microwave background, 16
cosmological constant, 5
CP violation, 39
critical density, 8
curvature, 6
dark matter
  annihilation, 41, 48, 59, 71, 74
  asymmetric, 39
  cold, 27
  fuzzy, 28, 36
  hot, 31
  light, 28, 31
  mixed, 28
  secluded, 65
  self-interacting, 28
  supersymmetric, 68
  warm, 28
degrees of freedom, 12, 13, 30, 42, 71
dimensional transmutation, 88
direct detection, 86
effective field theory, 71, 72, 109
Einstein equation, 7
Einstein-Hilbert action, 5
endpoint methods, 106
entropy, 10, 30, 37
Euler equation, 22
FIMP, 55
freeze in, 57
freeze out, 29, 37, 40, 42, 49, 62
freeze-in, 55
Friedmann equation, 7, 12, 22
Friedmann–Lemaitre–Robertson–Walker model, 8
galactic center excess, 76
halo profile, 74
  Burkert, 76
  Einasto, 76
  Navarro-Frenk-White, 75
heavy quark effective theory, 88
hidden photon, 64
Higgs
  funnel, 70
  portal, 58, 77, 87, 103
  potential, 58
Hubble constant, 5
Hubble law, 5
Hulthen potential, 53
indirect detection, 74
inflation, 9
Jeans equation, 24
Jeans length, 24, 27
kinetic mixing, 63
Mandelstam variables, 44, 100
matter-radiation equality, 12
mean free path, 15
mediator, 40, 47, 56, 58, 64, 95
  s-channel, 49, 70, 103, 111
  t-channel, 49, 110
misalignment mechanism, 32
missing energy, 97
mono-X, 97, 101
MSSM, 66
N-body simulations, 25
Nambu-Goldstone boson, 33, 61
neutralino, 65, 92, 104
  bino, 67, 68, 80
  higgsino, 66, 69
  singlino, 82
  wino, 66–68
neutrino-electron scattering, 29
NMSSM, 82
nucleosynthesis, 15
number density, 13, 15, 30, 43, 45, 56
pair production, 110
parton density function, 99
phase transition, 38
PLANCK, 22, 25, 40, 49, 65
Poisson equation, 22
power spectrum, 18, 25
relic
abundance, 46, 57
neutrinos, 30
photons, 14
thermal abundance, 41
Sachs–Wolfe effect, 19
Sakharov conditions, 37
scale factor, 6, 7
Schrödinger equation, 52
simplified model, 83, 84, 104, 111
Sommerfeld enhancement, 50, 69
speed of sound, 20, 27
sphaleron, 38
spin-dependent cross section, 94
spin-independent cross section, 87
Stefan–Boltzmann scaling, 13
sterile neutrinos, 28
structure formation, 22
systematics, 102
Xenon, 91
yield, 45, 57
Yukawa potential, 53
References
[1] D. Baumann, “Cosmology,” http://www.damtp.cam.ac.uk/people/d.baumann
[2] Planck Collaboration, “Planck 2015 results. XI. CMB power spectra, likelihoods, and robustness of
parameters,” Submitted to: Astron.Astrophys. [arXiv:1507.02704].
[3] M. Tegmark, “Measuring space-time: From big bang to black holes,” Lect. Notes Phys. 646, 169 (2004)
[arXiv:astro-ph/0207199].
[4] A. Schneider, "Cosmic Structure Formation and Dark Matter," talk at the workshop "Dark Matter at the Dawn of
a Discovery?", Heidelberg 2018, https://indico.cern.ch/event/678386/contributions/
2893445/attachments/1631142/2600258/Schneider_Heidelberg2018.pdf.
[5] M. Tanabashi et al. [Particle Data Group], “Review of Particle Physics,” Phys. Rev. D 98, no. 3, 030001
(2018).
[6] E. W. Kolb and M. S. Turner, “The Early Universe,” Front. Phys. 69, 1 (1990).
[7] B. Bellazzini, M. Cliche and P. Tanedo, “Effective theory of self-interacting dark matter,” Phys. Rev. D 88,
no. 8, 083506 (2013) [arXiv:1307.1129 [hep-ph]].
[8] M. Lisanti, "Lectures on Dark Matter Physics," arXiv:1603.03797 [hep-ph].
[9] N. Bernal, M. Heikinheimo, T. Tenkanen, K. Tuominen and V. Vaskonen, “The Dawn of FIMP Dark Matter:
A Review of Models and Constraints,” Int. J. Mod. Phys. A 32, no. 27, 1730023 (2017) [arXiv:1706.07442
[hep-ph]].
[10] T. Plehn, "Lectures on LHC Physics," Lect. Notes Phys. 886 (2015) [arXiv:0910.4182 [hep-ph]].
http://www.thphys.uni-heidelberg.de/˜plehn/?visible=review
[11] A. Djouadi, O. Lebedev, Y. Mambrini and J. Quevillon, "Implications of LHC searches for Higgs–portal dark
matter," Phys. Lett. B 709, 65 (2012) [arXiv:1112.3299 [hep-ph]].
[12] J. Bramante, N. Desai, P. Fox, A. Martin, B. Ostdiek and T. Plehn, “Towards the Final Word on Neutralino
Dark Matter,” Phys. Rev. D 93, no. 6, 063525 (2016) [arXiv:1510.03460 [hep-ph]].
[13] J. Bramante, P. J. Fox, A. Martin, B. Ostdiek, T. Plehn, T. Schell and M. Takeuchi, “Relic neutralino surface
at a 100 TeV collider,” Phys. Rev. D 91, 054015 (2015) [arXiv:1412.4789 [hep-ph]].
[14] J. Goodman, M. Ibe, A. Rajaraman, W. Shepherd, T. M. P. Tait and H. B. Yu, “Constraints on Light Majorana
dark Matter from Colliders,” Phys. Lett. B 695, 185 (2011) [arXiv:1005.1286 [hep-ph]].
[15] A. Butter, S. Murgia, T. Plehn and T. M. P. Tait, “Saving the MSSM from the Galactic Center Excess,” Phys.
Rev. D 96, no. 3, 035036 (2017) [arXiv:1612.07115 [hep-ph]].
[16] S. Murgia et al. [Fermi-LAT Collaboration], “Fermi-LAT Observations of High-Energy γ-Ray Emission
Toward the Galactic Center,” Astrophys. J. 819, no. 1, 44 (2016) [arXiv:1511.02938 [astro-ph.HE]].
[17] F. Calore, I. Cholis, C. McCabe and C. Weniger, “A Tale of Tails: Dark Matter Interpretations of the Fermi
GeV Excess in Light of Background Model Systematics,” Phys. Rev. D 91, no. 6, 063003 (2015)
[arXiv:1411.4647 [hep-ph]].
[18] A. Berlin, D. Hooper and S. D. McDermott, “Simplified Dark Matter Models for the Galactic Center
Gamma-Ray Excess,” Phys. Rev. D 89, no. 11, 115022 (2014) [arXiv:1404.0022 [hep-ph]].
[19] M. Ibe, H. Murayama and T. T. Yanagida, “Breit-Wigner Enhancement of Dark Matter Annihilation,” Phys.
Rev. D 79, 095009 (2009) [arXiv:0812.0072 [hep-ph]].
[20] A. Butter, T. Plehn, M. Rauch, D. Zerwas, S. Henrot-Versillé and R. Lafaye, “Invisible Higgs Decays to
Hooperons in the NMSSM,” Phys. Rev. D 93, 015011 (2016) [arXiv:1507.02288 [hep-ph]].
[21] M. Bauer, A. Butter, N. Desai, J. Gonzalez-Fraile and T. Plehn, “Validity of dark matter effective theory,”
Phys. Rev. D 95, no. 7, 075036 (2017) [arXiv:1611.09908 [hep-ph]].
[22] G. Arcadi, M. Dutra, P. Ghosh, M. Lindner, Y. Mambrini, M. Pierre, S. Profumo and F. S. Queiroz, “The
waning of the WIMP? A review of models, searches, and constraints,” Eur. Phys. J. C 78, no. 3, 203 (2018)
[arXiv:1703.07364 [hep-ph]].
[23] J. L. Feng et al., “Planning the Future of U.S. Particle Physics (Snowmass 2013): Chapter 4: Cosmic
Frontier,” arXiv:1401.6085 [hep-ex].
[24] J. M. Campbell, J. W. Huston and W. J. Stirling, “Hard Interactions of Quarks and Gluons: A Primer for LHC
Physics,” Rept. Prog. Phys. 70, 89 (2007) [arXiv:hep-ph/0611148].
[25] E. Bernreuther, J. Horak, T. Plehn and A. Butter, “Actual Physics behind Mono-X,” SciPost
arXiv:1805.11637 [hep-ph].
[26] A. Barr, C. Lester and P. Stephens, “m(T2) : The Truth behind the glamour,” J. Phys. G 29, 2343 (2003)
[arXiv:hep-ph/0304226].
[27] J. M. Smillie and B. R. Webber, “Distinguishing Spins in Supersymmetric and Universal Extra Dimension
Models at the Large Hadron Collider,” JHEP 0510, 069 (2005) [arXiv:hep-ph/0507170].
[28] L. Bergstrom and A. Goobar, “Cosmology and particle astrophysics,” Chichester, UK: Wiley (1999).
[29] S. Weinberg, “Cosmology,” Oxford, UK: Oxford Univ. Press (2008).
[30] S. Dodelson, “Modern Cosmology,” Amsterdam, Netherlands: Academic Pr. (2003).
[31] G. Bertone and D. Hooper, “A History of Dark Matter,” arXiv:1605.04909.
[32] F. Tanedo, “Defense against the Dark Arts,”
http://www.physics.uci.edu/˜tanedo/files/notes/DMNotes.pdf
[33] Y. Mambrini, “Histories of Dark Matter in the Universe,”
http://www.ymambrini.com/My_World/Physics_files/Universe.pdf
[34] J. M. Cline, “TASI Lectures on Early Universe Cosmology: Inflation, Baryogenesis and Dark Matter,”
arXiv:1807.08749 [hep-ph].
[35] H. Kurki-Suonio, “Cosmology I and II,” http://www.helsinki.fi/˜hkurkisu
[36] J. G. Rosa, “Introduction to Cosmology” http://gravitation.web.ua.pt/cosmo
[37] M. Taoso, G. Bertone and A. Masiero, “Dark Matter Candidates: A Ten-Point Test,” JCAP 0803, 022 (2008)
[arXiv:0711.4996 [astro-ph]].
[38] T. R. Slatyer, N. Padmanabhan and D. P. Finkbeiner, “CMB Constraints on WIMP Annihilation: Energy
Absorption During the Recombination Epoch,” Phys. Rev. D 80, 043526 (2009) [arXiv:0906.1197
[astro-ph.CO]].
[39] G. Steigman, B. Dasgupta and J. F. Beacom, “Precise Relic WIMP Abundance and its Impact on Searches for
Dark Matter Annihilation,” Phys. Rev. D 86, 023506 (2012) [arXiv:1204.3622 [hep-ph]].
[40] F. Kahlhoefer, “Review of LHC Dark Matter Searches,” Int. J. Mod. Phys. A 32, 1730006 (2017)
[arXiv:1702.02430 [hep-ph]].
[41] N. Arkani-Hamed, D. P. Finkbeiner, T. R. Slatyer and N. Weiner, “A Theory of Dark Matter,” Phys. Rev. D
79, 015014 (2009) [arXiv:0810.0713 [hep-ph]].
[42] K. Griest and D. Seckel, “Three exceptions in the calculation of relic abundances,” Phys. Rev. D 43, 3191
(1991).
[43] R. T. D’Agnolo, D. Pappadopulo and J. T. Ruderman, “Fourth Exception in the Calculation of Relic
Abundances,” Phys. Rev. Lett. 119, no. 6, 061102 (2017) [arXiv:1705.08450 [hep-ph]].
[44] H. Baer, K. Y. Choi, J. E. Kim and L. Roszkowski, “Dark matter production in the early Universe: beyond
the thermal WIMP paradigm,” Phys. Rept. 555, 1 (2015) [arXiv:1407.0017 [hep-ph]].
[45] R. D. Peccei, “The Strong CP problem and axions,” Lect. Notes Phys. 741, 3 (2008) [arXiv:hep-ph/0607268].
[46] P. Arias, D. Cadamuro, M. Goodsell, J. Jaeckel, J. Redondo and A. Ringwald, “WISPy Cold Dark Matter,”
JCAP 1206, 013 (2012) [arXiv:1201.5902 [hep-ph]].
[47] T. Lin, "Dark Matter Models and Direct Searches," lecture at TASI 2018, lecture notes to appear,
https://www.youtube.com/watch?v=fQSWMsOfOcc
[48] J. D. Lewin and P. F. Smith, “Review of mathematics, numerical factors, and corrections for dark matter
experiments based on elastic nuclear recoil,” Astropart. Phys. 6, 87 (1996).
[49] P. Gondolo and J. Silk, “Dark matter annihilation at the galactic center,” Phys. Rev. Lett. 83, 1719 (1999)
[arXiv:astro-ph/9906391].
[50] D. Hooper, “Particle Dark Matter,” arXiv:0901.4090 [hep-ph].
[51] T. R. Slatyer, "TASI Lectures on Indirect Detection of Dark Matter," arXiv:1710.05137 [hep-ph].