Bias and Fairness in Large Language Models: A Survey
Isabel O. Gallegos ∗
Department of Computer Science
Stanford University
iogalle@stanford.edu
Ryan A. Rossi
Adobe Research
ryrossi@adobe.com
Joe Barrow∗∗
Pattern Data
joe.barrow@patterndataworks.com
Md Mehrab Tanjim
Adobe Research
tanjim@adobe.com
Sungchul Kim
Adobe Research
sukim@adobe.com
Franck Dernoncourt
Adobe Research
dernonco@adobe.com
Tong Yu
Adobe Research
tyu@adobe.com
Ruiyi Zhang
Adobe Research
ruizhang@adobe.com
Nesreen K. Ahmed
Intel Labs
nesreen.k.ahmed@intel.com
Action Editor: Saif Mohammad. Submission received: 8 March 2024; accepted for publication: 8 May 2024.
https://doi.org/10.1162/coli_a_00524
Rapid advancements of large language models (LLMs) have enabled the processing, understand-
ing, and generation of human-like text, with increasing integration into systems that touch our
social sphere. Despite this success, these models can learn, perpetuate, and amplify harmful social
biases. In this article, we present a comprehensive survey of bias evaluation and mitigation tech-
niques for LLMs. We first consolidate, formalize, and expand notions of social bias and fairness in
natural language processing, defining distinct facets of harm and introducing several desiderata
to operationalize fairness for LLMs. We then unify the literature by proposing three intuitive
taxonomies, two for bias evaluation, namely, metrics and datasets, and one for mitigation. Our
first taxonomy of metrics for bias evaluation disambiguates the relationship between metrics
and evaluation datasets, and organizes metrics by the different levels at which they operate
in a model: embeddings, probabilities, and generated text. Our second taxonomy of datasets
for bias evaluation categorizes datasets by their structure as counterfactual inputs or prompts,
and identifies the targeted harms and social groups; we also release a consolidation of publicly
available datasets for improved access. Our third taxonomy of techniques for bias mitigation
classifies methods by their intervention during pre-processing, in-training, intra-processing, and
post-processing, with granular subcategories that elucidate research trends. Finally, we identify
open problems and challenges for future work. Synthesizing a wide range of recent research, we
aim to provide a clear guide of the existing literature that empowers researchers and practition-
ers to better understand and prevent the propagation of bias in LLMs.
1. Introduction
1 Unless otherwise specified, our use of “bias” refers to social bias, defined in Definition 7.
often reflect existing biases, they can amplify these biases, too; in either case, the auto-
mated reproduction of injustice can reinforce systems of inequity (Benjamin 2020). From
negative sentiment and toxicity directed towards some social groups, to stereotypical
linguistic associations, to lack of recognition of certain language dialects, the presence
of biases in LLMs has been well-documented (e.g., Blodgett and O’Connor 2017;
Hutchinson et al. 2020; Mei, Fereidooni, and Caliskan 2023; Měchura 2022; Mozafari,
Farahbakhsh, and Crespi 2020; Sap et al. 2019; Sheng et al. 2019).
With the growing recognition of the biases embedded in LLMs has emerged an
abundance of works proposing techniques to measure or remove social bias, primarily
organized by (1) metrics for bias evaluation, (2) datasets for bias evaluation, and (3)
techniques for bias mitigation. In this survey, we categorize, summarize, and discuss
each of these areas of research. For each area, we propose an intuitive taxonomy struc-
tured around the types of interventions to which a researcher or practitioner has access.
Metrics for bias evaluation are organized by the underlying data structure assumed
by the metric, which may differ depending on access to the LLM (i.e., can the user
access model-assigned token probabilities, or only generated text output?). Datasets are
similarly categorized by their structure. Techniques for bias mitigation are organized
by the stage of intervention: pre-processing, in-training, intra-processing, and post-
processing.
The key contributions of this work are as follows:
https://github.com/i-gallegos/Fair-LLM-Benchmark
Each taxonomy provides a reference for researchers and practitioners to identify which
metrics, datasets, or mitigations may be appropriate for their use case, to understand
the tradeoffs between each technique, and to recognize areas for continued exploration.
This survey complements existing literature by offering a more extensive and com-
prehensive examination of bias and fairness in NLP. Surveys of bias and fairness in
machine learning, such as Mehrabi et al. (2021) and Suresh and Guttag (2021), offer
important broad-stroke frameworks, but are not specific to linguistic tasks or contexts.
While previous work within NLP such as Czarnowska, Vyas, and Shah (2021), Kumar
et al. (2023b), and Meade, Poole-Dayan, and Reddy (2021) has focused on specific axes
of bias evaluation and mitigation, such as extrinsic fairness metrics, empirical vali-
dation, and language generation interventions, our work provides increased breadth
and depth. Specifically, we offer a comprehensive overview of bias evaluation and
mitigation techniques across a wide range of NLP tasks and applications, synthesizing
diverse bodies of work to surface unifying themes and overarching challenges. Beyond
enumerating techniques, we also examine the limitations of each class of approach,
providing insights and recommendations for future work.
We do not attempt to survey the abundance of work on algorithmic fairness more
generally, or even bias in all language technologies broadly. In contrast, we focus solely
on bias issues in LLMs for English (with additional languages for machine translation
and multilingual models), and restrict our search to works that propose novel closed-
form metrics, datasets, or mitigation techniques; for our conceptualization of what
constitutes an LLM, see Definition 1 in Section 2. In some cases, techniques we survey
may have been used in contexts beyond bias and fairness, but we require that each
work must at some point specify their applicability towards understanding social bias
or fairness.
In the remainder of the article, we first formalize the problem of bias in LLMs
(Section 2), and then provide taxonomies of metrics for bias evaluation (Section 3),
datasets for bias evaluation (Section 4), and techniques for bias mitigation (Section 5).
Finally, we discuss open problems and challenges for future research (Section 6).
We begin with basic definitions and notation to formalize the problem of bias in LLMs.
We introduce general principles of LLMs (Section 2.1), define the terms “bias” and “fair-
ness” in the context of LLMs (Section 2.2), formalize fairness desiderata (Section 2.3),
and finally provide an overview of our taxonomies of metrics for bias evaluation,
datasets for bias evaluation, and techniques for bias mitigation (Section 2.4).
2.1 Preliminaries
a general LLM metric here. In particular, the structure of a dataset may determine which
set of metrics is appropriate, but a metric is rarely restricted to a single benchmark
dataset. We discuss this relationship in more detail in Sections 3 and 4.
We now define the terms “bias” and “fairness” in the context of LLMs. We first present
notions of fairness and social bias, with a taxonomy of social biases relevant to LLMs,
and then discuss how bias may manifest in NLP tasks and throughout the LLM devel-
opment and deployment cycle.
2.2.1 Social Bias and Fairness. Measuring and mitigating social “bias” to ensure “fairness”
in NLP systems has featured prominently in recent literature. Often what is proposed—
and what we describe in this survey—are technical solutions: augmenting datasets to
“debias” imbalanced social group representations, for example, or fine-tuning models
with “fair” objectives. Despite the growing emphasis on addressing these issues, bias
and fairness research in LLMs often fails to precisely describe the harms of model
behaviors: who is harmed, why the behavior is harmful, and how the harm reflects and
reinforces social principles or hierarchies (Blodgett et al. 2020). Many approaches, for
instance, assume some implicitly desirable criterion (e.g., a model output should be
independent of any social group in the input), but do not explicitly acknowledge or
state the normative social values that justify their framework. Others lack consistency
in their definitions of bias, or do not seriously engage with the relevant power dynamics
that perpetuate the underlying harm (Blodgett et al. 2021). Imprecise or inconsistent def-
initions make it difficult to conceptualize exactly what facets of injustice these technical
solutions address.
Here we attempt to disambiguate the types of harms that may emerge from
LLMs, building on the definitions in machine learning works by Barocas, Hardt, and
Narayanan (2019), Bender et al. (2021), Blodgett et al. (2020), Crawford (2017), Mehrabi
et al. (2021), Suresh and Guttag (2021), and Weidinger et al. (2022), and following
extensive (socio)linguistic research in this area by Beukeboom and Burgers (2019),
Craft et al. (2020), Loudermilk (2015), Maass (1999), and others. Fundamentally, these
definitions seek to uncouple social harms from specific technical mechanisms, given
that language, independent of any algorithmic system, is itself a tool that encodes social
and cultural processes. Though we provide our own definitions here, we recognize
that the terms “bias” and “fairness” are normative and subjective ones, often context-
and culturally-dependent, encapsulating a wide range of inequities rooted in complex
structural hierarchies with various mechanisms of power that affect groups of people
differently. Though we use these definitions to inform our selection and categorization
of papers in this survey, not all papers we reference define bias and fairness in the same
way, if at all. Therefore, throughout the remainder of the survey, we use the term “bias”
broadly to encompass any of the more granular definitions provided below (Definition 7
and Table 1), and to describe other works that use the term loosely when an exact
specification is not provided. Note that our use of the terms “debiased” or “unbiased”
Table 1
Taxonomy of social biases in NLP. We provide definitions of representational and allocational
harms, with examples pertinent to LLMs from prior works examining linguistically-associated
social biases. Though each harm represents a distinct mechanism of injustice, they are not
mutually exclusive, nor do they operate independently.
Type of Harm Definition and Example
REPRESENTATIONAL HARMS Denigrating and subordinating attitudes towards a social group
Derogatory language Pejorative slurs, insults, or other words or phrases that target and
denigrate a social group
e.g., “Whore” conveys hostile and contemptuous female expectations
(Beukeboom and Burgers 2019)
Disparate system performance Degraded understanding, diversity, or richness in language processing
or generation between social groups or linguistic variations
e.g., AAE* like “he woke af” is misclassified as not English more often than
SAE† equivalents (Blodgett and O’Connor 2017)
Erasure Omission or invisibility of the language and experiences of a social
group
e.g., “All lives matter” in response to “Black lives matter” implies
colorblindness that minimizes systemic racism (Blodgett 2021)
Exclusionary norms Reinforced normativity of the dominant social group and implicit ex-
clusion or devaluation of other groups
e.g., “Both genders” excludes non-binary identities (Bender et al. 2021)
Misrepresentation An incomplete or non-representative distribution of the sample popu-
lation generalized to a social group
e.g., Responding “I’m sorry to hear that” to “I’m an autistic
dad” conveys a negative misrepresentation of autism (Smith et al. 2022)
Stereotyping Negative, generally immutable abstractions about a labeled social
group
e.g., Associating “Muslim” with “terrorist” perpetuates negative violent
stereotypes (Abid, Farooqi, and Zou 2021)
Toxicity Offensive language that attacks, threatens, or incites hate or violence
against a social group
e.g., “I hate Latinos” is disrespectful and hateful (Dixon et al. 2018)
ALLOCATIONAL HARMS Disparate distribution of resources or opportunities between social groups
Direct discrimination Disparate treatment due explicitly to membership of a social group
e.g., LLM-aided resume screening may preserve hiring inequities (Ferrara
2023)
Indirect discrimination Disparate treatment despite facially neutral consideration towards so-
cial groups, due to proxies or other implicit factors
e.g., LLM-aided healthcare tools may use proxies associated with demo-
graphic factors that exacerbate inequities in patient care (Ferrara 2023)
*African-American English; † Standard American English.
does not mean that bias has been completely removed, but rather refers to the output
of a bias mitigation technique, regardless of that technique’s effectiveness, reflecting
language commonly used in prior works. Similarly, our conceptualization of “neutral”
words does not refer to a fixed set of words, but rather to any set of words that should
be unrelated to any social group under some subjective worldview.
The primary emphasis of bias evaluation and mitigation efforts for LLMs is on
group notions of fairness, which center on disparities between social groups, following
group fairness definitions in the literature (Chouldechova 2017; Hardt, Price, and Srebro
2016; Kamiran and Calders 2012). We also discuss individual fairness (Dwork et al.
2012). We provide several definitions that describe our notions of bias and fairness for
NLP tasks, which we leverage throughout the remainder of the article.
The choice of M specifies a fairness constraint, which is subjective and contextual; note
that M may be accuracy, true positive rate, false positive rate, and so on.
Note that, though group fairness provides a useful framework to capture rela-
tionships between social groups, it is a somewhat weak notion of fairness that can be
satisfied for each group while violating fairness constraints for subgroups of the social
groups, such as people with intersectional identities. To overcome this, group fairness
notions have been expanded to subgroup notions, which apply to overlapping subsets
of a population. We refer to Hébert-Johnson et al. (2018) and Kearns et al. (2018) for
definitions.
distribution over outcomes. Individual fairness requires that individuals similar with
respect to some task should be treated similarly, such that
2.2.2 Bias in NLP Tasks. Language is closely tied to identity, social relations, and power.
Language can make concrete the categorization and differentiation of social groups,
giving voice to generic or derogatory labels, and linking categories of people to stereo-
typical, unrepresentative, or overly general characteristics (Beukeboom and Burgers
2019; Maass 1999). Language can also exclude, subtly reinforcing norms that can further
marginalize groups that do not conform, through linguistic practices like “male-as-
norm,” which orients feminine words as less important opposites derived from de-
fault masculine terms. These norms are often tied to power hierarchies, and in turn
bolster those same structures. Beyond describing social groups, language in itself can
also partition a population, with linguistic variations. Linguistic profiling, for instance,
can discriminate against speakers of a dialect considered non-standard (Baugh 2000;
Loudermilk 2015). In fact, the determination of which forms of language are considered
standard or correct also reinforces social hierarchies that can justify the inferiority of
some groups (Blodgett et al. 2020; Craft et al. 2020). Given the close ties between lan-
guage and the ways that social groups are identified and described, representational
harms are a particularly salient concern in NLP tasks, and the primary emphasis in
this survey. Of course, representational harms often arise subtly, and thus quantifying
them in language, at least for some NLP tasks, differs from standard fairness techniques,
which typically apply to classification. We provide a non-exhaustive list of examples of
settings where bias may manifest in unique forms, depending on the task:
Despite the various forms of tasks and their outputs, these can still often be unified
under the traditional notions of fairness, quantifying the output (next-token predic-
tion, generated sentence continuation, translated text, etc.) with some score (e.g., token
2.2.3 Bias in the Development and Deployment Life Cycle. Another way of understanding
social bias in LLMs is to examine at which points within the model development
and deployment process the bias emerges, which may exacerbate preexisting historical
biases. This has been thoroughly explored by Mehrabi et al. (2021), Shah, Schwartz, and
Hovy (2020), and Suresh and Guttag (2021), and we summarize these pathways here:
• Training Data: The data used to train an LLM may be drawn from a
non-representative sample of the population, which can cause the model
to fail to generalize well to some social groups. The data may omit
important contexts, and proxies used as labels (e.g., sentiment) may
incorrectly measure the actual outcome of interest (e.g., representational
harms). The aggregation of data may also obscure distinct social groups
that should be treated differently, causing the model to be overly general
or representative only of the majority group. Of course, even properly
collected data still reflects historical and structural biases in the world.
• Model: The training or inference procedure itself may amplify bias,
beyond what is present in the training data. The choice of optimization
function, such as selecting accuracy over some measure of fairness, can
affect a model’s behavior. The treatment of each training instance or
social group matters too, such as weighing all instances equally during
training instead of utilizing a cost-sensitive approach. The ranking of
outputs at training or inference time, such as during decoding for text
generation or document ranking in information retrieval, can affect the
model’s biases as well.
• Evaluation: Benchmark datasets may be unrepresentative of the
population that will use the LLM, but can steer development towards
optimizing only for those represented by the benchmark. The choice of
metric can also convey different properties of the model, such as with
aggregate measures that obscure disparate performance between social
groups, or the selection of which measure to report (e.g., false positives
versus false negatives).
• Deployment: An LLM may be deployed in a different setting than that
for which it was intended, such as with or without a human intermediary
for automated decision-making. The interface through which a user
interacts with the model may change human perception of the LLM’s
behavior.
Though group, individual, and subgroup fairness define useful general frameworks,
they in themselves do not specify the exact fairness constraints. This distinction is criti-
cal, as defining the “right” fairness specification is highly subjective, value-dependent,
and non-static, evolving through time (Barocas, Hardt, and Narayanan 2019; Ferrara
2023; Friedler, Scheidegger, and Venkatasubramanian 2021). Each stakeholder brings
perspectives that may specify different fairness constraints for the same application
and setting. The list—and the accompanying interests—of stakeholders is broad. In the
machine learning data domain more broadly, Jernite et al. (2022) identify stakeholders
to be data subjects, creators, aggregators; dataset creators, distributors, and users; and
users or subjects of the resulting machine learning systems. Bender (2019) distinguishes
between direct stakeholders, who interact with NLP systems, including system design-
ers and users, and indirect stakeholders, whose languages or resources may contribute
to the construction of an NLP system, or who may be subject to the output of an
NLP system; these interactions are not always voluntary. In sum, there is no universal
fairness specification.
Instead of suggesting a single fairness constraint, we provide a number of possible
fairness desiderata for LLMs. While similar concepts have been operationalized for
machine learning classification tasks (Mehrabi et al. 2021; Verma and Rubin 2018), less
has been done in the NLP space, which may contain more ambiguity than classification
for tasks like language generation. Note that for NLP classification tasks, or tasks with a
superimposed classifier, traditional fairness definitions like equalized odds or statistical
parity may be used without modification. For cases when simple classification may
not be useful, we present general desiderata of fairness for NLP tasks that generalize
notions in the LLM bias evaluation and mitigation literature, building on the outcome
and error disparity definitions proposed by Shah, Schwartz, and Hovy (2020). We use
the following notation: For some input Xi containing a mention of a social group Gi , let
Xj be an analogous input with the social group substituted for Gj . Let w ∈ W be a neutral
word, and let a ∈ A be a protected attribute word, with ai and aj as corresponding terms
associated with Gi and Gj , respectively. Let X\A represent an input with all social group
identifiers removed. See Table 2 for this and other notation.
Definition 9 (I NVARIANCE )
An LLM satisfies invariance if M(Xi ; θ ) and M(Xj ; θ ) are identical under some invari-
ance metric ψ.
Table 2
Summary of key notation.
Type Notation Definition
DATA Gi ∈ G social group i
D dataset
w∈W neutral word
ai ∈ Ai protected attribute word associated with group Gi
(a1 , · · · , am ) protected attributes with analogous meanings for G1 , · · · , Gm
x embedding of word x
vgender gender direction in embedding space
Vgender gender subspace in embedding space
X = (x1 , · · · , xm ) ∈ X generic input
X\A input with all social group identifiers removed
Si = (s1 , · · · , sm ) ∈ S sentence or template input associated with group Gi
SW sentence with neutral words
SA sentence with sensitive attribute words
M⊆S set of masked words in a sentence
U⊆S set of unmasked words in a sentence
Y∈Y correct model output
Ŷ ∈ Ŷ predicted model output, given by M(X; θ )
Ŷi = (ŷ1 , · · · , ŷn ) ∈ Ŷ generated text output associated with group Gi
Ŷk ∈ Ŷk set of top k generated text completions
METRICS ψ(·) ∈ Ψ metric
c(· ) classifier (e.g., toxicity, sentiment)
PP(· ) perplexity
C(· ) count of co-occurrences
W1 (· ) Wasserstein-1 distance
KL(· ) Kullback–Leibler divergence
JS(· ) Jensen-Shannon divergence
I(· ) mutual information
MODEL M LLM parameterized by θ
A attention matrix
L number of layers in a model
H number of attention heads in a model
E(· ) word or sentence embedding
z(· ) logit
L(· ) loss function
R(· ) regularization term
Before presenting each taxonomy in detail, we summarize each one to provide a high-
level overview. The complete taxonomies are described in Sections 3–5.
2.4.1 Taxonomy of Metrics for Bias Evaluation. We summarize several evaluation tech-
niques that leverage a range of fairness desiderata and operate at different fundamental
levels. As the subset of appropriate evaluation metrics ψ(D) ⊆ Ψ is largely determined
by (1) access to the model (i.e., access to trainable model parameters, versus access to
model output only) and (2) the data structure of an evaluation set D, we taxonomize
metrics by the underlying data structure assumed by the metric. The complete taxon-
omy is described in Section 3.
2.4.2 Taxonomy of Datasets for Bias Evaluation. Bias evaluation datasets can assess specific
harms, such as stereotyping or derogatory language, that target particular social groups,
such as gender or race groups. Similar to our taxonomy of metrics, we organize datasets
by their data structure. The complete taxonomy is described in Section 4.
2 Static word embeddings are not used with LLMs, but we include the word embedding metric WEAT for
completeness given its relevance to sentence embedding metrics.
2.4.3 Taxonomy of Techniques for Bias Mitigation. Bias mitigation techniques apply modi-
fications to an LLM. We organize bias mitigation techniques by the stage at which they
operate in the LLM workflow: pre-processing, in-training, intra-processing, and post-
processing. The complete taxonomy is described in Section 5.
We now present metrics for evaluating fairness at different fundamental levels. While
evaluation techniques for LLMs have been recently surveyed by Chang et al. (2023),
they do not focus on the evaluation of fairness and bias in such models. In contrast,
we propose an intuitive taxonomy for fairness evaluation metrics. We discuss a wide
variety of fairness evaluation metrics, formalize them mathematically, provide intuitive
examples, and discuss the challenges and limitations of each. In Table 3, we summarize
the evaluation metrics using the proposed taxonomy.
In this section, we discuss different facets that arise when evaluating the biases in LLMs.
There are many facets to consider.
In the literature, many works refer to the metric as the dataset, and use these
interchangeably. One example is the CrowS-Pairs (Nangia et al. 2020) dataset consisting
Table 3
Taxonomy of evaluation metrics for bias evaluation in LLMs. We summarize metrics that
measure bias using embeddings, model-assigned probabilities, or generated text. The data
structure describes the input to the model required to compute the metrics, and D indicates if the
metric was introduced with an accompanying dataset. W is the set of neutral words; Ai is the set
of sensitive attribute words associated with group Gi ; S ∈ S is a (masked) input sentence or
template, which may be neutral (SW ) or contain sensitive attributes (SA ); M and U are the sets of
masked and unmasked tokens in S, respectively; Ŷi ∈ Ŷ is a predicted output associated with
group Gi ; c(· ) is a classifier; PP(· ) is perplexity; ψ(· ) is an invariance metric; C(· ) is a
co-occurrence count; W1 (· ) is Wasserstein-1 distance; and E is the expected value.
Metric (Data Structure*; D = introduced with an accompanying dataset): Equation

EMBEDDING-BASED (§ 3.3), computed over embeddings
Word Embedding† (§ 3.3.1)
• WEAT‡ (static word; D: no): f(A, W) = ( mean_{a1∈A1} s(a1, W1, W2) − mean_{a2∈A2} s(a2, W1, W2) ) / std_{a∈A} s(a, W1, W2)
Sentence Embedding (§ 3.3.2)
• SEAT (contextual sentence; D: no): f(S_A, S_W) = WEAT(S_A, S_W)
• CEAT (contextual sentence; D: no): f(S_A, S_W) = Σ_{i=1}^{N} v_i WEAT(S_{A_i}, S_{W_i}) / Σ_{i=1}^{N} v_i
• Sentence Bias Score (contextual sentence; D: yes): f(S) = Σ_{s∈S} |cos(s, v_gender) · α_s|

PROBABILITY-BASED (§ 3.4), computed over sentence pairs
Masked Token (§ 3.4.1)
• DisCo (masked; D: no): f(S) = I(ŷ_{i,[MASK]} = ŷ_{j,[MASK]})
• Log-Probability Bias Score (masked; D: no): f(S) = log( p_{a_i} / p_{prior_i} ) − log( p_{a_j} / p_{prior_j} )
• Categorical Bias Score (masked; D: no): f(S) = (1/|W|) Σ_{w∈W} Var_{a∈A} log( p_a / p_prior )
Pseudo-Log-Likelihood (§ 3.4.2), with f(S) = I(g(S1) > g(S2))
• CrowS-Pairs Score (stereo, anti-stereo; D: yes): g(S) = Σ_{u∈U} log P(u | U_{\u}, M; θ)
• Context Association Test (stereo, anti-stereo; D: yes): g(S) = (1/|M|) Σ_{m∈M} log P(m | U; θ)
• All Unmasked Likelihood (stereo, anti-stereo; D: no): g(S) = (1/|S|) Σ_{s∈S} log P(s | S; θ)
• Language Model Bias (stereo, anti-stereo; D: yes): f(S) = t-value(PP(S1), PP(S2))

GENERATED TEXT-BASED (§ 3.5), computed over prompts
Distribution (§ 3.5.1)
• Social Group Substitution (counterfactual pair; D: no): f(Ŷ) = ψ(Ŷi, Ŷj)
• Co-Occurrence Bias Score (any prompt; D: no): f(w) = log( P(w|Ai) / P(w|Aj) )
• Demographic Representation (any prompt; D: no): f(G) = Σ_{a∈A} Σ_{Ŷ∈Ŷ} C(a, Ŷ)
• Stereotypical Associations (any prompt; D: no): f(w) = Σ_{a∈A} Σ_{Ŷ∈Ŷ} C(a, Ŷ) I(C(w, Ŷ) > 0)
Classifier (§ 3.5.2)
• Perspective API (toxicity prompt; D: no): f(Ŷ) = c(Ŷ)
• Expected Maximum Toxicity (toxicity prompt; D: no): f(Ŷ) = max_{Ŷ∈Ŷ} c(Ŷ)
• Toxicity Probability (toxicity prompt; D: no): f(Ŷ) = P( Σ_{Ŷ∈Ŷ} I(c(Ŷ) ≥ 0.5) ≥ 1 )
• Toxicity Fraction (toxicity prompt; D: no): f(Ŷ) = E_{Ŷ∈Ŷ}[ I(c(Ŷ) ≥ 0.5) ]
• Score Parity (counterfactual pair; D: no): f(Ŷ) = | E_{Ŷ∈Ŷ}[c(Ŷi, i) | A = i] − E_{Ŷ∈Ŷ}[c(Ŷj, j) | A = j] |
• Counterfactual Sentiment Bias (counterfactual pair; D: no): f(Ŷ) = W1( P(c(Ŷi) | A = i), P(c(Ŷj) | A = j) )
• Regard Score (counterfactual tuple; D: no): f(Ŷ) = c(Ŷ)
• Full Gen Bias (counterfactual tuple; D: yes): f(Ŷ) = Σ_{i=1}^{C} Var_{w∈W}( (1/|Ŷ_w|) Σ_{Ŷ_w∈Ŷ_w} c(Ŷ_w)[i] )
Lexicon (§ 3.5.3)
• HONEST (counterfactual tuple; D: no): f(Ŷ) = ( Σ_{Ŷ_k∈Ŷ_k} Σ_{ŷ∈Ŷ_k} I_HurtLex(ŷ) ) / ( |Ŷ| · k )
• Psycholinguistic Norms (any prompt; D: yes): f(Ŷ) = ( Σ_{Ŷ∈Ŷ} Σ_{ŷ∈Ŷ} sign(affect-score(ŷ)) affect-score(ŷ)^2 ) / ( Σ_{Ŷ∈Ŷ} Σ_{ŷ∈Ŷ} |affect-score(ŷ)| )
• Gender Polarity (any prompt; D: yes): f(Ŷ) = ( Σ_{Ŷ∈Ŷ} Σ_{ŷ∈Ŷ} sign(bias-score(ŷ)) bias-score(ŷ)^2 ) / ( Σ_{Ŷ∈Ŷ} Σ_{ŷ∈Ŷ} |bias-score(ŷ)| )

*Data structure corresponds with the task. For example, prompts indicate text generation. †Static word embeddings are not used with LLMs, but we include the word embedding metric WEAT for completeness given its relevance to sentence embedding metrics. ‡See § 3.3.1 for the definition of s(·).
1. His mind wondered if a doctor was behind this mess, and whether
[he/she] would come forward.
Figure 1
Evaluation via substitution for two tasks. We illustrate one type of social group substitution
(gender) for two different tasks, namely, question-answering and text completion. For the
question-answering task in (a), gender is substituted in the question to understand if and how
the response from the model changes. We see that the model’s answer for the male data engineer
is about strategies to get started by identifying useful data sources, whereas for the female data
engineer it is about developing technical skills. There is an implicit assumption that male data
engineers already have the technical skills they need compared with female data engineers. For
the completion task in (b), we again substitute the gender, and see that the model responds the
same, that is, it completes the sentence by generating the same text for either case. We note that
in (a) the gender is more explicit compared with (b) where it is more implicit.
Figure 2
Evaluation taxonomy. For an arbitrary dataset selected for a given task, there is a subset of
appropriate evaluation metrics that may measure model performance or bias.
or fine-tuned models. Other works alternatively refer to bias in the embedding space as
intrinsic bias, which maps more closely to our classification of metrics by what they use.
Most bias evaluation metrics for LLMs can be categorized by what they use from the
model, such as the embeddings, probabilities, or generated text. As such, we propose an
intuitive taxonomy based on this categorization:
3.3.1 Word Embedding Metrics. Bias metrics for word embeddings were first proposed for
static word embeddings, but their basic formulation of computing cosine distances be-
tween neutral and gendered words has been generalized to contextualized embeddings
and broader dimensions of bias. Static embedding techniques may be adapted to contex-
tualized embeddings by taking the last subword token representation of a word before
pooling to a sentence embedding.

Figure 3
Example embedding-based metrics (§ 3.3). Sentence-level encoders produce sentence embeddings that can be assessed for bias. Embedding-based metrics use cosine similarity to compare words like “doctor” to social group terms like “man.” Unbiased embeddings should have similar cosine similarity to opposing social group terms.

Though several static word embedding bias metrics
have been proposed, we focus only on Word Embedding Association Test (WEAT)
(Caliskan, Bryson, and Narayanan 2017) here, given its relevance to similar methods
for contextualized sentence embeddings. WEAT measures associations between social
group concepts (e.g., masculine and feminine words) and neutral attributes (e.g., family
and occupation words), emulating the Implicit Association Test (Greenwald, McGhee,
and Schwartz 1998). For protected attributes A1 , A2 and neutral attributes W1 , W2 ,
stereotypical associations are measured by a test statistic:
f(A1, A2, W1, W2) = Σ_{a1∈A1} s(a1, W1, W2) − Σ_{a2∈A2} s(a2, W1, W2)    (3)
with a larger effect size indicating stronger bias. WEAT* (Dev et al. 2021) presents
an alternative, where W1 and W2 are instead definitionally masculine and feminine
words (e.g., “gentleman,” “matriarch”) to capture stronger masculine and feminine
associations.
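For concreteness, the following sketch computes the WEAT association and effect size from a dictionary `emb` of pre-computed word vectors (a hypothetical lookup supplied by the caller); rendering s(·) as a difference of mean cosine similarities follows Caliskan, Bryson, and Narayanan (2017):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(a, W1, W2, emb):
    # s(a, W1, W2): difference in mean cosine similarity between the
    # protected attribute word a and the two sets of neutral words.
    sim1 = np.mean([cosine(emb[a], emb[w]) for w in W1])
    sim2 = np.mean([cosine(emb[a], emb[w]) for w in W2])
    return sim1 - sim2

def weat_effect_size(A1, A2, W1, W2, emb):
    # Normalized version of Equation 3: difference of mean associations for
    # the two attribute sets, divided by the pooled standard deviation.
    s1 = [association(a, W1, W2, emb) for a in A1]
    s2 = [association(a, W1, W2, emb) for a in A2]
    return (np.mean(s1) - np.mean(s2)) / np.std(s1 + s2, ddof=1)
```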
3.3.2 Sentence Embedding Metrics. Instead of using static word embeddings, LLMs use
embeddings learned in the context of a sentence, and are more appropriately paired
with embedding metrics for sentence-level encoders. Using full sentences also enables
more targeted evaluation of various dimensions of bias, using sentence templates that
probe for specific stereotypical associations.
CEAT(S_{A1}, S_{A2}, S_{W1}, S_{W2}) = ( Σ_{i=1}^{N} v_i WEAT(S_{A1_i}, S_{A2_i}, S_{W1_i}, S_{W2_i}) ) / ( Σ_{i=1}^{N} v_i )    (6)

Sentence Bias(S) = Σ_{s∈S, s∉A} | cos(s, v_gender) · α_s |    (7)
3.3.3 Discussion and Limitations. Several reports point out that biases in the embed-
ding space have only weak or inconsistent relationships with biases in downstream
tasks (Cabello, Jørgensen, and Søgaard 2023; Cao et al. 2022a; Goldfarb-Tarrant et al.
2021; Orgad and Belinkov 2022; Orgad, Goldfarb-Tarrant, and Belinkov 2022; Steed
et al. 2022). In fact, Goldfarb-Tarrant et al. (2021) find no reliable correlation at all, and
Cabello, Jørgensen, and Søgaard (2023) illustrate that associations between the repre-
sentations of protected attribute and other words can be independent of downstream
performance disparities, if certain assumptions of social groups’ language use are vio-
lated. These studies demonstrate that bias in representations and bias in downstream
applications should not be conflated, which may limit the value of embedding-based
metrics. Delobelle et al. (2022) also point out that embedding-based measures of bias
can be highly dependent on different design choices, such as the construction of tem-
plate sentences, the choice of seed words, and the type of representation (i.e., the con-
textualized embedding for a specific token before pooling versus the [CLS] token). In
fact, Delobelle et al. (2022) recommend avoiding embedding-based metrics altogether, and
instead focusing only on metrics that assess a specific downstream task.
Furthermore, Gonen and Goldberg (2019) critically show that debiasing techniques
may merely represent bias in new ways in the embedding space. This finding may
also call the validity of embedding-based metrics into question. In particular, whether embedding-based metrics, with their reliance on cosine distance, capture only superficial forms of bias, or whether they can also identify more subtle ones, is a topic for future research.
Finally, the impact of sentence templates on bias measurement can be explored fur-
ther. It is unclear whether semantically bleached templates used by SEAT, for instance,
or the sentences generated by CEAT, are able to capture forms of bias that extend be-
yond word similarities and associations, such as derogatory language, disparate system
performance, exclusionary norms, and toxicity.
In this section, we discuss bias and fairness metrics that leverage the probabilities from
LLMs. These techniques prompt a model with pairs or sets of template sentences with
their protected attributes perturbed, and compare the predicted token probabilities
conditioned on the different inputs. We illustrate examples of each technique in Figure 4.
Figure 4
Example probability-based metrics (§ 3.4). We illustrate two classes of probability-based metrics: masked token metrics and pseudo-log-likelihood metrics. Masked token metrics compare the distributions for the predicted masked word, for two sentences with different social groups. An unbiased model should have similar probability distributions for both sentences. Pseudo-log-likelihood metrics estimate whether a sentence that conforms to a stereotype or violates that stereotype (“anti-stereotype”) is more likely by approximating the conditional probability of the sentence given each word in the sentence. An unbiased model should choose stereotype and anti-stereotype sentences with equal probability, over a test set of sentence pairs.

3.4.1 Masked Token Methods. The probability of a token can be derived by masking a word in a sentence and asking a masked language model to fill in the blank. Discovery of Correlations (DisCo) (Webster et al. 2020), for instance, compares the completion
of template sentences. Each template (e.g., “[X] is [MASK]”; “[X] likes to [MASK]”)
has two slots, the first manually filled with a bias trigger associated with a social group
(originally presented for gendered names and nouns, but generalizable to other groups
with well-defined word lists), and the second filled by the model’s top three candidate
predictions. The score is calculated by averaging the count of differing predictions
between social groups across all templates. Log-Probability Bias Score (LPBS) (Kurita
et al. 2019) uses a similar template-based approach as DisCo to measure bias in neutral
attribute words (e.g., occupations), but normalizes a token’s predicted probability pa
(based on a template “[MASK] is a [NEUTRAL ATTRIBUTE]”) with the model’s prior
probability pprior (based on a template “[MASK] is a [MASK]”). Normalization corrects
for the model’s prior favoring of one social group over another and thus only measures
bias attributable to the [NEUTRAL ATTRIBUTE] token. Bias is measured by the difference
between normalized probability scores for two binary and opposing social group words.
LPBS(S) = log( p_{a_i} / p_{prior_i} ) − log( p_{a_j} / p_{prior_j} )    (8)
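A minimal sketch of LPBS with a BERT-style masked language model via Hugging Face transformers (the model name and templates are illustrative choices; the target words are assumed to be single vocabulary tokens):

```python
import math

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

NAME = "bert-base-uncased"  # illustrative masked LM; any BERT-style model works
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForMaskedLM.from_pretrained(NAME).eval()

def mask_prob(sentence, target):
    # Probability of `target` at the first [MASK] position of `sentence`.
    inputs = tok(sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tok.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_pos], dim=-1)
    return probs[tok.convert_tokens_to_ids(target)].item()

def lpbs(neutral_attribute, word_i, word_j):
    # Equation 8: difference of prior-normalized log-probabilities.
    template = f"[MASK] is a {neutral_attribute}."
    prior_template = "[MASK] is a [MASK]."  # prior uses the first mask
    score_i = math.log(mask_prob(template, word_i) / mask_prob(prior_template, word_i))
    score_j = math.log(mask_prob(template, word_j) / mask_prob(prior_template, word_j))
    return score_i - score_j

print(lpbs("nurse", "he", "she"))
```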
Categorical Bias Score (Ahn and Oh 2021) adapts Kurita et al. (2019)’s normalized
log probabilities to non-binary targets. This metric measures the variance of predicted
tokens for fill-in-the-blank template prompts over corresponding protected attribute
words a for different social groups:
CBS(S) = (1 / |W|) Σ_{w∈W} Var_{a∈A} log( p_a / p_prior )    (9)
PLL(S) = Σ_{s∈S} log P(s | S_{\s}; θ)    (10)
PLL approximates the probability of a token conditioned on the rest of the sentence
by masking one token at a time and predicting it using all the other unmasked tokens.
CrowS-Pairs Score (Nangia et al. 2020), presented with the CrowS-Pairs dataset, re-
quires pairs of sentences, one stereotyping and one less stereotyping, and leverages PLL
to evaluate the model’s preference for stereotypical sentences. For pairs of sentences,
the metric approximates the probability of shared, unmodified tokens U conditioned
on modified, typically protected attribute tokens M, given by P(U|M, θ ), by masking
and predicting each unmodified token. For a sentence S, the metric is given by:
CPS(S) = Σ_{u∈U} log P(u | U_{\u}, M; θ)    (11)
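A minimal sketch of pseudo-log-likelihood scoring for a sentence pair, again with a masked language model from transformers (for brevity this scores every token; CrowS-Pairs restricts scoring to the tokens shared by both sentences):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

NAME = "bert-base-uncased"  # illustrative masked LM
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForMaskedLM.from_pretrained(NAME).eval()

def pseudo_log_likelihood(sentence, positions=None):
    # Mask one token at a time and sum log P(token | rest), per Equation 10.
    # CrowS-Pairs (Equation 11) restricts `positions` to the tokens shared by
    # both sentences of a pair; by default every token is scored.
    ids = tok(sentence, return_tensors="pt").input_ids
    positions = positions or range(1, ids.shape[1] - 1)  # skip [CLS] and [SEP]
    total = 0.0
    for pos in positions:
        masked = ids.clone()
        masked[0, pos] = tok.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked).logits
        log_probs = torch.log_softmax(logits[0, pos], dim=-1)
        total += log_probs[ids[0, pos]].item()
    return total

sentence_he = "His mind wondered if a doctor was behind this mess, and whether he would come forward."
sentence_she = "His mind wondered if a doctor was behind this mess, and whether she would come forward."
# Equation 16 compares the two scores: does the model prefer one pronoun for the doctor?
print(int(pseudo_log_likelihood(sentence_he) > pseudo_log_likelihood(sentence_she)))
```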
Context Association Test (CAT) (Nadeem, Bethke, and Reddy 2021), introduced with
the StereoSet dataset, also compares sentences. Similar to pseudo-log-likelihood, each
1119
Computational Linguistics Volume 50, Number 3
CAT(S) = (1 / |M|) Σ_{m∈M} log P(m | U; θ)    (12)
Idealized CAT (iCAT) Score can be calculated from the same stereotype, anti-
stereotype, and meaningless sentence options. Given a language modeling score (lms)
that calculates the percentage of instances that the model prefers a meaningful sentence
option over a meaningless one, as well as a stereotype score (ss) that calculates the per-
centage of instances that the model prefers a stereotype option over an anti-stereotype
one, Nadeem, Bethke, and Reddy (2021) define an idealized language model to have a
language modeling score equal to 100 (i.e., it always chooses a meaningful option) and a
stereotype score of 50 (i.e., it chooses an equal number of stereotype and anti-stereotype
options).
All Unmasked Likelihood (AUL) (Kaneko and Bollegala 2022) extends the CrowS-Pair
Score and CAT to consider multiple correct candidate predictions. While pseudo-log-
likelihood and CAT consider a single correct answer for a masked test example, AUL
provides an unmasked sentence to the model and predicts all tokens in the sentence. The
unmasked input provides the model with all information to predict a token, which can
improve the prediction accuracy of the model, and avoids selection bias in the choice of
which words to mask.
AUL(S) = (1 / |S|) Σ_{s∈S} log P(s | S; θ)    (14)
Kaneko and Bollegala (2022) also provide a variation dubbed AUL with Attention
Weights (AULA) that considers attention weights to account for different token impor-
tances. With αi as the attention associated with si , AULA is given by:
AULA(S) = (1 / |S|) Σ_{s_i∈S} α_i log P(s_i | S; θ)    (15)
For CPS, CAT, AUL, and AULA, and for stereotyping sentences S1 and less- or anti-
stereotyping sentences S2 , the bias score can be computed as:
bias_{f∈{CPS, CAT, AUL, AULA}}(S) = I( f(S1) > f(S2) )    (16)
where I is the indicator function. Averaging over all sentences, an ideal model should
achieve a score of 0.5.
Now we discuss approaches for evaluating bias and fairness in the text generated by LLMs. These metrics are especially useful when an LLM is treated as a black box, where it may not be possible to access the model's probabilities or embeddings directly. Even when such access is available, it can be useful to evaluate the generated text directly.
For evaluation of the bias of an LLM, the standard approach is to condition the model on a given prompt and have it generate a continuation, which is then evaluated for bias. This approach leverages a set of prompts that are known to contain or elicit bias or toxicity. There are many such datasets, such as Real-
ToxicityPrompts (Gehman et al. 2020) and BOLD (Dhamala et al. 2021), while other stud-
ies use templates with perturbed social groups. Intuitively, the prompts are expected
to lead to generating text that is biased or toxic in nature, or semantically different
for different groups, especially if the model does not sufficiently employ mitigation
techniques to handle this bias issue. We outline a number of metrics that evaluate a language model's text generation conditioned on these prompts, and show examples of each class of technique in Figure 5.

Figure 5
Example generated text-based metrics (§ 3.5). Generated text-based metrics analyze free-text output from a generative model. Distribution metrics compare associations between neutral words and demographic terms, such as with co-occurrence measures, as shown here. An unbiased model should have a distribution of co-occurrences that matches a reference distribution, such as the uniform distribution. Classifier metrics compare the toxicity, sentiment, or other classification of outputs, with an unbiased model having similarly classified outputs when the social group of an input is perturbed. Lexicon metrics compare each word in the output to a pre-compiled list of words, such as derogatory language (i.e., “@&!,” “#$!”) in this example, to generate a bias score. As with classifier metrics, outputs corresponding to the same input with a perturbed social group should have similar scores.
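A minimal sketch of this prompt-continuation protocol using the Hugging Face transformers text-generation pipeline (the model name, sampling settings, and scoring function are illustrative assumptions, not prescribed by any particular metric):

```python
from transformers import pipeline

# Illustrative generator; any autoregressive LLM could stand in here.
generator = pipeline("text-generation", model="gpt2")

def continuations(prompt, k=25):
    # Sample k continuations of the prompt, stripping the prompt itself.
    outputs = generator(prompt, max_new_tokens=20, do_sample=True,
                        num_return_sequences=k, return_full_text=False)
    return [o["generated_text"] for o in outputs]

def evaluate(prompts, score_fn):
    # `score_fn` is a stand-in for any classifier c(.) from Section 3.5.2,
    # e.g. a toxicity or sentiment model mapping text to a [0, 1] score.
    return {p: [score_fn(y) for y in continuations(p)] for p in prompts}
```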
3.5.1 Distribution Metrics. Bias may be detected in generated text by comparing the
distribution of tokens associated with one social group to those associated with another
group. As one of the coarsest measures, Social Group Substitutions (SGS) requires
that the response from an LLM be identical under demographic substitutions. For an
invariance metric ψ such as exact match (Rajpurkar et al. 2016), and predicted outputs
Ŷi from an original input and Ŷj from a counterfactual input, then:
SGS(Ŷ) = ψ( Ŷi, Ŷj )    (17)
This metric may be overly stringent, however. Other metrics instead look at the
distribution of terms that appear near social group terms. One common measure
is the Co-Occurrence Bias Score (Bordia and Bowman 2019), which measures the
co-occurrence of tokens with gendered words in a corpus of generated text. For a token
w and two sets of attribute words Ai and Aj , the bias score for each word is given by:
Co-Occurrence Bias Score(w) = log( P(w | Ai) / P(w | Aj) )    (18)
with a score of zero for words that co-occur equally with feminine and masculine gen-
dered words. In a similar vein, Demographic Representation (DR) (Liang et al. 2022)
compares the frequency of mentions of social groups to the original data distribution.
Let C(x, Y) be the count of how many times word x appears in the sequence Y. For each
group Gi ∈ G with associated protected attribute words Ai , the count DR(Gi ) is
DR(Gi) = Σ_{ai∈Ai} Σ_{Ŷ∈Ŷ} C(ai, Ŷ)    (19)
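A minimal sketch of these two distribution metrics over a corpus of generated text, assuming whitespace tokenization and a fixed co-occurrence window (both illustrative simplifications):

```python
import math
from collections import Counter

def co_occurrence_bias_score(generations, attr_i, attr_j, w, window=10):
    # Equation 18: log ratio of w's rate of co-occurrence with the two
    # attribute word lists, counted inside a fixed-size context window.
    hits, totals = Counter(), Counter()
    for text in generations:
        tokens = text.lower().split()
        for idx, token in enumerate(tokens):
            group = "i" if token in attr_i else "j" if token in attr_j else None
            if group is None:
                continue
            context = tokens[max(0, idx - window): idx + window + 1]
            totals[group] += len(context)
            hits[group] += context.count(w)
    p_i = hits["i"] / max(totals["i"], 1)
    p_j = hits["j"] / max(totals["j"], 1)
    return math.log(p_i / p_j)  # assumes w co-occurs with both groups

def demographic_representation(generations, attribute_words):
    # Equation 19: total count of a group's attribute words across generations.
    return sum(text.lower().split().count(a)
               for text in generations for a in attribute_words)
```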
3.5.2 Classifier Metrics. Classifier-based metrics rely on an auxiliary model to score gen-
erated text outputs for their toxicity, sentiment, or any other dimension of bias. Bias can
be detected if text generated from similar prompts, but with different social groups, is
classified differently. One prominent direction of research has been in toxicity detection.
Perspective API (https://perspectiveapi.com), developed by Google Jigsaw, is a toxicity detection tool widely used
in the literature (Liang et al. 2022; Chung et al. 2022; Chowdhery et al. 2022; Gehman
et al. 2020). Given a text generation, Perspective API outputs a toxicity probability.
For instance, to score generated text from any LLM, Gehman et al. (2020) leverage
Expected Maximum Toxicity (EMT), computed over 25 generations and reported with its mean and standard deviation. This metric characterizes the worst-case generations of an LLM.
With Toxicity Probability (TP), they measure the empirical probability of generating
at least one text with a toxicity score ≥ 0.5 over the 25 generations, which captures
how frequently the LLM generates text that is toxic. Other groups have used the Toxic
Fraction (TF), defined as the fraction of generations that are toxic (Liang et al. 2022). For
a toxicity detector c : Ŷ → [0, 1]:
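A minimal sketch of these per-prompt aggregates, assuming the toxicity scores have already been obtained from such a detector (e.g., Perspective API); EMT and TP are then averaged over the full prompt set:

```python
import numpy as np

def toxicity_metrics(scores):
    # `scores` holds the classifier outputs c(Y) in [0, 1] for the generations
    # sampled from a single prompt (e.g., 25 continuations).
    scores = np.asarray(scores)
    emt = scores.max()                 # Expected Maximum Toxicity (per prompt)
    tp = float((scores >= 0.5).any())  # Toxicity Probability (per prompt)
    tf = float((scores >= 0.5).mean()) # Toxicity Fraction
    return emt, tp, tf
```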
Counterfactual Sentiment Bias (Huang et al. 2020) similarly compares the sentiment
of two sentences, generated with counterfactual prompts with a protected attribute
replaced. This metric uses the Wasserstein-1 distance W1 (· ) between the sentiment
distributions from some classifier c : Ŷ → [0, 1]:
Counterfactual Sentiment Bias(Ŷ) = W1( P(c(Ŷi) | A = i), P(c(Ŷj) | A = j) )    (25)
Full Gen Bias(Ŷ) = Σ_{i=1}^{C} Var_{w∈W}( (1 / |Ŷ_w|) Σ_{Ŷ_w∈Ŷ_w} c(Ŷ_w)[i] )    (26)
To control for different style differences across templates, Full Gen Bias can be computed
separately for each prompt template and averaged.
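Two of the counterfactual classifier metrics, Score Parity and Counterfactual Sentiment Bias (Equation 25; see also Table 3), can be sketched directly from two lists of classifier scores, one per counterfactual group; a minimal illustration using scipy's Wasserstein-1 distance:

```python
from statistics import mean
from scipy.stats import wasserstein_distance

def score_parity(scores_i, scores_j):
    # Absolute difference of mean classifier scores across the two groups.
    return abs(mean(scores_i) - mean(scores_j))

def counterfactual_sentiment_bias(scores_i, scores_j):
    # Equation 25: Wasserstein-1 distance between the two score distributions,
    # which is sensitive to distributional differences beyond the mean.
    return wasserstein_distance(scores_i, scores_j)
```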
In this vein, a classifier may be trained to target specific dimensions of bias not
captured by a standard toxicity or sentiment classifier. HeteroCorpus (Vásquez et al.
3.5.3 Lexicon Metrics. Lexicon-based metrics perform a word-level analysis of the gener-
ated output, comparing each word to a pre-compiled list of harmful words, or assigning
each word a pre-computed bias score. HONEST (Nozza, Bianchi, and Hovy 2021)
measures the number of hurtful completions. For identity-related template prompts and
the top-k completions Ŷk , the metric calculates how many completions contain words
in the HurtLex lexicon (Bassignana et al. 2018), given by:
HONEST(Ŷ) = ( Σ_{Ŷ_k∈Ŷ_k} Σ_{ŷ∈Ŷ_k} I_HurtLex(ŷ) ) / ( |Ŷ| · k )    (27)
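A minimal sketch of this count, assuming whitespace tokenization and the HurtLex lexicon flattened into a set of words (both simplifications of the original implementation):

```python
def honest_score(top_k_completions, hurtlex, k):
    # top_k_completions: one list of k completions per prompt.
    # hurtlex: a flat set of hurtful words (HurtLex is categorized;
    # collapsing it to a single set is a simplification).
    hurtful = sum(
        any(word.lower().strip(".,!?") in hurtlex for word in completion.split())
        for completions in top_k_completions
        for completion in completions
    )
    return hurtful / (len(top_k_completions) * k)
```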
Psycholinguistic Norms (Dhamala et al. 2021), presented with the BOLD dataset, lever-
age numeric ratings of words by expert psychologists. The metric relies on a lexicon
where each word is assigned a value that measures its affective meaning, such as
dominance, sadness, or fear. To measure the text-level norms, this metric takes the
weighted average of all psycholinguistic values:
Psycholinguistic Norms(Ŷ) = ( Σ_{Ŷ∈Ŷ} Σ_{ŷ∈Ŷ} sign(affect-score(ŷ)) affect-score(ŷ)^2 ) / ( Σ_{Ŷ∈Ŷ} Σ_{ŷ∈Ŷ} |affect-score(ŷ)| )    (28)
Gender Polarity (Dhamala et al. 2021), also introduced with BOLD, measures the
amount of gendered words in a generated text. A simple version of this metric counts
and compares the number of masculine and feminine words, defined by a word list,
in the text. To account for indirectly gendered words, the metric relies on a lexicon of
bias scores, derived from static word embeddings projected into a gender direction in
the embedding space. Similar to psycholinguistic norms, the bias score is calculated as
a weighted average of bias scores for all words in the text:
Gender Polarity(Ŷ) = ( Σ_{Ŷ∈Ŷ} Σ_{ŷ∈Ŷ} sign(bias-score(ŷ)) bias-score(ŷ)^2 ) / ( Σ_{Ŷ∈Ŷ} Σ_{ŷ∈Ŷ} |bias-score(ŷ)| )    (29)
Cryan et al. (2020) introduce a similar Gender Lexicon Dataset, which also assigns a
gender score to over 10,000 verbs and adjectives.
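Both lexicon metrics reduce to the same signed, weighted average over per-word scores; a minimal sketch, assuming the lexicon is available as a word-to-score mapping and using whitespace tokenization:

```python
def weighted_lexicon_score(generations, lexicon):
    # Equations 28-29: signed, squared-magnitude weighted average of per-word
    # lexicon values (affect scores or gender-polarity scores), so that
    # strongly scored words dominate the text-level score.
    num = den = 0.0
    for text in generations:
        for word in text.lower().split():
            score = lexicon.get(word, 0.0)
            num += (1.0 if score >= 0 else -1.0) * score ** 2
            den += abs(score)
    return num / den if den else 0.0
```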
3.5.4 Discussion and Limitations. Akyürek et al. (2022) discuss how modeling choices can
significantly shift conclusions from generated text bias metrics. For instance, decoding
parameters, including the number of tokens generated, the temperature for sampling,
and the top-k choice for beam search, can drastically change the level of bias, which can
lead to contradicting results for the same metric with the same evaluation datasets, but
different parameter choices. Furthermore, the impact of decoding parameter choices on
generated text-based metrics may be inconsistent across evaluation datasets. At the very
least, metrics should be reported with the prompting set and decoding parameters for
transparency and clarity.
3.6 Recommendations
We synthesize findings and guidance from the literature to make the following rec-
ommendations. For more detailed discussion and limitations, see Sections 3.3.3, 3.4.3,
and 3.5.4.
Given the limitations of the existing metrics, it may be necessary to develop new
evaluation strategies that are explicitly and theoretically grounded in the sociolinguistic
mechanism of bias the metric seeks to measure. In constructing new metrics, we reiter-
ate Cao et al.’s (2022b) desiderata for measuring stereotypes, which can be extended to
other forms of bias: (1) natural generalization to previously unconsidered groups; (2)
grounding in social science theory; (3) exhaustive coverage of possible stereotypes (or
other biases); (4) natural text inputs to the model; and (5) specific, as opposed to abstract,
instances of stereotypes (or other biases).
In this section, we present datasets used in the literature for the evaluation of bias and
unfairness in LLMs. We provide a taxonomy of datasets organized by their structure,
which can guide metric selection. In Table 4, we summarize each dataset by the bias
issue it addresses and the social groups it targets.
To enable easy use of this wide range of datasets, we compile publicly available
ones and provide access here:
https://github.com/i-gallegos/Fair-LLM-Benchmark
Pairs or tuples of sentences can highlight differences in model predictions across social
groups. Pairs are typically used to represent a counterfactual state, formed by perturb-
ing a social group in a sentence while maintaining all other words and preserving the
semantic meaning. A significant change in the model’s output—in the probabilities of
predicted tokens, or in a generated continuation—can indicate bias.
We organize counterfactual input datasets into two categories: masked tokens,
which ask a model to predict the most likely word, and unmasked sentences, which ask a model to predict the most likely sentence. We categorize methods as they were
originally proposed, but note that each type of dataset can be adapted to one another.
Masked tokens can be instantiated to form complete sentences, for instance, and social
group terms can be masked out of complete sentences to form masked inputs.
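A minimal sketch of this perturbation step (the substitution word lists and the casing handling are illustrative; datasets such as PANDA rely on annotation and more careful rewriting):

```python
import re

def perturb(sentence, substitutions):
    # Create the counterfactual member of a pair by swapping social group
    # terms (e.g., {"he": "she", "his": "her"}) while leaving all other
    # words, and hence the semantics, unchanged. Casing is preserved naively.
    pattern = r"\b(" + "|".join(map(re.escape, substitutions)) + r")\b"
    def swap(match):
        word = match.group(0)
        repl = substitutions[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    return re.sub(pattern, swap, sentence, flags=re.IGNORECASE)

pair = ("The engineer said he would need more time.",
        perturb("The engineer said he would need more time.", {"he": "she"}))
```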
4.1.1 Masked Tokens. Masked token datasets contain sentences with a blank slot that
the language model must fill. Typically, the fill-in-the-blank options are pre-specified,
such as he/she/they pronouns, or stereotypical and anti-stereotypical options. These
datasets are best suited for use with masked token probability-based metrics (Sec-
tion 3.4.1), or with pseudo-log-likelihood metrics (Section 3.4.2) to assess the probability
of the masked token given the unmasked ones. With multiple-choice options, standard
metrics like accuracy may also be utilized.
One of the most prominent classes of these datasets is posed for coreference resolu-
tion tasks. The Winograd Schema Challenge was first introduced by Levesque, Davis,
and Morgenstern (2012) as an alternative to the Turing Test. Winograd schemas present
two sentences, differing only in one or two words, and ask the reader (human or
machine) to disambiguate the referent of a pronoun or possessive adjective, with a
different answer for each of the two sentences. Winograd schemas have since been
adapted for bias evaluation to measure words’ associations with social groups, most
Table 4
Taxonomy of datasets for bias evaluation in LLMs. For each dataset, we show the number of
instances in the dataset, the bias issue(s) they measure, and the group(s) they target. Black
checks indicate explicitly stated issues or groups in the original work, while grey checks show
additional use cases. For instance, while Winograd schema for bias evaluation assess
gender-occupation stereotypes, (i) the stereotypes often illustrate a misrepresentation of gender
roles, (ii) the model may have disparate performance for identifying male versus female pronouns,
and (iii) defaulting to male pronouns, for example, reinforces exclusionary norms. Similarly,
sentence completions intended to measure toxicity can trigger derogatory language.
Dataset Size Bias Issue Targeted Social Group
Bias Issue columns: Derogatory Language, Disparate Performance, Exclusionary Norms, Misrepresentation, Stereotyping, Toxicity.
Targeted Social Group columns: Age, Disability, Gender (Identity), Nationality, Physical Appearance, Race, Religion, Sexual Orientation, Other†.
COUNTERFACTUAL INPUTS (§ 4.1)
MASKED TOKENS (§ 4.1.1)
Winogender 720 X X X X X
WinoBias 3,160 X X X X X
WinoBias+ 1,367 X X X X X
GAP 8,908 X X X X X
GAP-Subjective 8,908 X X X X X
BUG 108,419 X X X X X
StereoSet 16,995 X X X X X X X
BEC-Pro 5,400 X X X X X
UNMASKED SENTENCES (§ 4.1.2)
CrowS-Pairs 1,508 X X X X X X X X X X X X
WinoQueer 45,540 X X X X
RedditBias 11,873 X X X X X X X X
Bias-STS-B 16,980 X X X
PANDA 98,583 X X X X X X
Equity Evaluation Corpus 4,320 X X X X X
Bias NLI 5,712,066 X X X X X X
PROMPTS (§ 4.2)
SENTENCE COMPLETIONS (§ 4.2.1)
RealToxicityPrompts 100,000 X X X
BOLD 23,679 X X X X X X X
HolisticBias 460,000 X X X X X X X X X X X X
TrustGPT 9* X X X X X X
HONEST 420 X X X X
QUESTION-ANSWERING (§ 4.2.2)
BBQ 58,492 X X X X X X X X X X X X X
UnQover 30* X X X X X X X
Grep-BiasIR 118 X X X X
*These datasets provide a small number of templates that can be instantiated with an appropriate word list.
† Examples of other social axes include socioeconomic status, political ideology, profession, and culture.
prominently with Winogender (Rudinger et al. 2018) and WinoBias (Zhao et al. 2018),
with the form (with an example from Winogender):
The engineer informed the client that [MASK: she/he/they] would need more
time to complete the project.
where [MASK] may be replaced by she, he, or they. WinoBias measures stereotypi-
cal gendered associations with 3,160 sentences over 40 occupations. Some sentences
4.1.2 Unmasked Sentences. In contrast to masked tokens datasets, which ask the model
which token is more likely with a fill-in-the-blank task, unmasked sentence datasets ask
the model which sentence in a pair is most likely. The framing allows similar metrics
to those suited for masked tokens datasets, particularly pseudo-log-likelihood metrics
(Section 3.4.2) to measure the likelihood of a sentence given differing demographic
terms. Note, though, that these datasets may offer flexibility to other metrics; for in-
stance, sentence pairs can prompt two text generations, which can be compared with
generated text-based metrics (Section 3.5).
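As a sketch of this usage, the following pseudo-log-likelihood comparison (assuming the Hugging Face transformers library; the model name and the placeholder sentence pair are illustrative, not drawn from any specific dataset) masks each token in turn, sums the log-probability of the original token, and then compares the two sentences in a pair.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log P(token | rest of sentence), masking one token at a time."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip the special tokens at the ends
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Placeholder stereotype/counterfactual pair; a real evaluation iterates over a dataset.
pair = ("Sentence mentioning a disadvantaged group.",
        "Sentence mentioning an advantaged group.")
scores = [pseudo_log_likelihood(s) for s in pair]
print("Pseudo-log-likelihood difference:", scores[0] - scores[1])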
Crowdsourced Stereotype Pairs (CrowS-Pairs) (Nangia et al. 2020) provides 1,508
sentence pairs to evaluate stereotypes of historically disadvantaged social groups. Each
pair consists of one sentence demonstrating a stereotype, and the other sentence replac-
ing the disadvantaged group with an advantaged one ("anti-stereotype"). The dataset
covers nine types of biases: race, gender, sexual orientation, religion, age, nationality,
disability, physical appearance, and socioeconomic status. For example:
Several other sentence pair datasets follow similar forms. Equity Evaluation Corpus
(Kiritchenko and Mohammad 2018) contains 8,640 sentences to measure differences
in sentiment towards gender and racial groups. The sentences are generated from
templates instantiated with person and emotional state words, with tuples containing
the same words except for the person term. RedditBias (Barikeri et al. 2021) introduces
a conversational dataset generated from Reddit conversations to assess stereotypes be-
tween dominant and minoritized groups along the dimensions of gender, race, religion,
and queerness. The dataset contains 11,873 sentences constructed by querying Reddit
for comments that contain pre-specified sets of demographic and descriptor words,
with human annotation to indicate the presence of negative stereotypes. To evaluate
for bias, counterfactual sentence pairs are formed by replacing demographic terms with
alternative groups. HolisticBias (Smith et al. 2022) contains 460,000 sentence prompts
corresponding to 13 demographic axes with nearly 600 associated descriptor terms,
generated with a participatory process with members of the social groups. Each sen-
tence contains a demographic descriptor term in a conversational context, formed from
sentence templates with inserted identity words. WinoQueer (Felkner et al. 2023) is a
community-sourced dataset of 45,540 sentence pairs to measure anti-LGBTQ+ stereo-
types, curated by surveying members of the LGBTQ+ community. Each pair contains
a sentence mentioning a LGBTQ+ identity descriptor, and a counterfactual version
with a non-LGBTQ+ identity. Bias-STS-B (Webster et al. 2020) adapts the original
Semantic Textual Similarity-Benchmark (STS-B) (Cer et al. 2017) to generate pairs of
sentences differing only in gendered terms, but otherwise maintaining the same mean-
ing for sentences in a pair. PANDA (Qian et al. 2022) introduces a dataset of 98,583
text perturbations for gender, race/ethnicity, and age groups, with pairs of sentences
with a social group changed but the semantic meaning preserved. PANDA includes
annotations for the perturbed demographic words. Though originally proposed for fine-
tuning, the dataset can also be used to assess robustness to demographic perturbation,
where a fair model produces two invariant outputs given an input sentence and its
perturbation.
Bias NLI (Dev et al. 2020) alternatively probes for bias through inference tasks. The
dataset presents sentence pairs that should not entail nor contradict each other, but may
be predicted to imply one another if representational biases are present in a model.
The dataset instantiates the template ‘‘the [subject] [verb] a/an [object]’’,
4.1.3 Discussion and Limitations. Blodgett et al. (2021) highlight severe shortcomings of
several counterfactual input datasets, namely Winogender, WinoBias, StereoSet,
and CrowS-Pairs.4 In nearly half of all instances, these datasets contain ambiguities
about what stereotypes they capture, raising questions about whether they are valid indicators
of real-world stereotypes at all. At the core of these issues are (1) unclear articulation of
the power imbalances, harm, or other stereotype content present in each instance; and
(2) inconsistent, invalid, or unrelated perturbations of social groups. For example, it
is unclear how racial bias against Ethiopia is captured by StereoSet’s stereotype, anti-
stereotype pair:
Selvam et al. (2023) have also raised questions about the validity of these types of bias
evaluation benchmarks. Coreference resolution tasks, for instance, require nuanced and
near-perfect understanding of language, and resolving "he" versus "she" may reflect
overall model behavior (e.g., spurious correlations) that neither clearly isolates nor
demonstrates bias. In fact, Selvam et al. (2023) show that small changes to the
Winogender and Bias NLI datasets that do not meaningfully change semantics, such
as synonym substitutions or adjective additions, can drastically change bias scores.
Beyond data reliability, these datasets may also have limited generalizability to
broader populations. These datasets are often situated in the United States context—
for instance, occupation-gender datasets like Winogender, WinoBias, WinoBias+, and
BEC-Pro leverage data from the U.S. Department of Labor—yet are offered as general
benchmarks for English everywhere. Datasets constructed by instantiating
templates with protected attribute or other words may also lack diversity and may be
unrepresentative of real use cases of the models.
Finally, these datasets may capture narrow notions of fairness. The heavy emphasis
on gender groups and occupational associations captures only one of innumerable
forms of bias. Evaluation of unmasked sentence pairs may also be difficult to interpret,
particularly if neither or both choices have high probability. Moreover, the task of
choosing between pronouns or answer options does not directly capture how likely the
model is to reproduce such biased sentences on its own, which may limit these datasets’
value for downstream applications.
4.2 Prompts
Though some sentence pair datasets can be adapted as input sentences to condition text
continuation, other datasets have been proposed as prompts that specify the start of a
sentence, or pose a question, and ask the model to provide a continuation or answer.
These datasets are best suited for use with generated text-based metrics (Section 3.5).
4 As a result of Blodgett et al. (2021), CrowS-Pairs now includes a data reliability disclaimer at
https://github.com/nyu-mll/crows-pairs.
4.2.1 Sentence Completions. Sentence completion datasets contain the starts of sentences,
which can then be completed by the LLM. RealToxicityPrompts (Gehman et al. 2020)
and Bias in Open-Ended Language Generation Dataset (BOLD) (Dhamala et al. 2021)
both provide Web-based sentence prefixes, such as (with an example from BOLD):
Instead of using contrived bias triggers, as is often done in counterfactual input datasets,
these datasets aim to match the distribution of human-written text to model more
natural language use and potentially untargeted biases. RealToxicityPrompts is one of
the largest prompting datasets, providing 100,000 sentence prefixes curated from Web
text with a toxicity score by Perspective API, which can be used to measure the toxicity
of generations given both toxic and non-toxic prompts. To create the dataset, a set of
Web-scraped sentences are scored for toxicity, and 25K sentences are sampled from
each of four quartiles, then split into a prompt (used in the dataset) and a continuation.
BOLD introduces 23,679 prompts to assess bias in profession, gender, race, religion,
and political ideology. The dataset is collected by scraping English Wikipedia pages
that mention a group in the bias domain (i.e., profession), and truncating sentences
to form prompts. HONEST (Nozza, Bianchi, and Hovy 2021) provides 420 sentences
to measure negative gender stereotypes in sentence completions in English, Italian,
French, Portuguese, Spanish, and Romanian. Each sentence follows a cloze-based form,
with a gendered identity term in the prompt, that can be completed by prompting a free-
text continuation. Note that a completion may also be done by replacing a single [MASK]
token, which enables compatibility with masked token-based metrics (Section 3.4.1), in
addition to generated text-based ones. TrustGPT (Huang et al. 2023) provides prompts
to evaluate toxicity and performance disparities between social groups. For toxicity
evaluation, prompts ask the model to state something toxic or harmful, given some
social norm; the completion can be scored for its level of toxicity. For disparity assess-
ment, prompts ask for a toxic completion given both a social group and social norm;
completions can be compared between social groups to measure disparities in toxicity
levels.
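To make the evaluation loop concrete, the sketch below (assuming the Hugging Face transformers library for generation; the model name, the prompts, and the trivial word-list scorer are illustrative stand-ins, with the scorer meant to be replaced by a real toxicity classifier or scoring service) generates completions for group-specific prompts and compares average toxicity between groups.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder causal LM

BLOCKLIST = {"hate", "stupid", "awful"}  # toy stand-in for a real toxicity scorer

def toxicity_score(text: str) -> float:
    """Illustrative scorer: fraction of blocklisted tokens; replace with a real classifier."""
    tokens = text.lower().split()
    return sum(t in BLOCKLIST for t in tokens) / max(len(tokens), 1)

prompts_by_group = {  # placeholder prompts mentioning different social groups
    "group_a": ["A sentence prefix mentioning group A"],
    "group_b": ["A sentence prefix mentioning group B"],
}

for group, prompts in prompts_by_group.items():
    completions = [generator(p, max_new_tokens=30)[0]["generated_text"] for p in prompts]
    scores = [toxicity_score(c) for c in completions]
    print(group, "mean toxicity:", sum(scores) / len(scores))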
4.2.2 Question-Answering. Question-answering datasets ask the model to select an answer
to a question, often given an ambiguous context, such as (with an example from BBQ):
A gay man and a lesbian woman walked into a clothing store. Who was more
likely to obsess over fashion? [The gay man/The lesbian woman/Not known]
as well as a disambiguated context that specifies the correct answer. Similar to BBQ's
ambiguous contexts, UnQover (Li et al. 2020) contains underspecified questions to as-
sess stereotypes across gender, nationality, ethnicity, and religion. While BBQ provides
a correct answer, all answers in UnQover indicate a stereotyping bias, because each
answer should be equally likely under an unbiased model. The dataset provides 30
templates that can be instantiated by subjects (e.g., names) and attributes (e.g., occupa-
tions). HolisticBias (Smith et al. 2022), described in Section 4.1, can also be used as a
prompting dataset, with several instances framed as questions.
With a related task, Gender Representation-Bias for Information Retrieval (Grep-
BiasIR) (Krieg et al. 2023) provides 118 gender-neutral search queries for document
retrieval to assess gender representation bias. Instead of providing associated answers
as done with question-answering, Grep-BiasIR pairs each query with a relevant and
non-relevant document with feminine, masculine, and neutral variations, with 708
documents in total. A disproportional retrieval of feminine or masculine documents
illustrates bias.
4.2.3 Discussion and Limitations. Akyürek et al. (2022) show that ambiguity may emerge
when one social group is mentioned in a prompt, and another is mentioned in the
completion, creating uncertainty about to whom the bias or harm should refer. In other
words, this over-reliance on social group labels can create misleading or incomplete
evaluations. Akyürek et al. (2022) suggest reframing prompts to introduce a situation,
instead of a social group, and then examining the completion for social group identifiers.
These datasets also suffer from data reliability issues, though to a lesser extent than
those discussed by Blodgett et al. (2021) (Liang et al. 2022).
4.3 Recommendations
We synthesize findings and guidance from the literature to make the following recom-
mendations. For more detailed discussion and limitations, see Sections 4.1.3 and 4.2.3.
Intra-processing techniques modify the behavior of the model without training or fine-tuning. Techniques that remove bias
and unfairness as a post-processing step focus on the outputs from a black box model,
without access to the model itself. We provide a summary of mitigation techniques
organized intuitively using the proposed taxonomy in Table 5.
Data Balancing. Data balancing approaches equalize representation across social groups.
Counterfactual data augmentation (CDA) is one of the primary augmentation
techniques (Lu et al. 2020; Qian et al. 2022; Webster et al. 2020; Zmigrod et al. 2019),
replacing protected attribute words, such as gendered pronouns, to achieve a balanced
dataset. In one of the first formalizations of this approach, Lu et al. (2020) use CDA
to mitigate occupation-gender bias, creating matched pairs by flipping gendered (e.g.,
“he” and “she”) or definitionally gendered (e.g., “king” and “queen”) words, while
preserving grammatical and semantic correctness, under the definition that an unbi-
ased model should consider each sentence in a pair equally. As described by Webster
et al. (2020), the CDA procedure can be one-sided, which uses only the counterfactual
sentence for further training, or two-sided, which includes both the counterfactual and
original sentence in the training data. Instead of using word pairs to form counterfactu-
als, Ghanbarzadeh et al. (2023) generate training examples by masking gendered words
and predicting a replacement with a language model, keeping the same label as the
original sentence for fine-tuning. As an alternative to CDA, Dixon et al. (2018) add
non-toxic examples for groups disproportionately represented with toxicity, until the
distribution between toxic and non-toxic examples is balanced across groups.
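As a minimal sketch of two-sided CDA (the word-pair list below is a tiny illustrative subset, and the simple regular-expression swap inherits the known limitations of word-pair substitution, such as grammatical and referential errors):

import re

# Illustrative gendered word pairs; real lists are far more extensive.
WORD_PAIRS = {"he": "she", "she": "he", "him": "her", "her": "him",
              "his": "her", "man": "woman", "woman": "man",
              "king": "queen", "queen": "king"}

def counterfactual(sentence: str) -> str:
    """Swap each gendered term for its counterpart, preserving capitalization."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = WORD_PAIRS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(WORD_PAIRS) + r")\b"
    return re.sub(pattern, swap, sentence, flags=re.IGNORECASE)

corpus = ["He is a doctor and his sister is a nurse."]
# Two-sided CDA keeps both the original and the counterfactual sentence;
# one-sided CDA would fine-tune on the counterfactuals alone.
augmented = corpus + [counterfactual(s) for s in corpus]
print(augmented)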
Table 5
Taxonomy of techniques for bias mitigation in LLMs. We categorize bias mitigation techniques
by the stage at which they intervene. For an illustration of each mitigation stage, as well as
inputs and outputs to each stage, see Figure 6.
Mitigation Stage Mechanism
PRE-PROCESSING (§ 5.1): Data Augmentation (§ 5.1.1); Data Filtering & Reweighting (§ 5.1.2); Data Generation (§ 5.1.3); Instruction Tuning (§ 5.1.4); Projection-based Mitigation (§ 5.1.5)
IN-TRAINING (§ 5.2): Architecture Modification (§ 5.2.1); Loss Function Modification (§ 5.2.2); Selective Parameter Updating (§ 5.2.3); Filtering Model Parameters (§ 5.2.4)
INTRA-PROCESSING (§ 5.3): Decoding Strategy Modification (§ 5.3.1); Weight Redistribution (§ 5.3.2); Modular Debiasing Networks (§ 5.3.3)
POST-PROCESSING (§ 5.4): Rewriting (§ 5.4.1)
Figure 6
Mitigation stages of our taxonomy. We show the pathways at which pre-processing, in-training,
intra-processing, and post-processing bias mitigations apply to an LLM, which may be
pre-trained and fine-tuned. We illustrate each stage at a high level in (a), with the inputs and
outputs to each stage in more detail in (b). Pre-processing mitigations affect inputs (data and
prompts) to the model, taking an initial dataset D as input and outputting a modified dataset D′.
In-training mitigations change the training procedure, with an input model M's parameters
modified via gradient-based updates to output a less biased model M′. Intra-processing
mitigations change an already-trained model M′'s behavior without further training or
fine-tuning, but with access to the model, to output a less biased model M′′. Post-processing
mitigations modify initial model outputs Ŷ to produce less biased outputs Ŷ′, without access to
the model.
Figure 7
Example pre-processing mitigation techniques (§ 5.1). We provide examples of data
augmentation, filtering, re-weighting, and generation on the left, as well as various types of
instruction tuning on the right. The first example illustrates counterfactual data augmentation,
flipping binary gender terms to their opposites. Data filtering illustrates the removal of biased
instances, such as derogatory language (denoted as “@&!”). Reweighting demonstrates how
instances representing underrepresented or minority groups may be upweighted for training.
Data generation shows how new examples may be constructed by human or machine writers
based on priming examples that illustrate the desired standards for the new data. Instruction
tuning modifies the prompt fed to the model by appending additional tokens. In the first
example of modified prompting language, positive triggers are added to the input to condition
the model to generate more positive outputs (based on Abid, Farooqi, and Zou 2021 and
Narayanan Venkit et al. 2023). Control tokens in this example indicate the presence (+) or
absence (0) of masculine M or feminine F characters in the sentence (based on Dinan et al. 2020).
Continuous prompt tuning prepends the prompt with trainable parameters p1 , · · · , pm .
method by only augmenting with counterfactual examples that contribute most to gen-
der equity and filtering examples containing stereotypical gender associations.
5.1.2 Data Filtering and Reweighting. Though data augmentation is somewhat effective
for bias reduction, it is often limited by incomplete word pair lists, and can introduce
grammatical errors when swapping terms. Instead of adding new examples to a dataset,
data filtering and reweighting techniques target specific examples in an existing dataset
possessing some property, such as high or low levels of bias or demographic informa-
tion. The targeted examples may be modified by removing protected attributes, curated
by selecting a subset, or reweighted to indicate the importance of individual instances.
Dataset Filtering. The first class of techniques selects a subset of examples to increase
their influence during fine-tuning. Garimella, Mihalcea, and Amarnath (2022) and
Borchers et al. (2022) propose data selection techniques that consider underrepresented
or low-bias examples. Garimella, Mihalcea, and Amarnath (2022) curate and filter text
written by historically disadvantaged gender, racial, and geographical groups for fine-
tuning, to enable the model to learn more diverse world views and linguistic norms.
Borchers et al. (2022) construct a low-bias dataset of job advertisements by selecting the
10% least biased examples from the dataset, based on the frequency of words from a
gendered word list.
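A minimal sketch of this kind of frequency-based selection (the gendered word list, scoring rule, and threshold below are illustrative simplifications rather than the exact procedure of any cited work) is:

GENDERED_WORDS = {"he", "she", "him", "her", "his", "hers",
                  "man", "woman", "men", "women", "male", "female"}

def bias_score(document: str) -> float:
    """Fraction of tokens drawn from the gendered word list."""
    tokens = document.lower().split()
    return sum(t in GENDERED_WORDS for t in tokens) / max(len(tokens), 1)

def least_biased_subset(documents, fraction=0.10):
    """Keep the lowest-scoring fraction of documents for fine-tuning."""
    ranked = sorted(documents, key=bias_score)
    k = max(1, int(len(documents) * fraction))
    return ranked[:k]

docs = ["We seek a dynamic salesman; he must travel.",
        "We seek a motivated engineer to join the team."]
print(least_biased_subset(docs, fraction=0.5))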
In contrast, other data selection methods focus on the most biased examples to
neutralize or filter out. In a neutralizing approach for gender bias mitigation, Thakur
et al. (2023) curate a small, selective set of as few as 10 of the most biased examples,
generated by masking out gender-related words in candidate examples and
asking the pre-trained model to predict the masked words. For fine-tuning, the
authors replace gender-related words with neutral (e.g., “they”) or equalized (e.g., “he
or she”) alternatives. Using instead a filtering approach, Raffel et al. (2020) propose
a coarse word-level technique, removing all documents containing any words on a
blocklist. Given this technique can still miss harmful documents and disproportionately
filter out minority voices, however, others have offered more nuanced alternatives. As
an alternative filtering technique to remove biased documents from Web-scale datasets,
Ngo et al. (2021) append to each document a phrase representative of an undesirable
harm, such as racism or hate speech, and then use a pre-trained model to compute
the conditional log-likelihood of the modified documents. Documents with high log-
likelihoods are removed from the training set. Similarly, Sattigeri et al. (2022) estimate
the influence of individual training instances on a group fairness metric and remove
points with outsized influence on the level of unfairness before fine-tuning. Han,
Baldwin, and Cohn (2022a) downsample majority-class instances to balance the num-
ber of examples in each class with respect to some protected attribute.
As opposed to filtering instances from a dataset, filtering can also include pro-
tected attribute removal. Proxies, or words that frequently co-occur with demographic-
identifying words, may also provide stereotypical shortcuts to a model, in addition to
the explicit demographic indicators alone. Panda et al. (2022) present D-Bias to identify
proxy words via co-occurrence frequencies, and mask out identity words and their
proxies prior to fine-tuning.
Instance Reweighting. The second class of techniques reweights instances that should
be (de)emphasized during training. Han, Baldwin, and Cohn (2022a) use instance
reweighting to equalize the weight of each class during training, calculating each
instance’s weight in the loss as inversely proportional to its label and an associated
protected attribute. Other approaches utilized by Utama, Moosavi, and Gurevych (2020)
and Orgad and Belinkov (2023) focus on downweighting examples containing social
group information, even in the absence of explicit social group labels. Because bias
factors are often surface-level characteristics that the pre-trained model uses as simple
shortcuts for prediction, reducing the importance of stereotypical shortcuts may miti-
gate bias in fine-tuning. Utama, Moosavi, and Gurevych (2020) propose a self-debiasing
method that uses a shallow model trained on a small subset of the data to identify
potentially biased examples, which are subsequently downweighted by the main model
during fine-tuning. Intuitively, the shallow model can capture similar stereotypical
demographic-based shortcuts as the pre-trained model. Orgad and Belinkov (2023)
also use an auxiliary classifier in their method BLIND to identify demographic-laden
examples to downweight, but alternatively base the classifier on the predicted pre-
trained model’s success.
teacher (Ahn et al. 2022; Silva, Tambwekar, and Gombolay 2021). To mitigate this, the
teacher’s predicted token probabilities can be modified via reweighting before passing
them to the student model as a pre-processing step. Instead of reweighting training
instances, these methods reweight the pre-trained model’s probabilities. Delobelle and
Berendt (2022) propose a set of user-specified probabilistic rules that can modify the
teacher model’s outputs by equalizing the contextualized probabilities of two opposing
gendered words given the same context. Gupta et al. (2022) also modify the teacher
model’s next token probabilities by combining the original context with a counterfactual
context, with the gender of the context switched. This strategy aims to produce more
equitable teacher outputs from which the student model can learn.
Exemplary examples. New datasets can model the desired output behavior by providing
high-quality, carefully generated examples. Solaiman and Dennison (2021) present an
iterative process to build a values-targeted dataset that reflects a set of topics (e.g.,
legally protected classes in the United States) from which to remove bias from the
model. A human writer develops prompts and completions that reflect the desired
behavior, used as training data, and the data are iteratively updated based on validation
set evaluation performance. Also incorporating human writers, Dinan et al. (2020)
investigate targeted data collection to reduce gender bias in chat dialogue models by
curating human-written diversified examples, priming crowd workers with examples
and standards for the desired data. Sun et al. (2023a) construct example discussions that
demonstrate and explain facets of morality, including fairness, using rules-of-thumb
that encode moral principles and judgments. To train models that can appropriately
respond to and recover from biased input or outputs, Ung, Xu, and Boureau (2022)
generate a set of dialogues with example recovery statements, such as apologies, after
unsafe, offensive, or inappropriate utterances. Similarly, Kim et al. (2022) generate a
dataset of prosocial responses to biased or otherwise problematic statements based on
crowdsourced rules-of-thumb from the Social Chemistry dataset (Forbes et al. 2020) that
represent socio-normative judgments.
Word Lists. Word-swapping techniques like CDA and CDS rely on word pair lists.
Several studies have presented word lists associated with social groups for gender
(Bolukbasi et al. 2016; Garg et al. 2018; Gupta et al. 2022; Hall Maudslay et al. 2019;
Lu et al. 2020; Zhao et al. 2017, 2018), race (Caliskan, Bryson, and Narayanan 2017;
Garg et al. 2018; Gupta et al. 2022; Manzini et al. 2019), age (Caliskan, Bryson, and
Narayanan 2017), dialect (Ziems et al. 2022), and other social group terms (Dixon et al.
2018). However, reliance on these lists may limit the axes of stereotypes these methods
can address. To increase generality, Omrani et al. (2023) propose a theoretical frame-
work to understand stereotypes along the dimensions of “warmth” and “competence,”
as opposed to specific demographic or social groups. The work generates word lists
corresponding to the two categories, which can be used in place of group-based word
lists, such as gendered words, in bias mitigation tasks.
5.1.4 Instruction Tuning. In text generation, inputs or prompts may be modified to in-
struct the model to avoid biased language. By prepending additional static or trainable
tokens to an input, instruction tuning conditions the output generation in a controllable
manner. Modified prompts may be used to alter data inputs for fine-tuning, or contin-
uous prefixes themselves may be updated during fine-tuning; none of these techniques
alone, however, change the parameters of the pre-trained model without an additional
training step, and thus are considered pre-processing techniques.
Control Tokens. Instead of prepending instructive language to the input, control tokens
corresponding to some categorization of the prompt can be added. Because
the model learns to associate each control token with the class of inputs, the token
can be set at inference to condition the generation. Dinan et al. (2020), for instance,
mitigate gender bias in dialogue generation by binning each training example by the
presence or absence of masculine or feminine gendered words, and appending a control
token corresponding to the bin to each prompt. Xu et al. (2020) adapt this approach to
reduce offensive language in chatbot applications. The authors identify control tokens
using a classifier that measures offensiveness, bias, and other potential harms in text.
The control tokens can be appended to the input during inference to control model
generation. Similarly, Lu et al. (2022) score training examples with a reward function
that quantifies some unwanted property, such as toxicity or bias, which is used to
quantize the examples into bins. Corresponding reward tokens are prepended to the
input.
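A minimal sketch of control-token conditioning in the style described above (the word lists and token format are illustrative placeholders):

MASCULINE = {"he", "him", "his", "man", "men"}
FEMININE = {"she", "her", "hers", "woman", "women"}

def control_tokens(target_text: str) -> str:
    """Encode presence (+) or absence (0) of masculine/feminine words in the target."""
    tokens = set(target_text.lower().split())
    m = "M+" if tokens & MASCULINE else "M0"
    f = "F+" if tokens & FEMININE else "F0"
    return f"<{m}> <{f}>"

def add_controls(prompt: str, target: str) -> str:
    """Training-time input: the prompt plus control tokens derived from the target."""
    return f"{prompt} {control_tokens(target)}"

print(add_controls("Tell me about your day.", "She went to the lab with her advisor."))
# At inference, the control tokens can be set directly (e.g., "<M0> <F0>")
# to condition the model toward generations without gendered words.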
Continuous Prompt Tuning. Continuous prefix or prompt tuning (Lester, Al-Rfou, and
Constant 2021; Li and Liang 2021; Liu et al. 2021c) modifies the input with a trainable
prefix. This technique freezes all original pre-trained model parameters and instead
prepends additional trainable parameters to the input. Intuitively, the prepended tokens
represent task-specific virtual tokens that can condition the generation of the output
as before, but now enable scalable and tunable updates to task-specific requirements,
rather than manual prompt engineering. As a bias mitigation technique, Fatemi et al.
(2023) propose GEEP to use continuous prompt tuning to mitigate gender bias, fine-
tuning on a gender-neutral dataset. In Yang et al.’s (2023) ADEPT technique, continu-
ous prompts encourage neutral nouns and adjectives to be independent of protected
attributes.
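The sketch below illustrates the basic mechanics of continuous prompt tuning with a frozen pre-trained model (assuming PyTorch and the Hugging Face transformers library; the model name, prefix length, and single training sentence are placeholders, and a real debiasing setup would optimize over a curated dataset and objective such as those described above):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder causal LM
model = AutoModelForCausalLM.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False  # freeze all pre-trained parameters

num_virtual_tokens, hidden = 8, model.config.n_embd
prompt_embeds = torch.nn.Parameter(torch.randn(num_virtual_tokens, hidden) * 0.02)
optimizer = torch.optim.Adam([prompt_embeds], lr=1e-3)

def forward_with_prefix(text: str):
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    token_embeds = model.get_input_embeddings()(ids)                    # (1, T, H)
    inputs_embeds = torch.cat([prompt_embeds.unsqueeze(0), token_embeds], dim=1)
    # Ignore the virtual-token positions (-100) and predict the real tokens.
    labels = torch.cat([torch.full((1, num_virtual_tokens), -100), ids], dim=1)
    return model(inputs_embeds=inputs_embeds, labels=labels)

loss = forward_with_prefix("A gender-neutral training sentence.").loss
loss.backward()      # gradients flow only into the trainable prefix
optimizer.step()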
dimension of bias. The new embeddings can initialize the embeddings of a model
before fine-tuning. Though several debiasing approaches have been proposed for static
embeddings, we focus here only on contextualized embeddings used by LLMs.
Ravfogel et al. (2020) present Iterative Null-space Projection (INLP) to remove
bias from word embeddings by projecting the original embeddings onto the nullspace
of the bias terms. By learning a linear classifier parameterized by W that predicts a
protected attribute, the method constructs a projection matrix P that projects some
input x onto W’s nullspace, and then iteratively updates the classifier and projection
matrix. To integrate with a pre-trained model, W can be framed as the last layer in
the encoder network. Adapting INLP to a non-linear classifier, Iskander, Radinsky, and
Belinkov (2023) propose Iterative Gradient-Based Projection (IGBP), which leverages
the gradients of a neural protected attribute classifier to project representations to the
classifier’s class boundary, which should make the representations indistinguishable
with respect to the protected attribute. Liang et al. (2020) propose Sent-Debias to debias
contextualized sentence representations. The method places social group terms into
sentence templates, which are encoded to define a bias subspace. Bias is removed by
subtracting the projection onto the subspace from the original sentence representation.
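The core projection step of INLP can be sketched as follows (a single iteration with a linear classifier, assuming NumPy; full INLP re-trains the classifier and repeats the projection over several iterations):

import numpy as np

def nullspace_projection(W: np.ndarray) -> np.ndarray:
    """Projection matrix onto the nullspace of the rows of W (shape: classes x dim)."""
    # Orthonormal basis of the row space via SVD, then P = I - B^T B.
    _, s, vt = np.linalg.svd(W, full_matrices=False)
    basis = vt[s > 1e-10]
    return np.eye(W.shape[1]) - basis.T @ basis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))          # contextualized embeddings (placeholder)
W = rng.normal(size=(1, 16))            # weights of a trained attribute classifier
P = nullspace_projection(W)
X_debiased = X @ P                      # attribute direction removed
print(np.abs(X_debiased @ W.T).max())   # ~0: the classifier can no longer separate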
However, removing the concept of gender or any other protected attribute alto-
gether may be too aggressive and eliminate important semantic or grammatical in-
formation. To address this, Limisiewicz and Mareček (2022) distinguish a gender bias
subspace from the embedding space, without diminishing the semantic information
contained in gendered words like pronouns. They use an orthogonal transformation
to probe for gender information, and discard latent dimensions corresponding to bias,
while keeping dimensions containing grammatical gender information. In their method
OSCAR, Dev et al. (2021) also perform less-aggressive bias removal to maintain relevant
semantic information. They orthogonalize two directions that should be independent,
such as gender and occupation, while minimizing the change in the embeddings to
preserve important semantic meaning from gendered words.
5.1.6 Discussion and Limitations. Pre-processing mitigations may have limited effec-
tiveness and may rely on questionable assumptions. Data augmentation techniques
swap terms using word lists, which can be unscalable and introduce factuality errors
(Kumar et al. 2023b). Furthermore, word lists are often limited in length and scope,
may depend on proxies (e.g., names as a proxy for gender) that are often tied to
other social identities, and utilize word pairs that are not semantically or connotatively
equivalent (Devinney, Björklund, and Björklund 2022). Data augmentation methods can
be particularly problematic when they assume binary or immutable social groupings,
which is highly dependent on how social groups are operationalized, and when they
assume the interchangeability of social groups and ignore the complexities of the under-
lying, distinct forms of oppression. Merely masking or replacing identity words flattens
pertinent power imbalances, with a tenuous assumption that repurposing those power
imbalances towards perhaps irrelevant social groups addresses the underlying harm.
Diminishing the identity of the harmed group is an inadequate patch.
Data filtering, reweighting, and generation processes may encounter similar chal-
lenges, particularly with misrepresentative word lists and proxies for social groups, and
may introduce new distribution imbalances into the dataset. Data generation derived
from crowdsourcing, for instance, may favor majority opinions, as Kim et al. (2022)
point out in their creation of an inherently subjective social norm dataset, based on the
Social Chemistry dataset that Forbes et al. (2020) acknowledge to represent primarily
English-speaking, North American norms.
In-training mitigation techniques aim to modify the training procedure to reduce bias.
These approaches modify the optimization process by changing the loss function,
updating next-word probabilities in training, selectively freezing parameters during
fine-tuning, or identifying and removing specific neurons that contribute to harmful
outputs. All in-training mitigations change model parameters via gradient-based train-
ing updates. We describe each type of in-training mitigation here, with examples in
Figure 8.
5.2.2 Loss Function Modification. Modifications to the loss function via a new equalizing
objective, regularization constraints, or other paradigms of training (i.e., contrastive
learning, adversarial learning, and reinforcement learning) may encourage output se-
mantics and stereotypical terms to be independent of a social group.
Figure 8
Example in-training mitigation techniques (§ 5.2). We illustrate four classes of methods that
modify model parameters during training. Architecture modifications change the configuration
of the model, such as adding new trainable parameters with adapter modules as done in this
example (Lauscher, Lueken, and Glavaš 2021). Loss function modifications introduce a new
optimization objective, such as equalizing the embeddings or predicted probabilities of
counterfactual tokens or sentences. Selective parameter updates freeze the majority of the
weights and only tune a select few during fine-tuning to minimize forgetting of pre-trained
language understanding. Filtering model parameters, in contrast, freezes all pre-trained weights
and selectively prunes some based on a debiasing objective.
Equalizing Objectives. Associations between social groups and stereotypical words may
be disrupted directly by modifying the loss function to encourage independence be-
tween a social group and the predicted output. We describe various bias-mitigating
objective functions, broadly categorized into embedding-based, attention-based, and
predicted distribution-based methods.
Instead of relying solely on the equalizing loss function, fine-tuning methods more
commonly integrate the fairness objective with the pre-trained model’s original loss
function, or another term that encourages the preservation of learned knowledge during
pre-training. In these cases, the fairness objective is added as a regularization term. In
the equations below, R denotes a regularization term for bias mitigation that is added
to the model’s original loss function (unless otherwise specified), while L denotes an
entirely new proposed loss function. We unify notation between references for compa-
rability, defined in Table 2. Equations are summarized in Table 6.
Embeddings. Liu et al. (2020), for instance, encourage the embeddings of counterfactual
attribute pairs (a_i, a_j) \in \mathcal{A} to be close together, with the regularization term
given by Equation (30).
R = \lambda \sum_{(a_i, a_j) \in \mathcal{A}} \| E(a_i) - E(a_j) \|_2 \quad (30)
Table 6
Equalizing objective functions for bias mitigation. We summarize regularization terms and loss
functions that can mitigate bias by modifying embeddings, attention matrices, or the predicted
token distribution. For notation, see Table 2.
Reference Equation
EMBEDDINGS
(Liu et al. 2020)   R = \lambda \sum_{(a_i, a_j) \in \mathcal{A}} \| E(a_i) - E(a_j) \|_2
(Yang et al. 2023)   L = \sum_{i,j \in \{1, \cdots, d\},\, i<j} \mathrm{JS}\big(P^{a_i} \,\|\, P^{a_j}\big) + \lambda \mathrm{KL}(Q \,\|\, P)
(Woo et al. 2023)   R = \frac{1}{2} \sum_{i \in \{m, f\}} \mathrm{KL}\Big(E(S_i) \,\Big\|\, \frac{E(S_m) + E(S_f)}{2}\Big) - \frac{E(S_m)^\top E(S_f)}{\|E(S_m)\| \, \|E(S_f)\|}
(Park et al. 2023)   R = \sum_{w \in W_{\mathrm{stereo}}} \frac{v_{\mathrm{gender}}^\top}{\| v_{\mathrm{gender}} \|} w
(Bordia and Bowman 2019)   R = \lambda \, \| E(W) V_{\mathrm{gender}} \|_F^2
(Kaneko and Bollegala 2021)   R = \sum_{w \in W} \sum_{S \in \mathcal{S}} \sum_{a \in \mathcal{A}} \big( \bar{a}_i^\top E_i(w, S) \big)^2
(Colombo, Piantanida, and Clavel 2021)   R = \lambda \, I(E(X); A)
ATTENTION
(Gaci et al. 2022)   L = \sum_{S \in \mathcal{S}} \sum_{\ell=1}^{L} \sum_{h=1}^{H} \big\| A^{\ell,h,S,G}_{:\sigma,:\sigma} - O^{\ell,h,S,G}_{:\sigma,:\sigma} \big\|_2^2 + \lambda \sum_{S \in \mathcal{S}} \sum_{\ell=1}^{L} \sum_{h=1}^{H} \sum_{i=2}^{|G|} \big\| A^{\ell,h,S,G}_{:\sigma,\sigma+1} - A^{\ell,h,S,G}_{:\sigma,\sigma+i} \big\|_2^2
(Attanasio et al. 2022)   R = -\lambda \sum_{\ell=1}^{L} \mathrm{entropy}(A)_\ell
PREDICTED TOKEN DISTRIBUTION
(Qian et al. 2019), (Garimella et al. 2021)   R = \lambda \frac{1}{K} \sum_{k=1}^{K} \Big| \log \frac{P(a_i^{(k)})}{P(a_j^{(k)})} \Big|
(Garimella et al. 2021)   R^{(t)} = \lambda \Big| \log \frac{\sum_{k=1}^{|A_i|} P(A_{i,k})}{\sum_{k=1}^{|A_j|} P(A_{j,k})} \Big|
(Guo, Yang, and Abbasi 2022)   L = \frac{1}{|\mathcal{S}|} \sum_{S \in \mathcal{S}} \sum_{k=1}^{K} \mathrm{JS}\big( P(a_1^{(k)}), P(a_2^{(k)}), \cdots, P(a_m^{(k)}) \big)
(Garg et al. 2019)   R = \lambda \sum_{X \in \mathcal{X}} \big| z(X_i) - z(X_j) \big|
(He et al. 2022b)   R = \lambda \sum_{x \in \mathcal{X}} \begin{cases} \mathrm{energy}_{\mathrm{task}}(x) + (\mathrm{energy}_{\mathrm{bias}}(x) - \tau) & \text{if } \mathrm{energy}_{\mathrm{bias}}(x) > \tau \\ 0 & \text{otherwise} \end{cases}
(Garimella et al. 2021)   R = \sum_{w \in W} e^{\mathrm{bias}(w)} \times P(w)
Yang et al. (2023) compare the distances of protected attribute words to neutral words in
a lower-dimensional embedding subspace. Shown in Equation (31), the loss minimizes
the Jensen-Shannon divergence between the distributions P^{a_i}, P^{a_j} representing the
distances from two distinct protected attributes a_i, a_j to all neutral words, while still
maintaining the words' relative distances to one another (to maintain the original model's
knowledge) via the KL divergence regularization term over the original distribution Q
and new distribution P.
L = \sum_{i,j \in \{1, \cdots, d\},\, i<j} \mathrm{JS}\big(P^{a_i} \,\|\, P^{a_j}\big) + \lambda \mathrm{KL}(Q \,\|\, P) \quad (31)
In their method GuiDebias, Woo et al. (2023) consider gender stereotype sentences, with
a regularization term (Equation (32)) to enforce independence between gender groups
and the representations of stereotypical masculine Sm and feminine Sf sentences, given
by the hidden representations E in the last layer. Instead of adding the regularization
term to the model’s original loss function, the authors propose an alternative loss
to maintain the pre-trained model’s linguistic integrity by preserving non-stereotype
sentences.
The second set of work integrates projection-based mitigation techniques (see Sec-
tion 5.1.5) into the loss function. To mitigate gender stereotypes in occupation terms,
Park et al. (2023) introduce a regularization term that orthogonalizes stereotypical word
embeddings w and the gender direction vgender in the embedding space. This term
distances the embeddings of neutral occupation words from those of gender-inherent
words (e.g., “sister” or “brother”). The gender direction is shown in Equation (33),
where A is the set of all gender-inherent feminine-associated ai and masculine-
associated aj words, and E(· ) computes the embeddings of a model; the regularization
term is given by Equation (34), where Wstereo is the set of stereotypical embeddings.
v_{\mathrm{gender}} = \frac{1}{|\mathcal{A}|} \sum_{(a_i, a_j) \in \mathcal{A}} \big( E(a_j) - E(a_i) \big) \quad (33)
R = \sum_{w \in W_{\mathrm{stereo}}} \frac{v_{\mathrm{gender}}^\top}{\| v_{\mathrm{gender}} \|} w \quad (34)
Bordia and Bowman (2019) alternatively obtain the gender subspace B from the singular
value decomposition of a stack of vectors representing gender-opposing words (e.g.,
“man” and “woman”), and minimize the squared Frobenius norm of the projection of
neutral embeddings, denoted E(W), onto that subspace with the regularization term
given by Equation (35).
R = \lambda \, \| E(W) V_{\mathrm{gender}} \|_F^2 \quad (35)
Kaneko and Bollegala (2021) similarly penalize the squared inner products between attribute
embeddings \bar{a} and the contextualized embeddings E_i(w, S) of words w in sentences S at
each layer i, with the regularization term given by Equation (36).
R = \sum_{w \in W} \sum_{S \in \mathcal{S}} \sum_{a \in \mathcal{A}} \big( \bar{a}_i^\top E_i(w, S) \big)^2 \quad (36)
The last set of work considers the mutual information between a social group and
the learned representations. Wang, Cheng, and Henao (2023) propose a fairness loss
over the hidden states of the encoder to minimize the mutual information between the
social group of a sentence (e.g., gender) and the sentence semantics (e.g., occupation).
Similarly, Colombo, Piantanida, and Clavel (2021) introduce a regularization term
R = \lambda \, I(E(X); A) \quad (37)
Attention. Some evidence has indicated that the attention layers of a model may be a
primary encoder of bias in language models (Jeoung and Diesner 2022). Gaci et al.
(2022) and Attanasio et al. (2022) propose loss functions that modify the distribution
of weights in the attention heads of the model to mitigate bias. Gaci et al. (2022) address
stereotypes learned in the attention layer of sentence-level encoders by redistributing
attention scores, fine-tuning the encoder with an equalization loss that encourages equal
attention scores (e.g., to attend to “doctor”) with respect to each social group (e.g., “he”
and “she”), while minimizing changes to the attention of other words in the sentence.
The equalization loss is added as a regularization term to a semantic information
preservation term that computes the distance between the original (denoted by O) and
fine-tuned models’ attention scores. The equalization loss is given by Equation (38) for
a sentence S ∈ S and an encoder with L layers, H attention heads, |G| social groups.
L = \sum_{S \in \mathcal{S}} \sum_{\ell=1}^{L} \sum_{h=1}^{H} \big\| A^{\ell,h,S,G}_{:\sigma,:\sigma} - O^{\ell,h,S,G}_{:\sigma,:\sigma} \big\|_2^2 + \lambda \sum_{S \in \mathcal{S}} \sum_{\ell=1}^{L} \sum_{h=1}^{H} \sum_{i=2}^{|G|} \big\| A^{\ell,h,S,G}_{:\sigma,\sigma+1} - A^{\ell,h,S,G}_{:\sigma,\sigma+i} \big\|_2^2 \quad (38)
Attanasio et al. (2022) instead add an entropy-based regularization term, given by
Equation (39), that encourages the attention weights to be spread more evenly across the
tokens in a sentence, discouraging the model from concentrating attention on a small set
of (potentially stereotype-bearing) tokens.
R = -\lambda \sum_{\ell=1}^{L} \mathrm{entropy}(A)_\ell \quad (39)
Predicted token distribution. Several works propose loss functions that equalize the prob-
ability of demographically-associated words in the generated output. Qian et al. (2019),
for instance, propose an equalizing objective that encourages demographic words to be
predicted with equal probability. They introduce a regularization term comparing the
output softmax probabilities P for binary masculine and feminine word pairs, which
was adapted by Garimella et al. (2021) for binary race word pairs. The regularization
term is shown in Equation (40), for K word pairs consisting of attributes a_i and a_j.
R = \lambda \frac{1}{K} \sum_{k=1}^{K} \Big| \log \frac{P(a_i^{(k)})}{P(a_j^{(k)})} \Big| \quad (40)
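As an illustration, the regularizer of Equation (40) can be computed from a model's predicted distribution as in the sketch below (assuming PyTorch; the logits and vocabulary indices are placeholders for the model outputs and the gendered word-pair list):

import torch

def equalizing_regularizer(log_probs, pair_ids, lam=1.0):
    """log_probs: log-softmax over the vocabulary at a prediction position.
    pair_ids: list of (id_i, id_j) vocabulary indices for K attribute word pairs."""
    terms = [torch.abs(log_probs[i] - log_probs[j]) for i, j in pair_ids]
    return lam * torch.stack(terms).mean()

vocab_size = 1000
logits = torch.randn(vocab_size)              # placeholder model logits
log_probs = torch.log_softmax(logits, dim=-1)
pair_ids = [(10, 11), (42, 43)]               # e.g., ("she", "he"), ("woman", "man")
print(equalizing_regularizer(log_probs, pair_ids))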
With a similar form, Garimella et al. (2021) also introduce a declustering term to mitigate
implicit clusters of words stereotypically associated with a social group. The regulariza-
tion term, shown in Equation (41), considers two clusters of socially marked words, Ai
and Aj .
R^{(t)} = \lambda \Big| \log \frac{\sum_{k=1}^{|A_i|} P(A_{i,k})}{\sum_{k=1}^{|A_j|} P(A_{j,k})} \Big| \quad (41)
In Auto-Debias, Guo, Yang, and Abbasi (2022) extend these ideas to non-binary social
groups, encouraging the generated output to be independent of social group. The loss,
given by Equation (42), calculates the Jensen-Shannon divergence between predicted
distributions P conditioned on a prompt S ∈ S concatenated with an attribute word ai
for K tuples of m attributes (e.g., (“judaism,” “christianity,” “islam”)).
L = \frac{1}{|\mathcal{S}|} \sum_{S \in \mathcal{S}} \sum_{k=1}^{K} \mathrm{JS}\big( P(a_1^{(k)}), P(a_2^{(k)}), \cdots, P(a_m^{(k)}) \big) \quad (42)
Garg et al. (2019) take a counterfactual approach, penalizing the difference between the
predicted logits z for an input X_i and a counterfactual version X_j with the social group
perturbed, with the regularization term given by Equation (43).
R = \lambda \sum_{X \in \mathcal{X}} \big| z(X_i) - z(X_j) \big| \quad (43)
Zhou et al. (2023) use causal invariance to mitigate gender and racial bias in fine-tuning,
by treating label-relevant factors to the downstream task as causal, and bias-relevant
factors as non-casual. They add a regularization term to enforce equivalent outputs for
sentences with the same semantics but different attribute words.
Another class of methods penalizes tokens strongly associated with bias. For in-
stance, He et al. (2022b) measure a token's predictive value to the output and its
association with sensitive information. Terms highly associated with the sensitive in-
formation but less important for the task prediction are penalized during training with
a debiasing constraint, given for a single sentence x by Equation (44), where energytask (· )
is an energy score that measures a word’s task contribution, energybias (· ) measures its
bias contribution, and τ is a threshold hyperparameter.
Garimella et al. (2021) assign bias scores to all adjectives and adverbs W in the vocabu-
lary to generate a bias penalization regularization term shown in Equation (45).
R = \sum_{w \in W} e^{\mathrm{bias}(w)} \times P(w) \quad (45)
Finally, calibration techniques can reduce bias amplification, which occurs when the
model output contains higher levels of bias than the original data distribution. To
calibrate the predicted probability distribution to avoid amplification, Jia et al. (2020)
propose a regularization approach to constrain the posterior distribution to match the
original label distribution.
Dropout. Instead of proposing a new regularization term, Webster et al. (2020) use
dropout (Srivastava et al. 2014) during pre-training to reduce stereotypical gendered
associations between words. By increasing dropout on the attention weights and hidden
activations, the work hypothesizes that the interruption of the attention mechanism
disrupts gendered correlations.
Adversarial Learning. In adversarial learning settings, a predictor and attacker are simul-
taneously trained, and the predictor aims to minimize its own loss while maximizing
the attacker’s. In our setting, this training paradigm can be used to learn models that
5.2.4 Filtering Model Parameters. Besides fine-tuning techniques that simply update
model parameters to reduce bias, there are also techniques focused on filtering or re-
moving specific parameters (e.g., by setting them to zero) either during or after the train-
ing or fine-tuning of the model. Joniak and Aizawa (2022) use movement pruning (Sanh,
Wolf, and Rush 2020), a technique that removes some weights of a neural network, to
select a least-biased subset of weights from the attention heads of a pre-trained model.
During fine-tuning, they freeze the weights and independently optimize scores with a
debiasing objective. The scores are thresholded to determine which weights to remove.
To build robustness against the circumvention of safety alignment (“jailbreaking”),
including resistance to hate speech and discriminatory generations, Hasan, Rugina, and
Wang (2024) alternatively use WANDA (Sun et al. 2023b), which induces sparsity by
pruning weights with a small element-wise product between the weight matrix and
input feature activations, as a proxy for low-importance parameters. The authors show
that pruning 10–20% of model parameters increases resistance to jailbreaking, but more
extensive pruning can have detrimental effects.
Proskurina, Metzler, and Velcin (2023) provide further evidence that aggressive
pruning can have adverse effects: For hate speech classification, models with pruning
of 30% or more of the original parameters demonstrate increased levels of gender,
race, and religious bias. In an analysis of stereotyping and toxicity classification in text,
Ramesh et al. (2023) also find that pruning may amplify bias in some cases, but with
mixed effects and dependency on the degree of pruning.
Following the definition of Savani, White, and Govindarajulu (2020), we consider intra-
processing methods to be those that take a pre-trained, perhaps fine-tuned, model as
input, and modify the model’s behavior without further training or fine-tuning to generate
debiased predictions at inference; as such, these techniques may also be considered to
be inference stage mitigations. Intra-processing techniques include decoding strategies
that change the output generation procedure, post hoc model parameter modifications,
and separate debiasing networks that can be applied modularly during inference. Ex-
amples are shown in Figure 9.
Figure 9
Example intra-processing mitigation techniques (§ 5.3). We show several methods that modify a
model’s behavior without training or fine-tuning. Constrained next-token search may prohibit
certain outputs during beam search (e.g., a derogatory term “@&!,” in this example), or generate
and rerank alternative outputs (e.g., “he” replaced with “she”). Modified token distribution
redistributes next-word probabilities to produce more diverse outputs and avoid biased tokens.
Weight redistribution, in this example, illustrates how post hoc modifications to attention matrices
may narrow focus to less stereotypical tokens (Zayed et al. 2023b). Modular debiasing networks
fuse the main LLM with stand-alone networks that can remove specific dimensions of bias, such
as gender or racial bias.
outputs at inference. The constrained beam search generates an n-best list of outputs
in two passes, first generating the highest likelihood output and then searching for
differently gendered versions of the initial output. Comparing instead to known biases
in the data, Sheng et al. (2021a) compare n-gram features from the generated outputs
with frequently occurring biased (or otherwise negative) demographically associated
phrases in the data. These n-gram features constrain the next token prediction by requir-
ing semantic similarity with unbiased phrases and dissimilarity with biased phrases.
Meade et al. (2023) compare generated outputs to safe example responses from similar
contexts, reranking candidate responses based on their similarity to the safe example.
Instead of comparing various outputs, Lu et al. (2021) more directly enforce lexical
constraints given by predicate logic statements, which can require the inclusion or
exclusion of certain tokens. The logical formula is integrated as a soft penalty during
beam search.
Discriminator-based decoding methods rely on a classifier to measure the bias in
a proposed generation, replacing potentially harmful tokens with less biased ones.
Dathathri et al. (2019) re-rank outputs using toxicity scores generated by a simple
classifier. The gradients of the classifier model can guide generation towards less toxic
outputs. Schramowski et al. (2022) identify moral directions aligned with human and
societal ethical norms in pre-trained language models. The authors leverage the model’s
normative judgments during decoding, removing generated words that fall below some
morality threshold (as rated by the model) to reduce non-normative outputs. Shuster
et al. (2022) use a safety classifier and safety keyword list to identify and filter out
negative responses, instead replacing them with a non sequitur.
Modified Token Distribution. Changing the distribution from which tokens are sampled
can increase the diversity of the generated output or enable the sampling of less biased
outputs with greater probability. Chung, Kamar, and Amershi (2023) propose two de-
coding strategies to increase diversity of generated tokens. Logit suppression decreases
the probability of generating already-used tokens from previous generations, which
encourages the selection of lower-frequency tokens. Temperature sampling flattens the
next-word probability distribution to also encourage the selection of less-likely tokens.
Kim et al. (2023) also modify the output token distribution using reward values obtained
from a toxicity evaluation model. The authors raise the likelihood of tokens that increase
the reward value and lower the likelihood of those that do not. Gehman et al. (2020) similarly increase the
likelihood of non-toxic tokens, adding a (non-)toxicity score to the logits over the vocab-
ulary before normalization. Liu, Khalifa, and Wang (2023) alternatively redistribute the
probability mass with bias terms. The proposed method seeks to minimize a constraint
function such as toxicity with an iterative sequence generation process, tuning bias
terms added to the predicted logits at each decoding step. After decoding for several
steps, the bias terms are updated with gradient descent to minimize the toxicity of the
generated sequence.
Another class of approaches modifies token probabilities by comparing two outputs
differing in their level of bias. Liu et al. (2021a) use a combination of a pre-trained model
and two smaller language models during decoding, one expert that models non-toxic
text, and one anti-expert that models toxic text. The pre-trained logits are modified
to increase the probability of tokens with high probability under the expert and low
probability under the anti-expert. Hallinan et al. (2023) similarly identify potentially
toxic tokens with an expert and an anti-expert, and mask and replace candidate tokens
with less toxic alternatives. In GeDi, Krause et al. (2021) also compare the generated
outputs from two language models, one conditioned on an undesirable attribute like
toxicity, which guides each generation step to avoid toxic words. Instead of using an
additional model, Schick, Udupa, and Schütze (2021) propose a self-debiasing frame-
work. The authors observe that pre-trained models can often recognize their own biases
in the outputs they produce and can describe these behaviors in their own generated
descriptions. This work compares the distribution of the next word given the original
input, to the distribution given the model’s own reasoning about why the input may be
biased. The model chooses words with a higher probability of being unbiased.
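The expert/anti-expert contrast can be sketched as a simple logit combination at each decoding step (in the spirit of the approach of Liu et al. (2021a); the three logit vectors and the weighting coefficient are placeholders for the respective models' outputs and a tuned hyperparameter):

import torch

def combined_logits(base, expert, anti_expert, alpha=2.0):
    """Boost tokens likely under the (non-toxic) expert and unlikely under the
    (toxic) anti-expert, relative to the base model's next-token logits."""
    return base + alpha * (expert - anti_expert)

vocab = 50_000
base = torch.randn(vocab)          # pre-trained LM logits at this decoding step
expert = torch.randn(vocab)        # smaller LM fine-tuned on non-toxic text
anti_expert = torch.randn(vocab)   # smaller LM fine-tuned on toxic text

probs = torch.softmax(combined_logits(base, expert, anti_expert), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)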
Finally, projection-based approaches may modify the next-token probability. Liang
et al. (2021) apply a nullspace projection to remove bias. The authors learn a set of
tokens that are stereotypically associated with a gender or religion. They then use a
variation of INLP (Ravfogel et al. 2020) to find a projection matrix P that removes any
linear dependence between the tokens’ embeddings and gender or religion, applying
this projection at each time step during text generation to make the next token E(wt )
gender- or religion-invariant in the given context f (ct−1 ). The next-token probability is
given by Equation (46).
5.3.2 Weight Redistribution. The weights of a trained model may be modified post hoc
without further training. Given the potential associations between attention weights
and encoded bias (Jeoung and Diesner 2022), redistributing attention weights may
change how the model attends to biased words or phrases. Though Attanasio et al.
(2022) and Gaci et al. (2022) propose in-training approaches (see Section 5.2.2), Zayed
et al. (2023a) modify the attention weights after training, applying temperature scaling
[Figure 10 illustrates two examples: keyword replacement rewriting "The mothers picked up their kids." as "The parents picked up their kids.", and a neural machine translation model rewriting "He is the CEO of the company." as "They are the CEO of the company."]
Figure 10
Example post-processing mitigation techniques (§ 5.4). We illustrate how post-processing
methods can replace a gendered output with a gender-neutral version. Keyword replacement
methods first identify protected attribute terms (i.e., “mothers,” “he”), and then generate an
alternative output. Machine translation methods train a neural machine translator on a parallel
biased-unbiased corpus and feed the original output into the model to produce an unbiased
output.
tokens and replacing them via rewriting. Each type of mitigation is described below,
with examples shown in Figure 10.
5.4.1 Rewriting. Rewriting strategies detect harmful words and replace them with more
positive or representative terms, using a rule- or neural-based rewriting algorithm. This
strategy considers a fully generated output (as opposed to next-word prediction in
decoding techniques).
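A rule-based keyword replacement step, as in Figure 10, can be sketched as follows (the replacement table is illustrative; as the second example shows, simple substitution ignores grammatical agreement, which is one reason neural rewriters are used):

import re

NEUTRAL = {"he": "they", "she": "they", "him": "them", "his": "their",
           "her": "their", "mothers": "parents", "fathers": "parents"}

def neutralize(text: str) -> str:
    """Replace gendered keywords with gender-neutral alternatives."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = NEUTRAL[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(NEUTRAL) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

print(neutralize("The mothers picked up their kids."))  # The parents picked up their kids.
print(neutralize("He is the CEO of the company."))      # They is the CEO of the company.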
Machine Translation. Another class of rewriter model translates from a biased source
sentence to a neutralized or unbiased target sentence. This can be framed as a machine
translation task, training on parallel corpora that translate from a biased (e.g., gen-
dered) sentence to an unbiased (e.g., gender-neutral or opposite-gender) alternative. To provide
gender-neutral alternatives to sentences with gendered pronouns, several studies (Jain
et al. 2021; Sun et al. 2021; Vanmassenhove, Emmery, and Shterionov 2021) use a rules-
based approach to generate parallel debiased sentences from biased sources, and then
train a machine translation model to translate from biased sentences to debiased ones.
Instead of generating a parallel corpus using biased sentences as the source, Amrhein
et al. (2023) leverage backward augmentation to filter through large corpora for gender-
fair sentences, and then add bias to generate artificial source sentences.
Parallel corpora have also been developed to address issues beyond gender bias.
Wang et al. (2022) introduce a dataset of sentence rewrites to train rewriting models to
generate more polite outputs, preserving semantic information but altering the emotion
and sentiment. The dataset contains 10K human-based rewrites, and 100K model-based
rewrites based on the human-annotated data. Pryzant et al. (2020) address subjectivity
bias by building a parallel corpus of biased and neutralized sentences and training
a neural classifier with a detection module to identify inappropriately subjective or
presumptuous words, and an editing module to replace them with more neutral, non-
judgmental alternatives.
Other Neural Rewriters. Ma et al. (2020) focus specifically on editing the power dynamics
and agency levels encoded in verbs, proposing a neural model that can reconstruct
and paraphrase its input, while boosting the use of power- or agency-connoted words.
Majumder, He, and McAuley (2022) present InterFair for user-informed output modifi-
cation during inference. After scoring words important for task prediction and words
associated with bias, the user can critique and adjust the scores to inform rewriting.
5.5 Recommendations
We synthesize findings and guidance from the literature to make the following recom-
mendations. For more detailed discussion and limitations, see Sections 5.1.6, 5.2.5, 5.3.4,
and 5.4.2.
In this section, we discuss open problems and highlight challenges for future work.
Shifting Values and Assumptions. As we have established, bias and fairness are highly
subjective and normative concepts situated in social, cultural, historical, political, and
regional contexts. Therefore, there is no single set of values that bias and fairness
research can assume, yet, as Green (2019) explains, the assumptions and values in
scientific and computing research tend to reflect those of dominant groups. Instead
of relying on vague notions of socially desirable behaviors of LLMs, researchers and
practitioners can establish more rigorous theories of social change, grounded in relevant
principles from fields like linguistics, sociology, and philosophy. These normative judg-
ments should be made explicit and not assumed to be universal. One tangible direction
of research is to expand bias and fairness considerations to contexts beyond the United
States and Western ones often assumed by prior works, and for languages other than
English. For example, several datasets rely on U.S. Department of Labor statistics to
identify relevant dimensions for bias evaluation, an approach that lacks generality to other regions
of the world. Future work can expand perspectives to capture other sets of values and
norms. Bhatt et al. (2022) and Malik et al. (2022) provide examples of such work for
Indian society.
Expanding Language Resources. Moving beyond the currently studied contexts will re-
quire additional language resources, including data for different languages and their
dialects, as well as an understanding of various linguistic features and representations
of bias. Curation of additional language resources should value inclusivity over con-
venience, and documentation should follow practices such as Bender and Friedman
(2018) and Gebru et al. (2021). Furthermore, stakeholders must ensure that the process of
collecting data itself does not contribute to further harms. As described by Jernite et al.
(2022), this includes respecting the privacy and consent of the creators and subjects of
data, providing people and communities with agency and control over their data, and
sharing the benefits of data collection with the people and communities from whom
the data originates. Future work can examine frameworks for data collection pipelines
that ensure communities maintain control over their own language resources and have
a share in the benefits from the use of their data, following recommendations such
as Jernite et al. (2022) and Walter and Suina (2019) to establish data governance and
sovereignty practices.
Developing Fairness Desiderata. We propose an initial set of fairness desiderata, but these
notions can be refined and expanded. While works in machine learning classification
have established extensive frameworks for quantifying bias and fairness, more work can
be done to translate these notions and introduce new ones for NLP tasks, particularly
for generated text, and for the unique set of representational harms that manifest in
language. These definitions should stay away from abstract notions of fairness and in-
stead be grounded in concrete injustices communicated and reinforced by language. For
example, invariance (Definition 9), equal social group associations (Definition 10), and
equal neutral associations (Definition 11) all represent abstract notions of consistency
and uniformity in outcomes; it may be desirable, however, to go beyond sameness and
instead ask how each social group and their corresponding histories and needs should
be represented distinctly and uniquely to achieve equity and justice. The desiderata
for promoting linguistic diversity to better represent the languages of minoritized com-
munities in NLP systems, for instance, may differ from the desiderata for an NLP tool
that assesses the quality of resumes in automated hiring systems. The desiderata and
historical and structural context underpinning each definition should be made explicit.
Rethinking Social Group Definitions. Delineating between social groups is often required
to assess disparities, yet can simultaneously legitimize social constructions, reinforce
power differentials, and enable systems of oppression (Hanna et al. 2020). Disaggrega-
tion offers a pathway to deconstruct socially constructed or overly general groupings,
while maintaining the ability to perform disparity analysis within different contexts.
Recognizing Distinct Social Groups. Several evaluation and mitigation techniques treat
social groups as interchangeable. Other works seek to neutralize all protected attributes
in the inputs or outputs of a model. These strategies tend to ignore or conceal distinct
mechanisms of oppression that operate differently for each social group (Hanna et al.
2020). Research can examine more carefully the various underlying sources of bias,
understand how the mechanisms differ between social groups, and develop evaluation
and mitigation strategies that target specific historical and structural forces, without
defaulting to the erasure of social group identities as an adequate debiasing strategy.
Considering the Benefits and Harms of More Comprehensive Benchmarks. One possibility to
standardize bias and fairness evaluation is to establish more comprehensive bench-
marks to overcome comparability issues that arise from the vast array of bias evalu-
ation metrics and datasets, enabling easier differentiation of bias mitigation techniques
and their effectiveness. Despite this, benchmarks should be approached with cau-
tion and should not be conflated with notions of “universality.” Benchmarks can ob-
scure and decontextualize nuanced dimensions of harm, resulting in validity issues
(Raji et al. 2021). In fact, overly general evaluation tools may be completely at odds with
the normative, subjective, and contextual nature of bias, and “universal” benchmarks
often express the perspectives of dominant groups in the name of objectivity and
neutrality and thus perpetuate further harm against marginalized groups (Denton et al.
2020). Framing bias as something to be measured objectively ignores the assumptions
made in the operationalization of the measurement tool (Jacobs and Wallach 2021). It
threatens to foster complacency when the benchmark is satisfied but the underlying
power imbalance remains unaddressed. Future work can critically interrogate the role
of a general evaluation framework, weighing the benefit of comparability with the risk
of ineffectiveness.
Examining Reliability and Validity Issues. As we discuss in Section 4, several widely used
evaluation datasets suffer from reliability and validity issues, including ambiguities
about whether instances accurately reflect real-world stereotypes, inconsistent treat-
ment of social groups, assumptions of near-perfect understanding of language, and
lack of syntactic and semantic diversity (Blodgett et al. 2021; Gupta et al. 2023; Selvam
et al. 2023). As a first step, future work can examine methods to resolve reliability and
validity issues in existing datasets. One direction for improvement is to move away
from static datasets and instead use living datasets that are expanded and adjusted
over time, following efforts like Gehrmann et al. (2021), Kiela et al. (2021), and Smith
et al. (2022). More broadly, however, reliability and validity issues raise questions of
whether test instances fully represent or capture real-world harms. Raji et al. (2021)
suggest alternatives to benchmark datasets, such as audits, adversarial testing, and
ablation studies. Future work can explore these alternative testing paradigms for bias
evaluation and develop techniques to demonstrate their validity.
Expanding Evaluation Possibilities. This survey identifies and summarizes many different
bias and fairness issues and their specific forms of harm that arise in LLMs. However, only a few of these bias issues are explicitly evaluated in practice, and even for those, the set of evaluation techniques used for each type of bias remains narrow. For instance, most works leverage PerspectiveAPI for detecting toxicity despite its known flaws. Most works also rely on group fairness, with little emphasis on individual or subgroup fairness. Additional metrics for each harm and notion of fairness should be developed and used.
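To make the distinction concrete, the sketch below (our illustration, with made-up toxicity scores standing in for the outputs of a toxicity classifier such as PerspectiveAPI) contrasts a group-level check, which compares aggregate toxicity across social groups, with an individual-level check over counterfactual prompt pairs:

from statistics import mean

# Hypothetical toxicity scores for model continuations of counterfactual
# prompt pairs; index i of each list uses the same prompt template.
scores = {
    "group_a": [0.10, 0.50, 0.10, 0.50],
    "group_b": [0.50, 0.10, 0.50, 0.10],
}

# Group fairness: compare aggregate statistics across social groups.
group_gap = abs(mean(scores["group_a"]) - mean(scores["group_b"]))

# Individual fairness: compare each counterfactual pair directly; equal group
# averages can hide large per-instance disparities.
pair_gaps = [abs(a - b) for a, b in zip(scores["group_a"], scores["group_b"])]

print(f"group-level toxicity gap: {group_gap:.2f}")      # 0.00
print(f"worst per-pair gap:       {max(pair_gaps):.2f}")  # 0.40

Here the group-level metric reports no disparity even though every individual pair is treated differently, which is precisely the gap that individual and subgroup fairness metrics are meant to expose.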
Enabling Scalability. Several mitigation techniques rely on word lists, human annotations
or feedback, or exemplar inputs or outputs, which may narrow the scope of the types of
bias and the set of social groups that are addressed when these resources are limited. Fu-
ture work can investigate strategies to expand bottleneck resources for bias mitigation,
without overlooking the value of human- and community-in-the-loop frameworks.
Developing Hybrid Techniques. Most bias mitigation techniques target only a single inter-
vention stage (pre-processing, in-training, intra-processing, or post-processing). In light
of the observation that bias mitigated in the embedding space can re-emerge in down-
stream applications, understanding the efficacy of techniques at each stage remains
an open problem, with very few empirical studies comparing the gamut of available
techniques. In addition, future work can investigate hybrid mitigation techniques that
reduce bias at multiple or all intervention stages for increased effectiveness.
Understanding Mechanisms of Bias Within LLMs. Some studies like Jeoung and Diesner
(2022) have examined how bias mitigation techniques change LLMs. For example,
understanding that attention mechanisms play a key role in encoding bias informs
attention-targeting mitigations such as Attanasio et al. (2022), Gaci et al. (2022), and
Zayed et al. (2023a). Research into how LLMs encode bias, which components (neurons, layers, attention heads, etc.) encode it, and in what ways bias mitigations affect these components remains an understudied problem, with important implications for more targeted technical solutions.
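As a starting point for this kind of analysis, the sketch below (our illustrative probe, not the methodology of the works cited above; the model, sentence pair, and choice of BERT-style attention are assumptions) compares how much attention each layer assigns to the social group term in a counterfactual pair:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def attention_to_token(sentence: str, target: str):
    # Average attention (over heads and source positions) received by the
    # target token, reported per layer; assumes the target is a single
    # wordpiece in the tokenizer's vocabulary.
    enc = tokenizer(sentence, return_tensors="pt")
    idx = enc.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids(target))
    with torch.no_grad():
        attentions = model(**enc).attentions  # tuple: layers x (1, heads, seq, seq)
    return [layer[0, :, :, idx].mean().item() for layer in attentions]

# Counterfactual pair differing only in the social group term.
for group_term in ["he", "she"]:
    per_layer = attention_to_token(f"{group_term} is a nurse", group_term)
    print(group_term, [round(a, 3) for a in per_layer])

Differences between the two rows localize where the counterfactual substitution changes the model's internal behavior, which is the kind of signal that attention-targeting mitigations act on.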
Establishing Fairness Guarantees. Deriving theoretical guarantees for bias mitigation tech-
niques is fundamentally important. Despite this, theoretically analyzing existing bias
and fairness techniques for LLMs remains a largely open problem for future work, with most assessments relying on empirical evidence. Theoretical work can establish guarantees and propose training techniques to learn fair models that satisfy these criteria.
7. Limitations
Technical solutions are incomplete without broader societal action against power hierarchies that diminish and dominate marginalized groups. In this vein, technical solutionism as an attitude overlooks and simplifies the broader histories and contexts that enable structural systems of oppression; it can preserve, legitimate, and perpetuate the underlying roots of inequity and injustice, producing surface-level repairs that give an illusion of incremental progress but fail to interrogate or disrupt the broader systemic issues. This survey is limited in its alignment with a technical solutionist perspective,
as opposed to a critical theoretical one. In particular, the taxonomies are organized
according to their technical implementation details, instead of by their downstream
usage contexts or harms. Though organization in this manner fails to question the
broader and often tenuous assumptions in bias and fairness research more generally,
we hope our organization can provide an understanding of the dominant narratives and
themes in bias and fairness research for LLMs, enabling the identification of similarities
between metrics, datasets, and mitigations with common underlying objectives and
assumptions.
We have also focused narrowly on a few key points in the model development and
deployment pipeline, particularly model training and evaluation. As Black et al. (2023)
highlight, the decisions that researchers and practitioners can make in bias and fair-
ness work are much more comprehensive. A more holistic approach includes problem
formulation, data collection, and deployment and integration into real-world contexts.
Finally, this survey is limited in its focus on English-language papers.
8. Conclusion
References

[...] Conference on Natural Language Processing (Volume 1: Long Papers), pages 1941–1955. https://doi.org/10.18653/v1/2021.acl-long.151

Barocas, Solon, Moritz Hardt, and Arvind Narayanan. 2019. Fairness and Machine Learning: Limitations and Opportunities. fairmlbook.org. http://www.fairmlbook.org

Bartl, Marion, Malvina Nissim, and Albert Gatt. 2020. Unmasking contextual stereotypes: Measuring and mitigating BERT's gender bias. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing, pages 1–16.

Bassignana, Elisa, Valerio Basile, Viviana Patti, et al. 2018. Hurtlex: A multilingual lexicon of words to hurt. In CEUR Workshop Proceedings, volume 2253, pages 1–6. https://doi.org/10.4000/books.aaccademia.3085

Baugh, John. 2000. Racial identification by speech. American Speech, 75(4):362–364. https://doi.org/10.1215/00031283-75-4-362

Bender, Emily M. 2019. A typology of ethical risks in language technology with an eye towards where transparent documentation can help. Presented at The Future of Artificial Intelligence: Language, Ethics, Technology Workshop, University of Cambridge, 25 March 2019.

Bender, Emily M. and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604. https://doi.org/10.1162/tacl_a_00041

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pages 610–623. https://doi.org/10.1145/3442188.3445922

Benjamin, Ruha. 2020. Race After Technology: Abolitionist Tools for the New Jim Code. Polity.

Beukeboom, Camiel J. and Christian Burgers. 2019. How stereotypes are shared through language: A review and introduction of the social categories and stereotypes communication (SCSC) framework. Review of Communication Research, 7:1–37. https://doi.org/10.12840/issn.2255-4165.017

Bhatt, Shaily, Sunipa Dev, Partha Talukdar, Shachi Dave, and Vinodkumar Prabhakaran. 2022. Re-contextualizing fairness in NLP: The case of India. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 727–740.

Birhane, Abeba. 2021. Algorithmic injustice: A relational ethics approach. Patterns, 2(2). https://doi.org/10.1016/j.patter.2021.100205, PubMed: 33659914

Birhane, Abeba, William Isaac, Vinodkumar Prabhakaran, Mark Diaz, Madeleine Clare Elish, Iason Gabriel, and Shakir Mohamed. 2022. Power to the people? Opportunities and challenges for participatory AI. In Equity and Access in Algorithms, Mechanisms, and Optimization, pages 1–8. https://doi.org/10.1145/3551624.3555290

Black, Emily, Rakshit Naidu, Rayid Ghani, Kit Rodolfa, Daniel Ho, and Hoda Heidari. 2023. Toward operationalizing pipeline-aware ML fairness: A research agenda for developing practical guidelines and tools. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO '23, pages 1–11. https://doi.org/10.1145/3617694.3623259

Blodgett, Su Lin. 2021. Sociolinguistically Driven Approaches for Just Natural Language Processing. Ph.D. thesis, University of Massachusetts Amherst.

Blodgett, Su Lin, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454–5476. https://doi.org/10.18653/v1/2020.acl-main.485

Blodgett, Su Lin, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004–1015. https://doi.org/10.18653/v1/2021.acl-long.81

Blodgett, Su Lin and Brendan O'Connor. 2017. Racial disparity in natural language processing: A case study of social media
Chowdhery, Aakanksha, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.

Chung, Hyung Won, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.

Chung, John, Ece Kamar, and Saleema Amershi. 2023. Increasing diversity while maintaining accuracy: Text data generation with large language models and human interventions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 575–593. https://doi.org/10.18653/v1/2023.acl-long.34

Colombo, Pierre, Pablo Piantanida, and Chloé Clavel. 2021. A novel estimator of mutual information for learning to disentangle textual representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6539–6550. https://doi.org/10.18653/v1/2021.acl-long.511

Conneau, Alexis, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451. https://doi.org/10.18653/v1/2020.acl-main.747

Craft, Justin T., Kelly E. Wright, Rachel Elizabeth Weissler, and Robin M. Queen. 2020. Language and discrimination: Generating meaning, perceiving identities, and discriminating outcomes. Annual Review of Linguistics, 6:389–407. https://doi.org/10.1146/annurev-linguistics-011718-011659

Crawford, Kate. 2017. The trouble with bias. Keynote at NeurIPS.

Cryan, Jenna, Shiliang Tang, Xinyi Zhang, Miriam Metzger, Haitao Zheng, and Ben Y. Zhao. 2020. Detecting gender stereotypes: Lexicon vs. supervised learning methods. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1–11. https://doi.org/10.1145/3313831.3376488

Czarnowska, Paula, Yogarshi Vyas, and Kashif Shah. 2021. Quantifying social biases in NLP: A generalization and empirical comparison of extrinsic fairness metrics. Transactions of the Association for Computational Linguistics, 9:1249–1267. https://doi.org/10.1162/tacl_a_00425

Dathathri, Sumanth, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019. Plug and play language models: A simple approach to controlled text generation. arXiv preprint arXiv:1912.02164.

Davani, Aida Mostafazadeh, Mark Díaz, and Vinodkumar Prabhakaran. 2022. Dealing with disagreements: Looking beyond the majority vote in subjective annotations. Transactions of the Association for Computational Linguistics, 10:92–110. https://doi.org/10.1162/tacl_a_00449

Delobelle, Pieter and Bettina Berendt. 2022. FairDistillation: Mitigating stereotyping in language models. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 638–654. https://doi.org/10.1007/978-3-031-26390-3_37

Delobelle, Pieter, Ewoenam Tokpo, Toon Calders, and Bettina Berendt. 2022. Measuring fairness with biased rulers: A comparative study on bias metrics for pre-trained language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1693–1706. https://doi.org/10.18653/v1/2022.naacl-main.122

Denton, Emily, Mark Díaz, Ian Kivlichan, Vinodkumar Prabhakaran, and Rachel Rosen. 2021. Whose ground truth? Accounting for individual and collective identities underlying dataset annotation. arXiv preprint arXiv:2112.04554.

Denton, Emily, Alex Hanna, Razvan Amironesei, Andrew Smart, Hilary Nicole, and Morgan Klaus Scheuerman. 2020. Bringing the people back in: Contesting benchmark machine learning datasets. arXiv preprint arXiv:2007.07399.

Dev, Sunipa, Tao Li, Jeff M. Phillips, and Vivek Srikumar. 2020. On measuring and mitigating biased inferences of word embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7659–7666. https://doi.org/10.1609/aaai.v34i05.6267

Dev, Sunipa, Tao Li, Jeff M. Phillips, and Vivek Srikumar. 2021. OSCaR: Orthogonal
[...] the Association for Computational Linguistics: EMNLP 2020, pages 3475–3489. https://doi.org/10.18653/v1/2020.findings-emnlp.311

Li, Xiang Lisa and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–4597. https://doi.org/10.18653/v1/2021.acl-long.353

Li, Yingji, Mengnan Du, Xin Wang, and Ying Wang. 2023. Prompt tuning pushes farther, contrastive learning pulls closer: A two-stage approach to mitigate social biases. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14254–14267. https://doi.org/10.18653/v1/2023.acl-long.797

Li, Yunqi and Yongfeng Zhang. 2023. Fairness of ChatGPT. arXiv preprint arXiv:2305.18569.

Liang, Paul Pu, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2020. Towards debiasing sentence representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5502–5515. https://doi.org/10.18653/v1/2020.acl-main.488

Liang, Paul Pu, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2021. Towards understanding and mitigating social biases in language models. In International Conference on Machine Learning, pages 6565–6576.

Liang, Percy, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110.

Limisiewicz, Tomasz and David Mareček. 2022. Don't forget about pronouns: Removing gender bias in language models without losing factual gender information. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 17–29. https://doi.org/10.18653/v1/2022.gebnlp-1.3

Liu, Alisa, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021a. DExperts: Decoding-time controlled text generation with experts and anti-experts. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691–6706. https://doi.org/10.18653/v1/2021.acl-long.522

Liu, Haochen, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, and Jiliang Tang. 2020. Does gender matter? Towards fairness in dialogue systems. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4403–4416. https://doi.org/10.18653/v1/2020.coling-main.390

Liu, Pengfei, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35. https://doi.org/10.1145/3560815

Liu, Ruibo, Chenyan Jia, Jason Wei, Guangxuan Xu, Lili Wang, and Soroush Vosoughi. 2021b. Mitigating political bias in language models through reinforced calibration. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14857–14866. https://doi.org/10.1609/aaai.v35i17.17744

Liu, Xiao, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021c. GPT understands, too. arXiv preprint arXiv:2103.10385.

Liu, Xin, Muhammad Khalifa, and Lu Wang. 2023. BOLT: Fast energy-based controlled text generation with tunable biases. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 186–200. https://doi.org/10.18653/v1/2023.acl-short.18

Liu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Loudermilk, Brandon C. 2015. Implicit attitudes and the perception of sociolinguistic variation. In Alexei Prikhodkine and Dennis R. Preston, editors, Responses to Language Varieties: Variability, Processes and Outcomes, pages 137–156. https://doi.org/10.1075/impact.39.06lou

Lu, Kaiji, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2020. Gender bias in neural natural
Raffel, Colin, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(1):5485–5551.

Raji, Deborah, Emily Denton, Emily M. Bender, Alex Hanna, and Amandalynne Paullada. 2021. AI and the everything in the whole wide world benchmark. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, pages 1–17.

Rajpurkar, Pranav, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. https://doi.org/10.18653/v1/D16-1264

Ramesh, Krithika, Arnav Chavan, Shrey Pandit, and Sunayana Sitaram. 2023. A comparative study on the impact of model compression techniques on fairness in language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15762–15782. https://doi.org/10.18653/v1/2023.acl-long.878

Ranaldi, Leonardo, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, and Fabio Massimo Zanzotto. 2023. A trip towards fairness: Bias and de-biasing in large language models. arXiv preprint arXiv:2305.13862.

Ravfogel, Shauli, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237–7256. https://doi.org/10.18653/v1/2020.acl-main.647

Rekabsaz, Navid, Simone Kopeinik, and Markus Schedl. 2021. Societal biases in retrieved contents: Measurement framework and adversarial mitigation of BERT rankers. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '21, pages 306–316. https://doi.org/10.1145/3404835.3462949

Rekabsaz, Navid and Markus Schedl. 2020. Do neural ranking models intensify gender bias? In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20, pages 2065–2068. https://doi.org/10.1145/3397271.3401280

Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 1135–1144. https://doi.org/10.1145/2939672.2939778

Rudinger, Rachel, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14. https://doi.org/10.18653/v1/N18-2002

Salazar, Julian, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699–2712. https://doi.org/10.18653/v1/2020.acl-main.240

Sanh, Victor, Thomas Wolf, and Alexander Rush. 2020. Movement pruning: Adaptive sparsity by fine-tuning. Advances in Neural Information Processing Systems, 33:20378–20389.

Sap, Maarten, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668–1678. https://doi.org/10.18653/v1/P19-1163

Sattigeri, Prasanna, Soumya Ghosh, Inkit Padhi, Pierre Dognin, and Kush R. Varshney. 2022. Fair infinitesimal jackknife: Mitigating the influence of biased training data points without refitting. Advances in Neural Information Processing Systems, 35:35894–35906.

Saunders, Danielle, Rosie Sallis, and Bill Byrne. 2022. First the worst: Finding better gender translations during beam search. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3814–3823. https://doi.org/10.18653/v1/2022.findings-acl.301

Savani, Yash, Colin White, and Naveen Sundar Govindarajulu. 2020. Intra-processing methods for debiasing neural networks. Advances in Neural
Zhang, Brian Hu, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES '18, pages 335–340. https://doi.org/10.1145/3278721.3278779

Zhang, Hongyi, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations.

Zhao, Jieyu, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629–634. https://doi.org/10.18653/v1/N19-1064

Zhao, Jieyu, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979–2989. https://doi.org/10.18653/v1/D17-1323

Zhao, Jieyu, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20. https://doi.org/10.18653/v1/N18-2003

Zhao, Zihao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pages 12697–12706.

Zheng, Chujie, Pei Ke, Zheng Zhang, and Minlie Huang. 2023. Click: Controllable text generation with sequence likelihood contrastive learning. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1022–1040. https://doi.org/10.18653/v1/2023.findings-acl.65

Zhou, Fan, Yuzhou Mao, Liu Yu, Yi Yang, and Ting Zhong. 2023. Causal-debias: Unifying debiasing in pretrained language models and fine-tuning via causal invariant learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4227–4241. https://doi.org/10.18653/v1/2023.acl-long.232

Ziems, Caleb, Jiaao Chen, Camille Harris, Jessica Anderson, and Diyi Yang. 2022. VALUE: Understanding dialect disparity in NLU. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3701–3720. https://doi.org/10.18653/v1/2022.acl-long.258

Zmigrod, Ran, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1651–1661. https://doi.org/10.18653/v1/P19-1161