
Article

What do we know about algorithmic literacy? The status quo and a research agenda for a growing field

new media & society
2025, Vol. 27(2) 681–701
© The Author(s) 2023
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/14614448231182662
journals.sagepub.com/home/nms

Anne Oeldorf-Hirsch
University of Connecticut, USA

German Neubaum
Universität Duisburg-Essen, Germany

Abstract
The increasing role of algorithms shaping our use of communication technology—
particularly on social media—comes with a growth of empirical research attempting
to assess how literate users are regarding these algorithms. This rapidly emerging
field is marked by great diversity in terms of how it theorizes and measures our
understanding of algorithms, due, in part, to the opaque “black box” nature of the
algorithms themselves. In this review article, we summarize the state of knowledge
on algorithmic literacy, including its definitions, development, measurement, and current
theorizing on human–algorithm interaction. Drawing on this existing work, we propose
an agenda including four different directions that future research could focus on: (1)
balancing users’ expectations of algorithmic literacy with developers’ responsibility for
algorithmic transparency, (2) methods for engaging users in increasing their literacy, (3)
further developing the affective and behavioral facets of literacy, and (4) addressing the
new algorithmic divide.

Keywords
Algorithmic divide, algorithmic literacy, research agenda, social media, user
engagement

Corresponding author:
Anne Oeldorf-Hirsch, Department of Communication, University of Connecticut, 337 Mansfield Road, Unit
1259, Storrs, CT 06269, USA.
Email: anne.oeldorf-hirsch@uconn.edu

Algorithms determine nearly everything we do online, from shopping to the news we read
to the music we stream. Broadly, algorithms make decisions about what information we
see, and they learn this at least partly from our interactions with existing content. On
social media specifically, “algorithms are a way of sorting posts in a user’s feed based on
relevancy instead of publish time” (Barnhart, 2021). For instance, Facebook uses a
machine learning algorithm that ranks posts on numerous factors such as post relevance
to determine which to show on a user’s Timeline (Tech@Facebook, 2021), and Twitter
uses a similar deep learning algorithm based on factors such as which tweets users engaged with previously (Koumchatzky and Andryeyev, 2017). Even within the scope of social
media platforms, the pervasive role of algorithms has the power to influence how informed
and connected we are to others in our network based on the content we are presented. Yet,
the exact algorithmic formulas are generally kept secret; most websites, apps, and social
media platforms only vaguely reveal why users receive the content they do.
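To make the general mechanism concrete, the following is a minimal, hypothetical sketch of relevance-based feed ranking in Python. Every signal, weight, and the decay function are illustrative assumptions; as noted above, no platform discloses its actual formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float        # hours since publication
    likes: int              # global engagement signal
    author_affinity: float  # 0-1: how often this viewer interacts with the author

def relevance_score(post: Post, w_affinity: float = 3.0,
                    w_popularity: float = 1.0, decay: float = 0.1) -> float:
    """Toy relevance score: weighted engagement signals discounted by age.
    Weights and functional form are invented for illustration only."""
    engagement = w_affinity * post.author_affinity + w_popularity * post.likes ** 0.5
    return engagement / (1.0 + decay * post.age_hours)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Sort by relevance instead of publish time (cf. Barnhart, 2021)."""
    return sorted(posts, key=relevance_score, reverse=True)

feed = rank_feed([
    Post("close_friend", age_hours=10, likes=12, author_affinity=0.9),
    Post("news_page", age_hours=1, likes=900, author_affinity=0.1),
])
print([p.author for p in feed])
```

Even a toy scorer like this illustrates why feeds are opaque: a user who sees only the ranked output has little hope of reverse-engineering the weights behind it.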
Even without knowing the nuances of why every post has appeared in their feeds, it is
imperative that social media users understand broadly how their social media content
reached them and how it may be influencing them. Users’ skills to find, consume, evalu-
ate, and produce information through media have long been examined under the umbrella
term “media literacy” (Livingstone, 2004). On this basis, scholars started using concepts
such as “computer literacy” (Horton, 1983), “digital competence” (Janssen et al., 2013),
“information literacy” (Johnston and Webber, 2005), “new media literacy” (Koc and
Barut, 2016), or “social media literacy” (Festl, 2021) to describe people’s cognitive,
technical, and emotional abilities for effectively using newly emerging information and
communication technologies.
While many of those concepts cover users’ skills to understand how information is cre-
ated and processed by intelligent systems, a very young strand of research has just started
focusing specifically on whether and how people make sense of the algorithms filtering
this information. Works addressing this very specific form of digital literacy initially
showed that as algorithmic awareness (i.e. basic awareness of the existence of algorithms)
is increasing (e.g. Klawitter and Hargittai, 2018), those without this awareness may be
disadvantaged by missing out on important information that is not prioritized for them
(Rainie and Anderson, 2017). Therefore, those with even more advanced levels of algo-
rithmic literacy—not just being aware of the presence and the impact of algorithm-based
systems, but also knowing how to use this understanding (DeVito, 2021)—present a new
digital divide (Cotter and Reisdorf, 2020; Gran et al., 2021). Thus, it seems that the
prevalence of this specific form of literacy also follows the principles described in well-
established digital inequality frameworks (Reisdorf and Blank, 2021).
Hamilton et al. (2014) first called for a framework for exposing algorithms to users
and working with them to study their effects. Since then, researchers have increasingly
taken up the call to assess if and how well social media users understand these algo-
rithms. Such research is difficult because the actual algorithmic working is unknown
even to the researcher and requires interpretation (Andersen, 2020; Kitchin, 2017; Latzer
and Festic, 2019). This limits the ability to assess how “correct” users are in their under-
standing (Koenig, 2020). In other areas of digital literacy, such as web skills, clear
answers exist about what a user knows (e.g. what a bookmark or a PDF is; Hargittai,
2009), whereas the secretive nature of algorithms makes it difficult to assess literacy
about them. Therefore, the current problem is twofold: (1) What is algorithmic literacy?
and (2) How do we assess this?
The purpose of this article is to define algorithmic literacy, based on existing research,
review current issues in algorithmic literacy, and propose an agenda for moving forward
with algorithmic literacy research. Festic (2020) points out that algorithmic selection is
now a significant aspect of everyday life. Algorithms exist in many forms, including
search, filter, recommendation, and scoring algorithms, which have differing functions
across various contexts. Festic summarizes these contexts in four life domains: social
and political orientation, recreation, commercial transactions, and socializing. Social
media apps may span all of these life domains, as they offer users spaces to socialize, find news, watch videos and other entertainment, and make purchases. However,
individuals turn to social media largely to interact with others. In these spaces, algo-
rithms lead users to content from their friends, family, or other social connections, driv-
ing how they engage with those individuals. For instance, users may expect to see all
content their social connections post, but algorithms filter and prioritize what is dis-
played in their feeds.
Therefore, our focus is primarily on social media content filtering algorithms.
Uncovering users’ handling of filtering algorithms in social media is of pivotal relevance for two reasons. First, algorithmic filtering is supposed to shape the balance of users’ information landscape (e.g. via “curated flows”) and, in turn, their political attitudes and actions (Klinger and Svensson, 2018; Ohme, 2021; Thorson and Wells, 2016). Second, filtering algorithms might be the type most salient in social media users’ awareness, as they play a key role in public debates and become increasingly central to social media platforms (e.g. TikTok). Thus, we are interested in the user experience with these filter-
ing algorithms as they use social media apps.
To construct this review, we searched for literature using the search term “algorithmic
literacy” and related terms, including “algorithmic awareness,” “algorithmic knowl-
edge,” “algorithmic understanding,” “algorithmic experience,” “algorithmic skills,”
“algorithmic divide,” and “algorithm” + “belief.” We started our search in Google
Scholar, and then focused more specifically on the Communication and Mass Media
Complete, PsycInfo, and ACM Digital Library databases. Our search strategy moved from narrow to broad: first searching the terms in the title, then the abstract, and then
the full text. Upon reviewing abstracts, we determined if the article empirically assessed
or theorized Internet users’ understanding of or interaction with algorithms. Finally, we
included additional relevant articles cited by these initial articles. In total, we reviewed
96 articles that were deemed potentially relevant, 50 of which are included in this review.

Defining algorithmic literacy


Algorithmic literacy has recently been defined in two ways. First, as

the capacity and opportunity to be aware of both the presence and impact of algorithmically-
driven systems on self- or collaboratively-identified goals, and the capacity and opportunity to
crystalize this understanding into a strategic use of these systems to accomplish said goals.
(DeVito, 2021: 3)

Second, as “being aware of the use of algorithms in online applications, platforms, and
services, knowing how algorithms work, being able to critically evaluate algorithmic
decision-making as well as having the skills to cope with or even influence algorithmic
operations” (Dogruel et al., 2021: 4). Both definitions attempt to incorporate the evolu-
tion of many sub-dimensions of algorithmic understanding. The first proposes two broad
stages of understanding, from mere awareness to practical use. The second expands lit-
eracy to four steps, by distinguishing awareness from knowledge, adding the ability to
critique algorithms, and the skills to influence them. While both offer necessarily nuanced accounts of literacy, they propose different levels of granularity in terms of what comprehensively constitutes “literacy.”

A short history of defining algorithmic understanding


Initially, research focused on the concept of algorithmic awareness: that users are even
aware of the existence of algorithms. It has since been more explicitly defined as “know-
ing that a dynamic system is in place that can personalize and customize the information
that a user sees or hears” (Hargittai et al., 2020: 771). In attempts to address more than
just awareness, other researchers focus on algorithmic knowledge. Cotter and Reisdorf
(2020) note that while

basic awareness provides a foundation on which to build an understanding of the criteria by which algorithms rank content . . . more advanced algorithmic knowledge includes insight
about the principles and methods of software development that underlie algorithms and/or the
social and political effects of algorithms. (p. 747)

Finally, algorithmic skill refers to “users’ knowledge about algorithms and their role in
making online content visible, as well as users’ ability to figure out how particular algo-
rithms work, and then leverage that knowledge when producing and sharing content”
(Klawitter and Hargittai, 2018: 3492).
Swart (2021a) categorizes experiences with algorithms into cognitive, affective, and
behavioral dimensions, where understanding algorithms represents the cognitive com-
prehension of their existence and functioning, sensing algorithms represents the affective
influences that algorithms have over users, and engaging with algorithms represents the
behavioral dimension of interactions with algorithms. This aligns with Lomborg and Kapsch’s (2020) framework of knowing, feeling, and doing algorithms.
Dogruel et al. (2021) place both awareness and knowledge in the cognitive dimension
of understanding algorithms, separate from a behavioral dimension, which includes cop-
ing with algorithms and using them for creation. Cotter (2022) taps into the behavioral
by proposing a practical knowledge of algorithms, “to capture knowledge located at the
intersection of practice and discourse” (p. 2). This is similar to the use of skills (Klawitter
and Hargittai, 2018), though the ambiguity of algorithms offers no concrete proof of how
skilled a user is in using them, highlighting a boundary condition of behavioral under-
standing. Finally, an affective dimension has developed largely in the literature of atti-
tudes toward algorithms. Specifically, research pits appreciation (preferring an algorithm
over a human in decision-making; Logg et al., 2019) against aversion (preferring a human over an algorithm; Dietvorst et al., 2015). Though not explicitly about understanding algorithms—rather focusing on how individuals feel about them—these affective components also imply awareness, and potentially some component of skill.

Figure 1. Dimensions of algorithmic literacy.
The short but varied history of algorithmic definitions reflects both a range of concepts being addressed (e.g. awareness vs skills) and terminological inconsistencies that must be resolved for the field to move forward cohesively.
We advocate further convergence on algorithmic literacy, as defined by DeVito (2021) and Dogruel et al. (2021), as an umbrella term. However, we must decide
whether to cultivate a cohesive definition of literacy as an overarching construct, or
accept definitions as collections of other concepts that make up literacy.
Previous iterations of literacy (e.g. media, information, Internet, digital, and social media literacy) are also multi-faceted, incorporating elements of their previous litera-
cies. For instance, a recent systematic review of social media literacy (Polanco-Levicán
and Salvo-Garrido, 2022) concludes that its definition takes media literacy and adds ele-
ments pertinent to social media, which overlap but do not encompass digital literacy. Yet,
no one definition of social media literacy rises to the surface. Instead, definitions vary
from those that tap into cognitive, affective, and behavioral elements to those which
address increasingly complex stages of understanding. Settling on one definition of algo-
rithmic literacy will prove just as difficult. In any case, we can similarly categorize the
existing and emerging cognitive, affective, and behavioral aspects of understanding, and
further define boundaries between them. Moving forward, this allows the development
of literacy frameworks, which can address literacy gaps and lead to interventions along
the lines of previous literacy research in communication technology. As a first step, we
have visualized the current definitions in Figure 1.

Shifting divides in algorithmic literacy of social media users


The implementation of algorithms in social media sets a starting point for dividing users
who know about them from users who do not, driving gaps that have impacts in areas
from politics (Huszár et al., 2022) to e-commerce (Klawitter and Hargittai, 2018) to
community (DeVito, 2021). Facebook was the first social media platform to start
experimenting with an algorithm on its newly created News Feed in 2007 (Wallaroo
Media, 2021). This led to EdgeRank, Facebook’s first algorithm, which showed News
Feed content based on a variety of factors, including relationships, “weight” of each
item, and time decay (Bucher, 2012). This has since been replaced by a more sophisti-
cated and constantly evolving machine learning algorithm to curate highly personalized
content (Tech@Facebook, 2021). Twitter implemented its Timeline algorithm in 2016,
switching to an “optimized” (rather than chronological) feed by default (Koumchatzky
and Andryeyev, 2017), and Instagram followed suit the same year (Titcomb, 2017). Most
recently, TikTok revealed in 2020 that its algorithm recommends videos in a user’s “For
You” feed based on user interactions with other videos, video information, and device
and account settings (TikTok, 2020). Each of these algorithms chooses for the user what
appears in their social media feed, and not all users know this.
Some of the earliest writing on how social media users engage with algorithms started
with Bucher’s (2012) Foucauldian analysis of managing visibility on Facebook within its
EdgeRank algorithm. This case study illuminated how even the earliest social media algo-
rithms shaped the prevalence of one’s content and thus identity in social media spaces. The
first empirical work on Facebook users’ experiences with algorithms showed that the
majority (62.5%) were still not aware that Facebook did not show all available posts in
their news feeds, and were surprised or even angry to find out that content was filtered
(Eslami et al., 2015). This left most users behind in trying to communicate and manage
their online relationships. When asked more openly whether they thought Facebook always
showed all their friends’ posts, the majority (73%) said no (Rader and Gray, 2015). Yet they
did not understand how such filtering worked, or why it was done, which meant they had
little power to influence or leverage it. By now, most online news users realize that content
is filtered, but still have a limited understanding of the criteria used (Powers, 2017; Swart,
2021b). Similarly, YouTube users show a high awareness of the algorithmic process that
recommends content on the platform, but can only guess at what data it uses (Alvarado
et al., 2020). In both cases, this leaves users guessing at how to get to the content they want
or how to get their content to desired audiences. Notably, TikTok users feel acutely aware
of the algorithms that shape their “For You” page and state that they regularly “train” the
algorithm to show desirable videos (Siles and Meléndez-Moran, 2021), though the accu-
racy of this is difficult to determine given the opaque nature of algorithms.
In any case, even those actively invested in understanding algorithms can only glean
so much from their interactions with them. Independent artists on sites such as Etsy rec-
ognize the importance of algorithms, and find ways to learn about taking advantage of
them (e.g. by testing out various search optimization strategies), but are ultimately frus-
trated with their lack of verified knowledge (Klawitter and Hargittai, 2018). YouTube
content creators engage in “algorithmic labor” to negotiate the opacity and precarity of
the platform’s advertising moderation algorithms (Ma and Kou, 2021). Instagram influ-
encers are also acutely aware of algorithms, but lack definitive information about their
functioning, so they take it upon themselves to “play the visibility game” by testing the
outcomes of various engagement behaviors (Cotter, 2019).
For marginalized communities, not being able to grasp the algorithm is equally
important, and can have serious social consequences. Lesbian, gay, bisexual, transgender
and queer/questioning (LGBTQ+) Facebook users carefully navigate algorithms to
manage their self-presentation in online spaces subject to context collapse, yet must con-
tinually re-theorize how these changing algorithms work (DeVito, 2021). Similarly, on
TikTok, LGBTQ+ users never feel fully in control of their digital self-presentation,
because while the algorithm is highly personalized, it cannot be tamed, leaving users
unable to integrate their various selves (Simpson et al., 2022). Thus, while algorithmic
literacy of social media platforms has increased markedly within the past decade, users
may be reaching the limits of what they can know without greater algorithmic transpar-
ency, and are facing the consequences.

Inductive routes to algorithmic literacy


Given the limited available knowledge about how algorithms work, users can only
develop their own ideas about what algorithms might be. Bucher (2017) calls this inter-
action between people and algorithms an algorithmic imaginary, or the “way in which
people imagine, perceive and experience algorithms and what these imaginations make
possible” (p. 31). These are not false beliefs, but the best understanding that users can
develop based on their own experience with algorithmic spaces such as Facebook. For
example, users may notice a commonality between their social media behavior and tar-
geted ads and theorize how these are connected.
Based on such repeated experiences, users develop folk theories, or “intuitive, infor-
mal theories that individuals develop to explain the outcomes, effects, or consequences
of technological systems, which guide reactions to and behavior towards said systems”
(DeVito et al., 2017: 3165). These folk theories are malleable, adapting to accommodate
algorithmic changes on the platforms (DeVito, 2021). Most social media users develop
folk theories based on their own experiences within a platform (endogenous informa-
tion), such as patterns of who and what appears in their feeds. This is complemented by
exogenous information, such as media reports or discussions with other users.
Algorithmic folk theories are both general and platform-specific. For instance,
Facebook users developed several theories about algorithms in their feeds, based on their
experiences on that site (Eslami et al., 2016). Most followed a personal engagement the-
ory that the more they interact with someone, the more they show up in their feed. Others
ascribed to the global popularity theory (content with more likes is more likely to show
up in their feed), format theory (posts with media content get higher priority), or narcissus
theory (users see content from those similar to themselves). However, Spotify folk theo-
ries (Siles et al., 2020) reveal that users estimate how the Spotify algorithm works based
on their understanding of other algorithms (e.g. Netflix recommendations) and also based
on that platform’s specific features in contrast to competitors (e.g. Apple Music). On
YouTube, users’ beliefs vary widely about how content is presented to them, among which
there is no explicit agreement (Alvarado et al., 2020). On TikTok, users focus on how
content could reach other users’ feeds based on their own engagement (Klug et al., 2021).
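These folk theories can be read as competing hypotheses about which terms enter a platform’s hidden scoring function. The following minimal sketch renders the four Facebook folk theories from Eslami et al. (2016) as terms of one hypothetical composite score; the field names and weights are invented stand-ins, not platform code.

```python
# Each term corresponds to one folk theory from Eslami et al. (2016).
# Field names and weights are hypothetical stand-ins, not platform values.
def folk_theory_score(post: dict, viewer: dict) -> float:
    personal_engagement = viewer["interactions"].get(post["author"], 0)  # personal engagement theory
    global_popularity = post["likes"]                                    # global popularity theory
    has_media = 1.0 if post["has_media"] else 0.0                        # format theory
    similarity = viewer["similarity"].get(post["author"], 0.0)          # narcissus theory
    return (0.5 * personal_engagement + 0.3 * global_popularity
            + 0.1 * has_media + 0.1 * similarity)

viewer = {"interactions": {"ana": 12}, "similarity": {"ana": 0.8}}
post = {"author": "ana", "likes": 40, "has_media": True}
print(folk_theory_score(post, viewer))  # 0.5*12 + 0.3*40 + 0.1*1 + 0.1*0.8 = 18.18
```

Holding a particular folk theory then amounts to believing that one of these terms dominates the others; platform-specific theories (e.g. for Spotify or TikTok) would swap in different signals.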

Gaps and biases in algorithmic literacy


Although some social media users have developed a rich understanding of algorithms,
this also presents the risk of a growing divide, akin to those seen with other technologies
(e.g. Internet access). Initial research on algorithmic literacy gaps shows that those with
less developed technological (specifically, search engine) skills also showed lower algo-
rithmic knowledge (Cotter and Reisdorf, 2020). Yet even those with higher formal edu-
cation may be missing this technology-specific knowledge. A recent report shows that
college students are no longer prepared for the information landscape that exists today,
as assignments do not address the necessary technological skills (Head et al., 2020).
While most of these students indicated an awareness of algorithms, most had no idea
how they worked or what their effects would be. With most of the online content that
users engage with now controlled by algorithms, a lack of information literacy implies a
lack of algorithmic literacy, with detrimental implications.
Another common factor in algorithmic literacy, as with most digital literacies, is the
effect of age (Cotter and Reisdorf, 2020; Gran et al., 2021), with younger Internet users
showing more algorithmic knowledge than older users. This may disproportionately
leave older social media users at higher risk for misinformation or information exclusion.
This pattern can already be seen in terms of how different generations handle misinfor-
mation online, with reports indicating that older users are worse at recognizing misinfor-
mation, and have a greater hand in spreading it (Gottfried and Grieco, 2018). With
algorithmic literacy at stake, those with already lower algorithmic literacy can be further
adversely affected with reduced or biased information access in their social media feeds.
For example, not understanding that an algorithm is dictating what appears in one’s
Facebook feed could lead a user to believe that the limited political information they are
seeing is the whole and accurate political reality.
More insidiously than just not showing users the full scope of information, algorithms
systematically bias content for users, excluding entire groups from receiving information
or being represented by it. This happens when Google shows ads for higher-paying jobs
disproportionately to men over women (Kirkpatrick, 2016), or when Facebook targets
their housing ads so as to exclude certain racial, religious, disabled, and other protected
classes of people (Booker, 2019). Worse yet, algorithms can make detrimental assump-
tions about users in a process called algorithmic symbolic annihilation, such as when
individuals who have experienced pregnancy loss continue to be subjected to content
about pregnancy (Andalibi and Garcia, 2021).
Unfortunately, algorithms do not merely reflect existing biases, but further perpetuate
them through their own design. Danks and London (2017) taxonomize routes to algorithmic bias, placing interpretation bias—how the algorithm presents information to the user—at the final stage. As they point out, algorithms are biased through many earlier steps,
starting with learning from biased input data. For instance, facial recognition software—
now widely understood to be biased against women and people of color—is likely built
on training datasets that disproportionately feature white male faces (Garvie and Frankle,
2016). This could mean dominant groups receiving even more opportunities than already
marginalized groups. While this problem expands beyond mere literacy, awareness of
these biases is the first step in correcting them.

Initial theorizing of human–algorithm interaction


Building a comprehensive framework of algorithmic literacy is difficult because of the ever-
changing nature of algorithms, but there are some attempts to move toward a more cohesive
study of algorithmic experience (AX). Just as approaches to understanding algorithmic literacy come from communication and computer science, among other fields, so do attempts
to build frameworks for improving literacy based on the algorithm-user relationship.
From a communication perspective, Lomborg and Kapsch (2020) adapt the commu-
nication theory of decoding to developing an understanding of algorithms. The purpose
of this approach is to highlight the gaps in knowledge that must be interpreted for mean-
ingful communication, in this case about and with algorithms. Because algorithms can-
not be directly decoded, users attempt to decode them through communication processes
of knowing, feeling, and doing algorithms. To know an algorithm is to be aware of its
presence and basic functioning, which varies greatly between individuals, and comes
from a combination of formal learning, personal experiences, and third-party media and
conversations. Through encounters with algorithms, users also feel them. As illustrated
by earlier work on appreciation (e.g. Logg et al., 2019) and aversion (e.g. Dietvorst et al.,
2015), these experiences can be positive or negative, but only if the algorithm becomes
noticeable, which most often it does not. Users do algorithms by interacting with them
through digital media in three particular ways: using them as intended by effectively
feeding them data through usage, cautiously engaging with them as a necessary but
imperfect part of information systems, or actively resisting them as problematic
technologies.
These three stages of decoding algorithms nicely synthesize the existing research on awareness of algorithms (knowing) and attitudes about algorithms (feeling), and point to necessary future work on assessing the effects of algorithmic literacy on behaviors
(doing). In particular, it provides a framework for linking attitudes with behaviors,
namely in proposing that the dominant, negotiated, or oppositional ways that users
engage with algorithms are determined by holding positive, mixed, or negative views of
algorithms, respectively. By incorporating how users feel about algorithms, future
research on improving literacy can tailor interventions to users based on their existing
awareness and attitudes.
From a technical user experience perspective, Alvarado and Waern (2018) propose
AX as an analytical framework for making user interactions with algorithms more
explicit. The framework contains five dimensions, the purpose of which is to increase
algorithmic awareness and empowerment. Algorithmic awareness is the general under-
standing that algorithms are present, which guides the other dimensions. Algorithmic
profiling transparency refers to the extent to which a system makes visible what it knows
about a user and how it uses that information to present information. Algorithmic profil-
ing management considers how much input users could have in managing the profiling
done by an algorithm. Algorithmic user control refers to the various ways a system could
give users control over its algorithm, such as changing the display of news feed items,
turning off data sources, or giving feedback when the algorithm makes a faulty predic-
tion. Selective algorithmic memory expands on user control to specifically allow users to
determine what data algorithms get to use to make their predictions.
This framework provides a jumping off point for interventions to increase algorithmic
literacy. For instance, profiling transparency could be displayed in real-time social media
use to increase algorithmic awareness, and profiling management could further test a
user’s algorithmic knowledge. User control could be implemented as various engage-
ment points that improve algorithmic skills. Finally, selective memory might be the
result of clearer profiling, management, and control that influences key attitudes about
algorithms and resulting behaviors.

Investigating algorithmic literacy


As Kitchin (2017) notes, researching users’ understanding of algorithms is challenging
because algorithms are (1) largely inaccessible, (2) highly varied from platform to plat-
form, and (3) constantly changing. First, platforms that depend on user data (e.g. Meta)
do not reveal how their algorithms work. Second, other platforms (e.g. Twitter and TikTok) may function on largely different sets of algorithms, meaning that even if one is
revealed, it may not usefully inform users of another. Third, algorithms are dynamic by
design, continuously learning from user data to improve their output, so insights are not
relevant for long. Furthermore, as Seaver (2019) notes, knowing an algorithm is not a simple matter of achieving algorithmic transparency, but of understanding algorithmic systems that function as an interplay between humans and computers within broader social, cul-
tural, and political contexts. This further complicates researchers’ attempts to determine
how much users know.
Given this, Hargittai et al. (2020) set out guidelines for what may or may not work for
assessing these “black box” measures. For instance, directly asking social media users to
report their level of literacy is unlikely to be useful, but instead more in-depth discus-
sions of their experiences with algorithms may uncover what they really know. This
returns us to the point that algorithmic literacy is not simply algorithmic awareness, or
even knowing what algorithms are, but also feelings toward algorithms and ways of
using them.

Methodological starting points


Current methods for studying algorithmic literacy include qualitative approaches (e.g. focus groups; Siles et al., 2020) and quantitative approaches (e.g. surveys; Cotter and
Reisdorf, 2020). Naturally, each method offers a unique lens for assessing algorithmic
literacy, while also presenting limitations. Whereas strengths and weaknesses exist for
all research methods, this problem is particularly salient for algorithmic literacy, where
the opacity of algorithms makes arriving at a “ground truth” of assessing their functions
impossible (Hargittai et al., 2020). Given this, much of the research has been exploratory
thus far, with recent attempts to move into testing more uniformly defined measures of
literacy (Dogruel et al., 2021; Zarouali et al., 2021).
Knowledge of AXs was built on qualitative methods such as in-depth interviews,
leading to the development of folk theories about Facebook (Bucher, 2017), Google
News (Powers, 2017), YouTube (Alvarado et al., 2020), Spotify (Siles et al., 2020), and
TikTok (Klug et al., 2021). These methods allow participants to express a wide range of emotions about, and expectations of, algorithms in their own use, which can also
vary widely by platform. However, given unique user experiences across platforms, this
method also limits the ability to draw broader common inferences about an AX.
Conversely, the move to quantitative survey approaches provides a generalizable
method for testing literacy as a predictor or outcome, but their unified measures sacrifice
unique user experiences. For example, surveys measuring algorithmic knowledge are
able to show that education and search skills are positively correlated with algorithmic
knowledge (Cotter and Reisdorf, 2020), and negatively correlated with online news
engagement (Makady, 2021). These studies provide new insight into what might predict
or be predicted by algorithmic literacy, though with a necessarily narrower understand-
ing of the concept.
One method in need of development involves experimental studies of algorithmic
literacy effects on various cognitive, affective, and behavioral outcomes. One such inter-
vention has tested the effects of exposure to algorithmic information and found changes
to users’ attitudes about algorithms (Silva et al., 2022). Computational approaches are
another avenue for more advanced assessments of algorithmic literacy, potentially
through the collection and display of social media data to its users for reflection.

Measuring algorithmic literacy


Dogruel et al.’s (2021) algorithmic literacy scale is currently the most comprehensive
attempt to measure literacy, capturing the dimensions of algorithmic awareness and algo-
rithmic knowledge. Algorithmic awareness is measured using binary statements about
whether a variety of communication technologies (e.g. Internet browsers) use algorithms
to function. Algorithmic knowledge uses true/false statements to measure more nuanced
aspects of algorithms, such as “The use of algorithms which deliver personalized content
can mean that the content you find is mostly consistent with your pre-existing opinions”
and “I can influence algorithms with my Internet usage behavior” (Dogruel, Online
Supplement). This scale takes a useful step forward in assessing literacy, though it still breaks into two sub-scales that together do not measure algorithmic literacy as one cohesive construct.
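As a concrete illustration of how such a two-part instrument might be scored, consider the following sketch. The items and answer keys are invented stand-ins in the spirit of the scale, not the published items.

```python
# Hypothetical awareness items (does this technology use algorithms?) and
# knowledge items (true/false statements), each scored as proportion correct.
AWARENESS_KEY = {"internet_browser_uses_algorithms": True,
                 "pocket_calculator_personalizes_output": False}
KNOWLEDGE_KEY = {"personalization_can_reinforce_existing_opinions": True,
                 "users_cannot_influence_algorithms_at_all": False}

def subscale_score(responses: dict, key: dict) -> float:
    """Proportion of items answered in the keyed direction."""
    return sum(responses[item] == keyed for item, keyed in key.items()) / len(key)

responses = {"internet_browser_uses_algorithms": True,
             "pocket_calculator_personalizes_output": True,
             "personalization_can_reinforce_existing_opinions": True,
             "users_cannot_influence_algorithms_at_all": False}
awareness = subscale_score(responses, AWARENESS_KEY)  # 0.5
knowledge = subscale_score(responses, KNOWLEDGE_KEY)  # 1.0
# As noted above, the two sub-scores remain separate; they do not combine
# into a single validated "literacy" score.
```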
Aside from this scale, most research has focused on investigating algorithmic aware-
ness or algorithmic knowledge separately, though each is not uniformly operationalized.
To avoid priming effects, awareness is often gauged indirectly by asking users open-endedly about their general experiences with algorithmically driven platforms, in the hope that indicators of algorithmic awareness arise (e.g. DeVito et al., 2018; Schwartz
and Mahnke, 2018). Awareness has also been addressed through somewhat more focused
questions, leading users to speculate about why content is presented to them, but still
usually without explicit reference to algorithms (e.g. Koenig, 2020; Powers, 2017; Rader
and Gray, 2015). In the most direct measure, Gran et al. (2021) ask survey participants to
self-report their awareness with the question “What kind of awareness do you have that
algorithms are used to present recommendations, advertisements and other content on
the Internet?” (p. 18).
Zarouali et al. (2021) provide a more developed quantitative measure of awareness
with their Algorithmic Media Content Awareness (AMCA) scale. This scale measures
the level of awareness of four constructs of algorithmic media platforms: content filter-
ing, automated decision-making, human–algorithm interplay, and ethical considerations.
One drawback of this scale is that it relies on users to assess their awareness of each
construct for a specific platform, rather than generally. To date, no published research has
used the scale for platforms beyond those tested in its original development (Facebook,
Netflix, and YouTube), leaving open the question of whether content differs for other
platforms (e.g. Instagram, Twitter) or is truly generalizable across all algorithmic media
content.
Cotter and Reisdorf’s (2020) measure of algorithmic knowledge asks participants to
rank how much influence they feel various actions have on their search engine results.
This differs from knowledge measures Zarouali et al. (2021) validated against awareness
in their AMCA scale, which used true/false statements about common algorithmic mis-
conceptions. Their evidence indicates that algorithmic awareness and algorithmic knowl-
edge are positively correlated, but remain distinct concepts. Therefore, rather than
focusing on distinguishing or combining concepts, future research should focus on fur-
ther developing frameworks that incorporate sub-dimensions, such as those proposed by
Swart (2021a) and Lomborg and Kapsch (2020).

Moving forward with algorithmic literacy research: an agenda

The previous systematic overview of theoretical and methodological research on liter-
acy, awareness, and attitudes toward algorithms indicates that studies from various dis-
ciplines have started focusing on how humans perceive, explain, and evaluate the
functionalities of algorithms. We believe that a promising way to move this line of
research forward is to consider algorithmic literacy in different roles within the process
of human–technology interaction: (1) as a predictor of users’ evaluations of algorithms
and their interaction with them, (2) as a moderator and/or mediator indicating when this
form of literacy can exacerbate or attenuate technology effects, and (3) as a dependent
variable by asking how it can be changed. Taking these three roles into account, we pro-
pose a research agenda within the framework of human–algorithm interaction focused on
four key areas: (1) balancing algorithmic literacy with algorithmic transparency, (2)
engaging users in increasing their literacy, (3) developing the affective and behavioral
facets of literacy, and (4) addressing the algorithmic divide.

Balancing user literacy and platform transparency


We believe that the omnipresence of algorithms in social media users’ daily lives requires communication scholars—as one of the main purposes of their research—to consider the relationship between users and algorithm creators, particularly in terms of the responsibility that each has in promoting literacy. Following the ideas of human–algorithm inter-
action approaches (Alvarado and Waern, 2018; Lomborg and Kapsch, 2020; Swart,
2021a), we argue that algorithmic systems need to represent both entities—the human
and the algorithm—and describe how their interaction can shape algorithmic literacy.
More specifically, researchers need to specify both what users need to understand, and
which technological cues or properties are accessible to users to support this understand-
ing. Much of the focus thus far has been on the literacy needs of social media users, yet
there is also pressure on app developers for algorithmic transparency.
In their innovative intervention, Rader et al. (2018) test short explanatory statements
that reveal key elements of the Facebook algorithm, including what, how, and why
information ends up in their feeds. They find evidence that viewing these statements
increased users’ understanding of what algorithms are and how they function. Notably, it
often presented new and surprising information to users, indicating that average social
media users still have a lot to learn about algorithms, but that even a little bit of informa-
tion from the app itself can have a significant impact on their understanding. Some apps
do provide various levels of algorithmic cues for why content appears in a feed, such as
“because you interacted with a post from this user” on Instagram or “Sara celebrated this
post” on LinkedIn, yet it is not yet known which cues are displayed to which users, and
whether they are noticed at all.
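A minimal sketch of such a cue mechanism is shown below. The trigger conditions and wording are assumptions modeled on the examples just mentioned, not any platform’s actual logic.

```python
# Attach a short "why am I seeing this?" explanation to a feed item,
# in the spirit of Rader et al.'s (2018) explanatory statements.
def explain_item(item: dict) -> str:
    if item.get("friend_engaged"):                       # a connection's action
        return f"{item['friend_engaged']} celebrated this post."
    if item.get("past_interactions", 0) > 0:             # viewer's own history
        return "Because you interacted with a post from this user."
    return "Recommended based on your recent activity."  # fallback

print(explain_item({"friend_engaged": "Sara"}))  # -> "Sara celebrated this post."
```

Even cues this simple would make one slice of the profiling pipeline visible; the open empirical question, as noted above, is which cues users actually notice.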
The AX framework (Alvarado and Waern, 2018) outlines which psychological pro-
cesses operate when individuals interact with each of these technological cues and prop-
erties. Besides identifying these processes, an applied literacy/design approach could
specify how different manifestations of those cues (explicit or implicit recommenda-
tions) form different dimensions of algorithmic literacy (e.g. awareness, knowledge,
interaction skills). This theoretical endeavor should also include key variables that sig-
nificantly shape the level of algorithmic literacy besides the actual human–algorithm
interaction such as general technological experience (e.g. search engine skills; Cotter and
Reisdorf, 2020).
Algorithmic folk theories play an important role in uncovering how social media
users understand, feel about, and engage with algorithms, from their perspective. While
folk theories can vary in accuracy, they shed light on users’ subjective experiences with
algorithmic environments, which affect their attitudes and behaviors around algorithms.
Crucially, they highlight what users do and do not perceive in terms of algorithmic trans-
parency, which offers important insights for designers who make choices about what
algorithmic cues to make visible on their interfaces. Previous works specified which folk
theories users of specific platforms develop based on interrelationships between behav-
ior and the consequences they observe (e.g. DeVito, 2021; Eslami et al., 2016; Lee et al.,
2022; Ytre-Arne and Moe, 2021) and which kind of sources they use to develop these lay
assumptions (DeVos et al., 2022). Still, future research needs to observe to what extent
different levels of algorithmic transparency (manifested through cues) can provoke cer-
tain cognitive, affective, and behavioral responses reflecting a certain level of algorith-
mic literacy.

Engaging users in algorithmic literacy


Although we have proposed that algorithmic literacy should not be solely the responsibility of the social media user, the current reality is that individuals must bear most of the effort in improving literacy. Therefore, it seems advisable to develop and customize interventions that increase users’ engagement in their own literacy. Especially in the face of a
potential behavioral calculus (Dienlin and Metzger, 2016), a higher level of algorithmic
literacy seems pivotal for users to make informed and well-reasoned decisions about
which actions they wish to show and which information they desire to disclose when
using emerging communication technologies. Building on the current empirical work—
particularly that on folk theories (e.g. DeVito, 2021; DeVito et al., 2017; Eslami et al.,
2016)—four key psychological concepts seem particularly important to consider in
engaging users in this process: (1) curiosity, (2) motivation, (3) control, and (4) practice.
These reflect individual, situational, attitudinal, and behavioral aspects.

Curiosity. First, learning about algorithms seems aided greatly when curiosity is triggered
(e.g. Siles and Meléndez-Moran, 2021). While most social media users are now aware of
the existence of algorithms (Lomborg and Kapsch, 2020), this is not enough to provoke
greater literacy. Instead, users likely need to be curious about what the algorithms do and
why. Curiosity may be a stable personality trait, but it could be encouraged in certain social
media contexts. Bucher (2017) finds that many Facebook users first learn about algo-
rithms in unexpected encounters or “whoa moments,” such as when they realize a social
media ad has “found them” from previous interactions. Rather than making these encounters “creepy,” platforms could employ algorithmic cues about why the content in question found them. Previous research indicates that brief explanatory mechanisms could be
effective in increasing literacy (Rader et al., 2018). To spark curiosity about algorithms,
a pop-up notification could appear when users engage with a post, asking “curious why
you received this post?” and providing an opportunity for users to learn more.

Motivation. Second, motivation may be crucial, as passive social media users are not as
likely to care why certain content appears on their social media apps. Yet, more active
users such as content creators, influencers, and those who otherwise use social media to
meet specific goals have a vested interest in learning how the algorithm filters content.
For example, Etsy artists and YouTube content creators strategize to optimize the algo-
rithm (Klawitter and Hargittai, 2018; Ma and Kou, 2021) for greater earning potential.
Furthermore, users in the demographic majority who find themselves well-represented
by the content in their feeds may not feel compelled to care how the algorithm works, as
it already serves them (e.g. DeVito, 2022). However, users in marginalized groups whose
identities are not as prominent in the space—especially when they are actively fighting
for recognition through movements such as Black Lives Matter or #MeToo—are likely
to be more motivated to understand how an algorithm could filter out their presence.
Thus, finding each user’s motivations for engaging with content on a social media plat-
form could be a vital step in determining what and how to increase their algorithmic
literacy.

Control. Third, the greater users’ locus of control—that is, the more influence they feel they have on the algorithm—the more likely they may be to engage with and learn from it. Previous research finds that users appreciate algorithms more when they are
given even a little control over them (Dietvorst et al., 2018). This seems particularly true in
the case of the TikTok algorithm, where a user’s sense that they have some influence on
it plays a role in their enjoyment of the platform, and indicates a deeper algorithmic
understanding than on other platforms (Siles and Meléndez-Moran, 2021; Simpson et al.,
2022). While platforms determine how much influence (if any) users can have on their
algorithm, users could be made aware about where they have some influence, such as
how to prioritize certain friends in one’s Facebook news feed or how to turn off person-
alization of trends on Twitter.

Practice. Finally, users need practice with algorithms to better understand them. Several
studies indicate that those who use social media platforms more are more knowledgeable
about the algorithms that determine the content shown (Cotter and Reisdorf, 2020;
Eslami et al., 2015). Demographic factors such as age and education are also correlated
with algorithmic literacy (Gran et al., 2021), providing indirect evidence that those who
use social media more (younger, more educated) have greater algorithmic literacy. Prac-
tice with algorithms might then be a matter of closing the digital divide, by providing
better access to and training on social media, both in formal education contexts and
through online learning opportunities while using social media platforms.

Strengthening affective and behavioral facets of algorithmic literacy


Currently, measures of algorithmic literacy lie mostly in the cognitive dimension (Cotter
and Reisdorf, 2020; Dogruel et al., 2021; Gran et al., 2021; Zarouali et al., 2021), stopping
at awareness or knowledge of algorithms. However, broader frameworks (e.g. Lomborg
and Kapsch, 2020; Swart, 2021a) note the importance of affective and behavioral dimen-
sions as part of holistic literacy. Knowledge about algorithms is not strongly correlated
with positive attitudes (Araujo et al., 2020; Dietvorst et al., 2015; Yeomans et al., 2019),
and increasing this knowledge does little to change these attitudes (Silva et al., 2022). This
highlights that affective dimensions of literacy function separately from knowledge, and
likely depend on users’ needs and motivations for using any specific algorithmically
driven platform. Indeed, initial evidence indicates that users reflect upon the “platform
spirit” and whether this platform’s functionalities match their understanding of the system
and their motives to use it (DeVito, 2021). Thus, it is crucial to analyze users’ individual
differences that could intervene in the relationship between algorithmic knowledge and
attitudes. For instance, the attitude toward an algorithm that filters information based on
the users’ political preferences may vary depending on whether users are driven by a
defense (i.e. looking for information that supports one’s viewpoints) or accuracy (i.e. seek-
ing unbiased and balanced information presentation) motivation (Winter et al., 2016).
Knowledge has also long been treated as a key qualification for responsible and desir-
able user behavior in human–technology interaction (Livingstone, 2004). Still, it is nor-
matively difficult to estimate which behaviors related to algorithms define literacy. For
instance, is it the user feeding the algorithm with more information so that it becomes
more accurate or the user refraining from disclosing information to protect their privacy
that is more literate? Again, the desirability of a behavior might be a function of users’
individual needs (e.g. their need for privacy; Trepte and Masur, 2020). Users’ actions on
social media follow a complex calculus of what they gain versus what they lose when
disclosing certain types of information (Dienlin and Metzger, 2016). This calculus has
implications not only for users’ lay perceptions and folk theories about the curation of
information they receive but also for their self-presentation as the algorithmic curation
can be a barrier in the relationship between self-presenters and their audiences (DeVito
et al., 2018; Karizat et al., 2021; Lee et al., 2022). Extending the research focus of algo-
rithmic literacy as predictor of this behavioral calculus will help to uncover when knowl-
edge and awareness transfer to observable actions.

Addressing the algorithmic divide


Finally, the pivotal question for further approaching the concept of algorithmic literacy and its status quo is: Who knows what, and why? This line of inquiry, considering algorithmic literacy as a consequence of certain circumstances, is clearly associated with the idea of
the digital divide and the extent to which technical knowledge and skills vary across dif-
ferent user groups (Van Deursen and Van Dijk, 2014). A recent study suggested that
algorithmic awareness is more prevalent among male and better-educated users (Gran
et al., 2021). Still, it remains unclear whether these associations are observable in differ-
ent national contexts and to what extent they are attributable to further factors such as
access to and experience with technologies, and users’ self-efficacy. Identifying groups
within which algorithmic literacy is remarkably low would help to develop tailored inter-
ventions to increase users’ knowledge and skills related to algorithms and to propose
ways to close the algorithmic divide.
That said, future research should not only provide a comprehensive socio-demo-
graphic analysis of the prevalence of algorithmic literacy but also focus on “softer”
drivers such as access and experience with technology. Likewise, algorithmic divides
can also occur based on users’ identities and their construction thereof. In particular, users from marginalized groups such as the LGBTQ+ community have indicated that personalizing algorithms do not grasp their identities properly, so they face extra literacy challenges in making algorithms within constantly changing social systems work for them (DeVito, 2022; Karizat et al., 2021; Simpson et al., 2022).
Therefore, users’ personal identities warrant exploration when determining which user
groups have extensive versus less knowledge about how algorithms work and why this
is the case.

Conclusion
The current state of the research on algorithmic literacy is rich, if still somewhat scat-
tered in its approaches across various fields of study. In the past decade, researchers have
uncovered how aware social media users are of algorithms, how they form folk theories,
and have begun to develop quantitative assessments of algorithmic literacy. While a
comprehensive framework of algorithmic literacy is difficult to develop due to the
opaque, heterogeneous, and user-dependent nature of the algorithms being investigated,
some attempts exist to synthesize the user experience of algorithms. Still, much remains
unknown, such as what predicts algorithmic literacy, its cognitive, affective, and behav-
ioral outcomes, and how to improve it. Thus, we present an agenda for moving forward
with algorithmic literacy research, which includes balancing user and developer responsibilities, engaging users in their literacy, further developing the affective and behavioral dimensions of literacy, and addressing the algorithmic divide.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/
or publication of this article: This research was completed with support from the Fulbright U.S.
Scholar Program and Fulbright Germany.

ORCID iDs
Anne Oeldorf-Hirsch https://orcid.org/0000-0002-3961-3766
German Neubaum https://orcid.org/0000-0002-7006-7089

References
Alvarado O and Waern A (2018) Towards algorithmic experience. In: Proceedings of the 2018
CHI conference on human factors in computing systems, Montreal, QC, Canada, 21–26 April,
pp. 1–12. New York: ACM.
Alvarado O, Heuer H, Vanden Abeele V, et al. (2020) Middle-aged video consumers’ beliefs about
algorithmic recommendations on YouTube. Proceedings of the ACM on Human-Computer
Interaction 4(CSCW2): 121.
Andalibi N and Garcia P (2021) Sensemaking and coping after pregnancy loss: the seeking and
disruption of emotional validation online. Proceedings of the ACM on Human-Computer
Interaction 5(CSCW1): 127.
Andersen J (2020) Understanding and interpreting algorithms: toward a hermeneutics of algo-
rithms. Media, Culture & Society 42(7–8): 1479–1494.
Araujo T, Helberger N, Kruikemeier S, et al. (2020) In AI we trust? Perceptions about automated
decision-making by artificial intelligence. AI & Society 35(3): 611–623.
Barnhart B (2021) Everything You Need to Know about Social Media Algorithms. Sprout Social.
Available at: https://sproutsocial.com/insights/social-media-algorithms/
Booker B (2019) After lawsuits, Facebook announces changes to alleged discriminatory ad
targeting. NPR, 19 March. Available at: https://www.npr.org/2019/03/19/704831866/after-lawsuits-facebook-announces-changes-to-alleged-discriminatory-ad-targeting
Bucher T (2012) Want to be on the top? Algorithmic power and the threat of invisibility on
Facebook. New Media & Society 14(7): 1164–1180.
Bucher T (2017) The algorithmic imaginary: exploring the ordinary affects of Facebook algo-
rithms. Information Communication & Society 20(1): 30–44.
Cotter K (2019) Playing the visibility game: how digital influencers and algorithms negotiate
influence on Instagram. New Media & Society 21(4): 895–913.
Cotter K (2022) Practical knowledge of algorithms: the case of BreadTube. New Media & Society.
Epub ahead of print 18 March. DOI: 10.1177/14614448221081802.
Cotter K and Reisdorf BC (2020) Algorithmic knowledge gaps: a new dimension of (digital) ine-
quality. International Journal of Communication 14: 745–765.
Danks D and London AJ (2017) Algorithmic bias in autonomous systems. In: Proceedings of
the 26th international joint conference on artificial intelligence (IJCAI), Melbourne, VIC,
Australia, 19–25 August, pp. 4691–4697. Palo Alto, CA: AAAI Press.
DeVito MA (2021) Adaptive folk theorization as a path to algorithmic literacy on changing plat-
forms. Proceedings of the ACM on Human-Computer Interaction 5(CSCW2): 339.
DeVito MA (2022) How transfeminine TikTok creators navigate the algorithmic trap of visibility via
folk theorization. Proceedings of the ACM on Human-Computer Interaction 6(CSCW2): 380.
DeVito MA, Birnholtz J, Hancock JT, et al. (2018) How people form folk theories of social media
feeds and what it means for how we study self-presentation. In: Proceedings of the 2018 CHI
conference on human factors in computing systems, Montreal, QC, Canada, 21–26 April, pp.
1–12. New York: ACM.
DeVito MA, Gergle D and Birnholtz J (2017) “Algorithms ruin everything”: #RIPTwitter, folk
theories, and resistance to algorithmic change in social media. In: Proceedings of the 2017
CHI conference on human factors in computing systems, Denver, CO, 6–11 May, pp. 3163–
3174. New York: ACM.
DeVos A, Dhabalia A, Shen H, et al. (2022) Toward user-driven algorithm auditing: investigating
users’ strategies for uncovering harmful algorithmic behavior. In: Proceedings of the 2022
CHI conference on human factors in computing systems, New Orleans, LA, 29 April–5 May,
pp. 1–19. New York: ACM.
Dienlin T and Metzger MJ (2016) An extended privacy calculus model for SNSs: analyzing self-
disclosure and self-withdrawal in a representative US sample. Journal of Computer-Mediated
Communication 21(5): 368–383.
Dietvorst BJ, Simmons JP and Massey C (2015) Algorithm aversion: people erroneously avoid
algorithms after seeing them err. Journal of Experimental Psychology: General 144(1): 114–
126.
Dietvorst BJ, Simmons JP and Massey C (2018) Overcoming algorithm aversion: people will use
imperfect algorithms if they can (even slightly) modify them. Management Science 64(3):
1155–1170.
Dogruel L, Masur P and Joeckel S (2021) Development and validation of an algorithm literacy
scale for Internet users. Communication Methods and Measures 16(2): 115–133.
Eslami M, Karahalios K, Sandvig C, et al. (2016) First I “like” it, then I hide it: folk theories of
social feeds. In: Proceedings of the 2016 CHI conference on human factors in computing
systems, San Jose, CA, 7–12 May, pp. 2371–2382. New York: ACM.
Eslami M, Rickman A, Vaccaro K, et al. (2015) “I always assumed that I wasn’t really that close
to [her]”: reasoning about invisible algorithms in news feeds. In: Proceedings of the 33rd
annual ACM conference on human factors in computing systems, Seoul, Republic of Korea,
18–23 April, pp. 153–162. New York: ACM.
Festic N (2020) Same, same, but different! Qualitative evidence on how algorithmic selection
applications govern different life domains. Regulation & Governance 16: 85–101.
Festl R (2021) Social media literacy & adolescent social online behavior in Germany. Journal of
Children and Media 15(2): 249–271.
Garvie C and Frankle J (2016) Facial-recognition software might have a racial bias problem. The
Atlantic, pp. 1–9. Available at: https://www.theatlantic.com/technology/archive/2016/04/the-
underlying-bias-of-facial-recognition-systems/476991/
Gottfried J and Grieco E (2018) Younger Americans Are Better than Older Americans at Telling
Factual News Statements from Opinions. Pew Research Center. Available at: https://www.
pewresearch.org/fact-tank/2018/10/23/younger-americans-are-better-than-older-americans-
at-telling-factual-news-statements-from-opinions/
Gran A-B, Booth P and Bucher T (2021) To be or not to be algorithm aware: a question of a new
digital divide? Information Communication & Society 24(12): 1779–1796.
Hamilton K, Karahalios K, Sandvig C, et al. (2014) A path to understanding the effects of algo-
rithm awareness. In: Proceedings of the CHI conference on extended abstracts on human
factors in computing systems, Toronto, ON, Canada, 26 April–1 May, pp. 631–642. New
York: ACM.
Hargittai E (2009) An update on survey measures of web-oriented digital literacy. Social Science
Computer Review 27(1): 130–137.
Hargittai E, Gruber J, Djukaric T, et al. (2020) Black box measures? How to study people’s algo-
rithm skills. Information Communication & Society 23(5): 764–775.
Head AJ, Fister B and MacMillan M (2020) Information Literacy in the Age of Algorithms: Student
Experiences with News and Information, and the Need for Change. Project Information Literacy
Research Institute. Available at: https://projectinfolit.org/publications/algorithm-study/
Horton FW (1983) Information literacy vs. computer literacy. Bulletin of the American Society for
Information Science 9(4): 14–16.
Huszár F, Ktena SI, O’Brien C, et al. (2022) Algorithmic amplification of politics on Twitter.
Proceedings of the National Academy of Sciences of the United States of America 119(1):
e2025334119.
Janssen J, Stoyanov S, Ferrari A, et al. (2013) Experts’ views on digital competence: commonali-
ties and differences. Computers & Education 68: 473–481.
Johnston B and Webber S (2005) As we may think: information literacy as a discipline for the
information age. Research Strategies 20(3): 108–121.
Karizat N, Delmonaco D, Eslami M, et al. (2021) Algorithmic folk theories and identity: how
TikTok users co-produce knowledge of identity and engage in algorithmic resistance.
Proceedings of the ACM on Human-Computer Interaction 5(CSCW2): 305.
Kirkpatrick K (2016) Battling algorithmic bias. Communications of the ACM 59(10): 16–17.
Kitchin R (2017) Thinking critically about and researching algorithms. Information Communication
& Society 20(1): 14–29.
Klawitter E and Hargittai E (2018) “It’s like learning a whole other language”: the role of algo-
rithmic skills in the curation of creative goods. International Journal of Communication 12:
3490–3510.
Klinger U and Svensson J (2018) The end of media logics? On algorithms and agency. New Media
& Society 20(12): 4653–4670.
Klug D, Qin Y, Evans M, et al. (2021) Trick and please. A mixed-method study on user assump-
tions about the TikTok algorithm. In: Proceedings of the 13th ACM web science conference
2021 (WebSci ’21), virtual event, pp. 84–92. New York: ACM.
Koc M and Barut E (2016) Development and validation of New Media Literacy Scale (NMLS) for
university students. Computers in Human Behavior 63: 834–843.
Koenig A (2020) The algorithms know me and I know them: using student journals to uncover
algorithmic literacy awareness. Computers and Composition 58: 102611.
Koumchatzky N and Andryeyev A (2017) Using Deep Learning at Scale in Twitter’s Timelines.
Twitter Blog. Available at: https://blog.twitter.com/engineering/en_us/topics/insights/2017/
using-deep-learning-at-scale-in-twitters-timelines
Latzer M and Festic N (2019) A guideline for understanding and measuring algorithmic govern-
ance in everyday life. Internet Policy Review 8(2): 1–19.
Lee AY, Mieczkowski H, Ellison NB, et al. (2022) The algorithmic crystal: conceptualizing the
self through algorithmic personalization on TikTok. Proceedings of the ACM on Human-
Computer Interaction 6(CSCW2): 543.
Livingstone S (2004) Media literacy and the challenge of new information and communication
technologies. The Communication Review 7(1): 3–14.
Logg JM, Minson JA and Moore DA (2019) Algorithm appreciation: people prefer algorithmic
to human judgment. Organizational Behavior and Human Decision Processes 151: 90–103.
Lomborg S and Kapsch PH (2020) Decoding algorithms. Media, Culture & Society 42(5): 745–761.
Ma R and Kou Y (2021) “How advertiser-friendly is my video?”: YouTuber’s socioeconomic
interactions with algorithmic content moderation. Proceedings of the ACM on Human-
Computer Interaction 5(CSCW2): 429.
Makady H (2021) “I wouldn’t react to it because of the algorithm”: how can self-presentation
moderate news consumption. Paper presented at the 104th annual conference of the
Association for Education in Journalism and Mass Communication (AEJMC), Denver, CO,
USA.
Ohme J (2021) Algorithmic social media use and its relationship to attitude reinforcement and
issue-specific political participation – the case of the 2015 European immigration move-
ments. Journal of Information Technology & Politics 18(1): 36–54.
Polanco-Levicán K and Salvo-Garrido S (2022) Understanding social media literacy: a systematic
review of the concept and its competences. International Journal of Environmental Research
and Public Health 19(14): 8807.
Powers E (2017) My news feed is filtered? Awareness of news personalization among college
students. Digital Journalism 5(10): 1315–1335.
Rader E and Gray R (2015) Understanding user beliefs about algorithmic curation in the Facebook
News Feed. In: Proceedings of the 33rd annual ACM conference on human factors in com-
puting systems (CHI’15), Seoul, Republic of Korea, 18–23 April, pp. 173–182. New York:
ACM.
Rader E, Cotter K and Cho J (2018) Explanations as mechanisms for supporting algorithmic trans-
parency. In: Proceedings of the 2018 CHI conference on human factors in computing systems,
Montreal, QC, Canada, 21–26 April, pp. 1–13. New York: ACM.
Rainie L and Anderson J (2017) Code-Dependent: Pros and Cons of the Algorithm Age. Pew
Research Center. Available at: https://www.pewresearch.org/internet/2017/02/08/code-
dependent-pros-and-cons-of-the-algorithm-age/
Reisdorf BC and Blank G (2021) Algorithmic literacy and platform trust. In: Hargittai E (ed.)
Handbook of Digital Inequality. Cheltenham: Edward Elgar Publishing, pp. 338–354.
Schwartz SA and Mahnke MS (2018) I—Facebook—world: how people relate to technology and
the world through Facebook use. In: Proceedings of the 9th international conference on social
media and society (SMSociety’18), Copenhagen, 18–20 July, pp. 370–374. New York: ACM.
Seaver N (2019) Knowing algorithms. In: Vertesi J, Ribes D, Forlano L, et al. (eds) digitalSTS:
A Field Guide for Science & Technology Studies. Princeton, NJ: Princeton University Press,
pp. 412–422.
Siles I and Meléndez-Moran A (2021) “The most aggressive of algorithms”: user awareness of and
attachment to TikTok’s content personalization. Paper presented at the 71st annual confer-
ence of the International Communication Association (ICA), Denver, CO, 27–31 May.
Siles I, Segura-Castillo A, Solís R, et al. (2020) Folk theories of algorithmic recommendations on
Spotify: enacting data assemblages in the global South. Big Data & Society. Epub ahead of
print 30 April. DOI: 10.1177/2053951720923377.
Silva DE, Chen C and Zhu Y (2022) Facets of algorithmic literacy: information, experience, and
individual factors predict attitudes toward algorithmic systems. New Media & Society. Epub
ahead of print 7 June. DOI: 10.1177/14614448221098042.
Simpson E, Hamann A and Semaan B (2022) How to tame “your” algorithm: LGBTQ+ users’
domestication of TikTok. Proceedings of the ACM on Human-Computer Interaction
6(GROUP): 22.
Swart J (2021a) Experiencing algorithms: how young people understand, feel about, and engage
with algorithmic news selection on social media. Social Media + Society. Epub ahead of print
12 April. DOI: 10.1177/20563051211008828.
Swart J (2021b) Tactics of news literacy: how young people access, evaluate, and engage with
news on social media. New Media & Society 25: 505–521.
Tech@Facebook (2021) How Does News Feed Predict What You Want to See? Available at:
https://tech.fb.com/news-feed-ranking/
Thorson K and Wells C (2016) Curated flows: a framework for mapping media exposure in the
digital age. Communication Theory 26(3): 309–328.
TikTok (2020) How TikTok recommends videos #ForYou. Newsroom|TikTok, 18 June. Available
at: https://newsroom.tiktok.com/en-us/how-tiktok-recommends-videos-for-you
Titcomb J (2017) Instagram is changing its feed to show photos out of order. The Daily Telegraph.
Available at: https://www.telegraph.co.uk/technology/2016/03/16/instagram-is-changing-its-
feed-to-show-photos-out-of-order/
Trepte S and Masur PK (2020) Need for privacy. In: Zeigler-Hill V and Shackelford TK (eds)
Encyclopedia of Personality and Individual Differences. Cham: Springer International
Publishing, pp. 3132–3135.
Van Deursen AJAM and Van Dijk JAGM (2014) The digital divide shifts to differences in usage.
New Media & Society 16(3): 507–526.
Wallaroo Media (2021) Facebook News Feed Algorithm History. Available at: https://wallaroomedia.com/facebook-newsfeed-algorithm-history/
Winter S, Metzger MJ and Flanagin AJ (2016) Selective use of news cues: a multiple-motive per-
spective on information selection in social media environments. Journal of Communication
66(4): 669–693.
Yeomans M, Shah A, Mullainathan S, et al. (2019) Making sense of recommendations. Journal of
Behavioral Decision Making 32(4): 403–414.
Ytre-Arne B and Moe H (2021) Folk theories of algorithms: understanding digital irritation.
Media, Culture & Society 43(5): 807–824.
Zarouali B, Boerman SC and De Vreese CH (2021) Is this recommended by an algorithm? The
development and validation of the algorithmic media content awareness scale (AMCA-scale).
Telematics and Informatics 62: 101607.

Author biographies
Anne Oeldorf-Hirsch is an Associate Professor in the Department of Communication at the
University of Connecticut, where she conducts research in the Human-Computer Interaction lab.
Her research investigates the use of social media to engage with news, health, and science
content.
German Neubaum is an Assistant Professor of Media Psychology and Education at the University
of Duisburg-Essen, Germany. His research interests focus on the educational benefits users can
gain from using social media. By combining media psychological methods and social media ana-
lytics, he studies technology-enabled educational processes in the context of politics, morality,
science, and health communication.
