Article

new media & society 27(2)
DOI: 10.1177/14614448231182662

What do we know about algorithmic literacy? The status and future of a growing field
Anne Oeldorf-Hirsch
University of Connecticut, USA
German Neubaum
Universität Duisburg-Essen, Germany
Abstract
The increasing role of algorithms shaping our use of communication technology—
particularly on social media—comes with a growth of empirical research attempting
to assess how literate users are regarding these algorithms. This rapidly emerging
field is marked by great diversity in terms of how it theorizes and measures our
understanding of algorithms, due, in part, to the opaque “black box” nature of the
algorithms themselves. In this review article, we summarize the state of knowledge
on algorithmic literacy, including its definitions, development, measurement, and current
theorizing on human–algorithm interaction. Drawing on this existing work, we propose
an agenda including four different directions that future research could focus on: (1)
balancing users’ expectations of algorithmic literacy with developers’ responsibility for
algorithmic transparency, (2) methods for engaging users in increasing their literacy, (3)
further developing the affective and behavioral facets of literacy, and (4) addressing the
new algorithmic divide.
Keywords
Algorithmic divide, algorithmic literacy, research agenda, social media, user
engagement
Corresponding author:
Anne Oeldorf-Hirsch, Department of Communication, University of Connecticut, 337 Mansfield Road, Unit
1259, Storrs, CT 06269, USA.
Email: anne.oeldorf-hirsch@uconn.edu
Algorithms determine nearly everything we do online, from shopping to the news we read
to the music we stream. Broadly, algorithms make decisions about what information we
see, and they learn this at least partly from our interactions with existing content. On
social media specifically, “algorithms are a way of sorting posts in a user’s feed based on
relevancy instead of publish time” (Barnhart, 2021). For instance, Facebook uses a
machine learning algorithm that ranks posts on numerous factors such as post relevance
to determine which to show on a user’s Timeline (Tech@Facebook, 2021), and Twitter
uses a similar deep learning algorithm based on factors such as which tweets users engaged
with previously (Koumchatzky and Andryeyev, 2017). Even within the scope of social
media platforms, algorithms pervasively influence how informed we are, and how connected
we are to others in our networks, based on the content presented to us. Yet
the exact algorithmic formulas are generally kept secret; most websites, apps, and social
media platforms only vaguely reveal why users receive the content they do.
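The distinction between a chronological and an algorithmically curated feed can be illustrated with a minimal sketch; the `relevance` scores below are invented placeholders for whatever engagement signals a platform actually computes, not any platform's real ranking logic:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int    # seconds since some epoch; larger = newer
    relevance: float  # platform-estimated relevance to this viewer

posts = [
    Post("alice", timestamp=100, relevance=0.2),
    Post("bob",   timestamp=50,  relevance=0.9),
    Post("carol", timestamp=75,  relevance=0.5),
]

# A purely chronological feed orders by recency alone.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# A curated feed re-orders by estimated relevance, so an older but
# "more relevant" post can outrank a newer one.
curated = sorted(posts, key=lambda p: p.relevance, reverse=True)

print([p.author for p in chronological])  # ['alice', 'carol', 'bob']
print([p.author for p in curated])        # ['bob', 'carol', 'alice']
```

The second ordering is exactly the filtering that, as discussed below, many users do not realize is happening.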
Even without knowing the nuances of why every post has appeared in their feeds, it is
imperative that social media users understand broadly how their social media content
reached them and how it may be influencing them. Users’ skills to find, consume, evalu-
ate, and produce information through media have long been examined under the umbrella
term “media literacy” (Livingstone, 2004). On this basis, scholars started using concepts
such as “computer literacy” (Horton, 1983), “digital competence” (Janssen et al., 2013),
“information literacy” (Johnston and Webber, 2005), “new media literacy” (Koc and
Barut, 2016), or “social media literacy” (Festl, 2021) to describe people’s cognitive,
technical, and emotional abilities for effectively using newly emerging information and
communication technologies.
While many of those concepts cover users’ skills to understand how information is cre-
ated and processed by intelligent systems, a very young strand of research has just started
focusing specifically on whether and how people make sense of the algorithms filtering
this information. Works addressing this specific form of digital literacy initially
showed that, although algorithmic awareness (i.e. basic awareness of the existence of
algorithms) is increasing (e.g. Klawitter and Hargittai, 2018), those without this awareness
may be disadvantaged by missing out on important information that is not prioritized for
them (Rainie and Anderson, 2017). Moreover, those with more advanced levels of
algorithmic literacy, who are not only aware of the presence and impact of algorithm-based
systems but also know how to use this understanding (DeVito, 2021), stand on the
favorable side of a new digital divide (Cotter and Reisdorf, 2020; Gran et al., 2021). The
prevalence of this specific form of literacy thus appears to follow the principles described
in well-established digital inequality frameworks (Reisdorf and Blank, 2021).
Hamilton et al. (2014) first called for a framework for exposing algorithms to users
and working with them to study their effects. Since then, researchers have increasingly
taken up the call to assess if and how well social media users understand these algorithms.
Such research is difficult because the actual workings of the algorithms are unknown
even to the researcher and require interpretation (Andersen, 2020; Kitchin, 2017; Latzer
and Festic, 2019). This limits the ability to assess how “correct” users are in their under-
standing (Koenig, 2020). In other areas of digital literacy, such as web skills, clear
answers exist about what a user knows (e.g. what a bookmark or a PDF is; Hargittai,
2009), whereas the secretive nature of algorithms makes it difficult to assess literacy
about them. Therefore, the current problem is twofold: (1) What is algorithmic literacy?
and (2) How do we assess this?
The purpose of this article is to define algorithmic literacy, based on existing research,
review current issues in algorithmic literacy, and propose an agenda for moving forward
with algorithmic literacy research. Festic (2020) points out that algorithmic selection is
now a significant aspect of everyday life. Algorithms exist in many forms, including
search, filter, recommendation, and scoring algorithms, which have differing functions
across various contexts. Festic summarizes these contexts in four life domains: social
and political orientation, recreation, commercial transactions, and socializing. Social
media apps may span all of these life domains, as they offer spaces to socialize, find
news, watch videos and other entertainment, and make purchases among users. However,
individuals turn to social media largely to interact with others. In these spaces, algo-
rithms lead users to content from their friends, family, or other social connections, driv-
ing how they engage with those individuals. For instance, users may expect to see all
content their social connections post, but algorithms filter and prioritize what is dis-
played in their feeds.
Therefore, our focus is primarily on social media content filtering algorithms.
Uncovering users’ handling of filtering algorithms in social media appears pivotal
for two reasons. First, algorithmic filtering is thought to shape the balance of users’
information landscape (e.g. via “curated flows”) and, in turn, their political attitudes and
actions (Klinger and Svensson, 2018; Ohme, 2021; Thorson and Wells, 2016). Second, filtering
algorithms might be the type most salient in social media users’ awareness, as they
play a key role in public debates and are becoming increasingly important on platforms
such as TikTok. Thus, we are interested in users’ experience with these filtering
algorithms as they use social media apps.
To construct this review, we searched for literature using the search term “algorithmic
literacy” and related terms, including “algorithmic awareness,” “algorithmic knowl-
edge,” “algorithmic understanding,” “algorithmic experience,” “algorithmic skills,”
“algorithmic divide,” and “algorithm” + “belief.” We started our search in Google
Scholar, and then focused more specifically on the Communication and Mass Media
Complete, PsycInfo, and ACM Digital Library databases. Our search strategy moved
from narrow to broad: first searching the terms in the title, then the abstract, and then
the full text. Upon reviewing abstracts, we determined if the article empirically assessed
or theorized Internet users’ understanding of or interaction with algorithms. Finally, we
included additional relevant articles cited by these initial articles. In total, we reviewed
96 articles that were deemed potentially relevant, 50 of which are included in this review.
In this literature, algorithmic literacy has been comprehensively defined in two ways. First, as:

the capacity and opportunity to be aware of both the presence and impact of algorithmically-
driven systems on self- or collaboratively-identified goals, and the capacity and opportunity to
crystalize this understanding into a strategic use of these systems to accomplish said goals.
(DeVito, 2021: 3)
Second, as “being aware of the use of algorithms in online applications, platforms, and
services, knowing how algorithms work, being able to critically evaluate algorithmic
decision-making as well as having the skills to cope with or even influence algorithmic
operations” (Dogruel et al., 2021: 4). Both definitions attempt to incorporate the evolu-
tion of many sub-dimensions of algorithmic understanding. The first proposes two broad
stages of understanding, from mere awareness to practical use. The second expands lit-
eracy to four steps, by distinguishing awareness from knowledge, adding the ability to
critique algorithms, and the skills to influence them. While both offer necessarily
nuanced accounts of literacy, they propose different levels of granularity in terms
of what, comprehensively, constitutes “literacy.”
Finally, algorithmic skill refers to “users’ knowledge about algorithms and their role in
making online content visible, as well as users’ ability to figure out how particular algo-
rithms work, and then leverage that knowledge when producing and sharing content”
(Klawitter and Hargittai, 2018: 3492).
Swart (2021a) categorizes experiences with algorithms into cognitive, affective, and
behavioral dimensions, where understanding algorithms represents the cognitive com-
prehension of their existence and functioning, sensing algorithms represents the affective
influences that algorithms have over users, and engaging with algorithms represents the
behavioral dimension of interactions with algorithms. This parallels Lomborg
and Kapsch’s (2020) framework of knowing, feeling, and doing algorithms.
Dogruel et al. (2021) place both awareness and knowledge in the cognitive dimension
of understanding algorithms, separate from a behavioral dimension, which includes cop-
ing with algorithms and using them for creation. Cotter (2022) taps into the behavioral dimension
by proposing a practical knowledge of algorithms, “to capture knowledge located at the
intersection of practice and discourse” (p. 2). This is similar to the use of skills (Klawitter
and Hargittai, 2018), though the ambiguity of algorithms offers no concrete proof of how
skilled a user is in using them, highlighting a boundary condition of behavioral under-
standing. Finally, an affective dimension has developed largely in the literature of atti-
tudes toward algorithms. Specifically, research pits appreciation (preferring an algorithm
over a human in decision-making; Logg et al., 2019) against aversion (preferring a
human over an algorithm; Dietvorst et al., 2015). Though not explicitly about under-
standing algorithms—rather focusing on how individuals feel about them—these affec-
tive components also imply awareness, and potentially some component of skill.
The short but varied history of these definitions reflects the range of concepts being
addressed (e.g. awareness vs skills), and highlights terminological inconsistencies that
need to be resolved for the field to move forward cohesively.
We advocate for further converging on algorithmic literacy, such as has been defined by
DeVito (2021) and Dogruel et al. (2021), as an umbrella term. However, we must decide
whether to cultivate a cohesive definition of literacy as an overarching construct, or
accept definitions as collections of other concepts that make up literacy.
Previous iterations of literacy (e.g. media, information, Internet, digital, and social
media literacy) are also multi-faceted, each incorporating elements of the literacies that
preceded it. For instance, a recent systematic review of social media literacy (Polanco-Levicán
and Salvo-Garrido, 2022) concludes that its definition takes media literacy and adds ele-
ments pertinent to social media, which overlap but do not encompass digital literacy. Yet,
no one definition of social media literacy rises to the surface. Instead, definitions vary
from those that tap into cognitive, affective, and behavioral elements to those which
address increasingly complex stages of understanding. Settling on one definition of algo-
rithmic literacy will prove just as difficult. In any case, we can similarly categorize the
existing and emerging cognitive, affective, and behavioral aspects of understanding, and
further define boundaries between them. Moving forward, this allows the development
of literacy frameworks, which can address literacy gaps and lead to interventions along
the lines of previous literacy research in communication technology. As a first step, we
have visualized the current definitions in Figure 1.
Facebook began experimenting with an algorithm on its newly created News Feed in 2007 (Wallaroo
Media, 2021). This led to EdgeRank, Facebook’s first algorithm, which showed News
Feed content based on a variety of factors, including relationships, “weight” of each
item, and time decay (Bucher, 2012). This has since been replaced by a more sophisti-
cated and constantly evolving machine learning algorithm to curate highly personalized
content (Tech@Facebook, 2021). Twitter implemented its Timeline algorithm in 2016,
switching to an “optimized” (rather than chronological) feed by default (Koumchatzky
and Andryeyev, 2017), and Instagram followed suit the same year (Titcomb, 2017). Most
recently, TikTok revealed in 2020 that its algorithm recommends videos in a user’s “For
You” feed based on user interactions with other videos, video information, and device
and account settings (TikTok, 2020). Each of these algorithms chooses for the user what
appears in their social media feed, and not all users know this.
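Bucher’s (2012) description of EdgeRank’s three factors can be sketched as a product of terms. The multiplicative form, the exponential decay, and every parameter value below are assumptions for illustration only; Facebook never published the actual formula:

```python
import math

def edgerank_score(affinity: float, weight: float, age_hours: float,
                   decay: float = 0.1) -> float:
    """Illustrative EdgeRank-style score for one feed item.

    affinity:  closeness of viewer to poster (assumed 0-1)
    weight:    importance of the content/interaction type
    decay:     how quickly older items lose value (assumed rate)
    """
    return affinity * weight * math.exp(-decay * age_hours)

# Under these assumed parameters, a close friend's photo from two hours
# ago still outranks a distant acquaintance's fresh status update.
close_friend_photo = edgerank_score(affinity=0.9, weight=2.0, age_hours=2.0)
acquaintance_post = edgerank_score(affinity=0.2, weight=1.0, age_hours=0.5)
assert close_friend_photo > acquaintance_post
```

Even this toy version shows why users struggle to reverse-engineer their feeds: the ordering depends jointly on several hidden quantities, none of which is displayed to them.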
Some of the earliest writing on how social media users engage with algorithms started
with Bucher’s (2012) Foucauldian analysis of managing visibility on Facebook within its
EdgeRank algorithm. This case study illuminated how even the earliest social media algo-
rithms shaped the prevalence of one’s content and thus identity in social media spaces. The
first empirical work on Facebook users’ experiences with algorithms showed that the
majority (62.5%) were still not aware that Facebook did not show all available posts in
their news feeds, and were surprised or even angry to find out that content was filtered
(Eslami et al., 2015). This left most users behind in trying to communicate and manage
their online relationships. When asked more openly whether they thought Facebook always
showed all their friends’ posts, the majority (73%) said no (Rader and Gray, 2015). Yet they
did not understand how such filtering worked, or why it was done, which meant they had
little power to influence or leverage it. By now, most online news users realize that content
is filtered, but still have a limited understanding of the criteria used (Powers, 2017; Swart,
2021b). Similarly, YouTube users show a high awareness of the algorithmic process that
recommends content on the platform, but can only guess at what data it uses (Alvarado
et al., 2020). In both cases, this leaves users guessing at how to get to the content they want
or how to get their content to desired audiences. Notably, TikTok users are acutely aware
of the algorithms that shape their “For You” page and state that they regularly “train” the
algorithm to show desirable videos (Siles and Meléndez-Moran, 2021), though the
accuracy of these beliefs is difficult to determine given the opaque nature of algorithms.
In any case, even those actively invested in understanding algorithms can only glean
so much from their interactions with them. Independent artists on sites such as Etsy rec-
ognize the importance of algorithms, and find ways to learn about taking advantage of
them (e.g. by testing out various search optimization strategies), but are ultimately frus-
trated with their lack of verified knowledge (Klawitter and Hargittai, 2018). YouTube
content creators engage in “algorithmic labor” to negotiate the opacity and precarity of
the platform’s advertising moderation algorithms (Ma and Kou, 2021). Instagram influ-
encers are also acutely aware of algorithms, but lack definitive information about their
functioning, so they take it upon themselves to “play the visibility game” by testing the
outcomes of various engagement behaviors (Cotter, 2019).
For marginalized communities, the inability to grasp the algorithm is equally
important, and can have serious social consequences. Lesbian, gay, bisexual, transgender
and queer/questioning (LGBTQ+) Facebook users carefully navigate algorithms to
manage their self-presentation in online spaces subject to context collapse, yet must con-
tinually re-theorize how these changing algorithms work (DeVito, 2021). Similarly, on
TikTok, LGBTQ+ users never feel fully in control of their digital self-presentation,
because while the algorithm is highly personalized, it cannot be tamed, leaving users
unable to integrate their various selves (Simpson et al., 2022). Thus, while algorithmic
literacy of social media platforms has increased markedly within the past decade, users
may be reaching the limits of what they can know without greater algorithmic transpar-
ency, and are facing the consequences.
As in broader digital divides, algorithmic literacy gaps track differences in resources
and skills (e.g. Internet access). Initial research on algorithmic literacy gaps shows that those with
less developed technological (specifically, search engine) skills also showed lower algo-
rithmic knowledge (Cotter and Reisdorf, 2020). Yet even those with higher formal edu-
cation may be missing this technology-specific knowledge. A recent report shows that
college students are no longer prepared for the information landscape that exists today,
as assignments do not address the necessary technological skills (Head et al., 2020).
While most of these students indicated an awareness of algorithms, most had no idea
how they worked or what their effects would be. With most of the online content that
users engage with now controlled by algorithms, a lack of information literacy implies a
lack of algorithmic literacy, with detrimental implications.
Another common factor in algorithmic literacy, as with most digital literacies, is the
effect of age (Cotter and Reisdorf, 2020; Gran et al., 2021), with younger Internet users
showing more algorithmic knowledge than older users. This may disproportionately
leave older social media users at higher risk for misinformation or information exclusion.
This pattern can already be seen in terms of how different generations handle misinfor-
mation online, with reports indicating that older users are worse at recognizing misinformation,
and have a greater hand in spreading it (Gottfried and Grieco, 2018). Those with
already lower algorithmic literacy can thus be further adversely affected by reduced or
biased information access in their social media feeds.
For example, not understanding that an algorithm is dictating what appears in one’s
Facebook feed could lead a user to believe that the limited political information they are
seeing is the whole and accurate political reality.
More insidiously than just not showing users the full scope of information, algorithms
systematically bias content for users, excluding entire groups from receiving information
or being represented by it. This happens when Google shows ads for higher-paying jobs
disproportionately to men over women (Kirkpatrick, 2016), or when Facebook targets
their housing ads so as to exclude certain racial, religious, disabled, and other protected
classes of people (Booker, 2019). Worse yet, algorithms can make detrimental assump-
tions about users in a process called algorithmic symbolic annihilation, such as when
individuals who have experienced pregnancy loss continue to be subjected to content
about pregnancy (Andalibi and Garcia, 2021).
Unfortunately, algorithms do not merely reflect existing biases, but further perpetuate
them through their own design. Danks and London (2017) taxonomize routes to algorith-
mic bias, putting interpretation bias, or how the algorithm presents information to the
user, at the end. As they point out, algorithms are biased through many earlier steps,
starting with learning from biased input data. For instance, facial recognition software—
now widely understood to be biased against women and people of color—is likely built
on training datasets that disproportionately feature white male faces (Garvie and Frankle,
2016). This could mean dominant groups receiving even more opportunities than already
marginalized groups. While this problem expands beyond mere literacy, awareness of
these biases is the first step in correcting them.
result of clearer profiling, management, and control that influences key attitudes about
algorithms and resulting behaviors.
unique user experiences. For example, surveys measuring algorithmic knowledge are
able to show that education and search skills are positively correlated with algorithmic
knowledge (Cotter and Reisdorf, 2020), and negatively correlated with online news
engagement (Makady, 2021). These studies provide new insight into what might predict
or be predicted by algorithmic literacy, though with a necessarily narrower understand-
ing of the concept.
One method in need of development involves experimental studies of algorithmic
literacy effects on various cognitive, affective, and behavioral outcomes. One such inter-
vention has tested the effects of exposure to algorithmic information and found changes
to users’ attitudes about algorithms (Silva et al., 2022). Computational approaches are
another avenue for more advanced assessments of algorithmic literacy, potentially
through the collection and display of social media data to its users for reflection.
Netflix, and YouTube), leaving open the question of whether content differs for other
platforms (e.g. Instagram, Twitter) or is truly generalizable across all algorithmic media
content.
Cotter and Reisdorf’s (2020) measure of algorithmic knowledge asks participants to
rank how much influence they feel various actions have on their search engine results.
This differs from the knowledge measures that Zarouali et al. (2021) validated against
awareness in their AMCA scale, which used true/false statements about common algorithmic
misconceptions. Their evidence indicates that algorithmic awareness and algorithmic
knowledge are positively correlated, but remain distinct concepts. Therefore, rather than
distinguishing or combining concepts, future research should focus on further
developing frameworks that incorporate sub-dimensions, such as those proposed by
Swart (2021a) and Lomborg and Kapsch (2020).
information ends up in their feeds. They find evidence that viewing these statements
increased users’ understanding of what algorithms are and how they function. Notably, it
often presented new and surprising information to users, indicating that average social
media users still have a lot to learn about algorithms, but that even a little bit of informa-
tion from the app itself can have a significant impact on their understanding. Some apps
do provide various levels of algorithmic cues for why content appears in a feed, such as
“because you interacted with a post from this user” on Instagram or “Sara celebrated this
post” on LinkedIn, yet it is not yet known which cues are displayed to which users, and
whether they are noticed at all.
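A platform-side cue of this kind can be thought of as a simple rule over the viewer’s interaction history. The function name, rule, and wording below are hypothetical illustrations, not any platform’s actual logic:

```python
def feed_cue(viewer_history: set, poster: str) -> str:
    # If the viewer previously interacted with this poster, surface an
    # explanatory cue; otherwise fall back to a generic label.
    if poster in viewer_history:
        return f"Because you interacted with a post from {poster}"
    return "Suggested for you"

print(feed_cue({"dana"}, "dana"))  # Because you interacted with a post from dana
print(feed_cue(set(), "eve"))      # Suggested for you
```

Even such minimal cues expose one input of the ranking process to users, which is the kind of transparency the research below examines.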
The AX framework (Alvarado and Waern, 2018) outlines which psychological pro-
cesses operate when individuals interact with each of these technological cues and prop-
erties. Besides identifying these processes, an applied literacy/design approach could
specify how different manifestations of those cues (explicit or implicit recommenda-
tions) form different dimensions of algorithmic literacy (e.g. awareness, knowledge,
interaction skills). This theoretical endeavor should also include key variables that sig-
nificantly shape the level of algorithmic literacy besides the actual human–algorithm
interaction such as general technological experience (e.g. search engine skills; Cotter and
Reisdorf, 2020).
Algorithmic folk theories play an important role in uncovering how social media
users understand, feel about, and engage with algorithms, from their perspective. While
folk theories can vary in accuracy, they shed light on users’ subjective experiences with
algorithmic environments, which affect their attitudes and behaviors around algorithms.
Crucially, they highlight what users do and do not perceive in terms of algorithmic trans-
parency, which offers important insights for designers who make choices about what
algorithmic cues to make visible on their interfaces. Previous works specified which folk
theories users of specific platforms develop based on interrelationships between behav-
ior and the consequences they observe (e.g. DeVito, 2021; Eslami et al., 2016; Lee et al.,
2022; Ytre-Arne and Moe, 2021) and which kind of sources they use to develop these lay
assumptions (DeVos et al., 2022). Still, future research needs to observe to what extent
different levels of algorithmic transparency (manifested through cues) can provoke cer-
tain cognitive, affective, and behavioral responses reflecting a certain level of algorith-
mic literacy.
We identify four factors for engaging users in this process: (1) curiosity, (2) motivation, (3) control, and (4) practice.
These reflect individual, situational, attitudinal, and behavioral aspects.
Curiosity. First, learning about algorithms seems aided greatly when curiosity is triggered
(e.g. Siles and Meléndez-Moran, 2021). While most social media users are now aware of
the existence of algorithms (Lomborg and Kapsch, 2020), this is not enough to provoke
greater literacy. Instead, users likely need to be curious about what the algorithms do and
why. Curiosity may be a set personality trait, but could be encouraged in certain social
media contexts. Bucher (2017) finds that many Facebook users first learn about algo-
rithms in unexpected encounters or “whoa moments,” such as when they realize a social
media ad has “found them” from previous interactions. Rather than leave these encounters
feeling “creepy,” platforms could employ algorithmic cues explaining why the content in
question found the user. Previous research indicates that brief explanatory mechanisms could be
effective in increasing literacy (Rader et al., 2018). To spark curiosity about algorithms,
a pop-up notification could appear when users engage with a post, asking “curious why
you received this post?” and providing an opportunity for users to learn more.
Motivation. Second, motivation may be crucial, as passive social media users are not as
likely to care why certain content appears on their social media apps. Yet, more active
users such as content creators, influencers, and those who otherwise use social media to
meet specific goals have a vested interest in learning how the algorithm filters content.
For example, Etsy artists and YouTube content creators strategize to optimize the algo-
rithm (Klawitter and Hargittai, 2018; Ma and Kou, 2021) for greater earning potential.
Furthermore, users in the demographic majority who find themselves well-represented
by the content in their feeds may not feel compelled to care how the algorithm works, as
it already serves them (e.g. DeVito, 2022). However, users in marginalized groups whose
identities are not as prominent in the space—especially when they are actively fighting
for recognition through movements such as Black Lives Matter or #MeToo—are likely
to be more motivated to understand how an algorithm could filter out their presence.
Thus, finding each user’s motivations for engaging with content on a social media plat-
form could be a vital step in determining what and how to increase their algorithmic
literacy.
Control. Third, the more users possess an internal locus of control, feeling they have
influence on the algorithm, the more likely they may be to engage with and learn
from it. Previous research finds that users appreciate algorithms more when they are
given even a little control over them (Dietvorst et al., 2018). This seems particularly true in
the case of the TikTok algorithm, where a user’s sense that they have some influence on
it plays a role in their enjoyment of the platform, and indicates a deeper algorithmic
understanding than on other platforms (Siles and Meléndez-Moran, 2021; Simpson et al.,
2022). While platforms determine how much influence (if any) users can have on their
algorithm, users could be made aware of where they have some influence, such as
how to prioritize certain friends in one’s Facebook news feed or how to turn off person-
alization of trends on Twitter.
Practice. Finally, users need practice with algorithms to better understand them. Several
studies indicate that those who use social media platforms more are more knowledgeable
about the algorithms that determine the content shown (Cotter and Reisdorf, 2020;
Eslami et al., 2015). Demographic factors such as age and education are also correlated
with algorithmic literacy (Gran et al., 2021), providing indirect evidence that those who
use social media more (younger, more educated) have greater algorithmic literacy. Prac-
tice with algorithms might then be a matter of closing the digital divide, by providing
better access to and training on social media, both in formal education contexts and
through online learning opportunities while using social media platforms.
Conclusion
The current state of research on algorithmic literacy is rich, if still somewhat scattered
in its approaches across various fields of study. In the past decade, researchers have
uncovered how aware social media users are of algorithms and how they form folk theories,
and have begun to develop quantitative assessments of algorithmic literacy. While a
comprehensive framework of algorithmic literacy is difficult to develop due to the
opaque, heterogeneous, and user-dependent nature of the algorithms being investigated,
some attempts exist to synthesize the user experience of algorithms. Still, much remains
unknown, such as what predicts algorithmic literacy, its cognitive, affective, and behav-
ioral outcomes, and how to improve it. Thus, we present an agenda for moving forward
with algorithmic literacy research, which includes balancing user and developer respon-
sibilities, engaging users in their literacy, further developing behavioral dimensions of
literacy, and addressing the algorithmic divide.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/
or publication of this article: This research was completed with support from the Fulbright U.S.
Scholar Program and Fulbright Germany.
ORCID iDs
Anne Oeldorf-Hirsch https://orcid.org/0000-0002-3961-3766
German Neubaum https://orcid.org/0000-0002-7006-7089
References
Alvarado O and Waern A (2018) Towards algorithmic experience. In: Proceedings of the 2018
CHI conference on human factors in computing systems, Montreal, QC, Canada, 21–26 April,
pp. 1–12. New York: ACM.
Alvarado O, Heuer H, Vanden Abeele V, et al. (2020) Middle-aged video consumers’ beliefs about
algorithmic recommendations on YouTube. Proceedings of the ACM on Human-Computer
Interaction 4(CSCW2): 121.
Andalibi N and Garcia P (2021) Sensemaking and coping after pregnancy loss: the seeking and
disruption of emotional validation online. Proceedings of the ACM on Human-Computer
Interaction 5(CSCW1): 127.
Andersen J (2020) Understanding and interpreting algorithms: toward a hermeneutics of algorithms. Media, Culture & Society 42(7–8): 1479–1494.
Araujo T, Helberger N, Kruikemeier S, et al. (2020) In AI we trust? Perceptions about automated
decision-making by artificial intelligence. AI & Society 35(3): 611–623.
Barnhart B (2021) Everything You Need to Know about Social Media Algorithms. Sprout Social.
Available at: https://sproutsocial.com/insights/social-media-algorithms/
Booker B (2019) After lawsuits, Facebook announces changes to alleged discriminatory ad targeting. NPR, 19 March. Available at: https://www.npr.org/2019/03/19/704831866/after-lawsuits-facebook-announces-changes-to-alleged-discriminatory-ad-targeting
Bucher T (2012) Want to be on the top? Algorithmic power and the threat of invisibility on
Facebook. New Media & Society 14(7): 1164–1180.
Bucher T (2017) The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication & Society 20(1): 30–44.
Cotter K (2019) Playing the visibility game: how digital influencers and algorithms negotiate
influence on Instagram. New Media & Society 21(4): 895–913.
Cotter K (2022) Practical knowledge of algorithms: the case of BreadTube. New Media & Society.
Epub ahead of print 18 March. DOI: 10.1177/14614448221081802.
Cotter K and Reisdorf BC (2020) Algorithmic knowledge gaps: a new dimension of (digital) inequality. International Journal of Communication 14: 745–765.
Danks D and London AJ (2017) Algorithmic bias in autonomous systems. In: Proceedings of
the 26th international joint conference on artificial intelligence (IJCAI), Melbourne, VIC,
Australia, 19–25 August, pp. 4691–4697. Palo Alto, CA: AAAI Press.
DeVito MA (2021) Adaptive folk theorization as a path to algorithmic literacy on changing platforms. Proceedings of the ACM on Human-Computer Interaction 5(CSCW2): 339.
DeVito MA (2022) How transfeminine TikTok creators navigate the algorithmic trap of visibility via
folk theorization. Proceedings of the ACM on Human-Computer Interaction 6(CSCW2): 380.
DeVito MA, Birnholtz J, Hancock JT, et al. (2018) How people form folk theories of social media
feeds and what it means for how we study self-presentation. In: Proceedings of the 2018 CHI
conference on human factors in computing systems, Montreal, QC, Canada, 21–26 April, pp.
1–12. New York: ACM.
DeVito MA, Gergle D and Birnholtz J (2017) “Algorithms ruin everything”: #RIPTwitter, folk
theories, and resistance to algorithmic change in social media. In: Proceedings of the 2017
CHI conference on human factors in computing systems, Denver, CO, 6–11 May, pp. 3163–
3174. New York: ACM.
DeVos A, Dhabalia A, Shen H, et al. (2022) Toward user-driven algorithm auditing: investigating
users’ strategies for uncovering harmful algorithmic behavior. In: Proceedings of the 2022
CHI conference on human factors in computing systems, New Orleans, LA, 29 April–5 May,
pp. 1–19. New York: ACM.
Dienlin T and Metzger MJ (2016) An extended privacy calculus model for SNSs: analyzing self-disclosure and self-withdrawal in a representative US sample. Journal of Computer-Mediated Communication 21(5): 368–383.
Dietvorst BJ, Simmons JP and Massey C (2015) Algorithm aversion: people erroneously avoid
algorithms after seeing them err. Journal of Experimental Psychology: General 144(1): 114–
126.
Dietvorst BJ, Simmons JP and Massey C (2018) Overcoming algorithm aversion: people will use
imperfect algorithms if they can (even slightly) modify them. Management Science 64(3):
1155–1170.
Dogruel L, Masur P and Joeckel S (2021) Development and validation of an algorithm literacy
scale for Internet users. Communication Methods and Measures 16(2): 115–133.
Eslami M, Karahalios K, Sandvig C, et al. (2016) First I “like” it, then I hide it: folk theories of
social feeds. In: Proceedings of the 2016 CHI conference on human factors in computing
systems, San Jose, CA, 7–12 May, pp. 2371–2382. New York: ACM.
Eslami M, Rickman A, Vaccaro K, et al. (2015) “I always assumed that I wasn’t really that close to [her]”: reasoning about invisible algorithms in news feeds. In: Proceedings of the 33rd annual ACM conference on human factors in computing systems, Seoul, Republic of Korea, 18–23 April, pp. 153–162. New York: ACM.
Festic N (2020) Same, same, but different! Qualitative evidence on how algorithmic selection
applications govern different life domains. Regulation & Governance 16: 85–101.
Festl R (2021) Social media literacy & adolescent social online behavior in Germany. Journal of
Children and Media 15(2): 249–271.
Garvie C and Frankle J (2016) Facial-recognition software might have a racial bias problem. The Atlantic. Available at: https://www.theatlantic.com/technology/archive/2016/04/the-underlying-bias-of-facial-recognition-systems/476991/
Gottfried J and Grieco E (2018) Younger Americans Are Better than Older Americans at Telling Factual News Statements from Opinions. Pew Research Center. Available at: https://www.pewresearch.org/fact-tank/2018/10/23/younger-americans-are-better-than-older-americans-at-telling-factual-news-statements-from-opinions/
Gran A-B, Booth P and Bucher T (2021) To be or not to be algorithm aware: a question of a new digital divide? Information, Communication & Society 24(12): 1779–1796.
Hamilton K, Karahalios K, Sandvig C, et al. (2014) A path to understanding the effects of algorithm awareness. In: Proceedings of the CHI conference on extended abstracts on human factors in computing systems, Toronto, ON, Canada, 26 April–1 May, pp. 631–642. New York: ACM.
Hargittai E (2009) An update on survey measures of web-oriented digital literacy. Social Science
Computer Review 27(1): 130–137.
Hargittai E, Gruber J, Djukaric T, et al. (2020) Black box measures? How to study people’s algorithm skills. Information, Communication & Society 23(5): 764–775.
Head AJ, Fister B and MacMillan M (2020) Information Literacy in the Age of Algorithms: Student
Experiences with News and Information, and the Need for Change. Project Information Literacy
Research Institute. Available at: https://projectinfolit.org/publications/algorithm-study/
Horton FW (1983) Information literacy vs. computer literacy. Bulletin of the American Society for Information Science 9(4): 14–16.
Huszár F, Ktena SI, O’Brien C, et al. (2022) Algorithmic amplification of politics on Twitter.
Proceedings of the National Academy of Sciences of the United States of America 119(1):
e2025334119.
Janssen J, Stoyanov S, Ferrari A, et al. (2013) Experts’ views on digital competence: commonalities and differences. Computers & Education 68: 473–481.
Johnston B and Webber S (2005) As we may think: information literacy as a discipline for the
information age. Research Strategies 20(3): 108–121.
Karizat N, Delmonaco D, Eslami M, et al. (2021) Algorithmic folk theories and identity: how
TikTok users co-produce knowledge of identity and engage in algorithmic resistance.
Proceedings of the ACM on Human-Computer Interaction 5(CSCW2): 305.
Kirkpatrick K (2016) Battling algorithmic bias. Communications of the ACM 59(10): 16–17.
Kitchin R (2017) Thinking critically about and researching algorithms. Information, Communication & Society 20(1): 14–29.
Klawitter E and Hargittai E (2018) “It’s like learning a whole other language”: the role of algorithmic skills in the curation of creative goods. International Journal of Communication 12: 3490–3510.
Klinger U and Svensson J (2018) The end of media logics? On algorithms and agency. New Media
& Society 20(12): 4653–4670.
Klug D, Qin Y, Evans M, et al. (2021) Trick and please. A mixed-method study on user assumptions about the TikTok algorithm. In: Proceedings of the 13th ACM Web Science Conference 2021 (WebSci ’21), Virtual Event, pp. 84–92. New York: ACM.
Koc M and Barut E (2016) Development and validation of New Media Literacy Scale (NMLS) for
university students. Computers in Human Behavior 63: 834–843.
Koenig A (2020) The algorithms know me and I know them: using student journals to uncover
algorithmic literacy awareness. Computers and Composition 58: 102611.
Koumchatzky N and Andryeyev A (2017) Using Deep Learning at Scale in Twitter’s Timelines. Twitter Blog. Available at: https://blog.twitter.com/engineering/en_us/topics/insights/2017/using-deep-learning-at-scale-in-twitters-timelines
Latzer M and Festic N (2019) A guideline for understanding and measuring algorithmic governance in everyday life. Internet Policy Review 8(2): 1–19.
Lee AY, Mieczkowski H, Ellison NB, et al. (2022) The algorithmic crystal: conceptualizing the
self through algorithmic personalization on TikTok. Proceedings of the ACM on Human-
Computer Interaction 6(CSCW2): 543.
Livingstone S (2004) Media literacy and the challenge of new information and communication
technologies. The Communication Review 7(1): 3–14.
Logg JM, Minson JA and Moore DA (2019) Algorithm appreciation: people prefer algorithmic
to human judgment. Organizational Behavior and Human Decision Processes 151: 90–103.
Lomborg S and Kapsch PH (2020) Decoding algorithms. Media, Culture & Society 42(5): 745–761.
Ma R and Kou Y (2021) “How advertiser-friendly is my video?”: YouTuber’s socioeconomic
interactions with algorithmic content moderation. Proceedings of the ACM on Human-
Computer Interaction 5(CSCW2): 429.
Makady H (2021) “I wouldn’t react to it because of the algorithm”: how can self-presentation moderate news consumption. In: Paper presented at the 104th annual conference of the Association for Education in Journalism and Mass Communication (AEJMC), Denver, CO, USA.
Ohme J (2021) Algorithmic social media use and its relationship to attitude reinforcement and issue-specific political participation: the case of the 2015 European immigration movements. Journal of Information Technology & Politics 18(1): 36–54.
Trepte S and Masur PK (2020) Need for privacy. In: Zeigler-Hill V and Shackelford TK (eds)
Encyclopedia of Personality and Individual Differences. Cham: Springer International
Publishing, pp. 3132–3135.
Van Deursen AJAM and Van Dijk JAGM (2014) The digital divide shifts to differences in usage.
New Media & Society 16(3): 507–526.
Wallaroo Media (2021) Facebook News Feed Algorithm History. Available at: https://wallaroomedia.com/facebook-newsfeed-algorithm-history/
Winter S, Metzger MJ and Flanagin AJ (2016) Selective use of news cues: a multiple-motive perspective on information selection in social media environments. Journal of Communication 66(4): 669–693.
Yeomans M, Shah A, Mullainathan S, et al. (2019) Making sense of recommendations. Journal of
Behavioral Decision Making 32(4): 403–414.
Ytre-Arne B and Moe H (2021) Folk theories of algorithms: understanding digital irritation.
Media, Culture & Society 43(5): 807–824.
Zarouali B, Boerman SC and De Vreese CH (2021) Is this recommended by an algorithm? The
development and validation of the algorithmic media content awareness scale (AMCA-scale).
Telematics and Informatics 62: 101607.
Author biographies
Anne Oeldorf-Hirsch is an Associate Professor in the Department of Communication at the University of Connecticut, where she conducts research in the Human-Computer Interaction lab. Her research investigates the use of social media to engage with news, health, and science content.
German Neubaum is an Assistant Professor of Media Psychology and Education at the University of Duisburg-Essen, Germany. His research interests focus on the educational benefits users can gain from using social media. By combining media psychological methods and social media analytics, he studies technology-enabled educational processes in the context of politics, morality, science, and health communication.