ISSN 1938-4122
DHQ: Digital Humanities Quarterly
2023 17.2
Critical Code Studies
Editors: Mark Marino, Jeremy Douglass
Front Matter
Introduction: Situating Critical Code Studies in the Digital Humanities
Mark C. Marino, University of Southern California; Jeremy Douglass, University of California, Santa Barbara
Abstract
In this foreword from the editors we present a brief introduction to the field of
Critical Code Studies, a reflection on its genesis and evolution, and a summary of
the many and varied author contributions to Part 1 of this remarkable special
collection.
Code Close Readings
Reverse Engineering the Gendered Design of Amazon’s Alexa: Methods in Testing Closed-Source Code in Grey and Black Box Systems
Lai-Tze Fan, University of Waterloo
Abstract
This article examines the gendered design of Amazon Alexa’s voice-driven capabilities, or “skills,” in order to better understand how Alexa, as an AI
assistant, mirrors traditionally feminized labour and sociocultural expectations.
While Alexa’s code is closed source — meaning that the code is not available to be
viewed, copied, or edited — certain features of the code architecture may be
identified through methods akin to reverse engineering and black box testing. This
article will examine what is available of Alexa’s code — the official software developer console through the Alexa Skills Kit, code samples and snippets of official Amazon-developed skills on GitHub, and the code of an unofficial, third-party user-developed skill on GitHub — in order to demonstrate that Alexa is designed to be female-presenting and that, as a consequence, expectations of gendered labour and behaviour have been built into the code and user experiences of various Alexa skills. In doing so, this article offers critical code studies methods for analyzing
code to which we do not have access. It also provides a better understanding of the
inherently gendered design of AI that is designated for care, assistance, and menial
labour, outlining ways in which these design choices may affect and influence user
behaviours.
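Fan’s grey- and black-box methods resist a single recipe, but their underlying move, probing a closed system with controlled input variations and comparing outputs, can be sketched in a few lines. The harness below is hypothetical: query_skill stands in for whatever interface an auditor has (a skill simulator, device capture), not any real Amazon API.

```python
# Hypothetical black-box probe: send paired utterances that differ only in
# politeness and compare responses for systematic differences in tone or
# deference. query_skill() is a stand-in for a real test harness; the
# canned replies below are mock data for illustration only.

PROBE_PAIRS = [
    ("Could you please set a timer?", "Set a timer."),
    ("Sorry to bother you, what's the weather?", "What's the weather?"),
]

def query_skill(utterance: str) -> str:
    canned = {
        "Could you please set a timer?": "Of course! For how long?",
        "Set a timer.": "Timer for how long?",
        "Sorry to bother you, what's the weather?": "No bother at all! It's sunny.",
        "What's the weather?": "It's sunny.",
    }
    return canned[utterance]  # replace with a capture harness in practice

for polite, curt in PROBE_PAIRS:
    print(f"POLITE {polite!r} -> {query_skill(polite)}")
    print(f"CURT   {curt!r} -> {query_skill(curt)}")
```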
BASIC FTBALL and Computer Programming for All
Annette Vee, University of Pittsburgh
Abstract
In late fall 1965, John Kemeny wrote a 239-line BASIC program called
FTBALL. Along with his colleague Thomas Kurtz and a few work-study
students at Dartmouth College, Kemeny had developed the BASIC programming
language and Dartmouth Time-Sharing System (DTSS). BASIC and DTSS
represented perhaps the earliest successful attempt at “computer
programming for all,” combining English-language vocabulary, simple
yet robust instructions, and near-realtime access to a mainframe computer.
This article takes a closer look at FTBALL as a crucial program in the
history of “programming for all” while gesturing to the tension
between a conception of “all” and FTBALL’s context in an elite,
all-male college in the mid-1960s. I put FTBALL in a historical, cultural,
gendered context of “programming for all” as well as the historical
context of programming language development, timesharing technology, and
the hardware and financial arrangements necessary to support this kind of
playful, interactive program in 1965. I begin with a short history of
BASIC’s early development, compare FTBALL with other early games and sports
games, then move into the hardware and technical details that enabled the
code before finally reading FTBALL’s code in detail. Using methods from
critical code studies (Marino 2020), I point to specific innovations of
BASIC at the time and outline the program flow of FTBALL. This history and
code reading of BASIC FTBALL provides something of interest to computing
historians, critical code studies practitioners, and games scholars and
aficionados.
Computational Art Explorations of Linguistic Possibility Spaces: comparative translingual close readings of Daniel C. Howe’s Automatype and Radical of the Vertical Heart 忄
John Cayley, Brown University
Abstract
A code-critical close reading of two related works by Daniel C. Howe. The artist's
Automatype is an installation that visualizes and
sonifies minimal-distance paths between English words and thus explores a possibility
space that is relatively familiar to Western readers, not only readers of English but also readers of any language which uses Latin letters to compose the orthographic word-level elements of its writing system. In Radical of
the Vertical Heart 忄 (RotVH) Howe engages with
commensurate explorations in certain possibility spaces of the Chinese writing system
and of the language’s lexicon. Translinguistically, these spaces and, as it were, orthographic architectures are structured in radically different ways. A comparative close reading of the two works will bring us into a productive discursive relationship
not only with distinct and code-critically significant programming strategies, but
also with under-appreciated comparative linguistic concepts having implications for
the theory of writing systems, of text, and of language as such. Throughout,
questions concerning the aestheticization of this kind of computational exploration
and visualization may also be addressed.
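For readers new to the possibility space Automatype traverses, a word ladder makes it concrete: a shortest path between words in which each step is a one-letter substitution. The sketch below is an illustration only (Howe’s actual traversal rules may differ, for instance by permitting insertions and deletions).

```python
# Breadth-first search for a shortest one-letter-substitution path
# between two words in a tiny illustrative lexicon.
from collections import deque

LEXICON = {"cold", "cord", "card", "ward", "warm", "word", "wore", "core"}

def neighbors(word):
    for i in range(len(word)):
        for c in "abcdefghijklmnopqrstuvwxyz":
            cand = word[:i] + c + word[i + 1:]
            if cand != word and cand in LEXICON:
                yield cand

def ladder(start, goal):
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(ladder("cold", "warm"))  # ['cold', 'cord', 'word', 'ward', 'warm']
```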
“Any Means Necessary to Refuse Erasure by Algorithm:” Lillian-Yvonne Bertram’s Travesty Generator
Zach Whalen, University of Mary Washington
Abstract
Lillian-Yvonne Bertram's 2019 book of poetry is titled Travesty Generator in reference to Hugh Kenner and Joseph
O'Rourke's Pascal program to “fabricate
pseudo-text” by producing text such that each n-length string
of characters in the output occurs at the same frequency as in the source
text. Whereas for Kenner and O'Rourke, labeling their work a
“travesty” is a hyperbolic tease or a literary
burlesque, for Bertram, the travesty is the political reality of racism in
America. For each of the works in Travesty Generator, Bertram uses the generative techniques of computer poetry to
critique, resist, and replace narratives of oppression and to make explicit
and specific what is elsewhere algorithmically insidious and
ambivalent.
In “Counternarratives”, Bertram presents
sentences, fragments, and ellipses that begin ambiguously but gradually resolve to point clearly to the moment of Trayvon Martin’s killing. The poem
that opens the book, “three_last_words”, is at a
functional level a near-echo of the program in Nick Montfort's “I AM THAT I AM”, which is itself a version or
adaptation of Brion Gysin's permutation poem of the same title. But
Bertram’s poem has one important functional difference in that Bertram's
version retains and concatenates the entire working result. With this
modification, the memory required to produce all permutations of the
phrase “I can’t breathe” is sufficiently greater than the storage available on most computers, so the poem will end in a crashed runtime or a frozen computer, metaphorically reenacting and memorializing Eric Garner’s death. Lillian-Yvonne Bertram’s Travesty Generator is a challenging, haunting,
and important achievement of computational literature, and in this essay, I
expand my reading of this book to dig more broadly and deeply into how
specific poems work to better appreciate the collection's contribution to
the field of digital poetry.
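The Kenner/O’Rourke procedure that Bertram reworks is compact enough to state in code: extend the output one character at a time, sampling continuations so that every n-character string occurs with roughly its frequency in the source. The sketch below shows the technique in miniature; it is not Bertram’s code nor the original Pascal.

```python
# Character-level "travesty": each n-character string in the output
# appears with (approximately) its frequency in the source text.
import random

def travesty(source: str, n: int = 3, length: int = 200) -> str:
    # Map each (n-1)-character context to the characters that follow it,
    # with repetition, so that sampling preserves source frequencies.
    table = {}
    for i in range(len(source) - n + 1):
        ctx, nxt = source[i:i + n - 1], source[i + n - 1]
        table.setdefault(ctx, []).append(nxt)
    out = source[:n - 1]
    while len(out) < length:
        choices = table.get(out[-(n - 1):])
        if not choices:
            break
        out += random.choice(choices)
    return out

print(travesty("I can't breathe. " * 8, n=4, length=80))
```

The scale problem behind “three_last_words” is equally concrete: if the permutation runs over the phrase’s fifteen characters, retaining every permutation means holding on the order of 15! (roughly 1.3 trillion) strings, far beyond ordinary memory.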
Poetry as Code as Interactive Fiction: Engaging Multiple Text-Based Literacies in Scarlet Portrait Parlor
Jason Boyd, Toronto Metropolitan University
Abstract
In Prismatik’s Scarlet Portrait Parlor (2020), poetry and code uncannily appear one and the same. This results in a work that is both familiar and strange, and this, along with Scarlet Portrait Parlor’s brevity, simplicity of construction, and immediate recognizability
as a work of literature (a sonnet) that is also executable source code producing a
work of electronic literature, has the potential to intrigue students and textual
scholars unfamiliar with and perhaps resistant to Critical Code Studies (CCS). A
study of Prismatik’s work also has the potential to refine some simplistic judgements
in CCS scholarship about the efficacy of code that emulates natural, human language.
This case study aims to elaborate the value of Scarlet Portrait
Parlor as a rich example of how poetry, programming, and interactive
fiction can be intertwined if not blurred in a single text and to act as a catalyst
for generative discussions about the overlapping and intertwining of natural
languages, programming languages, creative writing, and coding.
Code Legibility and Critical AI
How to Do Things with Deep Learning Code
Minh Hua, Johns Hopkins University; Rita Raley, University of California, Santa Barbara
Abstract
The premise of this article is that a basic understanding of the composition and
functioning of large language models is critically urgent. To that end, we extract a
representational map of OpenAI’s GPT-2 with what we articulate as two classes of deep
learning code, that which pertains to the model and that which underwrites
applications built around the model. We then verify this map through case studies of
two popular GPT-2 applications: the text adventure game, AI
Dungeon, and the language art project, This Word Does
Not Exist. Such an exercise allows us to test the potential of Critical
Code Studies when the object of study is deep learning code and to demonstrate the
validity of code as an analytical focus for researchers in the subfields of Critical
Artificial Intelligence and Critical Machine Learning Studies. More broadly, however,
our work draws attention to the means by which ordinary users might interact with,
and even direct, the behavior of deep learning systems, and by extension works toward
demystifying some of the auratic mystery of “AI.” What is at stake is the
possibility of achieving an informed sociotechnical consensus about the responsible
applications of large language models, as well as a more expansive sense of their
creative capabilities — indeed, understanding how and where engagement occurs allows
all of us to become more active participants in the development of machine learning
systems.
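The authors’ two classes of deep learning code can be illustrated in a few lines of Python. The sketch below uses the Hugging Face transformers wrapper as assumed tooling (the article itself maps OpenAI’s original GPT-2 release): the first statement is model code, and the function around it is the kind of application code a game like AI Dungeon layers on top.

```python
from transformers import pipeline

# "Model code": load GPT-2 weights and expose an inference interface.
generator = pipeline("text-generation", model="gpt2")

# "Application code": a toy text-adventure turn wrapped around the model.
def game_turn(story_so_far: str, player_action: str) -> str:
    prompt = f"{story_so_far}\n> {player_action}\n"
    result = generator(prompt, max_new_tokens=60, do_sample=True)
    # The pipeline returns prompt + continuation; keep the continuation.
    return result[0]["generated_text"][len(prompt):]

print(game_turn("You stand before a locked gate.", "look for a key"))
```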
Tracing “Toxicity” Through Code: Towards a Method of Explainability and Interpretability in Software
David M. Berry, University of Sussex
Abstract
The ubiquity of digital technologies in citizens’ lives marks a major qualitative
shift where automated decisions taken by algorithms deeply affect the lived
experience of ordinary people. But this is not just an action-oriented change as
computational systems can also introduce epistemological transformations in the
constitution of concepts and ideas. However, a lack of public understanding of how
algorithms work also makes them a source of distrust, especially concerning the way
in which they can be used to create frames or channels for social and individual
behaviour. This public concern has been magnified by election hacking, social media
disinformation, data extractivism, and a sense that Silicon Valley companies are out
of control. The wide adoption of algorithms into so many aspects of people’s lives, often without public debate, has meant that algorithms are increasingly seen as mysterious and opaque, when they are not seen as inequitable or biased. Until recently it has been difficult to challenge algorithms or to question their functioning, especially given the wide acceptance that software’s inner workings were incomprehensible, proprietary, or secret (cf. open source). Asking why an algorithm did what it did was often not thought particularly interesting outside of a strictly programming context. This has produced a widening explanatory gap in relation to understanding algorithms and their effects on people’s lived experiences.
This paper argues that Critical Code Studies offers a novel field for developing theoretical and code-epistemological practices with which to reflect on the explanatory deficit that modern societies’ reliance on information technologies has created. The challenge of new forms of social obscurity arising from the implementation of technical systems is heightened by the example of the machine learning systems that have emerged in the past decade. A
key methodological contribution of this paper is to show how concept formation, in
this case of the notion of “toxicity,” can be traced through key categories and classifications deployed in code structures (e.g., modularity and the layering of software), but also how these classifications can appear more stable than they actually are, owing to the tendency of software layers to obscure even as they reveal. How a concept such as “toxicity” can be constituted through code and discourse and then used unproblematically is revealing both for its technical deployment and for a possible computational sociology of knowledge. By developing a broadened notion
of explainability, this paper argues that critical code studies can make important
theoretical, code-epistemological and methodological contributions to digital
humanities, computer science and related disciplines.
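A schematic example may help fix the paper’s point about layering. In the sketch below, which is entirely hypothetical (score_toxicity stands in for a trained classifier behind a moderation service), a continuous score is hardened into the category “toxic” by a single threshold constant; downstream code sees only the label, so the contingent decision looks stable.

```python
# Concept formation through code layers: a continuous model score (lower
# layer) becomes a fixed category via a threshold in application code
# (upper layer). Hypothetical throughout; no real moderation API is used.

TOXICITY_THRESHOLD = 0.8  # the concept's boundary lives in this constant

def score_toxicity(text: str) -> float:
    # Stand-in for a trained classifier.
    return 0.9 if "idiot" in text.lower() else 0.1

def moderate(text: str) -> str:
    # Modules consuming this label never see the threshold decision.
    return "toxic" if score_toxicity(text) >= TOXICITY_THRESHOLD else "ok"

print(moderate("You idiot"), moderate("Good morning"))
```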
Nonsense Code: A Nonmaterial Performance
Barry Rountree, Lawrence Livermore National Laboratory; William Condee, Ohio University
Abstract
Critical Code Studies often relies on the textual representation of code in order to
derive extra-textual significance, with less focus on how code performs and in what
contexts. In this paper we analyze three case studies in which a literal reading of
each program’s code is effectively nonsense. In their performance, however, the
programs generate meaning. To discern this meaning, we use the framework of
nonmaterial performance (NMP), which is based on four tenets: code abstracts, code
performs, code acts within a network, and code is vibrant. We begin with what is to
our knowledge the oldest example of nonsense code: a program (now lost) from the 1950s that caused a UNIVAC I computer to hum “Happy Birthday”. Second, we critique Firestarter, a processor stress test from
the Technical University of Dresden. Finally, we analyze one of the family of
processor power side-channel attacks known collectively as Platypus. In each case,
the text of the code is a wholly unreliable guide to its extra-textual significance.
This paper builds on work in Critical Code Studies by bringing in methodologies from
actor-network theory and political science, examining code from a performance-studies
perspective and with expertise from computer science. Code can certainly be read as
literature, but ultimately it is text written to be performed. Imagining and
observing the performance forces the critic to engage with the code in its own
network. The three examples we have chosen to critique here are outliers; very little code in the world is purposed to manipulate the physical machine. Nonsense shows us the opportunity that nonmaterial performance creates: to decenter text from its privileged position and to recenter code as a performance.
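Both Firestarter and Platypus turn on the fact that code has physical effects invisible in its text. On a Linux machine that exposes Intel RAPL counters through sysfs, a few lines are enough to watch an otherwise pointless loop as an energy event (a sketch; paths and read permissions vary by system).

```python
# Observe the package power drawn by "nonsense" work via the Linux RAPL
# sysfs interface (reading energy_uj may require elevated permissions).
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def energy_uj() -> int:
    with open(RAPL) as f:
        return int(f.read())

start_e, start_t = energy_uj(), time.time()
x = 0
for _ in range(10_000_000):  # result is discarded; only the heat matters
    x ^= 0xFFFF
watts = (energy_uj() - start_e) / 1e6 / (time.time() - start_t)
print(f"~{watts:.1f} W package power during the loop")
```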
Code Languages and Linguistics
ᐊᒐᐦᑭᐯᐦᐃᑲᓇ ᒫᒥᑐᓀᔨᐦᐃᒋᑲᓂᐦᑳᓂᕽ | acahkipehikana mâmitoneyihicikanihkânihk |
Programming with Cree# and Ancestral Code:
Nehiyawewin Spirit Markings in an Artificial Brain
Jon Corbett, Simon Fraser University
Abstract
In this article, I discuss my project “Ancestral
Code”, which consists of an integrated development
environment (IDE) and the Nehiyaw (Plains Cree) based programming
languages called Cree# (pronounced: Cree-Sharp) and
ᐊᒋᒧ (âcimow).
These languages were developed in response to Western perspectives on human-computer relationships, which I challenge and reframe in Nehiyaw/Indigenous contexts.
The Less Humble Programmer
Daniel Temkin, Bard College
Abstract
Esoteric programming languages (esolangs) break from the norms of language design by
explicitly refusing practicality and clarity. While some go even further and make code impossible to write (e.g., Unnecessary), others (e.g., Malbolge) retain the ability to express functional and reliable code, despite the seeming disorder of the
language. To understand the conversation these languages are having, we must look at
how they challenge or re-affirm wider ideas in programming culture and in how
computer science is taught: specifically the sometimes-contradictory aesthetics of
Humbleness and Computational Idealism.
Tools Criticism
Editor: Peter Verhaar
Articles
The Explainability Turn
David M. Berry, University of Sussex
Abstract
How can we know what our computational infrastructures are doing to us? More to
the point, how can we have any confidence that their effects on our minds are
positive rather than negative? Certainly, it is the case that digital
infrastructures combined with spatial and temporal organisation create forms of
digitally-enabled structures that serve to change the cognitive capacity of
humans. How then to assess these new digital infrastructures and machine
learning systems? One of the most difficult tasks facing the critical theorist
today is understanding the delegation and prescription of agency in digital
infrastructures. These are capital-intensive systems and hence tend to be developed by corporations or governments in order to combine multiple systems into a single unity. The systems they build are often difficult if not impossible to understand, and they require the public to trust, but not verify, their decisions. Worries over the opaque and threatening potential of computation have recently been partially addressed through a new legal right to challenge algorithms and their decisions.
This requirement, termed “explainability,” I suggest might contribute to
tool criticism within digital humanities for investigating and potentially
challenging these assemblages and creating a potential for democratic
contestation.
Distant Reading and Viewing: “Big Questions” in Digital Art History and Digital Literary Studies
Ruta Binkyte, Inria Saclay - Île-de-France Research Centre, Institut Polytechnique de Paris
Tool criticism in practice. On methods, tools and aims of computational literary studies
J. Berenike Herrmann, Universität Bielefeld; Anne-Sophie Bories, University of Basel; Francesca Frontini, CNR - Istituto di Linguistica Computazionale A. Zampolli; Clèmence Jacquot; Steffen Pielström, University of Würzburg; Simone Rebora, Johannes Gutenberg University Mainz; Geoffrey Rockwell, University of Alberta; Stéfan Sinclair
Abstract
This paper is a case-driven contribution to the discussion of the method-theory relationship within the field of Computational Literary Studies (CLS). Progress in this field dedicated to the computational analysis of literary texts has long revolved around new digital tools: tools, as computational devices for analysis, have had a comparatively strong status as research entities in their own right, while their ontological status has remained unclear to this day. As a rule, they have widely been imported from
the fields of data science and NLP, while less often being hand-tailored to
specific tasks within interdisciplinary settings. Although studies within CLS
are evolving to both a higher degree of specialization in method (going beyond
the limitations of out-of-the-box tools) and a stronger theoretical modeling,
the technological dimension remains a defining factor. An unreflective adoption
of technology in the shape of tools can compromise the plausibility and the
reproducibility of the results produced using these tools.
Our paper presents a multi-faceted intervention in the discussion around tools, methods, and the research questions that are answered with them. It presents
research perspectives first conceived at the ADHO SIG-DLS workshop Anatomy of tools: A closer look at textual DH
methodologies that took place in Utrecht in July 2019. At that
event, the authors discussed selected case studies to address tool criticism
from several angles. Our goal was to leverage a tool-critical perspective, in
order to “take stock, reflect upon and critically comment
upon our own practices” within CLS.
We identified Textométrie, Stylometry, and
Semantic Text Mining as three central types of hands-on CLS.
For each of these sub-fields, we asked: What are our tools and
methods-in-use? What are the implications of using a tool-oriented
perspective as opposed to a methodology-oriented one? How do either relate
to research questions and theory? These questions were explored by
case-studies on an exemplary basis.
The unifying perspective of this paper is an applied tool criticism – a critical
inquiry leveraged towards crucial dimensions of CLS practices. Here we
re-compose the original oral papers and add entirely new sections to it, to
create a useful overview of the issue through a combination of perspectives.
While we have elaborated the thematic connections between the individual case studies, we hope the interactive spirit of an exemplary exchange remains palpable: the case studies reported for Textométrie, Stylometry, and Semantic Text Mining are shaped by individual research perspectives; they are complemented by further studies showcasing CLS-specific perspectives on replicability and domain-specific research, and by a short section discussing a tool inventory as a practical, community-based incarnation of tool criticism.
The article thus reflects a rich array of perspectives on tool criticism, including the complementary perspective of tool defense, which argues that we need tools and methods as a basic common ground for carrying out fundamental operations of analysis and interpretation within a community.
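Of the three strands, stylometry has the most compact canonical method, Burrows’s Delta: z-score the relative frequencies of the most frequent words, then take the mean absolute difference between two texts’ profiles. The sketch below implements it from scratch on a toy corpus; tools-in-use in the field, such as the stylo package, add far more.

```python
# Burrows's Delta on a toy corpus: most-frequent-word relative
# frequencies, z-scored against the corpus, compared by mean absolute
# difference.
from collections import Counter
import statistics

CORPUS = {
    "A": "the cat sat on the mat and the dog sat too",
    "B": "the dog and the cat ran on and on",
    "C": "a bird sat on a branch and sang to the sky",
}

VOCAB = [w for w, _ in Counter(" ".join(CORPUS.values()).split()).most_common(5)]

def profile(text):
    counts = Counter(text.split())
    total = sum(counts.values())
    return [counts[w] / total for w in VOCAB]

profiles = {k: profile(t) for k, t in CORPUS.items()}
means = [statistics.mean(p[i] for p in profiles.values()) for i in range(len(VOCAB))]
stdevs = [statistics.stdev(p[i] for p in profiles.values()) for i in range(len(VOCAB))]

def delta(a, b):
    za = [(x - m) / s for x, m, s in zip(profiles[a], means, stdevs)]
    zb = [(x - m) / s for x, m, s in zip(profiles[b], means, stdevs)]
    return sum(abs(x - y) for x, y in zip(za, zb)) / len(VOCAB)

print(f"delta(A,B) = {delta('A','B'):.2f}, delta(A,C) = {delta('A','C'):.2f}")
```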
Slow Listening: Digital Tools for Voice Studies
Marit J. MacArthur, University of California, Davis; Lee M. Miller, University of California, Davis
Abstract
Sound studies in general, and voice studies in particular, present distinct challenges for digital humanities scholarship. The software tools available to digital humanists who want to study performative speech are less familiar and less developed for our uses, and the user base is also much smaller than for text mining or network analysis. This article provides a critical narrative of our research and an outline of our methodology in applying, developing, and refining tools for the analysis of pitch and timing patterns in recorded performances of literary texts. The primary texts and audio considered are poetry readings, but the tools and methods can be, and have been, applied more widely to podcasts, talking books, political speeches, and the like.
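As a concrete entry point to this kind of analysis, the open-source librosa library can extract a pitch contour from a recording. This is assumed tooling for illustration; the authors’ own pipeline for poetry audio differs in its specifics.

```python
# Estimate a fundamental-frequency (pitch) contour for a recording with
# librosa's pYIN tracker. "reading.wav" is a hypothetical file.
import librosa

y, sr = librosa.load("reading.wav")
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
times = librosa.times_like(f0, sr=sr)
# f0[i] is the estimated pitch in Hz at times[i], or NaN where the frame
# is unvoiced: the raw material for studying pitch and timing patterns.
```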
Bias in Big Data, Machine Learning and AI: What Lessons for the Digital Humanities?
Andrew Prescott, University of Glasgow
Abstract
This article surveys the ways in which issues of race and gender bias emerge in
projects involving the use of predictive analytics, big data and artificial
intelligence (AI). It analyses some of the reasons biased results occur and
argues for the importance of open documentation and explainability in combatting
these inequities. Digital humanities can make a significant contribution in
addressing these issues. This article was written in late 2020, and discussion and public debate about AI and bias have moved on enormously since the article was completed. Nevertheless, the fundamental proposition of this article has
become even more important and pressing as debates around AI have progressed –
namely, that as a result of the development of big data and AI, it is vital to
foster critical and socially aware approaches to the construction and analysis
of data. The greatest threat to humanity from AI comes not from autonomous
killer robots but rather from the social dislocation and injustices caused by an
overreliance on poorly designed and badly documented commercial black boxes to
administer everything from health care to public order and crime.
The Politics of Tools
Stephen Ramsay, University of Nebraska-Lincoln
Abstract
A consideration of the political meaning of software that tries to add greater
philosophical precision to statements about the politics of tools and tool
building in the humanities. Using Michael Oakeshott’s formulations of the “politics of faith” and the “politics of skepticism,” it suggests that while declaring our tools to be morally or politically neutral may be obviously fallacious, it is equally problematic to suppose that we can predict in advance
the political formations that will arise from our tool building. For indeed (as
Oakeshott suggests), the tools themselves give rise to what is politically
possible.
Sentiment Analysis in Literary Studies. A Critical Survey
Simone Rebora, Johannes Gutenberg University Mainz
Abstract
The article sets up a critique of Sentiment Analysis (SA) tools in literary
studies, both from a theoretical and a computational point of view. In the first
section, a possible use of SA in narratology and reader response studies is
discussed, highlighting the gaps between literary theories and computational
models, and suggesting possible solutions to fill them. In the second section, a
stratified taxonomy of SA tools is proposed, which distinguishes: (1) the
emotion theory adopted by the tool; (2) the method used to build the emotion
resources; (3) the technique adopted to accomplish the analysis. A critical
survey of six representative SA tools for literary studies (Syuzhet, Vader,
SentiArt, SEANCE, Stanford SA, and Transformers Pipelines) closes the
article.
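Of the surveyed tools, Vader is the simplest to demonstrate. A minimal example with the vaderSentiment Python package also shows the proposed taxonomy’s three layers at work: a valence-based emotion theory, a human-rated lexicon as the resource, and rule-based aggregation as the technique.

```python
# Score two sentences with VADER's lexicon-and-rules sentiment analyzer.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
for sentence in [
    "It was the best of times, it was the worst of times.",
    "All happy families are alike.",
]:
    scores = analyzer.polarity_scores(sentence)
    # 'compound' is a normalized valence score in [-1, 1].
    print(f"{scores['compound']:+.2f}  {sentence}")
```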
Unpacking tool criticism as practice, in practice
Karin van Es, Utrecht University
Abstract
Thanks to easy-to-use data analysis tools and digital infrastructures, even those
humanities scholars who lack programming skills can work with large-scale
empirical datasets in order to disclose patterns and correlations within them.
Although empirical research trends have existed throughout the history of the humanities, these recently emergent possibilities have revived an empiricist attitude among humanities scholars schooled in more critical and interpretive traditions. Replying to calls for a critical digital humanities, this paper explores “tool criticism”: a critical attitude required of digital humanities scholars when working with computational tools and digital infrastructures.
First, it explores tool criticism as a response to instrumentalism in the
digital humanities and proposes it to be part of what a critical digital
humanities does. Second, it analyses tool criticism as practice, in practice.
Concretely, it discusses two critical making–inspired workshops in which
participants explored the affordances of digital tools and infrastructures and
their underlying assumptions and values. The first workshop focused on “games-as-tools”. Participants in the workshop engaged with the constraints, material and mechanical, of a card game by making modifications to it. In the second workshop, drawing on the concept of “digital infrapuncture”, participants examined digital infrastructure in
terms of capacity and care. After first identifying “hurt” in a chat
environment, they then designed bots to intervene in that hurt and offer
relief.
Articles
Computational Paremiology: Charting the temporal, ecological dynamics of proverb use in books, news articles, and tweets
Ethan Davis, Computational Story Lab, Vermont Complex Systems Center, MassMutual Center of Excellence for Complex Systems and Data Science, Vermont Advanced Computing Core, Watzek Library, Lewis & Clark College; Christopher Danforth, Computational Story Lab, Vermont Complex Systems Center, MassMutual Center of Excellence for Complex Systems and Data Science, Vermont Advanced Computing Core, Department of Mathematics & Statistics, University of Vermont; Wolfgang Mieder, Department of German & Russian, University of Vermont; Peter Sheridan Dodds, Computational Story Lab, Vermont Complex Systems Center, MassMutual Center of Excellence for Complex Systems and Data Science, Vermont Advanced Computing Core, Department of Computer Science, University of Vermont, Santa Fe Institute
Abstract
Proverbs are an essential component of language and culture, and though much attention
has been paid to their history and currency, there has been comparatively little
quantitative work on changes in the frequency with which they are used over time. With
wider availability of large corpora reflecting many diverse genres of documents, it is
now possible to take a broad and dynamic view of the importance of the proverb. Here, we
measure temporal changes in the relevance of proverbs within four corpora, differing in
kind, scale, and time frame: millions of books over centuries; thousands of books over centuries; millions of news articles over twenty years; and billions of tweets over a decade. While similar methodologies now abound, they have not yet been carried out with comprehensive phraseological lexica (here, The Dictionary of
American Proverbs). We show that beyond simple partitioning of texts into words,
searches for culturally significant phrases can yield distinct insights from the same
corpora. Comparative analysis of four commonly used corpora shows that each reveals
its own relationship to the phenomena being studied. We also find that the frequency
with which proverbs appear in texts follows a similar distribution to that of individual
words.
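The core measurement reduces to counting lexicon phrases in a corpus and normalizing by corpus size. A deliberately simple sketch follows; the study’s matching across books, news, and tweets is far more careful about variants and tokenization. The corpus file named here is hypothetical.

```python
# Count occurrences of each proverb per million tokens in a text corpus.
import re

PROVERBS = ["a stitch in time saves nine", "actions speak louder than words"]

corpus = open("corpus.txt", encoding="utf-8").read().lower()
tokens = len(corpus.split())

for p in PROVERBS:
    hits = len(re.findall(re.escape(p), corpus))
    print(f"{hits / tokens * 1e6:8.2f} per million tokens  {p}")
```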
Historical GIS and Guidebooks: A Scalable Reading of Czechoslovak Tourist Attractions
Sune Bechmann Pedersen, Stockholm University; Mathias Johansson, Lund University
Abstract
This article demonstrates the value of “scalable reading” of historical travel
guides, combining traditional close reading with computer-assisted distant reading.
Aiming to scrutinize the persistence of older tourist attractions under communism, we
analyse guidebooks intended for similar audiences but produced under different
political regimes. More specifically, we compare three travel guides to the same
geographical area produced between 1905 and 1959: one to communist Cold War Czechoslovakia, one to democratic interwar Czechoslovakia, and one to the
Habsburg-era Czech lands and Slovakia. We analyse the geographic distribution of
attractions by geolocating the guidebook toponyms and visualizing them with
Geographic Information Systems (GIS). This distant reading is complemented with a
hermeneutic analysis grounded in a close reading of the guidebook text. The
combination of these approaches documents the similarities in the symbolic
representation of the country’s attractions across political caesuras and provides a
methodological template for future explorations of travel guides with historical
GIS.
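The geolocation step can be approximated with off-the-shelf tools. The sketch below uses the geopy library with the public Nominatim geocoder as an assumption for illustration; resolving historical guidebook toponyms against a gazetteer is considerably more involved.

```python
# Geocode guidebook toponyms to coordinates for plotting in a GIS.
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="guidebook-gis-demo")
for toponym in ["Karlštejn", "Bratislava", "Karlovy Vary"]:
    loc = geolocator.geocode(toponym)
    if loc:
        # Coordinates can then be layered by guidebook edition to map
        # attractions across political eras.
        print(f"{toponym:15} {loc.latitude:.4f}, {loc.longitude:.4f}")
```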
An Integral Web-map for the Analysis of Spatial Change over Time in a Complex Built Environment: Digital Samos
Estefanía López Salas, Universidade da Coruña
Abstract
The paper focuses on a prototype interactive web-map developed for the presentation
and dissemination of architectural transformations at the monastic site of San Julián
de Samos in north-western Spain. The paper’s central argument offers a response to
questions regarding why and how to create an interactive web-map in the field of
architectural history through a particular case study. The paper is organized into
three main parts. It first presents the project’s focus on the spatiotemporal analysis of a centuries-old Spanish monastic site. The second part is devoted to the specific domain of web-mapping tools and why they can help us better make sense of complex built environments that humans have formed and re-formed over time. After that, we explain how we approached the process of creating an integral scientific web-map that goes beyond static 2D representations of a multi-layered past physical realm in a definitive publication, the challenges we faced, and the proposed future developments. The
prototype web-map of Digital Samos integrates the graphic features of spatial objects
with source data in a web publication platform where the reader is granted access
to fully uncover, interact with, and learn about a historically rich monastic
palimpsest.
SEDES: Metrical Position in Greek Hexameter
Stephen A. Sansom, Florida State University; David Fifield, Independent Scholar
Abstract
This article outlines the processes of SEDES, a program that automatically
identifies, quantifies, and visualizes the metrical position of lemmata in
ancient Greek hexameter poetry; and gives examples of its application to
investigate the effects of metrical position on poetic features such as
formularity, expectancy, and intertextuality.
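Once a line is scanned, the bookkeeping behind a sedes value is simple arithmetic: counting a longum as one metrical position and each breve as half, a word’s sedes is one plus the weights of everything before it. The sketch below supplies the scansion by hand (deriving it automatically is the program’s real work, and the conventions here are a simplified reading of SEDES’s scheme).

```python
# Sedes (metrical position) of each word in Iliad 1.1, with syllable
# weights given by hand: 1.0 per longum, 0.5 per breve.
line = [
    ("μῆνιν", [1.0, 0.5]),
    ("ἄειδε", [0.5, 1.0, 0.5]),
    ("θεὰ", [0.5, 1.0]),
]

position = 1.0
for word, weights in line:
    print(f"sedes {position:<4} {word}")
    position += sum(weights)
```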
Automatic Identification of Rhetorical Elements in Classical Arabic Poetry
Heyam Abd Alhadi, University of Haifa; Ali Ahmad Hussein, University of Haifa; Tsvi Kuflik, University of Haifa
Abstract
A novel, rule-based, automatic framework for identifying rhetorical elements in
classical Arabic poetry is described. Since rule-based approaches have well-known
limitations, it is proposed as an interim solution until a sufficient quantity of
annotated text has been amassed with which to train a machine-learning algorithm. The
manual process of identifying rhetorical features in classical Arabic poetry is both time-consuming and dependent on high-level expertise in Arabic literature; an automatic recognition system would therefore be of real value. Automatic identification is,
however, challenging, mainly because there is no existing annotated corpus with which
to train a machine-learning-based classifier. The framework proposed here combines
natural language-processing techniques with a rule-based reasoning approach, and will
continually improve as more examples become available. It is intended as an initial
step toward building the essential annotated corpus. Its focus is 20 rhetorical elements, all important according to classical Arabic rhetoricians, and it achieves an encouraging overall F-measure of 0.902.
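The rule-based flavor of such a framework can be suggested with one toy rule for the classical badiʿ device radd al-ʿajuz ʿalā al-ṣadr, in which the verse’s final word echoes its opening. This is a hypothetical rule of the paper’s general kind, not code from the framework, and the sample verse is constructed for illustration.

```python
# Toy rule: flag a verse whose final word repeats one of its first words,
# after stripping harakat (diacritics) so surface variants match.
import re

def normalize(token: str) -> str:
    return re.sub(r"[\u064B-\u0652]", "", token)

def radd_alajuz(verse: str) -> bool:
    tokens = [normalize(t) for t in verse.split()]
    return len(tokens) > 2 and tokens[-1] in tokens[:2]

print(radd_alajuz("الكريم يعطي والبخيل لا يعطي"))  # True (constructed example)
```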
Language, Materiality, and Digital Neapolitanità
Cristina Migliaccio, CUNY Medgar Evers College
Abstract
Southern Italian digital humanist Domenico Fiormonte has argued that “DH is…a discipline and academic discourse dominated materially by an Anglo-American élite and intellectually by a mono-cultural view” and has repeatedly called for a digital humanities that “improve[s] and cultivate[s] the margins…[giving] more attention [to] variegated cultural and linguistic diversity”. Similarly, Crystal Hall points out that both the Digital Humanities and Italian Studies “have struggled with inclusivity and the representation for traditionally marginalized voices…[though] both fields offer tools and materials of study that can assist in [a] transformation”. This article takes up the work of these
scholars in its investigation of the Neapolitan language on YouTube. According to
UNESCO, the Neapolitan language is a vulnerable language because the number of
speakers has been decreasing steadily in Southern Italy, forecasting the eventual
extinction of the Southern Italian language. UNESCO’s categorization of Neapolitan as
“vulnerable” is problematic because it only accounts for speakers in Southern
Italy and not in the Italian diaspora, which involves a physical relocation of
Neapolitans to other parts of the world such as Australia and the United States. It
is also problematic because it indicates that Italians either in Italy or in the
diaspora may no longer want to speak Neapolitan. A Neapolitan digital diaspora,
unaccounted for in UNESCO statistics, also exists on social media, which may include
Neapolitans in Italy and abroad but also may include first-generation Italians,
heritage-language speakers, and people from other cultures who are fluent in or familiar with the language. In this article, I explore how usages of the Neapolitan-Italian language on
YouTube might counter the linguistic and cultural subordination of Neapolitans.
Machine Learning Techniques for Analyzing Inscriptions from Israel
Daiki Tagami, Columbia University; Michael Satlow, Brown University
Abstract
The dates of artifacts are an important factor for scholars seeking a deeper understanding of the culture and society of the past. However, many artifacts are damaged over time, and we can often recover only fragments of information about the original artifact. Here, we use inscription data from Israel as a model dataset and compare the performance of eleven commonly used regression models. We find that the random forest model is the optimal machine learning model for predicting the year of inscriptions from tabular data. We further show how interpretations can be drawn from the prediction model through a variable importance plot. This research offers an overview of how machine learning techniques can be used to address digital humanities problems, using the Inscriptions of Israel/Palestine dataset as a model dataset.
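The winning configuration is straightforward to reproduce in outline with scikit-learn. The sketch below assumes a hypothetical tabular export with illustrative column names; the actual Inscriptions of Israel/Palestine schema differs.

```python
# Random forest regression of inscription date on tabular features, with
# the variable importances behind an importance plot. Column names and
# "inscriptions.csv" are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("inscriptions.csv")
X = pd.get_dummies(df[["region", "language", "material", "genre"]])
y = df["year"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out inscriptions:", model.score(X_test, y_test))

for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1])[:5]:
    print(f"{imp:.3f}  {name}")
```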
Author Biographies
URL: http://www.digitalhumanities.org/dhq/vol/17/2/index.html
Comments: dhqinfo@digitalhumanities.org
Published by: The Alliance of Digital Humanities Organizations and The Association for Computers and the Humanities
Affiliated with: Digital Scholarship in the Humanities
DHQ has been made possible in part by the National Endowment for the Humanities.
Copyright © 2005 -
Unless otherwise noted, the DHQ web site and all DHQ published content are published under a Creative Commons Attribution-NoDerivatives 4.0 International License. Individual articles may carry a more permissive license, as described in the footer for the individual article, and in the article’s metadata.