
Interpretable Propaganda Detection in News Articles

Seunghak Yu1∗, Giovanni Da San Martino2, Mitra Mohtarami3, James Glass3, Preslav Nakov4

1 Amazon Alexa AI, Seattle, WA, USA
2 Department of Mathematics, University of Padova, Italy
3 MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA, USA
4 Qatar Computing Research Institute, HBKU, Qatar
∗ Work conducted while the author was at MIT CSAIL.

yuseungh@amazon.com, dasan@math.unipd.it
{mitra, glass}@csail.mit.edu, pnakov@hbku.edu.qa

arXiv:2108.12802v1 [cs.CL] 29 Aug 2021

Abstract

Online users today are exposed to misleading and propagandistic news articles and media posts on a daily basis. To counter this, a number of approaches have been designed aiming to achieve a healthier and safer online news and media consumption. Automatic systems are able to support humans in detecting such content; yet, a major impediment to their broad adoption is that, besides being accurate, the decisions of such systems need also to be interpretable in order to be trusted and widely adopted by users. Since misleading and propagandistic content influences readers through the use of a number of deception techniques, we propose to detect and to show the use of such techniques as a way to offer interpretability. In particular, we define qualitatively descriptive features and we analyze their suitability for detecting deception techniques. We further show that our interpretable features can be easily combined with pre-trained language models, yielding state-of-the-art results.

Figure 1: Comparison of propaganda prediction interpretability using existing methods. Our proposed method helps users to interpret propaganda predictions across various dimensions, e.g., is there a lot of positive/negative sentiment (which can signal the use of loaded language, which appeals to emotions), are the target sentence and the document body related to the title, does the sentence agree/disagree with the title, etc. Each symbol in the top bar chart represents an information source for propaganda detection.

1 Introduction

With the rise of the Internet and social media, there was also a rise of fake (Nguyen et al., 2020), biased (Baly et al., 2020a,b), hyperpartisan (Potthast et al., 2018), and propagandistic content (Da San Martino et al., 2019b). In 2016, news got weaponized, aiming to influence the US Presidential election and the Brexit referendum, making the general public concerned about the dangers of the proliferation of fake news (Howard and Kollanyi, 2016; Faris et al., 2017; Lazer et al., 2018; Vosoughi et al., 2018; Bovet and Makse, 2019).

There were two reasons for this. First, disinformation disguised as news created the illusion that the information is reliable, and thus people tended to lower their barrier of doubt compared to when information came from other types of sources. Second, the rise of citizen journalism led to the proliferation of various online media, and the veracity of information became an issue. In practice, the effort required to fact-check the news, and its bias and propaganda, remained the same or even increased compared to traditional media, since the news was re-edited and passed through other media channels.

Propaganda aims to influence the audience with the aim of advancing a specific agenda (Da San Martino et al., 2020b). Detecting it is tricky and arguably more difficult than finding false information in an article. This is because propagandistic articles are not intended to simply make up a story with objective errors, but instead use a variety of techniques to convince people, such as selectively conveying facts or appealing to emotions (Jowett and O'Donnell, 2012).
While many techniques are ethically questionable, we can think of propaganda techniques as rhetorical expressions that effectively convey the author's opinion (O'Shaughnessy, 2004). Due to these characteristics, propagandistic articles are often produced primarily for political purposes (but they are also very common in commercial advertisement), which directly affect our lives, and they are commonly found even in major news media outlets, which are generally considered credible.

The importance of detecting propaganda in the news has been recently emphasized, and research is being conducted from various perspectives (Rashkin et al., 2017; Barrón-Cedeno et al., 2019a; Da San Martino et al., 2019b). However, while previous work has done a reasonable job at detecting propaganda, it has largely ignored the question of why the content is propagandistic, i.e., there is a lack of interpretability of the system decisions, and in many cases there is a lack of interpretability of the model as well, i.e., it is hard to understand what the model actually does, even for its creators.

Interpretability is indispensable if propaganda detection systems are to be trusted and accepted by the users. According to confirmation bias theory (Nickerson, 1998), people easily accept new information that is consistent with their beliefs, but are less likely to do so when it contradicts what they already know. Thus, even if a model can correctly predict which news is propagandistic, if it fails to explain the reason for that, people are more likely to reject the results and to stick to what they want to believe. In order to address this issue, we propose a new formulation of the propaganda detection task and a model that can explain its prediction results. Figure 1 compares the coverage of the explanations for pre-existing methods vs. our proposal.

Our contributions can be summarized as follows:

• We study how a number of information sources relate to the presence and the absence of propaganda in a piece of text.

• Based on this, we propose a general framework for interpretable propaganda detection.

• We demonstrate that our framework is complementary to and can be combined with large-scale pre-trained transformers, yielding sizable improvements over the state of the art.
2 Task Setup

Given a document d that consists of n sentences, d = {d_i}_{i=1}^n, each sentence should be classified as belonging to one of 18 propaganda techniques or as being non-propaganda. The exact definition of propaganda can be subtly different depending on the social environment and the individual's background, and thus it is not surprising that the propaganda techniques defined in the literature differ (Miller, 1939; Jowett and O'Donnell, 2012; Hobbs and McGee, 2014; Torok, 2015; Weston, 2018). The techniques we use in this paper are shown in Table 1. Da San Martino et al. (2019b) derived the propaganda techniques from the literature: they selected 18 techniques and manually annotated 451 news articles with a total of 20,110 sentences. This dataset (available at http://propaganda.math.unipd.it/) has fragment-level labels that can span over multiple sentences and can overlap with other labeled spans.

Name Calling: give an object an insulting label
Repetition: inject the same message over and over
Slogans: use a brief and memorable phrase
Appeal to Fear: plant fear against other alternatives
Doubt: questioning the credibility
Exaggeration: exaggerate or minimize something
Flag-Waving: appeal to patriotism
LL: appeal to emotions or stereotypes
RtoH: the disgusted group likes the idea
Bandwagon: appeal to popularity
CO: assume a simple cause for the outcome
OIC: use obscure expressions to confuse
AA: use authority's support as evidence
B&W Fallacy: present only two options among many
TC: discourage meaningful discussion
Red Herring: introduce irrelevant material to distract
Straw Men: refute a nonexistent argument
Whataboutism: discredit an opponent's position

Table 1: List of propaganda techniques and brief definitions. LL: Loaded Language, RtoH: Reduction to Hitlerum, CO: Causal Oversimplification, OIC: Obfuscation, Intentional vagueness, Confusion, AA: Appeal to Authority, TC: Thought-terminating Clichés.

This granular labeling went beyond our scope, and we had to restructure the data. First, we divided the data into sentences. Second, in order to reduce the complexity of the task, we changed the multi-label setup to a multi-class one by ignoring duplicate labels and only allowing one technique per sentence (the first one), breaking ties at random. As a result, we obtained 20,111 sentences labeled with a non-propaganda class or with one of the 18 propaganda techniques. Based on this data, we built a system for predicting the use of propaganda techniques at the sentence level, and we provided the semantic and the structural information related to propaganda techniques as the basis of the results.
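This restructuring step can be sketched as follows; the span-based input format and the helper below are hypothetical illustrations of the described procedure, not the authors' code:

```python
import random

def sentence_labels(sent_spans, fragments):
    """Map fragment-level annotations to one label per sentence.

    sent_spans: list of (start, end) character offsets, one per sentence
    fragments:  list of (start, end, technique) propaganda annotations
    Returns one technique (or "non-propaganda") per sentence, keeping the
    first overlapping technique and breaking ties at random (Section 2).
    """
    labels = []
    for s_start, s_end in sent_spans:
        # collect techniques whose spans overlap this sentence
        hits = [(f_start, tech) for f_start, f_end, tech in fragments
                if f_start < s_end and f_end > s_start]
        if not hits:
            labels.append("non-propaganda")
            continue
        first = min(start for start, _ in hits)
        tied = [tech for start, tech in hits if start == first]
        labels.append(random.choice(tied))  # ties broken at random
    return labels

# toy example: one fragment labels the first of two sentences
print(sentence_labels([(0, 40), (41, 80)], [(5, 20, "Loaded Language")]))
```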
3 Proposed Method

Our method can detect the propaganda for each sentence in a document, and it can explain what propaganda technique was used via interpretable semantic and syntactic features. We further propose novel features inspired by studies of human behavioral characteristics. We give more detail below.

3.1 People Do Not Read Full Articles

Behavior studies have shown that people read less than 50% of the articles they find online, and often stop reading after the first few sentences, or even after the title (Manjoo, 2013). Indeed, we found that 77.5% of our articles use propaganda techniques in the first five sentences, 65% do so in the first three sentences, and 31.07% do so in the title.
We used three types of features (f^rp, f^sim, f^stn) to account for these observations, which we describe below.

3.1.1 Relative Position of the Sentence

We define the relative position of a sentence as f_i^rp = i/n, where i is the sequence number of the sentence, and n is the total number of sentences in the article.

3.1.2 Topic Similarity and Stance with Respect to the Title

The title of an article typically contains the topic and also the author's view of that topic. Thus, we hypothesize that propaganda should also focus on the topic expressed in the title.

We represent the relationship between the target sentence and the title by measuring the semantic similarity f_i^sim between them as the cosine between the sentence-BERT representations φ(x) (Reimers and Gurevych, 2019) of the target sentence d_i and of the title d_1:

f_i^sim = (φ(d_1) · φ(d_i)) / (|φ(d_1)| |φ(d_i)|)    (1)

We further model the stance of a target sentence with respect to the title f_i^stn using a distribution over five classes: related, unrelated, agree, disagree, and discuss. For this, we use a BERT model (Fang et al., 2019) fine-tuned on the Fake News Challenge dataset (Hanselowski et al., 2018). The class unrelated indicates that the sentence is not related to the claim made in the title, while agree and disagree refer to the sentence agreeing/disagreeing with the title, and finally discuss is assigned when the topic is the same as that in the title, but there is no stance. We further introduce the related class as the union of agree, disagree, and discuss. We use as features the binary classification labels and also the probabilities for these five classes.
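For illustration, the similarity feature f_i^sim can be computed with the sentence-transformers library of Reimers and Gurevych (2019); this is a minimal sketch, and the specific checkpoint name is an assumption, since the paper does not state which sentence-BERT model was used:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# checkpoint is an assumption; the paper only says "sentence-BERT"
model = SentenceTransformer("bert-base-nli-mean-tokens")

def title_similarity(title: str, sentence: str) -> float:
    """Cosine similarity between title and sentence embeddings, as in Eq. (1)."""
    phi_title, phi_sent = model.encode([title, sentence])
    return float(np.dot(phi_title, phi_sent) /
                 (np.linalg.norm(phi_title) * np.linalg.norm(phi_sent)))

print(title_similarity("New tax law passes", "Lawmakers approved the tax bill."))
```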
3.2 Syntactic and Semantic Information

Some propaganda techniques have specific structural or semantic characteristics. For example, Loaded Language can be configured to elicit an emotional response, usually using an emotional noun phrase. To model this, we define the following three features: f^dp, f^sent, and f^doc.

3.2.1 Syntactic Information

We used a syntactic parser to extract structural features about the target sentence, f_i^dp. Our hypothesis is that such information could help to discover techniques that have specific structural characteristics, such as Doubt and Black and White Fallacy. We considered a total of 27 clause-level and phrase-level labels, including the unknown class. The set is shown in Table 2.

Clause: S, SBAR, SBARQ, SINV, SQ
Phrase: ADJP, ADVP, CONJP, FRAG, INTJ, LST, NAC, NP, NX, PP, PRN, PRT, QP, RRC, UCP, VP, WHADJP, WHAVP, WHADVP, WHNP, WHPP, X

Table 2: The syntactic labels we used as features.
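A minimal sketch of deriving such features, assuming a constituency parse is already available as a bracketed string (the paper does not name the parser it used); for brevity, only a subset of the Table 2 labels is shown:

```python
from collections import Counter
from nltk import Tree

# a subset of the clause- and phrase-level labels from Table 2
LABELS = ["S", "SBAR", "SBARQ", "SINV", "SQ", "ADJP", "ADVP", "NP", "PP", "VP"]

def syntactic_features(parse_str: str) -> list:
    """Count the occurrences of each constituent label in a parse (f^dp)."""
    counts = Counter(sub.label() for sub in Tree.fromstring(parse_str).subtrees())
    return [counts.get(label, 0) for label in LABELS]

# hand-written bracketed parse for "Is that really true?"
parse = "(SQ (VBZ Is) (NP (DT that)) (ADVP (RB really)) (ADJP (JJ true)) (. ?))"
print(syntactic_features(parse))  # [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
```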
3.2.2 Sentiment of the Sentence

The sentiment of the sentence f_i^sent is another important feature for detecting propaganda. This is because many propagandistic articles try to convince the readers by appealing to their emotions and prejudices. Thus, we extract the sentiment using a sentiment analyzer trained on social media data (Hutto and Gilbert, 2014), which gives a probability distribution over the following three classes: positive, neutral, and negative. It further outputs compound, which is a one-dimensional normalized, weighted composite score. We use all four scores as features.
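These scores correspond to the output of the VADER analyzer (Hutto and Gilbert, 2014); a minimal sketch using the vaderSentiment package (the exact implementation the authors used is not stated):

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# returns the four scores used as features: neg, neu, pos, and compound
scores = analyzer.polarity_scores("The corrupt elites are destroying our great country!")
print(scores)  # e.g. {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
```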
3.2.3 Document-Level Prediction

If the document is likely to be propagandistic, then each of its sentences is more likely to contain propaganda. To model this, we use as a feature f^doc the score of the document-level propaganda classifier Proppy (Barrón-Cedeno et al., 2019a). Note that Proppy is trained on articles labeled using media-level labels, i.e., using distant supervision: all articles from a propagandistic source are considered to be propagandistic.

4 Experimental Results

In this section, we present our experimental setup for interpretable propaganda detection and the evaluation results from our experiments. Specifically, we perform three sets of experiments: (i) in Section 4.1, we quantitatively analyze the effectiveness of the features we proposed in Section 3; (ii) in Sections 4.2 and 4.3, we compare our feature-based model to the state-of-the-art model described in (Da San Martino et al., 2019b) using the experimental setup from that paper; (iii) in Section 4.4, we analyze the performance of our model with respect to each of the 18 propaganda techniques.
4.1 Quantitative Analysis of the Proposed Features

Figure 2 shows the absolute value of the covariance between each of our features f and each of the 18 propaganda techniques T. We calculated the values of the features on the training and on the development datasets, and we standardized their values. Then, we formulated this as the problem of calculating the covariance between a continuous and a Bernoulli random variable, as follows:

cov(f, T) = p · (1 − p) · (E[f | T = 1] − E[f | T = 0]),

where p is the probability that the technique is present, i.e., p = P(T = 1).
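This closed form agrees with the usual empirical covariance when T is binary; a small self-contained check with made-up feature values:

```python
import numpy as np

# toy data: f = standardized feature values, T = Bernoulli technique indicator
f = np.array([0.9, -0.3, 1.2, -0.8, 0.1, -1.1])
T = np.array([1, 0, 1, 0, 1, 0])

p = T.mean()
closed_form = p * (1 - p) * (f[T == 1].mean() - f[T == 0].mean())
empirical = np.cov(f, T, bias=True)[0, 1]  # population covariance

print(closed_form, empirical)  # the two values coincide
```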
The total number of sentences used is 16,137 (for the training and the development datasets, combined), among which there are 4,584 propagandistic sentences. In Figure 2, the vertical axis represents the proposed features, and the horizontal axis shows the individual propaganda techniques and the total number of instances thereof. Each square shows the absolute value of the covariance between some feature and some propaganda technique. We show absolute values in order to ignore the direction of the relationship, and we apply a threshold of 0.001 in order to remove the negligible relations from the figure.

Figure 2: Covariance matrix between the 18 propaganda techniques and the proposed features.

Although the most frequent propaganda techniques appear in less than 10% of the examples, they do show qualitatively meaningful associations. Indeed, we do not expect a feature to correlate with multiple techniques, as they are fundamentally different. We believe that having features that strongly correlate with one technique might be an advancement towards detecting that technique.

We can see that the structural information (f^dp) and the sentiment of a sentence (f^sent) are closely associated with certain propaganda techniques. For example, Loaded Language has a strong correlation with features identifying words bearing either a positive or a negative sentiment. This makes sense, as the authors are more likely to use emotional words rather than neutral ones, and Loaded Language aims to elicit an emotional response. Similarly, Doubt has a high correlation with certain syntactic categories.

There are a number of interesting observations about the other features. For example, the relative position of sentences (f^rp) is associated with more than half of the propaganda techniques. Moreover, the similarity to the title (f^sim) and the stance with respect to the title (f^stn) are strongly correlated with the likelihood that the target sentence is propagandistic. The features that indicate whether a sentence is related to the subject of the title are complementary, and thus their covariances are the same when absolute values are considered.

4.2 Comparison to Existing Approaches

Table 3 shows a performance comparison of our model vs. existing models on the sentence-level propaganda detection dataset (Da San Martino et al., 2019b). This dataset consists of 451 manually annotated articles, collected from various media sources, with a total of 20,111 sentences. Unlike the experimental setting in the previous sections, the task here is a binary classification one: given a sentence, the goal is to predict whether it contains at least one of the 18 techniques or not. For the performance comparison, we used BERT (Devlin et al., 2019), which we fine-tuned for sentence-level classification, and the Multi-Granularity Network (MGN) (Da San Martino et al., 2019b) architecture on top of the [CLS] tokens (trained end-to-end), as this model improves the performance for both tasks by controlling the word-level prediction using information from the sentence-level prediction and vice versa.
Model                      P      R      F1
fine-tuned BERT (1)        63.20  53.16  57.74
MGN (1)                    60.41  61.58  60.98
Proposed                   40.97  73.27  52.55
Proposed w/ emb            49.41  80.87  61.34
Proposed w/ emb - f^stn    49.59  81.44  61.64

Table 3: Comparison of our method to pre-existing propaganda detection models at the sentence level for binary classification (propaganda vs. non-propaganda). The models marked with (1) are described in (Da San Martino et al., 2019b).

We followed the original data split when training and testing the model, which is 14,137/2,006/3,967 sentences for training/development/testing. We trained a Support Vector Machine (SVM) model using the above-mentioned features, and we optimized the values of the hyper-parameters on the development dataset using grid search. We used an RBF kernel with gamma in {1e-3, 1e-4} and C in {10, 100}. (The experiments ran on an Intel Xeon E5-1620 CPU @ 3.60GHz x 4, with 16 GiB DDR3 RAM @ 1600 MHz.)
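A minimal sketch of this training setup using scikit-learn; the feature matrix here is a random placeholder standing in for the concatenated features of Section 3, and note that the paper tunes on a fixed development split rather than the cross-validation used by default below:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# placeholder data standing in for the concatenated features of Section 3
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)

# grid from Section 4.2: RBF kernel, gamma in {1e-3, 1e-4}, C in {10, 100}
param_grid = {"kernel": ["rbf"], "gamma": [1e-3, 1e-4], "C": [10, 100]}
search = GridSearchCV(SVC(), param_grid, scoring="f1")
search.fit(X_train, y_train)
print(search.best_params_)
```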
We can see in Table 3 that our proposed model, which is based on interpretable features, performs relatively well when compared to fine-tuned BERT, despite having no direct semantic information about the target sentence. While our model is not state-of-the-art by itself, we managed to outperform the existing models and to improve over the state of the art by simply adding to it sentence embeddings as features (Reimers and Gurevych, 2019), which were not fine-tuned on propaganda data. However, when the stance of the sentence and the embedding of the sentence are used together, performance decreases. This may be due to these two techniques, both based on semantic similarity, being somewhat inconsistent.

4.3 Ablation Study

Next, we performed an ablation study of the binary (propaganda vs. non-propaganda) model discussed in Section 4.2. The results are presented in Table 4. The values in the last row of the table, i.e., - f^sent, are obtained by applying the document-level classifier, i.e., the feature f^doc, to all sentences.

Ablations   Precision  Recall  F1
All         40.97      73.27   52.55
- f^rp      40.87      73.17   52.45
- f^sim     40.85      70.87   51.83
- f^stn     40.07      69.62   50.86
- f^dp      37.85      61.54   46.87
- f^sent    30.53      77.69   43.83

Table 4: Ablation study for our model on binary propaganda detection at the sentence level.

We can see that the structural information about the sentence (f^dp) is the best feature for this task. This is due to the nature of some propaganda techniques that must have a specific sentence structure, such as Doubt. In addition, as described above, since there are many techniques related to inducing emotional responses in the readers, it can be understood that the sentiment of a sentence may be a good feature, e.g., for Loaded Language. These results are consistent with our findings in Section 4.1 above. Moreover, the novel features we devised based on a human behavioral study for propaganda detection (f^rp, f^sim, f^stn) improved the performance further. Overall, we can see in the table that all features contributed to the performance improvement.

4.4 Detecting the 18 Propaganda Techniques

For the experiments described in the following, we revert back to the task formulation in Section 2, but we perform a more detailed analysis of the outcome of the model: for a given article, the system must predict whether each sentence uses propaganda techniques, and if so, which of the 18 techniques in Table 1 it uses.

Table 5 shows the performance of our model on this task. We can see in the rightmost column that some techniques appear only in a very limited number of examples, which explains the very low results for them, e.g., for Red Herring and Straw Men. In an attempt to counterbalance the lack of gold labels for some of the techniques, we used sentence embeddings together with the proposed features to capture more semantic information. Since this task is more challenging than the binary classification problem, we can intuitively expect a performance reduction, resulting in a weighted average F1 score of 42.88. However, this formulation of the problem has the advantage of providing more granular predictions, thus enriching the propaganda detection results.
Techniques        P      R      F1     #
Non-propaganda    94.37  36.62  52.77  2,927
Name Calling      14.16  21.92  17.20  146
Repetition        4.60   5.59   5.05   143
Slogans           3.75   20.69  6.35   29
Appeal to F.      12.99  38.37  19.41  86
Doubt             5.97   34.85  10.20  66
Exaggeration      6.06   20.90  9.40   67
Flag-Waving       10.98  44.62  17.63  65
Loaded L.         32.80  20.13  24.95  303
Reduction         8.00   22.22  11.76  9
Bandwagon         0.00   0.00   0.00   3
Causal O.         4.03   27.27  7.02   22
O, I, C           0.00   0.00   0.00   5
Appeal to A.      1.32   13.04  2.39   23
B&W Fallacy       0.89   4.55   1.49   22
T. clichés        3.67   44.44  6.78   18
Red Herring       0.00   0.00   0.00   11
Straw Men         0.00   0.00   0.00   1
Whataboutism      2.54   14.29  4.32   21
weighted avg      73.59  32.80  42.88  3,967

Table 5: Performance of our proposed method for the task of detecting the 18 propaganda techniques, as evaluated at the sentence level.

5 Related Work

Research on propaganda detection has focused on analyzing textual content (Barrón-Cedeno et al., 2019b; Rashkin et al., 2017; Da San Martino et al., 2019b,a; Yu et al., 2019; Da San Martino et al., 2020b). Rashkin et al. (2017) developed the TSHP-17 corpus, which uses document-level annotation with four classes: trusted, satire, hoax, and propaganda. They trained a model using a word n-gram representation and reported that the model performed well only on articles from sources that the system was trained on. Barrón-Cedeno et al. (2019b) developed the QProp corpus with two labels: propaganda vs. non-propaganda. They also experimented on the TSHP-17 and QProp corpora, where for the TSHP-17 corpus, they binarized the labels: propaganda vs. any of the other three categories. Similarly, Habernal et al. (2017, 2018) developed a corpus with 1.3k arguments annotated with five fallacies, including ad hominem, red herring, and irrelevant authority, which directly relate to propaganda techniques. Moreover, Saleh et al. (2019) studied the connection between hyperpartisanship and propaganda.

A more fine-grained propaganda analysis was proposed by Da San Martino et al. (2019b), who developed a corpus of news articles annotated with 18 propaganda techniques, which was used in two shared tasks: at SemEval-2020 (Da San Martino et al., 2020a) and at NLP4IF-2019 (Da San Martino et al., 2019a). Subsequently, the Prta system was released (Da San Martino et al., 2020c), and improved models were proposed, addressing the limitations of transformers (Chernyavskiy et al., 2021). The Prta system was used to perform a study of COVID-19 disinformation and the associated propaganda techniques in Bulgaria (Nakov et al., 2021a) and in Qatar (Nakov et al., 2021b). Finally, multimodal content was explored in memes using 22 fine-grained propaganda techniques (Dimitrov et al., 2021a), which were also used in a SemEval-2021 shared task (Dimitrov et al., 2021b).

6 Conclusion and Future Work

We proposed a model for interpretable propaganda detection, which can explain which sentence in an input news article is propagandistic by pointing out the propaganda techniques used, and why the model has predicted it to be propagandistic. To this end, we devised novel features motivated by human behavior studies, quantitatively deduced the relationship between semantic or syntactic features and propaganda techniques, and selected the features that were important for detecting propaganda techniques. Finally, we showed that our proposed method can be combined with a pre-trained language model to yield new state-of-the-art results.

In future work, we plan to expand the dataset by creating a platform to guide annotators. The dataset will be updated continuously and released for research purposes (http://propaganda.qcri.org/). We also plan to release an interpretable online system, with the aim to foster a healthier and safer online news environment.

Acknowledgements

This research is part of the Tanbih mega-project (http://tanbih.qcri.org/), which aims to limit the impact of "fake news", propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. It is developed in collaboration between the Qatar Computing Research Institute, HBKU, and the MIT Computer Science and Artificial Intelligence Laboratory.
References

Ramy Baly, Giovanni Da San Martino, James Glass, and Preslav Nakov. 2020a. We can detect your bias: Predicting the political ideology of news articles. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP '20, pages 4982–4991.

Ramy Baly, Georgi Karadzhov, Jisun An, Haewoon Kwak, Yoan Dinkov, Ahmed Ali, James Glass, and Preslav Nakov. 2020b. What was written vs. who read it: News media profiling using text analysis and social media context. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL '20, pages 3364–3374.

Alberto Barrón-Cedeno, Giovanni Da San Martino, Israa Jaradat, and Preslav Nakov. 2019a. Proppy: A system to unmask propaganda in online news. In Proceedings of the AAAI Conference on Artificial Intelligence, AAAI '19, pages 9847–9848.

Alberto Barrón-Cedeno, Israa Jaradat, Giovanni Da San Martino, and Preslav Nakov. 2019b. Proppy: Organizing the news based on their propagandistic content. Information Processing & Management, 56(5):1849–1864.

Alexandre Bovet and Hernán A. Makse. 2019. Influence of fake news in Twitter during the 2016 US presidential election. Nature Communications, 10(1):7.

Anton Chernyavskiy, Dmitry Ilvovsky, and Preslav Nakov. 2021. Transformers: "The end of history" for NLP? In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML-PKDD '21.

Giovanni Da San Martino, Alberto Barrón-Cedeño, Henning Wachsmuth, Rostislav Petrov, and Preslav Nakov. 2020a. SemEval-2020 task 11: Detection of propaganda techniques in news articles. In Proceedings of the International Workshop on Semantic Evaluation, SemEval '20, Barcelona, Spain.

Giovanni Da San Martino, Alberto Barron-Cedeno, and Preslav Nakov. 2019a. Findings of the NLP4IF-2019 shared task on fine-grained propaganda detection. In Proceedings of the 2nd Workshop on NLP for Internet Freedom (NLP4IF): Censorship, Disinformation, and Propaganda, NLP4IF '19, pages 162–170, Hong Kong, China.

Giovanni Da San Martino, Stefano Cresci, Alberto Barrón-Cedeño, Seunghak Yu, Roberto Di Pietro, and Preslav Nakov. 2020b. A survey on computational propaganda detection. In Proceedings of the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence, IJCAI-PRICAI '20, pages 4826–4832, Yokohama, Japan.

Giovanni Da San Martino, Shaden Shaar, Yifan Zhang, Seunghak Yu, Alberto Barrón-Cedeno, and Preslav Nakov. 2020c. Prta: A system to support the analysis of propaganda techniques in the news. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, ACL '20, pages 287–293.

Giovanni Da San Martino, Seunghak Yu, Alberto Barrón-Cedeño, Rostislav Petrov, and Preslav Nakov. 2019b. Fine-grained analysis of propaganda in news articles. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP '19, pages 5636–5646, Hong Kong, China.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT '19, pages 4171–4186, Minneapolis, Minnesota, USA.

Dimitar Dimitrov, Bishr Bin Ali, Shaden Shaar, Firoj Alam, Fabrizio Silvestri, Hamed Firooz, Preslav Nakov, and Giovanni Da San Martino. 2021a. Detecting propaganda techniques in memes. In Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP '21, pages 6603–6617.

Dimitar Dimitrov, Bishr Bin Ali, Shaden Shaar, Firoj Alam, Fabrizio Silvestri, Hamed Firooz, Preslav Nakov, and Giovanni Da San Martino. 2021b. Task 6 at SemEval-2021: Detection of persuasion techniques in texts and images. In Proceedings of the 15th International Workshop on Semantic Evaluation, SemEval '21, pages 70–98.

Wei Fang, Moin Nadeem, Mitra Mohtarami, and James Glass. 2019. Neural multi-task learning for stance prediction. In Proceedings of the Second Workshop on Fact Extraction and VERification, FEVER '19, pages 13–19, Hong Kong, China.

Robert Faris, Hal Roberts, Bruce Etling, Nikki Bourassa, Ethan Zuckerman, and Yochai Benkler. 2017. Partisanship, propaganda, and disinformation: Online media and the 2016 US presidential election. Berkman Klein Center Research Publication, 6.

Ivan Habernal, Raffael Hannemann, Christian Pollak, Christopher Klamm, Patrick Pauli, and Iryna Gurevych. 2017. Argotario: Computational argumentation meets serious games. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP '17, pages 7–12, Copenhagen, Denmark.

Ivan Habernal, Patrick Pauli, and Iryna Gurevych. 2018. Adapting serious game for fallacious argumentation to German: Pitfalls, insights, and best practices. In Proceedings of the 11th International Conference on Language Resources and Evaluation, LREC '18, pages 3329–3335, Miyazaki, Japan.

Andreas Hanselowski, Avinesh PVS, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M. Meyer, and Iryna Gurevych. 2018. A retrospective analysis of the fake news challenge stance-detection task. In Proceedings of the 27th International Conference on Computational Linguistics, COLING '18, pages 1859–1874, Santa Fe, New Mexico, USA.

Renee Hobbs and Sandra McGee. 2014. Teaching about propaganda: An examination of the historical roots of media literacy. Journal of Media Literacy Education, 6(2):5.

Philip N. Howard and Bence Kollanyi. 2016. Bots, #StrongerIn, and #Brexit: Computational propaganda during the UK-EU referendum. Available at SSRN 2798311.

Clayton Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. Proceedings of the International AAAI Conference on Web and Social Media, 8(1):216–225.

Garth S. Jowett and Victoria O'Donnell. 2012. What is propaganda, and how does it differ from persuasion. Propaganda & Persuasion, pages 1–48.

David M. J. Lazer, Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts, and Jonathan L. Zittrain. 2018. The science of fake news. Science, 359(6380):1094–1096.

Farhad Manjoo. 2013. You won't finish this article: Why people online don't read to the end.

Clyde R. Miller. 1939. The techniques of propaganda. From "How to detect and analyze propaganda," an address given at Town Hall. The Center for Learning.

Preslav Nakov, Firoj Alam, Shaden Shaar, Giovanni Da San Martino, and Yifan Zhang. 2021a. COVID-19 in Bulgarian social media: Factuality, harmfulness, propaganda, and framing. In Proceedings of the International Conference on Recent Advances in Natural Language Processing, RANLP '21.

Preslav Nakov, Firoj Alam, Shaden Shaar, Giovanni Da San Martino, and Yifan Zhang. 2021b. A second pandemic? Analysis of fake news about COVID-19 vaccines in Qatar. In Proceedings of the International Conference on Recent Advances in Natural Language Processing, RANLP '21.

Van-Hoang Nguyen, Kazunari Sugiyama, Preslav Nakov, and Min-Yen Kan. 2020. FANG: Leveraging social context for fake news detection using graph representation. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management, CIKM '20, pages 1165–1174.

Raymond S. Nickerson. 1998. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2):175–220.

Nicholas J. O'Shaughnessy. 2004. Politics and propaganda: Weapons of mass seduction. Manchester University Press.

Martin Potthast, Johannes Kiesel, Kevin Reinartz, Janek Bevendorff, and Benno Stein. 2018. A stylometric inquiry into hyperpartisan and fake news. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL '18, pages 231–240, Melbourne, Australia.

Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '17, pages 2931–2937, Copenhagen, Denmark.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP '19, pages 3982–3992, Hong Kong, China.

Abdelrhman Saleh, Ramy Baly, Alberto Barrón-Cedeño, Giovanni Da San Martino, Mitra Mohtarami, Preslav Nakov, and James Glass. 2019. Team QCRI-MIT at SemEval-2019 task 4: Propaganda analysis meets hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, SemEval '19, pages 1041–1046, Minneapolis, Minnesota, USA.

Robyn Torok. 2015. Symbiotic radicalisation strategies: Propaganda tools and neuro linguistic programming. In Proceedings of the Australian Security and Intelligence Conference, pages 58–65, Perth, Australia.

Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science, 359(6380):1146–1151.

Anthony Weston. 2018. A rulebook for arguments. Hackett Publishing.

Seunghak Yu, Giovanni Da San Martino, and Preslav Nakov. 2019. Experiments in detecting persuasion techniques in the news. In Proceedings of the NeurIPS 2019 Joint Workshop on AI for Social Good, NeurIPS '19, Vancouver, Canada.
