Rethinking automatic evaluation in sentence simplification

T Scialom, L Martin, J Staiano… - arXiv preprint arXiv …, 2021 - arxiv.org
arXiv preprint arXiv:2104.07560, 2021 - arxiv.org
Automatic evaluation remains an open research question in Natural Language Generation. In the context of Sentence Simplification, this is particularly challenging: the task by nature requires replacing complex words with simpler ones that share the same meaning. This limits the effectiveness of n-gram based metrics like BLEU. Going hand in hand with the recent advances in NLG, new metrics have been proposed, such as BERTScore for Machine Translation. In summarization, the QuestEval metric proposes to automatically compare two texts by questioning them. In this paper, we first propose a simple modification of QuestEval allowing it to tackle Sentence Simplification. We then extensively evaluate the correlations with human judgment for several metrics, including the recent BERTScore and QuestEval, and show that the latter obtains state-of-the-art correlations, outperforming standard metrics like BLEU and SARI. More importantly, we also show that a large part of the correlations are actually spurious for all the metrics. To investigate this phenomenon further, we release a new corpus of evaluated simplifications, this time not generated by systems but written by humans. This allows us to remove the spurious correlations and draw very different conclusions from the original ones, resulting in a better understanding of these metrics. In particular, we raise concerns about the very low correlations of most traditional metrics. Our results show that the only significant measure of Meaning Preservation is our adaptation of QuestEval.
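As an illustration of the metric-versus-human-judgment correlation analysis the abstract describes, the sketch below scores a few system simplifications with BLEU (via the sacrebleu package) and BERTScore (via the bert-score package), then computes each metric's Pearson correlation with human ratings. The example sentences and ratings are invented for illustration; this is not the paper's evaluation code, and the QuestEval adaptation it introduces is not shown.

```python
# Illustrative sketch (not the paper's code): correlate automatic metric
# scores with human judgments for a handful of simplifications.
# Assumes the `sacrebleu`, `bert-score`, and `scipy` packages are installed.
from bert_score import score as bertscore
from sacrebleu import sentence_bleu
from scipy.stats import pearsonr

# Hypothetical system outputs, reference simplifications, and human
# meaning-preservation ratings (e.g., on a 1-5 scale).
system_outputs = [
    "The cat rested on the mat.",
    "The law was passed last year.",
    "He gave up the race.",
]
references = [
    "The cat sat on the mat.",
    "The law was adopted last year.",
    "He abandoned the race.",
]
human_ratings = [4.5, 4.0, 3.5]

# Sentence-level BLEU for each output against its reference.
bleu_scores = [
    sentence_bleu(hyp, [ref]).score
    for hyp, ref in zip(system_outputs, references)
]

# BERTScore F1 for each output/reference pair.
_, _, f1 = bertscore(system_outputs, references, lang="en")
bertscore_f1 = f1.tolist()

# Pearson correlation between each metric and the human ratings.
for name, scores in [("BLEU", bleu_scores), ("BERTScore", bertscore_f1)]:
    r, p = pearsonr(scores, human_ratings)
    print(f"{name}: r={r:.3f} (p={p:.3f})")
```

Note that SARI, unlike BLEU, also takes the original (complex) source sentence into account, so an analogous sketch for SARI would additionally need the source sentences (e.g., via the easse library).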