BERTScore: Evaluating Text Generation with BERT

Published: 20 Dec 2019, Last Modified: 23 Mar 2025
ICLR 2020 Conference Blind Submission
TL;DR: We propose BERTScore, an automatic evaluation metric for text generation, which correlates better with human judgments and provides stronger model selection performance than existing metrics.
Abstract: We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task and show that BERTScore is more robust to challenging examples compared to existing metrics.
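As a concrete illustration of the token matching described in the abstract, the sketch below computes the greedy-matching score in NumPy. It assumes pre-computed, L2-normalized contextual embeddings for both sentences and omits refinements such as the idf importance weighting discussed in the paper; the function and variable names are illustrative, not from the released package.

```python
import numpy as np

def bertscore(cand_emb: np.ndarray, ref_emb: np.ndarray):
    """Greedy-matching BERTScore from pre-computed contextual embeddings.

    cand_emb: (num_candidate_tokens, dim), rows L2-normalized
    ref_emb:  (num_reference_tokens, dim), rows L2-normalized
    """
    # Pairwise cosine similarity between every candidate and reference token.
    sim = cand_emb @ ref_emb.T
    # Recall: each reference token greedily matches its most similar candidate token.
    recall = sim.max(axis=0).mean()
    # Precision: each candidate token greedily matches its most similar reference token.
    precision = sim.max(axis=1).mean()
    # F1: harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```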
Keywords: Metric, Evaluation, Contextual Embedding, Text Generation
Code: https://github.com/Tiiiger/bert_score
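The linked repository ships a pip-installable package. The snippet below is a usage sketch based on its quick-start at the time of writing; the exact API may have changed, so consult the repository for current details.

```python
# pip install bert-score
from bert_score import score

cands = ["the cat sat on the mat"]
refs = ["a cat was sitting on the mat"]

# Returns per-sentence precision, recall, and F1 tensors.
P, R, F1 = score(cands, refs, lang="en")
print(f"F1: {F1.mean().item():.4f}")
```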