An empirical survey on long document summarization: Datasets, models, and metrics

HY Koh, J Ju, M Liu, S Pan - ACM Computing Surveys, 2022 - dl.acm.org
Long documents such as academic articles and business reports have been the standard
format to detail out important issues and complicated subjects that require extra attention. An …

Repairing the cracked foundation: A survey of obstacles in evaluation practices for generated text

S Gehrmann, E Clark, T Sellam - Journal of Artificial Intelligence Research, 2023 - jair.org
Evaluation practices in natural language generation (NLG) have many known flaws,
but improved evaluation approaches are rarely widely adopted. This issue has become …

Language model tokenizers introduce unfairness between languages

A Petrov, E La Malfa, P Torr… - Advances in neural …, 2023 - proceedings.neurips.cc
Recent language models have shown impressive multilingual performance, even when not
explicitly trained for it. Despite this, there are concerns about the quality of their outputs …

AlignScore: Evaluating factual consistency with a unified alignment function

Y Zha, Y Yang, R Li, Z Hu - arXiv preprint arXiv:2305.16739, 2023 - arxiv.org
Many text generation applications require the generated text to be factually consistent with
input information. Automatic evaluation of factual consistency is challenging. Previous work …

Learning to summarize with human feedback

N Stiennon, L Ouyang, J Wu… - Advances in neural …, 2020 - proceedings.neurips.cc
As language models become more powerful, training and evaluation are increasingly
bottlenecked by the data and metrics used for a particular task. For example, summarization …

On faithfulness and factuality in abstractive summarization

J Maynez, S Narayan, B Bohnet… - arXiv preprint arXiv …, 2020 - arxiv.org
It is well known that the standard likelihood training and approximate decoding objectives in
neural text generation models lead to less human-like responses for open-ended tasks such …

SummEval: Re-evaluating summarization evaluation

AR Fabbri, W Kryściński, B McCann, C Xiong… - Transactions of the …, 2021 - direct.mit.edu
The scarcity of comprehensive up-to-date studies on evaluation metrics for text
summarization and the lack of consensus regarding evaluation protocols continue to inhibit …

Beyond goldfish memory: Long-term open-domain conversation

J Xu, A Szlam, J Weston - arXiv preprint arXiv:2107.07567, 2021 - arxiv.org
Despite recent improvements in open-domain dialogue models, state-of-the-art models are
trained and evaluated on short conversations with little context. In contrast, the long-term …

Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics

A Pagnoni, V Balachandran, Y Tsvetkov - arXiv preprint arXiv:2104.13346, 2021 - arxiv.org
Modern summarization models generate highly fluent but often factually unreliable outputs.
This motivated a surge of metrics attempting to measure the factuality of automatically …

PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization

J Zhang, Y Zhao, M Saleh, P Liu - … conference on machine …, 2020 - proceedings.mlr.press
Recent work pre-training Transformers with self-supervised objectives on large text corpora
has shown great success when fine-tuned on downstream NLP tasks including text …