Automatic text summarization: A comprehensive survey
Abstract Automatic Text Summarization (ATS) has become increasingly important because of
the huge amount of textual content that grows exponentially on the Internet and the various …
Repairing the cracked foundation: A survey of obstacles in evaluation practices for generated text
Abstract Evaluation practices in natural language generation (NLG) have many known flaws,
but improved evaluation approaches are rarely widely adopted. This issue has become …
BARTScore: Evaluating generated text as text generation
A wide variety of NLP applications, such as machine translation, summarization, and dialog,
involve text generation. One major challenge for these applications is how to evaluate …
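The core idea lends itself to a compact sketch: score a candidate text by the likelihood a pretrained seq2seq model assigns to it given the source. Below is a minimal illustration using Hugging Face's BART; the checkpoint choice and the plain mean log-likelihood (no token weighting) are assumptions for illustration, not the paper's exact released implementation.

```python
# Minimal BARTScore-style sketch: average token log-likelihood of the
# candidate conditioned on the source, under a seq2seq model (higher = better).
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
model.eval()

def bart_score(source: str, candidate: str) -> float:
    """Average log p(candidate | source) per target token (assumed variant)."""
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(candidate, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        out = model(**inputs, labels=labels)
    # HF returns the mean token-level cross-entropy as `loss`; negating it
    # gives the mean log-likelihood per target token.
    return -out.loss.item()

print(bart_score("The cat sat on the mat all day.", "A cat rested on a mat."))
```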
Graph neural networks for natural language processing: A survey
Deep learning has become the dominant approach in addressing various tasks in Natural
Language Processing (NLP). Although text inputs are typically represented as a sequence …
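For readers new to the graph view of text, a single graph-convolution (GCN) layer over token nodes is a useful minimal example. The adjacency matrix below stands in for an assumed dependency parse, and the propagation rule is the standard GCN one, not any specific model from the survey.

```python
# One GCN layer over a toy word graph: H = ReLU(D^{-1/2} (A+I) D^{-1/2} X W).
import numpy as np

def gcn_layer(X, A, W):
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

X = np.random.randn(4, 8)                       # 4 token nodes, 8-dim features
A = np.array([[0, 1, 0, 0],                     # assumed toy edges
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
W = np.random.randn(8, 8)
print(gcn_layer(X, A, W).shape)                 # (4, 8)
```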
Big Bird: Transformers for longer sequences
Transformer-based models, such as BERT, have been one of the most successful deep
learning models for NLP. Unfortunately, one of their core limitations is the quadratic …
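The quadratic bottleneck is easy to see in code: full self-attention materializes an n × n score matrix. The sketch below contrasts that with a sliding-window mask, one ingredient of sparse-attention schemes like Big Bird's; it illustrates the complexity argument only, not the paper's block-sparse implementation.

```python
# Full attention builds an (n, n) score matrix: O(n^2) memory in sequence
# length n. A window mask keeps only O(n * w) entries per the sparse idea.
import numpy as np

def attention_weights(q, k, mask=None):
    scores = q @ k.T / np.sqrt(q.shape[-1])     # (n, n): the quadratic term
    if mask is not None:
        scores = np.where(mask, scores, -np.inf)
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)                    # exp(-inf) = 0 for masked slots
    return weights / weights.sum(axis=-1, keepdims=True)

n, d, w = 8, 16, 2                              # toy sizes; w = window radius
q, k = np.random.randn(n, d), np.random.randn(n, d)
idx = np.arange(n)
window_mask = np.abs(idx[:, None] - idx[None, :]) <= w  # local attention only
print(attention_weights(q, k, window_mask).round(2))
```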
SummEval: Re-evaluating summarization evaluation
The scarcity of comprehensive up-to-date studies on evaluation metrics for text
summarization and the lack of consensus regarding evaluation protocols continue to inhibit …
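As a concrete reference point for the metrics such studies re-evaluate, here is a minimal ROUGE-1 F1 implementation; production toolkits add stemming, multiple references, and bootstrap confidence intervals, all omitted in this sketch.

```python
# Minimal ROUGE-1 F1: clipped unigram overlap between candidate and reference.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())        # clipped matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "a cat was on the mat"))  # ~0.667
```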
On faithfulness and factuality in abstractive summarization
It is well known that the standard likelihood training and approximate decoding objectives in
neural text generation models lead to less human-like responses for open-ended tasks such …
MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers
Pre-trained language models (e.g., BERT (Devlin et al., 2018) and its variants) have achieved
remarkable success in a variety of NLP tasks. However, these models usually consist of …
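The distillation objective can be sketched in a few lines: make the student's last-layer attention distributions match the teacher's under a KL divergence. The toy tensors below are assumptions, and MiniLM additionally transfers value-value relations, which this sketch omits.

```python
# Self-attention distillation sketch: KL(teacher || student) over key positions.
import torch
import torch.nn.functional as F

def attention_distill_loss(teacher_attn, student_attn):
    """Both inputs: (batch, heads, query_len, key_len), already softmaxed."""
    log_student = torch.log(student_attn.clamp_min(1e-9))
    # F.kl_div expects log-probs as input and probs as target.
    return F.kl_div(log_student, teacher_attn, reduction="batchmean")

t = torch.softmax(torch.randn(2, 4, 8, 8), dim=-1)   # toy teacher attention
s = torch.softmax(torch.randn(2, 4, 8, 8), dim=-1)   # toy student attention
print(attention_distill_loss(t, s).item())
```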
Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics
Modern summarization models generate highly fluent but often factually unreliable outputs.
This motivated a surge of metrics attempting to measure the factuality of automatically …
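One family of factuality metrics that benchmarks like FRANK compare is entailment-based: treat the source document as premise and the summary as hypothesis, then read off an NLI model's entailment probability. A hedged sketch follows; the MNLI checkpoint is an assumed, illustrative choice, not the benchmark's prescribed metric.

```python
# Entailment-based factuality proxy: premise = source, hypothesis = summary,
# score = the NLI model's entailment probability (assumed setup).
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def entailment_score(source: str, summary: str) -> float:
    scores = nli({"text": source, "text_pair": summary}, top_k=None)
    return next(s["score"] for s in scores if s["label"].upper() == "ENTAILMENT")

src = "The company reported a 10% rise in quarterly revenue."
print(entailment_score(src, "Quarterly revenue grew."))   # consistent summary
print(entailment_score(src, "Revenue fell sharply."))     # hallucinated claim
```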
Text summarization with pretrained encoders
Bidirectional Encoder Representations from Transformers (BERT) represents the latest
incarnation of pretrained language models which have recently advanced a wide range of …
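In the extractive setting, the recipe reduces to encoding each sentence with BERT and ranking sentences by a learned score. The sketch below uses an untrained linear head (an assumption, for shape only) and omits the paper's inter-sentence Transformer layers.

```python
# Extractive summarization sketch: score each sentence's [CLS] vector with a
# linear head and keep the top-k sentences in document order.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
scorer = torch.nn.Linear(bert.config.hidden_size, 1)  # untrained head (assumed)

def extract_summary(sentences, k=2):
    with torch.no_grad():
        enc = tokenizer(sentences, return_tensors="pt",
                        padding=True, truncation=True)
        cls = bert(**enc).last_hidden_state[:, 0]     # [CLS] per sentence
        scores = scorer(cls).squeeze(-1)
    top = torch.topk(scores, k=min(k, len(sentences))).indices.sort().values
    return [sentences[i] for i in top]

doc = ["BERT advances many NLP tasks.",
       "The weather was mild.",
       "Pretrained encoders help summarization."]
print(extract_summary(doc))
```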