LLM-Blender: Ensembling large language models with pairwise ranking and generative fusion
We present LLM-Blender, an ensembling framework designed to attain consistently superior
performance by leveraging the diverse strengths of multiple open-source large language …
Calibrating sequence likelihood improves conditional language generation
Conditional language models are predominantly trained with maximum likelihood estimation
(MLE), giving probability mass to sparsely observed target sequences. While MLE trained …
Lift yourself up: Retrieval-augmented text generation with self-memory
With direct access to human-written references as memory, retrieval-augmented generation
has achieved much progress in a wide range of text generation tasks. Since better memory …
Extractive summarization via ChatGPT for faithful summary generation
Extractive summarization is a crucial task in natural language processing that aims to
condense long documents into shorter versions by directly extracting sentences. The recent …
Single-document abstractive text summarization: A systematic literature review
Abstractive text summarization is a natural language processing task that automatically
generates a summary of the source document in human-written form with minimal loss …
Prompted opinion summarization with GPT-3.5
Large language models have shown impressive performance across a wide variety of tasks,
including text summarization. In this paper, we show that this strong performance extends to …
Detecting and mitigating hallucinations in multilingual summarisation
Hallucinations pose a significant challenge to the reliability of neural models for abstractive
summarisation. While automatically generated summaries may be fluent, they often lack …
Large language model routing with benchmark datasets
There is a rapidly growing number of open-source Large Language Models (LLMs) and
benchmark datasets to compare them. While some models dominate these benchmarks, no …
MVP: Multi-task supervised pre-training for natural language generation
Pre-trained language models (PLMs) have achieved remarkable success in natural
language generation (NLG) tasks. Up to now, most NLG-oriented PLMs are pre-trained in an …
Faithfulness-aware decoding strategies for abstractive summarization
Despite significant progress in understanding and improving faithfulness in abstractive
summarization, the question of how decoding strategies affect faithfulness is less studied …