Pre-trained language models for text generation: A survey
Text Generation aims to produce plausible and readable text in human language from input
data. The resurgence of deep learning has greatly advanced this field, in particular, with the …
Recent advances in deep learning based dialogue systems: A systematic survey
Dialogue systems are a popular natural language processing (NLP) task, as they are promising for
real-life applications. They are also complicated to build, since many NLP tasks deserving study are …
Retrieval augmentation reduces hallucination in conversation
Despite showing increasingly human-like conversational abilities, state-of-the-art dialogue
models often suffer from factual incorrectness and hallucination of knowledge (Roller et al …
A survey of knowledge enhanced pre-trained language models
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-
supervised learning, have yielded promising performance on various tasks in …
Language models are few-shot multilingual learners
General-purpose language models have demonstrated impressive capabilities, performing
on par with state-of-the-art approaches on a range of downstream natural language …
FaithDial: A Faithful Benchmark for Information-Seeking Dialogue
The goal of information-seeking dialogue is to respond to seeker queries with natural
language utterances that are grounded on knowledge sources. However, dialogue systems …
Contrastive learning reduces hallucination in conversations
Pre-trained language models (LMs) store knowledge in their parameters and can generate
informative responses when used in conversational systems. However, LMs suffer from the …
Large language models are strong zero-shot retriever
In this work, we propose a simple method that applies a large language model (LLM) to
large-scale retrieval in zero-shot scenarios. Our method, the Language language model as …
Summarization as indirect supervision for relation extraction
Relation extraction (RE) models have been challenged by their reliance on training data
with expensive annotations. Considering that summarization tasks aim at acquiring concise …
Knowledge-grounded dialogue generation with a unified knowledge representation
Knowledge-grounded dialogue systems are challenging to build due to the lack of training
data and heterogeneous knowledge sources. Existing systems perform poorly on unseen …