Survey of hallucination in natural language generation
Natural Language Generation (NLG) has improved exponentially in recent years thanks to
the development of sequence-to-sequence deep learning technologies such as Transformer …
Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing
This article surveys and organizes research works in a new paradigm in natural language
processing, which we dub “prompt-based learning.” Unlike traditional supervised learning …
Siren's song in the AI ocean: A survey on hallucination in large language models
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …
Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment
Ensuring alignment, which refers to making models behave in accordance with human
intentions [1, 2], has become a critical task before deploying large language models (LLMs) …
Factuality enhanced language models for open-ended text generation
Pretrained language models (LMs) are susceptible to generating text with nonfactual
information. In this work, we measure and improve the factual accuracy of large-scale LMs …
Trusting your evidence: Hallucinate less with context-aware decoding
Language models (LMs) often struggle to pay enough attention to the input context, and
generate texts that are unfaithful or contain hallucinations. To mitigate this issue, we present …
A survey of knowledge-enhanced text generation
The goal of text-to-text generation is to make machines express themselves like humans in many
applications such as conversation, summarization, and translation. It is one of the most …
Factual error correction for abstractive summarization models
Neural abstractive summarization systems have achieved promising progress, thanks to the
availability of large-scale datasets and models pre-trained with self-supervised methods …
Generative knowledge graph construction: A review
Generative Knowledge Graph Construction (KGC) refers to those methods that leverage the
sequence-to-sequence framework for building knowledge graphs, which is flexible and can …
Contrastive triple extraction with generative transformer
Triple extraction is an essential task in information extraction for natural language
processing and knowledge graph construction. In this paper, we revisit the end-to-end triple …