Survey of hallucination in natural language generation

Z Ji, N Lee, R Frieske, T Yu, D Su, Y Xu, E Ishii… - ACM Computing …, 2023 - dl.acm.org
Natural Language Generation (NLG) has improved exponentially in recent years thanks to
the development of sequence-to-sequence deep learning technologies such as Transformer …

Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing

P Liu, W Yuan, J Fu, Z Jiang, H Hayashi… - ACM Computing …, 2023 - dl.acm.org
This article surveys and organizes research works in a new paradigm in natural language
processing, which we dub “prompt-based learning.” Unlike traditional supervised learning …
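
To make the paradigm concrete: in prompt-based learning a task is recast as a fill-in-the-blank query so a pretrained masked LM can answer it directly. Below is a minimal sketch of that cloze-style recipe; the template and verbalizer words are illustrative choices, not taken from the survey.

```python
# Cloze-style prompting sketch: recast sentiment classification as mask filling.
# The template and label words (the "verbalizer") are illustrative assumptions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def classify_sentiment(review: str) -> str:
    # Wrap the input in a prompt template containing the model's mask token.
    prompt = f"{review} Overall, it was [MASK]."
    # Map predicted fill words back to task labels.
    verbalizer = {"great": "positive", "good": "positive",
                  "terrible": "negative", "bad": "negative"}
    for prediction in fill_mask(prompt):
        token = prediction["token_str"].strip()
        if token in verbalizer:
            return verbalizer[token]
    return "unknown"

print(classify_sentiment("The plot was dull and the acting was worse."))
```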

Siren's song in the AI ocean: a survey on hallucination in large language models

Y Zhang, Y Li, L Cui, D Cai, L Liu, T Fu… - arXiv preprint arXiv…, 2023 - arxiv.org
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …

Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment

Y Liu, Y Yao, JF Ton, X Zhang, R Guo, H Cheng… - arXiv preprint arXiv…, 2023 - arxiv.org
Ensuring alignment, which refers to making models behave in accordance with human
intentions [1, 2], has become a critical task before deploying large language models (LLMs) …

Factuality enhanced language models for open-ended text generation

N Lee, W Ping, P Xu, M Patwary… - Advances in …, 2022 - proceedings.neurips.cc
Pretrained language models (LMs) are susceptible to generating text with nonfactual
information. In this work, we measure and improve the factual accuracy of large-scale LMs …
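
One of the decoding-side remedies this paper proposes is factual-nucleus sampling: the top-p mass decays as a sentence progresses (the paper observes hallucination risk grows later in a sentence) and resets at each sentence start. A rough sketch of that schedule plus plain nucleus sampling follows; the values of p, the decay, and the floor are illustrative, not the paper's tuned settings.

```python
import numpy as np

def factual_top_p(step_in_sentence: int, p: float = 0.9,
                  decay: float = 0.9, floor: float = 0.3) -> float:
    """Top-p for the t-th token of a sentence: p * decay**(t-1), never below
    a floor; callers reset step_in_sentence to 1 at every sentence start."""
    return max(p * decay ** (step_in_sentence - 1), floor)

def nucleus_sample(logits: np.ndarray, top_p: float,
                   rng: np.random.Generator) -> int:
    """Sample a token id from the smallest token set covering top_p mass."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                  # tokens by descending prob
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    nucleus = order[:cutoff]
    return int(rng.choice(nucleus, p=probs[nucleus] / probs[nucleus].sum()))
```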

Trusting your evidence: Hallucinate less with context-aware decoding

W Shi, X Han, M Lewis, Y Tsvetkov… - arXiv preprint arXiv…, 2023 - arxiv.org
Language models (LMs) often struggle to pay enough attention to the input context, and
generate texts that are unfaithful or contain hallucinations. To mitigate this issue, we present …
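
The method named in the title, context-aware decoding, runs the LM twice per step, once with and once without the input context, and boosts tokens the context makes more likely: p(y) ∝ p(y|c,x) * (p(y|c,x) / p(y|x))^alpha. A minimal sketch of that logit adjustment follows; the alpha value is an illustrative choice.

```python
import numpy as np

def context_aware_logits(logits_with_context: np.ndarray,
                         logits_without_context: np.ndarray,
                         alpha: float = 0.5) -> np.ndarray:
    """Contrast the two next-token distributions: in logit space the
    reweighting p(y|c,x) * (p(y|c,x) / p(y|x))**alpha becomes
    (1 + alpha) * logits(y | context, prompt) - alpha * logits(y | prompt)."""
    return (1 + alpha) * logits_with_context - alpha * logits_without_context

# Greedy decoding would then take argmax over the adjusted logits each step.
```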

A survey of knowledge-enhanced text generation

W Yu, C Zhu, Z Li, Z Hu, Q Wang, H Ji… - ACM Computing …, 2022 - dl.acm.org
The goal of text-to-text generation is to make machines express themselves like humans in
many applications such as conversation, summarization, and translation. It is one of the most …

Factual error correction for abstractive summarization models

M Cao, Y Dong, J Wu, JCK Cheung - arXiv preprint arXiv:2010.08712, 2020 - arxiv.org
Neural abstractive summarization systems have achieved promising progress, thanks to the
availability of large-scale datasets and models pre-trained with self-supervised methods …

Generative knowledge graph construction: A review

H Ye, N Zhang, H Chen, H Chen - arXiv preprint arXiv:2210.12714, 2022 - arxiv.org
Generative Knowledge Graph Construction (KGC) refers to methods that leverage the
sequence-to-sequence framework for building knowledge graphs, which is flexible and can …
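
The core move in this family of methods is to flatten a knowledge graph into a string so an off-the-shelf seq2seq model can emit it, then parse generations back into triples. A minimal sketch of one such linearization; the delimiter scheme is an illustrative choice, as the surveyed papers differ on this point.

```python
# Linearize/parse helpers for generative KGC; a seq2seq model (e.g. a
# fine-tuned T5) would be trained to map source text to the linearized target.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

def linearize(triples: List[Triple]) -> str:
    """Flatten triples into "( head | relation | tail )" segments."""
    return " [SEP] ".join(f"( {h} | {r} | {t} )" for h, r, t in triples)

def parse(generated: str) -> List[Triple]:
    """Recover triples from generated text, skipping malformed segments."""
    triples = []
    for segment in generated.split("[SEP]"):
        parts = [p.strip(" ()") for p in segment.split("|")]
        if len(parts) == 3 and all(parts):
            triples.append(tuple(parts))
    return triples

target = linearize([("Marie Curie", "born_in", "Warsaw"),
                    ("Marie Curie", "field", "physics")])
assert parse(target) == [("Marie Curie", "born_in", "Warsaw"),
                         ("Marie Curie", "field", "physics")]
```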

Contrastive triple extraction with generative transformer

H Ye, N Zhang, S Deng, M Chen, C Tan… - Proceedings of the …, 2021 - ojs.aaai.org
Triple extraction is an essential task in information extraction for natural language
processing and knowledge graph construction. In this paper, we revisit the end-to-end triple …
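
For context, the contrastive element here means training the generator to score genuine triples above corrupted ones. The paper's exact objective isn't reproduced in this snippet, so the sketch below shows only a generic margin-based contrastive loss over triple scores as a stand-in, not the paper's formulation.

```python
import numpy as np

def margin_contrastive_loss(pos_scores: np.ndarray,
                            neg_scores: np.ndarray,
                            margin: float = 1.0) -> float:
    """Hinge loss pushing scores of genuine triples above scores of
    corrupted (e.g. entity-swapped) triples by at least `margin`."""
    return float(np.maximum(0.0, margin - pos_scores + neg_scores).mean())

# Example: well-separated positive and negative scores incur near-zero loss.
rng = np.random.default_rng(0)
pos = rng.normal(2.0, 0.1, size=8)   # scores for true triples
neg = rng.normal(-2.0, 0.1, size=8)  # scores for corrupted triples
print(margin_contrastive_loss(pos, neg))  # ~0.0
```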