Survey of hallucination in natural language generation

Z Ji, N Lee, R Frieske, T Yu, D Su, Y Xu, E Ishii… - ACM Computing …, 2023 - dl.acm.org
Natural Language Generation (NLG) has improved exponentially in recent years thanks to
the development of sequence-to-sequence deep learning technologies such as Transformer …

Towards mitigating hallucination in large language models via self-reflection

Z Ji, T Yu, Y Xu, N Lee, E Ishii, P Fung - arXiv preprint arXiv:2310.06271, 2023 - arxiv.org
Large language models (LLMs) have shown promise for generative and knowledge-
intensive tasks including question-answering (QA) tasks. However, the practical deployment …

Contrastive learning reduces hallucination in conversations

W Sun, Z Shi, S Gao, P Ren, M de Rijke… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Pre-trained language models (LMs) store knowledge in their parameters and can generate
informative responses when used in conversational systems. However, LMs suffer from the …

Think before you speak: Explicitly generating implicit commonsense knowledge for response generation

P Zhou, K Gopalakrishnan, B Hedayatnia, S Kim… - arXiv preprint arXiv …, 2021 - arxiv.org
Implicit knowledge, such as common sense, is key to fluid human conversations. Current
neural response generation (RG) models are trained to generate responses directly …

Deep learning for dialogue systems: Chit-chat and beyond

R Yan, J Li, Z Yu - Foundations and Trends® in Information …, 2022 - nowpublishers.com
With the rapid progress of deep neural models and the explosion of available data
resources, dialogue systems that support extensive topics and chit-chat conversations are …

Infusing internalized knowledge of language models into hybrid prompts for knowledgeable dialogue generation

J Bai, Z Yan, S Zhang, J Yang, H Guo, Z Li - Knowledge-Based Systems, 2024 - Elsevier
Existing knowledge-grounded dialogue (KGD) systems access knowledge from an
external knowledge base, then generate a context-coherent response accordingly …

Evaluating adapter-based knowledge-enhanced language models in the biomedical domain

A Fichtl - 2024 - wwwmatthes.in.tum.de
In the rapidly evolving field of biomedical natural language processing (BioNLP),
knowledge-enhanced language models (KELMs) have emerged as promising tools to bridge …

Stabilized in-context learning with pre-trained language models for few shot dialogue state tracking

D Chen, K Qian, Z Yu - arXiv preprint arXiv:2302.05932, 2023 - arxiv.org
Prompt-based methods with large pre-trained language models (PLMs) have shown
impressive unaided performance across many NLP tasks. These models improve even …

Knowing What to Say: Towards knowledge grounded code-mixed response generation for open-domain conversations

GV Singh, M Firdaus, S Mishra, A Ekbal - Knowledge-Based Systems, 2022 - Elsevier
Inculcating knowledge in dialogue agents is an important step towards making an
agent more human-like. Hence, the use of knowledge while conversing is crucial for building …

Yes, I am afraid of the sharks and also wild lions!: A multitask framework for enhancing dialogue generation via knowledge and emotion grounding

D Varshney, A Ekbal - Computer Speech & Language, 2024 - Elsevier
Current end-to-end neural conversation models inherently lack the capability to generate
coherently engaging responses. Efforts to boost informativeness have an adversarial effect …