When Can LLMs Actually Correct Their Own Mistakes? A Critical Survey of Self-Correction of LLMs
Self-correction is an approach to improving responses from large language models (LLMs)
by refining the responses using LLMs during inference. Prior work has proposed various self …
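The snippet describes the recipe only at a high level: generate, critique, refine, all with the same model at inference time. As a rough illustration, here is a minimal Python sketch of such a loop. The `llm` stub and the critique/refine prompts are hypothetical placeholders, not the survey's method.

```python
# Minimal sketch of inference-time self-correction. `llm(prompt) -> str`
# is a hypothetical stand-in for any chat/completion API call.

def llm(prompt: str) -> str:
    """Placeholder for a real LLM completion call."""
    raise NotImplementedError

def self_correct(task: str, max_rounds: int = 2) -> str:
    response = llm(task)
    for _ in range(max_rounds):
        # Ask the same model to critique its own response.
        critique = llm(
            f"Task: {task}\nResponse: {response}\n"
            "List any factual or logical errors in the response. "
            "Reply 'NONE' if there are none."
        )
        if critique.strip().upper() == "NONE":
            break  # the model finds no remaining errors
        # Refine the response using the critique as feedback.
        response = llm(
            f"Task: {task}\nResponse: {response}\nCritique: {critique}\n"
            "Rewrite the response, fixing only the issues in the critique."
        )
    return response
```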
Cognitive mirage: A review of hallucinations in large language models
As large language models continue to develop in the field of AI, text generation systems are
susceptible to a worrisome phenomenon known as hallucination. In this study, we …
Is ChatGPT a general-purpose natural language processing task solver?
Spurred by advancements in scale, large language models (LLMs) have demonstrated the
ability to perform a variety of natural language processing (NLP) tasks zero-shot, i.e., without …
Siren's song in the AI ocean: a survey on hallucination in large language models
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …
A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions
The emergence of large language models (LLMs) has marked a significant breakthrough in
natural language processing (NLP), fueling a paradigm shift in information acquisition …
Chain-of-verification reduces hallucination in large language models
Generation of plausible yet incorrect factual information, termed hallucination, is an
unsolved issue in large language models. We study the ability of language models to …
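For a concrete sense of the pipeline the title names, here is a hedged Python sketch of the chain-of-verification idea: draft an answer, plan verification questions, answer them independently of the draft, then revise. The `llm` stub, the prompts, and the `chain_of_verification` helper are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of a chain-of-verification pipeline.
# `llm(prompt) -> str` is the same hypothetical completion stub as above.

def llm(prompt: str) -> str:
    """Placeholder for a real LLM completion call."""
    raise NotImplementedError

def chain_of_verification(query: str) -> str:
    draft = llm(query)
    # Plan one verification question per factual claim in the draft.
    questions = llm(
        f"Question: {query}\nDraft answer: {draft}\n"
        "Write one verification question per factual claim, one per line."
    ).splitlines()
    # Answer each question in isolation so the draft cannot bias the check.
    checks = [(q, llm(q)) for q in questions if q.strip()]
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in checks)
    # Revise the draft so it is consistent with the verified evidence.
    return llm(
        f"Question: {query}\nDraft answer: {draft}\n"
        f"Verification results:\n{evidence}\n"
        "Write a final answer consistent with the verification results."
    )
```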
Hallucination is inevitable: An innate limitation of large language models
Hallucination has been widely recognized to be a significant drawback for large language
models (LLMs). There have been many works that attempt to reduce the extent of …
Large language models and knowledge graphs: Opportunities and challenges
Large Language Models (LLMs) have taken Knowledge Representation (and the world) by
storm. This inflection point marks a shift from explicit knowledge representation to a renewed …
Halo: Estimation and reduction of hallucinations in open-source weak large language models
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP).
Although convenient for research and practical applications, open-source LLMs with fewer …
Towards benchmarking and improving the temporal reasoning capability of large language models
Reasoning about time is of fundamental importance. Many facts are time-dependent. For
example, athletes change teams from time to time, and different government officials are …