Cognitive mirage: A review of hallucinations in large language models
As large language models continue to develop in the field of AI, text generation systems are
susceptible to a worrisome phenomenon known as hallucination. In this study, we …
Factuality challenges in the era of large language models and opportunities for fact-checking
The emergence of tools based on large language models (LLMs), such as OpenAI's
ChatGPT and Google's Gemini, has garnered immense public attention owing to their …
TrustLLM: Trustworthiness in large language models
Large language models (LLMs), exemplified by ChatGPT, have gained considerable
attention for their excellent natural language processing capabilities. Nonetheless, these …
Woodpecker: Hallucination correction for multimodal large language models
Hallucination is a big shadow hanging over the rapidly evolving multimodal large language
models (MLLMs), referring to cases where the generated text is inconsistent with the image content …
Position: TrustLLM: Trustworthiness in large language models
Large language models (LLMs) have gained considerable attention for their excellent
natural language processing capabilities. Nonetheless, these LLMs present many …
Factuality challenges in the era of large language models
The emergence of tools based on Large Language Models (LLMs), such as OpenAI's
ChatGPT, Microsoft's Bing Chat, and Google's Bard, has garnered immense public attention …
Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models
Z Lin, S Guan, W Zhang, H Zhang, Y Li… - Artificial Intelligence …, 2024 - Springer
Recently, large language models (LLMs) have attracted considerable attention due to their
remarkable capabilities. However, LLMs' generation of biased or hallucinatory content …
Self-checker: Plug-and-play modules for fact-checking with large language models
Fact-checking is an essential task in NLP that is commonly utilized for validating the factual
accuracy of claims. Prior work has mainly focused on fine-tuning pre-trained language …
Risk taxonomy, mitigation, and assessment benchmarks of large language model systems
Large language models (LLMs) have strong capabilities in solving diverse natural language
processing tasks. However, the safety and security issues of LLM systems have become the …
Zero-resource hallucination prevention for large language models
The prevalent use of large language models (LLMs) in various domains has drawn attention
to the issue of "hallucination," which refers to instances where LLMs generate factually …