Benchmarking uncertainty quantification methods for large language models with lm-polygraph
Uncertainty quantification (UQ) is a critical component of machine learning (ML)
applications. The rapid proliferation of large language models (LLMs) has stimulated …
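For context, lm-polygraph exposes a small Python API for scoring individual LLM generations with different uncertainty estimators. The sketch below is a minimal usage example assuming an API along the lines of the project's published examples (a WhiteboxModel wrapper, an estimate_uncertainty helper, and an entropy-based estimator); exact module paths and class names are assumptions here and may differ between releases.

```python
# Minimal sketch: scoring a single generation with an uncertainty estimator
# via lm-polygraph. Module paths and class names follow the project's
# published examples and may vary between versions.
from lm_polygraph.utils.model import WhiteboxModel
from lm_polygraph.estimators import MeanTokenEntropy
from lm_polygraph.utils.manager import estimate_uncertainty

# Wrap a HuggingFace causal LM so estimators can access token-level logits.
model = WhiteboxModel.from_pretrained("bigscience/bloomz-560m", device="cpu")

# Information-based estimator: mean entropy of the output token
# distributions; higher values indicate a less confident generation.
estimator = MeanTokenEntropy()

# Generate an answer and compute its uncertainty score in one call.
result = estimate_uncertainty(model, estimator, input_text="Who is George Bush?")
print(result)  # contains the generated text and the uncertainty value
```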
LLM hallucination reasoning with zero-shot knowledge test
LLM hallucination, where LLMs occasionally generate unfaithful text, poses significant
challenges for their practical applications. Most existing detection methods rely on external …
Machine translation hallucination detection for low and high resource languages using large language models
K Benkirane, L Gongas, S Pelles, N Fuchs… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements in massively multilingual machine translation systems have
significantly enhanced translation accuracy; however, even the best performing systems still …
Literature Review of AI Hallucination Research Since the Advent of ChatGPT: Focusing on Papers from arXiv
DM Park, HJ Lee - Informatization Policy, 2024 - koreascience.kr
Hallucination is a significant barrier to the utilization of large-scale language models or
multimodal models. In this study, we collected 654 computer science papers with …