A survey of hallucination in large foundation models
Hallucination in a foundation model (FM) refers to the generation of content that strays from
factual reality or includes fabricated information. This survey paper provides an extensive …
Survey on factuality in large language models: Knowledge, retrieval and domain-specificity
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …
Siren's song in the AI ocean: a survey on hallucination in large language models
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …
Survey of hallucination in natural language generation
Natural Language Generation (NLG) has improved exponentially in recent years thanks to
the development of sequence-to-sequence deep learning technologies such as Transformer …
Cognitive mirage: A review of hallucinations in large language models
As large language models continue to develop in the field of AI, text generation systems are
susceptible to a worrisome phenomenon known as hallucination. In this study, we …
Mitigating large language model hallucinations via autonomous knowledge graph-based retrofitting
Incorporating factual knowledge from knowledge graphs is regarded as a promising approach
for mitigating the hallucination of large language models (LLMs). Existing methods usually …
The dawn after the dark: An empirical study on factuality hallucination in large language models
In the era of large language models (LLMs), hallucination (i.e., the tendency to generate
factually incorrect content) poses a great challenge to trustworthy and reliable deployment of …
Contextcite: Attributing model generation to context
How do language models use information provided as context when generating a
response? Can we infer whether a particular generated statement is actually grounded in …
Risk taxonomy, mitigation, and assessment benchmarks of large language model systems
Large language models (LLMs) have strong capabilities in solving diverse natural language
processing tasks. However, the safety and security issues of LLM systems have become the …
Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models
Z Lin, S Guan, W Zhang, H Zhang, Y Li… - Artificial Intelligence …, 2024 - Springer
Recently, large language models (LLMs) have attracted considerable attention due to their
remarkable capabilities. However, LLMs' generation of biased or hallucinatory content …