A probabilistic framework for LLM hallucination detection via belief tree propagation
This paper focuses on the task of hallucination detection, which aims to determine the
truthfulness of LLM-generated statements. To address this problem, a popular class of …
Embedding and Gradient Say Wrong: A White-Box Method for Hallucination Detection
In recent years, large language models (LLMs) have achieved remarkable success in the
field of natural language generation. Compared to previous small-scale models, they are …
Benchmarking LLMs in Political Content Text-Annotation: Proof-of-Concept with Toxicity and Incivility Data
B González-Bustamante - arXiv preprint arXiv:2409.09741, 2024 - arxiv.org
This article benchmarked the ability of OpenAI's GPTs and a number of open-source LLMs to
perform annotation tasks on political content. We used a novel protest event dataset …
Decoding Knowledge in Large Language Models: A Framework for Categorization and Comprehension
Y Fang, R Tang - arXiv preprint arXiv:2501.01332, 2025 - arxiv.org
Understanding how large language models (LLMs) acquire, retain, and apply knowledge
remains an open challenge. This paper introduces a novel framework, K-(CSA)^2, which …
ANAH-v2: Scaling analytical hallucination annotation of large language models
Large language models (LLMs) exhibit hallucinations in long-form question-answering tasks
across various domains and wide applications. Current hallucination detection and …