Survey of hallucination in natural language generation
Natural Language Generation (NLG) has improved exponentially in recent years thanks to
the development of sequence-to-sequence deep learning technologies such as Transformer …
Evaluating large language models: A comprehensive survey
Large language models (LLMs) have demonstrated remarkable capabilities across a broad
spectrum of tasks. They have attracted significant attention and been deployed in numerous …
Trusting your evidence: Hallucinate less with context-aware decoding
Abstract Language models (LMs) often struggle to pay enough attention to the input context,
and generate texts that are unfaithful or contain hallucinations. To mitigate this issue, we …
Factuality enhanced language models for open-ended text generation
Pretrained language models (LMs) are susceptible to generating text with nonfactual
information. In this work, we measure and improve the factual accuracy of large-scale LMs …
Evaluating human-language model interaction
Many real-world applications of language models (LMs), such as writing assistance and
code autocomplete, involve human-LM interaction. However, most benchmarks are non …
MiniCheck: Efficient fact-checking of LLMs on grounding documents
Recognizing if LLM output can be grounded in evidence is central to many tasks in NLP:
retrieval-augmented generation, summarization, document-grounded dialogue, and more …
Factually consistent summarization via reinforcement learning with textual entailment feedback
Despite the seeming success of contemporary grounded text generation systems, they often
tend to generate factually inconsistent text with respect to their input. This phenomenon is …
Contrastive learning reduces hallucination in conversations
Pre-trained language models (LMs) store knowledge in their parameters and can generate
informative responses when used in conversational systems. However, LMs suffer from the …
SummEdits: Measuring LLM ability at factual reasoning through the lens of summarization
With the recent appearance of LLMs in practical settings, having methods that can effectively
detect factual inconsistencies is crucial to reduce the propagation of misinformation and …
FaithDial: A Faithful Benchmark for Information-Seeking Dialogue
The goal of information-seeking dialogue is to respond to seeker queries with natural
language utterances that are grounded on knowledge sources. However, dialogue systems …