Survey of hallucination in natural language generation
Natural Language Generation (NLG) has improved exponentially in recent years thanks to
the development of sequence-to-sequence deep learning technologies such as Transformer …
Knowledge graphs meet multi-modal learning: A comprehensive survey
Knowledge Graphs (KGs) play a pivotal role in advancing various AI applications, with the
semantic web community's exploration into multi-modal dimensions unlocking new avenues …
A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions
The emergence of large language models (LLMs) has marked a significant breakthrough in
natural language processing (NLP), fueling a paradigm shift in information acquisition …
A survey on knowledge distillation of large language models
In the era of Large Language Models (LLMs), Knowledge Distillation (KD) emerges as a
pivotal methodology for transferring advanced capabilities from leading proprietary LLMs …
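For context on the objective this survey covers: the standard white-box distillation recipe trains a student to match the teacher's token distribution. A minimal PyTorch sketch, assuming teacher and student share a vocabulary and that both logit tensors come from a forward pass on the same batch; all names here are illustrative, not from the survey.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    token distributions. Shapes: (batch, seq_len, vocab_size)."""
    # Soften both distributions; a higher temperature exposes the
    # teacher's relative preferences among non-argmax tokens.
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # KL(teacher || student); the T^2 factor keeps gradient magnitudes
    # comparable across temperatures (Hinton et al., 2015).
    kl = F.kl_div(s_log_probs, t_probs, reduction="batchmean")
    return kl * temperature ** 2
```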
Does fine-tuning LLMs on new knowledge encourage hallucinations?
When large language models are aligned via supervised fine-tuning, they may encounter
new factual information that was not acquired through pre-training. It is often conjectured that …
Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration
Despite efforts to expand the knowledge of large language models (LLMs), knowledge
gaps--missing or outdated information in LLMs--might always persist given the evolving nature of …
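The simplest instantiation of the multi-LLM idea in this paper is agreement-based abstention: if independent models disagree on a question, treat that as evidence of a knowledge gap and decline to answer. A hypothetical sketch, with the `models` callables standing in for any LLM API; the paper itself explores richer cooperative and competitive protocols.

```python
from collections import Counter
from typing import Callable, Optional

def answer_or_abstain(question: str,
                      models: list[Callable[[str], str]],
                      min_agreement: float = 0.67) -> Optional[str]:
    """Ask several independent LLMs the same question and answer only
    when a large enough fraction of them agree; otherwise abstain."""
    answers = [m(question).strip().lower() for m in models]
    best, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= min_agreement:
        return best
    return None  # disagreement suggests a knowledge gap: abstain
```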
Unfamiliar finetuning examples control how language models hallucinate
Large language models are known to hallucinate when faced with unfamiliar queries, but
the underlying mechanisms that govern how models hallucinate are not yet fully understood …
Can AI assistants know what they don't know?
Recently, AI assistants based on large language models (LLMs) have shown surprising
performance in many tasks, such as dialogue, solving math problems, writing code, and …
Alleviating hallucinations of large language models through induced hallucinations
Despite their impressive capabilities, large language models (LLMs) have been observed to
generate responses that include inaccurate or fabricated information, a phenomenon …
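The induce-then-contrast idea behind this paper can be sketched as contrastive decoding against a deliberately hallucination-prone copy of the model: tokens the induced model favors get penalized. The amplification form below follows standard contrastive decoding and is an assumption for illustration, not the paper's exact formula.

```python
import torch

def contrast_logits(base_logits: torch.Tensor,
                    induced_logits: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
    """Penalize next-token candidates favored by a hallucination-induced
    model. Both inputs have shape (vocab_size,)."""
    return (1 + alpha) * base_logits - alpha * induced_logits

# Greedy decoding step with the contrasted distribution, e.g.:
#   next_token = torch.argmax(contrast_logits(z_base, z_induced))
```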
The art of saying no: Contextual noncompliance in language models
Chat-based language models are designed to be helpful, yet they should not comply with
every user request. While most existing work primarily focuses on refusal of "unsafe" …