Backdoor attacks and defenses targeting multi-domain ai models: A comprehensive review
Since the emergence of security concerns in artificial intelligence (AI), there has been
significant attention devoted to the examination of backdoor attacks. Attackers can utilize …
Privacy in large language models: Attacks, defenses and future directions
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …
BackdoorLLM: A comprehensive benchmark for backdoor attacks on large language models
Generative Large Language Models (LLMs) have made significant strides across various
tasks, but they remain vulnerable to backdoor attacks, where specific triggers in the prompt …
CleanGen: Mitigating backdoor attacks for generation tasks in large language models
The remarkable performance of large language models (LLMs) in generation tasks has
enabled practitioners to leverage publicly available models to power custom applications …
Text-Tuple-Table: Towards information integration in text-to-table generation via global tuple extraction
The task of condensing large chunks of textual information into concise and structured tables
has gained attention recently due to the emergence of Large Language Models (LLMs) and …
NegotiationToM: A benchmark for stress-testing machine theory of mind on negotiation surrounding
Large Language Models (LLMs) have sparked substantial interest and debate concerning
their potential emergence of Theory of Mind (ToM) ability. Theory of mind evaluations …
BEEAR: Embedding-based adversarial removal of safety backdoors in instruction-tuned language models
Safety backdoor attacks in large language models (LLMs) enable the stealthy triggering of
unsafe behaviors while evading detection during normal interactions. The high …
Safety at Scale: A Comprehensive Survey of Large Model Safety
The rapid advancement of large models, driven by their exceptional abilities in learning and
generalization through large-scale pre-training, has reshaped the landscape of Artificial …
PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning
Preference learning is a central component for aligning current LLMs, but this process can
be vulnerable to data poisoning attacks. To address this concern, we introduce …
ECON: On the Detection and Resolution of Evidence Conflicts
The rise of large language models (LLMs) has significantly influenced the quality of
information in decision-making systems, leading to the prevalence of AI-generated content …