Can large language models identify authorship?
The ability to accurately identify authorship is crucial for verifying content authenticity and
mitigating misinformation. Large Language Models (LLMs) have demonstrated an …
AgentReview: Exploring peer review dynamics with LLM agents
Peer review is fundamental to the integrity and advancement of scientific publication.
Traditional methods of peer review analysis often rely on exploration and statistics of …
Editing conceptual knowledge for large language models
Recently, there has been a growing interest in knowledge editing for Large Language
Models (LLMs). Current approaches and evaluations merely explore the instance-level …
Harmful fine-tuning attacks and defenses for large language models: A survey
Recent research demonstrates that the nascent fine-tuning-as-a-service business model
exposes serious safety concerns: fine-tuning on even a small amount of harmful data uploaded by users …
MLaKE: Multilingual knowledge editing benchmark for large language models
The extensive utilization of large language models (LLMs) underscores the crucial necessity
for precise and contemporary knowledge embedded within their intrinsic parameters …
Can Knowledge Editing Really Correct Hallucinations?
Large Language Models (LLMs) suffer from hallucinations, i.e., non-factual
information in generated content, despite their superior capabilities across tasks. Meanwhile …
Locking down the finetuned LLMs safety
Fine-tuning large language models (LLMs) on additional datasets is often necessary to
optimize them for specific downstream tasks. However, existing safety alignment measures …
Lisa: Lazy safety alignment for large language models against harmful fine-tuning attack
Recent studies show that Large Language Models (LLMs) with safety alignment can be
jailbroken by fine-tuning on a dataset mixed with harmful data. For the first time in the literature, we …
Retrieval-enhanced knowledge editing in language models for multi-hop question answering
Large Language Models (LLMs) have shown proficiency in question-answering tasks but
often struggle to integrate real-time knowledge, leading to potentially outdated or inaccurate …
Political-LLM: Large language models in political science
In recent years, large language models (LLMs) have been widely adopted in political
science tasks such as election prediction, sentiment analysis, policy impact assessment, and …