A survey on large language model (llm) security and privacy: The good, the bad, and the ugly
Abstract Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized
natural language understanding and generation. They possess deep language …
Combating misinformation in the age of llms: Opportunities and challenges
C Chen, K Shu - AI Magazine, 2024 - Wiley Online Library
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …
Llm evaluators recognize and favor their own generations
A Panickssery, S Bowman… - Advances in Neural …, 2025 - proceedings.neurips.cc
Self-evaluation using large language models (LLMs) has proven valuable not only in
benchmarking but also methods like reward modeling, constitutional AI, and self-refinement …
Scalable watermarking for identifying large language model outputs
Large language models (LLMs) have enabled the generation of high-quality synthetic text,
often indistinguishable from human-written content, at a scale that can markedly affect the …
On protecting the data privacy of large language models (llms): A survey
Large language models (LLMs) are complex artificial intelligence systems capable of
understanding, generating and translating human language. They learn language patterns …
Watermarks in the sand: Impossibility of strong watermarking for generative models
Watermarking generative models consists of planting a statistical signal (watermark) in a
model's output so that it can be later verified that the output was generated by the given …
Detecting multimedia generated by large ai models: A survey
The rapid advancement of Large AI Models (LAIMs), particularly diffusion models and large
language models, has marked a new era where AI-generated multimedia is increasingly …
Reviewing the performance of AI detection tools in differentiating between AI-generated and human-written texts: A literature and integrative hybrid review
C Chaka - Journal of Applied Learning and Teaching, 2024 - researchgate.net
Since the launch of ChatGPT on 30 November 2022, much research, both academic and
non-academic papers, and numerous preprints have been published on the multiple uses …
A robust semantics-based watermark for large language model against paraphrasing
Large language models (LLMs) have shown great ability in various natural language tasks.
However, there are concerns that LLMs may be used improperly or even illegally …
Can large language models identify authorship?
The ability to accurately identify authorship is crucial for verifying content authenticity and
mitigating misinformation. Large Language Models (LLMs) have demonstrated an …