Watermark stealing in large language models

N Jovanović, R Staab, M Vechev - arXiv preprint arXiv:2402.19361, 2024 - arxiv.org
LLM watermarking has attracted attention as a promising way to detect AI-generated
content, with some works suggesting that current schemes may already be fit for …

The Imitation Game revisited: A comprehensive survey on recent advances in AI-generated text detection

Z Yang, Z Feng, R Huo, H Lin, H Zheng, R Nie… - Expert Systems with …, 2025 - Elsevier
In recent years, AI-generated text detection (AIGTD) has attracted more and more attention,
with numerous novel methodologies being proposed. However, most existing reviews on …

Survey on plagiarism detection in large language models: The impact of ChatGPT and Gemini on academic integrity

S Pudasaini, L Miralles-Pechuán, D Lillis… - arXiv preprint arXiv …, 2024 - arxiv.org
The rise of Large Language Models (LLMs) such as ChatGPT and Gemini has posed new
challenges for the academic community. With the help of these models, students can easily …

Stumbling blocks: Stress testing the robustness of machine-generated text detectors under attacks

Y Wang, S Feng, AB Hou, X Pu, C Shen, X Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
The widespread use of large language models (LLMs) is increasing the demand for
methods that detect machine-generated text to prevent misuse. The goal of our study is to …

SimLLM: Detecting sentences generated by large language models using similarity between the generation and its re-generation

HQ Nguyen-Son, MS Dao, K Zettsu - Proceedings of the 2024 …, 2024 - aclanthology.org
Large language models have emerged as a significant phenomenon due to their ability to
produce natural text across various applications. However, the proliferation of generated text …
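
As a rough illustration of the re-generation idea named in this title, the sketch below scores a sentence by how similar an LLM's re-generation of it is to the original; the regenerate callable and the difflib-based similarity are illustrative assumptions for demonstration, not the paper's actual components.

# Minimal sketch of the re-generation idea: score a sentence by how closely an
# LLM reproduces it when asked to re-generate it. `regenerate` and the surface
# similarity below are stand-ins, not the method described in the paper.
from difflib import SequenceMatcher
from typing import Callable

def similarity(a: str, b: str) -> float:
    # Simple surface-level similarity in [0, 1]; the paper may use a learned metric.
    return SequenceMatcher(None, a, b).ratio()

def simllm_score(sentence: str, regenerate: Callable[[str], str]) -> float:
    # Re-generate the sentence with an LLM and compare it with the original.
    return similarity(sentence, regenerate(sentence))

def is_llm_generated(sentence: str, regenerate: Callable[[str], str],
                     threshold: float = 0.6) -> bool:
    # Intuition from the title: LLM-written sentences tend to be re-generated
    # more faithfully than human-written ones, so higher similarity -> flag.
    return simllm_score(sentence, regenerate) >= threshold

if __name__ == "__main__":
    # Dummy regenerator standing in for an actual LLM call.
    dummy = lambda s: s.replace("significant", "notable")
    print(is_llm_generated("LLMs have emerged as a significant phenomenon.", dummy))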

Does DetectGPT fully utilize perturbation? Bridging selective perturbation to fine-tuned contrastive learning detector would be better

S Liu, X Liu, Y Wang, Z Cheng, C Li… - Proceedings of the …, 2024 - aclanthology.org
The burgeoning generative capabilities of large language models (LLMs) have raised
growing concerns about abuse, demanding automatic machine-generated text detectors …
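
For context on the perturbation signal this entry builds on, here is a hedged sketch of a DetectGPT-style perturbation discrepancy score; the log_prob and perturb callables stand in for a scoring LLM and a mask-filling perturbation model and are assumptions here, not the detector proposed in the paper.

# Hedged sketch of a perturbation-discrepancy score: compare a passage's
# log-likelihood with the mean log-likelihood of perturbed variants.
from statistics import mean, pstdev
from typing import Callable

def perturbation_discrepancy(text: str,
                             log_prob: Callable[[str], float],
                             perturb: Callable[[str], str],
                             n_perturbations: int = 20) -> float:
    # Machine-generated text tends to sit near a local maximum of the scoring
    # model's log-likelihood, so perturbations lower its score more sharply
    # than they do for human-written text.
    original = log_prob(text)
    perturbed_scores = [log_prob(perturb(text)) for _ in range(n_perturbations)]
    spread = pstdev(perturbed_scores) or 1.0  # avoid division by zero
    return (original - mean(perturbed_scores)) / spread

if __name__ == "__main__":
    # Toy stand-ins: a "model" that prefers short texts, a perturbation that appends noise.
    import random
    toy_log_prob = lambda s: -0.1 * len(s)
    toy_perturb = lambda s: s + " " + random.choice(["x", "yy", "zzz"])
    print(perturbation_discrepancy("An example passage to score.", toy_log_prob, toy_perturb))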

Learning to Rewrite: Generalized LLM-Generated Text Detection

W Hao, R Li, W Zhao, J Yang, C Mao - arXiv preprint arXiv:2408.04237, 2024 - arxiv.org
Large language models (LLMs) can be abused at scale to create non-factual content and
spread disinformation. Detecting LLM-generated content is essential to mitigate these risks …
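
A minimal sketch of the rewrite-based premise in this title follows, assuming a rewrite callable that stands in for the (possibly fine-tuned) rewriting model; the character-level edit-distance criterion is a simplification for illustration, not the paper's trained objective.

# Sketch of rewrite-based detection: ask an LLM to rewrite the input and
# measure how much it changes; small changes are taken as a machine signal.
from typing import Callable

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance over characters.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def rewrite_change_ratio(text: str, rewrite: Callable[[str], str]) -> float:
    # Lower ratios mean the rewriter barely altered the text, which the title's
    # premise associates with LLM-generated input.
    return levenshtein(text, rewrite(text)) / max(len(text), 1)

if __name__ == "__main__":
    noop_rewriter = lambda s: s  # stand-in for an actual LLM rewriting call
    print(rewrite_change_ratio("Detecting LLM-generated content is essential.", noop_rewriter))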

RAFT: Realistic Attacks to Fool Text Detectors

J Wang, R Li, J Yang, C Mao - arXiv preprint arXiv:2410.03658, 2024 - arxiv.org
Large language models (LLMs) have exhibited remarkable fluency across various tasks.
However, their unethical applications, such as disseminating disinformation, have become a …

A Metric-Based Detection System for Large Language Model Texts

L Le, D Tran - ACM Transactions on Management Information …, 2024 - dl.acm.org
More efforts are being put into improving Large Language Models' (LLM) capabilities than
into dealing with their implications. Current LLMs are able to generate high quality texts …
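
As a loose illustration of what a metric-based detector can look like, the sketch below computes a couple of cheap text statistics and applies a toy decision rule; the specific metrics and threshold are assumptions for demonstration, not the system described in this entry.

# Toy metric-based detection: simple text statistics fed to a decision rule.
import re
from statistics import pstdev

def text_metrics(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text.lower())
    sentence_lengths = [len(re.findall(r"\w+", s)) for s in sentences]
    return {
        # "Burstiness": variation in sentence length; human text often varies more.
        "sentence_length_std": pstdev(sentence_lengths) if len(sentence_lengths) > 1 else 0.0,
        # Lexical diversity: distinct words over total words.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

def looks_machine_generated(text: str) -> bool:
    # Toy decision rule standing in for a trained classifier over such metrics.
    m = text_metrics(text)
    return m["sentence_length_std"] < 3.0 and m["type_token_ratio"] < 0.6

if __name__ == "__main__":
    print(looks_machine_generated("Current LLMs generate high quality texts. They are fluent. They are coherent."))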

Survey on AI-Generated Media Detection: From Non-MLLM to MLLM

Y Zou, P Li, Z Li, H Huang, X Cui, X Liu… - arXiv preprint arXiv …, 2025 - arxiv.org
The proliferation of AI-generated media poses significant challenges to information
authenticity and social trust, making reliable detection methods urgently needed. Methods …