A survey of text watermarking in the era of large language models

A Liu, L Pan, Y Lu, J Li, X Hu, X Zhang, L Wen… - ACM Computing …, 2024 - dl.acm.org
Text watermarking algorithms are crucial for protecting the copyright of textual content.
Historically, their capabilities and application scenarios were limited. However, recent …

Authorship attribution in the era of LLMs: Problems, methodologies, and challenges

B Huang, C Chen, K Shu - ACM SIGKDD Explorations Newsletter, 2025 - dl.acm.org
Accurate attribution of authorship is crucial for maintaining the integrity of digital content,
improving forensic investigations, and mitigating the risks of misinformation and plagiarism …

A watermark for large language models

J Kirchenbauer, J Geiping, Y Wen… - International …, 2023 - proceedings.mlr.press
Potential harms of large language models can be mitigated by watermarking model output,
i.e., embedding signals into generated text that are invisible to humans but algorithmically …
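The snippet above describes the core idea of logit-biasing watermarks: a pseudorandom "green list" of tokens, seeded by preceding context, is favored during generation, and a detector later counts how often tokens land in their green list. A minimal toy sketch of that idea (all names, the toy vocabulary, and the half-vocabulary split are assumptions for illustration, not the authors' implementation):

```python
import hashlib
import random

# Toy vocabulary for illustration only (assumption).
VOCAB = [f"tok{i}" for i in range(100)]

def green_list(prev_token: str, vocab=VOCAB):
    """Deterministically derive the 'green' half of the vocabulary
    from the previous token: hash it, seed a PRNG, shuffle, keep half."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = list(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: len(vocab) // 2])

def green_fraction(tokens):
    """Detection statistic: fraction of tokens (after the first) that
    fall inside the green list seeded by their predecessor."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    hits = sum(1 for prev, cur in pairs if cur in green_list(prev))
    return hits / len(pairs)
```

Unwatermarked text would hit its green lists about half the time; a generator that boosts green-token logits pushes this fraction well above 0.5, and a z-test on the fraction yields the detector.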

Can AI-generated text be reliably detected?

VS Sadasivan, A Kumar, S Balasubramanian… - arXiv preprint arXiv …, 2023 - arxiv.org
The unregulated use of LLMs can potentially lead to malicious consequences such as
plagiarism, generating fake news, spamming, etc. Therefore, reliable detection of AI …

The science of detecting LLM-generated text

R Tang, YN Chuang, X Hu - Communications of the ACM, 2024 - dl.acm.org
Communications of the ACM, Volume 67, Number 4 (2024), Pages 50-59 …

Identifying and mitigating the security risks of generative AI

C Barrett, B Boyd, E Bursztein, N Carlini… - … and Trends® in …, 2023 - nowpublishers.com
Every major technical invention resurfaces the dual-use dilemma—the new technology has
the potential to be used for good as well as for harm. Generative AI (GenAI) techniques, such …

Undetectable watermarks for language models

M Christ, S Gunn, O Zamir - The Thirty Seventh Annual …, 2024 - proceedings.mlr.press
Recent advances in the capabilities of large language models such as GPT-4 have spurred
increasing concern about our ability to detect AI-generated text. Prior works have suggested …

Provable robust watermarking for AI-generated text

X Zhao, P Ananth, L Li, YX Wang - arXiv preprint arXiv:2306.17439, 2023 - arxiv.org
We study the problem of watermarking large language models (LLMs) generated text--one
of the most promising approaches for addressing the safety challenges of LLM usage. In this …

Robust distortion-free watermarks for language models

R Kuditipudi, J Thickstun, T Hashimoto… - arXiv preprint arXiv …, 2023 - arxiv.org
We propose a methodology for planting watermarks in text from an autoregressive language
model that are robust to perturbations without changing the distribution over text up to a …

Unbiased watermark for large language models

Z Hu, L Chen, X Wu, Y Wu, H Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The recent advancements in large language models (LLMs) have sparked a growing
apprehension regarding the potential misuse. One approach to mitigating this risk is to …