Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - AI Magazine, 2024 - Wiley Online Library
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …

A survey of text watermarking in the era of large language models

A Liu, L Pan, Y Lu, J Li, X Hu, X Zhang, L Wen… - ACM Computing …, 2024 - dl.acm.org
Text watermarking algorithms are crucial for protecting the copyright of textual content.
Historically, their capabilities and application scenarios were limited. However, recent …

A survey on LLM-generated text detection: Necessity, methods, and future directions

J Wu, S Yang, R Zhan, Y Yuan, LS Chao… - Computational …, 2025 - direct.mit.edu
The remarkable ability of large language models (LLMs) to comprehend, interpret, and
generate complex language has rapidly integrated LLM-generated text into various aspects …

Monitoring AI-modified content at scale: A case study on the impact of ChatGPT on AI conference peer reviews

W Liang, Z Izzo, Y Zhang, H Lepp, H Cao… - arXiv preprint arXiv …, 2024 - arxiv.org
We present an approach for estimating the fraction of text in a large corpus which is likely to
be substantially modified or produced by a large language model (LLM). Our maximum …

A survey on detection of LLMs-generated content

X Yang, L Pan, X Zhao, H Chen, L Petzold… - arXiv preprint arXiv …, 2023 - arxiv.org
The burgeoning capabilities of advanced large language models (LLMs) such as ChatGPT
have led to an increase in synthetic content generation with implications across a variety of …

Authorship attribution in the era of LLMs: Problems, methodologies, and challenges

B Huang, C Chen, K Shu - ACM SIGKDD Explorations Newsletter, 2025 - dl.acm.org
Accurate attribution of authorship is crucial for maintaining the integrity of digital content,
improving forensic investigations, and mitigating the risks of misinformation and plagiarism …

Watermark stealing in large language models

N Jovanović, R Staab, M Vechev - arXiv preprint arXiv:2402.19361, 2024 - arxiv.org
LLM watermarking has attracted attention as a promising way to detect AI-generated
content, with some works suggesting that current schemes may already be fit for …

Stumbling blocks: Stress testing the robustness of machine-generated text detectors under attacks

Y Wang, S Feng, AB Hou, X Pu, C Shen, X Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
The widespread use of large language models (LLMs) is increasing the demand for
methods that detect machine-generated text to prevent misuse. The goal of our study is to …

SoK: Watermarking for AI-Generated Content

X Zhao, S Gunn, M Christ, J Fairoze, A Fabrega… - arXiv preprint arXiv …, 2024 - arxiv.org
As the outputs of generative AI (GenAI) techniques improve in quality, it becomes
increasingly challenging to distinguish them from human-created content. Watermarking …

Adaptive text watermark for large language models

Y Liu, Y Bu - arXiv preprint arXiv:2401.13927, 2024 - arxiv.org
The advancement of Large Language Models (LLMs) has led to increasing concerns about
the misuse of AI-generated text, and watermarking for LLM-generated text has emerged as a …