Combating misinformation in the age of LLMs: Opportunities and challenges
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …
A survey of text watermarking in the era of large language models
Text watermarking algorithms are crucial for protecting the copyright of textual content.
Historically, their capabilities and application scenarios were limited. However, recent …
A survey on LLM-generated text detection: Necessity, methods, and future directions
The remarkable ability of large language models (LLMs) to comprehend, interpret, and
generate complex language has rapidly integrated LLM-generated text into various aspects …
Monitoring AI-modified content at scale: A case study on the impact of ChatGPT on AI conference peer reviews
We present an approach for estimating the fraction of text in a large corpus which is likely to
be substantially modified or produced by a large language model (LLM). Our maximum …
A survey on detection of LLMs-generated content
The burgeoning capabilities of advanced large language models (LLMs) such as ChatGPT
have led to an increase in synthetic content generation with implications across a variety of …
Authorship attribution in the era of LLMs: Problems, methodologies, and challenges
Accurate attribution of authorship is crucial for maintaining the integrity of digital content,
improving forensic investigations, and mitigating the risks of misinformation and plagiarism …
Watermark stealing in large language models
LLM watermarking has attracted attention as a promising way to detect AI-generated
content, with some works suggesting that current schemes may already be fit for …
Stumbling blocks: Stress testing the robustness of machine-generated text detectors under attacks
The widespread use of large language models (LLMs) is increasing the demand for
methods that detect machine-generated text to prevent misuse. The goal of our study is to …
SoK: Watermarking for AI-Generated Content
As the outputs of generative AI (GenAI) techniques improve in quality, it becomes
increasingly challenging to distinguish them from human-created content. Watermarking …
Adaptive text watermark for large language models
The advancement of Large Language Models (LLMs) has led to increasing concerns about
the misuse of AI-generated text, and watermarking for LLM-generated text has emerged as a …