Challenges and applications of large language models

J Kaddour, J Harris, M Mozes, H Bradley… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) went from non-existent to ubiquitous in the machine
learning discourse within a few years. Due to the fast pace of the field, it is difficult to identify …

A survey of text watermarking in the era of large language models

A Liu, L Pan, Y Lu, J Li, X Hu, X Zhang, L Wen… - ACM Computing …, 2024 - dl.acm.org
Text watermarking algorithms are crucial for protecting the copyright of textual content.
Historically, their capabilities and application scenarios were limited. However, recent …

Identifying and mitigating the security risks of generative AI

C Barrett, B Boyd, E Bursztein, N Carlini… - … and Trends® in …, 2023 - nowpublishers.com
Every major technical invention resurfaces the dual-use dilemma—the new technology has
the potential to be used for good as well as for harm. Generative AI (GenAI) techniques, such …

Scalable watermarking for identifying large language model outputs

S Dathathri, A See, S Ghaisas, PS Huang, R McAdam… - Nature, 2024 - nature.com
Large language models (LLMs) have enabled the generation of high-quality synthetic text,
often indistinguishable from human-written content, at a scale that can markedly affect the …

Robust distortion-free watermarks for language models

R Kuditipudi, J Thickstun, T Hashimoto… - arXiv preprint arXiv …, 2023 - arxiv.org
We propose a methodology for planting watermarks in text from an autoregressive language
model that are robust to perturbations without changing the distribution over text up to a …

REMARK-LLM: A robust and efficient watermarking framework for generative large language models

R Zhang, SS Hussain, P Neekhara… - 33rd USENIX Security …, 2024 - usenix.org
We present REMARK-LLM, a novel, efficient, and robust watermarking framework designed
for texts generated by large language models (LLMs). Synthesizing human-like content …

Unbiased watermark for large language models

Z Hu, L Chen, X Wu, Y Wu, H Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent advancements in large language models (LLMs) have sparked growing
apprehension regarding their potential misuse. One approach to mitigating this risk is to …

Decoding academic integrity policies: A corpus linguistics investigation of AI and other technological threats

M Perkins, J Roe - Higher Education Policy, 2024 - Springer
This study presents a corpus analysis of academic integrity policies from Higher Education
Institutions (HEIs) worldwide, exploring how they address the issues posed by technological …

Provable robust watermarking for AI-generated text

X Zhao, P Ananth, L Li, YX Wang - arXiv preprint arXiv:2306.17439, 2023 - arxiv.org
As AI-generated text increasingly resembles human-written content, the ability to detect
machine-generated text becomes crucial. To address this challenge, we present …

Copyright protection in generative AI: A technical perspective

J Ren, H Xu, P He, Y Cui, S Zeng, J Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Generative AI has witnessed rapid advancement in recent years, expanding its
capabilities to create synthesized content such as text, images, audio, and code. The high …