Large language models for generative information extraction: A survey

D Xu, W Chen, W Peng, C Zhang, T Xu, X Zhao… - Frontiers of Computer …, 2024 - Springer
Information Extraction (IE) aims to extract structural knowledge from plain natural
language texts. Recently, generative Large Language Models (LLMs) have demonstrated …

A survey of text watermarking in the era of large language models

A Liu, L Pan, Y Lu, J Li, X Hu, X Zhang, L Wen… - ACM Computing …, 2024 - dl.acm.org
Text watermarking algorithms are crucial for protecting the copyright of textual content.
Historically, their capabilities and application scenarios were limited. However, recent …

When foundation model meets federated learning: Motivations, challenges, and future directions

W Zhuang, C Chen, L Lyu - arXiv preprint arXiv:2306.15546, 2023 - arxiv.org
The intersection of the Foundation Model (FM) and Federated Learning (FL) provides mutual
benefits, presents a unique opportunity to unlock new possibilities in AI research, and …

A pathway towards responsible AI generated content

C Chen, J Fu, L Lyu - arXiv preprint arXiv:2303.01325, 2023 - arxiv.org
AI Generated Content (AIGC) has received tremendous attention within the past few years,
with content generated in formats including image, text, audio, and video. Meanwhile, AIGC has …

Watermark stealing in large language models

N Jovanović, R Staab, M Vechev - arXiv preprint arXiv:2402.19361, 2024 - arxiv.org
LLM watermarking has attracted attention as a promising way to detect AI-generated
content, with some works suggesting that current schemes may already be fit for …

Copyright protection in generative AI: A technical perspective

J Ren, H Xu, P He, Y Cui, S Zeng, J Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Generative AI has witnessed rapid advancement in recent years, expanding its
capabilities to create synthesized content such as text, images, audio, and code. The high …

Navigating LLM ethics: Advancements, challenges, and future directions

J Jiao, S Afroogh, Y Xu, C Phillips - arXiv preprint arXiv:2406.18841, 2024 - arxiv.org
This study addresses ethical issues surrounding Large Language Models (LLMs) within the
field of artificial intelligence. It explores the common ethical challenges posed by both LLMs …

Watermarking makes language models radioactive

T Sander, P Fernandez, A Durmus… - Advances in Neural …, 2025 - proceedings.neurips.cc
We investigate the radioactivity of text generated by large language models (LLMs), i.e.,
whether it is possible to detect that such synthetic input was used to train a subsequent LLM …

Securing large language models: Threats, vulnerabilities and responsible practices

S Abdali, R Anarfi, CJ Barberan, J He - arXiv preprint arXiv:2403.12503, 2024 - arxiv.org
Large language models (LLMs) have significantly transformed the landscape of Natural
Language Processing (NLP). Their impact extends across a diverse spectrum of tasks …

A survey of backdoor attacks and defenses on large language models: Implications for security measures

S Zhao, M Jia, Z Guo, L Gan, X Xu, X Wu, J Fu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs), which bridge the gap between human language
understanding and complex problem-solving, achieve state-of-the-art performance on …