Security and privacy challenges of large language models: A survey

BC Das, MH Amini, Y Wu - ACM Computing Surveys, 2024 - dl.acm.org
Large language models (LLMs) have demonstrated extraordinary capabilities and
contributed to multiple fields, such as generating and summarizing text, language …

LLM-based edge intelligence: A comprehensive survey on architectures, applications, security and trustworthiness

O Friha, MA Ferrag, B Kantarci… - IEEE Open Journal …, 2024 - ieeexplore.ieee.org
The integration of Large Language Models (LLMs) and Edge Intelligence (EI) introduces a
groundbreaking paradigm for intelligent edge devices. With their capacity for human-like …

What does it mean for a language model to preserve privacy?

H Brown, K Lee, F Mireshghallah, R Shokri… - Proceedings of the 2022 …, 2022 - dl.acm.org
Natural language reflects our private lives and identities, making its privacy concerns as
broad as those of real life. Language models lack the ability to understand the context and …

Flocks of stochastic parrots: Differentially private prompt learning for large language models

H Duan, A Dziedzic, N Papernot… - Advances in Neural …, 2024 - proceedings.neurips.cc
Large language models (LLMs) are excellent in-context learners. However, the sensitivity of
data contained in prompts raises privacy concerns. Our work first shows that these concerns …
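
For context, a minimal Python sketch of the PATE-style noisy-majority-vote aggregation that underlies differentially private prompt ensembles: each "teacher" is the model conditioned on a disjoint subset of private demonstrations, and only a noisy aggregate of their votes is released. Function names and the noise scale are illustrative, not the paper's implementation.

import numpy as np

def noisy_majority_vote(teacher_votes, num_classes, noise_scale=2.0, rng=None):
    # Count each teacher's predicted class, add Laplace noise to the
    # vote histogram, and release only the noisy winner; one private
    # demonstration can change at most one teacher's vote.
    rng = rng or np.random.default_rng()
    counts = np.bincount(np.asarray(teacher_votes), minlength=num_classes)
    noisy = counts + rng.laplace(scale=noise_scale, size=num_classes)
    return int(np.argmax(noisy))

# Example: 20 prompt-conditioned teachers vote on a 3-class query.
label = noisy_majority_vote([0]*12 + [1]*6 + [2]*2, num_classes=3)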

Detecting pretraining data from large language models

W Shi, A Ajith, M Xia, Y Huang, D Liu, T Blevins… - arXiv preprint arXiv …, 2023 - arxiv.org
Although large language models (LLMs) are widely deployed, the data used to train them is
rarely disclosed. Given the incredible scale of this data, up to trillions of tokens, it is all but …
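
The detection signal this paper proposes (Min-K% Prob) can be sketched in a few lines: average the log-probabilities of the k% least likely tokens in a passage, since texts seen during pretraining tend to contain fewer very-low-probability tokens. The log-probability values below are illustrative.

import numpy as np

def min_k_percent_score(token_log_probs, k=0.2):
    # Sort per-token log p(x_t | x_<t) ascending and average the bottom
    # k% fraction; a higher score suggests the text was seen in training.
    lp = np.sort(np.asarray(token_log_probs, dtype=float))
    n = max(1, int(len(lp) * k))
    return float(lp[:n].mean())

# Flag a passage as likely pretraining data when the score exceeds a
# threshold calibrated on known members and non-members.
print(min_k_percent_score([-0.3, -1.2, -6.5, -0.8, -4.1, -0.2, -2.7]))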

Quantifying privacy risks of masked language models using membership inference attacks

F Mireshghallah, K Goyal, A Uniyal… - arXiv preprint arXiv …, 2022 - arxiv.org
The wide adoption and application of masked language models (MLMs) on sensitive data
(from legal to medical) necessitates a thorough quantitative investigation into their privacy …
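
A hedged sketch of the likelihood-ratio style of membership test used in this line of work: compare the target model's loss on a candidate sample against that of a reference model trained on similar public data. The loss callables here are hypothetical stand-ins, not a specific library API.

def lr_membership_score(text, target_loss, reference_loss):
    # target_loss / reference_loss: hypothetical callables returning a
    # model's per-token cross-entropy on the text. Training members tend
    # to score lower under the target model than the reference predicts.
    return reference_loss(text) - target_loss(text)

def is_member(text, target_loss, reference_loss, threshold=0.0):
    # Declare membership when the likelihood ratio clears a threshold
    # calibrated for a chosen false-positive rate.
    return lr_membership_score(text, target_loss, reference_loss) > threshold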

Did the neurons read your book? Document-level membership inference for large language models

M Meeus, S Jain, M Rei, YA de Montjoye - 33rd USENIX Security …, 2024 - usenix.org
With large language models (LLMs) poised to become embedded in our daily lives,
questions are starting to be raised about the data they learned from. These questions range …

On the privacy risk of in-context learning

H Duan, A Dziedzic, M Yaghini, N Papernot… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) are excellent few-shot learners. They can perform a wide
variety of tasks purely based on natural language prompts provided to them. These prompts …

Identifying and mitigating privacy risks stemming from language models: A survey

V Smith, AS Shamsabadi, C Ashurst… - arXiv preprint arXiv …, 2023 - arxiv.org
Rapid advancements in language models (LMs) have led to their adoption across many
sectors. Alongside the potential benefits, such models present a range of risks, including …

Subject membership inference attacks in federated learning

A Suri, P Kanani, VJ Marathe, DW Peterson - arXiv preprint arXiv …, 2022 - arxiv.org
Privacy attacks on Machine Learning (ML) models often focus on inferring the existence of
particular data points in the training data. However, what the adversary really wants to know …
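
The record-versus-subject distinction the abstract draws can be made concrete: a subject-level attack aggregates record-level membership scores over all records attributable to one individual, asking whether the person's data was used at all. The aggregation below is an illustrative choice, not the paper's exact procedure.

import numpy as np

def subject_membership_score(record_scores, agg="mean"):
    # record_scores: per-record MIA scores (e.g. likelihood ratios) for
    # data points belonging to a single subject; aggregating them tests
    # the subject's presence rather than any one record's.
    s = np.asarray(record_scores, dtype=float)
    return float(s.max() if agg == "max" else s.mean())

# Threshold the aggregate to decide subject membership.
print(subject_membership_score([0.4, 1.1, 0.9]) > 0.7)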