Security and privacy challenges of large language models: A survey
Large language models (LLMs) have demonstrated extraordinary capabilities and
contributed to multiple fields, such as generating and summarizing text, language …
LLM-based edge intelligence: A comprehensive survey on architectures, applications, security and trustworthiness
The integration of Large Language Models (LLMs) and Edge Intelligence (EI) introduces a
groundbreaking paradigm for intelligent edge devices. With their capacity for human-like …
What does it mean for a language model to preserve privacy?
Natural language reflects our private lives and identities, making its privacy concerns as
broad as those of real life. Language models lack the ability to understand the context and …
Flocks of stochastic parrots: Differentially private prompt learning for large language models
Large language models (LLMs) are excellent in-context learners. However, the sensitivity of
data contained in prompts raises privacy concerns. Our work first shows that these concerns …
Detecting pretraining data from large language models
Although large language models (LLMs) are widely deployed, the data used to train them is
rarely disclosed. Given the incredible scale of this data, up to trillions of tokens, it is all but …
Quantifying privacy risks of masked language models using membership inference attacks
The wide adoption and application of Masked Language Models (MLMs) on sensitive data
(from legal to medical) necessitates a thorough quantitative investigation into their privacy …
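The membership inference attacks quantified in the entry above are often instantiated, in their simplest form, as a loss-thresholding test: a model tends to assign lower loss to examples it was trained on. A minimal sketch of that baseline test (the losses and threshold below are illustrative, not from the paper):

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict membership: an example is classified as a training
    member (True) if its loss falls below the threshold, since models
    typically fit training data more closely than unseen data."""
    return np.asarray(losses) < threshold

# Illustrative per-example losses: members tend to score lower.
member_losses = [0.4, 0.6, 0.5]
nonmember_losses = [1.8, 2.1, 1.5]

preds = loss_threshold_mia(member_losses + nonmember_losses, threshold=1.0)
print(preds.tolist())  # → [True, True, True, False, False, False]
```

Stronger attacks in the literature calibrate this score against reference models or per-example difficulty, but the thresholding step above is the common core.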
Did the neurons read your book? Document-level membership inference for large language models
With large language models (LLMs) poised to become embedded in our daily lives,
questions are starting to be raised about the data they learned from. These questions range …
On the privacy risk of in-context learning
Large language models (LLMs) are excellent few-shot learners. They can perform a wide
variety of tasks purely based on natural language prompts provided to them. These prompts …
Identifying and mitigating privacy risks stemming from language models: A survey
V Smith, AS Shamsabadi, C Ashurst… - arXiv preprint arXiv:…, 2023 - arxiv.org
Rapid advancements in language models (LMs) have led to their adoption across many
sectors. Alongside the potential benefits, such models present a range of risks, including …
Subject membership inference attacks in federated learning
Privacy attacks on Machine Learning (ML) models often focus on inferring the existence of
particular data points in the training data. However, what the adversary really wants to know …