FedFed: Feature distillation against data heterogeneity in federated learning

Z Yang, Y Zhang, Y Zheng, X Tian… - Advances in …, 2023 - proceedings.neurips.cc
Federated learning (FL) typically faces data heterogeneity, i.e., distribution shift among
clients. Sharing clients' information has shown great potential in mitigating data …

Differentially private fine-tuning of language models

D Yu, S Naik, A Backurs, S Gopi, HA Inan… - arXiv preprint arXiv …, 2021 - arxiv.org
We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-
scale pre-trained language models, which achieve the state-of-the-art privacy versus utility …

Differentially private natural language models: Recent advances and future directions

L Hu, I Habernal, L Shen, D Wang - arXiv preprint arXiv:2301.09112, 2023 - arxiv.org
Recent developments in deep learning have led to great success in various natural
language processing (NLP) tasks. However, these applications may involve data that …

On protecting the data privacy of large language models (LLMs): A survey

B Yan, K Li, M Xu, Y Dong, Y Zhang, Z Ren… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) are complex artificial intelligence systems capable of
understanding, generating and translating human language. They learn language patterns …

Privacy in large language models: Attacks, defenses and future directions

H Li, Y Chen, J Luo, J Wang, H Peng, Y Kang… - arXiv preprint arXiv …, 2023 - arxiv.org
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …

Privacy issues in large language models: A survey

S Neel, P Chang - arXiv preprint arXiv:2312.06717, 2023 - arxiv.org
This is the first survey of the active area of AI research that focuses on privacy issues in
Large Language Models (LLMs). Specifically, we focus on work that red-teams models to …

Preserving privacy in large language models: A survey on current threats and solutions

M Miranda, ES Ruzzetti, A Santilli, FM Zanzotto… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) represent a significant advancement in artificial
intelligence, finding applications across various domains. However, their reliance on …

SoK: Cryptographic neural-network computation

LKL Ng, SSM Chow - 2023 IEEE Symposium on Security and …, 2023 - ieeexplore.ieee.org
We studied 53 privacy-preserving neural-network papers in 2016–2022 based on
cryptography (without trusted processors or differential privacy), 16 of which only use …

Privacy-preserving instructions for aligning large language models

D Yu, P Kairouz, S Oh, Z Xu - arXiv preprint arXiv:2402.13659, 2024 - arxiv.org
Service providers of large language model (LLM) applications collect user instructions in the
wild and use them in further aligning LLMs with users' intentions. These instructions, which …

Privacy preserving prompt engineering: A survey

K Edemacu, X Wu - arXiv preprint arXiv:2404.06001, 2024 - arxiv.org
Pre-trained language models (PLMs) have demonstrated significant proficiency in solving a
wide range of general natural language processing (NLP) tasks. Researchers have …