On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

Differentially private fine-tuning of language models

D Yu, S Naik, A Backurs, S Gopi, HA Inan… - arXiv preprint arXiv …, 2021 - arxiv.org
We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-
scale pre-trained language models, which achieve the state-of-the-art privacy versus utility …

Applications of federated learning; taxonomy, challenges, and research trends

M Shaheen, MS Farooq, T Umer, BS Kim - Electronics, 2022 - mdpi.com
Federated learning (FL) supports the collaborative training of machine
learning and deep learning models for edge network optimization. Although a complex edge …

Preserving privacy in large language models: A survey on current threats and solutions

M Miranda, ES Ruzzetti, A Santilli, FM Zanzotto… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) represent a significant advancement in artificial
intelligence, finding applications across various domains. However, their reliance on …

THE-X: Privacy-preserving transformer inference with homomorphic encryption

T Chen, H Bao, S Huang, L Dong, B Jiao… - arXiv preprint arXiv …, 2022 - arxiv.org
As more and more pre-trained language models adopt on-cloud deployment, the privacy
issues grow quickly, mainly due to the exposure of plain-text user data (e.g., search history …

An empirical analysis of memorization in fine-tuned autoregressive language models

F Mireshghallah, A Uniyal, T Wang… - Proceedings of the …, 2022 - aclanthology.org
Large language models have been shown to present privacy risks through memorization of training
data, and several recent works have studied such risks for the pre-training phase. Little …

Artificial intelligence foundation and pre-trained models: Fundamentals, applications, opportunities, and social impacts

A Kolides, A Nawaz, A Rathor, D Beeman… - … Modelling Practice and …, 2023 - Elsevier
With the emergence of foundation models (FMs) that are trained on large amounts of data at
scale and adaptable to a wide range of downstream applications, AI is experiencing a …

Surviving ChatGPT in healthcare

Z Liu, L Zhang, Z Wu, X Yu, C Cao, H Dai, N Liu… - Frontiers in …, 2024 - frontiersin.org
At the dawn of Artificial General Intelligence (AGI), the emergence of large language
models such as ChatGPT shows promise in revolutionizing healthcare by improving patient …

The MiniPile challenge for data-efficient language models

J Kaddour - arXiv preprint arXiv:2304.08442, 2023 - arxiv.org
The ever-growing diversity of pre-training text corpora has equipped language models with
generalization capabilities across various downstream tasks. However, such diverse …

Identifying and mitigating privacy risks stemming from language models: A survey

V Smith, AS Shamsabadi, C Ashurst… - arXiv preprint arXiv …, 2023 - arxiv.org
Rapid advancements in language models (LMs) have led to their adoption across many
sectors. Alongside the potential benefits, such models present a range of risks, including …