On the opportunities and risks of foundation models
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Differentially private fine-tuning of language models
We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-
scale pre-trained language models, which achieve the state-of-the-art privacy versus utility …
Applications of federated learning: taxonomy, challenges, and research trends
The federated learning (FL) technique supports the collaborative training of machine
learning and deep learning models for edge network optimization. Although a complex edge …
Preserving privacy in large language models: A survey on current threats and solutions
Large Language Models (LLMs) represent a significant advancement in artificial
intelligence, finding applications across various domains. However, their reliance on …
THE-X: Privacy-preserving transformer inference with homomorphic encryption
As more and more pre-trained language models adopt on-cloud deployment, the privacy
issues grow quickly, mainly for the exposure of plain-text user data (e.g., search history …
An empirical analysis of memorization in fine-tuned autoregressive language models
Large language models are shown to present privacy risks through memorization of training
data, and several recent works have studied such risks for the pre-training phase. Little …
Artificial intelligence foundation and pre-trained models: Fundamentals, applications, opportunities, and social impacts
A Kolides, A Nawaz, A Rathor, D Beeman… - … Modelling Practice and …, 2023 - Elsevier
With the emergence of foundation models (FMs) that are trained on large amounts of data at
scale and adaptable to a wide range of downstream applications, AI is experiencing a …
Surviving ChatGPT in healthcare
At the dawn of Artificial General Intelligence (AGI), the emergence of large language
models such as ChatGPT shows promise in revolutionizing healthcare by improving patient …
The minipile challenge for data-efficient language models
J Kaddour - arXiv preprint arXiv:2304.08442, 2023 - arxiv.org
The ever-growing diversity of pre-training text corpora has equipped language models with
generalization capabilities across various downstream tasks. However, such diverse …
Identifying and mitigating privacy risks stemming from language models: A survey
V Smith, AS Shamsabadi, C Ashurst… - arXiv preprint arXiv …, 2023 - arxiv.org
Rapid advancements in language models (LMs) have led to their adoption across many
sectors. Alongside the potential benefits, such models present a range of risks, including …