Unique security and privacy threats of large language model: A comprehensive survey
With the rapid development of artificial intelligence, large language models (LLMs) have
made remarkable advancements in natural language processing. These models are trained …
Privacy in large language models: Attacks, defenses and future directions
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …
Heterogeneous LoRA for federated fine-tuning of on-device foundation models
Foundation models (FMs) adapt surprisingly well to downstream tasks with fine-tuning.
However, their colossal parameter space prohibits their training on resource-constrained …
FedBiOT: LLM local fine-tuning in federated learning without full model
Large language models (LLMs) show amazing performance on many domain-specific tasks
after fine-tuning with some appropriate data. However, many domain-specific data are …
Federated large language models: Current progress and future directions
Large language models are rapidly gaining popularity and have been widely adopted in real-
world applications. While the quality of training data is essential, privacy concerns arise …
Reusing pretrained models by multi-linear operators for efficient training
Training large models from scratch usually costs a substantial amount of resources. Towards
this problem, recent studies such as bert2BERT and LiGO have reused small pretrained …
Grounding foundation models through federated transfer learning: A general framework
Foundation Models (FMs) such as GPT-4 encoded with vast knowledge and powerful
emergent abilities have achieved remarkable success in various natural language …
FeDeRA: Efficient fine-tuning of language models in federated learning leveraging weight decomposition
Despite their exceptional performance on various tasks after fine-tuning, pre-trained
language models (PLMs) face significant challenges due to growing privacy concerns with …
Federated full-parameter tuning of billion-sized language models with communication cost under 18 kilobytes
Pre-trained large language models (LLMs) require fine-tuning to improve their
responsiveness to natural language instructions. Federated learning (FL) offers a way to …
Federated fine-tuning of large language models under heterogeneous language tasks and client resources
Federated Learning (FL) has recently been applied to the parameter-efficient fine-tuning of
Large Language Models (LLMs). While promising, it raises significant challenges due to the …