LLM-based edge intelligence: A comprehensive survey on architectures, applications, security and trustworthiness

O Friha, MA Ferrag, B Kantarci… - IEEE Open Journal …, 2024 - ieeexplore.ieee.org
The integration of Large Language Models (LLMs) and Edge Intelligence (EI) introduces a
groundbreaking paradigm for intelligent edge devices. With their capacity for human-like …

Federated and transfer learning for cancer detection based on image analysis

A Bechar, R Medjoudj, Y Elmir, Y Himeur… - Neural Computing and …, 2025 - Springer
This review highlights the efficacy of combining federated learning (FL) and transfer learning
(TL) for cancer detection via image analysis. By integrating these techniques, research has …

SplitLoRA: A split parameter-efficient fine-tuning framework for large language models

Z Lin, X Hu, Y Zhang, Z Chen, Z Fang, X Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
The scalability of large language models (LLMs) in handling high-complexity models and
large-scale datasets has led to tremendous successes in pivotal domains. While there is an …

Self-alignment of large language models via monopolylogue-based social scene simulation

X Pang, S Tang, R Ye, Y Xiong, B Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Aligning large language models (LLMs) with human values is imperative to mitigate
potential adverse effects resulting from their misuse. Drawing from the sociological insight …

FLoRA: Federated fine-tuning large language models with heterogeneous low-rank adaptations

Z Wang, Z Shen, Y He, G Sun, H Wang, L Lyu… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid development of Large Language Models (LLMs) has been pivotal in advancing AI,
with pre-trained LLMs being adaptable to diverse downstream tasks through fine-tuning …

Emerging safety attack and defense in federated instruction tuning of large language models

R Ye, J Chai, X Liu, Y Yang, Y Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Federated learning (FL) enables multiple parties to collaboratively fine-tune a large
language model (LLM) without the need for direct data sharing. Ideally, by training on …

Split-and-denoise: Protect large language model inference with local differential privacy

P Mai, R Yan, Z Huang, Y Yang, Y Pang - arXiv preprint arXiv:2310.09130, 2023 - arxiv.org
Large Language Models (LLMs) excel in natural language understanding by capturing
hidden semantics in vector space. This process enriches the value of text embeddings for …

Time-FFM: Towards LM-empowered federated foundation model for time series forecasting

Q Liu, X Liu, C Liu, Q Wen, Y Liang - arXiv preprint arXiv:2405.14252, 2024 - arxiv.org
Unlike natural language processing and computer vision, the development of Foundation
Models (FMs) for time series forecasting is blocked due to data scarcity. While recent efforts …

FedLLM-Bench: Realistic benchmarks for federated learning of large language models

R Ye, R Ge, X Zhu, J Chai, Y Du, Y Liu, Y Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Federated learning has enabled multiple parties to collaboratively train large language
models without directly sharing their data (FedLLM). Following this training paradigm, the …

RFLPA: A robust federated learning framework against poisoning attacks with secure aggregation

P Mai, R Yan, Y Pang - Advances in Neural Information Processing …, 2025 - proceedings.neurips.cc
Federated learning (FL) allows multiple devices to train a model collaboratively without
sharing their data. Despite its benefits, FL is vulnerable to privacy leakage and poisoning …