LLM-based edge intelligence: A comprehensive survey on architectures, applications, security and trustworthiness
The integration of Large Language Models (LLMs) and Edge Intelligence (EI) introduces a
groundbreaking paradigm for intelligent edge devices. With their capacity for human-like …
Federated and transfer learning for cancer detection based on image analysis
This review highlights the efficacy of combining federated learning (FL) and transfer learning
(TL) for cancer detection via image analysis. By integrating these techniques, research has …
SplitLoRA: A split parameter-efficient fine-tuning framework for large language models
The scalability of large language models (LLMs) in handling high-complexity models and
large-scale datasets has led to tremendous successes in pivotal domains. While there is an …
Self-alignment of large language models via monopolylogue-based social scene simulation
Aligning large language models (LLMs) with human values is imperative to mitigate
potential adverse effects resulting from their misuse. Drawing from the sociological insight …
FLoRA: Federated fine-tuning large language models with heterogeneous low-rank adaptations
The rapid development of Large Language Models (LLMs) has been pivotal in advancing AI,
with pre-trained LLMs being adaptable to diverse downstream tasks through fine-tuning …
Emerging safety attack and defense in federated instruction tuning of large language models
Federated learning (FL) enables multiple parties to collaboratively fine-tune a large
language model (LLM) without the need for direct data sharing. Ideally, by training on …
Split-and-denoise: Protect large language model inference with local differential privacy
P Mai, R Yan, Z Huang, Y Yang, Y Pang - arXiv preprint arXiv:2310.09130, 2023 - arxiv.org
Large Language Models (LLMs) excel in natural language understanding by capturing
hidden semantics in vector space. This process enriches the value of text embeddings for …
Time-FFM: Towards LM-empowered federated foundation model for time series forecasting
Unlike natural language processing and computer vision, the development of Foundation
Models (FMs) for time series forecasting is blocked due to data scarcity. While recent efforts …
FedLLM-Bench: Realistic benchmarks for federated learning of large language models
Federated learning has enabled multiple parties to collaboratively train large language
models without directly sharing their data (FedLLM). Following this training paradigm, the …
RFLPA: A robust federated learning framework against poisoning attacks with secure aggregation
P Mai, R Yan, Y Pang - Advances in Neural Information …, 2025 - proceedings.neurips.cc
Federated learning (FL) allows multiple devices to train a model collaboratively without
sharing their data. Despite its benefits, FL is vulnerable to privacy leakage and poisoning …