| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations | Z Wang, Z Shen, Y He, G Sun, H Wang, L Lyu, A Li | arXiv preprint arXiv:2409.05976 | 12 | 2024 |
| What Matters in Transformers? Not All Attention is Needed | S He, G Sun, Z Shen, A Li | arXiv preprint arXiv:2406.15786 | 7 | 2024 |
| SHED: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning | Y He, Z Wang, Z Shen, G Sun, Y Dai, Y Wu, H Wang, A Li | arXiv preprint arXiv:2405.00705 | 4 | 2024 |
| Domino: Eliminating Communication in LLM Training via Generic Tensor Slicing and Overlapping | G Wang, C Zhang, Z Shen, A Li, O Ruwase | arXiv preprint arXiv:2409.15241 | 2 | 2024 |
| Fair Diagnosis: Leveraging Causal Modeling to Mitigate Medical Bias | B Tian, Y He, M Liu, Y Dai, Z Wang, S He, G Sun, Z Shen, W Ye, Y Wu, ... | arXiv preprint arXiv:2412.04739 | | 2024 |
| One Communication Round is All It Needs for Federated Fine-Tuning Foundation Models | Z Wang, B Tian, Y He, Z Shen, L Liu, A Li | arXiv preprint arXiv:2412.04650 | | 2024 |
| ShareLoRA: Less Tuning, More Performance for LoRA Fine-tuning of LLMs | Z Shen, G Sun, Y He, Z Wang, Y Zhang, S Kundu, EP Xing, H Wang, A Li | | | |