Pingzhi Li
Ph.D. student @UNC-Chapel Hill
Verified email at cs.unc.edu - Homepage
Title · Cited by · Year
Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark
Y Zhang*, P Li*, J Hong*, J Li*, Y Zhang, W Zheng, PY Chen, JD Lee, ...
(* Equal Contribution) ICML 2024, 2024
Cited by 38 · 2024
Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy
P Li, Z Zhang, P Yadav, YL Sung, Y Cheng, M Bansal, T Chen
ICLR 2024 Spotlight, 2024
Cited by 22 · 2024
Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark
P Li*, X Jin*, Y Cheng, T Chen
(* Equal Contribution) arXiv preprint arXiv:2406.08155, 2024
Cited by 6 · 2024
Privacy-preserving Fine-tuning of Large Language Models through Flatness
T Chen, L Da, H Zhou, P Li, K Zhou, T Chen, H Wei
SDM 2025, 2024
Cited by 2 · 2024
Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild
X Zhao*, G Sun*, R Cai*, Y Zhou*, P Li*, P Wang*, B Tan, Y He, L Chen, ...
(* Equal Contribution) NeurIPS 2024 Dataset and Benchmark Track, 2024
Cited by 1 · 2024
Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent
P Li, J Liu, H Wang, T Chen
arXiv preprint arXiv:2405.00252, 2024
Cited by 1 · 2024
PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches
RMS Khan, P Li, S Yun, Z Wang, S Nirjon, CW Wong, T Chen
ICLR 2025, 2025
· 2025