Xingtai Lv
Email verified at mails.tsinghua.edu.cn
Title
Cited by
Year
Sparse Low-rank Adaptation of Pre-trained Language Models
N Ding, X Lv, Q Wang, Y Chen, B Zhou, Z Liu, M Sun
EMNLP 2023, 2023
71 · 2023
Ultramedical: Building specialized generalists in biomedicine
K Zhang, S Zeng, E Hua, N Ding, ZR Chen, Z Ma, H Li, G Cui, B Qi, X Zhu, ...
Advances in Neural Information Processing Systems 37, 26045-26081, 2025
17 · 2025
OpenDelta: a plug-and-play library for parameter-efficient adaptation of pre-trained models
S Hu, N Ding, W Zhao, X Lv, Z Zhang, Z Liu, M Sun
arXiv preprint arXiv:2307.03084, 2023
12 · 2023
Parameter-efficient weight ensembling facilitates task-level knowledge transfer
X Lv, N Ding, Y Qin, Z Liu, M Sun
Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023
7 · 2023
Process reinforcement through implicit rewards
G Cui, L Yuan, Z Wang, H Wang, W Li, B He, Y Fan, T Yu, Q Xu, W Chen, ...
arXiv preprint arXiv:2502.01456, 2025
6 · 2025
Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding
K Zhang, J Wang, N Ding, B Qi, E Hua, X Lv, B Zhou
arXiv preprint arXiv:2406.12295, 2024
4 · 2024
Intuitive fine-tuning: Towards unifying SFT and RLHF into a single process
E Hua, B Qi, K Zhang, Y Yu, N Ding, X Lv, K Tian, B Zhou
arXiv preprint arXiv:2405.11870, 2024
4* · 2024
Mastering text, code and math simultaneously via fusing highly specialized language models
N Ding, Y Chen, G Cui, X Lv, W Zhao, R Xie, B Zhou, Z Liu, M Sun
arXiv preprint arXiv:2403.08281, 2024
4 · 2024
Automating exploratory proteomics research via language models
N Ding, S Qu, L Xie, Y Li, Z Liu, K Zhang, Y Xiong, Y Zuo, Z Chen, E Hua, ...
arXiv preprint arXiv:2411.03743, 2024
3 · 2024
How to Synthesize Text Data without Model Collapse?
X Zhu, D Cheng, H Li, K Zhang, E Hua, X Lv, N Ding, Z Lin, Z Zheng, ...
arXiv preprint arXiv:2412.14689, 2024
1 · 2024
Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization
E Hua, C Jiang, X Lv, K Zhang, N Ding, Y Sun, B Qi, Y Fan, XK Zhu, ...
arXiv preprint arXiv:2412.17739, 2024
2024
Scalable Efficient Training of Large Language Models with Low-dimensional Projected Attention
X Lv, N Ding, K Zhang, E Hua, G Cui, B Zhou
arXiv preprint arXiv:2411.02063, 2024
2024
OpenPRM: Building Open-domain Process-based Reward Models with Preference Trees
K Zhang, J Zhang, H Li, X Zhu, E Hua, X Lv, N Ding, B Qi, B Zhou
The Thirteenth International Conference on Learning Representations
ToEdit: How to Synthesize Text Data to Avoid Model Collapse?
X Zhu, D Cheng, H Li, K Zhang, E Hua, X Lv, N Ding, Z Lin, Z Zheng, ...
Articles 1–14