Secrets of RLHF in large language models, Part I: PPO. R Zheng, S Dou, S Gao, Y Hua, W Shen, B Wang, Y Liu, S Jin, Q Liu, et al. arXiv preprint arXiv:2307.04964, 2023. Cited by 108.
Secrets of RLHF in large language models, Part II: Reward modeling. B Wang, R Zheng, L Chen, Y Liu, S Dou, C Huang, W Shen, S Jin, E Zhou, et al. arXiv preprint arXiv:2401.06080, 2024. Cited by 65.
Self-Polish: Enhance reasoning in large language models via problem refinement. Z Xi, S Jin, Y Zhou, R Zheng, S Gao, T Gui, Q Zhang, X Huang. arXiv preprint arXiv:2305.14497, 2023. Cited by 39.
EasyJailbreak: A unified framework for jailbreaking large language models. W Zhou, X Wang, L Xiong, H Xia, Y Gu, M Chai, F Zhu, C Huang, S Dou, et al. arXiv preprint arXiv:2403.12171, 2024. Cited by 35.
MAP-Neo: Highly capable and transparent bilingual large language model series. G Zhang, S Qu, J Liu, C Zhang, C Lin, CL Yu, D Pan, E Cheng, J Liu, et al. arXiv preprint arXiv:2405.19327, 2024. Cited by 32.
AgentGym: Evolving large language model-based agents across diverse environments. Z Xi, Y Ding, W Chen, B Hong, H Guo, J Wang, D Yang, C Liao, X Guo, et al. arXiv preprint arXiv:2406.04151, 2024. Cited by 20.
LoRAMoE: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment. S Dou, E Zhou, Y Liu, S Gao, J Zhao, W Shen, Y Zhou, Z Xi, X Wang, et al. arXiv preprint arXiv:2312.09979, 2023. Cited by 19.
ToolEyes: Fine-grained evaluation for tool learning capabilities of large language models in real-world scenarios. J Ye, G Li, S Gao, C Huang, Y Wu, S Li, X Fan, S Dou, Q Zhang, T Gui, et al. arXiv preprint arXiv:2401.00741, 2024. Cited by 18.
Chinese Tiny LLM: Pretraining a Chinese-centric large language model. X Du, Z Yu, S Gao, D Pan, Y Cheng, Z Ma, R Yuan, X Qu, J Liu, T Zheng, et al. arXiv preprint arXiv:2404.04167, 2024. Cited by 16.
Navigating the OverKill in large language models. C Shi, X Wang, Q Ge, S Gao, X Yang, T Gui, Q Zhang, X Huang, X Zhao, et al. arXiv preprint arXiv:2401.17633, 2024. Cited by 16.
LoRAMoE: Alleviate world knowledge forgetting in large language models via MoE-style plugin. S Dou, E Zhou, Y Liu, S Gao, J Zhao, W Shen, Y Zhou, Z Xi, X Wang, et al. arXiv preprint arXiv:2312.09979, 2023. Cited by 16.
ToolSword: Unveiling safety issues of large language models in tool learning across three stages. J Ye, S Li, G Li, C Huang, S Gao, Y Wu, Q Zhang, T Gui, X Huang. arXiv preprint arXiv:2402.10753, 2024. Cited by 12.
Decorrelate irrelevant, purify relevant: Overcome textual spurious correlations from a feature perspective. S Dou, R Zheng, T Wu, S Gao, J Shan, Q Zhang, Y Wu, X Huang. arXiv preprint arXiv:2202.08048, 2022. Cited by 11.
TRACE: A comprehensive benchmark for continual learning in large language models. X Wang, Y Zhang, T Chen, S Gao, S Jin, X Yang, Z Xi, R Zheng, Y Zou, et al. arXiv preprint arXiv:2310.06762, 2023. Cited by 10.