Baichuan 2: Open large-scale language models. A Yang, B Xiao, B Wang, B Zhang, C Bian, C Yin, C Lv, D Pan, D Wang, et al. arXiv preprint arXiv:2309.10305, 2023. Cited by 501.
BeaverTails: Towards improved safety alignment of LLM via a human-preference dataset. J Ji, M Liu, J Dai, X Pan, C Zhang, C Bian, B Chen, R Sun, Y Wang, et al. Advances in Neural Information Processing Systems 36, 2024. Cited by 335.
Safe RLHF: Safe reinforcement learning from human feedback. J Dai*, X Pan*, R Sun*, J Ji*, X Xu, M Liu, Y Wang, Y Yang. The Twelfth International Conference on Learning Representations, 2024. Cited by 258.
Safety Gymnasium: A unified safe reinforcement learning benchmark. J Ji, B Zhang, J Zhou, X Pan, W Huang, R Sun, Y Geng, Y Zhong, J Dai, et al. Advances in Neural Information Processing Systems 36, 2023. Cited by 59.
OmniSafe: An infrastructure for accelerating safe reinforcement learning research. J Ji, J Zhou, B Zhang, J Dai, X Pan, R Sun, W Huang, Y Geng, M Liu, et al. Journal of Machine Learning Research 25, 1-6, 2023. Cited by 49.
PKU-Beaver: Constrained value-aligned LLM via Safe RLHF. J Dai, X Pan, J Ji, R Sun, Y Wang, Y Yang, 2023. Cited by 13.