Graph neural networks: A review of methods and applications. J Zhou, G Cui, S Hu, Z Zhang, C Yang, Z Liu, L Wang, C Li, M Sun. AI Open 1, 57-81, 2020. Cited by 7114.
UltraFeedback: Boosting Language Models with Scaled AI Feedback. G Cui, L Yuan, N Ding, G Yao, B He, W Zhu, Y Ni, G Xie, R Xie, Y Lin, ... ICML 2024. Cited by 300*.
Tool learning with foundation models. Y Qin, S Hu, Y Lin, W Chen, N Ding, G Cui, Z Zeng, X Zhou, Y Huang, ... ACM Computing Surveys 57 (4), 1-40, 2024. Cited by 269.
Adaptive graph encoder for attributed graph embedding. G Cui, J Zhou, C Yang, Z Liu. KDD 2020, 976-985. Cited by 252.
Introduction to graph neural networks. Z Liu, J Zhou. Springer Nature, 2022. Cited by 191.
MiniCPM: Unveiling the potential of small language models with scalable training strategies. S Hu, Y Tu, X Han, C He, G Cui, X Long, Z Zheng, Y Fang, Y Huang, ... COLM 2024 (Oral). Cited by 178.
RLHF-V: Towards trustworthy MLLMs via behavior alignment from fine-grained correctional human feedback. T Yu, Y Yao, H Zhang, T He, Y Han, G Cui, J Hu, Z Liu, HT Zheng, M Sun, ... CVPR 2024. Cited by 159.
Full-scale information diffusion prediction with reinforced recurrent networks. C Yang, H Wang, J Tang, C Shi, M Sun, G Cui, Z Liu. IEEE Transactions on Neural Networks and Learning Systems 34 (5), 2271-2283, 2021. Cited by 139.
Prototypical verbalizer for prompt-based few-shot tuning. G Cui, S Hu, N Ding, L Huang, Z Liu. ACL 2022. Cited by 105.
Exploring the universal vulnerability of prompt-based learning paradigm. L Xu, Y Chen, G Cui, H Gao, Z Liu. NAACL 2022 Findings. Cited by 88.
A unified evaluation of textual backdoor learning: Frameworks and benchmarks. G Cui, L Yuan, B He, Y Chen, Z Liu, M Sun. NeurIPS 2022 Datasets and Benchmarks Track. Cited by 82.
Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evaluations. L Yuan, Y Chen, G Cui, H Gao, F Zou, X Cheng, H Ji, Z Liu, M Sun. NeurIPS 2023 Datasets and Benchmarks Track. Cited by 77.
Advancing LLM reasoning generalists with preference trees. L Yuan, G Cui, H Wang, N Ding, X Wang, J Deng, B Shan, H Chen, R Xie, ... ICLR 2025. Cited by 69.
RLAIF-V: Aligning MLLMs through open-source AI feedback for super GPT-4V trustworthiness. T Yu, H Zhang, Y Yao, Y Dang, D Chen, X Lu, G Cui, T He, Z Liu, TS Chua, ... arXiv preprint arXiv:2405.17220, 2024. Cited by 58.
A close look into the calibration of pre-trained language models. Y Chen, L Yuan, G Cui, Z Liu, H Ji. ACL 2023. Cited by 48.
Why should adversarial perturbations be imperceptible? Rethink the research paradigm in adversarial NLP. Y Chen, H Gao, G Cui, F Qi, L Huang, Z Liu, M Sun. EMNLP 2022. Cited by 41.
Moderate-fitting as a natural backdoor defender for pre-trained language models. B Zhu, Y Qin, G Cui, Y Chen, W Zhao, C Fu, Y Deng, Z Liu, J Wang, W Wu, ... Advances in Neural Information Processing Systems 35, 1086-1099, 2022. Cited by 31.
Controllable preference optimization: Toward controllable multi-objective alignment. Y Guo, G Cui, L Yuan, N Ding, Z Sun, B Sun, H Chen, R Xie, J Zhou, Y Lin, ... EMNLP 2024. Cited by 25*.
Noise contrastive alignment of language models with explicit rewards. H Chen, G He, L Yuan, G Cui, H Su, J Zhu. NeurIPS 2024. Cited by 21.
UltraMedical: Building specialized generalists in biomedicine. K Zhang, S Zeng, E Hua, N Ding, ZR Chen, Z Ma, H Li, G Cui, B Qi, X Zhu, ... arXiv preprint arXiv:2406.03949, 2024. Cited by 16.