Title | Authors | Venue | Cited by | Year
WizardLM: Empowering Large Language Models to Follow Complex Instructions | C Xu*, Q Sun*, K Zheng*, X Geng, P Zhao, J Feng, C Tao, D Jiang | ICLR 2024 | 744 | 2023
WizardLM: Empowering large pre-trained language models to follow complex instructions | C Xu*, Q Sun*, K Zheng*, X Geng, P Zhao, J Feng, C Tao, Q Lin, D Jiang | The Twelfth International Conference on Learning Representations | 125 | 2023
Multimodal dialogue response generation | Q Sun, Y Wang, C Xu, K Zheng, Y Yang, H Hu, F Xu, J Zhang, X Geng, ... | ACL 2022 | 53 | 2021
Knowledge stimulated contrastive prompting for low-resource stance detection | K Zheng, Q Sun, Y Yang, F Xu | Findings of the Association for Computational Linguistics: EMNLP 2022, 1168-1178 | 12 | 2022
Self-supervised multi-modal sequential recommendation | K Song, Q Sun, C Xu, K Zheng, Y Yang | arXiv preprint arXiv:2304.13277 | 8 | 2023
Towards a Unified Paradigm: Integrating Recommendation Systems as a New Language in Large Models | K Zheng, Q Sun, C Xu, P Yu, Q Guo | arXiv preprint arXiv:2412.16933 | | 2024
Adversarial Knowledge Stimulated Contrastive Prompting for Few-shot Language Learners | K Zheng, Q Sun, Y Yang, T Lv, Y Pi, C Zhao, F Xu, Q Zhang | Findings of the Association for Computational Linguistics: ACL 2023, 13495-13507 | | 2023
Notes on Fidelity of Coherence | K Zheng, XL Yong, YY Song, Y Tao | | |