Kai Zheng
Microsoft AI
Verified email at microsoft.com - Homepage
Title / Cited by / Year
WizardLM: Empowering Large Language Models to Follow Complex Instructions
C Xu*, Q Sun*, K Zheng*, X Geng, P Zhao, J Feng, C Tao, D Jiang
ICLR 2024, 2023
Cited by 744 · 2023
WizardLM: Empowering large pre-trained language models to follow complex instructions
C Xu*, Q Sun*, K Zheng*, X Geng, P Zhao, J Feng, C Tao, Q Lin, D Jiang
The Twelfth International Conference on Learning Representations, 2023
Cited by 125 · 2023
Multimodal dialogue response generation
Q Sun, Y Wang, C Xu, K Zheng, Y Yang, H Hu, F Xu, J Zhang, X Geng, ...
ACL 2022, 2021
Cited by 53 · 2021
Knowledge stimulated contrastive prompting for low-resource stance detection
K Zheng, Q Sun, Y Yang, F Xu
Findings of the Association for Computational Linguistics: EMNLP 2022, 1168-1178, 2022
Cited by 12 · 2022
Self-supervised multi-modal sequential recommendation
K Song, Q Sun, C Xu, K Zheng, Y Yang
arXiv preprint arXiv:2304.13277, 2023
Cited by 8 · 2023
Towards a Unified Paradigm: Integrating Recommendation Systems as a New Language in Large Models
K Zheng, Q Sun, C Xu, P Yu, Q Guo
arXiv preprint arXiv:2412.16933, 2024
2024
Adversarial Knowledge Stimulated Contrastive Prompting for Few-shot Language Learners
K Zheng, Q Sun, Y Yang, T Lv, Y Pi, C Zhao, F Xu, Q Zhang
Findings of the Association for Computational Linguistics: ACL 2023, 13495-13507, 2023
2023
Notes on Fidelity of Coherence
K Zheng, XL Yong, YY Song, Y Tao