Chongyang Tao
Title · Cited by · Year
WizardLM: Empowering Large Language Models to Follow Complex Instructions
C Xu*, Q Sun*, K Zheng*, X Geng, P Zhao, J Feng, C Tao, D Jiang
Proc. ICLR, 2023
Cited by 838* · 2023
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Z Luo, C Xu, P Zhao, Q Sun, X Geng, W Hu, C Tao, J Ma, Q Lin, D Jiang
Proc. ICLR, 2023
Cited by 550 · 2023
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
H Luo, Q Sun, C Xu, P Zhao, J Lou, C Tao, X Geng, Q Lin, S Chen, ...
Proc. ICLR, 2023
Cited by 331 · 2023
RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems
C Tao, L Mou, D Zhao, R Yan
Proc. AAAI, 722-729, 2018
Cited by 260 · 2018
Knowledge-Grounded Dialogue Generation with Pre-trained Language Models
X Zhao, W Wu, C Xu, C Tao, D Zhao, R Yan
Proc. EMNLP, 2020
Cited by 226 · 2020
Overcoming Catastrophic Forgetting for Continual Learning via Model Adaptation
W Hu*, Z Lin*, B Liu*, C Tao, Z Tao, J Ma, D Zhao, R Yan
Proc. ICLR, 2018
Cited by 190 · 2018
Get The Point of My Utterance! Learning Towards Effective Responses with Multi-Head Attention Mechanism.
C Tao, S Gao, M Shang, W Wu, D Zhao, R Yan
Proc. IJCAI, 4418-4424, 2018
Cited by 162 · 2018
Multi-Representation Fusion Network for Multi-Turn Response Selection in Retrieval-Based Chatbots
C Tao, W Wu, C Xu, W Hu, D Zhao, R Yan
Proc. WSDM, 267-275, 2019
Cited by 156 · 2019
One Time of Interaction May Not be Enough: Go Deep With an Interaction-over-interaction Network for Response Selection in Dialogues
C Tao, W Wu, C Xu, W Hu, D Zhao, R Yan
Proc. ACL, 2019
Cited by 139 · 2019
A survey on knowledge distillation of large language models
X Xu, M Li, C Tao#, T Shen, R Cheng, J Li, C Xu, D Tao, T Zhou
arXiv preprint arXiv:2402.13116, 2024
Cited by 119 · 2024
Low-Resource Knowledge-Grounded Dialogue Generation
X Zhao, W Wu, C Tao, C Xu, D Zhao, R Yan
Proc. ICLR, 2020
Cited by 117 · 2020
PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks
Y Wang, C Xu, Q Sun, H Hu, C Tao, X Geng, D Jiang
Proc. ACL, 2022
Cited by 88 · 2022
Zero-Resource Knowledge-Grounded Dialogue Generation
L Li, C Xu, W Wu, Y Zhao, X Zhao, C Tao
Proc. NeurIPS, 2020
Cited by 80 · 2020
Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues
R Xu, C Tao, D Jiang, X Zhao, D Zhao, R Yan
Proc. AAAI, 2021
Cited by 69 · 2021
Multi-Granularity Structural Knowledge Distillation for Language Model Compression
C Liu, C Tao, J Feng, D Zhao
Proc. ACL, 1001-1011, 2022
Cited by 55 · 2022
Leveraging Large Language Models for NLG Evaluation: Advances and Challenges
Z Li*, X Xu*, T Shen, C Xu, JC Gu, Y Lai, C Tao#, S Ma
Proc. EMNLP, 2024
Cited by 54* · 2024
MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding
JC Gu, C Tao, ZH Ling, C Xu, X Geng, D Jiang
Proc. ACL, 2021
Cited by 53 · 2021
Towards Robust Ranker for Text Retrieval
Y Zhou, T Shen, X Geng, C Tao, C Xu, G Long, B Jiao, D Jiang
Proc. ACL Findings, 2022
Cited by 48 · 2022
Iterative Document Representation Learning Towards Summarization with Polishing
X Chen, S Gao, C Tao, Y Song, D Zhao, R Yan
Proc. EMNLP, 2018
Cited by 47 · 2018
Thread of thought unraveling chaotic contexts
Y Zhou, X Geng, T Shen, C Tao, G Long, JG Lou, J Shen
arXiv preprint arXiv:2311.08734, 2023
Cited by 46 · 2023
Articles 1–20