| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| UltraFeedback: Boosting language models with high-quality feedback | G Cui, L Yuan, N Ding, G Yao, W Zhu, Y Ni, G Xie, Z Liu, M Sun | | 242 | 2023 |
| Unified demonstration retriever for in-context learning | X Li, K Lv, H Yan, T Lin, W Zhu, Y Ni, G Xie, X Wang, X Qiu | arXiv preprint arXiv:2305.04320 | 115 | 2023 |
| LeeBERT: Learned early exit for BERT with cross-level optimization | W Zhu | Proceedings of the 59th Annual Meeting of the Association for Computational … | 58 | 2021 |
| Global attention decoder for Chinese spelling error correction | Z Guo, Y Ni, K Wang, W Zhu, G Xie | Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 … | 56 | 2021 |
| ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback | G Cui, L Yuan, N Ding, G Yao, B He, W Zhu, Y Ni, G Xie, R Xie, Y Lin, ... | Forty-first International Conference on Machine Learning | 42 | 2024 |
| A simple hash-based early exiting approach for language understanding and generation | T Sun, X Liu, W Zhu, Z Geng, L Wu, Y He, Y Ni, G Xie, X Huang, X Qiu | arXiv preprint arXiv:2203.01670 | 42 | 2022 |
| Medical knowledge graph to enhance fraud, waste, and abuse detection on claim data: Model development and performance evaluation | H Sun, J Xiao, W Zhu, Y He, S Zhang, X Xu, L Hou, J Li, Y Ni, G Xie | JMIR Medical Informatics 8 (7), e17653 | 39 | 2020 |
| PCEE-BERT: Accelerating BERT inference via patient and confident early exiting | Z Zhang, W Zhu, J Zhang, P Wang, R Jin, TS Chung | Findings of the Association for Computational Linguistics: NAACL 2022, 327-338 | 33 | 2022 |
| PANLP at MEDIQA 2019: Pre-trained language models, transfer learning and knowledge distillation | W Zhu, X Zhou, K Wang, X Luo, X Li, Y Ni, G Xie | Proceedings of the 18th BioNLP Workshop and Shared Task, 380-388 | 32 | 2019 |
| Limited participation under ambiguity of correlation | HH Huang, S Zhang, W Zhu | Journal of Financial Markets 32, 97-143 | 31 | 2017 |
| PromptCBLUE: A Chinese prompt tuning benchmark for the medical domain | W Zhu, X Wang, H Zheng, M Chen, B Tang | arXiv preprint arXiv:2310.14151 | 26 | 2023 |
| AutoRC: Improving BERT based relation classification models via architecture search | W Zhu, X Qiu, Y Ni, G Xie | arXiv preprint arXiv:2009.10680 | 24 | 2020 |
| AutoTrans: Automating transformer design via reinforced architecture search | W Zhu, X Wang, Y Ni, G Xie | Natural Language Processing and Chinese Computing: 10th CCF International … | 22 | 2021 |
| GAML-BERT: Improving BERT early exiting by gradient aligned mutual learning | W Zhu, X Wang, Y Ni, G Xie | Proceedings of the 2021 Conference on Empirical Methods in Natural Language … | 21 | 2021 |
| SPT: Learning to selectively insert prompts for better prompt tuning | W Zhu, M Tan | Proceedings of the 2023 Conference on Empirical Methods in Natural Language … | 18 | 2023 |
| Extracting decision trees from medical texts: An overview of the Text2DT track in CHIP2022 | W Zhu, W Li, X Wang, W Ji, Y Wu, J Chen, L Chen, B Tang | China Health Information Processing Conference, 89-102 | 18 | 2022 |
| Mining infrequent high-quality phrases from domain-specific corpora | L Wang, W Zhu, S Jiang, S Zhang, K Wang, Y Ni, G Xie, Y Xiao | Proceedings of the 29th ACM International Conference on Information … | 18 | 2020 |
| ALoRA: Allocating low-rank adaptation for fine-tuning large language models | Z Liu, J Lyn, W Zhu, X Tian, Y Graham | arXiv preprint arXiv:2403.16187 | 17 | 2024 |
| PingAn Smart Health and SJTU at COIN shared task: Utilizing pre-trained language models and common-sense knowledge in machine reading tasks | X Li, Z Zhang, W Zhu, Z Li, Y Ni, P Gao, J Yan, G Xie | Proceedings of the First Workshop on Commonsense Inference in Natural … | 16 | 2019 |
| Continually detect, rapidly react: Unseen rumors detection based on continual prompt-tuning | Y Zuo, W Zhu, GG Cai | Proceedings of the 29th International Conference on Computational … | 14 | 2022 |