Parameter-efficient fine-tuning of large-scale pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ... Nature Machine Intelligence 5 (3), 220–235, 2023. | 666 | 2023 |
ChatEval: Towards better LLM-based evaluators through multi-agent debate. CM Chan, W Chen, Y Su, J Yu, W Xue, S Zhang, J Fu, Z Liu. ICLR 2024. | 335 | 2023 |
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ... arXiv preprint arXiv:2203.06904, 2022. | 236 | 2022 |
Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents. W Chen, Y Su, J Zuo, C Yang, C Yuan, C Qian, CM Chan, Y Qin, Y Lu, ... arXiv preprint arXiv:2308.10848, 2023. | 159 | 2023 |
On transferability of prompt tuning for natural language processing. Y Su, X Wang, Y Qin, CM Chan, Y Lin, H Wang, K Wen, Z Liu, P Li, J Li, ... NAACL 2022. | 151 | 2021 |
Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors. W Chen, Y Su, J Zuo, C Yang, C Yuan, CM Chan, H Yu, Y Lu, YH Hung, ... ICLR 2024. | 95 | 2023 |
RQ-RAG: Learning to refine queries for retrieval augmented generation. CM Chan, C Xu, R Yuan, H Luo, W Xue, Y Guo, J Fu. COLM 2024. | 43 | 2024 |
Plug-and-play document modules for pre-trained models. C Xiao, Z Zhang, X Han, CM Chan, Y Lin, Z Liu, X Li, Z Li, Z Cao, M Sun. ACL 2023. | 8 | 2023 |
Exploring the Impact of Model Scaling on Parameter-Efficient Tuning. Y Su*, CM Chan*, J Cheng, Y Qin, Y Lin, S Hu, Z Yang, N Ding, X Sun, ... EMNLP 2023. | 5 | 2023 |
HiPrompt: Tuning-free higher-resolution generation with hierarchical MLLM prompts. X Liu, Y He, L Guo, X Li, B Jin, P Li, Y Li, CM Chan, Q Chen, W Xue, ... arXiv preprint arXiv:2409.02919, 2024. | 3 | 2024 |
Arbitrary few parameters are good enough for adapting large-scale pre-trained language models. Y Su*, CM Chan*, J Cheng, Y Qin, Y Lin, S Hu, Z Yang, N Ding, Z Liu, ... arXiv preprint arXiv:2306.02320, 2023. | 3 | 2023 |
AgentMonitor: A plug-and-play framework for predictive and secure multi-agent systems. CM Chan, J Yu, W Chen, C Jiang, X Liu, W Shi, Z Liu, W Xue, Y Guo. arXiv preprint arXiv:2408.14972, 2024. | 2 | 2024 |
EVA: An Embodied World Model for Future Video Anticipation. X Chi, H Zhang, CK Fan, X Qi, R Zhang, A Chen, C Chan, W Xue, W Luo, ... arXiv preprint arXiv:2410.15461, 2024. | 1 | 2024 |
Importance weighting can help large language models self-improve. C Jiang, C Chan, W Xue, Q Liu, Y Guo. AAAI 2025. | 1 | 2024 |
PIP: Perturbation-based Iterative Pruning for Large Language Models. Y Cao, WJ Xu, Y Shen, W Shi, CM Chan, J Xu. arXiv preprint arXiv:2501.15278, 2025. | | 2025 |