Chan Chi-Min
Verified email at connect.ust.hk · Homepage
Title · Cited by · Year
Parameter-efficient fine-tuning of large-scale pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
Nature Machine Intelligence 5 (3), 220-235, 2023
666 · 2023
ChatEval: Towards better LLM-based evaluators through multi-agent debate
CM Chan, W Chen, Y Su, J Yu, W Xue, S Zhang, J Fu, Z Liu
ICLR 2024, 2023
335 · 2023
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
arXiv preprint arXiv:2203.06904, 2022
236 · 2022
Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents
W Chen, Y Su, J Zuo, C Yang, C Yuan, C Qian, CM Chan, Y Qin, Y Lu, ...
arXiv preprint arXiv:2308.10848, 2023
159 · 2023
On transferability of prompt tuning for natural language processing
Y Su, X Wang, Y Qin, CM Chan, Y Lin, H Wang, K Wen, Z Liu, P Li, J Li, ...
NAACL 2022, 2021
151 · 2021
Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors
W Chen, Y Su, J Zuo, C Yang, C Yuan, CM Chan, H Yu, Y Lu, YH Hung, ...
The Twelfth International Conference on Learning Representations, 2023
95 · 2023
RQ-RAG: Learning to refine queries for retrieval-augmented generation
CM Chan, C Xu, R Yuan, H Luo, W Xue, Y Guo, J Fu
COLM 2024, 2024
43 · 2024
Plug-and-play document modules for pre-trained models
C Xiao, Z Zhang, X Han, CM Chan, Y Lin, Z Liu, X Li, Z Li, Z Cao, M Sun
ACL 2023, 2023
8 · 2023
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. CoRR, abs/2203.06904, 2022. doi: 10.48550
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
arXiv preprint arXiv:2203.06904
8 · 2022
Exploring the Impact of Model Scaling on Parameter-Efficient Tuning
Y Su*, CM Chan*, J Cheng, Y Qin, Y Lin, S Hu, Z Yang, N Ding, X Sun, ...
The 2023 Conference on Empirical Methods in Natural Language Processing, 2023
5 · 2023
HiPrompt: Tuning-free higher-resolution generation with hierarchical MLLM prompts
X Liu, Y He, L Guo, X Li, B Jin, P Li, Y Li, CM Chan, Q Chen, W Xue, ...
arXiv preprint arXiv:2409.02919, 2024
3 · 2024
Arbitrary few parameters are good enough for adapting large-scale pre-trained language models
Y Su*, CM Chan*, J Cheng, Y Qin, Y Lin, S Hu, Z Yang, N Ding, Z Liu, ...
arXiv preprint arXiv:2306.02320, 2023
3 · 2023
Agentmonitor: A plug-and-play framework for predictive and secure multi-agent systems
CM Chan, J Yu, W Chen, C Jiang, X Liu, W Shi, Z Liu, W Xue, Y Guo
arXiv preprint arXiv:2408.14972, 2024
2 · 2024
EVA: An Embodied World Model for Future Video Anticipation
X Chi, H Zhang, CK Fan, X Qi, R Zhang, A Chen, C Chan, W Xue, W Luo, ...
arXiv preprint arXiv:2410.15461, 2024
1 · 2024
Importance weighting can help large language models self-improve
C Jiang, C Chan, W Xue, Q Liu, Y Guo
AAAI 2025, 2024
1 · 2024
PIP: Perturbation-based Iterative Pruning for Large Language Models
Y Cao, WJ Xu, Y Shen, W Shi, CM Chan, J Xu
arXiv preprint arXiv:2501.15278, 2025
2025
Articles 1–16