Chang Gao
Verified email at se.cuhk.edu.hk - Homepage
Title · Cited by · Year
M3exam: A multilingual, multimodal, multilevel benchmark for examining large language models
W Zhang, M Aljunied, C Gao, YK Chia, L Bing
Advances in Neural Information Processing Systems 36, 5484-5505, 2023
Cited by 106 · 2023
Rotate3d: Representing relations as rotations in three-dimensional space for knowledge graph embedding
C Gao, C Sun, L Shan, L Lin, M Wang
Proceedings of the 29th ACM international conference on information …, 2020
Cited by 78 · 2020
Prompt conditioned vae: Enhancing generative replay for lifelong learning in task-oriented dialogue
Y Zhao, Y Zheng, Z Tian, C Gao, B Yu, H Yu, Y Li, J Sun, NL Zhang
arXiv preprint arXiv:2210.07783, 2022
Cited by 24 · 2022
Exploring safety generalization challenges of large language models via code
Q Ren, C Gao, J Shao, J Yan, X Tan, W Lam, L Ma
arXiv preprint arXiv:2403.07865, 2024
Cited by 15 · 2024
UniGDD: A unified generative framework for goal-oriented document-grounded dialogue
C Gao, W Zhang, W Lam
arXiv preprint arXiv:2204.07770, 2022
Cited by 14 · 2022
Easy-to-hard learning for information extraction
C Gao, W Zhang, W Lam, L Bing
arXiv preprint arXiv:2305.09193, 2023
Cited by 10 · 2023
Towards generalizable and robust text-to-sql parsing
C Gao, B Li, W Zhang, W Lam, B Li, F Huang, L Si, Y Li
arXiv preprint arXiv:2210.12674, 2022
Cited by 7 · 2022
Search clarification selection via query-intent-clarification graph attention
C Gao, W Lam
European Conference on Information Retrieval, 230-243, 2022
Cited by 6 · 2022
Codeattack: Revealing safety generalization challenges of large language models via code completion
Q Ren, C Gao, J Shao, J Yan, X Tan, W Lam, L Ma
Findings of the Association for Computational Linguistics ACL 2024, 11437-11452, 2024
Cited by 4 · 2024
Strategyllm: Large language models as strategy generators, executors, optimizers, and evaluators for problem solving
C Gao, H Jiang, D Cai, S Shi, W Lam
arXiv preprint arXiv:2311.08803, 2023
Cited by 4 · 2023
JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning
C Gao, W Zhang, G Chen, W Lam
arXiv preprint arXiv:2310.02953, 2023
Cited by 4 · 2023
SWE-Fixer: Training Open-Source LLMs for Effective and Efficient GitHub Issue Resolution
C Xie, B Li, C Gao, H Du, W Lam, D Zou, K Chen
arXiv preprint arXiv:2501.05040, 2025
2025
Articles 1–12