Zhe Yang
Verified email at pku.edu.cn
Title
Cited by
Year
Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding
H Xia, Z Yang, Q Dong, P Wang, Y Li, T Ge, T Liu, W Li, Z Sui
ACL 2024 Findings, 2024
68 · 2024
Omni-math: A universal olympiad level mathematic benchmark for large language models
B Gao, F Song, Z Yang, Z Cai, Y Miao, Q Dong, L Li, C Ma, L Chen, R Xu, ...
arXiv preprint arXiv:2410.07985, 2024
11 · 2024
Periodiclora: Breaking the low-rank bottleneck in lora optimization
X Meng, D Dai, W Luo, Z Yang, S Wu, X Wang, P Wang, Q Dong, L Chen, ...
arXiv preprint arXiv:2402.16141, 2024
10 · 2024
Not All Demonstration Examples are Equally Beneficial: Reweighting Demonstration Examples for In-Context Learning
Z Yang, D Dai, P Wang, Z Sui
EMNLP 2023 Findings, 2023
9 · 2023
Towards a unified view of preference learning for large language models: A survey
B Gao, F Song, Y Miao, Z Cai, Z Yang, L Chen, H Hu, R Xu, Q Dong, ...
arXiv preprint arXiv:2409.02795, 2024
6 · 2024
Can Large Language Models Always Solve Easy Problems if They Can Solve Harder Ones?
Z Yang, Y Zhang, T Liu, J Yang, J Lin, C Zhou, Z Sui
EMNLP 2024 main, 2024
5 · 2024
Confidence vs Critique: A Decomposition of Self-Correction Capability for LLMs
Z Yang, Y Zhang, Y Wang, Z Xu, J Lin, Z Sui
arXiv preprint arXiv:2412.19513, 2024
2024
Articles 1–7