Yongchao Zhou
Verified email at mail.utoronto.ca
Title / Cited by / Year
Large language models are human-level prompt engineers
Y Zhou, AI Muresanu, Z Han, K Paster, S Pitis, H Chan, J Ba
International Conference on Learning Representations (ICLR 2023), 2023
Cited by 984 · 2023
Dataset distillation using neural feature regression
Y Zhou, E Nezhadarya, J Ba
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022
Cited by 171 · 2022
On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes
R Agarwal, N Vieillard, Y Zhou, P Stanczyk, S Ramos, M Geist, O Bachem
International Conference on Learning Representations (ICLR 2024), 2024
Cited by 136* · 2024
Identifying the risks of LM agents with an LM-emulated sandbox
Y Ruan, H Dong, A Wang, S Pitis, Y Zhou, J Ba, Y Dubois, CJ Maddison, ...
International Conference on Learning Representations (ICLR 2024), 2024
Cited by 69 · 2024
DistillSpec: Improving speculative decoding via knowledge distillation
Y Zhou, K Lyu, AS Rawat, AK Menon, A Rostamizadeh, S Kumar, JF Kagy, ...
International Conference on Learning Representations (ICLR 2024), 2024
Cited by 65 · 2024
Transcriptome-wide off-target effects of steric-blocking oligonucleotides
EM Holgersen, S Gandhi, Y Zhou, J Kim, B Vaz, J Bogojeski, M Bugno, ...
Nucleic Acid Therapeutics 31 (6), 392-403, 2021
Cited by 62 · 2021
Training on Thin Air: Improve Image Classification with Generated Data
Y Zhou, H Sahak, J Ba
ICML Workshop on Data-centric Machine Learning, 2023
Cited by 41 · 2023
Transformers Can Achieve Length Generalization But Not Robustly
Y Zhou, U Alon, X Chen, X Wang, R Agarwal, D Zhou
arXiv preprint arXiv:2402.09371, 2024
Cited by 31 · 2024