Yutao Sun
Verified email at mails.tsinghua.edu.cn - Homepage
Title
Cited by
Year
Why can GPT learn in-context? Language models implicitly perform gradient descent as meta-optimizers
D Dai, Y Sun, L Dong, Y Hao, S Ma, Z Sui, F Wei
arXiv preprint arXiv:2212.10559, 2022
405 · 2022
Retentive network: A successor to transformer for large language models
Y Sun, L Dong, S Huang, S Ma, Y Xia, J Xue, J Wang, F Wei
arXiv preprint arXiv:2307.08621, 2023
344 · 2023
A length-extrapolatable transformer
Y Sun, L Dong, B Patra, S Ma, S Huang, A Benhaim, V Chaudhary, ...
arXiv preprint arXiv:2212.10554, 2022
156 · 2022
Structured prompting: Scaling in-context learning to 1,000 examples
Y Hao, Y Sun, L Dong, Z Han, Y Gu, F Wei
arXiv preprint arXiv:2212.06713, 2022
53 · 2022
Prototypical calibration for few-shot learning of language models
Z Han, Y Hao, L Dong, Y Sun, F Wei
arXiv preprint arXiv:2205.10183, 2022
43 · 2022
You only cache once: Decoder-decoder architectures for language models
Y Sun, L Dong, Y Zhu, S Huang, W Wang, S Ma, Q Zhang, J Wang, F Wei
Advances in Neural Information Processing Systems 37, 7339-7361, 2025
41 · 2025
Differential transformer
T Ye, L Dong, Y Xia, Y Sun, Y Zhu, G Huang, F Wei
arXiv preprint arXiv:2410.05258, 2024
26 · 2024
Why can GPT learn in-context? Language models implicitly perform gradient descent as meta-optimizers
D Dai, Y Sun, L Dong, Y Hao, S Ma, Z Sui, F Wei
ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation …, 2023
10 · 2023
FocusLLM: Scaling LLM's Context by Parallel Decoding
Z Li, Y Zhang, T Pan, Y Sun, Z Duan, J Fang, R Han, Z Wang, J Wang
arXiv preprint arXiv:2408.11745, 2024
5 · 2024
Multimodal Latent Language Modeling with Next-Token Diffusion
Y Sun, H Bao, W Wang, Z Peng, L Dong, S Huang, J Wang, F Wei
arXiv preprint arXiv:2412.08635, 2024
1 · 2024
Articles 1–10