Ping (Iris) Yu
FAIR researcher at Meta AI
Verified email at meta.com
Title
Cited by
Year
Lima: Less is more for alignment
C Zhou, P Liu, P Xu, S Iyer, J Sun, Y Mao, X Ma, A Efrat, P Yu, L Yu, ...
Advances in Neural Information Processing Systems 36, 55006-55021, 2023
983 · 2023
Self-alignment with instruction backtranslation
X Li, P Yu, C Zhou, T Schick, O Levy, L Zettlemoyer, J Weston, M Lewis
arXiv preprint arXiv:2308.06259, 2023
216 · 2023
Chameleon: Mixed-modal early-fusion foundation models
C Team
arXiv preprint arXiv:2405.09818, 2024
174 · 2024
Opt-iml: Scaling language model instruction meta learning through the lens of generalization
S Iyer, XV Lin, R Pasunuru, T Mihaylov, D Simig, P Yu, K Shuster, T Wang, ...
arXiv preprint arXiv:2212.12017, 2022
110 · 2022
Shepherd: A critic for language model generation
T Wang, P Yu, XE Tan, S O'Brien, R Pasunuru, J Dwivedi-Yu, ...
arXiv preprint arXiv:2308.04592, 2023
66 · 2023
Learning diverse stochastic human-action generators by learning smooth latent transitions
Z Wang, P Yu, Y Zhao, R Zhang, Y Zhou, J Yuan, C Chen
Proceedings of the AAAI conference on artificial intelligence 34 (07), 12281 …, 2020
53 · 2020
Feature quantization improves gan training
Y Zhao, C Li, P Yu, J Gao, C Chen
arXiv preprint arXiv:2004.02088, 2020
46 · 2020
Structure-aware human-action generation
P Yu, Y Zhao, C Li, J Yuan, C Chen
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
41 · 2020
Self-taught evaluators
T Wang, I Kulikov, O Golovneva, P Yu, W Yuan, J Dwivedi-Yu, RY Pang, ...
arXiv preprint arXiv:2408.02666, 2024
40 · 2024
Distilling system 2 into system 1
P Yu, J Xu, J Weston, I Kulikov
arXiv preprint arXiv:2407.06023, 2024
34 · 2024
The art of llm refinement: Ask, refine, and trust
K Shridhar, K Sinha, A Cohen, T Wang, P Yu, R Pasunuru, M Sachan, ...
arXiv preprint arXiv:2311.07961, 2023
24 · 2023
Bayesian meta sampling for fast uncertainty adaptation
Z Wang, Y Zhao, P Yu, R Zhang, C Chen
International Conference on Learning Representations, 2020
23 · 2020
Efficient tool use with chain-of-abstraction reasoning
S Gao, J Dwivedi-Yu, P Yu, XE Tan, R Pasunuru, O Golovneva, K Sinha, ...
arXiv preprint arXiv:2401.17464, 2024
22 · 2024
Lima: Less is more for alignment, 2023
C Zhou, P Liu, P Xu, S Iyer, J Sun, Y Mao, X Ma, A Efrat, P Yu, L Yu, ...
URL https://arxiv.org/abs/2305.11206, 2023
18 · 2023
Alert: Adapting language models to reasoning tasks
P Yu, T Wang, O Golovneva, B AlKhamissi, S Verma, Z Jin, G Ghosh, ...
arXiv preprint arXiv:2212.08286, 2022
17 · 2022
LIMA: Less Is More for Alignment. CoRR abs/2305.11206 (2023)
C Zhou, P Liu, P Xu, S Iyer, J Sun, Y Mao, X Ma, A Efrat, P Yu, L Yu, ...
arXiv preprint arXiv:2305.11206 10, 2023
15 · 2023
OPT-R: Exploring the role of explanations in finetuning and prompting for reasoning skills of large language models
B AlKhamissi, S Verma, P Yu, Z Jin, A Celikyilmaz, M Diab
arXiv preprint arXiv:2305.12001, 2023
14 · 2023
LIMA: Less Is More for Alignment. arXiv 2023
C Zhou, P Liu, P Xu, S Iyer, J Sun, Y Mao, X Ma, A Efrat, P Yu, L Yu
arXiv preprint arXiv:2305.11206
14
Efficient language modeling with sparse all-mlp
P Yu, M Artetxe, M Ott, S Shleifer, H Gong, V Stoyanov, X Li
arXiv preprint arXiv:2203.06850, 2022
13 · 2022
ALERT: Adapt language models to reasoning tasks
P Yu, T Wang, O Golovneva, B AlKhamissi, S Verma, Z Jin, G Ghosh, ...
Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023
11 · 2023
Articles 1–20