Yao Zhao
Google Brain
Verified email at google.com
Title · Cited by · Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 2517 · 2023
Pegasus: Pre-training with extracted gap-sentences for abstractive summarization
J Zhang, Y Zhao, M Saleh, P Liu
International conference on machine learning, 11328-11339, 2020
Cited by 2359 · 2020
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
G Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ...
arXiv preprint arXiv:2403.05530, 2024
Cited by 993 · 2024
Adversarial attacks and defences competition
A Kurakin, I Goodfellow, S Bengio, Y Dong, F Liao, M Liang, T Pang, ...
The NIPS'17 Competition: Building Intelligent Systems, 195-231, 2018
Cited by 372 · 2018
Paragraph-level neural question generation with maxout pointer and gated self-attention networks
Y Zhao, X Ni, Y Ding, Q Ke
Proceedings of the 2018 conference on empirical methods in natural language …, 2018
Cited by 339 · 2018
The tethering of chromatin to the nuclear envelope supports nuclear mechanics
SM Schreiner, PK Koo, Y Zhao, SGJ Mochrie, MC King
Nature communications 6 (1), 7159, 2015
Cited by 236 · 2015
Slic-hf: Sequence likelihood calibration with human feedback
Y Zhao, R Joshi, T Liu, M Khalman, M Saleh, PJ Liu
arXiv preprint arXiv:2305.10425, 2023
Cited by 229 · 2023
Talm: Tool augmented language models
A Parisi, Y Zhao, N Fiedel
arXiv preprint arXiv:2205.12255, 2022
Cited by 202 · 2022
Statistical rejection sampling improves preference optimization
T Liu, Y Zhao, R Joshi, M Khalman, M Saleh, PJ Liu, J Liu
arXiv preprint arXiv:2309.06657, 2023
Cited by 160 · 2023
Planning with learned entity prompts for abstractive summarization
S Narayan, Y Zhao, J Maynez, G Simões, V Nikolaev, R McDonald
Transactions of the Association for Computational Linguistics 9, 1475-1492, 2021
Cited by 134* · 2021
Calibrating sequence likelihood improves conditional language generation
Y Zhao, M Khalman, R Joshi, S Narayan, M Saleh, PJ Liu
arXiv preprint arXiv:2210.00045, 2022
Cited by 116 · 2022
Direct language model alignment from online ai feedback
S Guo, B Zhang, T Liu, T Liu, M Khalman, F Llinares, A Rame, T Mesnard, ...
arXiv preprint arXiv:2402.04792, 2024
Cited by 94 · 2024
Out-of-distribution detection and selective generation for conditional language models
J Ren, J Luo, Y Zhao, K Krishna, M Saleh, B Lakshminarayanan, PJ Liu
The Eleventh International Conference on Learning Representations, 2022
Cited by 78 · 2022
Investigating efficiently extending transformers for long input summarization
J Phang, Y Zhao, PJ Liu
arXiv preprint arXiv:2208.04347, 2022
Cited by 71 · 2022
Self-evaluation improves selective generation in large language models
J Ren, Y Zhao, T Vu, PJ Liu, B Lakshminarayanan
Proceedings on, 49-64, 2023
Cited by 33 · 2023
Lipo: Listwise preference optimization through learning-to-rank
T Liu, Z Qin, J Wu, J Shen, M Khalman, R Joshi, Y Zhao, M Saleh, ...
arXiv preprint arXiv:2402.01878, 2024
Cited by 32 · 2024
A well-composed text is half done! composition sampling for diverse conditional generation
S Narayan, G Simões, Y Zhao, J Maynez, D Das, M Collins, M Lapata
arXiv preprint arXiv:2203.15108, 2022
Cited by 32 · 2022
Seal: Segment-wise extractive-abstractive long-form text summarization
Y Zhao, M Saleh, PJ Liu
arXiv preprint arXiv:2006.10213, 2020
Cited by 30 · 2020
ForumSum: A multi-speaker conversation summarization dataset
M Khalman, Y Zhao, M Saleh
Findings of the Association for Computational Linguistics: EMNLP 2021, 4592-4599, 2021
Cited by 23 · 2021
Smart: Sentences as basic units for text evaluation
RK Amplayo, PJ Liu, Y Zhao, S Narayan
arXiv preprint arXiv:2208.01030, 2022
Cited by 20 · 2022
Articles 1–20