Tu Vu
Research Scientist, Google DeepMind; Assistant Professor, Virginia Tech
Verified email at google.com - Homepage
Title · Cited by · Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
3196 · 2023
The Flan Collection: Designing data and methods for effective instruction tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
ICML, 2023
687 · 2023
SPoT: Better frozen model adaptation through soft prompt transfer
T Vu, B Lester, N Constant, R Al-Rfou, D Cer
ACL, 2022
283 · 2022
Gemini: A family of highly capable multimodal models
R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
252* · 2023
Gemini: A Family of Highly Capable Multimodal Models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
199* · 2023
Exploring and predicting transferability across NLP tasks
T Vu, T Wang, T Munkhdalai, A Sordoni, A Trischler, A Mattarella-Micke, ...
EMNLP, 2020
174 · 2020
FreshLLMs: Refreshing large language models with search engine augmentation
T Vu, M Iyyer, X Wang, N Constant, J Wei, J Wei, C Tar, YH Sung, D Zhou, ...
ACL, 2024
169 · 2024
JAIST: Combining Multiple Features for Answer Selection in Community Question Answering
Q Tran, V Tran, T Vu, M Nguyen, S Pham
SemEval@NAACL, 2015
98 · 2015
Sentence Simplification with Memory-Augmented Neural Networks
T Vu, B Hu, T Munkhdalai, H Yu
NAACL, 2018
87 · 2018
Mixture-of-experts meets instruction tuning: A winning combination for large language models
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
ICLR, 2024
69 · 2024
Gemini: A family of highly capable multimodal models
R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
63* · 2023
STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
T Vu, MT Luong, QV Le, G Simon, M Iyyer
EMNLP, 2021
62 · 2021
Overcoming catastrophic forgetting in zero-shot cross-lingual generation
T Vu, A Barua, B Lester, D Cer, M Iyyer, N Constant
EMNLP, 2022
61 · 2022
Self-evaluation improves selective generation in large language models
J Ren, Y Zhao, T Vu, PJ Liu, B Lakshminarayanan
ICBINB@NeurIPS, 2023
36 · 2023
Foundational autoraters: Taming large language models for better automatic evaluation
T Vu*, K Krishna*, S Alzubi, C Tar, M Faruqui, YH Sung
EMNLP, 2024
28* · 2024
Learning to simplify children stories with limited data
T Vu, G Tran, S Pham
ACIIDS, 2014
23 · 2014
Dialect-robust Evaluation of Generated Text
J Sun, T Sellam, E Clark, T Vu, T Dozat, D Garrette, A Siddhant, ...
ACL, 2023
19 · 2023
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
arXiv preprint arXiv:2301.13688, 2023
18* · 2023
Leveraging QA Datasets to Improve Generative Data Augmentation
D Mekala, T Vu, T Schick, J Shang
EMNLP, 2022
16 · 2022
Integrating Multiplicative Features into Supervised Distributional Methods for Lexical Entailment
T Vu, V Shwartz
*SEM@NAACL, 2018
15 · 2018
Articles 1–20