Tu Vu
Research Scientist, Google DeepMind; Assistant Professor, Virginia Tech
Verified email at google.com - Homepage
Title · Cited by · Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 2488 · 2023
The Flan Collection: Designing data and methods for effective instruction tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
ICML, 2023
Cited by 667 · 2023
Gemini: A family of highly capable multimodal models
R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 306 · 2023
SPoT: Better frozen model adaptation through soft prompt transfer
T Vu, B Lester, N Constant, R Al-Rfou, D Cer
ACL, 2022
Cited by 286 · 2022
Gemini: a family of highly capable multimodal models
Gemini Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 239 · 2023
Gemini: A family of highly capable multimodal models
R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 222* · 2023
Exploring and predicting transferability across NLP tasks
T Vu, T Wang, T Munkhdalai, A Sordoni, A Trischler, A Mattarella-Micke, ...
EMNLP, 2020
Cited by 174 · 2020
FreshLLMs: Refreshing large language models with search engine augmentation
T Vu, M Iyyer, X Wang, N Constant, J Wei, J Wei, C Tar, YH Sung, D Zhou, ...
ACL, 2024
Cited by 156 · 2024
Gemini: A Family of Highly Capable Multimodal Models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 143* · 2023
JAIST: Combining Multiple Features for Answer Selection in Community Question Answering
Q Tran, V Tran, T Vu, M Nguyen, S Pham
SemEval@NAACL, 2015
Cited by 99 · 2015
Sentence Simplification with Memory-Augmented Neural Networks
T Vu, B Hu, T Munkhdalai, H Yu
NAACL, 2018
Cited by 87 · 2018
Mixture-of-experts meets instruction tuning: A winning combination for large language models
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
ICLR, 2024
Cited by 67 · 2024
Gemini: A family of highly capable multimodal models
R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 66* · 2023
STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
T Vu, MT Luong, QV Le, G Simon, M Iyyer
EMNLP, 2021
Cited by 62 · 2021
Overcoming catastrophic forgetting in zero-shot cross-lingual generation
T Vu, A Barua, B Lester, D Cer, M Iyyer, N Constant
EMNLP, 2022
Cited by 60 · 2022
Gemini: A Family of Highly Capable Multimodal Models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 46 · 2023
Self-evaluation improves selective generation in large language models
J Ren, Y Zhao, T Vu, PJ Liu, B Lakshminarayanan
ICBINB@NeurIPS, 2023
Cited by 33 · 2023
Flan-MoE: Scaling Instruction-Finetuned Language Models with Sparse Mixture of Experts
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
ICLR, 2024
Cited by 32 · 2024
Foundational autoraters: Taming large language models for better automatic evaluation
T Vu*, K Krishna*, S Alzubi, C Tar, M Faruqui, YH Sung
EMNLP, 2024
Cited by 26* · 2024
Learning to simplify children stories with limited data
T Vu, G Tran, S Pham
ACIIDS, 2014
Cited by 23 · 2014
Articles 1–20