Title, authors, venue | Cited by | Year |
Gemini: A family of highly capable multimodal models Gemini Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ... arXiv preprint arXiv:2312.11805, 2023 | 2562 | 2023 |
The flan collection: Designing data and methods for effective instruction tuning S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ... ICML, 2023 | 674 | 2023 |
Spot: Better frozen model adaptation through soft prompt transfer T Vu, B Lester, N Constant, R Al-Rfou, D Cer ACL, 2022 | 287 | 2022 |
Exploring and predicting transferability across NLP tasks T Vu, T Wang, T Munkhdalai, A Sordoni, A Trischler, A Mattarella-Micke, ... EMNLP, 2020 | 175 | 2020 |
FreshLLMs: Refreshing large language models with search engine augmentation T Vu, M Iyyer, X Wang, N Constant, J Wei, J Wei, C Tar, YH Sung, D Zhou, ... ACL, 2024 | 158 | 2024 |
JAIST: Combining Multiple Features for Answer Selection in Community Question Answering Q Tran, V Tran, T Vu, M Nguyen, S Pham SemEval@NAACL, 2015 | 99 | 2015 |
Sentence Simplification with Memory-Augmented Neural Networks T Vu, B Hu, T Munkhdalai, H Yu NAACL, 2018 | 87 | 2018 |
Mixture-of-experts meets instruction tuning: A winning combination for large language models S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ... ICLR, 2024 | 67 | 2024 |
STraTA: Self-Training with Task Augmentation for Better Few-shot Learning T Vu, MT Luong, QV Le, G Simon, M Iyyer EMNLP, 2021 | 62 | 2021 |
Overcoming catastrophic forgetting in zero-shot cross-lingual generation T Vu, A Barua, B Lester, D Cer, M Iyyer, N Constant EMNLP, 2022 | 60 | 2022 |
Self-evaluation improves selective generation in large language models J Ren, Y Zhao, T Vu, PJ Liu, B Lakshminarayanan ICBINB@NeurIPS, 2023 | 33 | 2023 |
Foundational autoraters: Taming large language models for better automatic evaluation T Vu*, K Krishna*, S Alzubi, C Tar, M Faruqui, YH Sung EMNLP, 2024 | 26* | 2024 |
Learning to simplify children stories with limited data T Vu, G Tran, S Pham ACIIDS, 2014 | 24 | 2014 |