Quoc V. Le
Research Scientist, Google — Verified email at stanford.edu — Homepage
Title
Cited by
Year
Sequence to sequence learning with neural networks
I Sutskever, O Vinyals, QV Le
arXiv preprint arXiv:1409.3215, 2014
29074 · 2014
EfficientNet: Rethinking model scaling for convolutional neural networks
M Tan, Q Le
International Conference on Machine Learning, 6105-6114, 2019
26491 · 2019
Distributed representations of sentences and documents
Q Le, T Mikolov
International conference on machine learning, 1188-1196, 2014
13529 · 2014
Chain-of-thought prompting elicits reasoning in large language models
J Wei, X Wang, D Schuurmans, M Bosma, F Xia, E Chi, QV Le, D Zhou
Advances in neural information processing systems 35, 24824-24837, 2022
12100* · 2022
XLNet: Generalized autoregressive pretraining for language understanding
Z Yang, Z Dai, Y Yang, J Carbonell, R Salakhutdinov, QV Le
arXiv preprint arXiv:1906.08237, 2019
10850 · 2019
Searching for MobileNetV3
A Howard, M Sandler, G Chu, LC Chen, B Chen, M Tan, W Wang, Y Zhu, ...
Proceedings of the IEEE/CVF international conference on computer vision …, 2019
10088 · 2019
Google's neural machine translation system: Bridging the gap between human and machine translation
Y Wu, M Schuster, Z Chen, QV Le, M Norouzi, W Macherey, M Krikun, ...
arXiv preprint arXiv:1609.08144, 2016
9650 · 2016
EfficientDet: Scalable and efficient object detection
M Tan, R Pang, QV Le
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2020
8178 · 2020
Learning transferable architectures for scalable image recognition
B Zoph, V Vasudevan, J Shlens, QV Le
Proceedings of the IEEE conference on computer vision and pattern …, 2018
7709 · 2018
Neural architecture search with reinforcement learning
B Zoph, QV Le
arXiv preprint arXiv:1611.01578, 2016
7110 · 2016
Searching for activation functions
P Ramachandran, B Zoph, QV Le
arXiv preprint arXiv:1710.05941, 2017
5200* · 2017
Large scale distributed deep networks
J Dean, GS Corrado, R Monga, K Chen, M Devin, QV Le, AY Ng, ...
Advances In Neural Information Processing Systems 25 (7), 2012
5147 · 2012
AutoAugment: Learning augmentation policies from data
ED Cubuk, B Zoph, D Mane, V Vasudevan, QV Le
arXiv preprint arXiv:1805.09501, 2018
5007* · 2018
Transformer-XL: Attentive language models beyond a fixed-length context
Z Dai, Z Yang, Y Yang, J Carbonell, QV Le, R Salakhutdinov
arXiv preprint arXiv:1901.02860, 2019
4812 · 2019
ELECTRA: Pre-training text encoders as discriminators rather than generators
K Clark, MT Luong, QV Le, CD Manning
arXiv preprint arXiv:2003.10555, 2020
4762 · 2020
SpecAugment: A simple data augmentation method for automatic speech recognition
DS Park, W Chan, Y Zhang, CC Chiu, B Zoph, ED Cubuk, QV Le
arXiv preprint arXiv:1904.08779, 2019
4453 · 2019
RandAugment: Practical automated data augmentation with a reduced search space
ED Cubuk, B Zoph, J Shlens, QV Le
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2020
4379 · 2020
MnasNet: Platform-aware neural architecture search for mobile
M Tan, B Chen, R Pang, V Vasudevan, M Sandler, A Howard, QV Le
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2019
4019 · 2019
Scaling up visual and vision-language representation learning with noisy text supervision
C Jia, Y Yang, Y Xia, YT Chen, Z Parekh, H Pham, Q Le, YH Sung, Z Li, ...
International conference on machine learning, 4904-4916, 2021
3951 · 2021
EfficientNetV2: Smaller models and faster training
M Tan, Q Le
International conference on machine learning, 10096-10106, 2021
3666 · 2021
Articles 1–20