| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Language Models are Few-Shot Learners | TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... | arXiv preprint arXiv:2005.14165 | 49243* | 2020 |
| GPT-4 Technical Report | J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ... | arXiv preprint arXiv:2303.08774 | 8077 | 2023 |
| GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models | A Nichol, P Dhariwal, A Ramesh, P Shyam, P Mishkin, B McGrew, ... | arXiv preprint arXiv:2112.10741 | 3487 | 2021 |
| Gemini: A family of highly capable multimodal models | Gemini Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ... | arXiv preprint arXiv:2312.11805 | 2532 | 2023 |
| Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | Gemini Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ... | arXiv preprint arXiv:2403.05530 | 1006 | 2024 |
| Text and code embeddings by contrastive pre-training | A Neelakantan, T Xu, R Puri, A Radford, JM Han, J Tworek, Q Yuan, ... | arXiv preprint arXiv:2201.10005 | 445 | 2022 |
| Model-Based Active Exploration | P Shyam, W Jaśkowski, F Gomez | International Conference on Machine Learning (ICML), 2019 | 234 | 2018 |
| Attentive Recurrent Comparators | P Shyam, S Gupta, A Dukkipati | International Conference on Machine Learning (ICML), 2017, 3173-3181 | 156 | 2017 |
| Training agents using upside-down reinforcement learning | RK Srivastava, P Shyam, F Mutz, W Jaśkowski, J Schmidhuber | arXiv preprint arXiv:1912.02877 | 132 | 2019 |
| Artificial Intelligence for Prosthetics - Challenge Solutions | Ł Kidziński, C Ong, SP Mohanty, J Hicks, SF Carroll, B Zhou, H Zeng, ... | arXiv preprint arXiv:1902.02441 | 53 | 2019 |
| Unsupervised neural machine translation with generative language models only | JM Han, I Babuschkin, H Edwards, A Neelakantan, T Xu, S Polu, A Ray, ... | arXiv preprint arXiv:2110.05448 | 28 | 2021 |
| OmnixR: Evaluating omni-modality language models on reasoning across modalities | L Chen, H Hu, M Zhang, Y Chen, Z Wang, Y Li, P Shyam, T Zhou, ... | arXiv preprint arXiv:2410.12219 | 1 | 2024 |