Language models are few-shot learners T Brown, B Mann, N Ryder, M Subbiah, JD Kaplan, P Dhariwal, ... Advances in neural information processing systems 33, 1877-1901, 2020 | 48887* | 2020 |
Deep reinforcement learning from human preferences PF Christiano, J Leike, T Brown, M Martic, S Legg, D Amodei Advances in neural information processing systems 30, 2017 | 3482 | 2017 |
Scaling laws for neural language models J Kaplan, S McCandlish, T Henighan, TB Brown, B Chess, R Child, ... arXiv preprint arXiv:2001.08361, 2020 | 3119* | 2020 |
Extracting training data from large language models N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ... 30th USENIX Security Symposium (USENIX Security 21), 2633-2650, 2021 | 1972 | 2021 |
Training a helpful and harmless assistant with reinforcement learning from human feedback Y Bai, A Jones, K Ndousse, A Askell, A Chen, N DasSarma, D Drain, ... arXiv preprint arXiv:2204.05862, 2022 | 1671 | 2022 |
Scaling laws for neural language models J Kaplan, S McCandlish, T Henighan, TB Brown, B Chess, R Child, S Gray, A Radford, J Wu, D Amodei arXiv preprint arXiv:2001.08361, 2020 | 1603 | 2020 |
Fine-tuning language models from human preferences DM Ziegler, N Stiennon, J Wu, TB Brown, A Radford, D Amodei, ... arXiv preprint arXiv:1909.08593, 2019 | 1559 | 2019 |
Constitutional AI: Harmlessness from AI feedback Y Bai, S Kadavath, S Kundu, A Askell, J Kernion, A Jones, A Chen, ... arXiv preprint arXiv:2212.08073, 2022 | 1294 | 2022 |
Adversarial patch TB Brown, D Mané, A Roy, M Abadi, J Gilmer arXiv preprint arXiv:1712.09665, 2017 | 1121 | 2017 |
Technical report on the cleverhans v2.1.0 adversarial examples library N Papernot, F Faghri, N Carlini, I Goodfellow, R Feinman, A Kurakin, ... arXiv preprint arXiv:1610.00768, 2016 | 605* | 2016 |
Language models are few-shot learners TB Brown arXiv preprint, 2020 | 601 | 2020 |
Adversarial patch TB Brown, D Mané, A Roy, M Abadi, J Gilmer 31st Conference on Neural Information Processing Systems (NIPS 2017), 2017 | 564 | 2017 |
Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned D Ganguli, L Lovitt, J Kernion, A Askell, Y Bai, S Kadavath, B Mann, ... arXiv preprint arXiv:2209.07858, 2022 | 479 | 2022 |
A general language assistant as a laboratory for alignment A Askell, Y Bai, A Chen, D Drain, D Ganguli, T Henighan, A Jones, ... arXiv preprint arXiv:2112.00861, 2021 | 394 | 2021 |
In-context learning and induction heads C Olsson, N Elhage, N Nanda, N Joseph, N DasSarma, T Henighan, ... arXiv preprint arXiv:2209.11895, 2022 | 390 | 2022 |
Scaling laws for autoregressive generative modeling T Henighan, J Kaplan, M Katz, M Chen, C Hesse, J Jackson, H Jun, ... arXiv preprint arXiv:2010.14701, 2020 | 388 | 2020 |
A mathematical framework for transformer circuits N Elhage, N Nanda, C Olsson, T Henighan, N Joseph, B Mann, A Askell, ... Transformer Circuits Thread 1 (1), 12, 2021 | 336 | 2021 |
cleverhans v2.0.0: an adversarial machine learning library N Papernot, I Goodfellow, R Sheatsley, R Feinman, P McDaniel arXiv preprint arXiv:1610.00768, 2016 | 330 | 2016 |
Predictability and surprise in large generative models D Ganguli, D Hernandez, L Lovitt, A Askell, Y Bai, A Chen, T Conerly, ... Proceedings of the 2022 ACM Conference on Fairness, Accountability, and …, 2022 | 298 | 2022 |
Language models are few-shot learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... arXiv preprint, 2020 | 264 | 2020 |