On the opportunities and risks of foundation models. R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, et al. arXiv preprint arXiv:2108.07258, 2021. | Cited by: 4712 | 2021 |
Stanford Alpaca: An instruction-following LLaMA model. R Taori, I Gulrajani, T Zhang, Y Dubois, X Li, C Guestrin, P Liang, et al. | Cited by: 2809* | 2023 |
Emergent abilities of large language models. J Wei, Y Tay, R Bommasani, C Raffel, B Zoph, S Borgeaud, D Yogatama, et al. arXiv preprint arXiv:2206.07682, 2022. | Cited by: 2752 | 2022 |
Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. S Sagawa, PW Koh, TB Hashimoto, P Liang. arXiv preprint arXiv:1911.08731, 2019. | Cited by: 1872 | 2019 |
Holistic evaluation of language models. P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, et al. arXiv preprint arXiv:2211.09110, 2022. | Cited by: 1190 | 2022 |
Diffusion-LM improves controllable text generation. X Li, J Thickstun, I Gulrajani, PS Liang, TB Hashimoto. Advances in Neural Information Processing Systems 35, 4328-4343, 2022. | Cited by: 729 | 2022 |
Fairness without demographics in repeated loss minimization. T Hashimoto, M Srivastava, H Namkoong, P Liang. International Conference on Machine Learning, 1929-1938, 2018. | Cited by: 715 | 2018 |
Discovery of directional and nondirectional pioneer transcription factors by modeling DNase profile magnitude and shape. RI Sherwood, T Hashimoto, CW O'Donnell, S Lewis, AA Barkal, et al. Nature Biotechnology 32 (2), 171-178, 2014. | Cited by: 508 | 2014 |
AlpacaEval: An automatic evaluator of instruction-following models. X Li, T Zhang, Y Dubois, R Taori, I Gulrajani, C Guestrin, P Liang, et al. | Cited by: 497 | 2023 |
Benchmarking large language models for news summarization. T Zhang, F Ladhak, E Durmus, P Liang, K McKeown, TB Hashimoto. Transactions of the Association for Computational Linguistics 12, 39-57, 2024. | Cited by: 470 | 2024 |
AlpacaFarm: A simulation framework for methods that learn from human feedback. Y Dubois, CX Li, R Taori, T Zhang, I Gulrajani, J Ba, C Guestrin, PS Liang, et al. Advances in Neural Information Processing Systems 36, 2024. | Cited by: 428 | 2024 |
Whose opinions do language models reflect? S Santurkar, E Durmus, F Ladhak, C Lee, P Liang, T Hashimoto. International Conference on Machine Learning, 29971-30004, 2023. | Cited by: 375 | 2023 |
Generating sentences by editing prototypes. K Guu, TB Hashimoto, Y Oren, P Liang. Transactions of the Association for Computational Linguistics 6, 437-450, 2018. | Cited by: 375 | 2018 |
Large language models can be strong differentially private learners. X Li, F Tramer, P Liang, T Hashimoto. arXiv preprint arXiv:2110.05679, 2021. | Cited by: 364 | 2021 |
Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. F Bianchi, P Kalluri, E Durmus, F Ladhak, M Cheng, D Nozza, et al. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and …, 2023. | Cited by: 285 | 2023 |
Contrastive decoding: Open-ended text generation as optimization. XL Li, A Holtzman, D Fried, P Liang, J Eisner, T Hashimoto, L Zettlemoyer, et al. arXiv preprint arXiv:2210.15097, 2022. | Cited by: 263 | 2022 |
Unifying human and statistical evaluation for natural language generation. TB Hashimoto, H Zhang, P Liang. arXiv preprint arXiv:1904.02792, 2019. | Cited by: 258 | 2019 |
Exploiting programmatic behavior of LLMs: Dual-use through standard security attacks. D Kang, X Li, I Stoica, C Guestrin, M Zaharia, T Hashimoto. 2024 IEEE Security and Privacy Workshops (SPW), 132-143, 2024. | Cited by: 231 | 2024 |
Jury learning: Integrating dissenting voices into machine learning models. ML Gordon, MS Lam, JS Park, K Patel, J Hancock, T Hashimoto, et al. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems …, 2022. | Cited by: 209 | 2022 |
Distributionally robust language modeling. Y Oren, S Sagawa, TB Hashimoto, P Liang. arXiv preprint arXiv:1909.02060, 2019. | Cited by: 209 | 2019 |