Rethinking the role of demonstrations: What makes in-context learning work? S Min, X Lyu, A Holtzman, M Artetxe, M Lewis, H Hajishirzi, L Zettlemoyer arXiv preprint arXiv:2202.12837, 2022 | 1296 | 2022 |
FActScore: Fine-grained atomic evaluation of factual precision in long form text generation S Min, K Krishna, X Lyu, M Lewis, W Yih, PW Koh, M Iyyer, L Zettlemoyer, ... arXiv preprint arXiv:2305.14251, 2023 | 474 | 2023 |
Dolma: An open corpus of three trillion tokens for language model pretraining research L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ... arXiv preprint arXiv:2402.00159, 2024 | 126* | 2024 |
Z-ICL: Zero-shot in-context learning with pseudo-demonstrations X Lyu, S Min, I Beltagy, L Zettlemoyer, H Hajishirzi arXiv preprint arXiv:2212.09865, 2022 | 56 | 2022 |
Prompt waywardness: The curious case of discretized interpretation of continuous prompts D Khashabi, S Lyu, S Min, L Qin, K Richardson, S Welleck, H Hajishirzi, ... arXiv preprint arXiv:2112.08348, 2021 | 55* | 2021 |
T\" ulu 3: Pushing frontiers in open language model post-training N Lambert, J Morrison, V Pyatkin, S Huang, H Ivison, F Brahman, ... arXiv preprint arXiv:2411.15124, 2024 | 13 | 2024 |
HREF: Human Response-Guided Evaluation of Instruction Following in Language Models X Lyu, Y Wang, H Hajishirzi, P Dasigi arXiv preprint arXiv:2412.15524, 2024 | | 2024 |
Leveraging Set Assumption for Membership Inference in Language Models X Lyu, A Holtzman, N Mireshghallah, Y Elazar, S Min, H Hajishirzi, ... | | |