| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Rethinking the role of demonstrations: What makes in-context learning work? | S Min, X Lyu, A Holtzman, M Artetxe, M Lewis, H Hajishirzi, L Zettlemoyer | arXiv preprint arXiv:2202.12837, 2022 | 1287 | 2022 |
| FActScore: Fine-grained atomic evaluation of factual precision in long form text generation | S Min, K Krishna, X Lyu, M Lewis, W Yih, PW Koh, M Iyyer, L Zettlemoyer, ... | arXiv preprint arXiv:2305.14251, 2023 | 467 | 2023 |
| Dolma: An open corpus of three trillion tokens for language model pretraining research | L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ... | arXiv preprint arXiv:2402.00159, 2024 | 119* | 2024 |
| Z-ICL: Zero-shot in-context learning with pseudo-demonstrations | X Lyu, S Min, I Beltagy, L Zettlemoyer, H Hajishirzi | arXiv preprint arXiv:2212.09865, 2022 | 55 | 2022 |
| Prompt waywardness: The curious case of discretized interpretation of continuous prompts | D Khashabi, S Lyu, S Min, L Qin, K Richardson, S Welleck, H Hajishirzi, ... | arXiv preprint arXiv:2112.08348, 2021 | 55* | 2021 |
| Tülu 3: Pushing frontiers in open language model post-training | N Lambert, J Morrison, V Pyatkin, S Huang, H Ivison, F Brahman, ... | arXiv preprint arXiv:2411.15124, 2024 | 9 | 2024 |
| HREF: Human Response-Guided Evaluation of Instruction Following in Language Models | X Lyu, Y Wang, H Hajishirzi, P Dasigi | arXiv preprint arXiv:2412.15524, 2024 | | 2024 |