Reasoning or Reciting? Exploring the capabilities and limitations of language models through counterfactual tasks. Z Wu, L Qiu, A Ross, E Akyürek, B Chen, B Wang, N Kim, J Andreas, et al. NAACL 2024, 2023. Cited by 166.

We're Afraid Language Models Aren't Modeling Ambiguity. A Liu, Z Wu, J Michael, A Suhr, P West, A Koller, S Swayamdipta, et al. EMNLP 2023. Cited by 69.

Dynamic sparsity neural networks for automatic speech recognition. Z Wu, D Zhao, Q Liang, J Yu, A Gulati, R Pang. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and …, 2021. Cited by 50.

Infusing finetuning with semantic dependencies. Z Wu, H Peng, NA Smith. Transactions of the Association for Computational Linguistics 9, 226-242, 2021. Cited by 44.

ABC: Attention with bounded-memory control. H Peng, J Kasai, N Pappas, D Yogatama, Z Wu, L Kong, R Schwartz, et al. ACL 2022, 2021. Cited by 21.

WTMED at MEDIQA 2019: A hybrid approach to biomedical natural language inference. Z Wu, Y Song, S Huang, Y Tian, F Xia. Proceedings of the 18th BioNLP Workshop and Shared Task, 415-426, 2019. Cited by 16.

Understanding Mention Detector-Linker Interaction for Neural Coreference Resolution. Z Wu, M Gardner. Proceedings of the Fourth Workshop on Computational Models of Reference …, 2021. Cited by 14.

Synergizing Spatial Optimization with Large Language Models for Open-Domain Urban Itinerary Planning. Y Tang, Z Wang, A Qu, Y Yan, Z Wu, K Hou, D Zhuang, X Guo, J Zhao, et al. EMNLP 2024. Cited by 13.

Reuse Your Rewards: Reward Model Transfer for Zero-Shot Cross-Lingual Alignment. Z Wu, A Balashankar, Y Kim, J Eisenstein, A Beirami. EMNLP 2024. Cited by 8.

Transparency helps reveal when language models learn meaning. Z Wu, W Merrill, H Peng, I Beltagy, NA Smith. Transactions of the Association for Computational Linguistics 11, 617-634, 2023. Cited by 6.

Learning with latent structures in natural language processing: A survey. Z Wu. arXiv preprint arXiv:2201.00490, 2022. Cited by 6.

Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment. W Merrill*, Z Wu*, N Naka, Y Kim, T Linzen. Findings of ACL 2024. Cited by 5.

Modeling Context With Linear Attention for Scalable Document-Level Translation. Z Wu, H Peng, N Pappas, NA Smith. Findings of EMNLP 2022. Cited by 5.

Continued Pretraining for Better Zero- and Few-Shot Promptability. Z Wu, RL Logan IV, P Walsh, A Bhagia, D Groeneveld, S Singh, I Beltagy. EMNLP 2022. Cited by 4.

Sparkle: Mastering basic spatial capabilities in vision language models elicits generalization to composite spatial reasoning. Y Tang, A Qu, Z Wang, D Zhuang, Z Wu, W Ma, S Wang, Y Zheng, Z Zhao, et al. arXiv preprint arXiv:2410.16162, 2024. Cited by 2.

A Taxonomy of Ambiguity Types for NLP. MY Li, A Liu, Z Wu, NA Smith. arXiv preprint arXiv:2403.14072, 2024. Cited by 2.

The Semantic Hub Hypothesis: Language Models Share Semantic Representations Across Languages and Modalities. Z Wu, XV Yu, D Yogatama, J Lu, Y Kim. arXiv preprint arXiv:2411.04986, 2024.