Y Jiang, S Bordia, Z Zhong, C Dognin, M Singh, M Bansal. "HoVer: A dataset for many-hop fact extraction and claim verification." arXiv preprint arXiv:2011.03088, 2020. Cited by 139.
Y Jiang, M Bansal. "Avoiding reasoning shortcuts: Adversarial evaluation, training, and model development for multi-hop QA." arXiv preprint arXiv:1906.07132, 2019. Cited by 109.
Y Jiang, M Bansal. "Self-assembling modular networks for interpretable multi-hop reasoning." arXiv preprint arXiv:1909.05803, 2019. Cited by 102.
Y Jiang, N Joshi, YC Chen, M Bansal. "Explore, propose, and assemble: An interpretable model for multi-hop reading comprehension." arXiv preprint arXiv:1906.05210, 2019. Cited by 57.
Y Jiang, M Bansal. "Closed-book training to improve summarization encoder memory." arXiv preprint arXiv:1809.04585, 2018. Cited by 41.
Y Jiang, M Bansal. "Inducing Transformer's Compositional Generalization Ability via Auxiliary Sequence Prediction Tasks." arXiv preprint arXiv:2109.15256, 2021. Cited by 31.
Y Jiang, A Celikyilmaz, P Smolensky, P Soulos, S Rao, H Palangi, et al. "Enriching transformers with structured tensor-product representations for abstractive summarization." arXiv preprint arXiv:2106.01317, 2021. Cited by 20.
X Zhou, Y Jiang, M Bansal. "Data factors for better compositional generalization." arXiv preprint arXiv:2311.04420, 2023. Cited by 7.
Y Jiang, X Zhou, M Bansal. "Mutual exclusivity training and primitive augmentation to induce compositionality." arXiv preprint arXiv:2211.15578, 2022. Cited by 7.
Y Jiang, M Vecchio, M Bansal, A Johannsen. "Hierarchical and Dynamic Prompt Compression for Efficient Zero-shot API Usage." Findings of the Association for Computational Linguistics: EACL 2024, pp. 2162-2174, 2024. Cited by 5.
P Soulos, S Rao, C Smith, E Rosen, A Celikyilmaz, RT McCoy, Y Jiang, et al. "Structural biases for improving transformers on translation into morphologically rich languages." arXiv preprint arXiv:2208.06061, 2022. Cited by 3.
Y Jiang, M Bansal. "Learning and analyzing generation order for undirected sequence models." arXiv preprint arXiv:2112.09097, 2021. Cited by 2.
Y Jiang, X Zhou, M Bansal. "Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings." arXiv preprint arXiv:2402.06492, 2024. Cited by 1.