Are NLP Models really able to Solve Simple Math Word Problems? A Patel, S Bhattamishra, N Goyal. NAACL, 2021. Cited by 673.
On the Computational Power of Transformers and Its Implications in Sequence Modeling. S Bhattamishra, A Patel, N Goyal. CoNLL, 2020. Cited by 77.
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions. S Bhattamishra, A Patel, P Blunsom, V Kanade. ICLR, 2024. Cited by 38.
Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions. S Bhattamishra, A Patel, V Kanade, P Blunsom. ACL, 2023. Cited by 36.
Revisiting the Compositional Generalization Abilities of Neural Sequence Models. A Patel, S Bhattamishra, P Blunsom, N Goyal. ACL, 2022. Cited by 31.
VehicleChain: Blockchain-Based Vehicular Data Transmission Scheme for Smart City. A Patel, N Shah, T Limbasiya, D Das. IEEE SMC, 2019. Cited by 24.
When Can Transformers Ground and Compose: Insights from Compositional Generalization Benchmarks. A Sikarwar, A Patel, N Goyal. EMNLP, 2022. Cited by 13.
Evaluating In-Context Learning of Libraries for Code Generation. A Patel, S Reddy, D Bahdanau, P Dasigi. NAACL, 2024. Cited by 9.
Universal Adversarial Triggers Are Not Universal. N Meade, A Patel, S Reddy. arXiv preprint arXiv:2404.16020, 2024. Cited by 5.
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations. A Patel, S Bhattamishra, S Reddy, D Bahdanau. EMNLP, 2023. Cited by 4.
Humanity's Last Exam. L Phan, A Gatti, Z Han, N Li, J Hu, H Zhang, S Shi, M Choi, A Agrawal, et al. arXiv preprint arXiv:2501.14249, 2025. Cited by 2.