Document ranking with a pretrained sequence-to-sequence model R Nogueira*, Z Jiang*, J Lin Findings of EMNLP 2020, 2020 | 583 | 2020 |
What the DAAM: Interpreting Stable Diffusion Using Cross Attention R Tang, L Liu, A Pandey, Z Jiang, G Yang, K Kumar, P Stenetorp, J Lin, ... Association for Computational Linguistics: ACL 2023, Best Paper Award, 2023 | 139 | 2023 |
Investigating the limitations of transformers with simple arithmetic tasks R Nogueira, Z Jiang, J Lin 1st MATH-AI Workshop, ICLR 2021, 2021 | 122 | 2021 |
“Low-resource” text classification: A parameter-free classification method with compressors Z Jiang, M Yang, M Tsirlin, R Tang, Y Dai, J Lin Findings of the Association for Computational Linguistics: ACL 2023, 6810-6828, 2023 | 88 | 2023 |
PaperRobot: Incremental draft generation of scientific ideas Q Wang, L Huang, Z Jiang, K Knight, H Ji, M Bansal, Y Luan Association for Computational Linguistics: ACL 2019, 2019 | 65 | 2019 |
Describing a knowledge base Q Wang, X Pan, L Huang, B Zhang, Z Jiang, H Ji, K Knight Proceedings of the 11th International Natural Language Generation Conference …, 2018 | 61 | 2018 |
Navigation-based candidate expansion and pretrained language models for citation recommendation R Nogueira, Z Jiang, K Cho, J Lin Scientometrics 125 (3), 3001-3016, 2020 | 20 | 2020 |
Inserting information bottlenecks for attribution in transformers Z Jiang, R Tang, J Xin, J Lin Findings of EMNLP 2020, 2020 | 16 | 2020 |
Chengyu cloze test Z Jiang, B Zhang, L Huang, H Ji Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building …, 2018 | 15 | 2018 |
Few-shot non-parametric learning with deep latent variable model Z Jiang, Y Dai, J Xin, M Li, J Lin Advances in Neural Information Processing Systems, NeurIPS 2022, Spotlight …, 2022 | 14 | 2022 |
How does BERT rerank passages? an attribution analysis with information bottlenecks Z Jiang, R Tang, J Xin, J Lin Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting …, 2021 | 14 | 2021 |
Approximating Human-Like Few-shot Learning with GPT-based Compression C Huang*, Y Xie*, Z Jiang*, J Lin, M Li arXiv preprint arXiv:2308.06942, 2023 | 8 | 2023 |
Evaluating pretrained transformer models for citation recommendation R Nogueira, Z Jiang, K Cho, J Lin CEUR Workshop Proceedings 2591, 89-100, 2020 | 8 | 2020 |
Less is more: Parameter-free text classification with gzip Z Jiang, MYR Yang, M Tsirlin, R Tang, J Lin arXiv preprint arXiv:2212.09410, 2022 | 7 | 2022 |
Building an efficiency pipeline: Commutativity and cumulativeness of efficiency operators for transformers J Xin, R Tang, Z Jiang, Y Yu, J Lin arXiv preprint arXiv:2208.00483, 2022 | 2 | 2022 |
Less is More: Restricted Representations for Better Interpretability and Generalizability Z Jiang PhD thesis, University of Waterloo, 2023 | | 2023 |
Operator Selection and Ordering in a Pipeline Approach to Efficiency Optimizations for Transformers J Xin, R Tang, Z Jiang, Y Yu, J Lin Findings of the Association for Computational Linguistics: ACL 2023, 2870-2882, 2023 | | 2023 |
With a Little Help from Gzip: Text Classification with No Training Z Jiang, MYR Yang, M Tsirlin, R Tang, J Lin | | |
Narrating a Knowledge Base Q Wang, X Pan, L Huang, B Zhang, Z Jiang, H Ji, K Knight | | |