1. H. Yanaka, K. Mineshima, D. Bekki, K. Inui, S. Sekine, L. Abzianidze, J. Bos. "Can neural networks understand monotonicity reasoning?" arXiv preprint arXiv:1906.06448, 2019. Cited by 94.
2. H. Yanaka, K. Mineshima, D. Bekki, K. Inui, S. Sekine, L. Abzianidze, J. Bos. "HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning." arXiv preprint arXiv:1904.12166, 2019. Cited by 67.
3. H. Yanaka, K. Mineshima, D. Bekki, K. Inui. "Do neural models learn systematicity of monotonicity inference in natural language?" arXiv preprint arXiv:2004.14839, 2020. Cited by 58.
4. H. Yanaka, K. Mineshima, P. Martínez-Gómez, D. Bekki. "Acquisition of phrase correspondences using natural deduction proofs." arXiv preprint arXiv:1804.07656, 2018. Cited by 27.
5. H. Yanaka, K. Mineshima. "Compositional evaluation on Japanese textual entailment and similarity." Transactions of the Association for Computational Linguistics 10, 1266–1284, 2022. Cited by 26.
6. H. Yanaka, K. Mineshima, K. Inui. "Exploring transitivity in neural NLI models through veridicality." arXiv preprint arXiv:2101.10713, 2021. Cited by 22.
7. T. Kojima, I. Okimura, Y. Iwasawa, H. Yanaka, Y. Matsuo. "On the multilingual ability of decoder-based pre-trained language models: Finding and controlling language-specific neurons." arXiv preprint arXiv:2404.02431, 2024. Cited by 21.
8. R. Suzuki, H. Yanaka, M. Yoshikawa, K. Mineshima, D. Bekki. "Multimodal logical inference system for visual-textual entailment." arXiv preprint arXiv:1906.03952, 2019. Cited by 19.
9. M. Mita, H. Yanaka. "Do grammatical error correction models realize grammatical generalization?" arXiv preprint arXiv:2106.03031, 2021. Cited by 18.
10. H. Yanaka, K. Mineshima. "Assessing the generalization capacity of pre-trained language models through Japanese adversarial natural language inference." Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting …, 2021. Cited by 15.
11. H. Yanaka, K. Mineshima, K. Inui. "SyGNS: A systematic generalization testbed based on natural language semantics." arXiv preprint arXiv:2106.01077, 2021. Cited by 15.
12. H. Yanaka, K. Mineshima, P. Martínez-Gómez, D. Bekki. "Determining semantic textual similarity using natural deduction proofs." arXiv preprint arXiv:1707.08713, 2017. Cited by 9.
13. T. Sugimoto, H. Yanaka. "Compositional semantics and inference system for temporal order based on Japanese CCG." arXiv preprint arXiv:2204.09245, 2022. Cited by 6.
14. K. Manome, M. Yoshikawa, H. Yanaka, P. Martínez-Gómez, K. Mineshima, et al. "Neural sentence generation from formal semantics." Proceedings of the 11th International Conference on Natural Language …, 2018. Cited by 5.
15. T. Doi, M. Isonuma, H. Yanaka. "Topic modeling for short texts with large language models." Proceedings of the 62nd Annual Meeting of the Association for Computational …, 2024. Cited by 4.
16. H. Yanaka, N. Han, R. Kumon, J. Lu, M. Takeshita, R. Sekizawa, T. Kato, et al. "Analyzing social biases in Japanese large language models." arXiv preprint arXiv:2406.02050, 2024. Cited by 4.
17. T. Sugimoto, Y. Onoe, H. Yanaka. "Jamp: Controlled Japanese temporal inference dataset for evaluating generalization capacity of language models." arXiv preprint arXiv:2306.10727, 2023. Cited by 4.
18. T. Kurosawa, H. Yanaka. "Logical inference for counting on semi-structured tables." arXiv preprint arXiv:2204.07803, 2022. Cited by 4.
19. H. Yanaka, Y. Nakamura, Y. Chida, T. Kurosawa. "Medical Visual Textual Entailment for Numerical Understanding of Vision-and-Language Models." Proceedings of the 5th Clinical Natural Language Processing Workshop, 8–18, 2023. Cited by 3.
20. D. Bekki, H. Yanaka. "Is Japanese CCGBank empirically correct? A case study of passive and causative constructions." arXiv preprint arXiv:2302.14708, 2023. Cited by 3.