Hitomi Yanaka
Verified email at is.s.u-tokyo.ac.jp - Homepage
Title · Cited by · Year
Can neural networks understand monotonicity reasoning?
H Yanaka, K Mineshima, D Bekki, K Inui, S Sekine, L Abzianidze, J Bos
arXiv preprint arXiv:1906.06448, 2019
Cited by 94 · 2019
HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning
H Yanaka, K Mineshima, D Bekki, K Inui, S Sekine, L Abzianidze, J Bos
arXiv preprint arXiv:1904.12166, 2019
Cited by 67 · 2019
Do neural models learn systematicity of monotonicity inference in natural language?
H Yanaka, K Mineshima, D Bekki, K Inui
arXiv preprint arXiv:2004.14839, 2020
Cited by 58 · 2020
Acquisition of phrase correspondences using natural deduction proofs
H Yanaka, K Mineshima, P Martínez-Gómez, D Bekki
arXiv preprint arXiv:1804.07656, 2018
Cited by 27 · 2018
Compositional evaluation on Japanese textual entailment and similarity
H Yanaka, K Mineshima
Transactions of the Association for Computational Linguistics 10, 1266-1284, 2022
Cited by 26 · 2022
Exploring transitivity in neural NLI models through veridicality
H Yanaka, K Mineshima, K Inui
arXiv preprint arXiv:2101.10713, 2021
Cited by 22 · 2021
On the multilingual ability of decoder-based pre-trained language models: Finding and controlling language-specific neurons
T Kojima, I Okimura, Y Iwasawa, H Yanaka, Y Matsuo
arXiv preprint arXiv:2404.02431, 2024
Cited by 21 · 2024
Multimodal logical inference system for visual-textual entailment
R Suzuki, H Yanaka, M Yoshikawa, K Mineshima, D Bekki
arXiv preprint arXiv:1906.03952, 2019
Cited by 19 · 2019
Do grammatical error correction models realize grammatical generalization?
M Mita, H Yanaka
arXiv preprint arXiv:2106.03031, 2021
Cited by 18 · 2021
Assessing the generalization capacity of pre-trained language models through Japanese adversarial natural language inference
H Yanaka, K Mineshima
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting …, 2021
Cited by 15 · 2021
SyGNS: A systematic generalization testbed based on natural language semantics
H Yanaka, K Mineshima, K Inui
arXiv preprint arXiv:2106.01077, 2021
Cited by 15 · 2021
Determining semantic textual similarity using natural deduction proofs
H Yanaka, K Mineshima, P Martínez-Gómez, D Bekki
arXiv preprint arXiv:1707.08713, 2017
Cited by 9 · 2017
Compositional semantics and inference system for temporal order based on Japanese CCG
T Sugimoto, H Yanaka
arXiv preprint arXiv:2204.09245, 2022
Cited by 6 · 2022
Neural sentence generation from formal semantics
K Manome, M Yoshikawa, H Yanaka, P Martínez-Gómez, K Mineshima, ...
Proceedings of the 11th International Conference on Natural Language …, 2018
Cited by 5 · 2018
Topic modeling for short texts with large language models
T Doi, M Isonuma, H Yanaka
Proceedings of the 62nd Annual Meeting of the Association for Computational …, 2024
Cited by 4 · 2024
Analyzing social biases in Japanese large language models
H Yanaka, N Han, R Kumon, J Lu, M Takeshita, R Sekizawa, T Kato, ...
arXiv preprint arXiv:2406.02050, 2024
Cited by 4 · 2024
Jamp: Controlled Japanese temporal inference dataset for evaluating generalization capacity of language models
T Sugimoto, Y Onoe, H Yanaka
arXiv preprint arXiv:2306.10727, 2023
Cited by 4 · 2023
Logical inference for counting on semi-structured tables
T Kurosawa, H Yanaka
arXiv preprint arXiv:2204.07803, 2022
Cited by 4 · 2022
Medical Visual Textual Entailment for Numerical Understanding of Vision-and-Language Models
H Yanaka, Y Nakamura, Y Chida, T Kurosawa
Proceedings of the 5th Clinical Natural Language Processing Workshop, 8-18, 2023
Cited by 3 · 2023
Is Japanese CCGBank empirically correct? A case study of passive and causative constructions
D Bekki, H Yanaka
arXiv preprint arXiv:2302.14708, 2023
Cited by 3 · 2023
Articles 1–20