The 2020 bilingual, bi-directional WebNLG+ shared task: Overview and evaluation results (WebNLG+ 2020). TC Ferreira, C Gardent, N Ilinykh, C van der Lee, S Mille, D Moussallem, et al. Proceedings of the 3rd International Workshop on Natural Language Generation …, 2020. Cited by 136.
MeetUp! A corpus of joint activity dialogues in a visual environment. N Ilinykh, S Zarrieß, D Schlangen. arXiv preprint arXiv:1907.05084, 2019. Cited by 43.
Tell me more: A dataset of visual scene description sequences. N Ilinykh, S Zarrieß, D Schlangen. Proceedings of the 12th International Conference on Natural Language …, 2019. Cited by 28.
Attention as grounding: Exploring textual and cross-modal attention on entities and relations in language-and-vision transformer. N Ilinykh, S Dobnik. Findings of the Association for Computational Linguistics: ACL 2022, 4062-4073, 2022. Cited by 18.
When an image tells a story: The role of visual and semantic information for generating paragraph descriptions. N Ilinykh, S Dobnik. Proceedings of the 13th International Conference on Natural Language …, 2020. Cited by 17.
What does a language-and-vision transformer see: The impact of semantic information on visual representations. N Ilinykh, S Dobnik. Frontiers in Artificial Intelligence 4, 767971, 2021. Cited by 12.
Look and answer the question: On the role of vision in embodied question answering. N Ilinykh, Y Emampoor, S Dobnik. Proceedings of the 15th International Conference on Natural Language …, 2022. Cited by 11.
slurk – a lightweight interaction server for dialogue experiments and data collection. D Schlangen, T Diekmann, N Ilinykh, S Zarrieß. Proceedings of the 22nd Workshop on the Semantics and Pragmatics of Dialogue …, 2018. Cited by 11.
The task matters: Comparing image captioning and task-based dialogical image description. N Ilinykh, S Zarrieß, D Schlangen. Proceedings of the 11th International Conference on Natural Language …, 2018. Cited by 9.
How Vision Affects Language: Comparing Masked Self-Attention in Uni-Modal and Multi-Modal Transformer. N Ilinykh, S Dobnik. Proceedings of the 1st Workshop on Multimodal Semantic Representations (MMSR …, 2021. Cited by 8.
A general benchmarking framework for text generation. D Moussallem, P Kaur, T Ferreira, C van der Lee, A Shimorina, F Conrads, et al. International Workshop on Natural Language Generation from the Semantic Web 2020, 2020. Cited by 7.
What to refer to and when? Reference and re-reference in two language-and-vision tasks. S Dobnik, N Ilinykh, A Karimi. Proceedings of the 26th Workshop on the Semantics and Pragmatics of Dialogue …, 2022. Cited by 5.
MeetUp! A task for modelling visual dialogue. D Schlangen, N Ilinykh, S Zarrieß. Short Paper Proceedings of the 22nd Workshop on the Semantics and Pragmatics …, 2018. Cited by 3.
Do decoding algorithms capture discourse structure in multi-modal tasks? A case study of image paragraph generation. N Ilinykh, S Dobnik. Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation …, 2022. Cited by 2.
In search of meaning and its representations for computational linguistics. S Dobnik, R Cooper, A Ek, B Noble, S Larsson, N Ilinykh, V Maraev, et al. Proceedings of the 2022 CLASP Conference on (Dis)embodiment, 30-44, 2022. Cited by 2.
Describe me an Aucklet: Generating Grounded Perceptual Category Descriptions. B Noble, N Ilinykh. arXiv preprint arXiv:2303.04053, 2023. Cited by 1.
Proceedings of the 2024 CLASP Conference on Multimodality and Interaction in Language Learning. A Qiu, B Noble, D Pagmar, V Maraev, N Ilinykh. 2024.
Computational Models of Language and Vision: Studies of Neural Models as Learners of Multi-Modal Knowledge. N Ilinykh. 2024.
Proceedings of the 2023 CLASP Conference on Learning with Small Data. E Breitholtz, S Lappin, S Loáiciga, N Ilinykh, S Dobnik. The Association for Computational Linguistics, 2023.
Context matters: evaluation of target and context features on variation of object naming. N Ilinykh, S Dobnik. Proceedings of the 1st Workshop on Linguistic Insights from and for …, 2023.