Can pre-trained language models interpret similes as smart as human?

Q He, S Cheng, Z Li, R Xie, Y Xiao - arXiv preprint arXiv:2203.08452, 2022 - arxiv.org
Simile interpretation is a crucial task in natural language processing. Nowadays, pre-trained
language models (PLMs) have achieved state-of-the-art performance on many tasks …

MAPS-KB: A million-scale probabilistic simile knowledge base

Q He, X Wang, J Liang, Y Xiao - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
The ability to understand and generate similes is an imperative step to realize human-level
AI. However, there is still a considerable gap between machine intelligence and human …

Ring that bell: A corpus and method for multimodal metaphor detection in videos

K Alnajjar, M Hämäläinen, S Zhang - arXiv preprint arXiv:2301.01134, 2022 - arxiv.org
We present the first openly available multimodal metaphor annotated corpus. The corpus
consists of videos including audio and subtitles that have been annotated by experts …

MultiMetu: A Multimodal Challenge Dataset for Large Language Models in Metaphor Understanding

S Yang, Z Du, Z Hao, W Liao, L Yang, H Lin, D Zhang