Guiding LLM to fool itself: Automatically manipulating machine reading comprehension shortcut triggers

M Levy, S Ravfogel, Y Goldberg - arXiv preprint arXiv:2310.18360, 2023 - arxiv.org
Recent applications of LLMs in Machine Reading Comprehension (MRC) systems have
shown impressive results, but the use of shortcuts, mechanisms triggered by features …

Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

Y Elazar, B Paranjape, H Peng, S Wiegreffe… - arXiv preprint arXiv …, 2023 - arxiv.org
The inevitable appearance of spurious correlations in training datasets hurts the
generalization of NLP models on unseen data. Previous work has found that datasets with …

QAID: Question answering inspired few-shot intent detection

A Yehudai, M Vetzler, Y Mass, K Lazar… - arXiv preprint arXiv …, 2023 - arxiv.org
Intent detection with semantically similar fine-grained intents is a challenging task. To
address it, we reformulate intent detection as a question-answering retrieval task by treating …

Leveraging context for perceptual prediction using word embeddings

GA Carter, F Keller, P Hoffman - 2023 - osf.io
Pre-trained word embeddings have been used successfully in semantic NLP tasks to
represent words. However, there is continued debate as to how well they encode useful …

Analyzing pre-trained and fine-tuned language models

M Mosbach - 2023 - publikationen.sulb.uni-saarland.de
The field of natural language processing (NLP) has recently undergone a paradigm shift.
Since the introduction of transformer-based language models in 2018, the current …