Guiding LLM to fool itself: Automatically manipulating machine reading comprehension shortcut triggers
Recent applications of LLMs in Machine Reading Comprehension (MRC) systems have
shown impressive results, but the use of shortcuts, mechanisms triggered by features …
Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals
The inevitable appearance of spurious correlations in training datasets hurts the
generalization of NLP models on unseen data. Previous work has found that datasets with …
QAID: Question answering inspired few-shot intent detection
Intent detection with semantically similar fine-grained intents is a challenging task. To
address it, we reformulate intent detection as a question-answering retrieval task by treating …
Leveraging context for perceptual prediction using word embeddings
Pre-trained word embeddings have been used successfully in semantic NLP tasks to
represent words. However, there is continued debate as to how well they encode useful …
Analyzing pre-trained and fine-tuned language models
M Mosbach - 2023 - publikationen.sulb.uni-saarland.de
The field of natural language processing (NLP) has recently undergone a paradigm shift.
Since the introduction of transformer-based language models in 2018, the current …