Getting aligned on representational alignment
Biological and artificial information processing systems form representations that they can use to categorize, reason, plan, navigate, and make decisions. How can we measure the …
Learning to deceive with attention-based explanations
Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing. In addition to yielding gains in predictive accuracy, attention weights …
ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading
We present the Zurich Cognitive Language Processing Corpus (ZuCo), a dataset combining electroencephalography (EEG) and eye-tracking recordings from subjects …
Improving natural language processing tasks with human gaze-guided neural attention
A lack of corpora has so far limited advances in integrating human gaze data as a supervisory signal in neural attention mechanisms for natural language processing (NLP) …
Sequence labelling and sequence classification with gaze: Novel uses of eye-tracking data for Natural Language Processing
Eye-tracking data from reading provide a structured signal with a fine-grained temporal resolution which closely follows the sequential structure of the text. It is highly correlated with …
Do transformer models show similar attention patterns to task-specific human gaze?
Learned self-attention functions in state-of-the-art NLP models often correlate with human attention. We investigate whether self-attention in large-scale pre-trained language models …
Interpreting attention models with human visual attention in machine reading comprehension
While neural networks with attention mechanisms have achieved superior performance on many natural language processing tasks, it remains unclear to what extent learned …
Fault detection and diagnosis using self-attentive convolutional neural networks for variable-length sensor data in semiconductor manufacturing
In recent years, increasing attention has been placed on cost reduction and yield enhancement in the semiconductor industry. During the manufacturing process, a considerable amount of …
ZuCo 2.0: A dataset of physiological recordings during natural reading and annotation
We recorded and preprocessed ZuCo 2.0, a new dataset of simultaneous eye-tracking and electroencephalography during natural reading and during annotation. This corpus contains …
Multilingual language models predict human reading behavior
We analyze whether large language models are able to predict patterns of human reading behavior. We compare the performance of language-specific and multilingual pretrained …