Getting aligned on representational alignment

I Sucholutsky, L Muttenthaler, A Weller, A Peng… - arXiv preprint arXiv …, 2023 - arxiv.org
Biological and artificial information processing systems form representations that they can
use to categorize, reason, plan, navigate, and make decisions. How can we measure the …
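
As a concrete illustration of one way such alignment is often quantified (a sketch of linear centered kernel alignment, CKA, a standard measure from the literature and not necessarily the one this survey endorses; the matrices X and Y are hypothetical stimulus-by-feature representations):

    import numpy as np

    def linear_cka(X, Y):
        # X: (n, d1), Y: (n, d2) -- two systems' representations of the same n stimuli.
        X = X - X.mean(axis=0)  # center each feature dimension
        Y = Y - Y.mean(axis=0)
        # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
        cross = np.linalg.norm(Y.T @ X, "fro") ** 2
        return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))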

Learning to deceive with attention-based explanations

D Pruthi, M Gupta, B Dhingra, G Neubig… - arXiv preprint arXiv …, 2019 - arxiv.org
Attention mechanisms are ubiquitous components in neural architectures applied to natural
language processing. In addition to yielding gains in predictive accuracy, attention weights …

ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading

N Hollenstein, J Rotsztejn, M Troendle, A Pedroni… - Scientific Data, 2018 - nature.com
We present the Zurich Cognitive Language Processing Corpus (ZuCo), a dataset
combining electroencephalography (EEG) and eye-tracking recordings from subjects …

Improving natural language processing tasks with human gaze-guided neural attention

E Sood, S Tannert, P Müller… - Advances in Neural …, 2020 - proceedings.neurips.cc
A lack of corpora has so far limited advances in integrating human gaze data as a
supervisory signal in neural attention mechanisms for natural language processing (NLP) …
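
To make the supervision idea concrete, here is a minimal PyTorch sketch of an auxiliary loss that pulls a model's attention distribution toward normalized human fixation durations. The KL formulation, tensor shapes, and lambda_gaze weight are illustrative assumptions, not the paper's exact objective:

    import torch
    import torch.nn.functional as F

    def gaze_attention_loss(attn_weights, gaze_durations, eps=1e-8):
        # attn_weights: (batch, seq_len) attention distribution over tokens.
        # gaze_durations: (batch, seq_len) per-token fixation durations (e.g., ms).
        gaze_dist = gaze_durations / (gaze_durations.sum(dim=-1, keepdim=True) + eps)
        # KL divergence from the normalized gaze distribution to the attention.
        return F.kl_div((attn_weights + eps).log(), gaze_dist, reduction="batchmean")

    # Hypothetical combined objective; lambda_gaze is an assumed hyperparameter:
    # loss = task_loss + lambda_gaze * gaze_attention_loss(attn, gaze)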

Sequence labelling and sequence classification with gaze: Novel uses of eye‐tracking data for Natural Language Processing

M Barrett, N Hollenstein - Language and Linguistics Compass, 2020 - Wiley Online Library
Eye‐tracking data from reading provide a structured signal with a fine‐grained temporal
resolution that closely follows the sequential structure of the text. They are highly correlated with …

Do transformer models show similar attention patterns to task-specific human gaze?

O Eberle, S Brandl, J Pilot… - Proceedings of the 60th …, 2022 - aclanthology.org
Learned self-attention functions in state-of-the-art NLP models often correlate with human
attention. We investigate whether self-attention in large-scale pre-trained language models …
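
A minimal sketch of the kind of comparison this line of work performs, under assumptions: a HuggingFace BERT checkpoint ("bert-base-uncased" here is just a placeholder), and word-level gaze durations already aligned one-to-one with the model's non-special tokens:

    from scipy.stats import spearmanr
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

    def attention_gaze_correlation(sentence, gaze_per_token, layer=-1):
        # gaze_per_token: fixation durations aligned 1:1 with non-special tokens.
        inputs = tokenizer(sentence, return_tensors="pt")
        attentions = model(**inputs).attentions  # tuple of (1, heads, T, T) per layer
        # Average over heads and query positions: attention each token receives.
        received = attentions[layer][0].mean(dim=0).mean(dim=0).detach().numpy()
        received = received[1:-1]  # drop [CLS] and [SEP]
        rho, _ = spearmanr(received, gaze_per_token)
        return rho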

Interpreting attention models with human visual attention in machine reading comprehension

E Sood, S Tannert, D Frassinelli, A Bulling… - arXiv preprint arXiv …, 2020 - arxiv.org
While neural networks with attention mechanisms have achieved superior performance on
many natural language processing tasks, it remains unclear to what extent learned …

Fault detection and diagnosis using self-attentive convolutional neural networks for variable-length sensor data in semiconductor manufacturing

E Kim, S Cho, B Lee, M Cho - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
Increasing attention is being placed on cost reduction and yield enhancement in
the semiconductor industry. During the manufacturing process, a considerable amount of …

ZuCo 2.0: A dataset of physiological recordings during natural reading and annotation

N Hollenstein, M Troendle, C Zhang… - arXiv preprint arXiv …, 2019 - arxiv.org
We recorded and preprocessed ZuCo 2.0, a new dataset of simultaneous eye-tracking and
electroencephalography during natural reading and during annotation. This corpus contains …

Multilingual language models predict human reading behavior

N Hollenstein, F Pirovano, C Zhang, L Jäger… - arXiv preprint arXiv …, 2021 - arxiv.org
We analyze whether large language models are able to predict patterns of human reading
behavior. We compare the performance of language-specific and multilingual pretrained …
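
One common way such prediction is operationalized (a sketch under assumptions, not this paper's setup): compute per-token surprisal from a pretrained language model ("gpt2" is a placeholder checkpoint) and regress fixation durations on it:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    def token_surprisals(sentence):
        # Surprisal (in nats) of each token given its left context.
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits  # (1, T, vocab)
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        targets = ids[0, 1:]
        return -log_probs[torch.arange(targets.shape[0]), targets].numpy()

    # Hypothetical use: regress word-level fixation durations on surprisal
    # (e.g., with sklearn's LinearRegression), after aligning subwords to words.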