Attention in psychology, neuroscience, and machine learning

GW Lindsay - Frontiers in Computational Neuroscience, 2020 - frontiersin.org
Attention is the important ability to flexibly control limited computational resources. It has
been studied in conjunction with many other topics in neuroscience and psychology …

Concept whitening for interpretable image recognition

Z Chen, Y Bei, C Rudin - Nature Machine Intelligence, 2020 - nature.com
What does a neural network encode about a concept as we traverse through the layers?
Interpretability in machine learning is undoubtedly important, but the calculations of neural …

Vision transformer with progressive sampling

X Yue, S Sun, Z Kuang, M Wei… - Proceedings of the …, 2021 - openaccess.thecvf.com
Transformers with powerful global relation modeling abilities have been introduced to
fundamental computer vision tasks recently. As a typical example, the Vision Transformer …

EDGE: Explaining deep reinforcement learning policies

W Guo, X Wu, U Khan, X Xing - Advances in Neural …, 2021 - proceedings.neurips.cc
With the rapid development of deep reinforcement learning (DRL) techniques, there is an
increasing need to understand and interpret DRL policies. While recent research has …

Differentiable patch selection for image recognition

JB Cordonnier, A Mahendran… - Proceedings of the …, 2021 - openaccess.thecvf.com
Neural Networks require large amounts of memory and compute to process high resolution
images, even when only a small part of the image is actually informative for the task at hand …

Attention is Turing-complete

J Pérez, P Barceló, J Marinkovic - Journal of Machine Learning Research, 2021 - jmlr.org
Alternatives to recurrent neural networks, in particular, architectures based on self-attention,
are gaining momentum for processing input sequences. In spite of their relevance, the …

Neuroevolution of self-interpretable agents

Y Tang, D Nguyen, D Ha - Proceedings of the 2020 Genetic and …, 2020 - dl.acm.org
Inattentional blindness is the psychological phenomenon that causes one to miss things in
plain sight. It is a consequence of the selective attention in perception that lets us remain …

This looks more like that: Enhancing self-explaining models by prototypical relevance propagation

S Gautam, MMC Höhne, S Hansen, R Jenssen… - Pattern Recognition, 2023 - Elsevier
Current machine learning models have shown high efficiency in solving a wide variety of
real-world problems. However, their black box character poses a major challenge for the …

AOE-Net: Entities interactions modeling with adaptive attention mechanism for temporal action proposals generation

K Vo, S Truong, K Yamazaki, B Raj, MT Tran… - International Journal of …, 2023 - Springer
Temporal action proposal generation (TAPG) is a challenging task, which requires localizing
action intervals in an untrimmed video. Intuitively, we as humans, perceive an action through …

Internally rewarded reinforcement learning

M Li, X Zhao, JH Lee, C Weber… - … on Machine Learning, 2023 - proceedings.mlr.press
We study a class of reinforcement learning problems where the reward signals for policy
learning are generated by a discriminator that is dependent on and jointly optimized with the …