Attention in psychology, neuroscience, and machine learning
GW Lindsay - Frontiers in computational neuroscience, 2020 - frontiersin.org
Attention is the important ability to flexibly control limited computational resources. It has
been studied in conjunction with many other topics in neuroscience and psychology …
Concept whitening for interpretable image recognition
What does a neural network encode about a concept as we traverse through the layers?
Interpretability in machine learning is undoubtedly important, but the calculations of neural …
Vision transformer with progressive sampling
Transformers with powerful global relation modeling abilities have been introduced to
fundamental computer vision tasks recently. As a typical example, the Vision Transformer …
Edge: Explaining deep reinforcement learning policies
With the rapid development of deep reinforcement learning (DRL) techniques, there is an
increasing need to understand and interpret DRL policies. While recent research has …
Differentiable patch selection for image recognition
JB Cordonnier, A Mahendran… - Proceedings of the …, 2021 - openaccess.thecvf.com
Neural Networks require large amounts of memory and compute to process high resolution
images, even when only a small part of the image is actually informative for the task at hand …
Attention is Turing-complete
Alternatives to recurrent neural networks, in particular, architectures based on self-attention,
are gaining momentum for processing input sequences. In spite of their relevance, the …
Neuroevolution of self-interpretable agents
Inattentional blindness is the psychological phenomenon that causes one to miss things in
plain sight. It is a consequence of the selective attention in perception that lets us remain …
This looks more like that: Enhancing self-explaining models by prototypical relevance propagation
Current machine learning models have shown high efficiency in solving a wide variety of
real-world problems. However, their black box character poses a major challenge for the …
AOE-Net: Entities interactions modeling with adaptive attention mechanism for temporal action proposals generation
Temporal action proposal generation (TAPG) is a challenging task, which requires localizing
action intervals in an untrimmed video. Intuitively, we as humans, perceive an action through …
Internally rewarded reinforcement learning
We study a class of reinforcement learning problems where the reward signals for policy
learning are generated by a discriminator that is dependent on and jointly optimized with the …