Acquisition of chess knowledge in AlphaZero

T McGrath, A Kapishnikov, N Tomašev… - Proceedings of the …, 2022 - National Acad Sciences
We analyze the knowledge acquired by AlphaZero, a neural network engine that learns
chess solely by playing against itself yet becomes capable of outperforming human chess …

The influence of learning rule on representation dynamics in wide neural networks

B Bordelon, C Pehlevan - The Eleventh International Conference on …, 2022 - openreview.net
It is unclear how changing the learning rule of a deep neural network alters its learning
dynamics and representations. To gain insight into the relationship between learned …

Thalamic regulation of frontal interactions in human cognitive flexibility

A Hummos, BA Wang, S Drammis… - PLoS Computational …, 2022 - journals.plos.org
Interactions across frontal cortex are critical for cognition. Animal studies suggest a role for
mediodorsal thalamus (MD) in these interactions, but the computations performed and direct …

Globally gated deep linear networks

Q Li, H Sompolinsky - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Recently proposed Gated Linear Networks (GLNs) present a tractable nonlinear
network architecture and exhibit interesting capabilities such as learning with local error …

A rapid and efficient learning rule for biological neural circuits

E Sezener, A Grabska-Barwińska, D Kostadinov… - bioRxiv, 2021 - biorxiv.org
The dominant view in neuroscience is that changes in synaptic weights underlie learning. It
is unclear, however, how the brain is able to determine which synapses should change, and …

Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks

R Pogodin, P Latham - Advances in Neural Information …, 2020 - proceedings.neurips.cc
The state-of-the-art machine learning approach to training deep neural networks,
backpropagation, is implausible for real neural networks: neurons need to know their …

Credit assignment through broadcasting a global error vector

D Clark, LF Abbott, SY Chung - Advances in Neural …, 2021 - proceedings.neurips.cc
Backpropagation (BP) uses detailed, unit-specific feedback to train deep neural networks
(DNNs) with remarkable success. That biological neural circuits appear to perform credit …

Satellite Remote Sensing Grayscale Image Colorization Based on Denoising Generative Adversarial Network

Q Fu, S Xia, Y Kang, M Sun, K Tan - Remote Sensing, 2024 - mdpi.com
Aiming to address the challenges of difficult training and mode collapse in current generative
adversarial networks (GANs), as well as the efficiency issue of requiring multiple samples for …

A foundational neural operator that continuously learns without forgetting

T Tripura, S Chakraborty - arXiv preprint arXiv:2310.18885, 2023 - arxiv.org
Machine learning has witnessed substantial growth, leading to the development of
advanced artificial intelligence models crafted to address a wide range of real-world …

Avoiding catastrophe: Active dendrites enable multi-task learning in dynamic environments

A Iyer, K Grewal, A Velu, LO Souza, J Forest… - Frontiers in …, 2022 - frontiersin.org
A key challenge for AI is to build embodied systems that operate in dynamically changing
environments. Such systems must adapt to changing task contexts and learn continuously …