Scalable agent alignment via reward modeling: a research direction

J Leike, D Krueger, T Everitt, M Martic, V Maini… - arXiv preprint arXiv…, 2018 - arxiv.org
One obstacle to applying reinforcement learning algorithms to real-world problems is the
lack of suitable reward functions. Designing such reward functions is difficult in part because …

Sample efficient adaptive text-to-speech

Y Chen, Y Assael, B Shillingford, D Budden… - arXiv preprint arXiv…, 2018 - arxiv.org
We present a meta-learning approach for adaptive text-to-speech (TTS) with little data.
During training, we learn a multi-speaker model using a shared conditional WaveNet core …

Beyond traditional threats: A persistent backdoor attack on federated learning

T Liu, Y Zhang, Z Feng, Z Yang, C Xu, D Man… - Proceedings of the …, 2024 - ojs.aaai.org
Backdoors in federated learning are diluted by subsequent benign updates. This is
reflected in a significant reduction in attack success rate as iterations increase, ultimately …
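The dilution the abstract describes can be seen in a toy simulation. The sketch below is an illustration under deliberately simplified dynamics, not the paper's attack: the attacker implants a backdoor into a FedAvg global model once, only benign clients participate afterwards, and the aggregate drifts back toward the benign optimum. All constants and the "backdoor strength" metric are assumptions for intuition only.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10
w_benign = np.zeros(dim)        # optimum of the benign task
w_backdoor = np.ones(dim)       # weights that encode the backdoor

w_global = w_backdoor.copy()    # attacker has just implanted the backdoor
for rnd in range(1, 21):
    # Each benign client pulls the model toward the benign optimum
    # (modelled as one noisy gradient step per client).
    client_updates = [
        0.1 * (w_benign - w_global) + 0.01 * rng.standard_normal(dim)
        for _ in range(10)
    ]
    w_global += np.mean(client_updates, axis=0)   # FedAvg aggregation
    strength = np.dot(w_global, w_backdoor) / dim  # alignment with backdoor
    print(f"round {rnd:2d}  backdoor strength ~ {strength:.3f}")
```

The printed strength decays geometrically toward zero, which is the behaviour the abstract identifies and that a persistent attack must counteract.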

Kernelized information bottleneck leads to biologically plausible 3-factor hebbian learning in deep networks

R Pogodin, P Latham - Advances in Neural Information …, 2020 - proceedings.neurips.cc
The state-of-the-art machine learning approach to training deep neural networks,
backpropagation, is implausible for real neural networks: neurons need to know their …
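For readers unfamiliar with the terminology, a generic three-factor Hebbian update multiplies presynaptic activity, postsynaptic activity, and a global modulatory signal. The snippet below is a textbook sketch of that generic rule, not the specific kernelized information-bottleneck update derived in the paper; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
pre = rng.random(5)                    # presynaptic firing rates
W = 0.1 * rng.standard_normal((3, 5))  # synaptic weights
post = np.tanh(W @ pre)                # postsynaptic activity
modulator = 0.8                        # global third factor (e.g. reward/error)

eta = 0.01
# Three-factor Hebbian step: pre * post * global modulator
W += eta * modulator * np.outer(post, pre)
```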

A rapid and efficient learning rule for biological neural circuits

E Sezener, A Grabska-Barwińska, D Kostadinov… - bioRxiv, 2021 - biorxiv.org
The dominant view in neuroscience is that changes in synaptic weights underlie learning. It
is unclear, however, how the brain is able to determine which synapses should change, and …

Gated linear networks

J Veness, T Lattimore, D Budden… - Proceedings of the …, 2021 - ojs.aaai.org
This paper presents a new family of backpropagation-free neural architectures, Gated Linear
Networks (GLNs). What distinguishes GLNs from contemporary neural networks is the …
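The defining GLN operations (context-dependent weight selection via halfspace gating, geometric mixing of input probabilities, and a purely local logistic-loss update in place of backpropagation) can be sketched compactly. The learning rate, clipping constant, and 2-d side information below are illustrative assumptions, and the weight projection used in full GLNs is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def logit(p):
    p = np.clip(p, 1e-4, 1 - 1e-4)   # keep log-odds finite
    return np.log(p / (1 - p))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_inputs, n_ctx_bits = 4, 3
hyperplanes = rng.standard_normal((n_ctx_bits, 2))   # gate on 2-d side info
weights = np.full((2 ** n_ctx_bits, n_inputs), 1.0 / n_inputs)

def neuron_step(p_in, side_info, y, lr=0.1):
    # Context function: which side of each random hyperplane the side info lies on
    bits = (hyperplanes @ side_info > 0).astype(int)
    ctx = int("".join(map(str, bits)), 2)
    # Geometric mixing of input probabilities under the gated weight vector
    x = logit(p_in)
    p_out = sigmoid(weights[ctx] @ x)
    # Local update: gradient of the neuron's own log loss w.r.t. its weights
    weights[ctx] -= lr * (p_out - y) * x
    return p_out

# One online example: four input probabilities, 2-d side info, label 1.
p = neuron_step(np.array([0.6, 0.7, 0.4, 0.55]), np.array([0.3, -1.2]), y=1)
```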

Globally gated deep linear networks

Q Li, H Sompolinsky - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Recently proposed Gated Linear Networks (GLNs) present a tractable nonlinear
network architecture, and exhibit interesting capabilities such as learning with local error …

Gaussian gated linear networks

D Budden, A Marblestone, E Sezener… - Advances in …, 2020 - proceedings.neurips.cc
We propose the Gaussian Gated Linear Network (G-GLN), an extension to the
recently proposed GLN family of deep neural networks. Instead of using backpropagation to …
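The Gaussian analogue of geometric mixing is a weighted product of Gaussian densities, which (after renormalising) is again Gaussian, with precision equal to the weighted sum of expert precisions and a precision-weighted mean. A minimal sketch of that core operation, with arbitrary example numbers:

```python
import numpy as np

mu = np.array([0.0, 1.0, 2.0])     # expert means
prec = np.array([1.0, 2.0, 0.5])   # expert precisions (1 / variance)
w = np.array([0.2, 0.5, 0.3])      # non-negative gating weights

prec_out = w @ prec                 # precision of the mixed Gaussian
mu_out = (w * prec) @ mu / prec_out  # precision-weighted mean
print(mu_out, 1.0 / prec_out)       # mean and variance of the output expert
```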

Associative compression networks for representation learning

A Graves, J Menick, A Oord - arXiv preprint arXiv:1804.02476, 2018 - arxiv.org
This paper introduces Associative Compression Networks (ACNs), a new framework for
variational autoencoding with neural networks. The system differs from existing variational …

Backpropagation-free graph neural networks

L Pasa, N Navarin, W Erb… - 2022 IEEE International …, 2022 - ieeexplore.ieee.org
We propose a class of neural models for graphs that do not rely on backpropagation for
training, thus making learning more biologically plausible and amenable to parallel …