Scalable agent alignment via reward modeling: a research direction
One obstacle to applying reinforcement learning algorithms to real-world problems is the
lack of suitable reward functions. Designing such reward functions is difficult in part because …
Sample efficient adaptive text-to-speech
We present a meta-learning approach for adaptive text-to-speech (TTS) with few data.
During training, we learn a multi-speaker model using a shared conditional WaveNet core …
Beyond traditional threats: A persistent backdoor attack on federated learning
T Liu, Y Zhang, Z Feng, Z Yang, C Xu, D Man… - Proceedings of the …, 2024 - ojs.aaai.org
Backdoors on federated learning will be diluted by subsequent benign updates. This is
reflected in the significant reduction of attack success rate as iterations increase, ultimately …
Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks
The state-of-the-art machine learning approach to training deep neural networks,
backpropagation, is implausible for real neural networks: neurons need to know their …
A rapid and efficient learning rule for biological neural circuits
The dominant view in neuroscience is that changes in synaptic weights underlie learning. It
is unclear, however, how the brain is able to determine which synapses should change, and …
Gated linear networks
This paper presents a new family of backpropagation-free neural architectures, Gated Linear
Networks (GLNs). What distinguishes GLNs from contemporary neural networks is the …
Globally gated deep linear networks
Recently proposed Gated Linear Networks (GLNs) present a tractable nonlinear
network architecture, and exhibit interesting capabilities such as learning with local error …
Gaussian gated linear networks
We propose the Gaussian Gated Linear Network (G-GLN), an extension to the
recently proposed GLN family of deep neural networks. Instead of using backpropagation to …
Associative compression networks for representation learning
This paper introduces Associative Compression Networks (ACNs), a new framework for
variational autoencoding with neural networks. The system differs from existing variational …
Backpropagation-free graph neural networks
We propose a class of neural models for graphs that do not rely on backpropagation for
training, thus making learning more biologically plausible and amenable to parallel …