On the implicit bias in deep-learning algorithms
G Vardi - Communications of the ACM, 2023 - dl.acm.org
Deep learning has been highly successful in recent years and has led to dramatic improvements in multiple domains …
Neural networks are convex regularizers: Exact polynomial-time convex optimization formulations for two-layer networks
We develop exact representations of training two-layer neural networks with rectified linear
units (ReLUs) in terms of a single convex program with number of variables polynomial in …
Implicit regularization towards rank minimization in ReLU networks
We study the conjectured relationship between the implicit regularization in neural networks,
trained with gradient-based methods, and rank minimization of their weight matrices …
On the effective number of linear regions in shallow univariate ReLU networks: Convergence guarantees and implicit bias
We study the dynamics and implicit bias of gradient flow (GF) on univariate ReLU neural
networks with a single hidden layer in a binary classification setting. We show that when the …
Revealing the structure of deep neural networks via convex duality
We study regularized deep neural networks (DNNs) and introduce a convex analytic
framework to characterize the structure of the hidden layers. We show that a set of optimal …
Learning a neuron by a shallow ReLU network: Dynamics and implicit bias for correlated inputs
We prove that, for the fundamental regression task of learning a single neuron, training a
one-hidden layer ReLU network of any width by gradient flow from a small initialisation …
Global optimality beyond two layers: Training deep ReLU networks via convex programs
Understanding the fundamental mechanism behind the success of deep neural networks is
one of the key challenges in the modern machine learning literature. Despite numerous …
How do minimum-norm shallow denoisers look in function space?
Neural network (NN) denoisers are an essential building block in many common tasks,
ranging from image reconstruction to image generation. However, the success of these …
Noisy interpolation learning with shallow univariate ReLU networks
Understanding how overparameterized neural networks generalize despite perfect
interpolation of noisy training data is a fundamental question. Mallinar et al. (2022) noted that …
On margin maximization in linear and ReLU networks
The implicit bias of neural networks has been extensively studied in recent years. Lyu and Li
(2019) showed that in homogeneous networks trained with the exponential or the logistic …