Neuroevolution in deep neural networks: Current trends and future challenges
A variety of methods have been applied to the architectural configuration and learning or
training of artificial deep neural networks (DNN). These methods play a crucial role in the …
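As an aside on what neuroevolution looks like in practice, the following toy Python sketch evolves only the hidden-layer widths of a small network under a stand-in fitness function; the representation, mutation rule, and fitness used here are illustrative assumptions, not the survey's taxonomy.

import random

def fitness(widths):
    # Stand-in for "train the candidate network and return validation score".
    return -sum((w - 64) ** 2 for w in widths)

def mutate(widths):
    # Minimal mutation operator: perturb one layer's width.
    i = random.randrange(len(widths))
    child = list(widths)
    child[i] = max(1, child[i] + random.choice([-8, 8]))
    return child

population = [[random.randint(8, 128) for _ in range(3)] for _ in range(10)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                                   # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

best = max(population, key=fitness)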
Meta-learning PINN loss functions
We propose a meta-learning technique for offline discovery of physics-informed neural
network (PINN) loss functions. We extend earlier works on meta-learning, and develop a …
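For orientation, the hand-crafted composite loss that standard PINN training minimizes, and that a meta-learned loss function would replace or reweight, can be written schematically as follows (the residual operator $\mathcal{N}$, weight $\lambda$, and point counts are generic notation, not this paper's):

$$\mathcal{L}(\theta) \;=\; \frac{1}{N_r}\sum_{i=1}^{N_r}\big\|\mathcal{N}[u_\theta](x_i)\big\|^2 \;+\; \frac{\lambda}{N_b}\sum_{j=1}^{N_b}\big\|u_\theta(x_j)-g(x_j)\big\|^2,$$

where the first sum penalizes the PDE residual at collocation points and the second enforces the boundary/initial data $g$.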
Dynamics of deep neural networks and neural tangent hierarchy
The evolution of a deep neural network trained by gradient descent in the
overparametrization regime can be described by its neural tangent kernel (NTK) …
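For reference, the (empirical) neural tangent kernel of a scalar network $f(\cdot;\theta)$ is

$$\Theta_\theta(x,x') \;=\; \big\langle \nabla_\theta f(x;\theta),\, \nabla_\theta f(x';\theta)\big\rangle,$$

which stays essentially fixed at its initialization value in the infinite-width limit; the "neural tangent hierarchy" of the title refers to the system of equations governing how this kernel evolves at finite width.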
Optimization of graph neural networks: Implicit acceleration by skip connections and more depth
Graph Neural Networks (GNNs) have been studied through the lens of expressive
power and generalization. However, their optimization properties are less well understood …
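As a concrete picture of the architectural ingredient named in the title, here is a minimal NumPy sketch of one graph-convolution layer with a skip (residual) connection; the symmetric normalization and ReLU are common defaults and are assumptions here, not the paper's exact setup.

import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8                                    # nodes, feature dimension
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                         # undirected graph
A_hat = A + np.eye(n)                          # add self-loops
deg = A_hat.sum(axis=1)
A_hat = A_hat / np.sqrt(np.outer(deg, deg))    # symmetric normalization

H = rng.standard_normal((n, d))                # node features
W = rng.standard_normal((d, d)) / np.sqrt(d)   # layer weights

H_next = np.maximum(A_hat @ H @ W, 0.0) + H    # ReLU(graph conv) plus skip connection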
How much over-parameterization is sufficient to learn deep ReLU networks?
A recent line of research on deep learning focuses on the extremely over-parameterized
setting, and shows that when the network width is larger than a high degree polynomial of …
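Results in this line of work typically take the schematic form below; the exact polynomial differs from paper to paper, and the quantities shown are placeholders rather than this paper's bound:

$$m \;\ge\; \mathrm{poly}\!\big(n,\, L,\, 1/\delta,\, 1/\lambda_{\min}(K)\big) \;\Longrightarrow\; \text{(S)GD from random initialization drives the training loss to } 0,$$

where $m$ is the width, $n$ the sample size, $L$ the depth, and $\lambda_{\min}(K)$ the smallest eigenvalue of an NTK/Gram matrix of the data.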
Bounding the width of neural networks via coupled initialization a worst case analysis
A common method in training neural networks is to initialize all the weights to be
independent Gaussian vectors. We observe that by instead initializing the weights into …
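The snippet is truncated before the scheme is spelled out, but one standard way to "couple" an i.i.d. Gaussian initialization, which may or may not coincide with this paper's construction, is to duplicate each hidden unit with a negated output weight so the network output is exactly zero at initialization:

import numpy as np

rng = np.random.default_rng(0)
m, d = 64, 10                                   # width (even) and input dimension
W_half = rng.standard_normal((m // 2, d))       # independent Gaussian rows
a_half = rng.choice([-1.0, 1.0], size=m // 2)   # output signs

W = np.vstack([W_half, W_half])                 # each weight vector appears twice
a = np.concatenate([a_half, -a_half])           # paired units get opposite output signs

x = rng.standard_normal(d)
f0 = a @ np.maximum(W @ x, 0.0)                 # paired units cancel, so f0 == 0 up to rounding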
Robustness implies generalization via data-dependent generalization bounds
This paper proves that robustness implies generalization via data-dependent generalization
bounds. As a result, robustness and generalization are shown to be connected closely in a …
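For context, the classical robustness-based bound of Xu and Mannor, which data-dependent results of this kind refine, states that if a learning algorithm is $(K,\epsilon(S))$-robust and the loss is bounded by $M$, then with probability at least $1-\delta$ over a sample $S$ of size $n$,

$$\big|\,L(\mathcal{A}_S) - \widehat{L}_S(\mathcal{A}_S)\,\big| \;\le\; \epsilon(S) \;+\; M\sqrt{\frac{2K\ln 2 + 2\ln(1/\delta)}{n}}.$$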
Six lectures on linearized neural networks
In these six lectures, we examine what can be learnt about the behavior of multi-layer neural
networks from the analysis of linear models. We first recall the correspondence between …
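A central object in such analyses is the first-order Taylor expansion of the network around its initialization $\theta_0$, which is linear in the parameters:

$$f_{\mathrm{lin}}(x;\theta) \;=\; f(x;\theta_0) \;+\; \big\langle \nabla_\theta f(x;\theta_0),\; \theta-\theta_0\big\rangle,$$

so that, in regimes where this approximation holds, training the network reduces to training a linear (kernel) model with feature map $x \mapsto \nabla_\theta f(x;\theta_0)$.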
Subquadratic overparameterization for shallow neural networks
C Song, A Ramezani-Kebrya… - Advances in …, 2021 - proceedings.neurips.cc
Overparameterization refers to the important phenomenon where the width of a neural
network is chosen such that learning algorithms can provably attain zero loss in nonconvex …
Network size and size of the weights in memorization with two-layers neural networks
In 1988, Eric B. Baum showed that two-layer neural networks with a threshold
activation function can perfectly memorize the binary labels of $ n $ points in general …
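For reference, Baum's 1988 construction shows that for $n$ points in general position in $\mathbb{R}^d$,

$$\Big\lceil \tfrac{n}{d} \Big\rceil \text{ hidden threshold units suffice for a two-layer network to realize any binary labeling of the points,}$$

and, as the title suggests, the work cited here revisits both the number of units and the size of the weights needed for such memorization.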