Overview frequency principle/spectral bias in deep learning
Understanding deep learning is increasingly urgent as it penetrates further into
industry and science. In recent years, a line of research based on Fourier analysis has shed light on …
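A minimal sketch of the phenomenon this line of work studies: a small network fit to a 1D target tends to capture the target's low-frequency component before its high-frequency one. The architecture, target function, and training details below are illustrative assumptions, not the surveyed papers' setup.

    # Toy illustration of the frequency principle / spectral bias:
    # track how fast the low- and high-frequency parts of the error decay.
    import torch

    torch.manual_seed(0)
    x = torch.linspace(-1, 1, 256).unsqueeze(1)
    y = torch.sin(2 * torch.pi * x) + 0.5 * torch.sin(10 * torch.pi * x)

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 128), torch.nn.Tanh(),
        torch.nn.Linear(128, 128), torch.nn.Tanh(),
        torch.nn.Linear(128, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    def spectrum_error(pred):
        # Error at the two frequencies present in the target:
        # 2 and 10 cycles over the length-2 window, i.e. FFT bins 2 and 10.
        err = torch.fft.rfft((pred - y).squeeze())
        return err[2].abs().item(), err[10].abs().item()

    for step in range(5001):
        opt.zero_grad()
        pred = net(x)
        loss = ((pred - y) ** 2).mean()
        loss.backward()
        opt.step()
        if step % 1000 == 0:
            low, high = spectrum_error(pred.detach())
            print(f"step {step:5d}  low-freq err {low:7.3f}  high-freq err {high:7.3f}")

Typically the bin-2 error shrinks well before the bin-10 error, which is the spectral bias such surveys formalize.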
The geometry of feature space in deep learning models: a holistic perspective and comprehensive review
M Lee - Mathematics, 2023 - mdpi.com
As the field of deep learning experiences a meteoric rise, the urgency to decipher the
complex geometric properties of feature spaces, which underlie the effectiveness of diverse …
Empirical phase diagram for three-layer neural networks with infinite width
Substantial work indicates that the dynamics of neural networks (NNs) are closely related to
their parameter initialization. Inspired by the phase diagram for two-layer ReLU NNs with …
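For context, phase-diagram analyses of this kind typically index initialization regimes through a scaled parameterization; the notation below is a common convention assumed here for illustration, not necessarily this paper's. For a width-$m$ two-layer network,

    f_\theta(x) = \frac{1}{\alpha} \sum_{k=1}^{m} a_k\, \sigma(w_k^{\top} x),
    \qquad a_k \sim \mathcal{N}(0, \beta_1^2), \quad w_k \sim \mathcal{N}(0, \beta_2^2 I),

and the phase diagram records how the training dynamics (e.g., linear/NTK-like versus condensed) change as the scales $\alpha, \beta_1, \beta_2$ vary with the width $m$.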
Embedding principle: a hierarchical structure of loss landscape of deep neural networks
We prove a general Embedding Principle for the loss landscape of deep neural networks (NNs)
that unravels a hierarchical structure of the loss landscape of NNs, i.e., the loss landscape of an …
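The hierarchy described here rests on a concrete mechanism, neuron splitting: any function computed by a narrower network is also computed, with the same loss, by a wider network obtained by copying a hidden neuron and splitting its output weight. A minimal sketch for a one-hidden-layer ReLU network (the setup is a standard illustration assumed here, not taken verbatim from the paper):

    # Neuron splitting: a width-m one-hidden-layer ReLU net embeds into a
    # width-(m+1) net computing exactly the same function, by duplicating
    # one neuron's input weights and splitting its output weight into two
    # parts that sum to the original.
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(x, W, a):
        # x: (n, d) inputs, W: (m, d) input weights, a: (m,) output weights
        return np.maximum(x @ W.T, 0.0) @ a

    d, m, n = 3, 4, 10
    x = rng.normal(size=(n, d))
    W = rng.normal(size=(m, d))
    a = rng.normal(size=m)

    # Copy neuron 0 and split its output weight 0.3 / 0.7
    # (any split summing to a[0] preserves the function).
    W_big = np.vstack([W, W[0:1]])
    a_big = np.concatenate([a, [0.7 * a[0]]])
    a_big[0] = 0.3 * a[0]

    print(np.allclose(forward(x, W, a), forward(x, W_big, a_big)))  # True

Critical points of the smaller network map to critical points of the larger one under such embeddings, which is what gives the loss landscape its hierarchical structure.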
Mathematical introduction to deep learning: methods, implementations, and theory
This book aims to provide an introduction to deep learning algorithms. We review the
essential components of deep learning algorithms in full mathematical detail, including …
Implicit regularization of dropout
It is important to understand how dropout, a popular regularization method, helps neural
network training reach solutions that generalize well. In this work, we present a …
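As background for the mechanism under study, a minimal sketch of the standard (inverted) dropout operation itself; this is the textbook forward pass, not the paper's analysis:

    # Inverted dropout: during training each unit is zeroed independently
    # with probability p and survivors are rescaled by 1/(1-p), so the
    # layer's expected output matches evaluation mode.
    import numpy as np

    def dropout(h, p, rng, training=True):
        if not training or p == 0.0:
            return h
        mask = rng.random(h.shape) >= p      # keep each unit with prob. 1 - p
        return h * mask / (1.0 - p)          # rescale the survivors

    rng = np.random.default_rng(0)
    h = np.ones((4, 5))
    print(dropout(h, p=0.5, rng=rng).mean())  # close to 1.0 in expectation

The randomness of the mask is what the implicit-regularization question concerns: averaged over masks, training with dropout behaves differently from training the deterministic network.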
Loss spike in training neural networks
In this work, we investigate the mechanism underlying loss spikes observed during neural
network training. When training enters a region with a lower-loss-as-sharper (LLAS) …
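Here "sharper" is usually quantified by the largest eigenvalue of the loss Hessian; in common notation (an assumption here, not necessarily the paper's),

    \text{sharpness}(\theta) = \lambda_{\max}\!\big(\nabla^2 L(\theta)\big),

so an LLAS region is one in which moving toward lower loss $L(\theta)$ also increases $\lambda_{\max}(\nabla^2 L(\theta))$.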
On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks
In this article, we study fully-connected feedforward deep ReLU ANNs with an arbitrarily
large number of hidden layers, and we prove convergence of the risk of the GD optimization …
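For reference, the gradient descent (GD) iteration whose risk such analyses concern, in generic notation (the notation is assumed here, not necessarily the article's):

    \theta_{n+1} = \theta_n - \gamma\, \nabla_\theta \mathcal{R}(\theta_n), \qquad n = 0, 1, 2, \ldots,

where $\mathcal{R}$ is the (empirical) risk of the ANN, $\gamma > 0$ the learning rate, and $\theta_n$ the parameter vector after $n$ steps; convergence of the risk refers to the behavior of $\mathcal{R}(\theta_n)$ as $n \to \infty$.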
Towards understanding the condensation of neural networks at initial training
Empirical studies show that for ReLU neural networks (NNs) with small initialization, the input
weights of hidden neurons (the input weight of a hidden neuron consists of the weight from …
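Condensation means that, with small initialization, the input weight vectors of many hidden neurons align along a few common directions early in training. A toy experiment sketch measuring this via pairwise cosine similarity; all scales below are illustrative assumptions, and whether alignment appears depends on the initialization regime:

    # Measure directional alignment ("condensation") of the input weights
    # (w_k, b_k) of a small two-layer ReLU net during early training.
    import torch

    torch.manual_seed(0)
    x = torch.linspace(-1, 1, 64).unsqueeze(1)
    y = torch.sin(3 * x).squeeze()
    xb = torch.cat([x, torch.ones_like(x)], dim=1)    # append bias coordinate

    m, scale = 50, 1e-2                               # width, small init scale
    W = torch.nn.Parameter(scale * torch.randn(m, 2)) # row k holds (w_k, b_k)
    a = torch.nn.Parameter(scale * torch.randn(m))
    opt = torch.optim.SGD([W, a], lr=0.2)

    def mean_abs_cosine(M):
        # Average |cosine similarity| over all distinct pairs of rows;
        # values near 1 indicate condensation onto few directions.
        U = M / M.norm(dim=1, keepdim=True)
        C = (U @ U.T).abs()
        return ((C.sum() - m) / (m * (m - 1))).item()

    for step in range(2001):
        opt.zero_grad()
        loss = ((torch.relu(xb @ W.T) @ a - y) ** 2).mean()
        loss.backward()
        opt.step()
        if step % 500 == 0:
            print(f"step {step:4d}  loss {loss.item():.4e}  "
                  f"alignment {mean_abs_cosine(W.detach()):.3f}")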
Phase diagram of initial condensation for two-layer neural networks
The distinct behaviors that neural networks exhibit under varying scales of
initialization remain an enigma in deep learning research. In this paper, based on the …