Optimal approximation rates for deep ReLU neural networks on Sobolev and Besov spaces
JW Siegel - Journal of Machine Learning Research, 2023 - jmlr.org
Let Ω = [0,1]^d be the unit cube in ℝ^d. We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can …
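For orientation, the question in this entry can be phrased as a worst-case error under a parameter budget. The display below is a standard formulation reconstructed from the snippet, not the paper's exact statement; the network class notation is an assumption:

    \[
    E_N \;=\; \sup_{\|f\|_{W^s(L_q(\Omega))} \le 1} \;\; \inf_{\Phi \in \mathcal{N}_N} \|f - \Phi\|_{L_p(\Omega)},
    \]

where $\mathcal{N}_N$ denotes the deep ReLU networks with at most $N$ parameters and $\Omega = [0,1]^d$.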
Polarity sampling: Quality and diversity control of pre-trained generative networks via singular values
We present Polarity Sampling, a theoretically justified plug-and-play method for controlling the generation quality and diversity of any pre-trained deep generative network …
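A minimal sketch of the underlying idea, assuming a differentiable generator G that maps a latent vector to an output tensor (G, the latent dimension, and the candidate counts are placeholders, and the full Jacobian SVD shown here is far costlier than what the authors actually do):

    import torch

    def polarity_sample(G, latent_dim, n_candidates=256, n_keep=16, rho=-1.0):
        # Score each candidate latent by (product of Jacobian singular values)^rho:
        # rho < 0 concentrates samples in high-density regions (quality),
        # rho > 0 pushes toward low-density regions (diversity).
        zs = torch.randn(n_candidates, latent_dim)
        log_scores = []
        for z in zs:
            # Jacobian of the flattened generator output with respect to z.
            J = torch.autograd.functional.jacobian(lambda v: G(v).flatten(), z)
            sv = torch.linalg.svdvals(J)
            log_scores.append(rho * torch.log(sv).sum())
        # Softmax over log-scores normalizes the scores (prod sv)^rho.
        probs = torch.softmax(torch.stack(log_scores), dim=0)
        idx = torch.multinomial(probs, n_keep, replacement=True)
        return zs[idx]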
Robust adaptive learning control using spiking-based self-organizing emotional neural network for a class of nonlinear systems with uncertainties
S Hou, Z Qiu, Y Chu, X Luo, J Fei - Engineering Applications of Artificial Intelligence, 2024 - Elsevier
For second-order general nonlinear systems with uncertainties, a control scheme combining
fractional-order fast terminal sliding mode control (FOFTSMC) and self-organizing emotional …
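For context, a generic integer-order fast terminal sliding surface for a second-order tracking error e has the textbook form below; this is an illustration only, not the paper's fractional-order design, which replaces a derivative with a fractional operator $D^\lambda$:

    \[
    s = \dot{e} + \alpha e + \beta\,\mathrm{sign}(e)\,|e|^{q/p}, \qquad \alpha,\beta > 0,\;\; p > q > 0 \text{ odd integers}.
    \]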
Enhancing representation power of deep neural networks with negligible parameter growth for industrial applications
In industrial applications where computational resources are finite and data noise is prevalent, the representation power of deep neural networks (DNNs) is crucial. Traditional …
Spline representation and redundancies of one-dimensional ReLU neural network models
G Plonka, Y Riebe, Y Kolomoitsev - Analysis and Applications, 2023 - World Scientific
We analyze the structure of a one-dimensional deep ReLU neural network (ReLU DNN) in
comparison to the model of continuous piecewise linear (CPL) spline functions with arbitrary …
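As a concrete illustration of the correspondence (a minimal sketch, not the paper's construction): a one-hidden-layer ReLU network on ℝ is a continuous piecewise linear function whose breakpoints sit at -b_i/w_i, so it can be read either as a network or as a CPL spline.

    import numpy as np

    # One-hidden-layer ReLU net: f(x) = sum_i a_i * relu(w_i * x + b_i) + c.
    w = np.array([1.0, -2.0, 0.5])
    b = np.array([0.0, 1.0, -1.5])
    a = np.array([2.0, 1.0, -3.0])
    c = 0.25

    def relu_net(x):
        return a @ np.maximum(w[:, None] * x + b[:, None], 0.0) + c

    # The same function as a CPL spline: knots where w_i * x + b_i = 0.
    knots = np.sort(-b / w)
    x = np.linspace(-3, 3, 7)
    print(relu_net(x))  # piecewise linear, with kinks only at `knots`

Every such network is a CPL spline, but distinct parameter choices can realize the same spline; that non-uniqueness is the redundancy the paper analyzes.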
Singular value perturbation and deep network optimization
We develop new theoretical results on matrix perturbation to shed light on the impact of
architecture on the performance of a deep network. In particular, we explain analytically …
Wasserstein generative adversarial networks are minimax optimal distribution estimators
The Annals of Statistics, 2024, Vol. 52, No. 5, 2167–2193. https://doi.org/10.1214/24-AOS2430
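For reference, "minimax optimal distribution estimator" is typically read against the benchmark below; this is the standard definition, with the distribution class $\mathcal{P}$ and the Wasserstein-1 loss as assumed placeholders rather than the paper's exact setting:

    \[
    \inf_{\hat{\mu}_n} \; \sup_{\mu \in \mathcal{P}} \; \mathbb{E}\, W_1(\hat{\mu}_n, \mu),
    \]

where the infimum runs over all estimators built from n i.i.d. samples of μ.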
Sobolev-type embeddings for neural network approximation spaces
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated (with error measured in L^p) by ReLU neural …
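In a standard formulation (reconstructed from the snippet; the exact parameterization in the paper may differ), the approximation space with rate α collects the functions

    \[
    A^{\alpha}_{p} = \Big\{ f : \inf_{\Phi \in \mathcal{N}_n} \|f - \Phi\|_{L^p} \le C\, n^{-\alpha} \ \text{for all } n \Big\},
    \]

with $\mathcal{N}_n$ the ReLU networks with at most $n$ weights; Sobolev-type embeddings then relate these spaces across different values of $p$, in analogy with the classical Sobolev embeddings.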
ScaLES: Scalable Latent Exploration Score for Pre-Trained Generative Networks
We develop Scalable Latent Exploration Score (ScaLES) to mitigate over-exploration in
Latent Space Optimization (LSO), a popular method for solving black-box discrete …
Optimal approximation rates for deep ReLU neural networks on Sobolev and Besov spaces
JW Siegel - arXiv preprint arXiv:2211.14400, 2022 - arxiv.org
Let $\Omega = [0,1]^d$ be the unit cube in $\mathbb{R}^d$. We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU …