Optimal approximation rates for deep ReLU neural networks on Sobolev and Besov spaces

JW Siegel - Journal of Machine Learning Research, 2023 - jmlr.org
Let Ω = [0, 1]^d be the unit cube in ℝ^d. We study the problem of how efficiently, in terms of the
number of parameters, deep neural networks with the ReLU activation function can …
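
The snippet breaks off, but the question it poses has a standard formalization; the following is a paraphrase of the usual setup, not a quotation from the paper:

$$E_n(f)_p = \inf_{\#\mathrm{params}(\Phi) \le n} \|f - \Phi\|_{L^p(\Omega)},$$

where $\Phi$ ranges over deep ReLU networks with at most $n$ parameters; the optimal approximation rate is then the largest $\alpha$ such that $E_n(f)_p \lesssim n^{-\alpha}$ holds uniformly over the unit ball of the Sobolev (or Besov) space in question.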

Polarity sampling: Quality and diversity control of pre-trained generative networks via singular values

AI Humayun, R Balestriero… - Proceedings of the …, 2022 - openaccess.thecvf.com
We present Polarity Sampling, a theoretically justified plug-and-play method for
controlling the generation quality and diversity of any pre-trained deep generative network …
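
As a rough sketch of the mechanism (an illustration of the published idea, not the authors' code): the density a generator pushes forward scales inversely with the product of the singular values of its Jacobian, so resampling latents with weights equal to that product raised to a polarity exponent ρ shifts generation toward high-density regions (quality, ρ < 0) or low-density regions (diversity, ρ > 0). All function and parameter names below are illustrative.

```python
import torch

def polarity_weights(g, zs, rho):
    # Importance weights proportional to prod_i sigma_i(J_g(z))^rho,
    # computed in log space for numerical stability. Full Jacobians are
    # expensive; this brute-force loop is for illustration only.
    logw = []
    for z in zs:
        J = torch.autograd.functional.jacobian(g, z)        # output x latent
        s = torch.linalg.svdvals(J.reshape(-1, z.numel()))  # singular values
        logw.append(rho * torch.log(s).sum())
    w = torch.stack(logw).exp()
    return w / w.sum()

def polarity_sample(g, n_out, n_pool, latent_dim, rho):
    # Draw a latent pool, reweight by polarity, and resample.
    pool = torch.randn(n_pool, latent_dim)
    w = polarity_weights(g, pool, rho)
    idx = torch.multinomial(w, n_out, replacement=True)
    return torch.stack([g(pool[i]) for i in idx])
```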

Robust adaptive learning control using spiking-based self-organizing emotional neural network for a class of nonlinear systems with uncertainties

S Hou, Z Qiu, Y Chu, X Luo, J Fei - Engineering Applications of Artificial …, 2024 - Elsevier
For general second-order nonlinear systems with uncertainties, a control scheme combining
fractional-order fast terminal sliding mode control (FOFTSMC) and self-organizing emotional …
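
For orientation, a generic integer-order fast terminal sliding surface (a textbook form, not necessarily the surface used in this scheme) is

$$ s = \dot e + \alpha e + \beta\, e^{q/p}, \qquad \alpha, \beta > 0,\ 0 < q < p \ \text{odd}, $$

which drives the tracking error $e$ to zero in finite time; a fractional-order variant additionally involves a fractional derivative $D^{\lambda} e$, $\lambda \in (0,1)$, in the surface or reaching law.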

Enhancing representation power of deep neural networks with negligible parameter growth for industrial applications

L Chen, L Jin, M Shang… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
In industrial applications where computational resources are finite and data noises are
prevalent, the representation power of deep neural networks (DNNs) is crucial. Traditional …

Spline representation and redundancies of one-dimensional ReLU neural network models

G Plonka, Y Riebe, Y Kolomoitsev - Analysis and Applications, 2023 - World Scientific
We analyze the structure of a one-dimensional deep ReLU neural network (ReLU DNN) in
comparison to the model of continuous piecewise linear (CPL) spline functions with arbitrary …
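
The correspondence being analyzed is easy to see in the shallow case (a minimal numerical check, not the paper's construction): a one-hidden-layer 1-D ReLU network is exactly a CPL spline whose breakpoints are the kinks −b_i/w_i of its units.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b, a = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)

def f(x):
    # One-hidden-layer ReLU net: f(x) = sum_i a_i * max(w_i * x + b_i, 0)
    return np.maximum(np.outer(x, w) + b, 0.0) @ a

kinks = np.sort(-b / w)                           # candidate breakpoints
grid = np.linspace(kinks[0] - 1, kinks[-1] + 1, 1001)
nodes = np.concatenate(([grid[0]], kinks, [grid[-1]]))
spline = np.interp(grid, nodes, f(nodes))         # CPL interpolant through kinks
print(f"max deviation: {np.abs(f(grid) - spline).max():.1e}")  # ~1e-15
```

Deep 1-D ReLU networks remain CPL as well, which is what makes a direct comparison with spline models possible.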

Singular value perturbation and deep network optimization

RH Riedi, R Balestriero, RG Baraniuk - Constructive Approximation, 2023 - Springer
We develop new theoretical results on matrix perturbation to shed light on the impact of
architecture on the performance of a deep network. In particular, we explain analytically …
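
The classical baseline for such results is Weyl's perturbation inequality for singular values: for matrices $A$ and $E$ of the same shape,

$$ |\sigma_k(A + E) - \sigma_k(A)| \le \|E\|_2 \quad \text{for every } k, $$

so no singular value can move by more than the spectral norm of the perturbation. The results described here refine this kind of bound with the network's architecture in view.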

Wasserstein generative adversarial networks are minimax optimal distribution estimators

A Stéphanovitch, E Aamari, C Levrard - The Annals of Statistics, 2024 - projecteuclid.org
The Annals of Statistics, 2024, Vol. 52, No. 5, 2167–2193. https://doi.org/10.1214/24-AOS2430 …
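
For reference, the loss in which optimality is stated is the Wasserstein-1 distance of WGANs, which by Kantorovich–Rubinstein duality reads

$$ W_1(\mu, \nu) = \sup_{\mathrm{Lip}(f) \le 1} \Big( \mathbb{E}_{X \sim \mu} f(X) - \mathbb{E}_{Y \sim \nu} f(Y) \Big), $$

and "minimax optimal" means the estimator attains, up to constants, $\inf_{\hat\mu_n} \sup_{\mu \in \mathcal{P}} \mathbb{E}\, W_1(\hat\mu_n, \mu)$ over the distribution class $\mathcal{P}$ under consideration.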

Sobolev-type embeddings for neural network approximation spaces

P Grohs, F Voigtlaender - Constructive Approximation, 2023 - Springer
We consider neural network approximation spaces that classify functions according to the
rate at which they can be approximated (with error measured in L^p) by ReLU neural …
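
Concretely (a standard definition consistent with the snippet, not quoted from the paper): writing $E_n(f) = \inf_{\Phi} \|f - \Phi\|_{L^p}$ over ReLU networks $\Phi$ with at most $n$ connections, the approximation space with rate $\alpha$ collects those $f$ with $\sup_{n \ge 1} n^{\alpha} E_n(f) < \infty$; a Sobolev-type embedding then places such a space inside a classical smoothness (or integrability) space.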

ScaLES: Scalable Latent Exploration Score for Pre-Trained Generative Networks

O Ronen, AI Humayun, R Balestriero… - arXiv preprint arXiv …, 2024 - arxiv.org
We develop Scalable Latent Exploration Score (ScaLES) to mitigate over-exploration in
Latent Space Optimization (LSO), a popular method for solving black-box discrete …
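
To fix ideas, here is a minimal sketch of plain LSO, the setting ScaLES regularizes (not ScaLES itself); `decode`, `score`, and `validity` are hypothetical stand-ins, with `validity` playing the role a score such as ScaLES would fill to penalize over-explored latent regions.

```python
import torch

def lso(decode, score, validity, latent_dim, steps=200, lr=0.05, lam=1.0):
    # Gradient-based LSO: maximize a (differentiable surrogate of a)
    # black-box objective over the latent space of a pre-trained decoder,
    # plus a validity penalty against drifting into low-quality regions.
    z = torch.randn(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        obj = score(decode(z)) + lam * validity(z)
        (-obj).backward()          # Adam minimizes, so ascend on -obj
        opt.step()
    return z.detach()
```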
