Learning with norm constrained, over-parameterized, two-layer neural networks
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space
to model functions by neural networks, as the curse of dimensionality (CoD) cannot be …
Unraveling attention via convex duality: Analysis and interpretations of vision transformers
Vision transformers using self-attention or its proposed alternatives have demonstrated
promising results in many image-related tasks. However, the underpinning inductive bias of …
CRONOS: Enhancing deep learning with scalable GPU-accelerated convex neural networks
We introduce the CRONOS algorithm for convex optimization of two-layer neural networks.
CRONOS is the first algorithm capable of scaling to high-dimensional datasets such as …
Optimal sets and solution paths of ReLU networks
We develop an analytical framework to characterize the set of optimal ReLU neural networks
by reformulating the non-convex training problem as a convex program. We show that the …
Efficient global optimization of two-layer ReLU networks: Quadratic-time algorithms and adversarial training
The nonconvexity of the artificial neural network (ANN) training landscape brings
optimization difficulties. While the traditional back-propagation stochastic gradient descent …
Variation spaces for multi-output neural networks: Insights on multi-task learning and network compression
This paper introduces a novel theoretical framework for the analysis of vector-valued neural
networks through the development of vector-valued variation spaces, a new class of …
Convex relaxations of ReLU neural networks approximate global optima in polynomial time
In this paper, we study the optimality gap between two-layer ReLU networks regularized with
weight decay and their convex relaxations. We show that when the training data is random …
The real tropical geometry of neural networks
We consider a binary classifier defined as the sign of a tropical rational function, that is, as
the difference of two convex piecewise linear functions. The parameter space of ReLU …
Fuzzy Adaptive Knowledge-Based Inference Neural Networks: Design and Analysis
A novel fuzzy adaptive knowledge-based inference neural network (FAKINN) is proposed in
this study. Conventional fuzzy cluster-based neural networks (FCBNNs) suffer from the …
Why line search when you can plane search? SO-friendly neural networks allow per-iteration optimization of learning and momentum rates for every layer
We introduce the class of SO-friendly neural networks, which include several models used in
practice including networks with 2 layers of hidden weights where the number of inputs is …