Convex optimization for trajectory generation: A tutorial on generating dynamically feasible trajectories reliably and efficiently
Reliable and efficient trajectory generation methods are a fundamental need for
autonomous dynamical systems. The goal of this article is to provide a comprehensive …
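For flavor, here is a minimal sketch of the kind of convex trajectory problem such a tutorial covers: a 2-D double integrator steered to a target with bounded acceleration, posed with cvxpy. The discretization, horizon, limits, and the use of cvxpy are illustrative assumptions, not the article's formulation.

```python
# Toy convex trajectory generation: 2-D double integrator, minimum control effort.
# All modeling choices (dynamics, horizon, limits) are illustrative assumptions.
import cvxpy as cp
import numpy as np

T, dt = 30, 0.1                      # horizon steps and step length (assumed)
x0 = np.array([0.0, 0.0, 0.0, 0.0])  # initial [px, py, vx, vy]
xf = np.array([1.0, 1.0, 0.0, 0.0])  # target state
u_max = 5.0                          # acceleration limit (assumed)

X = cp.Variable((T + 1, 4))          # state trajectory
U = cp.Variable((T, 2))              # control (acceleration) trajectory

constraints = [X[0] == x0, X[T] == xf]
for t in range(T):
    constraints += [
        X[t + 1, 0:2] == X[t, 0:2] + dt * X[t, 2:4],   # position update
        X[t + 1, 2:4] == X[t, 2:4] + dt * U[t],        # velocity update
        cp.norm(U[t], 2) <= u_max,                     # convex control-norm bound
    ]

problem = cp.Problem(cp.Minimize(cp.sum_squares(U)), constraints)
problem.solve()
print("optimal effort:", problem.value)
```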
A survey on some recent developments of alternating direction method of multipliers
DR Han - Journal of the Operations Research Society of China, 2022 - Springer
Recently, the alternating direction method of multipliers (ADMM) has attracted much attention from
various fields, and there are many variant versions tailored for different models. Moreover, its …
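As a concrete instance of the basic iteration such surveys start from, here is a minimal scaled-form ADMM sketch for the lasso problem min 0.5‖Ax − b‖² + λ‖z‖₁ subject to x = z; the problem choice, penalty ρ, data, and iteration count are illustrative assumptions.

```python
# Minimal scaled-form ADMM for lasso: min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
# Problem data, rho, and the iteration count are illustrative assumptions.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached factor for the x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))              # x-update: ridge-like solve
        z = soft_threshold(x + u, lam / rho)       # z-update: prox of the l1 term
        u = u + x - z                              # scaled dual (multiplier) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(admm_lasso(A, b)[:5])
```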
On the global convergence of gradient descent for over-parameterized models using optimal transport
Many tasks in machine learning and signal processing can be solved by minimizing a
convex function of a measure. This includes sparse spikes deconvolution or training a …
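To give a flavor of the particle viewpoint behind such results, the sketch below runs plain gradient descent on the weights and positions of an over-parameterized set of Dirac-like particles to fit a 1-D sparse-spike signal through a Gaussian kernel. The kernel, loss, number of particles, and step size are assumptions for illustration only, not the paper's setting.

```python
# Particle gradient descent on a measure: fit y(t) = sum_i a_i * k(t - t_i)
# with an over-parameterized particle measure sum_j w_j * delta_{p_j}.
# Kernel, data, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.05
grid = np.linspace(0.0, 1.0, 200)

def kernel(t, p):
    return np.exp(-(t - p) ** 2 / (2 * sigma ** 2))

# Ground-truth sparse spikes (positions and amplitudes).
true_pos = np.array([0.25, 0.6, 0.8])
true_amp = np.array([1.0, -0.7, 0.5])
y = sum(a * kernel(grid, p) for a, p in zip(true_amp, true_pos))

# Over-parameterized particle measure: many more particles than spikes.
m = 30
pos = rng.uniform(0.0, 1.0, m)
w = np.zeros(m)

lr = 0.05
for _ in range(5000):
    K = kernel(grid[:, None], pos[None, :])                       # (len(grid), m)
    resid = K @ w - y                                             # model minus target
    grad_w = K.T @ resid                                          # d loss / d weights
    grad_p = (K * (grid[:, None] - pos[None, :]) / sigma ** 2).T @ resid * w
    w -= lr * grad_w / len(grid)
    pos -= lr * grad_p / len(grid)

print("fit error:", np.linalg.norm(kernel(grid[:, None], pos[None, :]) @ w - y))
```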
Gradient descent maximizes the margin of homogeneous neural networks
In this paper, we study the implicit regularization of the gradient descent algorithm in
homogeneous neural networks, including fully-connected and convolutional neural …
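The simplest homogeneous model is a linear predictor, where the margin-maximization effect can be checked in a few lines: gradient descent on the logistic loss over separable data drives the normalized margin up over time. The data, step size, and checkpoints below are assumptions, and the linear model is only a degree-1 stand-in for the networks studied in the paper.

```python
# Implicit bias in the simplest homogeneous (linear) case: on separable data with
# logistic loss, the margin of the normalized iterate keeps growing.
# Data and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 2
X = rng.standard_normal((n, d))
w_star = np.array([1.0, 1.0])
y = np.sign(X @ w_star)                      # linearly separable labels

w = np.zeros(d)
lr = 0.5
for it in range(1, 100001):
    margins = y * (X @ w)
    s = 1.0 / (1.0 + np.exp(np.minimum(margins, 50.0)))   # clipped for stability
    grad = -(X.T @ (y * s)) / n                            # logistic-loss gradient
    w -= lr * grad
    if it in (100, 1000, 10000, 100000):
        norm_margin = np.min(y * (X @ w)) / np.linalg.norm(w)
        print(f"iter {it:6d}  normalized margin {norm_margin:.4f}")
```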
Gradient descent only converges to minimizers
Jason D. Lee … - JMLR: Workshop and Conference Proceedings, vol 49:1–12, 2016
First-order methods almost always avoid strict saddle points
We establish that first-order methods avoid strict saddle points for almost all initializations.
Our results apply to a wide variety of first-order methods, including (manifold) gradient …
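A minimal way to see the "almost all initializations" statement is to run plain gradient descent from many random starts on a function with a strict saddle and check where the iterates end up. The function, step size, and sampling below are illustrative assumptions.

```python
# Gradient descent on f(x, y) = (x^2 - 1)^2 + y^2, which has a strict saddle at (0, 0)
# and minimizers at (+-1, 0). Random initializations converge to the minimizers.
# Step size and initialization range are illustrative assumptions.
import numpy as np

def grad(V):
    # V has shape (num_starts, 2); columns are x and y.
    gx = 4.0 * V[:, 0] * (V[:, 0] ** 2 - 1.0)
    gy = 2.0 * V[:, 1]
    return np.stack([gx, gy], axis=1)

rng = np.random.default_rng(0)
V = rng.uniform(-1.0, 1.0, size=(1000, 2))   # 1000 random initializations
step = 0.05
for _ in range(2000):
    V = V - step * grad(V)

print("ended near (+1, 0):", int(np.sum(V[:, 0] > 0.5)))
print("ended near (-1, 0):", int(np.sum(V[:, 0] < -0.5)))
print("stuck at the saddle (0, 0):", int(np.sum(np.abs(V[:, 0]) <= 0.5)))
```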
Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced
We study the implicit regularization imposed by gradient descent for learning multi-layer
homogeneous functions including feed-forward fully connected and convolutional deep …
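The balancing effect is easy to check in the smallest possible homogeneous model, a two-layer scalar factorization: under gradient descent on (ab − 1)², the gap a² − b² is conserved exactly under gradient flow and approximately for a small step size, so layers that start balanced stay balanced. The objective and hyperparameters below are illustrative assumptions, not the paper's setting.

```python
# Layer balancing in a toy two-layer model: minimize (a*b - 1)^2 with gradient descent.
# The quantity a^2 - b^2 is conserved by gradient flow and stays nearly constant here.
# Objective and hyperparameters are illustrative assumptions.
import numpy as np

a, b = 2.0, 0.1            # deliberately unbalanced initialization
lr = 0.01
for it in range(2001):
    r = a * b - 1.0        # residual of the scalar "deep linear" model
    ga, gb = 2.0 * r * b, 2.0 * r * a
    a, b = a - lr * ga, b - lr * gb
    if it % 500 == 0:
        print(f"it {it:4d}  ab = {a*b:.4f}  a^2 - b^2 = {a*a - b*b:.4f}")
```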
From error bounds to the complexity of first-order descent methods for convex functions
This paper shows that error bounds can be used as effective tools for deriving complexity
results for first-order descent methods in convex minimization. In a first stage, this objective …
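One way to see why error bounds matter is a least-squares problem with a rank-deficient matrix: it is not strongly convex, yet it satisfies an error bound (a Łojasiewicz/PL-type inequality), and gradient descent still converges linearly in function value. The problem construction and step size below are illustrative assumptions.

```python
# Gradient descent on a rank-deficient least-squares problem: no strong convexity,
# but an error bound holds and the suboptimality f(x_k) - f* decays geometrically.
# Problem data and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
A = np.hstack([A, A])                 # duplicated columns => rank-deficient matrix
b = rng.standard_normal(40)

f_star = 0.5 * np.linalg.norm(A @ np.linalg.lstsq(A, b, rcond=None)[0] - b) ** 2
L = np.linalg.norm(A, 2) ** 2         # gradient Lipschitz constant
x = np.zeros(A.shape[1])
for k in range(1, 101):
    x -= (1.0 / L) * (A.T @ (A @ x - b))
    if k % 20 == 0:
        gap = 0.5 * np.linalg.norm(A @ x - b) ** 2 - f_star
        print(f"iter {k:3d}  f(x_k) - f* = {gap:.3e}")
```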
Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods
In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract
convergence result for descent methods satisfying a sufficient-decrease assumption, and …
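A small nonsmooth, nonconvex, semi-algebraic instance of the forward-backward scheme analyzed in this line of work is sparse recovery with an ℓ₀ penalty: a gradient step on the smooth least-squares term followed by the proximal (hard-thresholding) step of λ‖·‖₀. The data, λ, and step size below are illustrative assumptions.

```python
# Forward-backward splitting on min_x 0.5*||Ax - b||^2 + lam*||x||_0.
# Forward step: gradient of the smooth term; backward step: prox of lam*||.||_0,
# i.e. hard-thresholding at sqrt(2*alpha*lam).
# Data and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)

lam = 0.5
alpha = 0.9 / np.linalg.norm(A, 2) ** 2       # step below 1/L for the smooth part
thresh = np.sqrt(2.0 * alpha * lam)

x = np.zeros(100)
for _ in range(500):
    v = x - alpha * (A.T @ (A @ x - b))       # forward (gradient) step
    x = np.where(np.abs(v) > thresh, v, 0.0)  # backward (prox of lam*||.||_0) step

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.count_nonzero(x)
print("final objective:", round(obj, 4), " nonzeros:", np.count_nonzero(x))
```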
Proximal alternating minimization and projection methods for nonconvex problems: An approach based on the Kurdyka-Łojasiewicz inequality
We study the convergence properties of an alternating proximal minimization algorithm for
nonconvex structured functions of the type L(x, y) = f(x) + Q(x, y) + g(y), where f and g are …
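To make the structure L(x, y) = f(x) + Q(x, y) + g(y) concrete, the sketch below runs proximal alternating minimization on a small instance with ℓ₁ penalties for f and g and a coupling least-squares term for Q, chosen (unlike the paper's general nonconvex setting) so that each prox-regularized subproblem has a closed-form soft-thresholding solution. The instance, penalty weights, and proximal parameter are illustrative assumptions.

```python
# Proximal alternating minimization (PAM) on
#   L(x, y) = lam*||x||_1 + 0.5*||x + y - b||^2 + mu*||y||_1,
# a small concrete instance of L(x, y) = f(x) + Q(x, y) + g(y).
# Each prox-regularized partial minimization has a closed form via soft-thresholding.
# The instance and parameters are illustrative assumptions.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
b = rng.standard_normal(20)
lam, mu, c = 0.3, 0.1, 1.0          # l1 weights and proximal parameter (assumed)

x = np.zeros(20)
y = np.zeros(20)
scale = 1.0 + 1.0 / c               # curvature of each prox-regularized subproblem
for k in range(200):
    # x-step: argmin_x lam*||x||_1 + 0.5*||x + y - b||^2 + (1/(2c))*||x - x_k||^2
    x = soft(((b - y) + x / c) / scale, lam / scale)
    # y-step: argmin_y mu*||y||_1 + 0.5*||x + y - b||^2 + (1/(2c))*||y - y_k||^2
    y = soft(((b - x) + y / c) / scale, mu / scale)

L_val = lam * np.abs(x).sum() + 0.5 * np.linalg.norm(x + y - b) ** 2 + mu * np.abs(y).sum()
print("final objective:", round(L_val, 4))
```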