From error bounds to the complexity of first-order descent methods for convex functions
This paper shows that error bounds can be used as effective tools for deriving complexity
results for first-order descent methods in convex minimization. In a first stage, this objective …
A single-loop smoothed gradient descent-ascent algorithm for nonconvex-concave min-max problems
Nonconvex-concave min-max problems arise in many machine learning applications,
including minimizing a pointwise maximum of a set of nonconvex functions and robust …
Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
In a Hilbert space setting ℋ, given a convex continuously differentiable function Φ: ℋ → ℝ
and a positive parameter α, we consider the inertial dynamic system with Asymptotic …
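The snippet above is truncated, but the dynamics commonly studied in this line of work are the "asymptotic vanishing damping" system ẍ(t) + (α/t)ẋ(t) + ∇Φ(x(t)) = 0; the sketch below numerically integrates that assumed form (semi-implicit Euler, a quadratic Φ chosen for illustration) and is not the authors' own scheme.

```python
import numpy as np

def avd_trajectory(grad, x0, alpha=3.0, t0=1.0, h=1e-3, n_steps=20000):
    """Semi-implicit Euler integration of the inertial system
        x''(t) + (alpha/t) x'(t) + grad_Phi(x(t)) = 0,
    the standard vanishing-damping dynamics (assumed form, since the
    abstract above is truncated)."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)          # velocity x'(t), started at rest
    t = t0
    for _ in range(n_steps):
        v += h * (-(alpha / t) * v - grad(x))  # update velocity first (semi-implicit)
        x += h * v                             # then position with the new velocity
        t += h
    return x

# Illustration with Phi(x) = 0.5 * ||x||^2, whose gradient is x and
# whose unique minimizer is the origin; the trajectory should decay.
x_end = avd_trajectory(lambda x: x, x0=[2.0, -1.0], alpha=3.0)
```

With α = 3 (the critical value separating the subcritical regime in the title above), the trajectory oscillates with an amplitude that decays toward the minimizer as t grows.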
Are we there yet? Manifold identification of gradient-related proximal methods
In machine learning, models that generalize better often generate outputs that lie on a
low-dimensional manifold. Recently, several works have separately shown finite-time manifold …
Gap safe screening rules for sparsity enforcing penalties
In high dimensional regression settings, sparsity enforcing penalties have proved useful to
regularize the data-fitting term. A recently introduced technique called screening rules …
[BOOK] Sparse image and signal processing: Wavelets and related geometric multiscale analysis
This thoroughly updated new edition presents state-of-the-art sparse and multiscale image
and signal processing. It covers linear multiscale geometric transforms, such as wavelet …
Mind the duality gap: safer rules for the lasso
Screening rules allow one to discard irrelevant variables early from the optimization in Lasso
problems, or their derivatives, making solvers faster. In this paper, we propose new versions of …
Convergence rates of inertial forward-backward algorithms
H Attouch, A Cabot - SIAM Journal on Optimization, 2018 - SIAM
In a Hilbert space ℋ, assuming (α_k) is a general sequence of nonnegative
numbers, we analyze the convergence properties of the inertial forward-backward algorithm …
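The inertial forward-backward iteration named in this entry can be sketched generically as a momentum extrapolation followed by a proximal-gradient step. The example below applies it to the lasso with a constant inertial coefficient standing in for the general sequence (α_k) analyzed in the paper; this is an illustrative special case, not the authors' exact setting.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_forward_backward(A, b, lam, alpha=0.3, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with a constant
    inertial coefficient alpha (a special case of the sequences
    (alpha_k) studied in the paper above)."""
    n = A.shape[1]
    x_prev = x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        y = x + alpha * (x - x_prev)           # inertial (momentum) extrapolation
        grad = A.T @ (A @ y - b)               # forward (gradient) step on the smooth part
        x_prev, x = x, soft_threshold(y - step * grad, step * lam)  # backward (proximal) step
    return x

# Tiny usage example: recover a sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = A @ np.array([1.0, -2.0] + [0.0] * 8)
x_hat = inertial_forward_backward(A, b, lam=0.1)
```

The forward step uses the smooth data-fitting gradient and the backward step the proximal operator of the nonsmooth term, which is the splitting structure the convergence analysis above addresses.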
Convergence of a relaxed inertial forward–backward algorithm for structured monotone inclusions
H Attouch, A Cabot - Applied Mathematics & Optimization, 2019 - Springer
In a Hilbert space H, we study the convergence properties of a class of relaxed inertial
forward–backward algorithms. They aim to solve structured monotone inclusions of the form …
Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity
H Attouch, A Cabot - Journal of Differential Equations, 2017 - Elsevier
In a Hilbert space H, we study the asymptotic behavior, as the time variable t goes to +∞, of
nonautonomous gradient-like inertial dynamics, with a time-dependent viscosity coefficient …