Modern regularization methods for inverse problems

M Benning, M Burger - Acta Numerica, 2018 - cambridge.org
Regularization methods are a key tool in the solution of inverse problems. They are used to
introduce prior knowledge and allow a robust approximation of ill-posed (pseudo-)inverses …
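
As a concrete instance of what these opening lines describe, the sketch below implements classical Tikhonov regularization, the simplest variational regularization method: the unstable pseudo-inverse is replaced by the minimizer of a penalized least-squares problem. The matrix, data, and weight alpha are illustrative choices of ours, not taken from the survey.

    import numpy as np

    def tikhonov(A, b, alpha):
        """Minimize ||Ax - b||^2 + alpha * ||x||^2 via the normal equations.

        The penalty alpha * ||x||^2 encodes the prior that x has small norm
        and makes the (possibly ill-posed) inverse problem well-posed.
        """
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

    # Ill-conditioned toy problem (our own): the penalty stabilizes the fit.
    rng = np.random.default_rng(0)
    A = np.vander(np.linspace(0, 1, 20), 8)          # nearly rank-deficient
    x_true = rng.standard_normal(8)
    b = A @ x_true + 1e-3 * rng.standard_normal(20)  # noisy data
    x_reg = tikhonov(A, b, alpha=1e-6)
    print(np.linalg.norm(x_reg - x_true))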

Convex-concave backtracking for inertial Bregman proximal gradient algorithms in nonconvex optimization

MC Mukkamala, P Ochs, T Pock, S Sabach - SIAM Journal on Mathematics of …, 2020 - SIAM
Backtracking line-search is an old yet powerful strategy for finding better step sizes to be
used in proximal gradient algorithms. The main principle is to locally find a simple convex …
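
The snippet cuts off before the principle is stated in full; its standard (non-Bregman) form is the sufficient-decrease test below: shrink the step until a simple quadratic model majorizes f at the trial point. This is a minimal sketch of plain backtracking proximal gradient, not the paper's convex-concave Bregman variant; the function names and test problem are ours.

    import numpy as np

    def prox_l1(x, t):
        """Proximal map of t * ||.||_1 (soft-thresholding)."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def backtracking_prox_grad(grad_f, f, x, lam, t=1.0, beta=0.5, iters=100):
        """Proximal gradient on f(x) + lam * ||x||_1 with backtracking.

        The step size t is halved until the quadratic upper model
        f(x) + <g, z - x> + ||z - x||^2 / (2t) majorizes f at the trial
        point z, i.e. until the sufficient-decrease test passes.
        """
        for _ in range(iters):
            g = grad_f(x)
            while True:
                z = prox_l1(x - t * g, t * lam)
                d = z - x
                if f(z) <= f(x) + g @ d + d @ d / (2 * t):
                    break
                t *= beta  # model too optimistic: shrink the step
            x = z
        return x

    # Usage on a LASSO-type problem f(x) = 0.5 * ||Ax - b||^2 (our own data).
    A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, -1.0])
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad_f = lambda x: A.T @ (A @ x - b)
    print(backtracking_prox_grad(grad_f, f, np.zeros(2), lam=0.1))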

Optimal convergence rates for the proximal bundle method

M Díaz, B Grimmer - SIAM Journal on Optimization, 2023 - SIAM
We study convergence rates of the classic proximal bundle method for a variety of
nonsmooth convex optimization problems. We show that, without any modification, this …
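
For orientation, in standard notation (the paper's symbols may differ), each iteration of the classic proximal bundle method minimizes a cutting-plane model of the objective stabilized by a proximal term:

    x_{k+1} = \operatorname*{arg\,min}_{x} \; \max_{i \in B_k} \big\{ f(y_i) + \langle g_i,\, x - y_i \rangle \big\} + \frac{\rho}{2}\,\|x - x_k\|^2, \qquad g_i \in \partial f(y_i),

where B_k indexes the stored subgradient cuts and x_k is the current center; a trial point is accepted as a serious step when it sufficiently decreases f, and otherwise only enriches the model (a null step).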

A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima

M Ahookhosh, A Themelis, P Patrinos - SIAM Journal on Optimization, 2021 - SIAM
We introduce Bella, a locally superlinearly convergent Bregman forward-backward splitting
method for minimizing the sum of two nonconvex functions, one of which satisfies a relative …
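
In generic notation (not necessarily the paper's), a Bregman forward-backward step for minimizing f + g replaces the Euclidean proximity term with the Bregman distance of a kernel h:

    D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle, \qquad
    x^{k+1} \in \operatorname*{arg\,min}_{x} \Big\{ g(x) + \langle \nabla f(x^k),\, x - x^k \rangle + \tfrac{1}{\lambda} D_h(x, x^k) \Big\}.

Taking h = (1/2)\|\cdot\|^2 recovers classical forward-backward splitting; relative smoothness of f with respect to h is what makes a fixed step size λ admissible without a globally Lipschitz gradient.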

Gradient methods for problems with inexact model of the objective

FS Stonyakin, D Dvinskikh, P Dvurechensky… - … Optimization Theory and …, 2019 - Springer
We consider optimization methods for convex minimization problems under inexact
information on the objective function. We introduce an inexact model of the objective, which as …
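
Roughly, up to the paper's exact conditions, a pair (f_δ(x), ψ_δ(y, x)) is a (δ, L)-model of the objective f at x if

    0 \le f(y) - f_\delta(x) - \psi_\delta(y, x) \le \frac{L}{2}\,\|y - x\|^2 + \delta \quad \text{for all } y,

with ψ_δ(x, x) = 0 and ψ_δ(·, x) convex. The choice ψ_δ(y, x) = ⟨∇f(x), y − x⟩ with δ = 0 reduces this to the standard descent lemma for L-smooth f, so the usual gradient method is the exact special case.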

On quasi-Newton forward-backward splitting: proximal calculus and convergence

S Becker, J Fadili, P Ochs - SIAM Journal on Optimization, 2019 - SIAM
We introduce a framework for quasi-Newton forward-backward splitting algorithms (proximal
quasi-Newton methods) with a metric induced by diagonal ± rank-r symmetric positive …
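
The paper's contribution is a proximal calculus that keeps such steps tractable when the metric is diagonal plus a rank-r correction, in which case the scaled prox is no longer separable. The sketch below only illustrates the easy diagonal case for the ℓ1 norm, where the prox stays coordinate-wise; the problem data and metric are illustrative choices of ours.

    import numpy as np

    def prox_l1_metric(y, lam, d):
        """Prox of lam * ||.||_1 in the metric V = diag(d): still separable,
        with a per-coordinate threshold lam / d_i."""
        return np.sign(y) * np.maximum(np.abs(y) - lam / d, 0.0)

    def diag_metric_fbs(grad_f, x, lam, d, iters=200):
        """Variable-metric forward-backward: x+ = prox^V(x - V^{-1} grad f(x)).

        With a genuinely quasi-Newton metric (diagonal +/- rank-r), the prox
        is no longer separable; the paper's proximal calculus reduces it to a
        low-dimensional root-finding problem, not implemented in this sketch.
        """
        for _ in range(iters):
            x = prox_l1_metric(x - grad_f(x) / d, lam, d)
        return x

    A = np.array([[2.0, 0.5], [0.5, 1.0]]); b = np.array([1.0, 0.3])
    grad_f = lambda x: A @ x - b     # f(x) = 0.5 x^T A x - b^T x
    d = np.diag(A) * 1.5             # diagonal metric majorizing A (illustrative)
    print(diag_metric_fbs(grad_f, np.zeros(2), lam=0.05, d=d))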

Inexact model: A framework for optimization and variational inequalities

F Stonyakin, A Tyurin, A Gasnikov… - Optimization Methods …, 2021 - Taylor & Francis
In this paper, we propose a general algorithmic framework for first-order methods in
optimization in a broad sense, including minimization problems, saddle-point problems and …
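
The unifying idea is that each iteration minimizes a model ψ(·, x_k) of the objective plus a proximity term, and different choices of ψ recover different methods from the same template. A minimal sketch under our own naming: a linearization model gives gradient descent, and adding an ℓ1 term gives the proximal gradient method.

    import numpy as np

    def model_gradient_method(model_step, x, iters=100):
        """Generic scheme: x_{k+1} = argmin_y  psi(y, x_k) + (L/2)||y - x_k||^2,
        where the argmin is supplied by `model_step`. Different models psi
        recover different first-order methods."""
        for _ in range(iters):
            x = model_step(x)
        return x

    # Model 1: psi = <grad f(x), y - x>                 -> gradient step.
    # Model 2: psi = <grad f(x), y - x> + lam * ||y||_1 -> proximal gradient step.
    A = np.array([[2.0, 0.0], [0.0, 4.0]]); b = np.array([1.0, 2.0])
    grad_f = lambda x: A @ x - b
    L = 4.0  # largest eigenvalue of A

    gd_step = lambda x: x - grad_f(x) / L
    soft = lambda y, t: np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
    pg_step = lambda x, lam=0.1: soft(x - grad_f(x) / L, lam / L)

    print(model_gradient_method(gd_step, np.zeros(2)))
    print(model_gradient_method(pg_step, np.zeros(2)))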

Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria

D Drusvyatskiy, AD Ioffe, AS Lewis - Mathematical Programming, 2021 - Springer
We consider optimization algorithms that successively minimize simple Taylor-like models of
the objective function. Methods of Gauss–Newton type for minimizing the composition of a …
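
For the composition h(c(x)) with h = (1/2)||.||^2, successively minimizing the Taylor-like model (1/2)||c(x) + ∇c(x) d||^2 + (μ/2)||d||^2 is a proximally damped Gauss-Newton (prox-linear) method. A minimal numpy sketch under that assumption, with a toy problem of our own:

    import numpy as np

    def prox_linear_ls(c, J, x, mu=1.0, iters=50):
        """Gauss-Newton with proximal damping for min_x 0.5 * ||c(x)||^2.

        Each step minimizes the Taylor-like model
            0.5 * ||c(x) + J(x) d||^2 + (mu/2) * ||d||^2
        in d, a linear least-squares problem; the prox term keeps the step
        inside the region where the linearization of c is trustworthy.
        """
        for _ in range(iters):
            r, Jx = c(x), J(x)
            d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ r)
            x = x + d
        return x

    # Toy residuals c(x) = [a*b - 2, a + b - 3, a - 1], solved by (a, b) = (1, 2).
    c = lambda x: np.array([x[0] * x[1] - 2.0, x[0] + x[1] - 3.0, x[0] - 1.0])
    J = lambda x: np.array([[x[1], x[0]], [1.0, 1.0], [1.0, 0.0]])
    print(prox_linear_ls(c, J, np.array([0.5, 0.5])))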

Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems

Z Wu, C Li, M Li, A Lim - Journal of Global Optimization, 2021 - Springer
This paper proposes an inertial Bregman proximal gradient method for minimizing the sum
of two possibly nonconvex functions. This method includes two different inertial steps and …
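
One common shape for such a scheme (the paper's exact update and parameter rules may differ) extrapolates both the gradient point and the prox center, giving two inertial steps on top of a Bregman proximal step:

    y^k = x^k + \alpha_k (x^k - x^{k-1}), \qquad z^k = x^k + \beta_k (x^k - x^{k-1}),
    x^{k+1} \in \operatorname*{arg\,min}_{x} \Big\{ g(x) + \langle \nabla f(z^k),\, x - y^k \rangle + \tfrac{1}{\lambda} D_h(x, y^k) \Big\},

where D_h is the Bregman distance of a kernel h; setting α_k = β_k and h = (1/2)\|x\|^2 collapses this to the usual single-extrapolation inertial proximal gradient method.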

Convergence analysis for Bregman iterations in minimizing a class of Landau free energy functionals

C Bao, C Chen, K Jiang, L Qiu - SIAM Journal on Numerical Analysis, 2024 - SIAM
Finding stationary states of Landau free energy functionals requires solving a nonconvex
infinite-dimensional optimization problem. In this paper, we develop a Bregman distance based …
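
The generic shape of such a scheme (not necessarily the authors' exact discretization) is a Bregman proximal gradient iteration on the energy E:

    \varphi^{k+1} \in \operatorname*{arg\,min}_{\varphi} \Big\{ \langle \nabla E(\varphi^k),\, \varphi - \varphi^k \rangle + \tfrac{1}{\tau_k} D_h(\varphi, \varphi^k) \Big\},

the usual motivation being that a well-chosen kernel h can make a polynomial Landau-type energy smooth relative to h even when its gradient is not globally Lipschitz, so the iteration decreases E with a step size that need not vanish.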