Modern regularization methods for inverse problems
Regularization methods are a key tool in the solution of inverse problems. They are used to
introduce prior knowledge and allow a robust approximation of ill-posed (pseudo-) inverses …
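As a concrete illustration of the classical baseline such surveys build on, Tikhonov regularization stabilizes an ill-posed linear inverse problem by adding a quadratic penalty. The sketch below is generic, not a method from the paper; the operator A, data b, and weight alpha are illustrative.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Solve min_x ||A x - b||^2 + alpha ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    # (A^T A + alpha I) x = A^T b is well-posed for any alpha > 0,
    # even when A itself is ill-conditioned or rank-deficient.
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Illustrative use with a badly conditioned forward operator.
A = np.vander(np.linspace(0, 1, 20), 8)   # ill-conditioned Vandermonde matrix
x_true = np.ones(8)
b = A @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(20)
x_hat = tikhonov(A, b, alpha=1e-4)
```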
Convex-concave backtracking for inertial Bregman proximal gradient algorithms in nonconvex optimization
Backtracking line-search is an old yet powerful strategy for finding better step sizes to be
used in proximal gradient algorithms. The main principle is to locally find a simple convex …
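The classical backtracking strategy the abstract refers to can be sketched as follows. This is the standard sufficient-decrease test for proximal gradient methods, not the paper's convex-concave variant; f, grad_f, and prox_g (taking a point and a step size) are assumed user-supplied.

```python
import numpy as np

def prox_grad_backtracking(f, grad_f, prox_g, x0, t0=1.0, beta=0.5, tol=1e-8, max_iter=500):
    """Proximal gradient method with classical backtracking line-search on the smooth part f."""
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(max_iter):
        g = grad_f(x)
        while True:
            x_new = prox_g(x - t * g, t)
            d = x_new - x
            # Accept t once the local quadratic upper model majorizes f at x_new.
            if f(x_new) <= f(x) + g @ d + (d @ d) / (2 * t):
                break
            t *= beta  # shrink the trial step size and retry
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```

With prox_g taken to be soft-thresholding, this recovers ISTA with backtracking for the lasso.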
Optimal convergence rates for the proximal bundle method
We study convergence rates of the classic proximal bundle method for a variety of
nonsmooth convex optimization problems. We show that, without any modification, this …
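For reference, the classic proximal bundle method keeps a bundle of subgradient cuts and minimizes the resulting cutting-plane model plus a proximal term. In standard textbook notation (a sketch of the generic subproblem, not notation from the paper):

```latex
\[
x_{k+1} \in \operatorname*{arg\,min}_{y}\;
\max_{i \in B_k}\bigl\{ f(x_i) + \langle g_i,\, y - x_i \rangle \bigr\}
\;+\; \frac{\rho}{2}\,\| y - z_k \|^2,
\qquad g_i \in \partial f(x_i),
\]
```

where the max term is the cutting-plane model of f, z_k is the current proximal center, and B_k indexes the bundle of past cuts.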
A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima
We introduce Bella, a locally superlinearly convergent Bregman forward-backward splitting
method for minimizing the sum of two nonconvex functions, one of which satisfies a relative …
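The underlying Bregman forward-backward step replaces the Euclidean proximal map by a Bregman one. With the Boltzmann-Shannon entropy as kernel it has a closed-form multiplicative update; the sketch below shows one such generic step (with the nonsmooth part dropped for simplicity), not the Bella algorithm itself.

```python
import numpy as np

def bregman_forward_step(grad_f, x, t):
    """One Bregman proximal-gradient step on the positive orthant with the
    Boltzmann-Shannon entropy kernel h(u) = sum(u log u) and g = 0:
        x+ = argmin_u <grad f(x), u> + (1/t) * D_h(u, x),
    whose first-order condition gives the multiplicative update below."""
    return x * np.exp(-t * grad_f(x))
```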
Gradient methods for problems with inexact model of the objective
We consider optimization methods for convex minimization problems under inexact
information on the objective function. We introduce an inexact model of the objective, which as …
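One standard way this line of work formalizes such a model is through a (delta, L)-upper bound; the notation below is a sketch of that idea and may differ from the paper's exact definition:

```latex
\[
f(y) \;\le\; f(x) \;+\; \psi_{\delta}(y, x) \;+\; \frac{L}{2}\,\|y - x\|^2 \;+\; \delta
\qquad \text{for all } y,
\]
```

where psi_delta(., x) is convex with psi_delta(x, x) = 0; each iteration then minimizes psi_delta(y, x) + (L/2)||y - x||^2 in place of the usual linearization.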
On quasi-Newton forward-backward splitting: proximal calculus and convergence
We introduce a framework for quasi-Newton forward-backward splitting algorithms (proximal
quasi-Newton methods) with a metric induced by diagonal ± rank-r symmetric positive …
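In the purely diagonal special case the scaled proximal map stays cheap: for a separable g such as the l1 norm, the prox in the metric induced by D = diag(d) is a componentwise soft-threshold with thresholds lam/d_i. The sketch below covers only this diagonal case; the paper's proximal calculus also handles diagonal ± rank-r metrics.

```python
import numpy as np

def prox_l1_diag(x, lam, d):
    """Prox of lam*||.||_1 in the metric ||u||_D^2 = sum(d_i * u_i^2):
        argmin_u lam*||u||_1 + 0.5 * sum(d_i * (u_i - x_i)^2)
    separates across coordinates into soft-thresholding with threshold lam/d_i."""
    thresh = lam / d
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
```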
Inexact model: A framework for optimization and variational inequalities
In this paper, we propose a general algorithmic framework for the first-order methods in
optimization in a broad sense, including minimization problems, saddle-point problems and …
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
We consider optimization algorithms that successively minimize simple Taylor-like models of
the objective function. Methods of Gauss–Newton type for minimizing the composition of a …
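A representative Taylor-like model method of Gauss–Newton type is the prox-linear step for minimizing h(c(x)): linearize the inner map c and keep the outer function h. With h = ||.||^2 this reduces to a damped Gauss–Newton (Levenberg–Marquardt-style) iteration, sketched generically below; it is not the paper's specific algorithm, and c and J (the Jacobian of c) are assumed user-supplied.

```python
import numpy as np

def prox_linear_step(c, J, x, t):
    """One prox-linear step for min_x ||c(x)||^2:
        d = argmin_d ||c(x) + J(x) d||^2 + (1/(2t)) ||d||^2,
    a linear least-squares problem solved via the damped normal equations."""
    r, A = c(x), J(x)
    n = A.shape[1]
    d = np.linalg.solve(A.T @ A + np.eye(n) / (2 * t), -A.T @ r)
    return x + d
```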
Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
This paper proposes an inertial Bregman proximal gradient method for minimizing the sum
of two possibly nonconvex functions. This method includes two different inertial steps and …
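The inertial ingredient can be sketched in the Euclidean special case: extrapolate using the previous iterate before taking the forward-backward step. This is standard heavy-ball/Nesterov-style extrapolation; the paper's method uses Bregman distances and two distinct inertial steps, which this sketch does not reproduce.

```python
import numpy as np

def inertial_prox_grad(grad_f, prox_g, x0, t, theta=0.9, n_iter=200):
    """Proximal gradient with a single inertial (extrapolation) step:
        y_k = x_k + theta * (x_k - x_{k-1});  x_{k+1} = prox_g(y_k - t * grad_f(y_k), t)."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = x + theta * (x - x_prev)          # extrapolate past the current iterate
        x_prev, x = x, prox_g(y - t * grad_f(y), t)
    return x
```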
Convergence analysis for Bregman iterations in minimizing a class of Landau free energy functionals
Finding stationary states of Landau free energy functionals requires solving a nonconvex infinite-
dimensional optimization problem. In this paper, we develop a Bregman distance based …